Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.
Koprowski, Robert
2015-11-01
The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. The proposed application and a sample result of hyperspectral image analysis are also presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Image quality enhancement for skin cancer optical diagnostics
NASA Astrophysics Data System (ADS)
Bliznuks, Dmitrijs; Kuzmina, Ilona; Bolocko, Katrina; Lihachev, Alexey
2017-12-01
The research presents image quality analysis and enhancement proposals in the biophotonics area. The sources of image problems are reviewed and analyzed. The problems with the greatest impact are analyzed in terms of a specific biophotonic task: skin cancer diagnostics. The results point out that the main problem for skin cancer analysis is uneven skin illumination. Since it is often not possible to prevent illumination problems, the paper proposes an image post-processing algorithm: low-frequency filtering. Practical results show an improvement in diagnostic results after using the proposed filter. Moreover, the filter does not reduce diagnostic quality for images without illumination defects. The current filtering algorithm requires empirical tuning of the filter parameters. Further work is needed to test the algorithm in other biophotonic applications and to propose automatic filter parameter selection.
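The low-frequency filtering is described only at a high level. As a generic illustration of the idea (not the authors' filter; the window size and the divide-rather-than-subtract choice are assumptions), one can estimate the slowly varying illumination field with a wide low-pass filter and divide it out:

```python
import numpy as np

def box_blur(img, k):
    """Moving-average low-pass filter along each row (window of 2k samples)."""
    pad = np.pad(img, ((0, 0), (k, k)), mode="edge")
    c = np.cumsum(pad, axis=1, dtype=float)
    return (c[:, 2 * k:] - c[:, :-2 * k]) / (2 * k)

def correct_illumination(image, k=32):
    """Estimate the slowly varying illumination field with a wide low-pass
    filter, divide it out, then restore the original mean brightness."""
    illum = np.maximum(box_blur(image.astype(float), k), 1e-6)
    corrected = image / illum
    return corrected * image.mean() / corrected.mean()

# A flat scene viewed under a left-to-right lighting gradient
x = np.linspace(0.5, 1.5, 256)
img = np.outer(np.ones(256), x) * 100.0
flat = correct_illumination(img)
print(flat.std() < img.std())  # gradient largely removed: True
```

The residual variation at the image borders comes from the edge padding of the box filter; a real implementation would pick the window width relative to the lesion size so that diagnostic detail is not divided out along with the illumination.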
Statistical Smoothing Methods and Image Analysis
1988-12-01
NASA Astrophysics Data System (ADS)
Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton
2016-04-01
The article deals with methods of image segmentation based on colour space conversion, which allow efficient detection of a single colour against a complex background under varying lighting, as well as detection of objects on a homogeneous background. The results of an analysis of segmentation algorithms of this type, and the possibility of implementing them in software, are presented. The implemented algorithm is computationally expensive, which limits its application to video analysis; however, it solves the problem of analysing objects in an image when no image dictionary or knowledge base is available, as well as the problem of choosing optimal frame quantization parameters for video analysis.
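As a toy illustration of segmentation by colour-space conversion (not the article's algorithm; the thresholds below are invented), converting RGB to HSV decouples hue from brightness, so a single colour can be detected across lighting changes:

```python
import colorsys

def hue_mask(rgb_pixels, target_hue, tol=0.05, min_s=0.3, min_v=0.2):
    """Flag pixels whose HSV hue lies within `tol` of a target hue.
    Saturation/value floors reject grey and near-black pixels."""
    mask = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        d = min(abs(h - target_hue), 1 - abs(h - target_hue))  # hue wraps at 1.0
        mask.append(d <= tol and s >= min_s and v >= min_v)
    return mask

# Pure red under two lighting levels vs. a grey background pixel
pixels = [(255, 0, 0), (128, 10, 10), (120, 120, 120)]
print(hue_mask(pixels, target_hue=0.0))  # [True, True, False]
```

The same red hue is detected at both brightness levels while the grey pixel is rejected by the saturation floor, which is exactly why such detectors are more lighting-robust than raw RGB thresholds.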
New methods for image collection and analysis in scanning Auger microscopy
NASA Technical Reports Server (NTRS)
Browning, R.
1985-01-01
While scanning Auger micrographs are used extensively for illustrating the stoichiometry of complex surfaces and for indicating areas of interest for fine-point Auger spectroscopy, there are many problems in the quantification and analysis of Auger images. These problems include multiple contrast mechanisms and the lack of meaningful relationships with other Auger data. Collection of multielemental Auger images allows some new approaches to image analysis and presentation. Information about the distribution and quantity of elemental combinations at a surface is retrievable, and particular combinations of elements, such as alloy phases, can be imaged. Results from the precipitation-hardened alloy Al-2124 illustrate multispectral Auger imaging.
Colony image acquisition and genetic segmentation algorithm and colony analyses
NASA Astrophysics Data System (ADS)
Wang, W. X.
2012-01-01
Colony analysis is used in a large number of fields such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, and sterility testing. In order to reduce labour and increase analysis accuracy, many researchers and developers have worked on image analysis systems. The main problems in these systems are image acquisition, image segmentation, and image analysis. In this paper, to acquire colony images of good quality, an illumination box was constructed; in the box, the distances between the lights and the dish, the camera lens and the lights, and the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows one to treat the segmentation problem as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All the above visual colony parameters can be selected and combined to form new engineering parameters, and the colony analysis can be applied in different applications.
NASA Technical Reports Server (NTRS)
Ross, B. E.
1971-01-01
The Moiré method of experimental stress analysis poses a problem similar to one encountered in astrometry: it is necessary to extract accurate coordinates from images on photographic plates. The solution to this mutual problem found applicable to the field of experimental stress analysis is presented to outline the measurement problem. A discussion of the photo-reading device developed to make the measurements follows.
Research relative to automated multisensor image registration
NASA Technical Reports Server (NTRS)
Kanal, L. N.
1983-01-01
The basic approaches to image registration are surveyed. Three image models are presented as models of the subpixel problem, and a variety of approaches to subpixel analysis are presented using these models.
Automatic single-image-based rain streaks removal via image decomposition.
Kang, Li-Wei; Lin, Chia-Wen; Fu, Yu-Hsiang
2012-04-01
Rain removal from a video is a challenging problem and has been recently investigated extensively. Nevertheless, the problem of rain removal from a single image was rarely studied in the literature, where no temporal information among successive images can be exploited, making the problem very challenging. In this paper, we propose a single-image-based rain removal framework via properly formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a "rain component" and a "nonrain component" by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.
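The first stage of the described framework, splitting the image into low- and high-frequency parts with a bilateral filter, can be sketched in one dimension; the dictionary-learning and sparse-coding stage is omitted, and the parameter values here are illustrative only:

```python
import numpy as np

def bilateral_1d(signal, radius=3, sigma_s=2.0, sigma_r=0.2):
    """Edge-preserving smoother: each sample becomes a weighted mean of its
    neighbours, with weights that fall off with both spatial distance and
    intensity difference (so true edges are not blurred away)."""
    out = np.empty(len(signal), dtype=float)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        window = signal[lo:hi].astype(float)
        spatial = np.exp(-((np.arange(lo, hi) - i) ** 2) / (2 * sigma_s ** 2))
        rng = np.exp(-((window - signal[i]) ** 2) / (2 * sigma_r ** 2))
        w = spatial * rng
        out[i] = (w * window).sum() / w.sum()
    return out

# Step edge (scene structure) plus one high-frequency "rain" spike
sig = np.concatenate([np.zeros(32), np.ones(32)])
sig[10] += 0.5
low = bilateral_1d(sig)   # low-frequency part: edge preserved, spike attenuated
high = sig - low          # high-frequency part: carries the spike
print(abs(high[10]) > abs(high[40]))  # spike lands in the HF part: True
```

The point of the decomposition is visible here: the step edge survives in the low-frequency part, while the rain-like spike is pushed into the high-frequency residual, which the paper then separates into rain and non-rain components via sparse coding.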
Raman imaging from microscopy to macroscopy: Quality and safety control of biological materials
USDA-ARS's Scientific Manuscript database
Raman imaging can analyze biological materials by generating detailed chemical images. Over the last decade, tremendous advancements in Raman imaging and data analysis techniques have overcome problems such as long data acquisition and analysis times and poor sensitivity. This review article introdu...
Xu, Yihua; Pitot, Henry C
2006-03-01
In the studies of quantitative stereology of rat hepatocarcinogenesis, we have used image analysis technology (automatic particle analysis) to obtain data such as liver tissue area, size and location of altered hepatic focal lesions (AHF), and nuclei counts. These data are then used for three-dimensional estimation of AHF occurrence and nuclear labeling index analysis. These are important parameters for quantitative studies of carcinogenesis, for screening and classifying carcinogens, and for risk estimation. To take such measurements, structures or cells of interest should be separated from the other components based on the difference of color and density. Common background problems seen on the captured sample image such as uneven light illumination or color shading can cause severe problems in the measurement. Two application programs (BK_Correction and Pixel_Separator) have been developed to solve these problems. With BK_Correction, common background problems such as incorrect color temperature setting, color shading, and uneven light illumination background, can be corrected. With Pixel_Separator different types of objects can be separated from each other in relation to their color, such as seen with different colors in immunohistochemically stained slides. The resultant images of such objects separated from other components are then ready for particle analysis. Objects that have the same darkness but different colors can be accurately differentiated in a grayscale image analysis system after application of these programs.
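The Pixel_Separator program is not described in detail; as a hedged sketch of what colour-based pixel separation might look like, pixels can be assigned to the nearest of a set of reference stain colours (the reference colours below are invented for illustration):

```python
import numpy as np

def separate_by_color(pixels, references):
    """Assign each RGB pixel to the nearest reference colour (Euclidean
    distance in RGB), mimicking the separation of differently stained
    objects from each other before particle analysis."""
    pixels = np.asarray(pixels, dtype=float)    # shape (N, 3)
    refs = np.asarray(references, dtype=float)  # shape (K, 3)
    d = ((pixels[:, None, :] - refs[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)                     # index of nearest reference

refs = [(160, 60, 140),   # e.g. a purple nuclear counterstain (hypothetical)
        (200, 160, 60),   # e.g. a brown immunostain (hypothetical)
        (240, 240, 240)]  # background
px = [(150, 70, 130), (210, 150, 70), (250, 250, 245)]
print(separate_by_color(px, refs).tolist())  # [0, 1, 2]
```

A production system would first apply a background correction such as the one BK_Correction performs, since a colour shading gradient shifts every pixel toward the wrong reference.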
On the importance of mathematical methods for analysis of MALDI-imaging mass spectrometry data.
Trede, Dennis; Kobarg, Jan Hendrik; Oetjen, Janina; Thiele, Herbert; Maass, Peter; Alexandrov, Theodore
2012-03-21
In the last decade, matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS), also called MALDI-imaging, has proven its potential in proteomics and has been successfully applied to various types of biomedical problems, in particular to histopathological label-free analysis of tissue sections. In histopathology, MALDI-imaging is used as a general analytic tool revealing the functional proteomic structure of tissue sections, and as a discovery tool for detecting new biomarkers discriminating a region annotated by an experienced histologist, in particular for cancer studies. A typical MALDI-imaging data set contains 10⁸ to 10⁹ intensity values occupying more than 1 GB. Analysis and interpretation of such a huge amount of data is a mathematically, statistically and computationally challenging problem. In this paper we overview some computational methods for analysis of MALDI-imaging data sets. We discuss the importance of data preprocessing, which typically includes normalization, baseline removal and peak picking, and highlight the importance of image denoising when visualizing IMS data.
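The preprocessing steps named here (normalisation, baseline removal, peak picking) can be sketched on a single spectrum; this is a generic illustration, not the authors' pipeline, and the window size and peak threshold are arbitrary choices:

```python
import numpy as np

def preprocess_spectrum(intensities, window=5):
    """Generic MALDI-style preprocessing sketch:
    1. TIC normalisation: scale so intensities sum to 1.
    2. Baseline removal: subtract a running minimum.
    3. Peak picking: keep local maxima above the mean."""
    x = np.asarray(intensities, dtype=float)
    x = x / x.sum()                                    # total-ion-count normalisation
    baseline = np.array([x[max(0, i - window):i + window + 1].min()
                         for i in range(len(x))])
    x = x - baseline                                   # baseline removal
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > x.mean()]
    return x, peaks

spec = [1, 1, 2, 9, 2, 1, 1, 6, 1, 1]
_, peaks = preprocess_spectrum(spec)
print(peaks)  # [3, 7]
```

TIC normalisation makes spectra from different pixels comparable before any cross-pixel statistics are computed, which is why it comes first in most IMS pipelines.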
Solution of the problem of superposing image and digital map for detection of new objects
NASA Astrophysics Data System (ADS)
Rizaev, I. S.; Miftakhutdinov, D. I.; Takhavova, E. G.
2018-01-01
The problem of superposing a map of the terrain with an image of the terrain is considered; the image may be represented in different frequency bands. Further analysis of the results of collating the digital map with the image of the corresponding terrain is described, and an approach to detecting differences between information represented on the digital map and information in the image of the corresponding area is offered. An algorithm for calculating the brightness values of the converted image area on the original picture is also offered; the calculation is based on information about the navigation parameters and on arranged bench marks. To solve the posed problem, experiments were performed, and their results are shown in this paper. The presented algorithms are applicable to ground complexes for remote sensing data, to assess differences between resulting images and accurate geopositional data. They are also suitable for detecting new objects in an image, based on analysis of the match between the digital map and the image of the corresponding locality.
Exact image theory for the problem of dielectric/magnetic slab
NASA Technical Reports Server (NTRS)
Lindell, I. V.
1987-01-01
The exact image method, recently introduced for the exact solution of electromagnetic field problems involving homogeneous half-spaces and microstrip-like geometries, is developed for the problem of a homogeneous slab of dielectric and/or magnetic material in free space. Expressions for the image sources, creating the exact reflected and transmitted fields, are given and their numerical evaluation is demonstrated. Nonradiating modes, guided by the slab and responsible for the loss of convergence of the image functions, are considered and extracted. The theory allows, for example, an analysis of finite ground planes in microstrip antenna structures.
Image pattern recognition supporting interactive analysis and graphical visualization
NASA Technical Reports Server (NTRS)
Coggins, James M.
1992-01-01
Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.
Inverse transport problems in quantitative PAT for molecular imaging
NASA Astrophysics Data System (ADS)
Ren, Kui; Zhang, Rongting; Zhong, Yimin
2015-12-01
Fluorescence photoacoustic tomography (fPAT) is a molecular imaging modality that combines photoacoustic tomography with fluorescence imaging to obtain high-resolution imaging of fluorescence distributions inside heterogeneous media. The objective of this work is to study inverse problems in the quantitative step of fPAT, where we intend to reconstruct physical coefficients in a coupled system of radiative transport equations using internal data recovered from ultrasound measurements. We derive uniqueness and stability results on the inverse problems and develop some efficient algorithms for image reconstruction. Numerical simulations based on synthetic data are presented to validate the theoretical analysis. The results we present here complement those in Ren K and Zhao H (2013 SIAM J. Imaging Sci. 6 2024-49) on the same problem but in the diffusive regime.
NASA Astrophysics Data System (ADS)
Kidoh, Masafumi; Shen, Zeyang; Suzuki, Yuki; Ciuffo, Luisa; Ashikaga, Hiroshi; Fung, George S. K.; Otake, Yoshito; Zimmerman, Stefan L.; Lima, Joao A. C.; Higuchi, Takahiro; Lee, Okkyun; Sato, Yoshinobu; Becker, Lewis C.; Fishman, Elliot K.; Taguchi, Katsuyuki
2017-03-01
We have developed a digitally synthesized patient, which we call the "Zach" (Zero millisecond Adjustable Clinical Heart) phantom, which allows access to the ground truth and assessment of image-based cardiac functional analysis (CFA) using CT images with clinically realistic settings. The study using the Zach phantom revealed a major problem with image-based CFA: "false dyssynchrony." Even though the true motion of the wall segments is in synchrony, it may appear dyssynchronous in the reconstructed cardiac CT images. This is attributed to how cardiac images are reconstructed and how wall locations are updated over cardiac phases. The presence and degree of false dyssynchrony may vary from scan to scan, which could degrade the accuracy and the repeatability (or precision) of image-based CT-CFA exams.
Analysis of Variance in Statistical Image Processing
NASA Astrophysics Data System (ADS)
Kurz, Ludwik; Hafed Benteftifa, M.
1997-04-01
A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
A Mathematical Framework for Image Analysis
1991-08-01
The results reported here were derived from the research project 'A Mathematical Framework for Image Analysis' supported by the Office of Naval Research, contract N00014-88-K-0289 to Brown University. A common theme for the work reported is the use of probabilistic methods for problems in image analysis and image reconstruction. Five areas of research are described: rigid body recognition using a decision tree/combinatorial approach; nonrigid
Standardisation of DNA quantitation by image analysis: quality control of instrumentation.
Puech, M; Giroud, F
1999-05-01
DNA image analysis is frequently performed in clinical practice as a prognostic tool and to improve diagnosis. The precision of prognosis and diagnosis depends on the accuracy of analysis and particularly on the quality of image analysis systems. It has been reported that image analysis systems used for DNA quantification differ widely in their characteristics (Thunissen et al.: Cytometry 27: 21-25, 1997). This induces inter-laboratory variations when the same sample is analysed in different laboratories. In microscopic image analysis, the principal instrumentation errors arise from the optical and electronic parts of systems. They bring about problems of instability, non-linearity, and shading and glare phenomena. The aim of this study is to establish tools and standardised quality control procedures for microscopic image analysis systems. Specific reference standard slides have been developed to control instability, non-linearity, shading and glare phenomena and segmentation efficiency. Some systems have been controlled with these tools and these quality control procedures. Interpretation criteria and accuracy limits of these quality control procedures are proposed according to the conclusions of a European project called PRESS project (Prototype Reference Standard Slide). Beyond these limits, tested image analysis systems are not qualified to realise precise DNA analysis. The different procedures presented in this work determine if an image analysis system is qualified to deliver sufficiently precise DNA measurements for cancer case analysis. If the controlled systems are beyond the defined limits, some recommendations are given to find a solution to the problem.
Kim, Won Hwa; Chung, Moo K; Singh, Vikas
2013-01-01
The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.
Digital image processing of bone - Problems and potentials
NASA Technical Reports Server (NTRS)
Morey, E. R.; Wronski, T. J.
1980-01-01
The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
NASA Astrophysics Data System (ADS)
Bianchetti, Raechel Anne
Remotely sensed images have become a ubiquitous part of our daily lives. From novice users, aiding in search and rescue missions using tools such as TomNod, to trained analysts, synthesizing disparate data to address complex problems like climate change, imagery has become central to geospatial problem solving. Expert image analysts are continually faced with rapidly developing sensor technologies and software systems. In response to these cognitively demanding environments, expert analysts develop specialized knowledge and analytic skills to address increasingly complex problems. This study identifies the knowledge, skills, and analytic goals of expert image analysts tasked with identification of land cover and land use change. Analysts participating in this research are currently working as part of a national level analysis of land use change, and are well versed with the use of TimeSync, forest science, and image analysis. The results of this study benefit current analysts as it improves their awareness of their mental processes used during the image interpretation process. The study also can be generalized to understand the types of knowledge and visual cues that analysts use when reasoning with imagery for purposes beyond land use change studies. Here a Cognitive Task Analysis framework is used to organize evidence from qualitative knowledge elicitation methods for characterizing the cognitive aspects of the TimeSync image analysis process. Using a combination of content analysis, diagramming, semi-structured interviews, and observation, the study highlights the perceptual and cognitive elements of expert remote sensing interpretation. Results show that image analysts perform several standard cognitive processes, but flexibly employ these processes in response to various contextual cues. Expert image analysts' ability to think flexibly during their analysis process was directly related to their amount of image analysis experience. 
Additionally, results show that the basic Image Interpretation Elements continue to be important despite technological augmentation of the interpretation process. These results are used to derive a set of design guidelines for developing geovisual analytic tools and training to support image analysis.
Artistic image analysis using graph-based learning approaches.
Carneiro, Gustavo
2013-08-01
We introduce a new methodology for the problem of artistic image analysis, which among other tasks, involves the automatic identification of visual classes present in an art work. In this paper, we advocate the idea that artistic image analysis must explore a graph that captures the network of artistic influences by computing the similarities in terms of appearance and manual annotation. One of the novelties of our methodology is the proposed formulation that is a principled way of combining these two similarities in a single graph. Using this graph, we show that an efficient random walk algorithm based on an inverted label propagation formulation produces more accurate annotation and retrieval results compared with the following baseline algorithms: bag of visual words, label propagation, matrix completion, and structural learning. We also show that the proposed approach leads to a more efficient inference and training procedures. This experiment is run on a database containing 988 artistic images (with 49 visual classification problems divided into a multiclass problem with 27 classes and 48 binary problems), where we show the inference and training running times, and quantitative comparisons with respect to several retrieval and annotation performance measures.
Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.
Pang, Jiahao; Cheung, Gene
2017-04-01
Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior-the graph Laplacian regularizer-assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
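The central object, the graph Laplacian regularizer x^T L x, and its interpretation as a diffusion step can be illustrated on a toy graph; this is a generic sketch of the regularizer itself, not the paper's optimal-metric construction:

```python
import numpy as np

def graph_laplacian(weights):
    """Combinatorial Laplacian L = D - W for a symmetric weight matrix W."""
    return np.diag(weights.sum(axis=1)) - weights

def denoise_step(signal, weights, tau=0.2):
    """One gradient step on the regularizer x^T L x, i.e. a discrete
    diffusion step x <- x - tau * L x; graph-smooth signals are fixed points."""
    return signal - tau * (graph_laplacian(weights) @ signal)

# A 4-node path graph; node 2 carries a noisy spike
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([0.0, 0.0, 1.0, 0.0])
L = graph_laplacian(W)
before = x @ L @ x
after_x = denoise_step(x, W)
print(after_x @ L @ after_x < before)  # regularizer decreases: True
```

One step spreads the spike onto its neighbours and lowers x^T L x, which is the "tendency to promote piecewise smooth signals" the abstract refers to; the paper's contribution is choosing the edge weights (the metric) so that this smoothing respects image structure.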
Design of an image encryption scheme based on a multiple chaotic map
NASA Astrophysics Data System (ADS)
Tong, Xiao-Jun
2013-07-01
To address the problems that chaos degenerates under limited computer precision and that the Cat map has a small key space, this paper presents a chaotic map based on topological conjugacy, whose chaotic characteristics are proved by the Devaney definition. To produce a large key space, a Cat map named the block Cat map is also designed for the permutation process, based on multiple-dimensional chaotic maps. The image encryption algorithm is based on permutation-substitution, and each key is controlled by different chaotic maps. Entropy analysis, differential analysis, weak-key analysis, statistical analysis, cipher randomness analysis, and cipher sensitivity analysis with respect to key and plaintext are introduced to test the security of the new image encryption scheme. Through comparison of the proposed scheme with the AES, DES and Logistic encryption methods, we conclude that the image encryption method solves the problem of the low precision of one-dimensional chaotic functions and achieves higher speed and higher security.
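As a generic illustration of chaos-based substitution (not the paper's block Cat map scheme, and far too weak for real use), a logistic-map keystream keyed by the initial state can drive the substitution stage:

```python
def logistic_keystream(x0, r, n):
    """Iterate the logistic map x <- r*x*(1-x) and quantize each state
    to a byte; the sequence depends sensitively on the key (x0, r)."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        ks.append(int(x * 256) % 256)
    return ks

def xor_cipher(data, x0=0.3579, r=3.99):
    # Substitution stage only: XOR with the chaotic keystream.
    # A full scheme would add a permutation stage (e.g. a Cat map).
    return bytes(b ^ k for b, k in zip(data, logistic_keystream(x0, r, len(data))))

plain = bytes(range(16))            # stand-in for a row of pixel values
cipher = xor_cipher(plain)
print(xor_cipher(cipher) == plain)  # XOR with the same keystream inverts: True
```

The finite-precision degeneration the paper targets is visible even here: in double precision the logistic orbit eventually becomes periodic, which is why the authors construct a conjugate map with provable chaotic behaviour instead of using the raw logistic map.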
Model-based image analysis of a tethered Brownian fibre for shear stress sensing
2017-01-01
The measurement of fluid dynamic shear stress acting on a biologically relevant surface is a challenging problem, particularly in the complex environment of, for example, the vasculature. While an experimental method for the direct detection of wall shear stress via the imaging of a synthetic biology nanorod has recently been developed, the data interpretation so far has been limited to phenomenological random walk modelling, small-angle approximation, and image analysis techniques which do not take into account the production of an image from a three-dimensional subject. In this report, we develop a mathematical and statistical framework to estimate shear stress from rapid imaging sequences based firstly on stochastic modelling of the dynamics of a tethered Brownian fibre in shear flow, and secondly on a novel model-based image analysis, which reconstructs fibre positions by solving the inverse problem of image formation. This framework is tested on experimental data, providing the first mechanistically rational analysis of the novel assay. What follows further develops the established theory for an untethered particle in a semi-dilute suspension, which is of relevance to, for example, the study of Brownian nanowires without flow, and presents new ideas in the field of multi-disciplinary image analysis. PMID:29212755
Computer vision for microscopy diagnosis of malaria.
Tek, F Boray; Dempster, Andrew G; Kale, Izzet
2009-07-13
This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.
The pre-image problem in kernel methods.
Kwok, James Tin-yau; Tsang, Ivor Wai-hung
2004-11-01
In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as on using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.
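The noniterative idea can be illustrated in input space alone: once feature-space distances have been converted into estimated input-space distances, locating the pre-image reduces to a multilateration least-squares problem. A minimal numpy sketch of that final step (the function name and toy setup are ours, not the paper's):

```python
import numpy as np

def preimage_from_distances(anchors, d2):
    """Estimate a point's location from squared distances to known anchors.

    Linearizes ||x - a_j||^2 = d2_j by subtracting the first constraint,
    then solves the resulting linear system in the least-squares sense.
    """
    a0, d0 = anchors[0], d2[0]
    A = 2.0 * (anchors[1:] - a0)                    # rows: 2 (a_j - a_0)^T
    b = (np.sum(anchors[1:] ** 2, axis=1)
         - np.sum(a0 ** 2) - d2[1:] + d0)           # RHS of each linearized row
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Demo: recover a known 2-D point from exact squared distances to 4 anchors.
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_x = np.array([0.3, 0.7])
d2 = np.sum((anchors - true_x) ** 2, axis=1)
est = preimage_from_distances(anchors, d2)
```

In the kernel setting the distances come from the feature space (via the kernel trick) rather than being exact, but the algebra of the localization step is the same.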
Space-based infrared sensors of space target imaging effect analysis
NASA Astrophysics Data System (ADS)
Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang
2018-02-01
Target identification is one of the core problems of ballistic missile defense systems, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a space-based infrared sensor imaging model for a point-source ballistic target above the atmosphere. It then simulates the infrared imaging of an exo-atmospheric ballistic target from two aspects, the space-based sensor's camera parameters and the target's characteristics, and analyzes the imaging effects of camera line-of-sight jitter, camera system noise, and different wavebands on the target.
Optimal image alignment with random projections of manifolds: algorithm and geometric analysis.
Kokiopoulou, Effrosyni; Kressner, Daniel; Frossard, Pascal
2011-06-01
This paper addresses the problem of image alignment based on random measurements. Image alignment consists of estimating the relative transformation between a query image and a reference image. We consider the specific problem where the query image is provided in compressed form in terms of linear measurements captured by a vision sensor. We cast the alignment problem as a manifold distance minimization problem in the linear subspace defined by the measurements. The transformation manifold that represents synthesis of shift, rotation, and isotropic scaling of the reference image can be given in closed form when the reference pattern is sparsely represented over a parametric dictionary. We show that the objective function can then be decomposed as the difference of two convex functions (DC) in the particular case where the dictionary is built on Gaussian functions. Thus, the optimization problem becomes a DC program, which in turn can be solved globally by a cutting plane method. The quality of the solution is typically affected by the number of random measurements and the condition number of the manifold that describes the transformations of the reference image. We show that the curvature, which is closely related to the condition number, remains bounded in our image alignment problem, which means that the relative transformation between two images can be determined optimally in a reduced subspace.
[A spatial adaptive algorithm for endmember extraction on multispectral remote sensing image].
Zhu, Chang-Ming; Luo, Jian-Cheng; Shen, Zhan-Feng; Li, Jun-Li; Hu, Xiao-Dong
2011-10-01
Because the convex cone analysis (CCA) method can extract only a limited number of endmembers from multispectral imagery, this paper proposes a new endmember extraction method based on spatially adaptive spectral feature analysis of multispectral remote sensing images, using spatial clustering and image slicing. First, to remove spatial and spectral redundancy, the principal component analysis (PCA) algorithm is used to lower the dimensionality of the multispectral data. Second, the iterative self-organizing data analysis technique algorithm (ISODATA) is used to cluster the image by pixel spectral similarity. Then, through cluster post-processing and the merging of small clusters, the whole image is divided into several blocks (tiles). Lastly, the number of endmembers is determined from the landscape complexity of each image block and analysis of its scatter diagrams, and endmembers are extracted with the hourglass algorithm. An endmember extraction experiment on TM multispectral imagery showed that the method can extract endmember spectra from multispectral imagery effectively. Moreover, the method removes the limitation on the number of endmembers and improves the accuracy of endmember extraction, providing a new way to extract endmembers from multispectral images.
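The first two stages of such a pipeline can be sketched with numpy; here plain 2-means stands in for ISODATA, which additionally splits and merges clusters, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "multispectral image": 200 pixels x 6 bands, two spectral classes.
class_a = rng.normal(loc=0.2, scale=0.02, size=(100, 6))
class_b = rng.normal(loc=0.8, scale=0.02, size=(100, 6))
pixels = np.vstack([class_a, class_b])

# Step 1: PCA via SVD to remove spectral redundancy (keep 2 components).
centered = pixels - pixels.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T

# Step 2: cluster the PCA scores by spectral similarity
# (k-means here; ISODATA adds cluster splitting/merging on top).
centers = scores[[0, -1]].copy()          # one seed from each end of the data
for _ in range(20):
    d = np.linalg.norm(scores[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([scores[labels == k].mean(axis=0) for k in range(2)])
```

The later stages (tiling, scatter-diagram analysis, hourglass extraction) operate on the clusters this produces.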
An Evaluation of the Effects of Variable Sampling on Component, Image, and Factor Analysis.
ERIC Educational Resources Information Center
Velicer, Wayne F.; Fava, Joseph L.
1987-01-01
Principal component analysis, image component analysis, and maximum likelihood factor analysis were compared to assess the effects of variable sampling. Results with respect to degree of saturation and average number of variables per factor were clear and dramatic. Differential effects on boundary cases and nonconvergence problems were also found.…
NASA Technical Reports Server (NTRS)
Tilton, James C.
1988-01-01
Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
Elimination of RF inhomogeneity effects in segmentation.
Agus, Onur; Ozkan, Mehmed; Aydin, Kubilay
2007-01-01
Various methods have been proposed for the segmentation and analysis of MR images. However, the efficiency of these techniques is affected by artifacts that occur in the imaging system, one of the most commonly encountered being intensity variation across an image. Different methods are used to overcome this problem. In this paper we propose a method for eliminating intensity artifacts in the segmentation of MR images. Inter-imager variations are also minimized, so that the same tissue segmentation is produced for the same patient. A well-known multivariate classification algorithm, maximum likelihood, is employed to illustrate the improvement in segmentation.
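The maximum-likelihood classification step assigns each voxel to the class whose fitted density is largest. A univariate numpy sketch (the tissue means, spreads, and two-class setup are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Training intensities for two hypothetical tissue classes (1-D for simplicity).
tissue1 = rng.normal(100.0, 5.0, 500)
tissue2 = rng.normal(140.0, 8.0, 500)
params = [(t.mean(), t.std()) for t in (tissue1, tissue2)]

def log_likelihood(x, mu, sigma):
    # Log of the univariate Gaussian density (constants kept for clarity).
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def classify(x):
    # Maximum-likelihood rule: pick the class with the highest density at x.
    scores = [log_likelihood(x, mu, s) for mu, s in params]
    return int(np.argmax(scores))

labels = [classify(v) for v in (95.0, 138.0)]   # one voxel near each mean
```

Uncorrected intensity inhomogeneity shifts voxel values away from their class means, which is exactly why such a classifier benefits from the artifact elimination the paper proposes.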
Photon Limited Images and Their Restoration
1976-03-01
arises from noise inherent in the detected image data. In the first part of this report a model is developed which can be used to mathematically and statistically describe an image detected at low light levels. This model serves to clarify some basic properties of photon noise, and provides a basis for the analysis of image restoration. In the second part the problem of linear least-squares restoration of imagery limited by photon noise is addressed.
Ethnicity identification from face images
NASA Astrophysics Data System (ADS)
Lu, Xiaoguang; Jain, Anil K.
2004-08-01
Human facial images provide demographic information, such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. The image-based ethnicity identification problem is addressed in a machine learning framework. A linear discriminant analysis (LDA) based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images. An ensemble framework, which integrates the LDA analysis of the input face images at different scales, is proposed to further improve the classification performance. The product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, with equal balance between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can be helpful in facial identity recognition. Used as a "soft" biometric, face matching scores can be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
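The product rule the ensemble uses simply multiplies the per-scale class posteriors and renormalises. A minimal sketch (the posterior values are invented for illustration):

```python
import numpy as np

# Hypothetical per-scale posterior probabilities P(class | image) for the
# two-class task; one row per analysis scale, columns: (Asian, non-Asian).
per_scale = np.array([
    [0.70, 0.30],
    [0.55, 0.45],
    [0.80, 0.20],
])

# Product rule: multiply the per-scale posteriors, then renormalise.
combined = per_scale.prod(axis=0)
combined /= combined.sum()
decision = int(combined.argmax())   # index of the winning class
```

A consequence of the product rule is that a single confident dissenting scale can veto the others, which is why it works best when the per-scale classifiers are well calibrated.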
Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms
Perez-Sanz, Fernando; Navarro, Pedro J
2017-01-01
Abstract The study of phenomes or phenomics has been a central part of biology. The field of automatic phenotype acquisition technologies based on images has seen an important advance in the last years. As with other high-throughput technologies, it addresses a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images. We give an in-depth analysis of image processing with its major issues and the algorithms that are being used or emerging as useful to obtain data out of images in an automatic fashion. PMID:29048559
Improved Vote Aggregation Techniques for the Geo-Wiki Cropland Capture Crowdsourcing Game
NASA Astrophysics Data System (ADS)
Baklanov, Artem; Fritz, Steffen; Khachay, Michael; Nurmukhametov, Oleg; Salk, Carl; See, Linda; Shchepashchenko, Dmitry
2016-04-01
Crowdsourcing is a new approach for solving data processing problems for which conventional methods appear to be inaccurate, expensive, or time-consuming. Nowadays, the development of new crowdsourcing techniques is mostly motivated by so called Big Data problems, including problems of assessment and clustering for large datasets obtained in aerospace imaging, remote sensing, and even in social network analysis. By involving volunteers from all over the world, the Geo-Wiki project tackles problems of environmental monitoring with applications to flood resilience, biomass data analysis and classification of land cover. For example, the Cropland Capture Game, which is a gamified version of Geo-Wiki, was developed to aid in the mapping of cultivated land, and was used to gather 4.5 million image classifications from the Earth's surface. More recently, the Picture Pile game, which is a more generalized version of Cropland Capture, aims to identify tree loss over time from pairs of very high resolution satellite images. Despite recent progress in image analysis, the solution to these problems is hard to automate since human experts still outperform the majority of machine learning algorithms and artificial systems in this field on certain image recognition tasks. The replacement of rare and expensive experts by a team of distributed volunteers seems to be promising, but this approach leads to challenging questions such as: how can individual opinions be aggregated optimally, how can confidence bounds be obtained, and how can the unreliability of volunteers be dealt with? In this paper, on the basis of several known machine learning techniques, we propose a technical approach to improve the overall performance of the majority voting decision rule used in the Cropland Capture Game. The proposed approach increases the estimated consistency with expert opinion from 77% to 86%.
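One simple way to move beyond plain majority voting, illustrative only since the paper's aggregation draws on several machine learning techniques, is to weight each volunteer's vote by an estimated reliability:

```python
from collections import Counter

def weighted_vote(votes, reliability):
    """Aggregate binary image labels, weighting volunteers by reliability.

    votes: dict volunteer_id -> 0/1 label for one image
    reliability: dict volunteer_id -> weight in (0, 1]
    """
    score = {0: 0.0, 1: 0.0}
    for vid, label in votes.items():
        score[label] += reliability.get(vid, 0.5)   # unknown raters get 0.5
    return max(score, key=score.get)

# Toy example: two highly reliable raters disagree with three weak ones.
votes = {"a": 1, "b": 1, "c": 0, "d": 0, "e": 0}
reliability = {"a": 0.95, "b": 0.9, "c": 0.55, "d": 0.5, "e": 0.5}

plain_majority = Counter(votes.values()).most_common(1)[0][0]
weighted = weighted_vote(votes, reliability)
```

Here the plain majority and the reliability-weighted rule reach opposite decisions, which is exactly the kind of case where the aggregation strategy changes the estimated consistency with expert opinion.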
Rough-Fuzzy Clustering and Unsupervised Feature Selection for Wavelet Based MR Image Segmentation
Maji, Pradipta; Roy, Shaswati
2015-01-01
Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, integrating judiciously the merits of rough-fuzzy computing and multiresolution image analysis technique. The proposed method assumes that the major brain tissues, namely, gray matter, white matter, and cerebrospinal fluid from the MR images are considered to have different textural properties. The dyadic wavelet analysis is used to extract the scale-space feature vector for each pixel, while the rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method is introduced, based on maximum relevance-maximum significance criterion, to select relevant and significant textural features for segmentation problem, while the mathematical morphology based skull stripping preprocessing step is proposed to remove the non-cerebral tissues like skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices. PMID:25848961
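Rough-fuzzy clustering builds on fuzzy c-means, which assigns each pixel a soft membership to every tissue class rather than a hard label. A minimal numpy sketch of standard fuzzy c-means (not the authors' rough-fuzzy extension, and with synthetic 2-D features rather than wavelet scale-space vectors):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy c-means: alternate membership and center updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                 # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

# Two well-separated "tissue" feature clusters in 2-D.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.05, (50, 2)),
               rng.normal(1.0, 0.05, (50, 2))])
U, centers = fuzzy_c_means(X)
hard = U.argmax(axis=1)
```

The soft memberships in `U` are what make it possible to model the partial-volume uncertainty at tissue boundaries that hard clustering cannot express.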
Basic research planning in mathematical pattern recognition and image analysis
NASA Technical Reports Server (NTRS)
Bryant, J.; Guseman, L. F., Jr.
1981-01-01
Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene interference; (4) parallel processing and image data structures; and (5) continuing studies in polarization; computer architectures and parallel processing; and the applicability of "expert systems" to interactive analysis.
Content-addressable read/write memories for image analysis
NASA Technical Reports Server (NTRS)
Snyder, W. E.; Savage, C. D.
1982-01-01
The commonly encountered image analysis problems of region labeling and clustering are found to be cases of search-and-rename problem which can be solved in parallel by a system architecture that is inherently suitable for VLSI implementation. This architecture is a novel form of content-addressable memory (CAM) which provides parallel search and update functions, allowing speed reductions down to constant time per operation. It has been proposed in related investigations by Hall (1981) that, with VLSI, CAM-based structures with enhanced instruction sets for general purpose processing will be feasible.
Pattern recognition and expert image analysis systems in biomedical image processing (Invited Paper)
NASA Astrophysics Data System (ADS)
Oosterlinck, A.; Suetens, P.; Wu, Q.; Baird, M.; F. M., C.
1987-09-01
This paper gives an overview of pattern recognition (P.R.) techniques used in biomedical image processing and problems related to the different P.R. solutions. The use of knowledge-based systems to overcome P.R. difficulties is also described. This is illustrated by a common example of a biomedical image processing application.
Medical image analysis with artificial neural networks.
Jiang, J; Trundle, P; Ren, J
2010-12-01
Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.
MRI Segmentation of the Human Brain: Challenges, Methods, and Applications
Despotović, Ivana
2015-01-01
Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation. PMID:25945121
Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms.
Perez-Sanz, Fernando; Navarro, Pedro J; Egea-Cortines, Marcos
2017-11-01
The study of phenomes or phenomics has been a central part of biology. The field of automatic phenotype acquisition technologies based on images has seen an important advance in the last years. As with other high-throughput technologies, it addresses a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images. We give an in-depth analysis of image processing with its major issues and the algorithms that are being used or emerging as useful to obtain data out of images in an automatic fashion. © The Author 2017. Published by Oxford University Press.
Tcheng, David K.; Nayak, Ashwin K.; Fowlkes, Charless C.; Punyasena, Surangi W.
2016-01-01
Discriminating between black and white spruce (Picea mariana and Picea glauca) is a difficult palynological classification problem that, if solved, would provide valuable data for paleoclimate reconstructions. We developed an open-source visual recognition software (ARLO, Automated Recognition with Layered Optimization) capable of differentiating between these two species at an accuracy on par with human experts. The system applies pattern recognition and machine learning to the analysis of pollen images and discovers general-purpose image features, defined by simple features of lines and grids of pixels taken at different dimensions, size, spacing, and resolution. It adapts to a given problem by searching for the most effective combination of both feature representation and learning strategy. This results in a powerful and flexible framework for image classification. We worked with images acquired using an automated slide scanner. We first applied a hash-based “pollen spotting” model to segment pollen grains from the slide background. We next tested ARLO’s ability to reconstruct black to white spruce pollen ratios using artificially constructed slides of known ratios. We then developed a more scalable hash-based method of image analysis that was able to distinguish between the pollen of black and white spruce with an estimated accuracy of 83.61%, comparable to human expert performance. Our results demonstrate the capability of machine learning systems to automate challenging taxonomic classifications in pollen analysis, and our success with simple image representations suggests that our approach is generalizable to many other object recognition problems. PMID:26867017
Microscopy image segmentation tool: Robust image data analysis
NASA Astrophysics Data System (ADS)
Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.
2014-03-01
We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.
Wavelet-Based Interpolation and Representation of Non-Uniformly Sampled Spacecraft Mission Data
NASA Technical Reports Server (NTRS)
Bose, Tamal
2000-01-01
A well-documented problem in the analysis of data collected by spacecraft instruments is the need for an accurate, efficient representation of the data set. The data may suffer from several problems, including additive noise, data dropouts, an irregularly-spaced sampling grid, and time-delayed sampling. These data irregularities render most traditional signal processing techniques unusable, and thus the data must be interpolated onto an even grid before scientific analysis techniques can be applied. In addition, the extremely large volume of data collected by scientific instrumentation presents many challenging problems in the area of compression, visualization, and analysis. Therefore, a representation of the data is needed which provides a structure which is conducive to these applications. Wavelet representations of data have already been shown to possess excellent characteristics for compression, data analysis, and imaging. The main goal of this project is to develop a new adaptive filtering algorithm for image restoration and compression. The algorithm should have low computational complexity and a fast convergence rate. This will make the algorithm suitable for real-time applications. The algorithm should be able to remove additive noise and reconstruct lost data samples from images.
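The prerequisite step described above, placing irregular samples on an even grid before standard signal processing can be applied, can be sketched in its simplest (linear) form; the signal and sampling pattern here are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Irregularly sampled, noisy signal (e.g., an instrument time series).
t = np.sort(rng.uniform(0.0, 10.0, 80))
y = np.sin(t) + rng.normal(0.0, 0.05, t.size)

# Resample onto an even grid spanning the observed interval.
t_even = np.linspace(t[0], t[-1], 128)
y_even = np.interp(t_even, t, y)        # linear interpolation
```

Wavelet-based interpolation, as pursued in the project, replaces this linear rule with a multiresolution reconstruction that copes better with dropouts and noise, but the input/output shape of the problem is the same.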
FISH Finder: a high-throughput tool for analyzing FISH images
Shirley, James W.; Ty, Sereyvathana; Takebayashi, Shin-ichiro; Liu, Xiuwen; Gilbert, David M.
2011-01-01
Motivation: Fluorescence in situ hybridization (FISH) is used to study the organization and the positioning of specific DNA sequences within the cell nucleus. Analyzing the data from FISH images is a tedious process that invokes an element of subjectivity. Automated FISH image analysis offers savings in time as well as gaining the benefit of objective data analysis. While several FISH image analysis software tools have been developed, they often use a threshold-based segmentation algorithm for nucleus segmentation. As fluorescence signal intensities can vary significantly from experiment to experiment, from cell to cell, and within a cell, threshold-based segmentation is inflexible and often insufficient for automatic image analysis, leading to additional manual segmentation and potential subjective bias. To overcome these problems, we developed a graphical software tool called FISH Finder to automatically analyze FISH images that vary significantly. By posing the nucleus segmentation as a classification problem, compound Bayesian classifier is employed so that contextual information is utilized, resulting in reliable classification and boundary extraction. This makes it possible to analyze FISH images efficiently and objectively without adjustment of input parameters. Additionally, FISH Finder was designed to analyze the distances between differentially stained FISH probes. Availability: FISH Finder is a standalone MATLAB application and platform independent software. The program is freely available from: http://code.google.com/p/fishfinder/downloads/list Contact: gilbert@bio.fsu.edu PMID:21310746
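The threshold-based baseline that FISH Finder improves on is typically a global histogram threshold such as Otsu's. A numpy sketch on synthetic bimodal intensities shows how it works, and implicitly why a fixed histogram rule becomes brittle when signal intensities shift between experiments:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Classic Otsu threshold: maximise between-class variance of the histogram."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                         # class-0 probability at each cut
    mu = np.cumsum(p * centers)               # cumulative mean
    mu_t = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros(nbins)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[between.argmax()]

rng = np.random.default_rng(0)
# Background around 50, nuclei around 150: cleanly bimodal, so Otsu succeeds.
img = np.concatenate([rng.normal(50, 5, 5000), rng.normal(150, 5, 5000)])
thr = otsu_threshold(img)
```

When the two modes overlap or drift from cell to cell, the single global cut fails, which motivates the classifier-based segmentation with contextual information described in the abstract.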
Multivariate analysis: A statistical approach for computations
NASA Astrophysics Data System (ADS)
Michu, Sachin; Kaushik, Vandana
2014-10-01
Multivariate analysis is a statistical approach commonly used in automotive diagnosis, education, evaluating clusters in finance, and, more recently, in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion of factor analysis (FA) in image retrieval and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database based on their content; the problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks such as DDoS attacks and network scanning.
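The correlation-matrix idea can be sketched directly with numpy: under normal conditions traffic features co-vary, and an anomaly score is the deviation of the current correlation matrix from a learned baseline. The features and "attack" here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three traffic features sampled over time; normally they co-vary strongly.
n = 200
base = rng.normal(0, 1, n)
traffic = np.vstack([base + rng.normal(0, 0.1, n) for _ in range(3)])

normal_corr = np.corrcoef(traffic)            # baseline correlation structure

# An "attack" window breaks the usual structure: feature 0 decouples.
attack = traffic.copy()
attack[0, :] = rng.normal(0, 1, n)
attack_corr = np.corrcoef(attack)

# Anomaly score: total deviation from the baseline correlation matrix.
score = np.abs(attack_corr - normal_corr).sum()
```

In practice the baseline matrix would be estimated over many normal windows and the score thresholded, but the mechanism is the same.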
Semisupervised kernel marginal Fisher analysis for face recognition.
Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun
2013-01-01
Dimensionality reduction is a key problem in face recognition due to the high-dimensionality of face image. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold adaptive nonparameter kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.
Correcting spacecraft jitter in HiRISE images
Sutton, S. S.; Boyd, A.K.; Kirk, Randolph L.; Cook, Debbie; Backer, Jean; Fennema, A.; Heyd, R.; McEwen, A.S.; Mirchandani, S.D.; Wu, B.; Di, K.; Oberst, J.; Karachevtseva, I.
2017-01-01
Mechanical oscillations or vibrations on spacecraft, also called pointing jitter, cause geometric distortions and/or smear in high resolution digital images acquired from orbit. Geometric distortion is especially a problem with pushbroom type sensors, such as the High Resolution Imaging Science Experiment (HiRISE) instrument on board the Mars Reconnaissance Orbiter (MRO). Geometric distortions occur at a range of frequencies that may not be obvious in the image products, but can cause problems with stereo image correlation in the production of digital elevation models, and in measuring surface changes over time in orthorectified images. The HiRISE focal plane comprises a staggered array of fourteen charge-coupled devices (CCDs) with pixel IFOV of 1 microradian. The high spatial resolution of HiRISE makes it both sensitive to, and an excellent recorder of jitter. We present an algorithm using Fourier analysis to resolve the jitter function for a HiRISE image that is then used to update instrument pointing information to remove geometric distortions from the image. Implementation of the jitter analysis and image correction is performed on selected HiRISE images. Resulting corrected images and updated pointing information are made available to the public. Results show marked reduction of geometric distortions. This work has applications to similar cameras operating now, and to the design of future instruments (such as the Europa Imaging System).
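The core of the jitter-resolution step, identifying dominant oscillation frequencies in measured pointing offsets via Fourier analysis, can be sketched as follows; the sampling rate, jitter frequency, and noise level are invented for illustration and are not HiRISE parameters:

```python
import numpy as np

# Simulated pointing offsets sampled at 1 kHz with a 40 Hz jitter component.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
offsets = 0.5 * np.sin(2 * np.pi * 40.0 * t) + rng.normal(0, 0.05, t.size)

# Fourier analysis: locate the dominant oscillation frequency.
spectrum = np.abs(np.fft.rfft(offsets - offsets.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
dominant = freqs[spectrum[1:].argmax() + 1]     # skip the DC bin
```

Once the jitter function is resolved this way, it can be used, as the abstract describes, to update the pointing information and resample the image free of the distortion.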
NASA Technical Reports Server (NTRS)
Heydorn, R. D.
1984-01-01
The Mathematical Pattern Recognition and Image Analysis (MPRIA) Project is concerned with basic research problems related to the study of the Earth from remotely sensed measurements of its surface characteristics. The program goal is to better understand how to analyze the digital image that represents the spatial, spectral, and temporal arrangement of these measurements for purposes of making selected inferences about the Earth.
Modeling visual problem solving as analogical reasoning.
Lovett, Andrew; Forbus, Kenneth
2017-01-01
We present a computational model of visual problem solving, designed to solve problems from the Raven's Progressive Matrices intelligence test. The model builds on the claim that analogical reasoning lies at the heart of visual problem solving, and intelligence more broadly. Images are compared via structure mapping, aligning the common relational structure in 2 images to identify commonalities and differences. These commonalities or differences can themselves be reified and used as the input for future comparisons. When images fail to align, the model dynamically rerepresents them to facilitate the comparison. In our analysis, we find that the model matches adult human performance on the Standard Progressive Matrices test, and that problems which are difficult for the model are also difficult for people. Furthermore, we show that model operations involving abstraction and rerepresentation are particularly difficult for people, suggesting that these operations may be critical for performing visual problem solving, and reasoning more generally, at the highest level. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Information Acquisition, Analysis and Integration
2016-08-03
Keywords: sensing and processing, theory, applications, signal processing, image and video processing, machine learning, technology transfer. Accomplishments include elegant solutions to long-standing problems such as image and video deblurring, introducing new approaches. Representative publication: B. Chen, G. Polatkan, G. Sapiro, D. Blei, D. B. Dunson, and L. Carin, "Deep learning with hierarchical convolution factor analysis," IEEE.
General Staining and Segmentation Procedures for High Content Imaging and Analysis.
Chambers, Kevin M; Mandavilli, Bhaskar S; Dolman, Nick J; Janes, Michael S
2018-01-01
Automated quantitative fluorescence microscopy, also known as high content imaging (HCI), is a rapidly growing analytical approach in cell biology. Because automated image analysis relies heavily on robust demarcation of cells and subcellular regions, reliable methods for labeling cells is a critical component of the HCI workflow. Labeling of cells for image segmentation is typically performed with fluorescent probes that bind DNA for nuclear-based cell demarcation or with those which react with proteins for image analysis based on whole cell staining. These reagents, along with instrument and software settings, play an important role in the successful segmentation of cells in a population for automated and quantitative image analysis. In this chapter, we describe standard procedures for labeling and image segmentation in both live and fixed cell samples. The chapter will also provide troubleshooting guidelines for some of the common problems associated with these aspects of HCI.
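In its simplest form, the nuclear-demarcation step described above reduces to thresholding the DNA-stain channel and labelling connected components; a scipy sketch on a synthetic two-nucleus image (the image and threshold are illustrative):

```python
import numpy as np
from scipy import ndimage

# Synthetic DNA-channel image: dark background with two bright "nuclei".
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
for cy, cx in [(20, 20), (45, 45)]:
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 4.0 ** 2))

# Segment: threshold the nuclear stain, then label connected components.
mask = img > 0.5
labels, n_nuclei = ndimage.label(mask)

# Per-nucleus measurements follow directly from the label image.
areas = ndimage.sum(mask, labels, index=range(1, n_nuclei + 1))
```

Real HCI pipelines add the refinements the chapter covers (illumination correction, splitting touching nuclei, whole-cell stains for cytoplasmic demarcation), but per-object quantification always starts from a label image like this one.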
ERIC Educational Resources Information Center
Hoogland, Kees; Pepin, Birgit; de Koning, Jaap; Bakker, Arthur; Gravemeijer, Koeno
2018-01-01
This article reports on a "post hoc" study using a randomised controlled trial with 31,842 students in the Netherlands and an instrument consisting of 21 paired problems. The trial showed a variability in the differences of students' results in solving contextual mathematical problems with either a descriptive or a depictive…
Using Microsoft PowerPoint as an Astronomical Image Analysis Tool
NASA Astrophysics Data System (ADS)
Beck-Winchatz, Bernhard
2006-12-01
Engaging students in the analysis of authentic scientific data is an effective way to teach them about the scientific process and to develop their problem solving, teamwork and communication skills. In astronomy several image processing and analysis software tools have been developed for use in school environments. However, the practical implementation in the classroom is often difficult because the teachers may not have the comfort level with computers necessary to install and use these tools, they may not have adequate computer privileges and/or support, and they may not have the time to learn how to use specialized astronomy software. To address this problem, we have developed a set of activities in which students analyze astronomical images using basic tools provided in PowerPoint. These include measuring sizes, distances, and angles, and blinking images. In contrast to specialized software, PowerPoint is broadly available on school computers. Many teachers are already familiar with PowerPoint, and the skills developed while learning how to analyze astronomical images are highly transferable. We will discuss several practical examples of measurements, including the following:
- Variations in the distances to the sun and moon from their angular sizes
- Magnetic declination from images of shadows
- Diameter of the moon from lunar eclipse images
- Sizes of lunar craters
- Orbital radii of the Jovian moons and mass of Jupiter
- Supernova and comet searches
- Expansion rate of the universe from images of distant galaxies
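One of the measurements described above, the distance to the moon from its angular size, reduces to the small-angle relation distance ≈ diameter / angle (angle in radians). A minimal worked sketch; the 0.52 degree angular size, and the use of Python rather than PowerPoint, are illustrative assumptions:

```python
import math

# Estimating the Moon's distance from its angular size in an image.
moon_diameter_km = 3474        # known physical diameter of the Moon
angular_size_deg = 0.52        # assumed value measured from the image

# Small-angle approximation: distance = diameter / angle (radians)
distance_km = moon_diameter_km / math.radians(angular_size_deg)
print(f"Moon distance = {distance_km:,.0f} km")
```

The same arithmetic applies to the solar-distance and crater-size activities, with the appropriate known diameter substituted.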
Analysis of two dimensional signals via curvelet transform
NASA Astrophysics Data System (ADS)
Lech, W.; Wójcik, W.; Kotyra, A.; Popiel, P.; Duk, M.
2007-04-01
This paper describes an application of the curvelet transform to the analysis of interferometric images. Compared to the two-dimensional wavelet transform, the curvelet transform has higher time-frequency resolution. The article includes numerical experiments executed on a random interferometric image. In nonlinear approximation, the curvelet transform yields a matrix with fewer coefficients than the wavelet transform. Additionally, denoising simulations show that the curvelet transform can be a very good tool for removing noise from images.
Evan Brooks; Valerie Thomas; Wynne Randolph; John Coulston
2012-01-01
With the advent of free Landsat data stretching back decades, there has been a surge of interest in utilizing remotely sensed data in multitemporal analysis for estimation of biophysical parameters. Such analysis is confounded by cloud cover and other image-specific problems, which result in missing data at various aperiodic times of the year. While there is a wealth...
Progressive data transmission for anatomical landmark detection in a cloud.
Sofka, M; Ralovich, K; Zhang, J; Zhou, S K; Comaniciu, D
2012-01-01
In the concept of cloud-computing-based systems, various authorized users have secure access to patient records from a number of care delivery organizations from any location. This creates a growing need for remote visualization, advanced image processing, state-of-the-art image analysis, and computer aided diagnosis. This paper proposes a system of algorithms for automatic detection of anatomical landmarks in 3D volumes in the cloud computing environment. The system addresses the inherent problem of limited bandwidth between a (thin) client, data center, and data analysis server. The problem of limited bandwidth is solved by a hierarchical sequential detection algorithm that obtains data by progressively transmitting only image regions required for processing. The client sends a request to detect a set of landmarks for region visualization or further analysis. The algorithm running on the data analysis server obtains a coarse level image from the data center and generates landmark location candidates. The candidates are then used to obtain image neighborhood regions at a finer resolution level for further detection. This way, the landmark locations are hierarchically and sequentially detected and refined. Only image regions surrounding landmark location candidates need to be transmitted during detection. Furthermore, the image regions are lossy compressed with JPEG 2000. Together, these properties amount to at least a 30-fold bandwidth reduction while achieving similar accuracy when compared to an algorithm using the original data. The hierarchical sequential algorithm with progressive data transmission considerably reduces bandwidth requirements in cloud-based detection systems.
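The hierarchical sequential detection idea, detecting candidates on a coarse transfer and then requesting only small full-resolution neighbourhoods, can be sketched as a two-level peak refinement. This is an illustrative stand-in, not the authors' learned landmark detector; `coarse_to_fine_peak` and its parameters are hypothetical:

```python
import numpy as np

def coarse_to_fine_peak(full_res, factor=4, margin=8):
    """Locate an intensity peak coarse-to-fine: find a candidate on a
    subsampled image, then refine it inside a small full-resolution
    neighbourhood. Only the coarse image and that small region would
    need to be transmitted in a client/server setting."""
    h, w = full_res.shape
    coarse = full_res[::factor, ::factor]        # stand-in for a low-res transfer
    cy, cx = np.unravel_index(np.argmax(coarse), coarse.shape)
    # Map the coarse candidate back and request only a small region.
    y0, y1 = max(cy * factor - margin, 0), min(cy * factor + margin, h)
    x0, x1 = max(cx * factor - margin, 0), min(cx * factor + margin, w)
    patch = full_res[y0:y1, x0:x1]               # the only full-res data needed
    py, px = np.unravel_index(np.argmax(patch), patch.shape)
    return int(y0 + py), int(x0 + px)

# Smooth synthetic "landmark" centred at (37, 90).
yy, xx = np.mgrid[0:128, 0:128]
img = np.exp(-((yy - 37) ** 2 + (xx - 90) ** 2) / 18.0)
print(coarse_to_fine_peak(img))
```

In the paper's system the refinement instead runs a trained detector on each requested neighbourhood, but the bandwidth argument is the same: the fine level only ever sees small patches.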
Visual Attention for Solving Multiple-Choice Science Problem: An Eye-Tracking Analysis
ERIC Educational Resources Information Center
Tsai, Meng-Jung; Hou, Huei-Tse; Lai, Meng-Lung; Liu, Wan-Yi; Yang, Fang-Ying
2012-01-01
This study employed an eye-tracking technique to examine students' visual attention when solving a multiple-choice science problem. Six university students participated in a problem-solving task to predict occurrences of landslide hazards from four images representing four combinations of four factors. Participants' responses and visual attention…
Development of the Structured Problem Posing Skills and Using Metaphoric Perceptions
ERIC Educational Resources Information Center
Arikan, Elif Esra; Unal, Hasan
2014-01-01
The purpose of this study was to introduce a problem posing activity to third grade students who had never encountered one before. The study also explored students' metaphorical images of the problem posing process. Participants were from a public school in the Marmara Region in Turkey. Data were analyzed both qualitatively (content analysis for difficulty and…
NASA Astrophysics Data System (ADS)
Aida, S.; Matsuno, T.; Hasegawa, T.; Tsuji, K.
2017-07-01
Micro X-ray fluorescence (micro-XRF) analysis is performed repeatedly as a means of producing elemental maps. In some cases, however, the XRF images of trace elements that are obtained are not clear due to high background intensity. To solve this problem, we applied principal component analysis (PCA) to the XRF spectra, focusing on improving the quality of the XRF images. XRF images of the dried residue of a standard solution on a glass substrate were taken, and the XRF intensities for the dried residue were analyzed before and after PCA. Standard deviations of XRF intensities in the PCA-filtered images were improved, leading to clear contrast in the images. This improvement of the XRF images was effective in cases where the XRF intensity was weak.
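The PCA filtering step can be sketched as projecting each pixel's spectrum onto a few leading principal components and reconstructing, which discards the noise lying in the remaining spectral dimensions. A hedged sketch on a synthetic two-endmember cube; `pca_filter_cube` and the test data are illustrative, not from the paper:

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_filter_cube(cube, n_components=2):
    """Denoise a spectral image cube (h, w, channels) by keeping only
    the first few principal components of the per-pixel spectra."""
    h, w, c = cube.shape
    flat = cube.reshape(-1, c)
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(flat)
    return pca.inverse_transform(scores).reshape(h, w, c)

rng = np.random.default_rng(0)
# Synthetic cube: two Gaussian spectral endmembers mixed per pixel, plus noise.
bands = np.linspace(0, 1, 64)
s1 = np.exp(-((bands - 0.3) / 0.05) ** 2)
s2 = np.exp(-((bands - 0.7) / 0.05) ** 2)
ab = rng.random((32, 32, 2))
clean = ab[..., :1] * s1 + ab[..., 1:] * s2
noisy = clean + rng.normal(0, 0.2, clean.shape)
filtered = pca_filter_cube(noisy, n_components=2)
```

An elemental map at a given channel (e.g. `filtered[:, :, 19]`) then shows higher contrast than the corresponding slice of the noisy cube, mirroring the paper's PCA-filtered XRF images.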
Koprowski, Robert
2014-07-04
Dedicated, automatic algorithms for image analysis and processing are becoming more and more common in medical diagnosis. When creating dedicated algorithms, many factors must be taken into consideration. They are associated with selecting the appropriate algorithm parameters and taking into account the impact of data acquisition on the results obtained. An important feature of algorithms is the possibility of their use in other medical units by other operators. This problem, namely the operator's (acquisition) impact on the results obtained from image analysis and processing, has been shown on a few examples. The analysed images were obtained from a variety of medical devices such as thermal imaging, tomography devices and those working in visible light. The objects of imaging were cellular elements, the anterior segment and fundus of the eye, postural defects and others. In total, almost 200,000 images coming from 8 different medical units were analysed. All image analysis algorithms were implemented in C and Matlab. For various algorithms and methods of medical imaging, the impact of image acquisition on the results obtained is different. There are different levels of algorithm sensitivity to changes in the parameters, for example: (1) for microscope settings and the brightness assessment of cellular elements there is a difference of 8%; (2) for the thyroid ultrasound images there is a difference in marking the thyroid lobe area which results in a brightness assessment difference of 2%.
The method of image acquisition in image analysis and processing also affects: (3) the accuracy of determining the temperature in the characteristic areas on the patient's back for the thermal method - error of 31%; (4) the accuracy of finding characteristic points in photogrammetric images when evaluating postural defects - error of 11%; (5) the accuracy of performing ablative and non-ablative treatments in cosmetology - error of 18% for the nose, 10% for the cheeks, and 7% for the forehead. Similarly, when: (6) measuring the anterior eye chamber - there is an error of 20%; (7) measuring the tooth enamel thickness - error of 15%; (8) evaluating the mechanical properties of the cornea during pressure measurement - error of 47%. The paper presents vital, selected issues occurring when assessing the accuracy of designed automatic algorithms for image analysis and processing in bioengineering. The impact of image acquisition on the problems arising in image analysis has been shown on selected examples. It has also been indicated which elements of image analysis and processing require special attention in their design.
X-Ray Imaging Applied to Problems in Planetary Materials
NASA Technical Reports Server (NTRS)
Jurewicz, A. J. G.; Mih, D. T.; Jones, S. M.; Connolly, H.
2000-01-01
Real-time radiography (X-ray imaging) can be a useful tool for tasks such as (1) the non-destructive, preliminary examination of opaque samples and (2) optimizing how to section opaque samples for more traditional microscopy and chemical analysis.
NASA Astrophysics Data System (ADS)
Zhou, Jiangying; Lopresti, Daniel P.; Tasdizen, Tolga
1998-04-01
In this paper, we consider the problem of locating and extracting text from WWW images. A previous algorithm based on color clustering and connected components analysis works well as long as the color of each character is relatively uniform and the typography is fairly simple. It breaks down quickly, however, when these assumptions are violated. In this paper, we describe more robust techniques for dealing with this challenging problem. We present an improved color clustering algorithm that measures similarity based on both RGB values and spatial proximity. Layout analysis is also incorporated to handle more complex typography. These changes significantly enhance the performance of our text detection procedure.
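The idea of measuring similarity on both RGB and spatial proximity can be sketched by clustering an augmented feature vector that appends weighted pixel coordinates to the colour channels. A minimal k-means illustration; the coordinate normalisation and `spatial_weight` parameter are assumptions, not the authors' exact formulation:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_colors_spatial(img, k=3, spatial_weight=0.5):
    """Cluster pixels on colour *and* position, so characters with slight
    colour variation still group with their nearby pixels.

    img: float array (h, w, 3); returns an (h, w) label map."""
    h, w, _ = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.column_stack([
        img.reshape(-1, 3).astype(float),
        spatial_weight * yy.ravel() / h,   # normalised, weighted row coordinate
        spatial_weight * xx.ravel() / w,   # normalised, weighted column coordinate
    ])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    return labels.reshape(h, w)
```

With `spatial_weight = 0` this degenerates to plain colour clustering; larger weights increasingly favour spatially compact clusters, which is the behaviour the improved algorithm exploits.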
Kernel-aligned multi-view canonical correlation analysis for image recognition
NASA Astrophysics Data System (ADS)
Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao
2016-09-01
Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve this problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have strong discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets demonstrate the effectiveness of our proposed method.
Diagnosis of cutaneous thermal burn injuries by multispectral imaging analysis
NASA Technical Reports Server (NTRS)
Anselmo, V. J.; Zawacki, B. E.
1978-01-01
Special photographic or television image analysis is shown to be a potentially useful technique to assist the physician in the early diagnosis of thermal burn injury. A background on the medical and physiological problems of burns is presented. The proposed methodology for burns diagnosis from both the theoretical and clinical points of view is discussed. The television/computer system constructed to accomplish this analysis is described, and the clinical results are discussed.
Histopathological Image Analysis: A Review
Gurcan, Metin N.; Boucheron, Laura; Can, Ali; Madabhushi, Anant; Rajpoot, Nasir; Yener, Bulent
2010-01-01
Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole-slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe. PMID:20671804
Learning representative features for facial images based on a modified principal component analysis
NASA Astrophysics Data System (ADS)
Averkin, Anton; Potapov, Alexey
2013-05-01
The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. Input data for the algorithm are learning sets of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness estimates is 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
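The overall pipeline the abstract describes, learning a feature space from rated images and predicting ratings for held-out ones, can be sketched as PCA features followed by regression, with the Pearson correlation computed on unseen images. The synthetic data and the plain PCA are stand-ins for the rated faces and the paper's modified PCA:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for rated face images: one latent factor drives
# both the image appearance and the attractiveness rating.
rng = np.random.default_rng(1)
z = rng.normal(size=300)
direction = rng.normal(size=50)
images = np.outer(z, direction) + rng.normal(0, 0.1, (300, 50))
ratings = 2 * z + rng.normal(0, 0.2, 300)

# Train on 200 images, evaluate the Pearson correlation on the rest.
model = make_pipeline(PCA(n_components=5), LinearRegression())
model.fit(images[:200], ratings[:200])
pred = model.predict(images[200:])
r = np.corrcoef(pred, ratings[200:])[0, 1]
print(f"held-out Pearson correlation: {r:.2f}")
```

The paper's 0.89 correlation plays the role of `r` here, computed against one rater's personal scores rather than synthetic ratings.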
Imaging techniques in digital forensic investigation: a study using neural networks
NASA Astrophysics Data System (ADS)
Williams, Godfried
2006-09-01
Imaging techniques have been applied to a number of applications, such as translation and classification problems in medicine and defence. This paper examines the application of imaging techniques in digital forensic investigation using neural networks. A review of applications of digital image processing is presented, while a pedagogical analysis of computer forensics is also highlighted. A data set describing selected images in different forms is used in the simulation and experimentation.
Twelve tips for creating trigger images for problem-based learning cases.
Azer, Samy A
2007-03-01
A trigger is the starting point of problem-based learning (PBL) cases. It is usually in the form of 5-6 text lines that provide the key information about the main character (usually the patient), including 3-4 of the patient's presenting problems. In addition to the trigger text, most programs using PBL include a visual trigger. This might be in the form of a single image, a series of images, a video clip, a cartoon, or even one of the patient's investigation results (e.g. chest X-ray, pathology report, or urine sample analysis). The main educational objectives of the trigger image are as follows: (1) to introduce the patient to the students; (2) to enhance students' observation skills; (3) to provide them with new information to add to the cues obtained from the trigger text; and (4) to stimulate students to ask questions as they develop their enquiry plan. When planned and delivered effectively, trigger images should be engaging and stimulate group discussion. Understanding the educational objectives of using trigger images and choosing appropriate images are the keys for constructing successful PBL cases. These twelve tips highlight the key steps in the successful creation of trigger images.
NASA Astrophysics Data System (ADS)
Nikitaev, V. G.; Pronichev, A. N.; Polyakov, E. V.; Zaharenko, Yu V.
2018-01-01
The paper considers the problem of leukocyte segmentation in microscopic images of bone marrow smears for automated diagnosis of diseases of the blood system. A method is proposed to solve the problem of segmenting touching leukocytes in images of bone marrow smears. The method is based on analysis of the structure of objects with a separation and distances filter, in combination with the watershed method and the distance transform.
Using spectral imaging for the analysis of abnormalities for colorectal cancer: When is it helpful?
Awan, Ruqayya; Al-Maadeed, Somaya; Al-Saady, Rafif
2018-01-01
The spectral imaging technique has been shown to provide more discriminative information than RGB images and has been proposed for a range of problems. There are many studies demonstrating its potential for the analysis of histopathology images for abnormality detection, but there have been discrepancies among previous studies as well. Many multispectral-based methods have been proposed for histopathology images, but the significance of using the whole multispectral cube versus a subset of bands or a single band is still arguable. We performed a comprehensive analysis using individual bands and different subsets of bands to determine the effectiveness of spectral information for determining anomalies in colorectal images. Our multispectral colorectal dataset consists of four classes, each represented by infra-red spectrum bands in addition to the visual spectrum bands. We performed our analysis of spectral imaging by stratifying the abnormalities using both spatial and spectral information. For our experiments, we used a combination of texture descriptors with an ensemble classification approach that performed best on our dataset. We applied our method to another dataset and obtained results comparable with those of the state-of-the-art method and a convolutional neural network based method. Moreover, we explored the relationship between the number of bands and problem complexity and found that a higher number of bands is required for a complex task to achieve improved performance. Our results demonstrate a synergy between the infra-red and visual spectra, improving the classification accuracy by 6% on incorporating the infra-red representation. We also highlight the importance of how the dataset should be divided into training and testing sets for evaluating histopathology image-based approaches, which has not been considered in previous studies on multispectral histopathology images.
[Research Progress of Multi-Model Medical Image Fusion at Feature Level].
Zhang, Junjie; Zhou, Tao; Lu, Huiling; Wang, Huiqun
2016-04-01
Medical image fusion integrates the complementary advantages of functional images and anatomical images. This article discusses the research progress of multi-model medical image fusion at the feature level. We first describe the principle of medical image fusion at the feature level. We then analyze and summarize the applications of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis and other fusion methods in medical image fusion. Lastly, we indicate present problems and future research directions for multi-model medical image fusion.
Free lipid and computerized determination of adipocyte size.
Svensson, Henrik; Olausson, Daniel; Holmäng, Agneta; Jennische, Eva; Edén, Staffan; Lönn, Malin
2018-06-21
The size distribution of adipocytes in a suspension, after collagenase digestion of adipose tissue, can be determined by computerized image analysis. Free lipid, forming droplets in such suspensions, introduces a bias, since droplets present in the images may be identified as adipocytes. This problem is not always adjusted for, and some reports state that distinguishing droplets from cells is a considerable problem. In addition, if the droplets originate mainly from the rupture of large adipocytes, as often described, this will also bias the size analysis. We here confirm that our ordinary manual means of distinguishing droplets and adipocytes in the images ensure correct and rapid identification before exclusion of the droplets. Further, in our suspensions, prepared with focus on gentle handling of tissue and cells, we find no association between the amount of free lipid and mean adipocyte size or the proportion of large adipocytes.
NASA Astrophysics Data System (ADS)
Qin, Chen; Ren, Bin; Guo, Longfei; Dou, Wenhua
2014-11-01
Multi-projector three-dimensional display is a promising multi-view glasses-free three-dimensional (3D) display technology that can produce full-colour, high-definition 3D images on its screen. One key problem of multi-projector 3D display is how to acquire the source images for the projector array while avoiding the pseudoscopic problem. This paper first analyzes the display characteristics of multi-projector 3D displays and then proposes a projector content synthesis method using a tetrahedral transform. A 3D video format based on a stereo image pair and an associated disparity map is presented; it is well suited to any type of multi-projector 3D display and has the advantage of saving storage. Experimental results show that our method solves the pseudoscopic problem.
Radiomics: Extracting more information from medical images using advanced feature analysis
Lambin, Philippe; Rios-Velazquez, Emmanuel; Leijenaar, Ralph; Carvalho, Sara; van Stiphout, Ruud G.P.M.; Granton, Patrick; Zegers, Catharina M.L.; Gillies, Robert; Boellard, Ronald; Dekker, André; Aerts, Hugo J.W.L.
2015-01-01
Solid cancers are spatially and temporally heterogeneous. This limits the use of invasive biopsy-based molecular assays but gives huge potential to medical imaging, which has the ability to capture intra-tumoural heterogeneity in a non-invasive way. During the past decades, medical imaging innovations in hardware, imaging agents and standardised protocols have allowed the field to move towards quantitative imaging. This in turn requires the development of automated and reproducible analysis methodologies to extract more information from image-based features. Radiomics, the high-throughput extraction of large amounts of image features from radiographic images, addresses this problem and is one of the approaches that holds great promise but needs further validation in multi-centric settings and in the laboratory. PMID:22257792
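As a hedged illustration of the kind of features a radiomics pipeline extracts, a few standard first-order intensity statistics can be computed directly from an image region. The feature set and names below are a generic sketch, not the paper's feature definitions:

```python
import numpy as np

def basic_first_order_features(region):
    """First-order intensity statistics of the kind used as radiomic
    features: moments of the intensity values plus histogram entropy."""
    vals = np.asarray(region, dtype=float).ravel()
    hist, _ = np.histogram(vals, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before the log
    return {
        "mean": vals.mean(),
        "std": vals.std(),
        "skewness": ((vals - vals.mean()) ** 3).mean() / vals.std() ** 3,
        "entropy": -(p * np.log2(p)).sum(),
    }

rng = np.random.default_rng(0)
roi = rng.uniform(size=(64, 64))       # stand-in for a segmented tumour region
feats = basic_first_order_features(roi)
print(feats)
```

Real radiomics pipelines extract hundreds of such features (first-order, shape, texture, wavelet-based), which is what makes the validation the authors call for so important.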
Zhang, Lei; Zeng, Zhi; Ji, Qiang
2011-09-01
Chain graph (CG) is a hybrid probabilistic graphical model (PGM) capable of modeling heterogeneous relationships among random variables. So far, however, its application in image and video analysis has been very limited due to the lack of principled learning and inference methods for a CG of general topology. To overcome this limitation, we introduce methods to extend the conventional chain-like CG model to a CG model with more general topology, along with the associated methods for learning and inference in such a general CG model. Specifically, we propose techniques to systematically construct a generally structured CG, to parameterize this model, to derive its joint probability distribution, to perform joint parameter learning, and to perform probabilistic inference in this model. To demonstrate the utility of such an extended CG, we apply it to two challenging image and video analysis problems: human activity recognition and image segmentation. The experimental results show improved performance of the extended CG model over conventional directed or undirected PGMs. This study demonstrates the promise of the extended CG for effective modeling and inference of complex real-world problems.
Nuclear norm-based 2-DPCA for extracting features from images.
Zhang, Fanlong; Yang, Jian; Qian, Jianjun; Xu, Yong
2015-10-01
The 2-D principal component analysis (2-DPCA) is a widely used method for image feature extraction. However, it can be equivalently implemented via image-row-based principal component analysis. This paper presents a structured 2-D method called nuclear norm-based 2-DPCA (N-2-DPCA), which uses a nuclear norm-based reconstruction error criterion. The nuclear norm is a matrix norm, which can provide a structured 2-D characterization for the reconstruction error image. The reconstruction error criterion is minimized by converting the nuclear norm-based optimization problem into a series of F-norm-based optimization problems. In addition, N-2-DPCA is extended to a bilateral projection-based N-2-DPCA (N-B2-DPCA). The virtue of N-B2-DPCA over N-2-DPCA is that an image can be represented with fewer coefficients. N-2-DPCA and N-B2-DPCA are applied to face recognition and reconstruction and evaluated using the Extended Yale B, CMU PIE, FRGC, and AR databases. Experimental results demonstrate the effectiveness of the proposed methods.
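For orientation, the classical F-norm 2-DPCA that N-2-DPCA generalizes can be sketched in a few lines: it diagonalizes an image covariance matrix built directly from image matrices and projects each image onto the leading eigenvectors. The paper's nuclear-norm criterion is not reproduced here; function and variable names are illustrative:

```python
import numpy as np

def two_dpca(images, k):
    """Classical 2-DPCA. images: array (n, h, w).
    Returns the top-k projection directions, shape (w, k)."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Image covariance matrix G (w, w), averaged over samples and rows.
    G = np.einsum('nhw,nhv->wv', centered, centered) / len(images)
    vals, vecs = np.linalg.eigh(G)      # ascending eigenvalues
    return vecs[:, ::-1][:, :k]         # top-k eigenvectors

rng = np.random.default_rng(0)
imgs = rng.normal(size=(20, 16, 12))    # 20 synthetic 16x12 "images"
W = two_dpca(imgs, 4)
features = imgs @ W                     # (20, 16, 4) feature matrices
```

Each image is thus represented by an h-by-k feature matrix rather than a flattened vector; N-2-DPCA keeps this 2-D structure but swaps the F-norm reconstruction error for a nuclear-norm one.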
Survey of Neural Net Paradigms for Specification of Discrete Networks.
1988-01-31
special applications, such as 3-D imaging, scene segmentation, temporal imaging models, or phonological analysis of speech. The cost of problem... Bibliography: Berge, Claude, "Principles of Combinatorics", Academic Press, 1971; Fischer, Roland, "Deconstructing Reality
Proceedings of the Third Airborne Imaging Spectrometer Data Analysis Workshop
NASA Technical Reports Server (NTRS)
Vane, Gregg (Editor)
1987-01-01
Summaries of 17 papers presented at the workshop are published. After an overview of the imaging spectrometer program, time was spent discussing AIS calibration, performance, information extraction techniques, and the application of high spectral resolution imagery to problems of geology and botany.
Analysis of x-ray hand images for bone age assessment
NASA Astrophysics Data System (ADS)
Serrat, Joan; Vitria, Jordi M.; Villanueva, Juan J.
1990-09-01
In this paper we describe a model-based system for the assessment of skeletal maturity on hand radiographs by the TW2 method. The problem consists of classifying a set of bones appearing in an image into one of several stages described in an atlas. A first approach, consisting of independent pre-processing, segmentation and classification phases, is also presented. However, it is only well suited to well-contrasted, low-noise images without superimposed bones, where edge detection by zero crossings of second directional derivatives is able to extract all bone contours, perhaps with small gaps and few false edges on the background. Hence the use of all available knowledge about the problem domain is needed to build a reasonably general system. We have designed a rule-based system to narrow down the range of possible stages for each bone and guide the analysis process. It calls procedures written in conventional languages for matching stage models against the image and extracting the features needed in the classification process.
Borba, Flávia de Souza Lins; Jawhari, Tariq; Saldanha Honorato, Ricardo; de Juan, Anna
2017-03-27
This article describes a non-destructive analytical method developed to solve forensic document examination problems involving crossed lines and obliteration. Different strategies combining confocal Raman imaging and multivariate curve resolution-alternating least squares (MCR-ALS) are presented. Multilayer images were acquired at subsequent depth layers into the samples. It is the first time that MCR-ALS is applied to multilayer images for forensic purposes. In this context, this method provides a single set of pure spectral ink signatures and related distribution maps for all layers examined from the sole information in the raw measurement. Four cases were investigated, namely, two concerning crossed lines with different degrees of ink similarity and two related to obliteration, where previous or no knowledge about the identity of the obliterated ink was available. In the crossing line scenario, MCR-ALS analysis revealed the ink nature and the chronological order in which strokes were drawn. For obliteration cases, results making active use of information about the identity of the obliterated ink in the chemometric analysis were of similar quality as those where the identity of the obliterated ink was unknown. In all obliteration scenarios, the identity of inks and the obliterated text were satisfactorily recovered. The analytical methodology proposed is of general use for analytical forensic document examination problems, and considers different degrees of complexity and prior available information. Besides, the strategies of data analysis proposed can be applicable to any other kind of problem in which multilayer Raman images from multicomponent systems have to be interpreted.
NASA Astrophysics Data System (ADS)
Kazem Alavipanah, Seyed
There are some problems in soil salinity studies based upon remotely sensed data: (1) the spectral world is full of ambiguity, and therefore soil reflectance cannot be attributed to a single soil property such as salinity; (2) soil surface conditions, as a function of time and space, are a complex phenomenon; (3) vegetation, with its dynamic biological nature, may create some problems in the study of soil salinity. Given these problems, the first question which may arise is how to overcome or minimise them. In this study we hypothesised that different sources of data, a well-established sampling plan and an optimum approach could be useful. In order to choose representative training sites in the Iranian playa margins, to define the spectral and informational classes, and to overcome some problems encountered in the variation within the field, the following attempts were made: (1) principal component analysis (PCA) in order (a) to determine the most important variables and (b) to understand the Landsat satellite images and the most informative components; (2) photomorphic unit (PMU) consideration and interpretation; (3) study of salt accumulation and salt distribution in the soil profile; (4) use of several forms of field data, such as geologic, geomorphologic and soil information; (5) confirmation of field data and land cover types with farmers and the members of the team. The results led us to suitable approaches with a high and acceptable image classification accuracy and image interpretation. KEY WORDS: Photomorphic Unit, Principal Component Analysis, Soil Salinity, Field Work, Remote Sensing
The Image of Today's Russian Armed Forces in the Eyes of Young People
ERIC Educational Resources Information Center
Novik, V. K.; Perednia, D. G.
2008-01-01
In the recent past there has been animated discussion of problems related to the image of the various social institutions and state organizations of Russia, including the Russian armed forces. Sociological analysis is a constructive way to shed light on the image of the military. The armed forces are linked closely to the main spheres of the life…
Face sketch recognition based on edge enhancement via deep learning
NASA Astrophysics Data System (ADS)
Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong
2017-11-01
In this paper, we address the face sketch recognition problem. Firstly, we utilize the eigenface algorithm to convert a sketch image into a synthesized face image. Subsequently, considering the low-level vision problem in the synthesized face sketch image, a super-resolution reconstruction algorithm based on a CNN (convolutional neural network) is employed to improve the visual effect. To be specific, we use a lightweight super-resolution structure to learn a residual mapping instead of directly mapping the feature maps from the low-level space to high-level patch representations, which makes the network easier to optimize and lowers its computational complexity. Finally, we adopt the LDA (linear discriminant analysis) algorithm to perform face sketch recognition on the synthesized face images before and after super resolution, respectively. Extensive experiments on the face sketch database (CUFS) from CUHK demonstrate that the recognition rate of the SVM (support vector machine) algorithm improves from 65% to 69% and the recognition rate of the LDA algorithm improves from 69% to 75%. What is more, the synthesized face image after super resolution not only better describes image details such as hair, nose and mouth, but also improves the recognition accuracy effectively.
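The LDA recognition step admits a compact sketch. The two-class Fisher discriminant below is a generic stand-in for that stage, not the paper's pipeline (which operates on eigenface-synthesized, super-resolved images):

```python
import numpy as np

def fisher_lda_fit(X0, X1):
    """Two-class Fisher linear discriminant: project onto Sw^-1 (m1 - m0)
    and threshold at the midpoint of the projected class means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    threshold = w @ (m0 + m1) / 2.0
    return w, threshold

def fisher_lda_predict(X, w, threshold):
    """Label 1 for samples projecting above the midpoint, else 0."""
    return (X @ w > threshold).astype(int)
```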
SLAR image interpretation keys for geographic analysis
NASA Technical Reports Server (NTRS)
Coiner, J. C.
1972-01-01
A means for side-looking airborne radar (SLAR) imagery to become a more widely used data source in geoscience and agriculture is suggested by providing interpretation keys as an easily implemented interpretation model. Interpretation problems faced by the researcher wishing to employ SLAR are specifically described, and the use of various types of image interpretation keys to overcome these problems is suggested. With examples drawn from agriculture and vegetation mapping, direct and associate dichotomous image interpretation keys are discussed and methods of constructing keys are outlined. Initial testing of the keys, key-based automated decision rules, and the role of the keys in an information system for agriculture are developed.
On estimation of secret message length in LSB steganography in spatial domain
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav
2004-06-01
In this paper, we present a new method for estimating the secret message length of bit-streams embedded using the Least Significant Bit embedding (LSB) at random pixel positions. We introduce the concept of a weighted stego image and then formulate the problem of determining the unknown message length as a simple optimization problem. The methodology is further refined to obtain more stable and accurate results for a wide spectrum of natural images. One of the advantages of the new method is its modular structure and a clean mathematical derivation that enables elegant estimator accuracy analysis using statistical image models.
Parallel algorithm of real-time infrared image restoration based on total variation theory
NASA Astrophysics Data System (ADS)
Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei
2015-10-01
Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods remove the noise but penalize too heavily the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional. It converts the restoration process into an optimization problem over a functional involving a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model makes full use of the acquired remote sensing data and preserves information at edges caused by clouds. The numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm can easily be parallelized. Therefore a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for infrared real-time remote sensing systems. Massive computation of image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. A quantitative analysis measuring the restored image quality against the input image is presented. Experimental results show that the TV-L1 filter can restore the varying background image reasonably, and that its performance meets the requirements of real-time image processing.
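A serial (non-parallel) sketch of the TV-L1 energy being minimized, TV(u) + lam * |u - f|_1, via subgradient descent illustrates the model. The step size, iteration count, and smoothing constant `eps` are illustrative choices of this sketch; the paper's parallel solver is different:

```python
import numpy as np

def tv_l1_denoise(f, lam=1.0, step=0.1, n_iter=300, eps=1e-3):
    """Subgradient-descent sketch of TV-L1 restoration: minimise
    TV(u) + lam * |u - f|_1 with a smoothed TV term."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        # Forward differences and normalised gradient field
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        # Backward-difference divergence of the normalised gradient
        div = px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)
        grad = -div + lam * np.sign(u - f)   # subgradient of the L1 fidelity
        u -= step * grad
    return u
```

The L1 fidelity makes the model well suited to impulse-like outliers while the TV term keeps edges sharp.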
Imaging mass spectrometry statistical analysis.
Jones, Emrys A; Deininger, Sören-Oliver; Hogendoorn, Pancras C W; Deelder, André M; McDonnell, Liam A
2012-08-30
Imaging mass spectrometry is increasingly used to identify new candidate biomarkers. This clinical application of imaging mass spectrometry is highly multidisciplinary: expertise in mass spectrometry is necessary to acquire high quality data, histology is required to accurately label the origin of each pixel's mass spectrum, disease biology is necessary to understand the potential meaning of the imaging mass spectrometry results, and statistical expertise is needed to assess the confidence of any findings. Imaging mass spectrometry data analysis is further complicated because of the unique nature of the data (within the mass spectrometry field); several of the assumptions implicit in the analysis of LC-MS/profiling datasets are not applicable to imaging. The very large size of imaging datasets and the many data analysis routines that have been reported, combined with inadequate training and a shortage of accessible reviews, have exacerbated this problem. In this paper we provide an accessible review of the nature of imaging data and the different strategies by which the data may be analyzed. Particular attention is paid to the assumptions of the data analysis routines to ensure that the reader is apprised of their correct usage in imaging mass spectrometry research. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.
2016-11-01
Original codes and combinatorial-geometrical computational schemes are presented, which are developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study and associated with the new generalization of Catalan numbers are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage of solving the basic research problem and, on the other hand, an independent self-consistent problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep
2010-06-05
Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high-quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image analysis methodologies (preferably in an automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.
Slice-to-volume medical image registration: A survey.
Ferrante, Enzo; Paragios, Nikos
2017-07-01
During the last decades, the research community of medical imaging has witnessed continuous advances in image registration methods, which pushed the limits of the state-of-the-art and enabled the development of novel medical procedures. A particular type of image registration problem, known as slice-to-volume registration, played a fundamental role in areas like image guided surgeries and volumetric image reconstruction. However, to date, and despite the extensive literature available on this topic, no survey has been written to discuss this challenging problem. This paper introduces the first comprehensive survey of the literature about slice-to-volume registration, presenting a categorical study of the algorithms according to an ad-hoc taxonomy and analyzing advantages and disadvantages of every category. We draw some general conclusions from this analysis and present our perspectives on the future of the field. Copyright © 2017 Elsevier B.V. All rights reserved.
Objective determination of image end-members in spectral mixture analysis of AVIRIS data
NASA Technical Reports Server (NTRS)
Tompkins, Stefanie; Mustard, John F.; Pieters, Carle M.; Forsyth, Donald W.
1993-01-01
Spectral mixture analysis has been shown to be a powerful, multifaceted tool for analysis of multi- and hyper-spectral data. Applications of AVIRIS data have ranged from mapping soils and bedrock to ecosystem studies. During the first phase of the approach, a set of end-members are selected from an image cube (image end-members) that best account for its spectral variance within a constrained, linear least squares mixing model. These image end-members are usually selected using a priori knowledge and successive trial and error solutions to refine the total number and physical location of the end-members. However, in many situations a more objective method of determining these essential components is desired. We approach the problem of image end-member determination objectively by using the inherent variance of the data. Unlike purely statistical methods such as factor analysis, this approach derives solutions that conform to a physically realistic model.
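The constrained linear mixing model at the heart of the approach can be sketched as a least-squares unmixing step. Handling the non-negativity and sum-to-one constraints by clipping and renormalising after an unconstrained solve, as below, is a simplification of a properly constrained solver:

```python
import numpy as np

def unmix(pixel_spectra, endmembers):
    """Linear spectral unmixing sketch: solve E @ a ~ p for each pixel
    spectrum p, then clip abundances to be non-negative and renormalise
    them to sum to one.

    pixel_spectra : (pixels, bands)
    endmembers    : (bands, n_endmembers) matrix E
    """
    A, *_ = np.linalg.lstsq(endmembers, pixel_spectra.T, rcond=None)
    A = np.clip(A.T, 0, None)
    return A / A.sum(axis=1, keepdims=True)
```

Given a candidate end-member set, the residual of this fit is exactly the "unexplained variance" used to judge how well the set accounts for the image cube.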
The algorithm of motion blur image restoration based on PSF half-blind estimation
NASA Astrophysics Data System (ADS)
Chen, Da-Ke; Lin, Zhe
2011-08-01
A novel algorithm of motion-blur image restoration based on half-blind PSF estimation with the Hough transform was introduced, building on a full analysis of the principle of the TDICCD camera and addressing the problem that using vertical uniform linear motion as the initial PSF value in the IBD algorithm leads to distorted restorations. Firstly, the mathematical model of image degradation was established using the prior information of multi-frame images, and the two parameters that have a crucial influence on PSF estimation (motion-blur length and angle) were set accordingly. Finally, the restored image was acquired through multiple iterations in the Fourier domain starting from the initial PSF estimate obtained by the above method. Experimental results show that the proposed algorithm not only effectively solves the image distortion problem caused by relative motion between the TDICCD camera and moving objects, but also clearly restores the detailed characteristics of the original image.
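The parametric motion-blur PSF built from the two estimated parameters, together with a classical Fourier-domain restoration step, can be sketched as follows. Wiener deconvolution stands in here for the paper's iterative refinement, and the circular boundary model is an assumption of the sketch:

```python
import numpy as np

def motion_psf(length, angle_deg, size):
    """Build a linear motion-blur PSF from blur length and angle."""
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2, length / 2, 4 * length):
        y = int(round(c + t * np.sin(theta)))
        x = int(round(c + t * np.cos(theta)))
        psf[y, x] = 1.0
    return psf / psf.sum()

def wiener_restore(blurred, psf, k=1e-3):
    """Classical Wiener deconvolution in the Fourier domain
    (circular-convolution boundary model)."""
    pad = np.zeros_like(blurred, dtype=float)
    py, px = psf.shape
    pad[:py, :px] = psf
    # Centre the PSF on the origin so the transfer function has zero phase
    pad = np.roll(pad, (-(py // 2), -(px // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))
```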
Adaptive image inversion of contrast 3D echocardiography for enabling automated analysis.
Shaheen, Anjuman; Rajpoot, Kashif
2015-08-01
Contrast 3D echocardiography (C3DE) is commonly used to enhance the visual quality of ultrasound images in comparison with non-contrast 3D echocardiography (3DE). Although the image quality in C3DE is perceived to be improved for visual analysis, it actually deteriorates for the purpose of automatic or semi-automatic analysis due to higher speckle noise and intensity inhomogeneity. Therefore, LV endocardial feature extraction and segmentation from C3DE images remain a challenging problem. To address this challenge, this work proposes an adaptive pre-processing method to invert the appearance of a C3DE image. The image inversion is based on an image intensity threshold that is automatically estimated through image histogram analysis. In the inverted appearance, the LV cavity appears dark while the myocardium appears bright, making it similar in appearance to a 3DE image. Moreover, the resulting inverted image has a high-contrast, low-noise appearance, yielding a strong LV endocardium boundary and facilitating feature extraction for segmentation. Our results demonstrate that the inverted appearance of the contrast image enables subsequent LV segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
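The threshold-then-invert idea can be sketched with Otsu's histogram threshold standing in for the paper's histogram-based estimate, and a reflection of intensities about the threshold as one plausible reading of the inversion step:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's threshold: maximise between-class variance over the histogram."""
    hist, edges = np.histogram(img, bins=bins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    total = hist.sum()
    mean_all = (hist * centers).sum() / total
    best_t, best_var = edges[0], -1.0
    w0 = cum0 = 0.0
    for i in range(bins - 1):
        w0 += hist[i]
        cum0 += hist[i] * centers[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum0 / w0
        m1 = (mean_all * total - cum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def invert_contrast(img):
    """Reflect intensities about the estimated threshold so that the bright
    cavity becomes dark and the myocardium becomes bright."""
    t = otsu_threshold(img)
    return np.clip(2 * t - img, img.min(), img.max()), t
```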
Use of film digitizers to assist radiology image management
NASA Astrophysics Data System (ADS)
Honeyman-Buck, Janice C.; Frost, Meryll M.; Staab, Edward V.
1996-05-01
The purpose of this development effort was to evaluate the possibility of using digital technologies to solve image management problems in the Department of Radiology at the University of Florida. The three problem areas investigated were local interpretation of images produced in remote locations, distribution of images to areas outside of radiology, and film handling. In all cases the use of a laser film digitizer interfaced to an existing Picture Archiving and Communication System (PACS) was investigated as a solution to the problem. In each case the volume of studies involved was evaluated to estimate the impact of the solution on the network, archive, and workstations. Communications needs were stressed in the analysis of all image transmission. The operational aspects of the solution were examined to determine the needs for training, service, and maintenance. The remote sites requiring local interpretation were a rural hospital needing coverage for after-hours studies, the University of Florida student infirmary, and the emergency room. Distribution of images to the intensive care units was studied to improve image access and patient care. Handling of films originating from remote sites and those requiring urgent reporting was evaluated to improve management functions. The results of our analysis and the decisions that were made based on the analysis are described below. In the cases where systems were installed, a description of the system and its integration into the PACS system is included. For all three problem areas, although we could move images via a digitizer to the archive and a workstation, there was no way to inform the radiologist that a study needed attention. In the case of outside films, the patient did not always have a medical record number that matched one in our Radiology Information System (RIS). In order to incorporate all studies for a patient, we needed common locations for orders, reports, and images.
RIS orders were generated for each outside study to be interpreted, and a medical record number was assigned if none existed. All digitized outside films were archived in the PACS archive for later review or comparison use. The request generated by the RIS for a diagnostic interpretation was placed at the PACS workstation to alert the radiologists that unread images had arrived, and a box was added to the workstation user interface that could be checked by the radiologist to indicate that a report had been dictated. The digitizer system solved several problems: unavailable films in the emergency room, teleradiology, and archiving of outside studies that had been read by University of Florida radiologists. In addition to saving time for outside film management, we now store the studies for comparison purposes, no longer lose emergency room films, generate diagnostic reports on emergency room films in a timely manner (important for billing and reimbursement), and can handle the distributed nature of our business. As changes in health care drive management changes, existing tools can be used in new ways to help make the transition easier. In this case, adding digitizers to an existing PACS network helped solve several image management problems.
NASA Astrophysics Data System (ADS)
Pelikan, Erich; Vogelsang, Frank; Tolxdorff, Thomas
1996-04-01
The texture-based segmentation of x-ray images of focal bone lesions using topological maps is introduced. Texture characteristics are described by image-point correlation of feature images to feature vectors. For the segmentation, the topological map is labeled using an improved labeling strategy. Results of the technique are demonstrated on original and synthetic x-ray images and quantified with the aid of quality measures. In addition, a classifier-specific contribution analysis is applied for assessing the feature space.
NASA Astrophysics Data System (ADS)
Mues, Sarah; Lilge, Inga; Schönherr, Holger; Kemper, Björn; Schnekenburger, Jürgen
2017-02-01
The major problem of digital holographic microscopy (DHM) long-term live cell imaging is that over time most of the tracked cells move out of the image area and others move in. Therefore, most of the cells are lost for the evaluation of individual cellular processes. Here, we present an effective solution for this crucial problem of long-term microscopic live cell analysis. We have generated functionalized slides containing areas of 250 μm × 200 μm. These micropatterned biointerfaces consist of passivating polyacrylamide (PAAm) brushes. Inner areas are backfilled with octadecanethiol (ODT), which allows cell attachment. The fouling properties of these surfaces are highly controllable, and the defined areas, designed to match the size of our microscopic image area, were effective in keeping all cells inside the rectangles over the selected imaging period.
NASA Astrophysics Data System (ADS)
Pu, Huangsheng; Zhang, Guanglei; He, Wei; Liu, Fei; Guang, Huizhi; Zhang, Yue; Bai, Jing; Luo, Jianwen
2014-09-01
It is a challenging problem to resolve and identify drug (or non-specific fluorophore) distribution throughout the whole body of small animals in vivo. In this article, an algorithm of unmixing multispectral fluorescence tomography (MFT) images based on independent component analysis (ICA) is proposed to solve this problem. ICA is used to unmix the data matrix assembled by the reconstruction results from MFT. Then the independent components (ICs) that represent spatial structures and the corresponding spectrum courses (SCs) which are associated with spectral variations can be obtained. By combining the ICs with SCs, the recovered MFT images can be generated and fluorophore concentration can be calculated. Simulation studies, phantom experiments and animal experiments with different concentration contrasts and spectrum combinations are performed to test the performance of the proposed algorithm. Results demonstrate that the proposed algorithm can not only provide the spatial information of fluorophores, but also recover the actual reconstruction of MFT images.
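The ICA unmixing step can be sketched with a generic symmetric FastICA (tanh nonlinearity). The paper applies ICA to the matrix assembled from MFT reconstructions; the demo below separates synthetic 1-D signals, which is only an illustration of the same unmixing operation:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Symmetric FastICA sketch with a tanh nonlinearity.

    X : (n_mixtures, n_samples) observed mixed signals
    Returns the estimated independent components, one per row.
    """
    n, m = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)
    # Whitening via the eigendecomposition of the covariance matrix
    d, E = np.linalg.eigh(Xc @ Xc.T / m)
    Z = E @ np.diag(d ** -0.5) @ E.T @ Xc
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, n))
    for _ in range(n_iter):
        WX = W @ Z
        G = np.tanh(WX)
        # Fixed-point update: E[z g(w'z)] - E[g'(w'z)] w for every row
        W_new = G @ Z.T / m - np.diag((1 - G**2).mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W')^(-1/2) W
        u, s, vt = np.linalg.svd(W_new)
        W = u @ vt
    return W @ Z
```

The recovered components are defined only up to permutation, sign and scale, which is why a spectral course is needed to re-identify each fluorophore.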
NASA Technical Reports Server (NTRS)
Masuoka, E.; Rose, J.; Quattromani, M.
1981-01-01
Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.
Xu, Dong; Yan, Shuicheng; Tao, Dacheng; Lin, Stephen; Zhang, Hong-Jiang
2007-11-01
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for human gait recognition and content-based image retrieval (CBIR). In this paper, we present extensions of our recently proposed marginal Fisher analysis (MFA) to address these problems. For human gait recognition, we first present a direct application of MFA, then inspired by recent advances in matrix and tensor-based dimensionality reduction algorithms, we present matrix-based MFA for directly handling 2-D input in the form of gray-level averaged images. For CBIR, we deal with the relevance feedback problem by extending MFA to marginal biased analysis, in which within-class compactness is characterized only by the distances between each positive sample and its neighboring positive samples. In addition, we present a new technique to acquire a direct optimal solution for MFA without resorting to objective function modification as done in many previous algorithms. We conduct comprehensive experiments on the USF HumanID gait database and the Corel image retrieval database. Experimental results demonstrate that MFA and its extensions outperform related algorithms in both applications.
NASA Astrophysics Data System (ADS)
Li, Xun; Li, Xu; Zhu, Shanan; He, Bin
2009-05-01
Magnetoacoustic tomography with magnetic induction (MAT-MI) is a recently proposed imaging modality to image the electrical impedance of biological tissue. It combines the good contrast of electrical impedance tomography with the high spatial resolution of sonography. In this paper, a three-dimensional MAT-MI forward problem was investigated using the finite element method (FEM). The corresponding FEM formulae describing the forward problem are introduced. In the finite element analysis, magnetic induction in an object with conductivity values close to biological tissues was first carried out. The stimulating magnetic field was simulated as that generated from a three-dimensional coil. The corresponding acoustic source and field were then simulated. Computer simulation studies were conducted using both concentric and eccentric spherical conductivity models with different geometric specifications. In addition, the grid size for finite element analysis was evaluated for the model calibration and evaluation of the corresponding acoustic field.
New Directions in the Digital Signal Processing of Image Data.
1987-05-01
Topics include object detection and identification, restoration of photon-noise-limited imagery, image restoration from incomplete information, restoration of blurred images in additive and multiplicative noise, and motion analysis with fast hierarchical algorithms at different resolutions. As is well known, the solution to the matched filter problem under additive white noise conditions is the correlation receiver.
NASA Astrophysics Data System (ADS)
Guerrout, EL-Hachemi; Ait-Aoudia, Samy; Michelucci, Dominique; Mahiou, Ramdane
2018-05-01
Many routine medical examinations produce images of patients suffering from various pathologies. With the huge number of medical images, manual analysis and interpretation has become a tedious task. Thus, automatic image segmentation has become essential for diagnosis assistance. Segmentation consists in dividing the image into homogeneous and significant regions. We focus on hidden Markov random fields, referred to as HMRF, to model the segmentation problem. This modelling leads to a classical function minimisation problem. The Broyden-Fletcher-Goldfarb-Shanno algorithm, referred to as BFGS, is one of the most powerful methods for solving unconstrained optimisation problems. In this paper, we investigate the combination of HMRF and the BFGS algorithm to perform the segmentation operation. The proposed method shows very good segmentation results compared with well-known approaches. The tests are conducted on brain magnetic resonance image databases (BrainWeb and IBSR) widely used to objectively compare the results obtained. The well-known Dice coefficient (DC) was used as the similarity metric. The experimental results show that, in many cases, our proposed method approaches the perfect segmentation with a Dice coefficient above 0.9. Moreover, it generally outperforms other methods in the tests conducted.
Design of FPGA ICA for hyperspectral imaging processing
NASA Astrophysics Data System (ADS)
Nordin, Anis; Hsu, Charles C.; Szu, Harold H.
2001-03-01
The remote sensing problem addressed by hyperspectral imaging can be transformed into a blind source separation problem. Using this model, hyperspectral imagery can be de-mixed into sub-pixel spectra which indicate the different materials present in each pixel. This can be further used to deduce areas that contain forest, water or biomass, without even knowing the sources which constitute the image. This form of remote sensing allows previously blurred images to show the specific terrain involved in the region. The blind source separation problem can be implemented using an independent component analysis (ICA) algorithm. The ICA algorithm has previously been implemented successfully using software packages such as MATLAB, which has a downloadable version of FastICA. The challenge now lies in implementing it in hardware or firmware in order to improve its computational speed. Hardware implementation also solves the insufficient-memory problem encountered by software packages like MATLAB when employing ICA for high-resolution images and a large number of channels. Here, a pipelined solution for the firmware, realized using FPGAs, is drawn out and simulated using C. Since C code can be translated into HDLs or used directly on the FPGAs, it can be used to simulate the actual hardware implementation. The simulated results of the program are presented here, where seven channels are used to model the 200 different channels involved in hyperspectral imaging.
NASA Astrophysics Data System (ADS)
Stranieri, Andrew; Yearwood, John; Pham, Binh
1999-07-01
The development of data warehouses for the storage and analysis of very large corpora of medical image data represents a significant trend in health care and research. Amongst other benefits, the trend toward warehousing enables the use of techniques for automatically discovering knowledge from large and distributed databases. In this paper, we present an application design for knowledge discovery from databases (KDD) techniques that enhance the performance of the problem solving strategy known as case- based reasoning (CBR) for the diagnosis of radiological images. The problem of diagnosing the abnormality of the cervical spine is used to illustrate the method. The design of a case-based medical image diagnostic support system has three essential characteristics. The first is a case representation that comprises textual descriptions of the image, visual features that are known to be useful for indexing images, and additional visual features to be discovered by data mining many existing images. The second characteristic of the approach presented here involves the development of a case base that comprises an optimal number and distribution of cases. The third characteristic involves the automatic discovery, using KDD techniques, of adaptation knowledge to enhance the performance of the case based reasoner. Together, the three characteristics of our approach can overcome real time efficiency obstacles that otherwise mitigate against the use of CBR to the domain of medical image analysis.
Clock Scan Protocol for Image Analysis: ImageJ Plugins.
Dobretsov, Maxim; Petkau, Georg; Hayar, Abdallah; Petkau, Eugen
2017-06-19
The clock scan protocol for image analysis is an efficient tool to quantify the average pixel intensity within, at the border of, and outside (background) a closed or segmented convex-shaped region of interest, leading to the generation of an averaged integral radial pixel-intensity profile. This protocol was originally developed in 2006 as a Visual Basic 6 script, but as such, it had limited distribution. To address this problem and to join similar recent efforts by others, we converted the original clock scan protocol code into two Java-based plugins compatible with NIH-sponsored and freely available image analysis programs like ImageJ or Fiji ImageJ. Furthermore, these plugins have several new functions, further expanding the range of capabilities of the original protocol, such as analysis of multiple regions of interest and image stacks. The latter feature of the program is especially useful in applications in which it is important to determine changes related to time and location. Thus, the clock scan analysis of stacks of biological images may potentially be applied to the spreading of Na+ or Ca++ within a single cell, as well as to the analysis of spreading activity (e.g., Ca++ waves) in populations of synaptically connected or gap-junction-coupled cells. Here, we describe these new clock scan plugins and show some examples of their applications in image analysis.
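The averaged integral radial pixel-intensity profile at the heart of the protocol can be sketched in a few lines of Python (the plugins themselves are Java and additionally handle borders, background and stacks). Rays are cast from the center like a clock hand and the intensity along each ray is resampled to a common length, then averaged over all angles:

```python
import numpy as np

def clock_scan(img, center, radius, n_angles=360, n_samples=100):
    """Minimal clock-scan sketch: average radial intensity profile of a
    convex region around `center`, out to `radius` pixels."""
    cy, cx = center
    radii = np.linspace(0, radius, n_samples)
    profile = np.zeros(n_samples)
    for a in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        ys = np.clip((cy + radii * np.sin(a)).astype(int), 0, img.shape[0] - 1)
        xs = np.clip((cx + radii * np.cos(a)).astype(int), 0, img.shape[1] - 1)
        profile += img[ys, xs]          # nearest-neighbour sampling along the ray
    return profile / n_angles
```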
Methods for scalar-on-function regression.
Reiss, Philip T; Goldsmith, Jeff; Shang, Han Lin; Ogden, R Todd
2017-08-01
Recent years have seen an explosion of activity in the field of functional data analysis (FDA), in which curves, spectra, images, etc. are considered as basic functional data units. A central problem in FDA is how to fit regression models with scalar responses and functional data points as predictors. We review some of the main approaches to this problem, categorizing the basic model types as linear, nonlinear and nonparametric. We discuss publicly available software packages, and illustrate some of the procedures by application to a functional magnetic resonance imaging dataset.
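The linear model type can be sketched by expanding the coefficient function in a fixed Fourier basis and solving a least-squares problem; real FDA software typically adds a roughness penalty, which this sketch omits:

```python
import numpy as np

def fit_functional_linear(X_curves, y, n_basis=5):
    """Scalar-on-function linear regression sketch:
    y_i ~ integral of X_i(t) * beta(t) dt, with beta expanded in a Fourier
    basis on [0, 1] and the coefficients fitted by least squares.

    X_curves : (n, m) curves sampled on a common grid of m points
    y        : (n,) scalar responses
    """
    n, m = X_curves.shape
    t = np.linspace(0, 1, m)
    # Fourier basis: constant, then sin/cos pairs of increasing frequency
    B = np.column_stack([np.ones(m)] +
                        [f(2 * np.pi * (k // 2 + 1) * t)
                         for k, f in zip(range(n_basis - 1),
                                         [np.sin, np.cos] * n_basis)])
    # Approximate the integral by the grid mean: y ~ (X @ B / m) @ c
    Z = X_curves @ B / m
    c, *_ = np.linalg.lstsq(Z, y, rcond=None)
    beta = B @ c
    return beta, lambda Xnew: Xnew @ beta / m
```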
Preprocessing film-copied MRI for studying morphological brain changes.
Pham, Tuan D; Eisenblätter, Uwe; Baune, Bernhard T; Berger, Klaus
2009-06-15
Magnetic resonance imaging (MRI) of the brain is an important source of data for studying memory and morbidity in the elderly, as these images can provide useful information through quantitative measures of various regions of interest in the brain. As part of an effort to fully automate biomedical analysis of the brain, which can then be combined with genetic data from the same human population, and where the records of the original MRI data are missing, this paper presents two effective methods for addressing this imaging problem. The first method handles the restoration of the film-copied MRI. The second method involves the segmentation of the image data. Experimental results and comparisons with other methods suggest the usefulness of the proposed image analysis methodology.
The research on medical image classification algorithm based on PLSA-BOW model.
Cao, C H; Cao, H L
2016-04-29
With the rapid development of modern medical imaging technology, medical image classification has become more important for medical diagnosis and treatment. To address the problems of polysemy and synonymy, this study combines the bag-of-words model with PLSA (Probabilistic Latent Semantic Analysis) and proposes the PLSA-BOW (Probabilistic Latent Semantic Analysis-Bag of Words) model. We carry the bag-of-words model over from the text domain to the image domain and build a visual bag-of-words model. The method allows the accuracy of bag-of-words-based classification to be further improved. The experimental results show that the PLSA-BOW model leads to more accurate medical image classification.
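The visual bag-of-words step underlying such a model clusters local descriptors into a vocabulary and represents each image as a word histogram. A minimal sketch with a tiny hand-rolled k-means (the PLSA topic layer, and the paper's exact procedure, are not shown; all names are illustrative):

```python
import numpy as np

def build_vocabulary(descriptors, k, n_iter=20, seed=0):
    """Cluster local descriptors into k visual words with basic k-means."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)].copy()
    for _ in range(n_iter):
        # assign each descriptor to its nearest center, then re-average
        d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Quantize one image's descriptors and return a normalized histogram."""
    d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

In a real pipeline the descriptors would come from SIFT-like local features rather than raw points.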
Medical image classification based on multi-scale non-negative sparse coding.
Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar
2017-11-01
With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. First, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from the different layers. Second, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain a discriminative sparse representation of the medical images. The multi-scale non-negative sparse coding features are then combined into a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is used to perform the classification. The experimental results demonstrate that the proposed algorithm can effectively exploit the multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
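The per-layer building block, non-negative sparse coding of a feature vector over a dictionary, can be sketched with projected ISTA; the Fisher-discriminative dictionary learning itself is not shown, and the objective here (least squares plus an l1 penalty with a non-negativity constraint) is a generic stand-in for the paper's model:

```python
import numpy as np

def nn_sparse_code(x, D, lam=0.1, n_iter=200):
    """Projected ISTA for min 0.5*||x - D a||^2 + lam*sum(a), a >= 0.
    D has atoms as columns; returns the non-negative sparse code a."""
    step = 1.0 / np.linalg.norm(D.T @ D, 2)   # 1/Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = np.maximum(a - step * grad - step * lam, 0.0)  # shrink + project
    return a
```

Concatenating the codes (or their pooled histograms) across scale layers would then give the multi-scale representation fed to the SVM.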
Lorenz, Kevin S.; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.
2013-01-01
Digital image analysis is a fundamental component of quantitative microscopy. However, intravital microscopy presents many challenges for digital image analysis. In general, microscopy volumes are inherently anisotropic, suffer from decreasing contrast with tissue depth, lack object edge detail, and characteristically have low signal levels. Intravital microscopy introduces the additional problem of motion artifacts, resulting from respiration and heartbeat in specimens imaged in vivo. This paper describes an image registration technique for use with sequences of intravital microscopy images collected in time-series or in 3D volumes. Our registration method involves both rigid and non-rigid components. The rigid registration component corrects global image translations, while the non-rigid component manipulates a uniform grid of control points defined by B-splines. Each control point is optimized by minimizing a cost function consisting of two parts: a term to define image similarity, and a term to ensure deformation grid smoothness. Experimental results indicate that this approach is promising based on the analysis of several image volumes collected from the kidney, lung, and salivary gland of living rodents. PMID:22092443
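The rigid component, estimating a global translation between frames, is commonly done by phase correlation; a minimal sketch (this is a standard technique, not necessarily the paper's rigid step, and the non-rigid B-spline stage is not shown):

```python
import numpy as np

def register_translation(fixed, moving):
    """Estimate the integer shift (dy, dx) such that
    moving == np.roll(fixed, (dy, dx), axis=(0, 1)), via phase correlation."""
    F = np.fft.fft2(fixed)
    G = np.fft.fft2(moving)
    R = np.conj(F) * G
    R /= np.abs(R) + 1e-12                    # keep phase only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    # unwrap shifts larger than half the image size
    if dy > fixed.shape[0] // 2:
        dy -= fixed.shape[0]
    if dx > fixed.shape[1] // 2:
        dx -= fixed.shape[1]
    return int(dy), int(dx)
```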
A Projection free method for Generalized Eigenvalue Problem with a nonsmooth Regularizer.
Hwang, Seong Jae; Collins, Maxwell D; Ravi, Sathya N; Ithapu, Vamsi K; Adluru, Nagesh; Johnson, Sterling C; Singh, Vikas
2015-12-01
Eigenvalue problems are ubiquitous in computer vision, covering a very broad spectrum of applications ranging from estimation problems in multi-view geometry to image segmentation. Few other linear algebra problems have a more mature set of numerical routines available, and many computer vision libraries leverage such tools extensively. However, the ability to call the underlying solver only as a "black box" can often become restrictive. Many 'human in the loop' settings in vision frequently exploit supervision from an expert, to the extent that the user can be considered a subroutine in the overall system. In other cases, there is additional domain knowledge, side or even partial information that one may want to incorporate within the formulation. In general, regularizing a (generalized) eigenvalue problem with such side information remains difficult. Motivated by these needs, this paper presents an optimization scheme to solve generalized eigenvalue problems (GEP) involving a (nonsmooth) regularizer. We start from an alternative formulation of GEP where the feasibility set of the model involves the Stiefel manifold. The core of this paper presents an end-to-end stochastic optimization scheme for the resultant problem. We show how this general algorithm enables improved statistical analysis of brain imaging data, where the regularizer is derived from other 'views' of the disease pathology, involving clinical measurements and other image-derived representations.
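For reference, the unregularized "black box" the paper starts from is the standard generalized eigenvalue problem A v = λ B v (A symmetric, B positive definite), for which mature routines exist; the paper's contribution is what to do when a nonsmooth regularizer makes this call unusable:

```python
import numpy as np
from scipy.linalg import eigh

def gep_topk(A, B, k):
    """Top-k generalized eigenpairs of A v = lambda B v via scipy's
    generalized symmetric solver (A symmetric, B positive definite)."""
    vals, vecs = eigh(A, B)                       # ascending eigenvalues
    return vals[-k:][::-1], vecs[:, -k:][:, ::-1]  # descending order
```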
Bringing the Digital Camera to the Physics Lab
ERIC Educational Resources Information Center
Rossi, M.; Gratton, L. M.; Oss, S.
2013-01-01
We discuss how compressed images created by modern digital cameras can lead to severe problems in the quantitative analysis of experiments based on such images. Difficulties result from the nonlinear treatment of lighting intensity values stored in compressed files. To overcome such troubles, one has to adopt noncompressed, native formats, as…
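The nonlinearity in question is essentially the display transfer curve applied before storage. When raw data are unavailable, a first-order correction is to invert the (approximate) sRGB curve; actual JPEG pipelines vary by camera, so this is an assumption-laden sketch, not a substitute for the native formats the paper recommends:

```python
import numpy as np

def srgb_to_linear(v):
    """Undo the approximate sRGB transfer curve so pixel values are
    roughly proportional to scene radiance. `v` is in [0, 1]."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)
```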
1985-01-01
NASA image processing technology, an advanced computer technique to enhance images sent to Earth in digital form by distant spacecraft, helped develop a new vision screening process. The Ocular Vision Screening system, an important step in preventing vision impairment, is a portable device designed especially to detect eye problems in children through the analysis of retinal reflexes.
Automated Analysis of Composition and Style of Photographs and Paintings
ERIC Educational Resources Information Center
Yao, Lei
2013-01-01
Computational aesthetics is a newly emerging cross-disciplinary field with its core situated in traditional research areas such as image processing and computer vision. Using a computer to interpret aesthetic terms for images is very challenging. In this dissertation, I focus on solving specific problems about analyzing the composition and style…
Efficient Workflows for Curation of Heterogeneous Data Supporting Modeling of U-Nb Alloy Aging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Logan Timothy; Hackenberg, Robert Errol
These are slides from a presentation summarizing a graduate research associate's summer project. The following topics are covered in these slides: data challenges in materials, aging in U-Nb Alloys, Building an Aging Model, Different Phase Trans. in U-Nb, the Challenge, Storing Materials Data, Example Data Source, Organizing Data: What is a Schema?, What does a "XML Schema" look like?, Our Data Schema: Nice and Simple, Storing Data: Materials Data Curation System (MDCS), Problem with MDCS: Slow Data Entry, Getting Literature into MDCS, Staging Data in Excel Document, Final Result: MDCS Records, Analyzing Image Data, Process for Making TTT Diagram, Bottleneck Number 1: Image Analysis, Fitting a TTP Boundary, Fitting a TTP Curve: Comparable Results, How Does it Compare to Our Data?, Image Analysis Workflow, Curating Hardness Records, Hardness Data: Two Key Decisions, Before Peak Age? - Automation, Interactive Viz, Which Transformation?, Microstructure-Informed Model, Tracking the Entire Process, General Problem with Property Models, Pinyon: Toolkit for Managing Model Creation, Tracking Individual Decisions, Jupyter: Docs and Code in One File, Hardness Analysis Workflow, Workflow for Aging Models, and conclusions.
Real-time image annotation by manifold-based biased Fisher discriminant analysis
NASA Astrophysics Data System (ADS)
Ji, Rongrong; Yao, Hongxun; Wang, Jicheng; Sun, Xiaoshuai; Liu, Xianming
2008-01-01
Automatic linguistic annotation is a promising solution to bridge the semantic gap in content-based image retrieval. However, two crucial issues are not well addressed in state-of-the-art annotation algorithms: 1. the Small Sample Size (3S) problem in keyword classifier/model learning; 2. most annotation algorithms cannot extend to real-time online usage due to their low computational efficiency. This paper presents a novel Manifold-based Biased Fisher Discriminant Analysis (MBFDA) algorithm to address these two issues by transductive semantic learning and keyword filtering. To address the 3S problem, co-training based manifold learning is adopted for keyword model construction. To achieve real-time annotation, a Biased Fisher Discriminant Analysis (BFDA) based semantic feature reduction algorithm is presented for keyword confidence discrimination and semantic feature reduction. Different from existing annotation methods, MBFDA views image annotation from a novel eigen semantic feature (corresponding to keywords) selection aspect. As demonstrated in experiments, our manifold-based biased Fisher discriminant analysis annotation algorithm outperforms classical and state-of-the-art annotation methods (1. K-NN expansion; 2. one-to-all SVM; 3. PWC-SVM) in both computational time and annotation accuracy by a large margin.
NASA Technical Reports Server (NTRS)
Heydorn, R. P.
1984-01-01
The Mathematical Pattern Recognition and Image Analysis (MPRIA) Project is concerned with basic research problems related to the study of the Earth from remotely sensed measurements of its surface characteristics. The program goal is to better understand how to analyze the digital image that represents the spatial, spectral, and temporal arrangement of these measurements for the purpose of making selected inferences about the Earth. This report summarizes the progress that has been made toward this program goal by each of the principal investigators in the MPRIA Program.
Optimal spatial filtering and transfer function for SAR ocean wave spectra
NASA Technical Reports Server (NTRS)
Goldfinger, A. D.; Beal, R. C.; Tilley, D. G.
1981-01-01
The Seasat Synthetic Aperture Radar (SAR) has proved to be an instrument of great utility in the sensing of ocean conditions on a global scale. An analysis of oceanographic and atmospheric aspects of Seasat data has shown that the features observed in the imagery are linked to ocean phenomena such as storm sources and their resulting swell systems. However, there remains one central problem which has not been satisfactorily solved to date: the accurate measurement of wind-generated ocean wave spectra. Investigations addressing this problem are currently being conducted. The problem has two parts: the accurate measurement of the image spectra, and the inference of actual surface wave spectra from these measurements. A description is presented of the progress made towards solving the first part of the problem, taking into account a digital rather than optical computation of the image transforms.
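The digital computation of an image spectrum mentioned above is, at its core, a two-dimensional power spectrum of an image patch; a generic sketch (not the Seasat processing chain, which adds calibration and the optimal filtering in the title):

```python
import numpy as np

def image_spectrum(img):
    """2-D power spectrum of an image patch, zero-frequency centered.
    The DC term is removed first so the spectrum shows wave energy."""
    img = img - img.mean()
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.abs(F) ** 2
```

For a monochromatic wave train, the spectrum peaks at the pair of wavenumbers corresponding to the wave's wavelength and direction.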
NASA Technical Reports Server (NTRS)
Hofman, L. B.; Erickson, W. K.; Donovan, W. E.
1984-01-01
The Image Display and Analysis System (MIDAS), developed at NASA/Ames for the analysis of Landsat MSS images, is described. The MIDAS computer power and memory, graphics, resource-sharing, expansion and upgrade, environment and maintenance, and software/user-interface requirements are outlined; the implementation hardware (including a 32-bit microprocessor, 512K error-correcting RAM, a 70- or 140-Mbyte formatted disk drive, a 512 x 512 x 24 color frame buffer, and a local-area-network transceiver) and applications software (ELAS, CIE, and P-EDITOR) are characterized; and implementation problems, performance data, and costs are examined. Planned improvements in MIDAS hardware, design goals, and areas of exploration for MIDAS software are discussed.
Chavez, P.S.; Kwarteng, A.Y.
1989-01-01
A challenge encountered with Landsat Thematic Mapper (TM) data, which include data from six reflective spectral bands, is displaying as much information as possible in a three-image set for color compositing or digital analysis. Principal component analysis (PCA) applied to the six TM bands simultaneously is often used to address this problem. However, two problems that can be encountered using the PCA method are that information of interest might be mathematically mapped to one of the unused components and that a color composite can be difficult to interpret. 'Selective' PCA can be used to minimize both of these problems. The spectral contrast among several spectral regions was mapped for a northern Arizona site using Landsat TM data. Field investigations determined that most of the spectral contrast seen in this area was due to one of the following: the amount of iron and hematite in the soils and rocks, vegetation differences, standing and running water, or the presence of gypsum, which has a higher moisture retention capability than the surrounding soils and rocks. -from Authors
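Selective PCA in its simplest form runs PCA on just two chosen bands: PC1 then carries the information common to both, and PC2 the contrast between them. A minimal NumPy sketch of that two-band case (band names and shapes are illustrative):

```python
import numpy as np

def selective_pca(band_a, band_b):
    """PCA on a selected pair of spectral bands. Returns (pc1, pc2)
    images: pc1 = shared information, pc2 = inter-band contrast."""
    X = np.stack([band_a.ravel(), band_b.ravel()], axis=1).astype(float)
    X -= X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(vals)[::-1]
    pcs = X @ vecs[:, order]                  # project onto PC1, PC2
    return pcs[:, 0].reshape(band_a.shape), pcs[:, 1].reshape(band_a.shape)
```

Three such PC2 images from three band pairs could then fill the three channels of a color composite.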
Comparison of existing digital image analysis systems for the analysis of Thematic Mapper data
NASA Technical Reports Server (NTRS)
Likens, W. C.; Wrigley, R. C.
1984-01-01
Most existing image analysis systems were designed with the Landsat Multi-Spectral Scanner in mind, leaving open the question of whether or not these systems could adequately process Thematic Mapper data. In this report, both hardware and software systems have been evaluated for compatibility with TM data. Lack of spectral analysis capability was not found to be a problem, though techniques for spatial filtering and texture varied. Computer processing speed and data storage of currently existing mini-computer based systems may be less than adequate. Upgrading to more powerful hardware may be required for many TM applications.
High content image analysis for human H4 neuroglioma cells exposed to CuO nanoparticles.
Li, Fuhai; Zhou, Xiaobo; Zhu, Jinmin; Ma, Jinwen; Huang, Xudong; Wong, Stephen T C
2007-10-09
High content screening (HCS)-based image analysis is becoming an important and widely used research tool. Capitalizing on this technology, ample cellular information can be extracted from high content cellular images. In this study, an automated, reliable and quantitative cellular image analysis system developed in house has been employed to quantify the toxic responses of human H4 neuroglioma cells exposed to metal oxide nanoparticles. This system has proved to be an essential tool in our study. The cellular images of H4 neuroglioma cells exposed to different concentrations of CuO nanoparticles were sampled using an IN Cell Analyzer 1000. A fully automated cellular image analysis system has been developed to perform the image analysis for cell viability. A multiple adaptive thresholding method was used to classify the pixels of the nuclei image into three classes: bright nuclei, dark nuclei, and background. During the development of our image analysis methodology, we achieved the following: (1) Gaussian filtering with a proper scale was applied to the cellular images to generate a single local intensity maximum inside each nucleus; (2) a novel local intensity maxima detection method based on the gradient vector field was established; and (3) a statistical model based splitting method was proposed to overcome the under-segmentation problem. Computational results indicate that 95.9% of nuclei can be detected and segmented correctly by the proposed image analysis system. The proposed automated image analysis system can effectively segment the images of human H4 neuroglioma cells exposed to CuO nanoparticles. The computational results confirmed our biological finding that human H4 neuroglioma cells had a dose-dependent toxic response to the insult of CuO nanoparticles.
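The first two steps, smoothing so each nucleus yields one intensity peak and then picking those peaks as seeds, can be sketched generically with SciPy; this uses a plain neighborhood-maximum test rather than the paper's gradient-vector-field detector, and all parameter values are illustrative:

```python
import numpy as np
from scipy import ndimage

def detect_nuclei(img, sigma=2.0, min_intensity=0.5):
    """Gaussian smoothing at a proper scale gives each nucleus a single
    local maximum; return those maxima as (row, col) seed points."""
    sm = ndimage.gaussian_filter(img.astype(float), sigma)
    # a pixel is a seed if it equals the max of its 5x5 neighborhood
    # and is bright enough to exclude background ripples
    maxima = (sm == ndimage.maximum_filter(sm, size=5)) & (sm > min_intensity)
    ys, xs = np.nonzero(maxima)
    return list(zip(ys.tolist(), xs.tolist()))
```

The seeds would then initialize a segmentation (e.g., watershed), with the statistical splitting step resolving touching nuclei.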
Image Quality Ranking Method for Microscopy
Koho, Sami; Fazeli, Elnaz; Eriksson, John E.; Hänninen, Pekka E.
2016-01-01
Automated analysis of microscope images is necessitated by the increased need for high-resolution follow-up of events in time. Manually finding the right images to analyze, or to eliminate from data analysis, is a common day-to-day problem in microscopy research today, and the constantly growing size of image datasets does not help matters. We propose a simple method and a software tool for sorting images within a dataset according to their relative quality. We demonstrate the applicability of our method in finding good-quality images in a STED microscope sample preparation optimization image dataset. The results are validated by comparisons to subjective opinion scores, as well as to five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images by extensive simulations and by comparing its performance against previously published, well-established microscopy autofocus metrics. PMID:27364703
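Quality-based sorting of this kind reduces to computing a scalar score per image and ranking by it. As an illustration, here is a common focus measure, the variance of the Laplacian, used as a stand-in scoring statistic (the paper's own ranking metric differs):

```python
import numpy as np

def sharpness_score(img):
    """Variance-of-Laplacian focus measure; out-of-focus images score low.
    Uses periodic boundary handling via np.roll for brevity."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return float(lap.var())

def rank_images(images):
    """Sort a dataset best-first by relative quality."""
    return sorted(images, key=sharpness_score, reverse=True)
```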
Iris recognition based on robust principal component analysis
NASA Astrophysics Data System (ADS)
Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong
2014-11-01
Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
Human Pose Estimation from Monocular Images: A Comprehensive Survey
Gong, Wenjuan; Zhang, Xuena; Gonzàlez, Jordi; Sobral, Andrews; Bouwmans, Thierry; Tu, Changhe; Zahzah, El-hadi
2016-01-01
Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but each focuses on a certain category, for example, model-based approaches or human motion analysis. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Modeling methods are categorized in two ways: top-down versus bottom-up, and generative versus discriminative. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and describes frequently used error measurement methods. PMID:27898003
Computer aided detection system for Osteoporosis using low dose thoracic 3D CT images
NASA Astrophysics Data System (ADS)
Tsuji, Daisuke; Matsuhiro, Mikio; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Harada, Masafumi; Kusumoto, Masahiko; Tsuchida, Takaaki; Eguchi, Kenji; Kaneko, Masahiro
2018-02-01
Osteoporosis affects about 13 million people in Japan and is one of the major health problems of an aging society. Early detection and treatment are necessary to prevent it. Multi-slice CT technology has improved three-dimensional (3D) image analysis, with higher resolution and shorter scan times. 3D image analysis of the thoracic vertebrae can support the diagnosis of osteoporosis, and the same scans can be used for lung cancer detection. We develop methods of shape analysis and of measuring CT values of spongy bone for the detection of osteoporosis. The thoracic vertebral evaluation of CT images achieves high extraction rates for both osteoporosis and lung cancer screening. In addition, we created standard patterns of CT value per thoracic vertebra for male age groups using 298 low-dose data sets.
Computer-aided diagnosis for osteoporosis using chest 3D CT images
NASA Astrophysics Data System (ADS)
Yoneda, K.; Matsuhiro, M.; Suzuki, H.; Kawata, Y.; Niki, N.; Nakano, Y.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.
2016-03-01
Osteoporosis affects about 13 million patients in Japan and is one of the problems of an aging society. In order to prevent osteoporosis, early detection and treatment are necessary. Multi-slice CT technology has improved three-dimensional (3-D) image analysis, with higher body-axis resolution and shorter scan times. 3-D image analysis using multi-slice CT images of the thoracic vertebrae can be used to support the diagnosis of osteoporosis and, at the same time, for lung cancer diagnosis, which may lead to early detection. We develop an automatic extraction and partitioning algorithm for the spinal column based on analysis of the vertebral body structure, and an analysis algorithm for the vertebral body using shape analysis and bone density measurement for the diagnosis of osteoporosis. The osteoporosis diagnosis support system obtained a high extraction rate for the thoracic vertebrae at both normal and low doses.
Colony image acquisition and segmentation
NASA Astrophysics Data System (ADS)
Wang, W. X.
2007-12-01
Colony and plaque counting has a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Many researchers and developers have worked on such systems, but investigation shows that existing systems have problems, chiefly in image acquisition and image segmentation. To acquire colony images of good quality, an illumination box was constructed: it provides both front lighting and back lighting, selectable by the user according to the properties of the colony dish. With the illumination box, lighting is uniform, and the dish can be placed in the same position every time, which simplifies image processing. The developed colony image segmentation algorithm consists of three sub-algorithms: (1) image classification; (2) image processing; and (3) colony delineation. Colony delineation mainly comprises procedures based on grey-level similarity, boundary tracing, shape information and colony exclusion. In addition, a number of algorithms were developed for colony analysis. The system has been tested with satisfactory results.
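A standard choice for the grey-level-similarity step of such a delineation pipeline is Otsu's threshold, which maximizes between-class variance of the grey-level histogram; a compact NumPy sketch (the paper's full pipeline adds boundary tracing and shape checks on top of this):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Grey-level threshold maximizing between-class variance (Otsu)."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                     # class-0 probability
    m = np.cumsum(p * centers)            # cumulative mean
    mT = m[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sb = (mT * w0 - m) ** 2 / (w0 * (1 - w0))  # between-class variance
    sb = np.nan_to_num(sb)
    return centers[np.argmax(sb)]
```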
NASA Astrophysics Data System (ADS)
Kumar, Arjun S.
Over the last fifteen years, there has been a rapid growth in the use of high resolution X-ray computed tomography (HRXCT) imaging in material science applications. We use it at nanoscale resolutions up to 50 nm (nano-CT) for key research problems in large scale operation of polymer electrolyte membrane fuel cells (PEMFC) and lithium-ion (Li-ion) batteries in automotive applications. PEMFC are clean energy sources that electrochemically react with hydrogen gas to produce water and electricity. To reduce their costs, capturing their electrode nanostructure has become significant in modeling and optimizing their performance. For Li-ion batteries, a key challenge in increasing their scope for the automotive industry is Li metal dendrite growth. Li dendrites are structures of lithium with 100 nm features of interest that can grow chaotically within a battery and eventually lead to a short-circuit. HRXCT imaging is an effective diagnostics tool for such applications as it is a non-destructive method of capturing the 3D internal X-ray absorption coefficient of materials from a large series of 2D X-ray projections. Despite a recent push to use HRXCT for quantitative information on material samples, there is a relative dearth of computational tools in nano-CT image processing and analysis. Hence, we focus on developing computational methods for nano-CT image analysis of fuel cell and battery materials as required by the limitations in material samples and the imaging environment. The first problem we address is the segmentation of nano-CT Zernike phase contrast images. Nano-CT instruments are equipped with Zernike phase contrast optics to distinguish materials with a low difference in X-ray absorption coefficient by phase shifting the X-ray wave that is not diffracted by the sample. However, it creates image artifacts that hinder the use of traditional image segmentation techniques. 
To restore such images, we set up an inverse problem by modeling the X-ray phase contrast optics. We solve for the artifact-free images through an optimization function that uses novel edge detection and fast image interpolation methods. We use this optics-based segmentation method in two main research problems: 1) the characterization of a failure mechanism in the internal structure of Li-ion battery electrodes and 2) the measurement of Li metal dendrite morphology for different current and temperature parameters of Li-ion battery cell operation. The second problem we address is the development of a space+time (4D) reconstruction method for in-operando imaging of samples undergoing temporal change, particularly for X-ray sources with low throughput and nanoscale spatial resolutions. The challenge in using such systems is achieving a sufficient temporal resolution despite exposure times on the order of 1 minute per 2D projection. We develop a 4D dynamic X-ray computed tomography (CT) reconstruction method capable of reconstructing a temporal 3D image every 2 to 8 projections. Its novel properties are its projection angle sequence and its probabilistic detection of experimental change. We demonstrate its accuracy on phantom and experimental datasets, showing its promise in temporally resolving Li metal dendrite growth and in elucidating mitigation strategies.
Mathematical Problems in Synthetic Aperture Radar
NASA Astrophysics Data System (ADS)
Klein, Jens
2010-10-01
This thesis is concerned with problems related to Synthetic Aperture Radar (SAR) and is structured as follows. The first chapter explains what SAR is and illuminates the physical and mathematical background. The following chapter points out a problem with a divergent integral in a common approach and proposes an improvement; numerical comparisons indicate that the improvements allow for superior image quality. Thereafter the problem of limited data is analyzed: in a realistic SAR measurement, the data gathered from the electromagnetic waves reflected from the surface can only be collected over a limited area, whereas the reconstruction formula requires data from an infinite distance. The chapter gives an analysis of the artifacts which can obscure the reconstructed images due to this problem, and numerical examples point to its severity. In chapter 4, the fact that data are available only from a limited area is used to propose a new inversion formula. This inversion formula has the potential to make it easier to suppress artifacts due to limited data and, depending on the application, can be refined into a fast reconstruction formula. In the penultimate chapter a solution to the problem of left-right ambiguity is presented. This problem has existed since the invention of SAR and is caused by the geometry of the measurements, which means that only symmetric images can be obtained. With the solution from this chapter it is possible to reconstruct not only the even part of the reflectivity function but also the odd part, thus making it possible to reconstruct asymmetric images. Numerical simulations demonstrate that this solution is not affected by the stability problems of other approaches. The final chapter develops some continuative ideas that could be pursued in the future.
Meng, Fan; Yang, Xiaomei; Zhou, Chenghu
2014-01-01
This paper studies the problem of the restoration of images corrupted by mixed Gaussian-impulse noise. In recent years, low-rank matrix reconstruction has become a research hotspot in many scientific and engineering domains such as machine learning, image processing, computer vision and bioinformatics. It mainly involves the problems of matrix completion and robust principal component analysis, namely recovering a low-rank matrix from an incomplete but accurate sampling subset of its entries, and from an observed data matrix with an unknown fraction of its entries arbitrarily corrupted, respectively. Inspired by these ideas, we consider the problem of recovering a low-rank matrix from an incomplete sampling subset of its entries with an unknown fraction of the samplings contaminated by arbitrary errors, which we define as the problem of matrix completion from corrupted samplings and model as a convex optimization problem that minimizes a combination of the nuclear norm and the l1-norm. Meanwhile, we put forward a novel and effective algorithm based on augmented Lagrange multipliers to solve the problem exactly. For mixed Gaussian-impulse noise removal, we treat it as a problem of matrix completion from corrupted samplings and restore the noisy image following an impulse-detecting procedure. Compared with some existing methods for mixed noise removal, the recovery quality of our method is superior when images possess low-rank features such as geometrically regular textures and similar structured contents; especially when the density of impulse noise is relatively high and the variance of Gaussian noise is small, our method can significantly outperform traditional methods, not only in the simultaneous removal of Gaussian and impulse noise and the restoration of a low-rank image matrix, but also in the preservation of textures and details in the image. PMID:25248103
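The nuclear-norm half of such a model, completing a low-rank matrix from observed entries, can be sketched with iterative singular-value soft-thresholding (the SoftImpute scheme); the paper's additional l1 term for impulse-corrupted entries and its ALM solver are not shown, and the parameter values are illustrative:

```python
import numpy as np

def soft_impute(M, mask, tau=0.5, n_iter=300):
    """Nuclear-norm matrix completion: M holds data, mask marks
    observed entries; returns a low-rank estimate of the full matrix."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        Z = np.where(mask, M, X)              # fill gaps with current guess
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
    return X
```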
NASA Astrophysics Data System (ADS)
Balbin, Jessie R.; Dela Cruz, Jennifer C.; Camba, Clarisse O.; Gozo, Angelo D.; Jimenez, Sheena Mariz B.; Tribiana, Aivje C.
2017-06-01
Acne vulgaris, commonly called acne, is a skin problem that occurs when oil and dead skin cells clog a person's pores. It develops because hormonal changes make the skin oilier. The problem is that people do not have a real assessment of their skin's sensitivity in terms of the fluid development on their faces that tends to produce acne vulgaris, and thus suffer more complications. This research aims to assess acne vulgaris using a luminescent visualization system through optical imaging and the integration of image processing algorithms. Specifically, this research aims to design a prototype for facial fluid analysis using a luminescent visualization system through optical imaging and the integration of a fluorescent imaging system, and to classify the different facial fluids present in each person. Throughout the process, some structures and layers of the face are excluded, leaving only a mapped facial structure with acne regions. Facial fluid regions are distinguished from the acne regions as they are characterized differently.
NASA Astrophysics Data System (ADS)
Pueyo, Laurent
2016-01-01
A new class of high-contrast image analysis algorithms that empirically fit and subtract systematic noise has led to recent discoveries of faint exoplanet/substellar companions and scattered-light images of circumstellar disks. The consensus emerging in the community is that these methods are extremely efficient at enhancing the detectability of faint astrophysical signals, but that they generally create systematic biases in the observed properties. This poster provides a solution to this outstanding problem. We present an analytical derivation of a linear expansion that captures the impact of astrophysical over/self-subtraction in current image analysis techniques. We examine the general case in which the astrophysical scene moves azimuthally and/or radially across the field of view of the reference images as a result of the observation strategy. Our new method is based on perturbing the covariance matrix underlying any least-squares speckle-fitting problem and propagating this perturbation through the data analysis algorithm. This work is presented in the framework of Karhunen-Loeve Image Processing (KLIP), but it can be easily generalized to methods relying on linear combinations of images (instead of eigen-modes). Based on this linear expansion, obtained in the most general case, we then demonstrate practical applications of this new algorithm. We first consider the spectral extraction of faint point sources in IFS data and illustrate, using public Gemini Planet Imager commissioning data, that our novel perturbation-based Forward Modeling (which we named KLIP-FM) can indeed alleviate algorithmic biases. We then apply KLIP-FM to the detection of point sources and show how it decreases the rate of false negatives while keeping the rate of false positives unchanged when compared to classical KLIP. This can potentially have important consequences for the design of follow-up strategies in ongoing direct imaging surveys.
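The eigen-mode subtraction at the heart of KLIP-style processing amounts to projecting the science frame onto the leading principal components of a reference stack and removing that projection. The minimal numpy sketch below (names and framing are ours; it omits annuli, frame selection, and the forward modeling the poster introduces) illustrates the step whose over/self-subtraction the linear expansion models.

```python
import numpy as np

def klip_subtract(science, refs, k):
    """Subtract the projection of a science frame onto the first k
    Karhunen-Loeve modes of a reference stack.
    science: (npix,) vector; refs: (nref, npix) matrix."""
    mean = refs.mean(axis=0)
    R = refs - mean                       # mean-subtracted references
    sci = science - mean
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    Z = Vt[:k]                            # first k KL modes (orthonormal rows)
    return sci - Z.T @ (Z @ sci)          # residual after speckle removal

rng = np.random.default_rng(1)
refs = rng.standard_normal((10, 64))      # 10 reference frames, 64 pixels each
science = refs[0] + 0.1 * rng.standard_normal(64)  # faint signal on speckles
residual = klip_subtract(science, refs, k=5)
```

Because the projection is onto modes built from references that partially contain the astrophysical signal, part of the signal is subtracted along with the speckles, which is exactly the bias KLIP-FM propagates analytically.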
COINSTAC: Decentralizing the future of brain imaging analysis
Ming, Jing; Verner, Eric; Sarwate, Anand; Kelly, Ross; Reed, Cory; Kahleck, Torran; Silva, Rogers; Panta, Sandeep; Turner, Jessica; Plis, Sergey; Calhoun, Vince
2017-01-01
In the era of Big Data, sharing neuroimaging data across multiple sites has become increasingly important. However, researchers who want to engage in centralized, large-scale data sharing and analysis must often contend with problems such as high database cost, long data transfer time, extensive manual effort, and privacy issues for sensitive data. To remove these barriers and enable easier data sharing and analysis, we introduced a new, decentralized, privacy-enabled infrastructure model for brain imaging data called COINSTAC in 2016. We have continued development of COINSTAC since this model was first introduced. One of the challenges with such a model is adapting the required algorithms to function within a decentralized framework. In this paper, we report on how we are solving this problem, along with our progress on several fronts, including the implementation of additional decentralized algorithms, user interface enhancements, decentralized calculation of regression statistics, and complete pipeline specifications. PMID:29123643
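One simple flavor of decentralized regression, in which each site fits a model locally and shares only its coefficients and sample count with a coordinator, can be sketched as below. This is an illustrative scheme with hypothetical function names, not necessarily the algorithm COINSTAC implements.

```python
import numpy as np

def local_fit(X, y):
    """Run at each site: only the coefficients and the sample count
    leave the site, never the raw data."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, len(y)

def aggregate(site_results):
    """Run at the coordinator: sample-size-weighted average of the
    per-site coefficient vectors."""
    total = sum(n for _, n in site_results)
    return sum(n * beta for beta, n in site_results) / total

# Three simulated sites drawing from the same linear model.
rng = np.random.default_rng(2)
true_beta = np.array([2.0, -1.0])
sites = []
for n in (50, 80, 120):
    X = rng.standard_normal((n, 2))
    y = X @ true_beta + 0.01 * rng.standard_normal(n)
    sites.append(local_fit(X, y))
pooled = aggregate(sites)
```

With homogeneous sites this one-shot averaging is already close to the pooled-data estimate; iterative schemes exchange gradients instead when a single round is not enough.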
Phase unwrapping using region-based markov random field model.
Dong, Ying; Ji, Jim
2010-01-01
Phase unwrapping is a classical problem in Magnetic Resonance Imaging (MRI), Interferometric Synthetic Aperture Radar and Sonar (InSAR/InSAS), fringe pattern analysis, and spectroscopy. Although many methods have been proposed to address this problem, robust and effective phase unwrapping remains a challenge. This paper presents a novel phase unwrapping method using a region-based Markov Random Field (MRF) model. Specifically, the phase image is segmented into regions within which the phase is not wrapped. Then, the phase image is unwrapped between different regions using an improved Highest Confidence First (HCF) algorithm to optimize the MRF model. The proposed method has desirable theoretical properties as well as an efficient implementation. Simulations and experimental results on MRI images show that the proposed method provides phase unwrapping similar or superior to the Phase Unwrapping MAx-flow/min-cut (PUMA) and ZπM methods.
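What "unwrapping" means is easiest to see in one dimension, where numpy's built-in unwrap resolves the 2π ambiguities; the paper's region-based MRF method generalizes this local rule to noisy 2-D images with discontinuities.

```python
import numpy as np

# A smooth true phase ramp, observed only modulo 2*pi.
true_phase = np.linspace(0.0, 6 * np.pi, 200)
wrapped = np.angle(np.exp(1j * true_phase))      # values in (-pi, pi]

# np.unwrap adds multiples of 2*pi wherever successive samples jump
# by more than pi, recovering the smooth ramp in this noise-free case.
unwrapped = np.unwrap(wrapped)
```

The local jump rule fails as soon as the true phase changes by more than π between samples or the data are noisy, which is why robust 2-D methods cast unwrapping as a global optimization (here, an MRF energy minimized by HCF).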
Performance characterization of image and video analysis systems at Siemens Corporate Research
NASA Astrophysics Data System (ADS)
Ramesh, Visvanathan; Jolly, Marie-Pierre; Greiffenhagen, Michael
2000-06-01
There has been a significant increase in commercial products using image analysis techniques to solve real-world problems in diverse fields such as manufacturing, medical imaging, document analysis, transportation, and public security. This has been accelerated by various factors: more advanced algorithms, the availability of cheaper sensors, and faster processors. While algorithms continue to improve in performance, a major stumbling block in translating improvements in algorithms into faster deployment of image analysis systems is the lack of characterization of the limits of algorithms and of how they affect total system performance. The research community has realized the need for performance analysis, and there have been significant efforts in the last few years to remedy the situation. Our efforts at SCR have been on statistical modeling and characterization of modules and systems. The emphasis is on both white-box and black-box methodologies to evaluate and optimize vision systems. In the first part of this paper we review the literature on performance characterization and then provide an overview of the status of research in performance characterization of image and video understanding systems. The second part of the paper is on performance evaluation of medical image segmentation algorithms. Finally, we highlight some research issues in performance analysis in medical imaging systems.
Design Criteria For Networked Image Analysis System
NASA Astrophysics Data System (ADS)
Reader, Cliff; Nitteberg, Alan
1982-01-01
Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance at low cost offered by advances in semiconductor technology. Another key issue is a maturing understanding of problems and of the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with orientation toward the hospital environment. The three main areas are image database management, viewing of image data, and image data processing. This is followed by a survey of the current state of the art, covering image display systems, database techniques, communications networks, and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.
Intrasubject multimodal groupwise registration with the conditional template entropy.
Polfliet, Mathias; Klein, Stefan; Huizinga, Wyke; Paulides, Margarethus M; Niessen, Wiro J; Vandemeulebroucke, Jef
2018-05-01
Image registration is an important task in medical image analysis. Whereas most methods are designed for the registration of two images (pairwise registration), there is an increasing interest in simultaneously aligning more than two images using groupwise registration. Multimodal registration in a groupwise setting remains difficult, due to the lack of generally applicable similarity metrics. In this work, a novel similarity metric for such groupwise registration problems is proposed. The metric calculates the sum of the conditional entropy between each image in the group and a representative template image constructed iteratively using principal component analysis. The proposed metric is validated in extensive experiments on synthetic and intrasubject clinical image data. These experiments showed equivalent or improved registration accuracy compared to other state-of-the-art (dis)similarity metrics and improved transformation consistency compared to pairwise mutual information. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
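The quantity summed in the proposed metric, the conditional entropy between each image and the template, can be estimated from a joint intensity histogram. The sketch below is a minimal version; the bin count and plug-in estimator are our assumptions, not the paper's implementation, and the iterative PCA template construction is omitted.

```python
import numpy as np

def conditional_entropy(img, template, bins=32):
    """Estimate H(img | template) from the joint intensity histogram,
    using the identity H(X|T) = H(X, T) - H(T)."""
    joint, _, _ = np.histogram2d(img.ravel(), template.ravel(), bins=bins)
    p = joint / joint.sum()
    p_t = p.sum(axis=0)                      # marginal of the template
    nz, nz_t = p > 0, p_t > 0
    h_joint = -np.sum(p[nz] * np.log(p[nz]))
    h_t = -np.sum(p_t[nz_t] * np.log(p_t[nz_t]))
    return h_joint - h_t

rng = np.random.default_rng(5)
img = rng.random((64, 64))
# A template identical to the image explains it completely, so H -> 0.
self_h = conditional_entropy(img, img)
```

Lower conditional entropy means the template predicts the image better, so a groupwise registration drives the transformations to minimize the sum of such terms over the group.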
PlantCV v2: Image analysis software for high-throughput plant phenotyping
Gehan, Malia A.; Fahlgren, Noah; Abbasi, Arash; Berry, Jeffrey C.; Callen, Steven T.; Chavez, Leonardo; Doust, Andrew N.; Feldman, Max J.; Gilbert, Kerrigan B.; Hodge, John G.; Hoyer, J. Steen; Lin, Andy; Liu, Suxing; Lizárraga, César; Lorence, Argelia; Miller, Michael; Platon, Eric; Tessman, Monica; Sax, Tony
2017-01-01
Systems for collecting image data in conjunction with computer vision techniques are a powerful tool for increasing the temporal resolution at which plant phenotypes can be measured non-destructively. Computational tools that are flexible and extendable are needed to address the diversity of plant phenotyping problems. We previously described the Plant Computer Vision (PlantCV) software package, which is an image processing toolkit for plant phenotyping analysis. The goal of the PlantCV project is to develop a set of modular, reusable, and repurposable tools for plant image analysis that are open-source and community-developed. Here we present the details and rationale for major developments in the second major release of PlantCV. In addition to overall improvements in the organization of the PlantCV project, new functionality includes a set of new image processing and normalization tools, support for analyzing images that include multiple plants, leaf segmentation, landmark identification tools for morphometrics, and modules for machine learning. PMID:29209576
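As a flavor of the kind of segmentation step such toolkits provide, here is a generic excess-green thresholding sketch in plain numpy. This is not PlantCV's API, just an illustration of separating plant from background pixels; the index, threshold, and toy image are our choices.

```python
import numpy as np

def excess_green_mask(rgb, threshold=0.1):
    """Flag plant pixels with the excess-green index ExG = 2g - r - b
    computed on chromaticity-normalized channels.
    rgb: float array of shape (h, w, 3) with values in [0, 1]."""
    total = rgb.sum(axis=2) + 1e-8
    r, g, b = (rgb[..., i] / total for i in range(3))
    return (2 * g - r - b) > threshold

# Toy scene: a green "plant" square on a brown "soil" background.
img = np.zeros((8, 8, 3))
img[...] = (0.4, 0.3, 0.2)          # soil
img[2:6, 2:6] = (0.1, 0.6, 0.1)     # plant
mask = excess_green_mask(img)
```

Downstream phenotyping steps (area, height, landmark extraction) then operate on the binary mask rather than the raw image.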
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
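l1-regularized inverse problems of this kind are typically solved with proximal iterations such as ISTA. The flat (non-spherical) toy below illustrates the synthesis-type formulation min 0.5||Ax - y||² + λ||x||₁; the solvers on the sphere in the paper are considerably more involved, and all names here are ours.

```python
import numpy as np

def soft(x, tau):
    """Soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam, n_iter=200):
    """Iterative soft-thresholding algorithm for
    min_x 0.5*||A x - y||^2 + lam*||x||_1 (synthesis formulation)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - step * (A.T @ (A @ x - y)), lam * step)
    return x

# Toy compressed-sensing problem: a 3-sparse signal, 50 measurements.
rng = np.random.default_rng(3)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[[5, 42, 77]] = (3.0, -2.0, 4.0)
x_hat = ista(A, A @ x_true, lam=0.1, n_iter=2000)
```

With enough measurements and iterations, the largest entries of x_hat typically land on the true support; in the analysis setting the l1 norm is instead applied to a transform of x, which changes the proximal step.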
NASA Technical Reports Server (NTRS)
Bracken, P. A.; Dalton, J. T.; Quann, J. J.; Billingsley, J. B.
1978-01-01
The Atmospheric and Oceanographic Information Processing System (AOIPS) was developed to help applications investigators perform required interactive image data analysis rapidly and to eliminate the inefficiencies and problems associated with batch operation. This paper describes the configuration and processing capabilities of AOIPS and presents unique subsystems for displaying, analyzing, storing, and manipulating digital image data. Applications of AOIPS to research investigations in meteorology and earth resources are featured.
General tensor discriminant analysis and gabor features for gait recognition.
Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J
2007-10-01
Traditional image representations are not suited to conventional classification methods, such as linear discriminant analysis (LDA), because of the undersample problem (USP): the dimensionality of the feature space is much higher than the number of training samples. Motivated by the successes of two-dimensional LDA (2DLDA) for face recognition, we develop a general tensor discriminant analysis (GTDA) as a preprocessing step for LDA. The benefits of GTDA compared with existing preprocessing methods, e.g., principal component analysis (PCA) and 2DLDA, include 1) the USP is reduced in subsequent classification by, for example, LDA; 2) the discriminative information in the training tensors is preserved; and 3) GTDA provides stable recognition rates because the alternating projection optimization algorithm used to obtain a solution of GTDA converges, while that of 2DLDA does not. We use human gait recognition to validate the proposed GTDA. The averaged gait images are utilized for gait representation. Given the popularity of Gabor-function-based image decompositions for image understanding and object recognition, we develop three different Gabor-function-based image representations: 1) the GaborD representation is the sum of Gabor filter responses over directions, 2) GaborS is the sum of Gabor filter responses over scales, and 3) GaborSD is the sum of Gabor filter responses over scales and directions. The GaborD, GaborS, and GaborSD representations are applied to the problem of recognizing people from their averaged gait images. A large number of experiments were carried out to evaluate the effectiveness (recognition rate) of gait recognition based on first obtaining a Gabor, GaborD, GaborS, or GaborSD image representation, then using GTDA to extract features, and finally using LDA for classification. The proposed methods achieved good performance for gait recognition based on image sequences from the USF HumanID Database. Experimental comparisons are made with nine state-of-the-art classification methods in gait recognition.
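A GaborD-style feature, summing Gabor filter responses over directions at a fixed scale, can be sketched as follows. The filter parameterization and the FFT-based circular convolution are our choices for a self-contained illustration, not the paper's exact filters.

```python
import numpy as np

def gabor_kernel(theta, sigma=2.0, lam=4.0, size=9):
    """Real part of a Gabor filter at orientation theta
    (a common textbook parameterization)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the wave
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def conv2_same(img, k):
    """Same-size circular convolution via FFT (adequate for a sketch)."""
    out = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, s=img.shape)))
    return np.roll(out, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))

def gabor_d(image, n_dirs=4):
    """GaborD-style feature: sum of Gabor responses over directions."""
    thetas = np.arange(n_dirs) * np.pi / n_dirs
    return sum(conv2_same(image, gabor_kernel(t)) for t in thetas)
```

GaborS would instead sum over several values of sigma/lam at one orientation, and GaborSD over both scales and orientations.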
NASA Technical Reports Server (NTRS)
1987-01-01
Used to detect eye problems in children through analysis of retinal reflexes, the system incorporates image processing techniques. VISISCREEN's photorefractor is basically a 35 millimeter camera with a telephoto lens and an electronic flash. By making a color photograph, the system can test the human eye for refractive error and obstruction in the cornea or lens. Ocular alignment problems are detected by imaging both eyes simultaneously. The electronic flash sends light into the eyes, and the light is reflected from the retina back to the camera lens. The photorefractor analyzes the retinal reflexes generated by the subject's response to the flash and produces an image of the subject's eyes in which the pupils are variously colored. The nature of a defect, where one exists, is identifiable by a trained observer's visual examination.
Ship dynamics for maritime ISAR imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin Walter
2008-02-01
Demand is increasing for imaging ships at sea. Conventional SAR fails because the ships are usually in motion, both with a forward velocity and with other linear and angular motions that accompany sea travel. Because the target itself is moving, this becomes an Inverse-SAR, or ISAR, problem. Developing useful ISAR techniques and algorithms is considerably aided by first understanding the nature and characteristics of ship motion; consequently, a brief study of some principles of naval architecture sheds useful light on this problem. We attempt to do so here. Ship motions are analyzed for their impact on range-Doppler imaging using Inverse Synthetic Aperture Radar (ISAR). A framework for analysis is developed, and limitations of simple ISAR systems are discussed.
NASA Astrophysics Data System (ADS)
Perner, Petra
2017-03-01
Molecular image-based techniques are widely used in medicine to detect specific diseases. Diagnosis based on a patient's appearance is an important issue, and the analysis of the eye also plays an important role in detecting specific diseases. These are important topics in medicine, and their standardization by an automatic system can be a new and challenging field for machine vision. Compared to iris recognition, iris diagnosis places much higher demands on image acquisition and on the interpretation of the iris. Iris diagnosis (iridology) is the investigation and analysis of the colored part of the eye, the iris, to discover factors that play an important role in the prevention and treatment of illnesses, as well as in the preservation of optimum health. An automatic system would pave the way for a much wider use of iris diagnosis, both for the diagnosis of illnesses and for the purpose of individual health protection. In this paper, we describe our work towards an automatic iris diagnosis system. We describe the image acquisition process and the problems associated with it, and explain different approaches to image acquisition and preprocessing. We describe the image analysis method for the detection of the iris, and give the meta-model for image interpretation. Based on this model, we show the many tasks for image analysis, ranging from image-object feature analysis and spatial image analysis to color image analysis. Our first results for the recognition of the iris are given. We describe how to detect the pupil and unwanted lamp spots, and explain how to recognize orange and blue spots in the iris and match them against the topological map of the iris. Finally, we give an outlook on further work.
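As a deliberately naive illustration of one subtask, pupil localization, the sketch below thresholds the darkest pixels and takes their centroid. This is not the paper's method: a real system would first mask the specular lamp spots the abstract mentions and use a more robust detector.

```python
import numpy as np

def pupil_centroid(gray, dark_fraction=0.02):
    """Locate the pupil as the centroid of the darkest pixels.
    Naive: saturated lamp reflections inside the pupil would first
    have to be masked out (e.g. by excluding near-1.0 pixels)."""
    cutoff = np.quantile(gray, dark_fraction)
    ys, xs = np.nonzero(gray <= cutoff)
    return ys.mean(), xs.mean()

# Synthetic eye image: bright iris with a dark pupil disk at (30, 40).
img = np.full((64, 64), 0.8)
yy, xx = np.mgrid[0:64, 0:64]
img[(yy - 30) ** 2 + (xx - 40) ** 2 <= 36] = 0.05
cy, cx = pupil_centroid(img)
```

Once the pupil center and radius are known, the iris annulus can be unrolled to polar coordinates for the color-spot analysis described above.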
NASA Technical Reports Server (NTRS)
Knasel, T. Michael
1996-01-01
The primary goal of the Adaptive Vision Laboratory Research project was to develop advanced computer vision systems for automatic target recognition. The approach used in this effort combined several machine learning paradigms, including evolutionary learning algorithms, neural networks, and adaptive clustering techniques, to develop the E-MORPH system. This system is capable of generating pattern recognition systems to solve a wide variety of complex recognition tasks. A series of simulation experiments were conducted using E-MORPH to solve problems in OCR, military target recognition, industrial inspection, and medical image analysis. The bulk of the funds provided through this grant were used to purchase computer hardware and software to support these computationally intensive simulations. The payoff from this effort is the reduced need for human involvement in the design and implementation of recognition systems. We have shown that the techniques used in E-MORPH are generic and readily transition to other problem domains. Specifically, E-MORPH is a multi-phase evolutionary learning system that evolves cooperative sets of feature detectors and combines their responses using an adaptive classifier to form a complete pattern recognition system. The system can operate on binary or grayscale images. In our most recent experiments, we used multi-resolution images that are formed by applying a Gabor wavelet transform to a set of grayscale input images. To begin the learning process, candidate chips are extracted from the multi-resolution images to form a training set and a test set. A population of detector sets is randomly initialized to start the evolutionary process. Using a combination of evolutionary programming and genetic algorithms, the feature detectors are enhanced to solve a recognition problem. The design of E-MORPH and recognition results for a complex problem in medical image analysis are described at the end of this report.
The specific task involves the identification of vertebrae in x-ray images of human spinal columns. This problem is extremely challenging because the individual vertebrae exhibit variation in shape, scale, orientation, and contrast. E-MORPH generated several accurate recognition systems to solve this task. The dual use of this ATR technology clearly demonstrates the flexibility and power of our approach.
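The evolutionary loop at the heart of such systems can be illustrated with a minimal genetic algorithm that evolves a binary mask of "enabled detectors" toward a toy fitness. E-MORPH itself combines far richer paradigms (evolutionary programming plus an adaptive classifier), so treat this purely as a sketch of the selection/crossover/mutation cycle.

```python
import numpy as np

def evolve(fitness, n_bits=16, pop_size=30, n_gen=60, seed=0):
    """Minimal genetic algorithm: binary tournament selection,
    uniform crossover, bit-flip mutation."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        children = []
        for _ in range(pop_size):
            p = rng.integers(0, pop_size, 4)          # two binary tournaments
            a = pop[p[0]] if scores[p[0]] >= scores[p[1]] else pop[p[1]]
            b = pop[p[2]] if scores[p[2]] >= scores[p[3]] else pop[p[3]]
            mask = rng.integers(0, 2, n_bits).astype(bool)
            child = np.where(mask, a, b)              # uniform crossover
            flip = rng.random(n_bits) < 0.02          # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.array(children)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()]

# Toy fitness: how many of a target set of "useful detectors" are enabled.
target = np.array([1, 0] * 8)
best = evolve(lambda ind: int(np.sum(ind == target)))
```

In a real detector-evolution setting the fitness would be the recognition rate of a classifier trained on the responses of the enabled detectors, which is where most of the computation goes.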
ERIC Educational Resources Information Center
Wiseman, Marcie C.; Moradi, Bonnie
2010-01-01
On the basis of integrating objectification theory research with research on body image and eating problems among sexual minority men, the present study examined relations among sociocultural and psychological correlates of eating disorder symptoms with a sample of 231 sexual minority men. Results of a path analysis supported tenets of…
Bringing the Digital Camera to the Physics Lab
NASA Astrophysics Data System (ADS)
Rossi, M.; Gratton, L. M.; Oss, S.
2013-03-01
We discuss how the compressed images created by modern digital cameras can lead to severe problems in the quantitative analysis of experiments based on such images. Difficulties result from the nonlinear treatment of lighting intensity values stored in compressed files. To overcome such troubles, one has to adopt uncompressed, native formats, as we examine in this work.
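The nonlinearity in question is the gamma-type transfer curve applied before storage: stored pixel values are not proportional to light intensity and must be mapped back to linear values before any quantitative analysis. Taking the standard sRGB curve as a representative example (actual camera pipelines may differ):

```python
import numpy as np

def srgb_to_linear(v):
    """Invert the sRGB transfer curve so pixel values become
    proportional to light intensity, a prerequisite for quantitative
    photometry. v: decoded pixel values in [0, 1]."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

# A mid-gray stored value of 0.5 corresponds to only ~21% linear intensity,
# which is why intensity ratios read off a JPEG are misleading.
linear_mid = float(srgb_to_linear(0.5))
```

Raw (native) formats avoid the problem entirely by storing the sensor's approximately linear response, which is the course of action the paper recommends.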
Corporate image and public health: an analysis of the Philip Morris, Kraft, and Nestlé websites.
Smith, Elizabeth
2012-01-01
Companies need to maintain a good reputation to do business; however, companies in the infant formula, tobacco, and processed food industries have been identified as promoting disease. Such companies use their websites as a means of promulgating a positive public image, thereby potentially reducing the effectiveness of public health campaigns against the problems they perpetuate. The author examined documents from the websites of Philip Morris, Kraft, and Nestlé for issue framing and analyzed them using Benoit's typology of corporate image repair strategies. All three companies defined the problems they were addressing strategically, minimizing their own responsibility and the consequences of their actions. They proposed solutions that were actions to be taken by others. They also associated themselves with public health organizations. Health advocates should recognize industry attempts to use relationships with health organizations as strategic image repair and reject industry efforts to position themselves as stakeholders in public health problems. Denormalizing industries that are disease vectors, not just their products, may be critical in realizing positive change.
Zhan, Mei; Crane, Matthew M; Entchev, Eugeni V; Caballero, Antonio; Fernandes de Abreu, Diana Andrea; Ch'ng, QueeLim; Lu, Hang
2015-04-01
Quantitative imaging has become a vital technique in biological discovery and clinical diagnostics; a plethora of tools have recently been developed to enable new and accelerated forms of biological investigation. Increasingly, the capacity for high-throughput experimentation provided by new imaging modalities, contrast techniques, microscopy tools, microfluidics and computer controlled systems shifts the experimental bottleneck from the level of physical manipulation and raw data collection to automated recognition and data processing. Yet, despite their broad importance, image analysis solutions to address these needs have been narrowly tailored. Here, we present a generalizable formulation for autonomous identification of specific biological structures that is applicable to many problems. The process flow architecture we present here utilizes standard image processing techniques and the multi-tiered application of classification models such as support vector machines (SVM). These low-level functions are readily available in a large array of image processing software packages and programming languages. Our framework is thus both easy to implement at the modular level and provides specific high-level architecture to guide the solution of more complicated image-processing problems. We demonstrate the utility of the classification routine by developing two specific classifiers as a toolset for automation and cell identification in the model organism Caenorhabditis elegans. To serve a common need for automated high-resolution imaging and behavior applications in the C. elegans research community, we contribute a ready-to-use classifier for the identification of the head of the animal under bright field imaging. Furthermore, we extend our framework to address the pervasive problem of cell-specific identification under fluorescent imaging, which is critical for biological investigation in multicellular organisms or tissues.
Using these examples as a guide, we envision the broad utility of the framework for diverse problems across different length scales and imaging methods.
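A linear SVM of the kind used in such classification tiers can be trained with a Pegasos-style stochastic subgradient method. The numpy stand-in below is not the authors' implementation (which builds on standard packages); it only shows the hinge-loss training loop on a toy two-class problem standing in for "head" vs "non-head" image patches.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, n_iter=3000, rng=None):
    """Pegasos-style subgradient training of a linear SVM
    (hinge loss + l2 regularization); labels y must be in {-1, +1}."""
    if rng is None:
        rng = np.random.default_rng(0)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # fold the bias into w
    w = np.zeros(Xb.shape[1])
    for t in range(1, n_iter + 1):
        i = rng.integers(len(y))
        margin = y[i] * (Xb[i] @ w)
        w *= 1.0 - 1.0 / t                      # shrink: (1 - eta*lam), eta = 1/(lam*t)
        if margin < 1:
            w += (1.0 / (lam * t)) * y[i] * Xb[i]
    return w

# Toy features: two well-separated clusters, one per class.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-2, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w = train_linear_svm(X, y)
pred = np.sign(np.hstack([X, np.ones((100, 1))]) @ w)
```

In a multi-tiered pipeline, a first classifier of this kind can reject easy background patches cheaply before a second, more specific classifier resolves the cell identities.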
Mingus Discontinuous Multiphysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pat Notz, Dan Turner
Mingus provides hybrid coupled local/non-local mechanics analysis capabilities that extend several traditional methods to applications with inherent discontinuities. Its primary features include adaptations of solid mechanics, fluid dynamics and digital image correlation that naturally accommodate disjointed data or irregular solution fields by assimilating a variety of discretizations (such as control volume finite elements, peridynamics and meshless control point clouds). The goal of this software is to provide an analysis framework for multiphysics engineering problems with an integrated image correlation capability that can be used for experimental validation and model
Autonomous precision landing using terrain-following navigation
NASA Technical Reports Server (NTRS)
Vaughan, R. M.; Gaskell, R. W.; Halamek, P.; Klumpp, A. R.; Synnott, S. P.
1991-01-01
Terrain-following navigation studies that have been done over the past two years in the navigation system section at JPL are described. A descent to Mars scenario based on Mars Rover and Sample Return mission profiles is described, and navigation and image processing issues pertaining to descent phases where landmark pictures can be obtained are examined. A covariance analysis is performed to verify that landmark measurements from a terrain-following navigation system can satisfy precision landing requirements. Image processing problems involving known landmarks in actual pictures are considered. Mission design alternatives that can alleviate some of these problems are suggested.
How to Perform a Systematic Review and Meta-analysis of Diagnostic Imaging Studies.
Cronin, Paul; Kelly, Aine Marie; Altaee, Duaa; Foerster, Bradley; Petrou, Myria; Dwamena, Ben A
2018-05-01
A systematic review is a comprehensive search, critical evaluation, and synthesis of all the relevant studies on a specific (clinical) topic that can be applied to the evaluation of diagnostic and screening imaging studies. It can be a qualitative or a quantitative (meta-analysis) review of available literature. A meta-analysis uses statistical methods to combine and summarize the results of several studies. In this review, a 12-step approach to performing a systematic review (and meta-analysis) is outlined under the four domains: (1) Problem Formulation and Data Acquisition, (2) Quality Appraisal of Eligible Studies, (3) Statistical Analysis of Quantitative Data, and (4) Clinical Interpretation of the Evidence. This review is specifically geared toward the performance of a systematic review and meta-analysis of diagnostic test accuracy (imaging) studies. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Multilevel image recognition using discriminative patches and kernel covariance descriptor
NASA Astrophysics Data System (ADS)
Lu, Le; Yao, Jianhua; Turkbey, Evrim; Summers, Ronald M.
2014-03-01
Computer-aided diagnosis of medical images has emerged as an important tool to objectively improve the performance, accuracy and consistency of clinical workflow. Computerizing the medical image diagnostic recognition problem involves three fundamental problems: where to look (i.e., where is the region of interest in the whole image/volume), image feature description/encoding, and similarity metrics for classification or matching. In this paper, we present the motivation, implementation and performance evaluation of task-driven iterative, discriminative image patch mining; a covariance matrix based descriptor via intensity, gradient and spatial layout; and a log-Euclidean distance kernel for support vector machines, to address these three aspects respectively. To cope with the often visually ambiguous image patterns of regions of interest in medical diagnosis, discovery of multilabel selective discriminative patches is desired. The covariance of several image statistics summarizes their second-order interactions within an image patch and has proven to be an effective image descriptor, with low dimensionality compared with joint statistics and fast computation regardless of the patch size. We extensively evaluate two extended Gaussian kernels using the affine-invariant Riemannian metric or the log-Euclidean metric with support vector machines (SVM), on two medical image classification problems: degenerative disc disease (DDD) detection on cortical shell unwrapped CT maps and colitis detection on CT key images. The proposed approach is validated with promising quantitative results on these challenging tasks. Our experimental findings and discussion also unveil some interesting insights on the covariance feature composition with or without spatial layout for classification and retrieval, and on different kernel constructions for SVM. This will also shed some light on future work using covariance features and kernel classification for medical image analysis.
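The covariance descriptor and log-Euclidean distance described above can be sketched in a few lines of numpy. The five per-pixel features chosen here (intensity, gradient magnitudes, and normalised coordinates) and the small ridge regularisation are illustrative assumptions, not the paper's exact feature set:

```python
import numpy as np

def covariance_descriptor(patch):
    """Covariance of per-pixel features (intensity, |dI/dy|, |dI/dx|,
    normalised y, normalised x), plus a small ridge to stay SPD."""
    h, w = patch.shape
    gy, gx = np.gradient(patch.astype(float))
    ys, xs = np.mgrid[0:h, 0:w]
    F = np.stack([patch.ravel(), np.abs(gy).ravel(), np.abs(gx).ravel(),
                  ys.ravel() / h, xs.ravel() / w])
    return np.cov(F) + 1e-6 * np.eye(5)

def log_euclidean_distance(C1, C2):
    """Frobenius norm between matrix logarithms of two SPD matrices."""
    def logm_spd(C):
        vals, vecs = np.linalg.eigh(C)
        return (vecs * np.log(vals)) @ vecs.T
    return np.linalg.norm(logm_spd(C1) - logm_spd(C2), 'fro')

rng = np.random.default_rng(0)
p1 = rng.random((16, 16))
p2 = 5.0 * rng.random((16, 16))
C1, C2 = covariance_descriptor(p1), covariance_descriptor(p2)
```

An extended Gaussian kernel for SVM classification, as the abstract mentions, would then take the form `exp(-d**2 / sigma**2)` with `d` the log-Euclidean distance.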
Gender Equality in the Academy: The Pipeline Problem
ERIC Educational Resources Information Center
Monroe, Kristen Renwick; Chiu, William F.
2010-01-01
As part of the ongoing work by the Committee on the Status of Women in the Profession (CSWP), we offer an empirical analysis of the pipeline problem in academia. The image of a pipeline is a commonly advanced explanation for persistent discrimination that suggests that gender inequality will decline once there are sufficient numbers of qualified…
NASA Astrophysics Data System (ADS)
Zhang, Jun; Saha, Ashirbani; Zhu, Zhe; Mazurowski, Maciej A.
2018-02-01
Breast tumor segmentation based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) remains an active and challenging problem. Previous studies often rely on manual annotation of tumor regions, which is not only time-consuming but also error-prone. Recent studies have shown high promise of deep learning-based methods in various segmentation problems. However, these methods are usually faced with the challenge of a limited number (e.g., tens or hundreds) of medical images for training, leading to sub-optimal segmentation performance. Also, previous methods cannot efficiently deal with the prevalent class-imbalance problem in tumor segmentation, where the number of voxels in tumor regions is much lower than that in the background area. To address these issues, in this study, we propose a mask-guided hierarchical learning (MHL) framework for breast tumor segmentation via fully convolutional networks (FCN). Our strategy is to first decompose the original difficult problem into several sub-problems and then solve these relatively simpler sub-problems in a hierarchical manner. To precisely identify locations of tumors that underwent a biopsy, we further propose an FCN model to detect two landmarks defined on the nipples. Finally, based on both segmentation probability maps and our identified landmarks, we propose to select biopsied tumors from all detected tumors via a tumor selection strategy using the pathology location. We validate our MHL method using data for 272 patients, and achieve a mean Dice similarity coefficient (DSC) of 0.72 in breast tumor segmentation. Finally, in a radiogenomic analysis, we show that previously developed image features achieve comparable performance for identifying the luminal A subtype when applied to the automatic segmentation and to a semi-manual segmentation, demonstrating high promise for fully automated radiogenomic analysis in breast cancer.
Gurcan, Metin N; Tomaszewski, John; Overton, James A; Doyle, Scott; Ruttenberg, Alan; Smith, Barry
2017-02-01
Interoperability across data sets is a key challenge for quantitative histopathological imaging. There is a need for an ontology that can support effective merging of pathological image data with associated clinical and demographic data. To foster organized, cross-disciplinary, information-driven collaborations in the pathological imaging field, we propose to develop an ontology to represent imaging data and methods used in pathological imaging and analysis, and call it Quantitative Histopathological Imaging Ontology - QHIO. We apply QHIO to breast cancer hot-spot detection with the goal of enhancing reliability of detection by promoting the sharing of data between image analysts. Copyright © 2016 Elsevier Inc. All rights reserved.
An approach to integrate the human vision psychology and perception knowledge into image enhancement
NASA Astrophysics Data System (ADS)
Wang, Hui; Huang, Xifeng; Ping, Jiang
2009-07-01
Image enhancement is a very important image preprocessing technology, especially when images are captured under poor imaging conditions or when dealing with high-bit-depth images. The beneficiary of image enhancement may be either a human observer or a computer vision process performing some kind of higher-level image analysis, such as target detection or scene understanding. One of the main objectives of image enhancement is to obtain an image with high dynamic range and high contrast for human perception or interpretation. It is therefore very useful to integrate empirical or statistical knowledge of human vision psychology and perception into image enhancement. Human vision psychology holds that humans' perception of and response to an intensity fluctuation δu in a visual signal are weighted by the background stimulus u, rather than being uniform. Three main laws describe this phenomenon in psychology and psychophysics: Weber's law, the Weber-Fechner law and Stevens's law. This paper integrates these three laws of human vision psychology and perception into a popular image enhancement algorithm named Adaptive Plateau Equalization (APE). Experiments were performed on high-bit-depth star images captured in night scenes and on infrared images, both static images and video streams. For the jitter problem in video streams, the algorithm uses the difference between the current frame's plateau value and the previous frame's plateau value to correct the current frame's plateau value. To account for random noise, the pixel value mapping process depends not only on the current pixel but also on the pixels in a window surrounding it, usually of size 3×3. The results of this improved algorithm are evaluated by entropy analysis and visual perception analysis.
The experimental results showed that the improved APE algorithm increased image quality: the target and the surrounding assistant targets could be identified easily, and noise was not amplified much. For low quality images, the improved algorithm increases the information entropy and improves the aesthetic quality of the image and the video stream, while for high quality images it does not degrade quality.
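The plateau equalization idea can be sketched as follows; the exact clipping rule and the frame-to-frame blending used against jitter are assumptions, since the abstract does not give the paper's formulas:

```python
import numpy as np

def plateau_equalize(img, plateau):
    """Histogram equalisation with bin counts clipped at `plateau`, so a
    dominant background cannot consume the whole output range (the core
    idea of Adaptive Plateau Equalization)."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.minimum(np.bincount(img.ravel(), minlength=256), plateau)
    cdf = np.cumsum(hist.astype(float))
    cdf_min = cdf[cdf > 0].min()
    lut = np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-12))
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

def smooth_plateau(prev_plateau, cur_plateau, alpha=0.5):
    """Blend successive frames' plateau values to damp video jitter
    (an assumed exponential blend, not the paper's exact rule)."""
    return prev_plateau + alpha * (cur_plateau - prev_plateau)

# A low-contrast 8-bit image occupying only grey levels 100-109 is
# stretched to the full output range.
img = np.repeat(np.arange(100, 110, dtype=np.uint8), 100).reshape(25, 40)
out = plateau_equalize(img, plateau=50)
```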
Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan
2018-01-01
A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing are very important for accurately determining those characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-image-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed with a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, testing three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to easily analyze plant growth via large-scale plant image data.
Relaxations to Sparse Optimization Problems and Applications
NASA Astrophysics Data System (ADS)
Skau, Erik West
Parsimony is a fundamental property that is applied to many characteristics in a variety of fields. Of particular interest are optimization problems that apply rank, dimensionality, or support in a parsimonious manner. In this thesis we study some optimization problems and their relaxations, and focus on properties and qualities of the solutions of these problems. The Gramian tensor decomposition problem attempts to decompose a symmetric tensor as a sum of rank one tensors. We approach the Gramian tensor decomposition problem with a relaxation to a semidefinite program. We study conditions which ensure that the solution of the relaxed semidefinite problem gives the minimal Gramian rank decomposition. Sparse representations with learned dictionaries are one of the leading image modeling techniques for image restoration. When learning these dictionaries from a set of training images, the sparsity parameter of the dictionary learning algorithm strongly influences the content of the dictionary atoms. We describe geometrically the content of trained dictionaries and how it changes with the sparsity parameter. We use statistical analysis to characterize how the different content is used in sparse representations. Finally, a method to control the structure of the dictionaries is demonstrated, allowing us to learn a dictionary which can later be tailored for specific applications. Variations of dictionary learning can be broadly applied to a variety of applications. We explore a pansharpening problem with a triple factorization variant of coupled dictionary learning. Another application of dictionary learning is computer vision. Computer vision relies heavily on object detection, which we explore with a hierarchical convolutional dictionary learning model. Data fusion of disparate modalities is a growing topic of interest. We do a case study to demonstrate the benefit of using social media data with satellite imagery to estimate hazard extents.
In this case study analysis we apply a maximum entropy model, guided by the social media data, to estimate the flooded regions during a 2013 flood in Boulder, CO and show that the results are comparable to those obtained using expert information.
Radar image enhancement and simulation as an aid to interpretation and training
NASA Technical Reports Server (NTRS)
Frost, V. S.; Stiles, J. A.; Holtzman, J. C.; Dellwig, L. F.; Held, D. N.
1980-01-01
Greatly increased activity in the field of radar image applications in the coming years demands that techniques of radar image analysis, enhancement, and simulation be developed now. Since the statistical nature of radar imagery differs from that of photographic imagery, one finds that the required digital image processing algorithms (e.g., for improved viewing and feature extraction) differ from those currently existing. This paper addresses these problems and discusses work at the Remote Sensing Laboratory in image simulation and processing, especially for systems comparable to the formerly operational SEASAT synthetic aperture radar.
NASA Astrophysics Data System (ADS)
Martel, Anne L.
2004-04-01
In order to extract quantitative information from dynamic contrast-enhanced MR images (DCE-MRI) it is usually necessary to identify an arterial input function. This is not a trivial problem if there are no major vessels present in the field of view. Most existing techniques rely on operator intervention or use various curve parameters to identify suitable pixels, but these are often specific to the anatomical region or the acquisition method used. They also require the signal from several pixels to be averaged in order to improve the signal-to-noise ratio; however, this introduces errors due to partial volume effects. We have described previously how factor analysis can be used to automatically separate arterial and venous components from DCE-MRI studies of the brain, but although that method works well for single-slice images through the brain when the blood-brain barrier is intact, it runs into problems for multi-slice images with more complex dynamics. This paper will describe a factor analysis method that is more robust in such situations and is relatively insensitive to the number of physiological components present in the data set. The technique is very similar to that used to identify spectral end-members from multispectral remote sensing images.
White Light Optical Processing and Holography.
1982-10-01
of the object beam. The major problem in image deblurring is noise in the deblurred image. There are two kinds of noise: (a) false images. The...reducing the noise; this work is described in Sec. 3. 2. We addressed the bias buildup and SNR in incoherent optical processing, making an analysis that...system is generally better than the coherent for SNR. Thus, if we have a sensitive, low-noise detector at the output of an incoherent system, we should
Optical image acquisition system for colony analysis
NASA Astrophysics Data System (ADS)
Wang, Weixing; Jin, Wenbiao
2006-02-01
For counting both colonies and plaques, there are many applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Recently, many researchers and developers have worked on this kind of system. Our investigation found that some existing systems, being products of a new technology, have problems; one of the main problems is image acquisition. In order to acquire colony images of good quality, an illumination box was constructed. The box includes front lighting and back lighting, which can be selected by users based on the properties of the colony dishes. With the illumination box, lighting is uniform and the colony dish can be placed in the same position every time, which makes image processing easier. A digital camera at the top of the box is connected to a PC with a USB cable, and all camera functions are controlled by the computer.
Lesion Border Detection in Dermoscopy Images
Celebi, M. Emre; Schaefer, Gerald; Iyatomi, Hitoshi; Stoecker, William V.
2009-01-01
Background Dermoscopy is one of the major imaging modalities used in the diagnosis of melanoma and other pigmented skin lesions. Due to the difficulty and subjectivity of human interpretation, computerized analysis of dermoscopy images has become an important research area. One of the most important steps in dermoscopy image analysis is the automated detection of lesion borders. Methods In this article, we present a systematic overview of the recent border detection methods in the literature paying particular attention to computational issues and evaluation aspects. Conclusion Common problems with the existing approaches include the acquisition, size, and diagnostic distribution of the test image set, the evaluation of the results, and the inadequate description of the employed methods. Border determination by dermatologists appears to depend upon higher-level knowledge, therefore it is likely that the incorporation of domain knowledge in automated methods will enable them to perform better, especially in sets of images with a variety of diagnoses. PMID:19121917
Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert
2018-05-08
In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems caused by drifting image acquisition conditions, background noise and high variation in colony features demand a user-friendly, adaptive and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised, automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm that takes segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
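AutoCellSeg's full feedback-based watershed pipeline is not reproduced here, but the thresholding building block behind such multi-threshold segmentation can be illustrated with a from-scratch Otsu threshold (a standard method used as a sketch, not AutoCellSeg's actual code):

```python
import numpy as np

def otsu_threshold(img):
    """Return the grey level maximising between-class variance (Otsu)."""
    hist = np.bincount(np.asarray(img, dtype=np.uint8).ravel(),
                       minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean mass
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

# Two well-separated intensity populations, as in a plate image with
# bright colonies on a dark background.
img = np.concatenate([np.full(500, 50), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(img)
```

Multi-thresholding applies the same criterion recursively or over several intensity bands; the watershed step then splits touching colonies within each thresholded region.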
Towards a computer-aided diagnosis system for vocal cord diseases.
Verikas, A; Gelzinis, A; Bacauskiene, M; Uloza, V
2006-01-01
The objective of this work is to investigate a possibility of creating a computer-aided decision support system for an automated analysis of vocal cord images aiming to categorize diseases of vocal cords. The problem is treated as a pattern recognition task. To obtain a concise and informative representation of a vocal cord image, colour, texture, and geometrical features are used. The representation is further analyzed by a pattern classifier categorizing the image into healthy, diffuse, and nodular classes. The approach developed was tested on 785 vocal cord images collected at the Department of Otolaryngology, Kaunas University of Medicine, Lithuania. A correct classification rate of over 87% was obtained when categorizing a set of unseen images into the aforementioned three classes. Bearing in mind the high similarity of the decision classes, the results obtained are rather encouraging and the developed tools could be very helpful for assuring objective analysis of the images of laryngeal diseases.
A new fringeline-tracking approach for color Doppler ultrasound imaging phase unwrapping
NASA Astrophysics Data System (ADS)
Saad, Ashraf A.; Shapiro, Linda G.
2008-03-01
Color Doppler ultrasound imaging is a powerful non-invasive diagnostic tool for many clinical applications that involve examining the anatomy and hemodynamics of human blood vessels. These clinical applications include cardio-vascular diseases, obstetrics, and abdominal diseases. Since its commercial introduction in the early eighties, color Doppler ultrasound imaging has been used mainly as a qualitative tool with very little attempts to quantify its images. Many imaging artifacts hinder the quantification of the color Doppler images, the most important of which is the aliasing artifact that distorts the blood flow velocities measured by the color Doppler technique. In this work we will address the color Doppler aliasing problem and present a recovery methodology for the true flow velocities from the aliased ones. The problem is formulated as a 2D phase-unwrapping problem, which is a well-defined problem with solid theoretical foundations for other imaging domains, including synthetic aperture radar and magnetic resonance imaging. This paper documents the need for a phase unwrapping algorithm for use in color Doppler ultrasound image analysis. It describes a new phase-unwrapping algorithm that relies on the recently developed cutline detection approaches. The algorithm is novel in its use of heuristic information provided by the ultrasound imaging modality to guide the phase unwrapping process. Experiments have been performed on both in-vitro flow-phantom data and in-vivo human blood flow data. Both data types were acquired under a controlled acquisition protocol developed to minimize the distortion of the color Doppler data and hence to simplify the phase-unwrapping task. In addition to the qualitative assessment of the results, a quantitative assessment approach was developed to measure the success of the results. The results of our new algorithm have been compared on ultrasound data to those from other well-known algorithms, and it outperforms all of them.
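The aliasing recovery that the paper formulates as 2D phase unwrapping can be illustrated in one dimension: wrapped Doppler velocities jump by the wrap interval 2·v_Nyquist wherever the true velocity crosses the Nyquist limit, and a simple offset-tracking pass restores them. This is a 1-D sketch of the general idea, not the paper's cutline-based algorithm:

```python
import numpy as np

def unwrap_velocities(v, v_nyq):
    """Undo Doppler aliasing along a 1-D velocity profile: a jump larger
    than v_nyq between neighbours is treated as a wrap of 2*v_nyq."""
    v = np.asarray(v, dtype=float)
    out = v.copy()
    offset = 0.0
    for i in range(1, len(v)):
        step = v[i] - v[i - 1]
        if step > v_nyq:
            offset -= 2.0 * v_nyq
        elif step < -v_nyq:
            offset += 2.0 * v_nyq
        out[i] = v[i] + offset
    return out

# A smooth ramp exceeding the Nyquist velocity wraps into [-v_nyq, v_nyq);
# the unwrapping pass recovers the true profile.
v_nyq = 1.0
true = np.linspace(0.0, 2.5, 26)
wrapped = ((true + v_nyq) % (2.0 * v_nyq)) - v_nyq
recovered = unwrap_velocities(wrapped, v_nyq)
```

In 2D, path independence fails in noisy regions, which is why cutline detection (as in the paper) is needed to choose integration paths that avoid residues.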
Guo, Xiaohu; Dong, Liquan; Zhao, Yuejin; Jia, Wei; Kong, Lingqin; Wu, Yijian; Li, Bing
2015-04-01
Wavefront coding (WFC) technology is adopted in the space optical system to resolve the problem of defocus caused by temperature difference or vibration of satellite motion. According to the theory of WFC, we calculate and optimize the phase mask parameter of the cubic phase mask plate, which is used in an on-axis three-mirror Cassegrain (TMC) telescope system. The simulation analysis and the experimental results indicate that the defocused modulation transfer function curves and the corresponding blurred images have a perfect consistency in the range of 10 times the depth of focus (DOF) of the original TMC system. After digital image processing by a Wiener filter, the spatial resolution of the restored images is up to 57.14 line pairs/mm. The results demonstrate that the WFC technology in the TMC system has superior performance in extending the DOF and less sensitivity to defocus, which has great value in resolving the problem of defocus in the space optical system.
Automatic gang graffiti recognition and interpretation
NASA Astrophysics Data System (ADS)
Parra, Albert; Boutin, Mireille; Delp, Edward J.
2017-09-01
One of the roles of emergency first responders (e.g., police and fire departments) is to prevent and protect against events that can jeopardize the safety and well-being of a community. In the case of criminal gang activity, tools are needed for finding, documenting, and taking the necessary actions to mitigate the problem or issue. We describe an integrated mobile-based system capable of using location-based services, combined with image analysis, to track and analyze gang activity through the acquisition, indexing, and recognition of gang graffiti images. This approach uses image analysis methods for color recognition, image segmentation, and image retrieval and classification. A database of gang graffiti images is described that includes not only the images but also metadata related to the images, such as date and time, geoposition, gang, gang member, colors, and symbols. The user can then query the data in a useful manner. We have implemented these features both as applications for Android and iOS hand-held devices and as a web-based interface.
Zhimeng, Li; Chuan, He; Dishan, Qiu; Jin, Liu; Manhao, Ma
2013-01-01
Aiming at the imaging task scheduling problem for high-altitude airships under emergency conditions, programming models are constructed by analyzing the main constraints, taking the maximum task benefit and the minimum energy consumption as two optimization objectives. Firstly, a hierarchical architecture is adopted to convert this scheduling problem into three subproblems: task ranking, valuable task detection, and energy conservation optimization. Then, algorithms are designed for the subproblems, whose solutions correspond to a feasible solution, an efficient solution, and an optimized solution of the original problem, respectively. This paper gives a detailed introduction to the energy-aware optimization strategy, which rationally adjusts the airship's cruising speed based on the distribution of task deadlines, so as to decrease the total energy consumption caused by cruising. Finally, the application results and comparative analysis show that the proposed strategy and algorithm are effective and feasible. PMID:23864822
Progress in analysis of computed tomography (CT) images of hardwood logs for defect detection
Erol Sarigul; A. Lynn Abbott; Daniel L. Schmoldt
2003-01-01
This paper addresses the problem of automatically detecting internal defects in logs using computed tomography (CT) images. The overall purpose is to assist in breakdown optimization. Several studies have shown that the commercial value of resulting boards can be increased substantially if defect locations are known in advance, and if this information is used to make...
Multi-class texture analysis in colorectal cancer histology
NASA Astrophysics Data System (ADS)
Kather, Jakob Nikolas; Weis, Cleo-Aron; Bianconi, Francesco; Melchers, Susanne M.; Schad, Lothar R.; Gaiser, Timo; Marx, Alexander; Zöllner, Frank Gerrit
2016-06-01
Automatic recognition of different tissue types in histological images is an essential part in the digital pathology toolbox. Texture analysis is commonly used to address this problem; mainly in the context of estimating the tumour/stroma ratio on histological samples. However, although histological images typically contain more than two tissue types, only few studies have addressed the multi-class problem. For colorectal cancer, one of the most prevalent tumour types, there are in fact no published results on multiclass texture separation. In this paper we present a new dataset of 5,000 histological images of human colorectal cancer including eight different types of tissue. We used this set to assess the classification performance of a wide range of texture descriptors and classifiers. As a result, we found an optimal classification strategy that markedly outperformed traditional methods, improving the state of the art for tumour-stroma separation from 96.9% to 98.6% accuracy and setting a new standard for multiclass tissue separation (87.4% accuracy for eight classes). We make our dataset of histological images publicly available under a Creative Commons license and encourage other researchers to use it as a benchmark for their studies.
Automatic detection of diabetic foot complications with infrared thermography by asymmetric analysis
NASA Astrophysics Data System (ADS)
Liu, Chanjuan; van Netten, Jaap J.; van Baal, Jeff G.; Bus, Sicco A.; van der Heijden, Ferdi
2015-02-01
Early identification of diabetic foot complications and their precursors is essential in preventing their devastating consequences, such as foot infection and amputation. Frequent, automatic risk assessment by an intelligent telemedicine system might be feasible and cost effective. Infrared thermography is a promising modality for such a system. The temperature differences between corresponding areas on contralateral feet are the clinically significant parameters. This asymmetric analysis is hindered by (1) foot segmentation errors, especially when the foot temperature and the ambient temperature are comparable, and by (2) different shapes and sizes between contralateral feet due to deformities or minor amputations. To circumvent the first problem, we used a color image and a thermal image acquired synchronously. Foot regions, detected in the color image, were rigidly registered to the thermal image. This resulted in 97.8%±1.1% sensitivity and 98.4%±0.5% specificity over 76 high-risk diabetic patients with manual annotation as a reference. Nonrigid landmark-based registration with B-splines solved the second problem. Corresponding points in the two feet could be found regardless of the shapes and sizes of the feet. With that, the temperature difference of the left and right feet could be obtained.
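Once the color-guided and landmark-based registrations described above have aligned the two feet, the asymmetric analysis itself reduces to a pixel-wise temperature difference with a clinical cut-off. A minimal sketch (the 2.2 °C threshold is a commonly cited clinical value, used here as an assumption):

```python
import numpy as np

def asymmetry_map(left, right_mirrored, threshold=2.2):
    """Pixel-wise temperature difference between the left foot image and
    the registered, mirrored right foot image; pixels whose absolute
    difference exceeds `threshold` (deg C) are flagged. The 2.2 deg C
    cut-off is a commonly cited clinical value, assumed here."""
    diff = np.asarray(left, float) - np.asarray(right_mirrored, float)
    return diff, np.abs(diff) > threshold

# Synthetic thermal images: a 4 deg C hotspot on the left foot only.
left = np.full((64, 32), 30.0)
left[20:25, 10:15] += 4.0
right = np.full((64, 32), 30.0)
diff, flags = asymmetry_map(left, right)
```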
Pécot, Thierry; Bouthemy, Patrick; Boulanger, Jérôme; Chessel, Anatole; Bardin, Sabine; Salamero, Jean; Kervrann, Charles
2015-02-01
Image analysis applied to fluorescence live cell microscopy has become a key tool in molecular biology, since it enables the characterization of biological processes in space and time at the subcellular level. In fluorescence microscopy imaging, the moving tagged structures of interest, such as vesicles, appear as bright spots over a static or nonstatic background. In this paper, we consider the problem of vesicle segmentation and time-varying background estimation at the cellular scale. The main idea is to formulate the joint segmentation-estimation problem in the general conditional random field framework. Segmentation of vesicles and background estimation are then performed alternately by energy minimization using a min-cut/max-flow algorithm. The proposed approach relies on a detection measure computed from intensity contrasts between neighboring blocks in fluorescence microscopy images. This approach permits analysis of either 2D + time or 3D + time data. We demonstrate the performance of the so-called C-CRAFT through an experimental comparison with state-of-the-art methods in fluorescence video-microscopy. We also use this method to characterize the spatial and temporal distribution of Rab6 transport carriers at the cell periphery for two different specific adhesion geometries.
A new method of cardiographic image segmentation based on grammar
NASA Astrophysics Data System (ADS)
Hamdi, Salah; Ben Abdallah, Asma; Bedoui, Mohamed H.; Alimi, Adel M.
2011-10-01
The measurement of the most common ultrasound parameters, such as aortic area, mitral area and left ventricle (LV) volume, requires delineation of the organ in order to estimate the area. In terms of medical image processing, this translates into the need to segment the image and define the contours as accurately as possible. The aim of this work is to segment an image and produce an automated area estimation based on grammar. The entity "language" is projected onto the entity "image" to perform structural analysis and parsing of the image. We show how the idea of segmentation and grammar-based area estimation is applied to real problems of cardiographic image processing.
Gong, Yunchao; Lazebnik, Svetlana; Gordo, Albert; Perronnin, Florent
2013-12-01
This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
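The ITQ alternation described above is compact enough to sketch. A minimal NumPy version, assuming the input has already been zero-centered and PCA- (or CCA-) projected; the function name and defaults are ours:

```python
import numpy as np

def itq(V, n_iter=50, seed=0):
    """Iterative quantization (ITQ), a sketch of the alternating scheme.

    V : (n, c) array, zero-centered and already PCA- (or CCA-) projected.
    Returns binary codes B in {0, 1} and the learned rotation R.
    """
    rng = np.random.default_rng(seed)
    c = V.shape[1]
    R, _ = np.linalg.qr(rng.standard_normal((c, c)))  # random orthogonal init
    for _ in range(n_iter):
        B = np.sign(V @ R)            # fix R: quantize to hypercube vertices
        # Fix B: orthogonal Procrustes step, R = argmin ||B - V R||_F.
        U, _, Vt = np.linalg.svd(V.T @ B)
        R = U @ Vt
    return (V @ R > 0).astype(np.uint8), R
```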
Feeney, James M; Montgomery, Stephanie C; Wolf, Laura; Jayaraman, Vijay; Twohig, Michael
2016-09-01
Among transferred trauma patients, challenges with the transfer of radiographic studies include problems loading or viewing the studies at the receiving hospitals, and problems manipulating, reconstructing, or evaluating the transferred images. Cloud-based image transfer systems may address some of these problems. We reviewed the charts of patients transferred during one year surrounding the adoption of a cloud computing data transfer system. We compared the rates of repeat imaging before (precloud) and after (postcloud) the adoption of the cloud-based data transfer system. During the precloud period, 28 out of 100 patients required 90 repeat studies. With the cloud computing transfer system in place, three out of 134 patients required seven repeat films. There was a statistically significant decrease in the proportion of patients requiring repeat films (28% to 2.2%, P < .0001). Based on an annualized volume of 200 trauma patient transfers, the cost savings, estimated using three methods of cost analysis, is between $30,272 and $192,453.
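The reported significance can be sanity-checked with a pooled two-proportion z-test, one plausible choice; the abstract does not state which test the authors used:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test with a two-sided normal p-value."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                        # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))       # two-sided tail area
    return z, p_value

# Precloud: 28/100 patients needed repeat imaging; postcloud: 3/134.
z, p = two_proportion_z(28, 100, 3, 134)
```

With these counts the z statistic is well above 5, consistent with the reported P < .0001.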
Invariant correlation to position and rotation using a binary mask applied to binary and gray images
NASA Astrophysics Data System (ADS)
Álvarez-Borrego, Josué; Solorza, Selene; Bueno-Ibarra, Mario A.
2013-05-01
In this paper, further alternative ways to generate binary ring masks are studied, and a new methodology is presented for the case where the image to be analysed is distorted by rotation. The new algorithm has a low computational cost. Signature vectors of the target, as well as of the object to be recognized in the problem image, are obtained using a binary ring mask constructed from either the real or the imaginary part of the Fourier transform, analysing two different conditions in each case. In this manner, each target or problem image has four unique binary ring masks. The four variants are analysed and the best one is chosen. In addition, because any rotated image includes some distortion, the best transect in the Fourier plane is chosen in order to obtain the best signature across the different ways of building the binary mask. The methodology is applied to two cases: identifying letters of the alphabet in Arial font and identifying images of fossil diatoms. Considering the great similarity between diatom images, the results obtained are excellent.
Thermal imaging for cold air flow visualisation and analysis
NASA Astrophysics Data System (ADS)
Grudzielanek, M.; Pflitsch, A.; Cermak, J.
2012-04-01
In this work we present first applications of a thermal imaging system for the animated visualisation and analysis of cold air flow in field studies. Mobile thermal imaging systems have advanced rapidly in recent decades. The surface temperature of objects, detected via long-wave infrared radiation, supports inferences in a wide range of research problems. Modern thermal imaging systems can record infrared image sequences for subsequent data analysis; they are no longer purely imaging tools as in the past. Thus, the monitoring and analysis of dynamic processes has become possible. We measured the cold air flow on a sloping grassland area with standard methods (sonic anemometers and temperature loggers) plus a thermal imaging system operating in the range from 7.5 to 14 µm. To analyse the cold air with the thermal measurements, we collected surface infrared temperatures on a projection screen located in the cold air flow direction, opposite the infrared (IR) camera. Our aims in using a thermal imaging system were: (1) to get a general idea of its practicability for our problem, (2) to assess the value of the extensive and more detailed data sets, and (3) to optimise visualisation. The results were very promising. Because time-lapse movies can be generated from the image sequences, processes of cold air flow, such as flow waves, turbulence and general flow speed, can be identified directly. Vertical temperature gradients and near-ground inversions can be visualised very well. Time-lapse movies will be presented. The extensive data collection permits a higher spatial resolution than standard methods, so that cold air flow attributes can be explored in much more detail. Time series are extracted from the IR data series, analysed statistically, and compared to data obtained using traditional systems.
Finally, we assess the usefulness of the additional measurement of cold air flow with thermal imaging systems.
Image Segmentation for Connectomics Using Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tasdizen, Tolga; Seyedhosseini, Mojtaba; Liu, Ting
Reconstruction of neural circuits at the microscopic scale of individual neurons and synapses, also known as connectomics, is an important challenge for neuroscience. While an important motivation of connectomics is providing anatomical ground truth for neural circuit models, the ability to decipher neural wiring maps at the individual cell level is also important in studies of many neurodegenerative diseases. Reconstruction of a neural circuit at the individual neuron level requires the use of electron microscopy images due to their extremely high resolution. Computational challenges include pixel-by-pixel annotation of these images into classes such as cell membrane, mitochondria and synaptic vesicles, and the segmentation of individual neurons. State-of-the-art image analysis solutions are still far from the accuracy and robustness of human vision, and biologists are still limited to studying small neural circuits using mostly manual analysis. In this chapter, we describe our image analysis pipeline that makes use of novel supervised machine learning techniques to tackle this problem.
Simulation of bright-field microscopy images depicting pap-smear specimen
Malm, Patrik; Brun, Anders; Bengtsson, Ewert
2015-01-01
As digital imaging becomes a fundamental part of medical and biomedical research, computer-based evaluation using advanced image analysis is becoming an integral part of many research projects. A common problem when developing new image analysis algorithms is the need for large datasets with ground truth on which the algorithms can be tested and optimized. Generating such datasets is often tedious and introduces subjectivity as well as interindividual and intraindividual variations. An alternative to manually created ground-truth data is to generate synthetic images where the ground truth is known. The challenge then is to make the images sufficiently similar to the real ones to be useful in algorithm development. One of the first and most widely studied medical image analysis tasks is automated screening for cervical cancer through Pap-smear analysis. As part of an effort to develop a new-generation cervical cancer screening system, we have developed a framework for the creation of realistic synthetic bright-field microscopy images that can be used for algorithm development and benchmarking. The resulting framework has been assessed through a visual evaluation by experts with extensive experience of Pap-smear images. The results show that images produced using our described methods are realistic enough to be mistaken for real microscopy images. The developed simulation framework is very flexible and can be modified to mimic many other types of bright-field microscopy images. © 2015 The Authors. Published by Wiley Periodicals, Inc. on behalf of ISAC PMID:25573002
Guided color consistency optimization for image mosaicking
NASA Astrophysics Data System (ADS)
Xie, Renping; Xia, Menghan; Yao, Jian; Li, Li
2018-01-01
This paper studies the problem of color consistency correction for sequential images with diverse color characteristics. Existing algorithms try to adjust all images to minimize color differences among them under a unified energy framework; however, the results are prone to presenting a consistent but unnatural appearance when the color differences between images are large and diverse. In our approach, this problem is addressed effectively by providing a guided initial solution for the global consistency optimization, which avoids converging to a meaningless integrated solution. First, to obtain reliable intensity correspondences in the overlapping regions between image pairs, we propose a histogram extreme point matching algorithm that is robust to image geometric misalignment to some extent. In the absence of extra reference information, the guided initial solution is learned from the major tone of the original images by selecting an image subset as the reference, whose color characteristics are transferred to the others via the paths of graph analysis. Thus, the final results via global adjustment take on a consistent color similar to the appearance of the reference image subset. Several groups of convincing experiments on both a synthetic dataset and challenging real ones sufficiently demonstrate that the proposed approach can achieve as good or even better results compared with state-of-the-art approaches.
Registration algorithm research for three dimensional medical image
NASA Astrophysics Data System (ADS)
Zhao, Jianping; Yang, Huamin; Ding, Ying
2008-03-01
The development of techniques such as CT and MRI offers the means to examine human internal structure directly. In the clinic, various imaging results for a patient are usually combined for analysis. At present, in most cases, doctors make a diagnosis by observing slice images of the human body. Given the complexity and diverse configuration of human organs, as well as the unpredictability of lesion location and shape, it is difficult to imagine the three-dimensional configuration of organs and their relationships from these 2D slices without the corresponding specialist knowledge and practical experience. Aligning two 2D images to obtain one 2D slice image therefore does not satisfy the requirements of medical diagnosis, and we need to extend the registration problem to 3D images. Because the quantity of 3D volume data is much larger, accurately aligning two 3D images greatly increases the computational load. This forces us to find methods that achieve good precision while satisfying time constraints. In this paper, a digitally reconstructed radiograph (DRR) method is proposed to solve these problems: ray tracing through the two 3D images digitally reconstructs two 2D images, and aligning the 2D data realizes the alignment of the 3D data.
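The DRR shortcut, comparing 2D projections instead of full 3D volumes, can be sketched with parallel rays. This is a toy: real DRR generation traces diverging rays from the X-ray source, and the function names and similarity measure here are our illustrative assumptions:

```python
import numpy as np

def drr(volume, axis=0):
    """Toy digitally reconstructed radiograph: parallel-ray line integrals
    of a 3D attenuation volume, i.e. a sum along one axis."""
    return volume.sum(axis=axis)

def drr_similarity(vol_a, vol_b, axis=0):
    """Score a candidate 3D alignment by normalized cross-correlation of
    2D DRRs instead of comparing the full volumes."""
    a, b = drr(vol_a, axis).ravel(), drr(vol_b, axis).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```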
FluoroSim: A Visual Problem-Solving Environment for Fluorescence Microscopy
Quammen, Cory W.; Richardson, Alvin C.; Haase, Julian; Harrison, Benjamin D.; Taylor, Russell M.; Bloom, Kerry S.
2010-01-01
Fluorescence microscopy provides a powerful method for localization of structures in biological specimens. However, aspects of the image formation process such as noise and blur from the microscope's point-spread function combine to produce an unintuitive image transformation on the true structure of the fluorescing molecules in the specimen, hindering qualitative and quantitative analysis of even simple structures in unprocessed images. We introduce FluoroSim, an interactive fluorescence microscope simulator that can be used to train scientists who use fluorescence microscopy to understand the artifacts that arise from the image formation process, to determine the appropriateness of fluorescence microscopy as an imaging modality in an experiment, and to test and refine hypotheses of model specimens by comparing the output of the simulator to experimental data. FluoroSim renders synthetic fluorescence images from arbitrary geometric models represented as triangle meshes. We describe three rendering algorithms on graphics processing units for computing the convolution of the specimen model with a microscope's point-spread function and report on their performance. We also discuss several cases where the microscope simulator has been used to solve real problems in biology. PMID:20431698
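The core image-formation step, convolving the true fluorophore distribution with the microscope's point-spread function, can be sketched on the CPU; FluoroSim itself renders triangle meshes on GPUs, and the Gaussian PSF here is only a stand-in for a real, measured one:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def gaussian_psf(shape, sigma):
    """Centered Gaussian stand-in for a measured point-spread function."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))

def blur_with_psf(specimen, psf):
    """Convolve a ground-truth fluorophore map with the PSF (circular FFT
    convolution; pad in real use to avoid wrap-around artifacts)."""
    psf = psf / psf.sum()  # conserve total intensity
    return np.real(ifft2(fft2(specimen) * fft2(np.fft.ifftshift(psf))))
```

Noise models (Poisson shot noise, sensor read noise) would be applied to the blurred image to complete the simulation.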
Prospective motion correction of high-resolution magnetic resonance imaging data in children.
Brown, Timothy T; Kuperman, Joshua M; Erhart, Matthew; White, Nathan S; Roddey, J Cooper; Shankaranarayanan, Ajit; Han, Eric T; Rettmann, Dan; Dale, Anders M
2010-10-15
Motion artifacts pose significant problems for the acquisition and analysis of high-resolution magnetic resonance imaging data. These artifacts can be particularly severe when studying pediatric populations, where greater patient movement reduces the ability to clearly view and reliably measure anatomy. In this study, we tested the effectiveness of a new prospective motion correction technique, called PROMO, as applied to making neuroanatomical measures in typically developing school-age children. This method attempts to address the problem of motion at its source by keeping the measurement coordinate system fixed with respect to the subject throughout image acquisition. The technique also performs automatic rescanning of images that were acquired during intervals of particularly severe motion. Unlike many previous techniques, this approach adjusts for both in-plane and through-plane movement, greatly reducing image artifacts without the need for additional equipment. Results show that the use of PROMO notably enhances subjective image quality, reduces errors in Freesurfer cortical surface reconstructions, and significantly improves the subcortical volumetric segmentation of brain structures. Further applications of PROMO for clinical and cognitive neuroscience are discussed. Copyright 2010 Elsevier Inc. All rights reserved.
Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography
Sidky, Emil Y.; Kraemer, David N.; Roth, Erin G.; Ullberg, Christer; Reiser, Ingrid S.; Pan, Xiaochuan
2014-01-01
One of the challenges for iterative image reconstruction (IIR) is that such algorithms solve an imaging model implicitly, requiring a complete representation of the scanned subject within the viewing domain of the scanner. This requirement can place a prohibitively high computational burden for IIR applied to x-ray computed tomography (CT), especially when high-resolution tomographic volumes are required. In this work, we aim to develop an IIR algorithm for direct region-of-interest (ROI) image reconstruction. The proposed class of IIR algorithms is based on an optimization problem that incorporates a data fidelity term, which compares a derivative of the estimated data with the available projection data. In order to characterize this optimization problem, we apply it to computer-simulated two-dimensional fan-beam CT data, using both ideal noiseless data and realistic data containing a level of noise comparable to that of the breast CT application. The proposed method is demonstrated for both complete field-of-view and ROI imaging. To demonstrate the potential utility of the proposed ROI imaging method, it is applied to actual CT scanner data. PMID:25685824
Mesquita, D P; Dias, O; Amaral, A L; Ferreira, E C
2009-04-01
In recent years, a great deal of attention has been focused on research into activated sludge processes, where the solid-liquid separation phase is frequently considered of critical importance due to the different problems that severely affect the compaction and settling of the sludge. Bearing that in mind, in this work image analysis routines were developed in the Matlab environment, allowing the identification and characterization of microbial aggregates and protruding filaments in eight different wastewater treatment plants over a combined period of 2 years. The monitoring of the activated sludge contents allowed the detection of bulking events, showing that the developed image analysis methodology is adequate for continuous examination of the morphological changes in microbial aggregates and subsequent estimation of the sludge volume index. The obtained results thus establish the developed image analysis methodology as a feasible method for the continuous monitoring of activated sludge systems and the identification of disturbances.
Fast Image Texture Classification Using Decision Trees
NASA Technical Reports Server (NTRS)
Thompson, David R.
2011-01-01
Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation-hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
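The "integral image" trick underlying those features is easy to illustrate: after one cumulative-sum pass, any rectangular box sum costs four lookups, which is what makes integer-only texture features cheap. A generic sketch, not the flight code:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1+1, x0:x1+1] in O(1) from the integral image."""
    s = ii[y1, x1]
    if y0 > 0:
        s -= ii[y0 - 1, x1]
    if x0 > 0:
        s -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        s += ii[y0 - 1, x0 - 1]
    return s
```

Differences of such box sums at several scales around a pixel form the integer feature vector fed to the decision tree.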
Analysis of Non Local Image Denoising Methods
NASA Astrophysics Data System (ADS)
Pardo, Álvaro
Image denoising is probably one of the most studied problems in the image processing community. Recently a new paradigm of non-local denoising was introduced. The Non-Local Means method proposed by Buades, Morel and Coll attracted the attention of other researchers, who proposed improvements and modifications to the original method. In this work we analyze those methods, trying to understand their properties while connecting them to segmentation based on spectral graph properties. We also propose some improvements to automatically estimate the parameters used in these methods.
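For reference, the Non-Local Means baseline that these methods build on can be written directly. A deliberately slow, didactic sketch; the parameter values are illustrative:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Non-Local Means (Buades, Morel and Coll), a slow didactic sketch.

    Each pixel becomes a weighted mean of search-window pixels, weighted by
    patch similarity: w = exp(-||patch_i - patch_j||^2 / h^2).
    """
    img = np.asarray(img, dtype=float)
    pad = patch // 2
    P = np.pad(img, pad, mode='reflect')  # padded for patch extraction
    H, W = img.shape
    sr = search // 2
    out = np.empty_like(img)
    for y in range(H):
        for x in range(W):
            ref = P[y:y + patch, x:x + patch]  # patch centered at (y, x)
            acc = wsum = 0.0
            for v in range(max(0, y - sr), min(H, y + sr + 1)):
                for u in range(max(0, x - sr), min(W, x + sr + 1)):
                    cand = P[v:v + patch, u:u + patch]
                    d2 = float(np.mean((ref - cand) ** 2))
                    w = np.exp(-d2 / (h * h))
                    acc += w * img[v, u]
                    wsum += w
            out[y, x] = acc / wsum
    return out
```

The weight matrix built here is exactly the affinity matrix used in spectral-graph segmentation, which is the connection the abstract alludes to.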
2006-05-01
Histogram analysis of the CT10, CT20, and CT30 images showed a maximum difference of 250 Hounsfield units, with only 0.01% of voxels affected. CBCT artifacts can cause Hounsfield unit calibration problems; while this does not seem to influence the image registration, the use of CBCT for dose calculation requires caution.
Images and realities of alcohol.
Sulkunen, P
1998-09-01
The paper discusses the relationship between the images of alcohol and society, on one hand, and the reality of drinking and drinking problems on the other hand, from the point of view of policy-relevant research. Images of alcohol influence policy but they also depend on the social and cultural environment of policy-making. The epidemiological total consumption theory of alcohol-related problems is used as an example. The theory is embedded in the modern welfare state's ideals and its policy relevance presupposes that these ideals--universalism, consequentialism and public planning--are respected. If the approach today receives less attention by policy-makers than its empirical validity merits, it may be due to an erosion of these ideals, not of the epidemiological model itself. Images of alcohol influence behaviour and drinking problems but they also articulate the social context in which the images are constructed. This paper demonstrates the point, applying Lévi-Straussian cultural theory to an analysis of a recent beer advertisement addressed to young people. The advertisement not only reflects the images associated with youthful drinking but also the ambiguous status of youth as non-adults in contemporary society. The author stresses that for social and cultural research alcohol is a two-way window, to look at society through alcohol and to look at alcohol through society. Both directions are necessary for policy-relevant research.
NASA Astrophysics Data System (ADS)
Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.
2018-04-01
Accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and for small/medium-scale mapping of large areas abroad or with large volumes of imagery. In this paper, considering the geometric characteristics of optical satellite images, and building on RFM least-squares block adjustment and the Alternating Direction Method of Multipliers (ADMM), a widely used optimization method for constrained problems, we propose a GCP-independent block adjustment method for large-scale domestic high-resolution optical satellite imagery, GISIBA (GCP-Independent Satellite Imagery Block Adjustment), which is easy to parallelize and highly efficient. In this method, virtual "average" control points are built to solve the rank-defect problem and to support qualitative and quantitative analysis in block adjustment without ground control. The test results show that the horizontal and vertical accuracies for multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaicking problem between adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments with GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of the developed procedure are presented and studied.
Video coding for next-generation surveillance systems
NASA Astrophysics Data System (ADS)
Klasen, Lena M.; Fahlander, Olov
1997-02-01
Video is used as a recording medium in surveillance systems and, increasingly, by the Swedish Police Force. Methods for analyzing video using an image processing system have recently been introduced at the Swedish National Laboratory of Forensic Science, and new methods are the focus of a research project at Linkoping University, Image Coding Group. The accuracy of these forensic investigations often depends on the quality of the video recordings, and one of the major problems when analyzing videos from crime scenes is the poor quality of the recordings. Enhancing poor image quality might add manipulative or subjective effects and does not seem to be the right way to obtain reliable analysis results. The surveillance systems in use today are mainly based on video techniques, VHS or S-VHS, and the weakest link is the video cassette recorder (VCR). Multiplexers that select one of many camera outputs for recording are another problem, as they often filter the video signal, and recording is limited to only one of the available cameras connected to the VCR. A way around the problem of poor recording is to record all camera outputs digitally and simultaneously. It is also very important to build such a system bearing in mind that image processing analysis methods are becoming more important as a complement to the human eye. Using one or more cameras yields a large amount of data, and the need for data compression is more than obvious. Crime scenes often involve persons or moving objects, and the available coding techniques are more or less useful. Our goal is to propose a possible system that is the best compromise with respect to what needs to be recorded, movements in the recorded scene, loss of information, resolution, etc., to secure efficient recording of the crime and enable forensic analysis. The preventive effect of having a well-functioning surveillance system and well-established image analysis methods is not to be neglected.
Aspects of this next generation of digital surveillance systems are discussed in this paper.
Recent developments in imaging system assessment methodology, FROC analysis and the search model.
Chakraborty, Dev P
2011-08-21
A frequent problem in imaging is assessing whether a new imaging system is an improvement over an existing standard. Observer performance methods, in particular the receiver operating characteristic (ROC) paradigm, are widely used in this context. In ROC analysis lesion location information is not used and consequently scoring ambiguities can arise in tasks, such as nodule detection, involving finding localized lesions. This paper reviews progress in the free-response ROC (FROC) paradigm in which the observer marks and rates suspicious regions and the location information is used to determine whether lesions were correctly localized. Reviewed are FROC data analysis, a search-model for simulating FROC data, predictions of the model and a method for estimating the parameters. The search model parameters are physically meaningful quantities that can guide system optimization.
Improving the image discontinuous problem by using color temperature mapping method
NASA Astrophysics Data System (ADS)
Jeng, Wei-De; Mang, Ou-Yang; Lai, Chien-Cheng; Wu, Hsien-Ming
2011-09-01
This article focuses on image processing for the radial imaging capsule endoscope (RICE). In our experiment, RICE images of porcine intestine were captured, but the images were blurred because RICE suffers from aberration in the image center, and poor illumination uniformity further degrades image quality. Image processing can be used to mitigate these problems. Images captured at different times are therefore connected using a Pearson correlation coefficient algorithm, and a color temperature mapping method is used to resolve the discontinuity problem in the connection region.
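The correlation-based stitching step can be sketched as follows: slide candidate overlaps between consecutive frames and keep the one whose overlapping strips correlate best. The function names and the simple column-overlap model are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between two flattened image strips."""
    a = a.ravel().astype(float)
    b = b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_overlap(left, right, min_olap=5):
    """Pick the column overlap between two frames that maximizes the
    Pearson correlation of the overlapping strips."""
    best, best_r = min_olap, -2.0
    for k in range(min_olap, min(left.shape[1], right.shape[1])):
        r = pearson(left[:, -k:], right[:, :k])
        if r > best_r:
            best_r, best = r, k
    return best, best_r
```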
FOREWORD: Imaging from coupled physics Imaging from coupled physics
NASA Astrophysics Data System (ADS)
Arridge, S. R.; Scherzer, O.
2012-08-01
Due to the increased demand for tomographic imaging in applied sciences, such as medicine, biology and nondestructive testing, the field has expanded enormously in the past few decades. The common task of tomography is to image the interior of three-dimensional objects from indirect measurement data. In practical realizations, the specimen to be investigated is exposed to probing fields. A variety of these, such as acoustic, electromagnetic or thermal radiation, amongst others, have been advocated in the literature. In all cases, the field is measured after interaction with internal mechanisms of attenuation and/or scattering and images are reconstructed using inverse problems techniques, representing spatial maps of the parameters of these perturbation mechanisms. In the majority of these imaging modalities, either the useful contrast is of low resolution, or high resolution images are obtained with limited contrast or quantitative discriminatory ability. In the last decade, an alternative phenomenon has become of increasing interest, although its origins can be traced much further back; see Widlak and Scherzer [1], Kuchment and Steinhauer [2], and Seo et al [3] in this issue for references to this historical context. Rather than using the same physical field for probing and measurement, with a contrast caused by perturbation, these methods exploit the generation of a secondary physical field which can be measured in addition to, or without, the often dominating effect of the primary probe field. These techniques are variously called 'hybrid imaging' or 'multimodality imaging'. However, in this article and special section we suggest the term 'imaging from coupled physics' (ICP) to more clearly distinguish this methodology from those that simply measure several types of data simultaneously. The key idea is that contrast induced by one type of radiation is read by another kind, so that both high resolution and high contrast are obtained simultaneously.
As with all new imaging techniques, the discovery of physical principles which can be exploited to yield information about internal physical parameters has led, hand in hand, to the development of new mathematical methods for solving the corresponding inverse problems. In many cases, the coupled physics imaging problems are expected to be much better posed than conventional tomographic imaging problems. However, at the current state of research there remain a variety of open mathematical questions regarding uniqueness, existence and stability. In this special section we have invited contributions from many of the leading researchers in the mathematics, physics and engineering of these techniques to survey and to elaborate on these novel methodologies, and to present recent research directions. Historically, one of the best studied strongly ill-posed problems in the mathematical literature is the Calderón problem occurring in conductivity imaging, and one of the first examples of ICP is the use of magnetic resonance imaging (MRI) to detect internal current distributions. This topic, known as current density imaging (CDI) or magnetic resonance electrical impedance tomography (MREIT), and its related technique of magnetic resonance electrical property tomography (MREPT), is reviewed by Widlak and Scherzer [1], and also by Seo et al [3], where experimental studies are documented. Mathematically, several of the ICP problems can be analyzed in terms of the 'p-Laplacian', which raises interesting research questions in non-linear partial differential equations. One approach to analyzing and solving the CDI problem, using characteristics of the 1-Laplacian, is discussed by Tamasan and Veras [4]. Moreover, Moradifam et al [5] present a novel iterative algorithm based on Bregman splitting for solving the CDI problem.
Probably the most active research areas in ICP are related to acoustic detection, because most of these techniques rely on the photoacoustic effect, wherein absorption of an ultrashort pulse of light, having propagated by multiple scattering some distance into a diffusing medium, generates a source of acoustic waves that are propagated with hyperbolic stability to a surface detector. A complementary problem is that of 'acousto-optics', which uses focused acoustic waves as the primary field to induce perturbations in optical or electrical properties, which are thus spatially localized. Similar physical principles apply to implement ultrasound modulated electrical impedance tomography (UMEIT). These topics are included in the review of Widlak and Scherzer [1], and Kuchment and Steinhauer [2] offer a general analysis of their structure in terms of pseudo-differential operators. 'Acousto-electrical' imaging is analyzed as a particular case by Ammari et al [6]. In the paper by Tarvainen et al [7], the photo-acoustic problem is studied with respect to different models of the light propagation step. In the paper by Monard and Bal [8], a more general problem for the reconstruction of an anisotropic diffusion parameter from power density measurements is considered; here, issues of uniqueness with respect to the number of measurements are of great importance. A distinctive, and highly important, example of ICP is that of elastography, in which the primary field is low-frequency ultrasound giving rise to mechanical displacement that reveals information on the local elasticity tensor. As in all the methods discussed in this section, this contrast mechanism is measured internally with a secondary technique, which in this case can be either MRI or ultrasound. McLaughlin et al [9] give a comprehensive analysis of this problem. Our intention for this special section was to provide both an overview and a snapshot of current work in this exciting area.
The increasing interest, and the involvement of cross-disciplinary groups of scientists, will continue to lead to the rapid expansion and important new results in this novel area of imaging science.
References
[1] Widlak T and Scherzer O 2012 Inverse Problems 28 084008
[2] Kuchment P and Steinhauer D 2012 Inverse Problems 28 084007
[3] Seo J K, Kim D-H, Lee J, Kwon O I, Sajib S Z K and Woo E J 2012 Inverse Problems 28 084002
[4] Tamasan A and Veras J 2012 Inverse Problems 28 084006
[5] Moradifam A, Nachman A and Timonov A 2012 Inverse Problems 28 084003
[6] Ammari H, Garnier J and Jing W 2012 Inverse Problems 28 084005
[7] Tarvainen T, Cox B T, Kaipio J P and Arridge S R 2012 Inverse Problems 28 084009
[8] Monard F and Bal G 2012 Inverse Problems 28 084001
[9] McLaughlin J, Oberai A and Yoon J R 2012 Inverse Problems 28 084004
Super-Resolution Reconstruction of Remote Sensing Images Using Multifractal Analysis
Hu, Mao-Gui; Wang, Jin-Feng; Ge, Yong
2009-01-01
Satellite remote sensing (RS) is an important contributor to Earth observation, providing various kinds of imagery every day, but low spatial resolution remains a critical bottleneck in many applications, restricting higher spatial resolution analysis (e.g., intra-urban). In this study, a multifractal-based super-resolution reconstruction method is proposed to alleviate this problem. The multifractal characteristic is common in nature. The self-similarity or self-affinity present in the image is useful for estimating details at scales larger and smaller than the original. We first look for the presence of multifractal characteristics in the images. Then we estimate the parameters of the information transfer function and the noise of the low-resolution image. Finally, a noise-free, spatial-resolution-enhanced image is generated by a fractal coding-based denoising and downscaling method. The empirical case shows that the reconstructed super-resolution image performs well in detail enhancement. This method is useful not only for remote sensing of the Earth, but also for other images with multifractal characteristics. PMID:22291530
Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis
NASA Astrophysics Data System (ADS)
Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song
2018-01-01
To resolve the problem of slow computation speed and low matching accuracy in image registration, a new image registration algorithm based on parallax constraint and clustering analysis is proposed. Firstly, the Harris corner detection algorithm is used to extract the feature points of the two images. Secondly, the Normalized Cross Correlation (NCC) function is used to perform approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes feature point pairs with obvious errors from the approximate matching process. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final matching result, realizing fast and accurate image registration. The experimental results show that the proposed image registration algorithm can improve the accuracy of image matching while ensuring the real-time performance of the algorithm.
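The parallax-constraint filtering step described in this abstract can be made concrete. The sketch below is an illustrative numpy-only stand-in, not the authors' implementation: candidate matches are reduced to displacement (parallax) vectors, a plain two-cluster K-means separates consistent matches from gross mismatches, and only the dominant cluster is kept. The synthetic match data and the choice of two clusters are assumptions for the example.

```python
import numpy as np

def filter_matches_by_parallax(pts_a, pts_b, iters=20):
    """Keep only candidate matches whose displacement (parallax) vector
    falls in the dominant of two K-means clusters; the remainder are
    treated as obvious mismatches, mimicking the preprocessing step."""
    disp = pts_b - pts_a                                  # parallax vectors
    norms = np.linalg.norm(disp, axis=1)
    # Deterministic init: one center at the smallest, one at the largest parallax.
    centers = disp[[norms.argmin(), norms.argmax()]].astype(float)
    for _ in range(iters):                                # plain Lloyd iterations
        dist = np.linalg.norm(disp[:, None] - centers[None], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = disp[labels == j].mean(axis=0)
    return labels == np.bincount(labels, minlength=2).argmax()

# Synthetic candidate matches: 20 pairs sharing one true parallax, 3 gross outliers.
rng = np.random.default_rng(1)
a = rng.uniform(0, 100, (23, 2))
b = a + np.array([5.0, -3.0])          # true parallax shared by inliers
b[:3] += rng.uniform(40, 60, (3, 2))   # three mismatched pairs
mask = filter_matches_by_parallax(a, b)
```

In a full pipeline the surviving pairs in `mask` would then go to RANSAC for the final geometric fit.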
Image guidance improves localization of sonographically occult colorectal liver metastases
NASA Astrophysics Data System (ADS)
Leung, Universe; Simpson, Amber L.; Adams, Lauryn B.; Jarnagin, William R.; Miga, Michael I.; Kingham, T. Peter
2015-03-01
Assessing the therapeutic benefit of surgical navigation systems is a challenging problem in image-guided surgery. The exact clinical indications for patients who may benefit from these systems are not always clear, particularly for abdominal surgery, where image-guidance systems have failed to take hold in the same way as in orthopedic and neurosurgical applications. We report interim analysis of a prospective clinical trial for localizing small colorectal liver metastases using the Explorer system (Path Finder Technologies, Nashville, TN). Colorectal liver metastases are small lesions that can be difficult to identify with conventional intraoperative ultrasound due to echogenicity changes in the liver resulting from chemotherapy and other preoperative treatments. Interim analysis with eighteen patients shows that 9 of 15 (60%) of these occult lesions could be detected with image guidance. Image guidance changed intraoperative management in 3 (17%) cases. These results suggest that image guidance is a promising tool for localization of small occult liver metastases and that the indications for image-guided surgery are expanding.
Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images
NASA Technical Reports Server (NTRS)
Fischer, Bernd
2004-01-01
Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. The AutoBayes schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible.
This is a major advantage over other statistical data analysis systems, which use numerical approximations even in cases where closed-form solutions exist. AutoBayes is implemented in Prolog and comprises approximately 75,000 lines of code. In this paper, we take one typical scientific data analysis problem (analyzing planetary nebulae images taken by the Hubble Space Telescope) and show how AutoBayes can be used to automate the implementation of the necessary analysis programs. We initially follow the analysis described by Knuth and Hajian [KH02] and use AutoBayes to derive code for the published models. We show the details of the code derivation process, including the symbolic computations and automatic integration of library procedures, and compare the results of the automatically generated and manually implemented code. We then go beyond the original analysis and use AutoBayes to derive code for a simple image segmentation procedure based on a mixture model, which can be used to automate a manual preprocessing step. Finally, we combine the original approach with the simple segmentation, which yields a more detailed analysis. This also demonstrates that AutoBayes makes it easy to combine different aspects of data analysis.
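As a concrete illustration of the kind of mixture-model segmentation referred to above, the sketch below fits a two-component 1-D Gaussian mixture to pixel intensities with a hand-written EM loop and returns hard labels. It is a minimal stand-in on synthetic intensities, not AutoBayes-generated code.

```python
import numpy as np

def em_gmm_1d(x, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM and return the
    component means, variances and hard per-pixel labels, i.e. a simple
    intensity-based segmentation."""
    mu = np.array([x.min(), x.max()])          # spread-out initialisation
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel.
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means and variances.
        n = r.sum(axis=0) + 1e-12              # guard against empty components
        pi, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return mu, var, r.argmax(axis=1)

# Synthetic intensities: a dark background mode and a bright object mode.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(20, 3, 500), rng.normal(120, 8, 300)])
mu, var, labels = em_gmm_1d(x)
```

With well-separated modes the hard labels reproduce the background/object split almost exactly.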
NASA Astrophysics Data System (ADS)
Roy, Ankita
2007-12-01
This research in hyperspectral imaging involves recognizing targets through spatial and spectral matching and spectral un-mixing of data, ranging from remote sensing to medical imaging kernels for clinical studies, based on hyperspectral data sets generated using the VFTHSI (Visible Fourier Transform Hyperspectral Imager), whose high-resolution Si detector makes the analysis achievable. The research may be broadly classified into (I) a Physically Motivated Correlation Formalism (PMCF), which places both spatial and spectral data on an equivalent mathematical footing in the context of a specific kernel; (II) an application in RF plasma species detection during the carbon nanotube growth process; and (III) hyperspectral analysis for assessing the density and distribution of retinopathies such as age-related macular degeneration (ARMD), with error estimation enabling early recognition of ARMD, which is treated as an ill-conditioned inverse imaging problem. The broad statistical scopes of this research are twofold: target recognition problems and spectral unmixing problems. Experimental and computational analysis of hyperspectral data sets is presented, based on the principle of a Sagnac interferometer calibrated to obtain high SNR levels. PMCF computes spectral/spatial/cross moments, answers the question of how optimally the entire hypercube should be sampled, and finds precisely how many spatial-spectral pixels are required for a particular target recognition. Spectral analysis of RF plasma radicals, typically methane and argon plasmas, using the VFTHSI has enabled better process monitoring during growth of vertically aligned multi-walled carbon nanotubes by instant registration of chemical composition or density changes over time, which is key since a significant correlation can be found between plasma state and structural properties.
A vital focus of this dissertation is medical hyperspectral imaging applied to retinopathies such as age-related macular degeneration, with targets taken with a fundus imager akin to the VFTHSI. Detection of the constituent components in the diseased hyper-pigmentation area is also computed. The target or reflectance matrix is treated as a highly ill-conditioned spectral un-mixing problem, to which methodologies such as inverse techniques, principal component analysis (PCA) and receiver operating characteristic (ROC) curves are applied for precise spectral recognition of the infected area. The region containing ARMD was easily distinguishable from the spectral mesh plots over the entire band-pass area. Once the location was detected, the PMCF coefficients were calculated by cross-correlating a target of normal oxygenated retina with the deoxygenated one. The ROCs generated using PMCF show 30% higher detection probability, with improved accuracy, than ROCs based on the Spectral Angle Mapper (SAM). By spectral unmixing methods, the important endmembers/carotenoids of the MD pigment were found to be xanthophyll and lutein, while beta-carotene, which showed a negative correlation in the unconstrained inverse problem, is a supplement given to ARMD patients to prevent the disease and does not occur in the eye. The literature also shows degeneration of meso-zeaxanthin. Ophthalmologists may assert the presence of ARMD and commence the diagnosis process if the xanthophyll pigment has degenerated by 89.9% and the lutein has decayed by almost 80%, as deduced computationally. This research takes precise investigation to the next level in the continuing process of improved clinical findings by correlating the microanatomy of the diseased fovea, and shows promise for early detection of this disease.
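The unconstrained linear unmixing step mentioned above (the setting in which an endmember such as beta-carotene can come out with a negative abundance) can be sketched as an ordinary least-squares problem over an endmember matrix. The Gaussian-shaped endmember spectra below are synthetic placeholders, not measured carotenoid signatures.

```python
import numpy as np

# Hypothetical endmember spectra (columns of E): Gaussian-shaped stand-ins
# for measured pigment signatures such as xanthophyll or lutein.
wl = np.linspace(400, 700, 60)                          # wavelength grid, nm
E = np.stack([np.exp(-0.5 * ((wl - c) / 40.0) ** 2)
              for c in (450.0, 550.0, 650.0)], axis=1)

true_a = np.array([0.6, 0.3, 0.1])                      # true abundances
rng = np.random.default_rng(0)
y = E @ true_a + 0.001 * rng.normal(size=len(wl))       # noisy mixed spectrum

# Unconstrained linear unmixing: minimise ||E a - y||_2 over abundances a.
a_hat, *_ = np.linalg.lstsq(E, y, rcond=None)
residual = np.linalg.norm(E @ a_hat - y)
```

Because the fit is unconstrained, nothing forces `a_hat` to be non-negative; constrained variants (e.g. non-negative least squares) change that behaviour.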
NASA Astrophysics Data System (ADS)
Stork, David G.; Furuichi, Yasuo
2011-03-01
David Hockney has argued that the right hand of the disciple, thrust to the rear in Caravaggio's Supper at Emmaus (1606), is anomalously large as a result of the artist refocusing a putative secret lens-based optical projector and tracing the image it projected onto his canvas. We show through rigorous optical analysis that to achieve such an anomalously large hand image, Caravaggio would have needed to make extremely large, conspicuous and implausible alterations to his studio setup, moving both his purported lens and his canvas nearly two meters between "exposing" the disciple's left hand and then his right hand. Such major disruptions to his studio would have impeded (not aided) Caravaggio in his work. Our optical analysis quantifies these problems, and our computer graphics reconstruction of Caravaggio's studio illustrates them. We conclude that Caravaggio did not use optical projections in the way claimed by Hockney, but instead most likely set the sizes of these hands "by eye" for artistic reasons.
Quantitative histogram analysis of images
NASA Astrophysics Data System (ADS)
Holub, Oliver; Ferreira, Sérgio T.
2006-11-01
A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: the analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western blot. The possibilities and limitations of this kind of analysis are discussed.
Program summary
Title of program: HAWGC
Catalogue identifier: ADXG_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computers: Mobile Intel Pentium III, AMD Duron
Installations: No installation necessary; executable file together with necessary files for the LabVIEW Run-time engine
Operating systems or monitors under which the program has been tested: Windows ME/2000/XP
Programming language used: LabVIEW 7.0
Memory required to execute with typical data: ~16 MB for starting and ~160 MB used for loading of an image
No. of bits in a word: 32
No. of processors used: 1
Has the code been vectorized or parallelized?: No
No. of lines in distributed program, including test data, etc.: 138,946
No. of bytes in distributed program, including test data, etc.: 15,166,675
Distribution format: tar.gz
Nature of physical problem: Quantification of image data (e.g., for discrimination of molecular species in gels or fluorescent molecular probes in cell cultures) requires proprietary or complex software packages, which might not include the relevant statistical parameters or make the analysis of multiple images a tedious procedure for the general user.
Method of solution: Tool for conversion of an RGB bitmap image into a luminance-linear image and extraction of the luminance histogram, probability distribution, and statistical parameters (average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram, and median of the probability distribution), with possible selection of a region of interest (ROI) and lower and upper threshold levels.
Restrictions on the complexity of the problem: Does not incorporate application-specific functions (e.g., morphometric analysis).
Typical running time: Seconds (depending on image size and processor speed)
Unusual features of the program: None
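The statistical parameters this program reports can be reproduced in a few lines. The sketch below is a numpy stand-in for the LabVIEW routine (not the program itself), assuming an 8-bit greyscale image and the usual moment-based definitions of skewness and excess kurtosis, plus the lower/upper threshold window the abstract describes.

```python
import numpy as np

def histogram_stats(img, lo=0, hi=255):
    """Luminance statistics of the kind reported by the program: mean,
    std, variance, min/max, mode, skewness, kurtosis and median, computed
    over the pixels inside the [lo, hi] threshold window."""
    v = img[(img >= lo) & (img <= hi)].astype(float)
    mean, std = v.mean(), v.std()
    m3 = ((v - mean) ** 3).mean()                 # third central moment
    m4 = ((v - mean) ** 4).mean()                 # fourth central moment
    counts = np.bincount(v.astype(int), minlength=256)
    return {
        "mean": mean, "std": std, "var": v.var(),
        "min": v.min(), "max": v.max(),
        "mode": counts.argmax(),                  # most frequent grey level
        "skewness": m3 / std ** 3,                # third standardized moment
        "kurtosis": m4 / std ** 4 - 3.0,          # excess kurtosis
        "median": np.median(v),
    }

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))              # synthetic 8-bit image
stats = histogram_stats(img)
```

For a uniform grey-level distribution the skewness is near zero and the excess kurtosis near -1.2, which makes a quick sanity check.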
Bifurcations at the dawn of Modern Science
NASA Astrophysics Data System (ADS)
Coullet, Pierre
2012-11-01
In this article we review two classical bifurcation problems: the instability of an axisymmetric floating body, studied by Archimedes 2300 years ago, and the multiplicity of images observed in curved mirrors, a problem solved by Alhazen in the 11th century. We first introduce these problems while trying to keep some of the flavor of the original analysis, and then show how they can be reduced to a question of extremal distances studied by Apollonius.
Erol Sarigul; A. Lynn Abbott; Daniel L. Schmoldt; Philip A. Araman
2005-01-01
This paper describes recent progress in the analysis of computed tomography (CT) images of hardwood logs. The long-term goal of the work is to develop a system that is capable of autonomous (or semiautonomous) detection of internal defects, so that log breakdown decisions can be optimized based on defect locations. The problem is difficult because wood exhibits large...
Automatic rule generation for high-level vision
NASA Technical Reports Server (NTRS)
Rhee, Frank Chung-Hoon; Krishnapuram, Raghu
1992-01-01
A new fuzzy-set-based technique developed for decision making is discussed: a method to generate fuzzy decision rules automatically for image analysis. The paper proposes generating rule-based approaches to problems such as autonomous navigation and image understanding automatically from training data. The proposed method is also capable of filtering irrelevant features and criteria out of the rules.
Techniques for automatic large scale change analysis of temporal multispectral imagery
NASA Astrophysics Data System (ADS)
Mercovich, Ryan A.
Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery, and require analyst input to provide anything more than a simple pixel level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm based on a set of metrics that performs a large area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large area images and provide useful information to an analyst about small regions that have undergone specific types of change while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys among others. 
By utilizing a feature based approach founded on applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring in large area and high resolution image sequences. The change detection and analysis algorithm developed could be adapted to many potential image change scenarios to perform automatic large scale analysis of change.
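The pixel-level change map that this dissertation treats as the inadequate baseline is easy to make concrete: a per-pixel spectral change magnitude with a global threshold. The sketch below uses synthetic data and is only the baseline, not the author's feature-based metric set.

```python
import numpy as np

def change_magnitude(img_t0, img_t1):
    """Per-pixel spectral change magnitude between two co-registered
    multispectral images of shape (rows, cols, bands)."""
    diff = img_t1.astype(float) - img_t0.astype(float)
    return np.linalg.norm(diff, axis=2)           # Euclidean distance per pixel

rng = np.random.default_rng(0)
t0 = rng.uniform(0, 1, (32, 32, 4))               # synthetic 4-band scene
t1 = t0.copy()
t1[10:14, 10:14] += 0.8                           # simulate a changed region
mag = change_magnitude(t0, t1)
changed = mag > mag.mean() + 3 * mag.std()        # simple global threshold
```

At high resolution over large areas such a binary map quickly becomes unusable on its own, which is exactly the motivation for the type-of-change analysis described above.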
Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C
2018-01-01
Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Axelrod, Noel; Radko, Anna; Lewis, Aaron; Ben-Yosef, Nissim
2004-04-10
A methodology is described for phase restoration of an object function from differential interference contrast (DIC) images. The methodology involves collecting a set of DIC images in the same plane with different bias retardation between the two illuminating light components produced by a Wollaston prism. These images, together with one conventional bright-field image, allow for reduction of the phase deconvolution restoration problem from a highly complex nonlinear mathematical formulation to a set of linear equations that can be applied to resolve the phase for images with a relatively large number of pixels. Additionally, under certain conditions, an on-line atomic force imaging system that does not interfere with the standard DIC illumination modes resolves uncertainties in large topographical variations that generally lead to a basic problem in DIC imaging, i.e., phase unwrapping. Furthermore, the availability of confocal detection allows for a three-dimensional reconstruction with high accuracy of the refractive-index measurement of the object that is to be imaged. This has been applied to reconstruction of the refractive index of an arrayed waveguide in a region in which a defect in the sample is present. The results of this paper highlight the synergism of far-field microscopies integrated with scanned probe microscopies and restoration algorithms for phase reconstruction.
NASA Astrophysics Data System (ADS)
Shi, Cheng; Liu, Fang; Li, Ling-Ling; Hao, Hong-Xia
2014-01-01
The goal of pan-sharpening is to obtain an image with higher spatial resolution and better spectral information. However, the resolution of the pan-sharpened image is seriously affected by thin clouds. For a single image, filtering algorithms are widely used to remove clouds. These methods can remove clouds effectively, but the loss of detail in the cloud-removed image is also serious. To solve this problem, a pan-sharpening algorithm that removes thin clouds via mask dodging and the nonsubsampled shift-invariant shearlet transform (NSST) is proposed. For low-resolution multispectral (LR MS) and high-resolution panchromatic images with thin clouds, a mask dodging method is used to remove the clouds. For the cloud-removed LR MS image, an adaptive principal component analysis transform is proposed to balance the spectral information and spatial resolution in the pan-sharpened image. Since the cloud removal process causes loss of detail, a weight matrix is designed to enhance the details of the cloud regions in the pan-sharpening process while noncloud regions remain unchanged; the details of the image are obtained by the NSST. Experimental results, both visual and in terms of evaluation metrics, demonstrate that the proposed method keeps better spectral information and spatial resolution, especially for images with thin clouds.
An Evaluation of Feature Learning Methods for High Resolution Image Classification
NASA Astrophysics Data System (ADS)
Tokarczyk, P.; Montoya, J.; Schindler, K.
2012-07-01
Automatic image classification is one of the fundamental problems of remote sensing research. The classification problem is even more challenging in high-resolution images of urban areas, where the objects are small and heterogeneous. Two questions arise, namely which features to extract from the raw sensor data to capture the local radiometry and image structure at each pixel or segment, and which classification method to apply to the feature vectors. While classifiers are nowadays well understood, selecting the right features remains a largely empirical process. Here we concentrate on the features. Several methods are evaluated which allow one to learn suitable features from unlabelled image data by analysing the image statistics. In a comparative study, we evaluate unsupervised feature learning with different linear and non-linear learning methods, including principal component analysis (PCA) and deep belief networks (DBN). We also compare these automatically learned features with popular choices of ad-hoc features including raw intensity values, standard combinations like the NDVI, a few PCA channels, and texture filters. The comparison is done in a unified framework using the same images, the target classes, reference data and a Random Forest classifier.
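Two of the hand-crafted baselines this comparison mentions (the NDVI and a few PCA channels) are easy to make concrete. The sketch below assumes a (rows, cols, bands) array with red in band 2 and near-infrared in band 3; actual band ordering is sensor-specific, and the PCA here is the plain unsupervised per-pixel variant, not a stand-in for the learned features evaluated in the paper.

```python
import numpy as np

def ndvi(img):
    """NDVI = (NIR - red) / (NIR + red); assumes band 3 is NIR and
    band 2 is red (ordering is dataset-specific)."""
    nir, red = img[..., 3].astype(float), img[..., 2].astype(float)
    return (nir - red) / (nir + red + 1e-9)       # eps avoids division by zero

def pca_channels(img, k=2):
    """Project each pixel's spectrum onto the top-k principal components
    learned from the image's own statistics (unsupervised)."""
    x = img.reshape(-1, img.shape[-1]).astype(float)
    x -= x.mean(axis=0)                           # center the spectra
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return (x @ vt[:k].T).reshape(*img.shape[:2], k)

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (16, 16, 4))              # synthetic 4-band image
v = ndvi(img)
pcs = pca_channels(img)
```

Both features are per-pixel maps and can be stacked with raw intensities to form the feature vectors fed to a classifier such as a Random Forest.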
Multi-Atlas Segmentation using Partially Annotated Data: Methods and Annotation Strategies.
Koch, Lisa M; Rajchl, Martin; Bai, Wenjia; Baumgartner, Christian F; Tong, Tong; Passerat-Palmbach, Jonathan; Aljabar, Paul; Rueckert, Daniel
2017-08-22
Multi-atlas segmentation is a widely used tool in medical image analysis, providing robust and accurate results by learning from annotated atlas datasets. However, the availability of fully annotated atlas images for training is limited due to the time required for the labelling task. Segmentation methods requiring only a proportion of each atlas image to be labelled could therefore reduce the workload on expert raters tasked with annotating atlas images. To address this issue, we first re-examine the labelling problem common in many existing approaches and formulate its solution in terms of a Markov Random Field energy minimisation problem on a graph connecting atlases and the target image. This provides a unifying framework for multi-atlas segmentation. We then show how modifications in the graph configuration of the proposed framework enable the use of partially annotated atlas images and investigate different partial annotation strategies. The proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets for hippocampal and cardiac segmentation. Experiments were performed aimed at (1) recreating existing segmentation techniques with the proposed framework and (2) demonstrating the potential of employing sparsely annotated atlas data for multi-atlas segmentation.
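The Markov Random Field energy minimisation this abstract formulates can be illustrated on a toy graph. The sketch below solves the MAP labelling of a simple chain MRF (unary data costs plus a Potts smoothness term) exactly by dynamic programming; it is a didactic stand-in with invented costs, not the paper's atlas-target graph or its solver.

```python
import numpy as np

def chain_mrf_map(unary, beta):
    """Exact MAP labelling of a chain MRF by dynamic programming:
    minimise sum_i unary[i, l_i] + beta * sum_i [l_i != l_{i+1}]."""
    n, L = unary.shape
    cost = unary[0].copy()
    back = np.zeros((n, L), dtype=int)
    for i in range(1, n):
        pair = beta * (1 - np.eye(L))             # Potts pairwise term
        total = cost[:, None] + pair              # rows: previous label
        back[i] = total.argmin(axis=0)
        cost = total.min(axis=0) + unary[i]
    labels = np.zeros(n, dtype=int)
    labels[-1] = cost.argmin()
    for i in range(n - 1, 0, -1):                 # backtrack the optimum
        labels[i - 1] = back[i, labels[i]]
    return labels

# Invented unary costs (low = preferred): the middle node weakly favours
# label 1, but the smoothness term overrides this isolated flip.
unary = np.array([[0.1, 0.9], [0.2, 0.8], [0.6, 0.4], [0.1, 0.9], [0.2, 0.8]])
labels = chain_mrf_map(unary, beta=0.5)
```

With `beta=0` the result degenerates to the per-node unary minimum, which makes the effect of the pairwise term easy to see.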
Models of formation and some algorithms of hyperspectral image processing
NASA Astrophysics Data System (ADS)
Achmetov, R. N.; Stratilatov, N. R.; Yudakov, A. A.; Vezenov, V. I.; Eremeev, V. V.
2014-12-01
Algorithms and information technologies for processing Earth hyperspectral imagery are presented, and several new approaches are discussed. Peculiar properties of processing hyperspectral imagery, such as a multifold reduction in signal-to-noise ratio, atmospheric distortions, access to the spectral characteristics of every image point, and the high dimensionality of the data, were studied. Different measures of similarity between individual hyperspectral image points and the effect of additive uncorrelated noise on these measures were analyzed. It was shown that these measures are substantially affected by noise, and a new measure free of this disadvantage was proposed. The problem of detecting observed scene object boundaries, based on comparing the spectral characteristics of image points, is considered. It was shown that contours are detected much better when spectral characteristics are used instead of energy brightness. A statistical approach to the correction of atmospheric distortions was proposed, which makes it possible to solve the stated problem based on analysis of the distorted image itself, in contrast to analytical multiparametric models. Several algorithms for integrating spectral zonal images with data from other survey systems, which make it possible to image observed scene objects with higher quality, are considered. Quality characteristics of hyperspectral data processing were proposed and studied.
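One standard similarity measure of the kind discussed, the spectral angle, can be sketched together with the additive-noise effect the authors analyze. The spectra below are synthetic, and this is a generic illustration rather than the new noise-robust measure the abstract proposes.

```python
import numpy as np

def spectral_angle(s1, s2):
    """Angle (radians) between two spectra; invariant to overall
    brightness scaling, unlike Euclidean distance on energy brightness."""
    c = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return np.arccos(np.clip(c, -1.0, 1.0))       # clip guards rounding

rng = np.random.default_rng(0)
s = rng.uniform(0.2, 1.0, 100)                    # reference spectrum
brighter = 2.5 * s                                # same material, brighter pixel
noisy = s + rng.normal(0.0, 0.05, 100)            # additive uncorrelated noise
angle_scaled = spectral_angle(s, brighter)        # ~0: scaling is ignored
angle_noisy = spectral_angle(s, noisy)            # grows with the noise level
```

The second comparison makes the paper's point tangible: even modest uncorrelated noise shifts the measured similarity between otherwise identical spectra.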
NASA Astrophysics Data System (ADS)
Ahn, Chi Young; Jeon, Kiwan; Park, Won-Kwang
2015-06-01
This study analyzes the well-known MUltiple SIgnal Classification (MUSIC) algorithm to identify the unknown support of a thin penetrable electromagnetic inhomogeneity from scattered field data collected within the so-called multi-static response matrix in limited-view inverse scattering problems. The mathematical theory of MUSIC has been partially established, e.g., in the full-view problem, for an unknown target of dielectric contrast or a perfectly conducting crack with the Dirichlet boundary condition (Transverse Magnetic, TM polarization). Hence, we perform further research to analyze the MUSIC-type imaging functional and to explain some well-known but theoretically unexplained phenomena. For this purpose, we establish a relationship between the MUSIC imaging functional and an infinite series of integer-order Bessel functions of the first kind. This relationship is based on the rigorous asymptotic expansion formula in the presence of a thin inhomogeneity with a smooth supporting curve. Various numerical simulation results are presented in order to support the identified structure of MUSIC. Although a priori information about the target is needed, we suggest a minimal condition on the range of incident and observation directions for applying MUSIC in the limited-view problem.
NASA Technical Reports Server (NTRS)
2002-01-01
A dengue fever outbreak has plagued Rio de Janeiro since January 2002. Dengue fever is a mosquito-borne disease. The elimination of standing water, which is a breeding ground for the mosquitoes, is a primary defense against mosquito-borne diseases like dengue. Removing such water remains a difficult problem in many urban regions. The International Space Station astronauts took this image (ISS001-ESC-5418) of Rio de Janeiro in December 2000. Image provided by the Earth Sciences and Image Analysis Laboratory at Johnson Space Center (JSC). Additional images taken by astronauts and cosmonauts can be viewed at the NASA-JSC Gateway to Astronaut Photography of Earth.
Fundamental limits of reconstruction-based superresolution algorithms under local translation.
Lin, Zhouchen; Shum, Heung-Yeung
2004-01-01
Superresolution is a technique that can produce images of a higher resolution than that of the originally captured ones. Nevertheless, improvement in resolution using such a technique is very limited in practice. This makes it significant to study the problem: "Do fundamental limits exist for superresolution?" In this paper, we focus on a major class of superresolution algorithms, called the reconstruction-based algorithms, which compute high-resolution images by simulating the image formation process. Assuming local translation among low-resolution images, this paper is the first attempt to determine the explicit limits of reconstruction-based algorithms, under both real and synthetic conditions. Based on the perturbation theory of linear systems, we obtain the superresolution limits from the conditioning analysis of the coefficient matrix. Moreover, we determine the number of low-resolution images that are sufficient to achieve the limit. Both real and synthetic experiments are carried out to verify our analysis.
NASA Technical Reports Server (NTRS)
Kiefer, R. W. (Principal Investigator)
1979-01-01
Research on the application of remote sensing to problems of water resources was concentrated on sediments and associated nonpoint source pollutants in lakes. Further transfer of the technology of remote sensing and the refinement of equipment and programs for thermal scanning and the digital analysis of images were also addressed.
Stanescu, T; Jaffray, D
2018-05-25
Magnetic resonance imaging is expected to play a more important role in radiation therapy given the recent developments in MR-guided technologies. MR images need to consistently show high spatial accuracy to facilitate RT-specific tasks such as treatment planning and in-room guidance. The present study investigates a new harmonic analysis method for the characterization of complex 3D fields derived from MR images affected by system-related distortions. An interior Dirichlet problem based on solving the Laplace equation with boundary conditions (BCs) was formulated for the case of a 3D distortion field. The second-order boundary value problem (BVP) was solved using a finite element method (FEM) for several quadratic geometries, i.e., sphere, cylinder, cuboid, D-shaped, and ellipsoid. To stress-test and generalize the method, the BVP was also solved for more complex surfaces such as a Reuleaux 9-gon and the MR imaging volume of a scanner featuring a high degree of surface irregularities. The BCs were constructed from reference experimental data collected with a linearity phantom featuring a volumetric grid structure. The method was validated by comparing the harmonic analysis results with the corresponding experimental reference fields. The harmonic fields were found to be in good agreement with the baseline experimental data for all geometries investigated. In the case of quadratic domains, the percentage of sampling points with residual values larger than 1 mm was 0.5% and 0.2% for the axial components and the vector magnitude, respectively. For the general case of a domain defined by the available MR imaging field of view, the reference data showed a peak distortion of about 12 mm, and 79% of the sampling points carried a distortion magnitude larger than 1 mm (tolerance intrinsic to the experimental data).
The upper limits of the residual values after comparison with the harmonic fields showed max and mean of 1.4 mm and 0.25 mm, respectively, with only 1.5% of sampling points exceeding 1 mm. A novel harmonic analysis approach relying on finite element methods was introduced and validated for multiple volumes with surface shape functions ranging from simple to highly complex. Since a boundary value problem is solved, the method requires input data from only the surface of the desired domain of interest. It is believed that the harmonic method will facilitate (a) the design of new phantoms dedicated to the quantification of MR image distortions in large volumes and (b) an integrative approach of combining multiple imaging tests specific to radiotherapy into a single test object for routine imaging quality control. This article is protected by copyright. All rights reserved.
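The core idea, that a harmonic field inside a volume is fully determined by its boundary values, can be sketched with finite differences instead of FEM (a deliberate simplification; the grid size and the test field are hypothetical):

```python
import numpy as np

# Interior Dirichlet sketch: given only the boundary values of a harmonic
# "distortion" component on a cube, recover the interior by iterating the
# discrete Laplace equation (Jacobi sweeps on a 7-point stencil).

n = 10
x, y, z = np.meshgrid(*[np.linspace(0, 1, n)] * 3, indexing="ij")
exact = x**2 - y**2                     # a harmonic field, used as ground truth

u = np.zeros((n, n, n))
u[0, :, :], u[-1, :, :] = exact[0, :, :], exact[-1, :, :]   # Dirichlet BCs
u[:, 0, :], u[:, -1, :] = exact[:, 0, :], exact[:, -1, :]   # on all six faces
u[:, :, 0], u[:, :, -1] = exact[:, :, 0], exact[:, :, -1]

for _ in range(500):                    # relax the interior toward harmonicity
    u[1:-1, 1:-1, 1:-1] = (u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1] +
                           u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1] +
                           u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2]) / 6.0

print(np.max(np.abs(u - exact)))        # interior recovered from surface data
```

Because x² − y² is harmonic for the discrete stencil as well, the iteration converges to the exact interior field, which mirrors why surface-only measurements suffice in the paper's method.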
Connected component analysis of review-SEM images for sub-10nm node process verification
NASA Astrophysics Data System (ADS)
Halder, Sandip; Leray, Philippe; Sah, Kaushik; Cross, Andrew; Parisi, Paolo
2017-03-01
Analysis of hotspots is becoming more and more critical as we scale from node to node. To define true process windows at sub-14 nm technology nodes, defect inspections are often included to weed out design weak spots (often referred to as hotspots). Defect inspection at sub-28 nm nodes is a two-pass process: defect locations identified by optical inspection tools need to be reviewed by review-SEMs to understand exactly which feature is failing in the region flagged by the optical tool. The images grabbed by the review-SEM tool are used for classification but rarely for quantification. The goal of this paper is to see whether the thousands of existing review-SEM images can be used for quantification and further analysis. More specifically, we address the SEM quantification problem with connected component analysis.
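A minimal sketch of connected component analysis on a binarised image (toy data; a production flow would first denoise and threshold the grayscale SEM frames):

```python
import numpy as np
from collections import deque

# Label 4-connected components in a binary image by breadth-first search,
# then count the components and their pixel areas.

def label_components(img):
    labels = np.zeros_like(img, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(img)):
        if labels[seed]:
            continue                       # already part of a component
        current += 1
        queue = deque([seed])
        labels[seed] = current
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connectivity
                nr, nc = r + dr, c + dc
                if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                        and img[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    return labels, current

img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1],
                [1, 0, 0, 0]], dtype=bool)
labels, n = label_components(img)
print(n)                                   # number of distinct features
print(np.bincount(labels.ravel())[1:])     # feature sizes in pixels
```

Per-component areas and shapes are exactly the kind of quantities that turn review-SEM images from classification aids into quantitative measurements.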
Quantitative nanoscopy: Tackling sampling limitations in (S)TEM imaging of polymers and composites.
Gnanasekaran, Karthikeyan; Snel, Roderick; de With, Gijsbertus; Friedrich, Heiner
2016-01-01
Sampling limitations in electron microscopy raise the question of whether the analysis of a bulk material is representative, especially when analyzing hierarchical morphologies that extend over multiple length scales. We tackled this problem by automatically acquiring a large series of partially overlapping (S)TEM images with sufficient resolution, subsequently stitched together to generate a large-area map, using an in-house developed acquisition toolbox (TU/e Acquisition ToolBox) and stitching module (TU/e Stitcher). In addition, we show that quantitative image analysis of the large-scale maps provides representative information that can be related to the synthesis and process conditions of hierarchical materials, which moves electron microscopy analysis towards becoming a bulk characterization tool. We demonstrate the power of such an analysis by examining two different multi-phase materials that are structured over multiple length scales. Copyright © 2015 Elsevier B.V. All rights reserved.
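One building block of such large-area mapping, estimating the offset between partially overlapping tiles, can be sketched with phase correlation (a generic stand-in, since the TU/e tools themselves are not described in detail here):

```python
import numpy as np

# Estimate the translation between two overlapping image tiles by phase
# correlation: keep only the phase of the cross-power spectrum, whose
# inverse FFT peaks at the relative shift.

def phase_correlation(a, b):
    """Return the cyclic shift (dy, dx) with a == np.roll(b, shift, axis=(0, 1))."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12           # normalise: phase only
    corr = np.fft.ifft2(cross).real
    return np.unravel_index(corr.argmax(), corr.shape)

rng = np.random.default_rng(1)
tile_a = rng.random((64, 64))                       # synthetic "tile"
tile_b = np.roll(tile_a, (5, 12), axis=(0, 1))      # circular shift stands in
print(phase_correlation(tile_b, tile_a))            # for a tile overlap
```

In a real stitcher the estimated offsets for all tile pairs are reconciled globally before compositing the large-area map.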
Automated Dermoscopy Image Analysis of Pigmented Skin Lesions
Baldi, Alfonso; Quartulli, Marco; Murace, Raffaele; Dragonetti, Emanuele; Manganaro, Mario; Guerra, Oscar; Bizzi, Stefano
2010-01-01
Dermoscopy (dermatoscopy, epiluminescence microscopy) is a non-invasive diagnostic technique for the in vivo observation of pigmented skin lesions (PSLs), allowing a better visualization of surface and subsurface structures (from the epidermis to the papillary dermis). This diagnostic tool permits the recognition of morphologic structures not visible by the naked eye, thus opening a new dimension in the analysis of the clinical morphologic features of PSLs. In order to reduce the learning-curve of non-expert clinicians and to mitigate problems inherent in the reliability and reproducibility of the diagnostic criteria used in pattern analysis, several indicative methods based on diagnostic algorithms have been introduced in the last few years. Recently, numerous systems designed to provide computer-aided analysis of digital images obtained by dermoscopy have been reported in the literature. The goal of this article is to review these systems, focusing on the most recent approaches based on content-based image retrieval systems (CBIR). PMID:24281070
Research of flaw image collecting and processing technology based on multi-baseline stereo imaging
NASA Astrophysics Data System (ADS)
Yao, Yong; Zhao, Jiguang; Pang, Xiaoyan
2008-03-01
Aiming at the practical demands of gun bore flaw image collection, such as accurate optical design, complex algorithms and precise technical requirements, the design framework of a 3-D image collecting and processing system based on multi-baseline stereo imaging is presented in this paper. The system mainly comprises a computer, an electrical control box, a stepping motor and a CCD camera, and it realizes image collection, stereo matching, 3-D information reconstruction and post-processing. Theoretical analysis and experimental results show that images collected by this system are precise and that it can efficiently resolve the matching ambiguity produced by uniform or repeated textures. At the same time, the system offers faster measurement speed and higher measurement precision.
Wen, Yintang; Zhang, Zhenda; Zhang, Yuyan; Sun, Dongtao
2017-01-01
A coplanar electrode array sensor is established for imaging in composite-material adhesive-layer defect detection. The sensor is based on the capacitive edge effect, which makes the capacitance data considerably weak and susceptible to environmental noise. The inverse problem of coplanar array electrical capacitance tomography (C-ECT) is ill-conditioned, so a small error in the capacitance data can seriously affect the quality of reconstructed images. In order to achieve a stable image reconstruction process, a redundancy analysis method for capacitance data is proposed, based on contribution rate and anti-interference capability. According to the redundancy analysis, the capacitance data are divided into valid and invalid data. When the image is reconstructed from valid data, the sensitivity matrix needs to be changed accordingly. In order to evaluate the effectiveness of the sensitivity map, singular value decomposition (SVD) is used. Finally, the two-dimensional (2D) and three-dimensional (3D) images are reconstructed by the Tikhonov regularization method. Comparison with images reconstructed from the raw capacitance data shows that the stability of the image reconstruction process can be improved without degrading the quality of the reconstructed images. As a result, much of the invalid data need not be collected, and the data acquisition time can be reduced. PMID:29295537
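The regularised reconstruction step can be sketched generically (a synthetic sensitivity matrix and synthetic data, not the paper's C-ECT sensor model): Tikhonov regularisation stabilises an ill-conditioned least-squares problem whose singular values decay rapidly.

```python
import numpy as np

# Generic Tikhonov sketch: an ill-conditioned "sensitivity matrix" with
# rapidly decaying singular values, weak noisy "capacitance" data, and a
# comparison of unregularised vs regularised reconstructions.

rng = np.random.default_rng(2)
u_ = np.linalg.qr(rng.normal(size=(30, 30)))[0][:, :10]
v_ = np.linalg.qr(rng.normal(size=(20, 20)))[0][:, :10]
s_ = np.logspace(0, -6, 10)                   # rapidly decaying spectrum
sens = u_ @ np.diag(s_) @ v_.T                # 30 measurements, 20 unknowns

x_true = rng.normal(size=20)
cap = sens @ x_true + 1e-3 * rng.normal(size=30)   # weak, noisy data

def tikhonov(a, b, lam):
    # Closed-form minimiser of ||a x - b||^2 + lam ||x||^2.
    return np.linalg.solve(a.T @ a + lam * np.eye(a.shape[1]), a.T @ b)

naive = np.linalg.lstsq(sens, cap, rcond=None)[0]  # unregularised solution
reg = tikhonov(sens, cap, 1e-3)
print(np.linalg.norm(naive - x_true), np.linalg.norm(reg - x_true))
```

The unregularised solution divides by the tiny singular values and amplifies the measurement noise enormously, while the Tikhonov solution damps those directions at the price of a small bias.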
[Object Separation from Medical X-Ray Images Based on ICA].
Li, Yan; Yu, Chun-yu; Miao, Ya-jian; Fei, Bin; Zhuang, Feng-yun
2015-03-01
X-ray medical images can reveal diseased tissue and have important reference value for medical diagnosis. To address the problems of noise, poor gradation and overlapping, aliased organs in traditional X-ray images, this paper proposes a method that combines multi-spectral X-ray imaging with an independent component analysis (ICA) algorithm to separate the target object. First, image de-noising preprocessing based on independent component analysis and sparse code shrinkage ensures the accuracy of target extraction. Then, according to the proportion of each organ in the images, the aliasing thickness matrix of each pixel is isolated. Finally, independent component analysis obtains the convergence matrix to reconstruct the target object using blind separation theory. In the ICA algorithm, it was found that when the number of iterations exceeds 40, the target objects separate successfully according to a subjective evaluation standard, and when the scale amplitudes lie in the [25, 45] interval, the target images have high contrast and little distortion. The three-dimensional plot of peak signal-to-noise ratio (PSNR) shows that the number of iterations and the amplitude both have a considerable influence on image quality. The contrast and edge information of the experimental images are best with 85 iterations and an amplitude of 35 in the ICA algorithm.
Applications of High-speed motion analysis system on Solid Rocket Motor (SRM)
NASA Astrophysics Data System (ADS)
Liu, Yang; He, Guo-qiang; Li, Jiang; Liu, Pei-jin; Chen, Jian
2007-01-01
The high-speed motion analysis system records images at up to 12,000 fps and analyses them with its image processing system. The system stores data and images directly in electronic memory, which is convenient for management and analysis. Combined with an X-ray radiography system, it forms a high-speed real-time X-ray radiography system that can diagnose and measure dynamic, high-speed processes inside opaque structures. Image processing software was developed to improve the quality of the original images and extract more precise information. Typical applications of the high-speed motion analysis system on solid rocket motors (SRM) are introduced in this paper. Studies of the anomalous combustion of solid propellant grains with defects, real-time measurement of insulator erosion, the explosive incision process of the motor, the structure and wave character of the plume during ignition and flameout, the end burning of solid propellant, and the flame front and airplane-missile compatibility during missile launch were carried out using the high-speed motion analysis system, and significant results were achieved. For these applications, key problems that degraded image quality, such as motor vibration, power-supply instability, geometric distortion and noise disturbance, were solved, and the image processing software improved the ability to measure image characteristics. The experimental results show that the system is a powerful facility for studying instantaneous, high-speed processes in solid rocket motors, and its capability continues to grow with the development of image processing techniques.
Extraction of Extended Small-Scale Objects in Digital Images
NASA Astrophysics Data System (ADS)
Volkov, V. Y.
2015-05-01
The problem of detecting and localizing extended small-scale objects of different shapes arises in observation systems that use SAR, infrared, lidar and television cameras. An intense non-stationary background is the main difficulty for processing; other challenges are low image quality, blobs and blurred boundaries, and SAR images additionally suffer from serious intrinsic speckle noise. The background statistics are not normal, showing evident skewness and heavy tails in the probability density, and are therefore hard to model. The extraction of small-scale objects is solved here on the basis of directional filtering, adaptive thresholding and morphological analysis. A new kind of mask is used that is open-ended at one side, so that the ends of line segments of unknown length can be extracted. An advanced method of dynamic adaptive threshold setting is investigated, based on the extraction of isolated fragments after thresholding. A hierarchy of isolated fragments in the binary image, comprising small-scale objects of different shape, size and orientation, is proposed for the analysis of segmentation results. The method extracts isolated fragments in the binary image and counts the points in these fragments; the number of points in extracted fragments, normalized to the total number of points for a given threshold, is used as the extraction effectiveness for these fragments. The new method for adaptive threshold setting and control maximizes this extraction effectiveness. It has optimality properties for object extraction in a normal noise field and shows effective results for real SAR images.
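The adaptive threshold rule can be sketched under an assumption about what counts as an "extracted" isolated fragment (here, simply horizontal runs of above-threshold pixels of at least `min_len` pixels; the paper's fragment hierarchy is richer): the effectiveness is the share of above-threshold points lying in such fragments, and the threshold maximising it is selected.

```python
import numpy as np

# Sweep candidate thresholds; for each, binarise a toy image containing one
# bright extended line segment in Gaussian noise, keep pixels in runs of at
# least min_len, and score the threshold by the kept fraction.

def runs(row):
    # Start index and length of each run of ones in a 1-D boolean array.
    padded = np.concatenate(([0], row.astype(np.int8), [0]))
    edges = np.flatnonzero(np.diff(padded))
    return zip(edges[::2], edges[1::2] - edges[::2])

def effectiveness(img, thr, min_len=3):
    binary = img > thr
    total = binary.sum()
    if total == 0:
        return 0.0
    kept = sum(length for row in binary
               for _, length in runs(row) if length >= min_len)
    return kept / total

rng = np.random.default_rng(3)
img = rng.normal(0.0, 1.0, (32, 32))          # noise background
img[10, 5:20] += 4.0                          # a bright extended line segment
thresholds = np.linspace(0.5, 3.5, 13)
scores = [effectiveness(img, t) for t in thresholds]
best = thresholds[int(np.argmax(scores))]
print(best, max(scores))
```

Low thresholds admit many noise points scattered in short runs, so the selected threshold is pushed up until the extended segment dominates the extracted fragments.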
Moeller, Scott J.; Crocker, Jennifer
2009-01-01
Coping motives for drinking initiate alcohol-related problems. Interpersonal goals, which powerfully influence affect, could provide a starting point for this relation. Here we tested effects of self-image goals (which aim to construct and defend desired self-views) and compassionate goals (which aim to support others) on heavy-episodic drinking and alcohol-related problems. Undergraduate drinkers (N=258) completed measures of self-image and compassionate goals in academics and friendships, coping and enhancement drinking motives, heavy-episodic drinking, and alcohol-related problems in a cross-sectional design. As predicted, self-image goals, but not compassionate goals, positively related to alcohol-related problems. Path models showed that self-image goals relate to coping motives, but not enhancement motives; coping motives then relate to heavy-episodic drinking, which in turn relate to alcohol-related problems. Self-image goals remained a significant predictor in the final model, which accounted for 34% of the variance in alcohol-related problems. These findings indicate that self-image goals contribute to alcohol-related problems in college students both independently and through coping motives. Interventions can center on reducing self-image goals and their attendant negative affect. PMID:19586150
NASA Astrophysics Data System (ADS)
Georgiou, Harris
2009-10-01
Medical Informatics and the application of modern signal processing in the assistance of the diagnostic process in medical imaging is one of the more recent and active research areas today. This thesis addresses a variety of issues related to the general problem of medical image analysis, specifically in mammography, and presents a series of algorithms and design approaches for all the intermediate levels of a modern system for computer-aided diagnosis (CAD). The diagnostic problem is analyzed with a systematic approach, first defining the imaging characteristics and features that are relevant to probable pathology in mammograms. Next, these features are quantified and fused into new, integrated radiological systems that exhibit embedded digital signal processing, in order to improve the final result and minimize the radiological dose for the patient. In a higher level, special algorithms are designed for detecting and encoding these clinically interesting imaging features, in order to be used as input to advanced pattern classifiers and machine learning models. Finally, these approaches are extended in multi-classifier models under the scope of Game Theory and optimum collective decision, in order to produce efficient solutions for combining classifiers with minimum computational costs for advanced diagnostic systems. The material covered in this thesis is related to a total of 18 published papers, 6 in scientific journals and 12 in international conferences.
Deep learning methods for CT image-domain metal artifact reduction
NASA Astrophysics Data System (ADS)
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Shan, Hongming; Claus, Bernhard; Jin, Yannan; De Man, Bruno; Wang, Ge
2017-09-01
Artifacts resulting from metal objects have been a persistent problem in CT images over the last four decades. A common approach to overcome their effects is to replace corrupt projection data with values synthesized from an interpolation scheme or by reprojection of a prior image. State-of-the-art correction methods, such as the interpolation- and normalization-based algorithm NMAR, often do not produce clinically satisfactory results. Residual image artifacts remain in challenging cases and even new artifacts can be introduced by the interpolation scheme. Metal artifacts continue to be a major impediment, particularly in radiation and proton therapy planning as well as orthopedic imaging. A new solution to the long-standing metal artifact reduction (MAR) problem is deep learning, which has been successfully applied to medical image processing and analysis tasks. In this study, we combine a convolutional neural network (CNN) with the state-of-the-art NMAR algorithm to reduce metal streaks in critical image regions. Training data was synthesized from CT simulation scans of a phantom derived from real patient images. The CNN is able to map metal-corrupted images to artifact-free monoenergetic images to achieve additional correction on top of NMAR for improved image quality. Our results indicate that deep learning is a novel tool to address CT reconstruction challenges, and may enable more accurate tumor volume estimation for radiation therapy planning.
Analysis of x-ray tomography data of an extruded low density styrenic foam: an image analysis study
NASA Astrophysics Data System (ADS)
Lin, Jui-Ching; Heeschen, William
2016-10-01
Extruded styrenic foams are low density foams that are widely used for thermal insulation. It is difficult to precisely characterize the structure of the cells in low density foams by traditional cross-section viewing due to the frailty of the cell walls. X-ray computed tomography (CT) is a non-destructive, three dimensional structure characterization technique that has great potential for characterizing styrenic foams. Unfortunately, the intrinsic artifacts of the data and the artifacts generated during image reconstruction are often comparable in size and shape to the thin walls of the foam, making robust and reliable analysis of cell sizes challenging. We explored three different image processing approaches to clean up artifacts in the reconstructed images, thus allowing quantitative three dimensional determination of cell size in a low density styrenic foam: an intensity based approach, an intensity variance based approach, and a machine learning based approach, of which the machine learning image feature classification method performed best. Individual cells were segmented within the cleaned-up images, and the cell sizes were measured and compared. Although the collected data did not yield enough measurements for good statistics on cell size, this can be resolved by measuring multiple samples or increasing the imaging field of view.
Bruce, C V; Clinton, J; Gentleman, S M; Roberts, G W; Royston, M C
1992-04-01
We have undertaken a study of the distribution of the beta/A4 amyloid deposited in the cerebral cortex in Alzheimer's disease. Previous studies which have examined the differential distribution of amyloid in the cortex in order to determine the laminar pattern of cortical pathology have not proved conclusive. We have developed an alternative method for the solution of this problem. It involves the immunostaining of sections followed by computer-enhanced image analysis. A mathematical model is then used to describe both the amount and the pattern of amyloid across the cortex. This method is both accurate and reliable and also removes many of the problems concerning inter- and intra-rater variability in measurement. It will provide the basis for further quantitative studies on the differential distribution of amyloid in Alzheimer's disease and other cases of dementia where cerebral amyloidosis occurs.
Quantitative imaging of aggregated emulsions.
Penfold, Robert; Watson, Andrew D; Mackie, Alan R; Hibberd, David J
2006-02-28
Noise reduction, restoration, and segmentation methods are developed for the quantitative structural analysis in three dimensions of aggregated oil-in-water emulsion systems imaged by fluorescence confocal laser scanning microscopy. Mindful of typical industrial formulations, the methods are demonstrated for concentrated (30% volume fraction) and polydisperse emulsions. Following a regularized deconvolution step using an analytic optical transfer function and appropriate binary thresholding, novel application of the Euclidean distance map provides effective discrimination of closely clustered emulsion droplets with size variation over at least 1 order of magnitude. The a priori assumption of spherical nonintersecting objects provides crucial information to combat the ill-posed inverse problem presented by locating individual particles. Position coordinates and size estimates are recovered with sufficient precision to permit quantitative study of static geometrical features. In particular, aggregate morphology is characterized by a novel void distribution measure based on the generalized Apollonius problem. This is also compared with conventional Voronoi/Delauney analysis.
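The Euclidean distance map step can be sketched on a toy binary image (brute-force distances; real pipelines use an optimised distance transform, and the droplet geometry here is hypothetical):

```python
import numpy as np

# Distance map of a binary image: for each foreground pixel, the distance to
# the nearest background pixel. Local maxima then mark roughly one centre per
# droplet even when the droplets merge into a single connected blob.

def distance_map(binary):
    bg = np.argwhere(~binary)
    dist = np.zeros(binary.shape)
    for p in np.argwhere(binary):
        dist[tuple(p)] = np.sqrt(((bg - p) ** 2).sum(axis=1).min())
    return dist

yy, xx = np.mgrid[0:40, 0:40]
blob = (((yy - 20) ** 2 + (xx - 14) ** 2) <= 64) | \
       (((yy - 20) ** 2 + (xx - 27) ** 2) <= 64)   # two touching droplets
dist = distance_map(blob)

# Plateau-tolerant local maxima of the distance map.
peaks = [(r, c) for r, c in np.argwhere(dist > 0)
         if dist[r, c] >= dist[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2].max()]
print(peaks)
```

The two strongest maxima sit near the two droplet centres, which is what makes the distance map effective for discriminating closely clustered droplets.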
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.
Scharfe, Michael; Pielot, Rainer; Schreiber, Falk
2010-01-11
Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.
Random forest regression for magnetic resonance image synthesis.
Jog, Amod; Carass, Aaron; Roy, Snehashis; Pham, Dzung L; Prince, Jerry L
2017-01-01
By choosing different pulse sequences and their parameters, magnetic resonance imaging (MRI) can generate a large variety of tissue contrasts. This very flexibility, however, can yield inconsistencies with MRI acquisitions across datasets or scanning sessions that can in turn cause inconsistent automated image analysis. Although image synthesis of MR images has been shown to be helpful in addressing this problem, an inability to synthesize both T2-weighted brain images that include the skull and FLuid Attenuated Inversion Recovery (FLAIR) images has been reported. The method described herein, called REPLICA, addresses these limitations. REPLICA is a supervised random forest image synthesis approach that learns a nonlinear regression to predict intensities of alternate tissue contrasts given specific input tissue contrasts. Experimental results include direct image comparisons between synthetic and real images, results from image analysis tasks on both synthetic and real images, and comparison against other state-of-the-art image synthesis methods. REPLICA is computationally fast, and is shown to be comparable to other methods on tasks they are able to perform. Additionally, REPLICA has the capability to synthesize both T2-weighted images of the full head and FLAIR images, and to perform intensity standardization between different imaging datasets. Copyright © 2016 Elsevier B.V. All rights reserved.
Novel cooperative neural fusion algorithms for image restoration and image fusion.
Xia, Youshen; Kamel, Mohamed S
2007-02-01
To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms for image restoration, the proposed CNFR algorithm relaxes the need to estimate the optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that in different noise environments, the two proposed neural fusion algorithms can obtain a better image estimate than several well known image restoration and image fusion methods.
Digital analysis of wind tunnel imagery to measure fluid thickness
NASA Technical Reports Server (NTRS)
Easton, Roger L., Jr.; Enge, James
1992-01-01
Documented here are the procedure and results obtained from the application of digital image processing techniques to the problem of measuring the thickness of a deicing fluid on a model airfoil during simulated takeoffs. The fluid contained a fluorescent dye and the images were recorded under flash illumination on photographic film. The films were digitized and analyzed on a personal computer to obtain maps of the fluid thickness.
Exploratory analysis of TOF-SIMS data from biological surfaces
NASA Astrophysics Data System (ADS)
Vaidyanathan, Seetharaman; Fletcher, John S.; Henderson, Alex; Lockyer, Nicholas P.; Vickerman, John C.
2008-12-01
The application of multivariate analytical tools enables simplification of TOF-SIMS datasets so that useful information can be extracted from complex spectra and images, especially those that do not give readily interpretable results. There is however a challenge in understanding the outputs from such analyses. The problem is complicated when analysing images, given the additional dimensions in the dataset. Here we demonstrate how the application of simple pre-processing routines can enable the interpretation of TOF-SIMS spectra and images. For the spectral data, TOF-SIMS spectra used to discriminate bacterial isolates associated with urinary tract infection were studied. Using different criteria for picking peaks before carrying out PC-DFA enabled identification of the discriminatory information with greater certainty. For the image data, an air-dried salt stressed bacterial sample, discussed in another paper by us in this issue, was studied. Exploration of the image datasets with and without normalisation prior to multivariate analysis by PCA or MAF resulted in different regions of the image being highlighted by the techniques.
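The effect of normalisation before multivariate analysis can be sketched on synthetic spectra (an illustration of the general point, not the paper's data): dividing each spectrum by its total ion count removes per-pixel dose variation, so that PCA separates the underlying chemistries rather than the intensities.

```python
import numpy as np

# Synthetic TOF-SIMS-like data: 100 pixels, each a dose-scaled copy of one of
# two "chemistry" spectra plus noise. Normalise to total ion count, centre,
# and take the first principal component via SVD.

rng = np.random.default_rng(4)
base = rng.random((2, 50))                   # two chemistries, 50 peaks
labels = rng.integers(0, 2, 100)             # chemistry of each pixel
dose = rng.uniform(0.5, 2.0, 100)[:, None]   # per-pixel ion dose variation
spectra = dose * base[labels] + 0.01 * rng.random((100, 50))

norm = spectra / spectra.sum(axis=1, keepdims=True)   # total-ion-count norm
centred = norm - norm.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt[0]                     # first principal component scores
print(scores[labels == 0].mean(), scores[labels == 1].mean())
```

After normalisation, the PC1 scores of the two pixel populations fall on opposite sides of zero; without it, the dose variation would dominate the leading component.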
Chen, Qiang; Chen, Yunhao; Jiang, Weiguo
2016-07-30
In the field of multiple-feature Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features, and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple-feature OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experimental cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence and effectively avoid premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO achieves higher overall accuracy (84.17% and 83.59%) and Kappa coefficients (0.6771 and 0.6314) than the other algorithms. Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm do affect the algorithm. The comparison experiment results reveal that RMV is more suitable than other functions as the fitness function of the GPSO-based feature selection algorithm.
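A minimal binary particle swarm sketch of the feature-selection idea follows; it uses a plain binary PSO with a simplified stand-in for the RMV fitness, and omits the genetic operators that distinguish GPSO as well as the paper's exact fitness definition:

```python
import numpy as np

def rmv_fitness(X, mask):
    """Ratio of mean to variance over the selected feature columns
    (a simplified stand-in for the paper's RMV fitness)."""
    sel = X[:, mask.astype(bool)]
    if sel.size == 0:
        return -np.inf
    return np.abs(sel.mean()) / (sel.var() + 1e-12)

def binary_pso_select(X, n_particles=10, iters=30, seed=0):
    """Minimal binary PSO for feature selection: each particle is a 0/1
    mask; velocities pass through a sigmoid to give bit probabilities."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pos = rng.integers(0, 2, (n_particles, n_feat))
    vel = rng.normal(0, 1, (n_particles, n_feat))
    pbest, pfit = pos.copy(), np.array([rmv_fitness(X, p) for p in pos])
    gbest = pbest[pfit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n_feat))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = (rng.random((n_particles, n_feat)) < 1.0 / (1.0 + np.exp(-vel))).astype(int)
        fit = np.array([rmv_fitness(X, p) for p in pos])
        improved = fit > pfit
        pbest[improved], pfit[improved] = pos[improved], fit[improved]
        gbest = pbest[pfit.argmax()].copy()
    return gbest

X = np.random.default_rng(1).normal(size=(100, 8))  # 100 objects, 8 features
mask = binary_pso_select(X)                          # selected-feature mask
```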
Liu, Chanjuan; van Netten, Jaap J; van Baal, Jeff G; Bus, Sicco A; van der Heijden, Ferdi
2015-02-01
Early identification of diabetic foot complications and their precursors is essential in preventing their devastating consequences, such as foot infection and amputation. Frequent, automatic risk assessment by an intelligent telemedicine system might be feasible and cost effective. Infrared thermography is a promising modality for such a system. The temperature differences between corresponding areas on contralateral feet are the clinically significant parameters. This asymmetric analysis is hindered by (1) foot segmentation errors, especially when the foot temperature and the ambient temperature are comparable, and by (2) different shapes and sizes between contralateral feet due to deformities or minor amputations. To circumvent the first problem, we used a color image and a thermal image acquired synchronously. Foot regions, detected in the color image, were rigidly registered to the thermal image. This resulted in 97.8% ± 1.1% sensitivity and 98.4% ± 0.5% specificity over 76 high-risk diabetic patients with manual annotation as a reference. Nonrigid landmark-based registration with B-splines solved the second problem. Corresponding points in the two feet could be found regardless of the shapes and sizes of the feet. With that, the temperature difference of the left and right feet could be obtained. © 2015 Society of Photo-Optical Instrumentation Engineers (SPIE)
Video-based teleradiology for intraosseous lesions. A receiver operating characteristic analysis.
Tyndall, D A; Boyd, K S; Matteson, S R; Dove, S B
1995-11-01
Immediate access to off-site expert diagnostic consultants regarding unusual radiographic findings or radiographic quality assurance issues is a current problem for private dental practitioners. Teleradiology, a system for transmitting radiographic images, offers a potential solution. Although much research has been done to evaluate the feasibility and utilization of teleradiology systems in medical imaging, little research on dental applications has been performed. In this investigation, 47 panoramic films with an equal distribution of images with intraosseous jaw lesions and no disease were viewed by a panel of observers using teleradiology and conventional viewing methods. The teleradiology system consisted of an analog video-based system simulating remote radiographic consultation between a general dentist and a dental imaging specialist. Conventional viewing consisted of traditional viewbox methods. Observers were asked to identify the presence or absence of 24 intraosseous lesions and to determine their locations. No statistically significant differences between modalities or observers were identified at the 0.05 level. The results indicate that viewing intraosseous lesions on video-based panoramic images is equivalent to conventional light-box viewing.
NASA Technical Reports Server (NTRS)
Swanberg, Nancy A.; Matson, Pamela A.
1987-01-01
It was experimentally determined whether induced differences in forest canopy chemical composition can be detected using data from the Airborne Imaging Spectrometer (AIS). Treatments were applied to an even-aged forest of Douglas fir trees. Work to date has stressed wet chemical analysis of foliage samples and correction of AIS data. Plot treatments were successful in providing a range of foliar nitrogen concentrations. Much time was spent investigating and correcting problems with the raw AIS data. Initial problems with groups of drop-out lines in the AIS data were traced to the tape recorder and the tape drive. Custom adjustment of the tape drive led to recovery of most missing lines. Remaining individual drop-out lines were replaced using the average of adjacent lines. Application of a notch filter to the Fourier transform of the image in each band satisfactorily removed vertical striping. The aspect ratio was corrected by resampling the image in the line direction using nearest-neighbor interpolation.
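The two repairs described, replacing drop-out lines by the average of their neighbours and notch-filtering vertical striping in the Fourier domain, can be sketched as follows; this is a generic 2-D sketch, not the band-by-band AIS processing:

```python
import numpy as np

def repair_dropout_lines(img, bad_rows):
    """Replace each drop-out line with the average of its neighbouring
    lines (edge rows fall back on whatever neighbours exist)."""
    out = img.astype(float).copy()
    for r in bad_rows:
        lo, hi = max(r - 1, 0), min(r + 1, img.shape[0] - 1)
        out[r] = 0.5 * (out[lo] + out[hi])
    return out

def notch_filter_vertical_stripes(img, width=1):
    """Suppress vertical striping by zeroing the zero-vertical-frequency
    line of the 2-D FFT (except near DC), a simplified notch filter."""
    F = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    F[cy - width:cy + width + 1, :cx - width] = 0       # notch left of DC
    F[cy - width:cy + width + 1, cx + width + 1:] = 0   # notch right of DC
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

stripes = np.tile(np.array([0.0, 1.0]), (32, 16))  # period-2 vertical stripes
cleaned = notch_filter_vertical_stripes(stripes)   # stripe energy removed
```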
3D Volumetric Analysis of Fluid Inclusions Using Confocal Microscopy
NASA Astrophysics Data System (ADS)
Proussevitch, A.; Mulukutla, G.; Sahagian, D.; Bodnar, B.
2009-05-01
Fluid inclusions preserve valuable information regarding hydrothermal, metamorphic, and magmatic processes. The molar quantities of liquid and gaseous components in the inclusions can be estimated from their volumetric measurements at room temperatures combined with knowledge of the PVTX properties of the fluid and homogenization temperatures. Thus, accurate measurements of inclusion volumes and their two phase components are critical. One of the greatest advantages of Laser Scanning Confocal Microscopy (LSCM) in application to fluid inclusion analysis is that it is affordable for large numbers of samples, given the appropriate software analysis tools and methodology. Our present work is directed toward developing those tools and methods. For the last decade LSCM has been considered as a potential method for inclusion volume measurements. Nevertheless, adequate and accurate measurement by LSCM has not yet been successful for fluid inclusions containing non-fluorescing fluids due to many technical challenges in image analysis, despite the fact that the cost of collecting raw LSCM imagery has dramatically decreased in recent years. These problems mostly relate to image analysis methodology and software tools that are needed for pre-processing and image segmentation, which enable solid, liquid and gaseous components to be delineated. Other challenges involve image quality and contrast, which is controlled by fluorescence of the material (most aqueous fluid inclusions do not fluoresce at the appropriate laser wavelengths), material optical properties, and application of transmitted and/or reflected confocal illumination. In this work we have identified the key problems of image analysis and propose some potential solutions. For instance, we found that higher-contrast pseudo-confocal transmitted-light images could be overlaid with poor-contrast true-confocal reflected-light images within the same stack of z-ordered slices.
This approach allows one to narrow the interface boundaries between the phases before the application of segmentation routines. In turn, we found that an active contour segmentation technique works best for these types of geomaterials. The method was developed by adapting a medical software package implemented using the Insight Toolkit (ITK) set of algorithms developed for segmentation of anatomical structures. We have developed a manual analysis procedure with the potential of 2 micron resolution in 3D volume rendering that is specifically designed for application to fluid inclusion volume measurements.
Packham, B; Barnes, G; Dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D
2016-06-01
Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues and still maintain the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p < 0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that such an analysis is a viable approach for EIT images of neural activity.
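The non-parametric validation route can be sketched with a sign-flipping maximum-statistic permutation test, which controls family-wise error across voxels without random-field smoothness assumptions; the data shapes, statistic, and permutation count below are illustrative, not the study's analysis:

```python
import numpy as np

def max_stat_permutation(images, n_perm=1000, seed=0):
    """Family-wise-error correction by sign-flipping: the null distribution
    of the maximum voxel-wise t-like statistic over permutations gives a
    corrected p-value for every voxel."""
    rng = np.random.default_rng(seed)
    X = np.asarray(images, dtype=float)          # (subjects, voxels)

    def tstat(M):
        return M.mean(0) / (M.std(0, ddof=1) / np.sqrt(M.shape[0]) + 1e-12)

    observed = tstat(X)
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(X.shape[0], 1))
        null_max[i] = tstat(X * signs).max()     # max over all voxels
    # corrected p per voxel: fraction of permutation maxima that beat it
    p_corr = (null_max[None, :] >= observed[:, None]).mean(axis=1)
    return observed, p_corr

rng = np.random.default_rng(1)
group_images = rng.normal(size=(8, 50))          # 8 subjects, 50 "voxels"
group_images[:, 0] += 5.0                        # one truly active voxel
t_obs, p_corr = max_stat_permutation(group_images, n_perm=200, seed=2)
```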
PET/CT imaging of clear cell renal cell carcinoma with 124I labeled chimeric antibody
Bahnson, Eamonn E.; Murrey, Douglas A.; Mojzisik, Cathy M.; Hall, Nathan C.; Martinez-Suarez, Humberto J.; Knopp, Michael V.; Martin, Edward W.; Povoski, Stephen P.; Bahnson, Robert R.
2009-01-01
Clear cell renal cell carcinoma (ccRCC) presents problems for urologists in diagnosis, treatment selection, intraoperative surgical margin analysis, and long term monitoring. In this paper we describe the development of a radiolabeled antibody specific to ccRCC (124I-cG250) and its potential to help urologists manage each of these problems. We believe 124I-cG250, in conjunction with perioperative positron emission tomography/computed tomography (PET/CT) imaging and intraoperative handheld gamma probe use, has the potential to diagnose ccRCC, aid in determining a proper course of treatment (operative or otherwise), confirm complete resection of malignant tissue in real time, and monitor patients post-operatively. PMID:21789055
NASA Astrophysics Data System (ADS)
Ward, T.; Fleming, J. S.; Hoffmann, S. M. A.; Kemp, P. M.
2005-11-01
Simulation is useful in the validation of functional image analysis methods, particularly given the number of analysis techniques currently available that lack thorough validation. Problems exist with current simulation methods due to long run times or unrealistic results, making it problematic to generate complete datasets. A method is presented for simulating known abnormalities within normal brain SPECT images using a measured point spread function (PSF) and incorporating a stereotactic atlas of the brain for anatomical positioning. This allows the simulation of realistic images through the use of prior information regarding disease progression. SPECT images of cerebral perfusion have been generated consisting of a control database and a group of simulated abnormal subjects, to be used in a UK audit of analysis methods. The abnormality is defined in the stereotactic space, transformed to the individual subject space, convolved with a measured PSF, and removed from the normal subject image. The dataset was analysed using SPM99 (Wellcome Department of Imaging Neuroscience, University College, London) and the MarsBaR volume of interest (VOI) analysis toolbox. The results were evaluated by comparison with the known ground truth. The analysis showed improvement when using a smoothing kernel equal to the system resolution over the slightly larger kernel used routinely. Significant correlation was found between the effective volume of a simulated abnormality and the detected size using SPM99. Improvements in VOI analysis sensitivity were found when using the region median over the region mean. The method and dataset provide an efficient methodology for the comparison and cross-validation of semi-quantitative analysis methods in brain SPECT, and allow the optimization of analysis parameters.
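The core simulation step, defining an abnormality, convolving it with the PSF, and removing it from a normal image, can be sketched in 2-D; an isotropic Gaussian PSF stands in for the measured one, and all shapes and severities below are illustrative:

```python
import numpy as np

def gaussian_psf(size, fwhm):
    """Centred, normalised 2-D Gaussian PSF (stand-in for a measured PSF)."""
    sigma = fwhm / 2.355
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def simulate_abnormality(normal_img, lesion_mask, severity, psf):
    """Scale the masked region, blur it with the PSF (circular FFT
    convolution), and subtract it from the normal image."""
    lesion = severity * lesion_mask * normal_img
    Fl = np.fft.fft2(lesion)
    Fp = np.fft.fft2(np.fft.ifftshift(psf))   # psf must match image shape
    blurred = np.real(np.fft.ifft2(Fl * Fp))
    return normal_img - blurred

normal = np.ones((32, 32))                    # toy "normal perfusion" image
mask = np.zeros((32, 32))
mask[12:20, 12:20] = 1.0                      # atlas-defined lesion region
psf = gaussian_psf(32, fwhm=4.0)
abnormal = simulate_abnormality(normal, mask, severity=0.3, psf=psf)
```

Because the PSF is normalised, the total counts removed equal the unblurred lesion's sum, which is why the effective (rather than nominal) lesion volume is the meaningful quantity to correlate with detected size.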
Performance of the Wavelet Decomposition on Massively Parallel Architectures
NASA Technical Reports Server (NTRS)
El-Ghazawi, Tarek A.; LeMoigne, Jacqueline; Zukor, Dorothy (Technical Monitor)
2001-01-01
Traditionally, Fourier Transforms have been utilized for performing signal analysis and representation. But although it is straightforward to reconstruct a signal from its Fourier transform, no local description of the signal is included in its Fourier representation. To alleviate this problem, Windowed Fourier transforms and then wavelet transforms have been introduced, and it has been proven that wavelets give a better localization than traditional Fourier transforms, as well as a better division of the time- or space-frequency plane than Windowed Fourier transforms. Because of these properties and after the development of several fast algorithms for computing the wavelet representation of any signal, in particular the Multi-Resolution Analysis (MRA) developed by Mallat, wavelet transforms have increasingly been applied to signal analysis problems, especially real-life problems, in which speed is critical. In this paper we present and compare efficient wavelet decomposition algorithms on different parallel architectures. We report and analyze experimental measurements, using NASA remotely sensed images. Results show that our algorithms achieve significant performance gains on current high performance parallel systems, and meet scientific applications and multimedia requirements. The extensive performance measurements collected over a number of high-performance computer systems have revealed important architectural characteristics of these systems, in relation to the processing demands of the wavelet decomposition of digital images.
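One level of Mallat-style multi-resolution analysis can be illustrated with the Haar wavelet, the simplest case: pairwise averages give the coarse approximation and pairwise differences the local detail, which is exactly the localization Fourier representations lack. This is a serial sketch only; the parallel decomposition strategies the paper compares are not shown:

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar wavelet decomposition: scaled pairwise
    sums (approximation) and differences (detail)."""
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_inverse(approx, detail):
    """Exactly invert haar_step (perfect reconstruction)."""
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

x = np.arange(16, dtype=float)
approx, detail = haar_step(x)        # recurse on approx for deeper levels
recon = haar_inverse(approx, detail)
```

For images, the same step is applied along rows and then columns at each level, which is the structure the parallel implementations exploit.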
Spread spectrum phase modulation for coherent X-ray diffraction imaging.
Zhang, Xuesong; Jiang, Jing; Xiangli, Bin; Arce, Gonzalo R
2015-09-21
High dynamic range, phase ambiguity and radiation limited resolution are three challenging issues in coherent X-ray diffraction imaging (CXDI), which limit the achievable imaging resolution. This paper proposes a spread spectrum phase modulation (SSPM) method to address the aforementioned problems in a single strobe. The requirements on phase modulator parameters are presented, and a practical implementation of SSPM is discussed via ray optics analysis. Numerical experiments demonstrate the performance of SSPM under the constraint of available X-ray optics fabrication accuracy, showing its potential to real CXDI applications.
Hyperspectral image analysis for plant stress detection
USDA-ARS?s Scientific Manuscript database
Abiotic and disease-induced stress significantly reduces plant productivity. Automated on-the-go mapping of plant stress allows timely intervention and mitigating of the problem before critical thresholds are exceeded, thereby, maximizing productivity. A hyperspectral camera analyzed the spectral ...
The sperm count test is performed if a man's fertility is in question. It is helpful in determining if there is a problem in sperm production or quality of the sperm as a cause of infertility. The test may also be used after ...
Moeller, Scott J; Crocker, Jennifer
2009-06-01
Coping motives for drinking initiate alcohol-related problems. Interpersonal goals, which powerfully influence affect, could provide a starting point for this relation. Here we tested effects of self-image goals (which aim to construct and defend desired self-views) and compassionate goals (which aim to support others) on heavy-episodic drinking and alcohol-related problems. Undergraduate drinkers (N=258) completed measures of self-image and compassionate goals in academics and friendships, coping and enhancement drinking motives, heavy-episodic drinking, and alcohol-related problems in a cross-sectional design. As predicted, self-image goals, but not compassionate goals, positively related to alcohol-related problems. Path models showed that self-image goals relate to coping motives, but not enhancement motives; coping motives then relate to heavy-episodic drinking, which in turn relates to alcohol-related problems. Self-image goals remained a significant predictor in the final model, which accounted for 34% of the variance in alcohol-related problems. These findings indicate that self-image goals contribute to alcohol-related problems in college students both independently and through coping motives. Interventions can center on reducing self-image goals and their attendant negative affect. Copyright (c) 2009 APA, all rights reserved.
Terahertz multistatic reflection imaging.
Dorney, Timothy D; Symes, William W; Baraniuk, Richard G; Mittleman, Daniel M
2002-07-01
We describe a new imaging method using single-cycle pulses of terahertz (THz) radiation. This technique emulates the data collection and image processing procedures developed for geophysical prospecting and is made possible by the availability of fiber-coupled THz receiver antennas. We use a migration procedure to solve the inverse problem; this permits us to reconstruct the location, the shape, and the refractive index of targets. We show examples for both metallic and dielectric model targets, and we perform velocity analysis on dielectric targets to estimate the refractive indices of imaged components. These results broaden the capabilities of THz imaging systems and also demonstrate the viability of the THz system as a test bed for the exploration of new seismic processing methods.
Auroux, Didier; Cohen, Laurent D.; Masmoudi, Mohamed
2011-01-01
We combine in this paper the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We present then two numerical applications, to image inpainting and segmentation, of this hybrid algorithm. PMID:22194734
Segmentation of medical images using explicit anatomical knowledge
NASA Astrophysics Data System (ADS)
Wilson, Laurie S.; Brown, Stephen; Brown, Matthew S.; Young, Jeanne; Li, Rongxin; Luo, Suhuai; Brandt, Lee
1999-07-01
Knowledge-based image segmentation is defined in terms of the separation of image analysis procedures and representation of knowledge. Such architecture is particularly suitable for medical image segmentation, because of the large amount of structured domain knowledge. A general methodology for the application of knowledge-based methods to medical image segmentation is described. This includes frames for knowledge representation, fuzzy logic for anatomical variations, and a strategy for determining the order of segmentation from the modal specification. This method has been applied to three separate problems, 3D thoracic CT, chest X-rays and CT angiography. The application of the same methodology to such a range of applications suggests a major role in medical imaging for segmentation methods incorporating representation of anatomical knowledge.
Vogel, H; Haller, D
2007-08-01
Controls of luggage and shipped goods are frequently carried out. The possibilities of X-ray technology are demonstrated here. There are different imaging techniques: the main concepts are transmission imaging, backscatter imaging, computed tomography, dual-energy imaging, and combinations of these methods. The images come from manufacturers and personal collections. The search mainly concerns weapons, explosives, and drugs, and furthermore animals and stolen goods. Special problems are posed by the control of letters and the detection of Improvised Explosive Devices (IEDs). One has to expect that controls will increase and that imaging with X-rays will play its part. Pattern recognition software will be used for analysis, driven by economy and the demand for higher efficiency; man and computer will produce more security than man alone.
Neural imaging to track mental states while using an intelligent tutoring system.
Anderson, John R; Betts, Shawn; Ferris, Jennifer L; Fincham, Jon M
2010-04-13
Hemodynamic measures of brain activity can be used to interpret a student's mental state when they are interacting with an intelligent tutoring system. Functional magnetic resonance imaging (fMRI) data were collected while students worked with a tutoring system that taught an algebra isomorph. A cognitive model predicted the distribution of solution times from measures of problem complexity. Separately, a linear discriminant analysis used fMRI data to predict whether or not students were engaged in problem solving. A hidden Markov algorithm merged these two sources of information to predict the mental states of students during problem-solving episodes. The algorithm was trained on data from 1 day of interaction and tested with data from a later day. In terms of predicting what state a student was in during a 2-s period, the algorithm achieved 87% accuracy on the training data and 83% accuracy on the test data. The results illustrate the importance of integrating the bottom-up information from imaging data with the top-down information from a cognitive model.
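The fusion step can be sketched with a standard HMM forward filter: per-period observation likelihoods (bottom-up, from the fMRI classifier) are combined with state-transition probabilities (top-down, from the cognitive model) to yield filtered state probabilities. All numbers below are illustrative, not the study's fitted parameters:

```python
import numpy as np

def hmm_forward(obs_lik, trans, prior):
    """Forward algorithm with per-step normalisation. obs_lik is (T, S):
    likelihood of each 2-s period's imaging data under each mental state;
    trans is the (S, S) state-transition matrix; prior is the initial
    state distribution. Returns filtered P(state | data so far)."""
    T, S = obs_lik.shape
    alpha = np.empty((T, S))
    alpha[0] = prior * obs_lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = obs_lik[t] * (alpha[t - 1] @ trans)  # predict, then update
        alpha[t] /= alpha[t].sum()
    return alpha

# two states: 0 = problem solving, 1 = not engaged (illustrative values)
trans = np.array([[0.9, 0.1],
                  [0.1, 0.9]])          # sticky states, as in task behaviour
prior = np.array([0.5, 0.5])
obs = np.tile([0.8, 0.2], (10, 1))      # classifier mildly favours state 0
alpha = hmm_forward(obs, trans, prior)
```

The transition structure lets weak but consistent classifier evidence accumulate, which is the benefit of merging the two information sources.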
Joutsijoki, Henry; Haponen, Markus; Rasku, Jyrki; Aalto-Setälä, Katriina; Juhola, Martti
2016-01-01
The focus of this research is on automated identification of the quality of human induced pluripotent stem cell (iPSC) colony images. iPS cell technology is a contemporary method by which a patient's cells are reprogrammed back to stem cells and differentiated into any cell type wanted. iPS cell technology will in future be used for patient-specific drug screening, disease modeling, and tissue repair, for instance. However, there are technical challenges before iPS cell technology can be used in practice, and one of them is quality control of growing iPSC colonies, which is currently done manually but is an unfeasible solution in large-scale cultures. The monitoring problem reduces to an image analysis and classification problem. In this paper, we tackle this problem using machine learning methods such as multiclass Support Vector Machines and several baseline methods together with Scale-Invariant Feature Transform (SIFT)-based features. We perform over 80 test arrangements and do a thorough parameter value search. The best accuracy (62.4%) for classification was obtained by using a k-NN classifier, showing improved accuracy compared to earlier studies.
Multi-object segmentation using coupled nonparametric shape and relative pose priors
NASA Astrophysics Data System (ADS)
Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep
2009-02-01
We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.
Application of kernel method in fluorescence molecular tomography
NASA Astrophysics Data System (ADS)
Zhao, Yue; Baikejiang, Reheman; Li, Changqing
2017-02-01
Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance can improve the efficiency of FMT reconstruction. We have developed a kernel method to introduce the anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For the finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. Then the fluorophore concentration at each node is represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process. We convert the FMT reconstruction problem into a kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can then be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance is obtained directly from the anatomical image and is included in the forward modeling. One of the advantages is that we do not need to segment the anatomical image for the targets and background.
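A minimal sketch of the kernel idea: build a kernel matrix from per-node anatomical features, fold it into the forward model as A·K, reconstruct kernel coefficients, and map back to the image. The matrix sizes, RBF kernel, feature values, and the direct regularised least-squares solve are illustrative assumptions; the paper uses an iterative FEM-based reconstruction:

```python
import numpy as np

def anatomical_kernel_matrix(features, sigma=1.0):
    """Radial-basis kernel between per-node anatomical feature vectors
    (e.g. micro-CT intensities); K encodes the anatomical guidance."""
    f = np.asarray(features, dtype=float)            # (n_nodes, n_feat)
    d2 = ((f[:, None, :] - f[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_reconstruction(A, y, K, reg=1e-6):
    """Solve y = (A K) a for the kernel coefficients a by regularised
    least squares; the image is then x = K a."""
    M = A @ K                                        # new system matrix
    a = np.linalg.solve(M.T @ M + reg * np.eye(K.shape[0]), M.T @ y)
    return K @ a

rng = np.random.default_rng(0)
A = rng.normal(size=(12, 6))                  # toy sensitivity matrix
feats = np.array([[0.0], [0.0], [0.0], [5.0], [5.0], [5.0]])  # 2 regions
K = anatomical_kernel_matrix(feats, sigma=1.0)
x_true = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])   # tracer follows anatomy
y = A @ x_true                                      # simulated measurements
x_rec = kernel_reconstruction(A, y, K)
```

Because the true concentration follows the anatomy, it lies in the range of K, and the kernel-coefficient reconstruction recovers it without segmenting the anatomical image.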
White blood cell segmentation by circle detection using electromagnetism-like optimization.
Cuevas, Erik; Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo
2013-01-01
Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability.
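The objective function at the heart of the approach, scoring how well a candidate circle matches the edge map, can be sketched as the fraction of sampled perimeter points that land on edge pixels. This is a simplified stand-in: the paper's exact objective and the EMO attraction-repulsion update rules are not reproduced:

```python
import numpy as np

def circle_objective(edge_map, cx, cy, r, n_points=64):
    """Resemblance of candidate circle (cx, cy, r) to the edge map:
    the fraction of sampled perimeter points hitting edge pixels.
    An EMO-style optimiser would maximise this score."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    xs = np.round(cx + r * np.cos(theta)).astype(int)
    ys = np.round(cy + r * np.sin(theta)).astype(int)
    h, w = edge_map.shape
    inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    return edge_map[ys[inside], xs[inside]].mean() if inside.any() else 0.0

# toy edge map containing one circle of radius 10 centred at (32, 32)
edge = np.zeros((64, 64))
th = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
edge[np.round(32 + 10 * np.sin(th)).astype(int),
     np.round(32 + 10 * np.cos(th)).astype(int)] = 1.0
good = circle_objective(edge, cx=32, cy=32, r=10)   # near the true circle
bad = circle_objective(edge, cx=32, cy=32, r=20)    # wrong radius
```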
Cartography of asteroids and comet nuclei from low resolution data
NASA Technical Reports Server (NTRS)
Stooke, Philip J.
1992-01-01
High resolution images of non-spherical objects, such as Viking images of Phobos and the anticipated Galileo images of Gaspra, lend themselves to conventional planetary cartographic procedures: control network analysis, stereophotogrammetry, image mosaicking in 2D or 3D, and airbrush mapping. There remains the problem of a suitable map projection for bodies which are extremely elongated or irregular in shape. Many bodies will soon be seen at lower resolution (5-30 pixels across the disk) in images from speckle interferometry, the Hubble Space Telescope, ground-based radar, distant spacecraft encounters, and closer images degraded by smear. Different data with similar effective resolutions are available from stellar occultations, radar or lightcurve convex hulls, lightcurve modeling of albedo variations, and cometary jet modeling. With such low resolution, conventional methods of shape determination will be less useful or will fail altogether, leaving limb and terminator topography as the principal sources of topographic information. A method for shape determination based on limb and terminator topography was developed. It has been applied to the nucleus of Comet Halley and the jovian satellite Amalthea. The Amalthea results are described to give an example of the cartographic possibilities and problems of anticipated data sets.
Partial volume segmentation in 3D of lesions and tissues in magnetic resonance images
NASA Astrophysics Data System (ADS)
Johnston, Brian; Atkins, M. Stella; Booth, Kellogg S.
1994-05-01
An important first step in diagnosis and treatment planning using tomographic imaging is differentiating and quantifying diseased as well as healthy tissue. One of the difficulties encountered in solving this problem to date has been distinguishing the partial volume constituents of each voxel in the image volume. Most proposed solutions to this problem involve analysis of planar images, in sequence, in two dimensions only. We have extended a model-based method of image segmentation which applies the technique of iterated conditional modes in three dimensions. A minimum of user intervention is required to train the algorithm. Partial volume estimates for each voxel in the image are obtained, yielding fractional compositions of multiple tissue types for individual voxels. A multispectral approach is applied where spatially registered data sets are available. The algorithm is simple and has been parallelized using a dataflow programming environment to reduce the computational burden. The algorithm has been used to segment dual-echo MRI data sets of multiple sclerosis patients using lesions, gray matter, white matter, and cerebrospinal fluid as the partial volume constituents. The results of the application of the algorithm to these datasets are presented and compared to the manual lesion segmentation of the same data.
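The core of iterated conditional modes (ICM) can be sketched in a few lines: each pixel greedily takes the label that minimises a data term plus a smoothness penalty over its neighbours. This is a two-class, hard-label toy version with hypothetical values, not the paper's 3D multispectral partial-volume formulation.

```python
def icm_segment(image, means, beta=1.0, n_sweeps=5):
    """Iterated conditional modes with a Potts prior on a 2D image.
    Each pixel takes the label minimising squared data error plus
    beta times the number of disagreeing 4-neighbours."""
    h, w = len(image), len(image[0])
    # initialise with nearest-mean labels
    labels = [[min(range(len(means)), key=lambda k: (image[r][c] - means[k]) ** 2)
               for c in range(w)] for r in range(h)]
    for _ in range(n_sweeps):
        for r in range(h):
            for c in range(w):
                best_label, best_energy = labels[r][c], float("inf")
                for k in range(len(means)):
                    energy = (image[r][c] - means[k]) ** 2
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < h and 0 <= cc < w and labels[rr][cc] != k:
                            energy += beta
                    if energy < best_energy:
                        best_label, best_energy = k, energy
                labels[r][c] = best_label
    return labels

# A noisy two-region image: left half ~0.0, right half ~1.0,
# with one outlier pixel (value 0.9) in the left region at (1, 1).
image = [[0.1, 0.0, 0.9, 1.0],
         [0.0, 0.9, 1.0, 0.9],
         [0.1, 0.0, 0.8, 1.0],
         [0.0, 0.1, 0.9, 1.1]]
labels = icm_segment(image, [0.0, 1.0])
```

The neighbour penalty overrides the outlier's data term, so the isolated bright pixel in the left region is relabelled to agree with its surroundings.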
Solution of Ambrosio-Tortorelli model for image segmentation by generalized relaxation method
NASA Astrophysics Data System (ADS)
D'Ambra, Pasqua; Tartaglione, Gaetano
2015-03-01
Image segmentation addresses the problem of partitioning a given image into its constituent objects and then identifying the boundaries of the objects. This problem can be formulated in terms of a variational model aimed at finding optimal approximations of a bounded function by piecewise-smooth functions, minimizing a given functional. The corresponding Euler-Lagrange equations are a set of two coupled elliptic partial differential equations with varying coefficients. Numerical solution of the above system often relies on alternating minimization techniques involving descent methods coupled with explicit or semi-implicit finite-difference discretization schemes, which are slowly convergent and poorly scalable with respect to image size. In this work we focus on generalized relaxation methods, also coupled with multigrid linear solvers, when a finite-difference discretization is applied to the Euler-Lagrange equations of the Ambrosio-Tortorelli model. We show that non-linear Gauss-Seidel, accelerated by inner linear iterations, is an effective method for large-scale image analysis such as that arising from high-throughput screening platforms for stem cell targeted differentiation, where one of the main goals is the segmentation of thousands of images to analyze cell colony morphology.
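The relaxation pattern at the heart of the method can be illustrated on the simplest elliptic subproblem. The sketch below applies lexicographic Gauss-Seidel sweeps to the 5-point discrete Poisson equation; the Ambrosio-Tortorelli system couples two such equations with varying coefficients, which this scalar, constant-coefficient toy omits.

```python
def gauss_seidel(u, f, n_sweeps=200):
    """Lexicographic Gauss-Seidel sweeps for the 5-point discretization
    of -Laplacian(u) = f (unit grid spacing; boundary rows/columns of u
    are held fixed as Dirichlet data). Each point is updated using the
    newest neighbour values immediately."""
    h, w = len(u), len(u[0])
    for _ in range(n_sweeps):
        for r in range(1, h - 1):
            for c in range(1, w - 1):
                u[r][c] = 0.25 * (u[r - 1][c] + u[r + 1][c]
                                  + u[r][c - 1] + u[r][c + 1] + f[r][c])
    return u

n = 6
f = [[1.0] * n for _ in range(n)]   # constant source term
u = [[0.0] * n for _ in range(n)]   # zero initial guess and boundary
gauss_seidel(u, f)

# residual of -Laplacian(u) = f at interior points
residual = max(abs(f[r][c] - (4 * u[r][c] - u[r - 1][c] - u[r + 1][c]
                              - u[r][c - 1] - u[r][c + 1]))
               for r in range(1, n - 1) for c in range(1, n - 1))
```

On a grid this small, plain Gauss-Seidel converges quickly; the poor scaling with image size that motivates the multigrid acceleration only appears on large grids.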
Open-source software platform for medical image segmentation applications
NASA Astrophysics Data System (ADS)
Namías, R.; D'Amato, J. P.; del Fresno, M.
2017-11-01
Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, simultaneously handling different segmentation strategies and interacting with a graphical user interface (GUI). We present the object-oriented design and the general architecture, which consists of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework to different real-case medical image segmentation scenarios on publicly available datasets, including bladder and prostate segmentation from 2D MRI, and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast-prototyping open-source segmentation tool.
Robust 2DPCA with non-greedy l1-norm maximization for image analysis.
Wang, Rong; Nie, Feiping; Yang, Xiaojun; Gao, Feifei; Yao, Minli
2015-05-01
2-D principal component analysis based on the l1-norm (2DPCA-L1) is a recently developed approach for robust dimensionality reduction and feature extraction in the image domain. Normally, a greedy strategy is applied due to the difficulty of directly solving the l1-norm maximization problem; this strategy, however, easily gets stuck in local solutions. In this paper, we propose a robust 2DPCA with non-greedy l1-norm maximization in which all projection directions are optimized simultaneously. Experimental results on face and other datasets confirm the effectiveness of the proposed approach.
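The l1-norm maximization being solved can be illustrated for a single projection direction: iterate the fixed point w proportional to the signed sum of the samples. This is a sketch of the greedy, one-direction building block only; the paper's contribution is optimizing all projection directions jointly, which this toy (with made-up 2D data) does not do.

```python
def l1_direction(rows, w0, n_iter=50):
    """Fixed-point iteration maximising sum_i |x_i . w| over unit
    vectors w: flip each sample to the sign of its projection, sum,
    and renormalise."""
    w = list(w0)
    for _ in range(n_iter):
        m = [0.0] * len(w)
        for x in rows:
            s = sum(a * b for a, b in zip(x, w))
            sgn = 1.0 if s >= 0 else -1.0
            for j, a in enumerate(x):
                m[j] += sgn * a
        norm = sum(v * v for v in m) ** 0.5
        w = [v / norm for v in m]
    return w

def l1_objective(rows, w):
    return sum(abs(sum(a * b for a, b in zip(x, w))) for x in rows)

# data dominated by the horizontal axis, plus one off-axis sample
rows = [(3.0, 0.1), (2.5, -0.2), (3.2, 0.05), (-3.0, 0.1), (0.2, 1.0)]
w0 = (0.6, 0.8)
w = l1_direction(rows, w0)
```

The iteration never decreases the objective, so the recovered direction aligns with the dominant horizontal spread despite the off-axis sample.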
Automated Dispersion and Orientation Analysis for Carbon Nanotube Reinforced Polymer Composites
Gao, Yi; Li, Zhuo; Lin, Ziyin; Zhu, Liangjia; Tannenbaum, Allen; Bouix, Sylvain; Wong, C.P.
2012-01-01
The properties of carbon nanotube (CNT)/polymer composites are strongly dependent on the dispersion and orientation of CNTs in the host matrix. Quantification of the dispersion and orientation of CNTs by microstructure observation and image analysis has been demonstrated as a useful way to understand the structure-property relationship of CNT/polymer composites. However, due to the varied morphologies and the large number of CNTs in one image, automatic and accurate identification of CNTs has become the bottleneck for dispersion/orientation analysis. To solve this problem, shape identification is performed for each pixel in the filler identification step, so that individual CNTs can be extracted from images automatically. The improved filler identification enables more accurate analysis of CNT dispersion and orientation. The obtained dispersion and orientation indices of both synthetic and real images from model compounds correspond well with the observations. Moreover, these indices help to explain the electrical properties of a CNT/silicone composite, which is used as a model compound. This method can also be extended to other polymer composites with high-aspect-ratio fillers. PMID:23060008
Computing camera heading: A study
NASA Astrophysics Data System (ADS)
Zhang, John Jiaxiang
2000-08-01
An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operations, video special effects, multimedia, and lately even internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This makes it possible to identify all computational trouble spots beforehand and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
NiftyNet: a deep-learning platform for medical imaging.
Gibson, Eli; Li, Wenqi; Sudre, Carole; Fidon, Lucas; Shakir, Dzhoshkun I; Wang, Guotai; Eaton-Rosen, Zach; Gray, Robert; Doel, Tom; Hu, Yipeng; Whyntie, Tom; Nachev, Parashkev; Modat, Marc; Barratt, Dean C; Ourselin, Sébastien; Cardoso, M Jorge; Vercauteren, Tom
2018-05-01
Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning applications. Components of the NiftyNet pipeline including data loading, data augmentation, network architectures, loss functions and evaluation metrics are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses.
The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Simultaneous binary hash and feature learning for image retrieval
NASA Astrophysics Data System (ADS)
Frantc, V. A.; Makov, S. V.; Voronin, V. V.; Marchuk, V. I.; Semenishchev, E. A.; Egiazarian, K. O.; Agaian, S.
2016-05-01
Content-based image retrieval systems have plenty of applications in the modern world. The most important one is image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique. This is the main reason why this kind of automatic image processing has attracted so much attention in recent years. Despite considerable progress in the field, semantically meaningful image retrieval still remains a challenging task. The main issue is the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for simultaneous learning of global image features and binary hash codes. Our approach provides a mapping from a pixel-based image representation to the hash-value space while trying to preserve as much of the semantic image content as possible. We use deep learning methodology to generate image descriptions with the properties of similarity preservation and statistical independence. The main advantage of our approach over existing ones is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The framework for data-dependent image hashing presented in the paper is based on two different kinds of neural networks: convolutional neural networks for image description and an autoencoder for the feature-to-hash-space mapping. Experimental results confirm that our approach shows promising results in comparison to other state-of-the-art methods.
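The retrieval side of such a system is simple once descriptors have been mapped to hash space: binarise, then rank by Hamming distance. The sketch below shows only that final step with made-up four-dimensional descriptors; the learned CNN and autoencoder mappings, which are the paper's substance, are omitted.

```python
def to_hash(vec):
    """Sign-binarise a real-valued descriptor into a bit tuple -- the
    quantisation step after features are mapped to hash space."""
    return tuple(1 if v >= 0.0 else 0 for v in vec)

def hamming(a, b):
    """Number of differing bits between two equal-length bit tuples."""
    return sum(x != y for x, y in zip(a, b))

def retrieve(query_vec, database):
    """Return the database item whose hash is nearest to the query's
    in Hamming distance (linear scan; real systems bucket by hash)."""
    q = to_hash(query_vec)
    return min(database, key=lambda item: hamming(q, to_hash(item[1])))

database = [("cat", [0.5, -0.2, 0.8, -0.1]),
            ("dog", [-0.3, 0.4, -0.9, 0.2])]
best = retrieve([0.4, -0.1, 0.7, -0.2], database)
```

A query descriptor close to the "cat" entry produces an identical bit pattern and is retrieved at Hamming distance zero.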
Magnetic resonance imaging as a tool for extravehicular activity analysis
NASA Technical Reports Server (NTRS)
Dickenson, R.; Lorenz, C.; Peterson, S.; Strauss, A.; Main, J.
1992-01-01
The purpose of this research is to examine the value of magnetic resonance imaging (MRI) as a means of conducting kinematic studies of the hand for the purpose of EVA capability enhancement. After imaging the subject hand using a magnetic resonance scanner, the resulting 2D slices were reconstructed into a 3D model of the proximal phalanx of the left hand. Using the coordinates of several landmark positions, one is then able to decompose the motion of the rigid body. MRI offers highly accurate measurements due to its tomographic nature without the problems associated with other imaging modalities for in vivo studies.
ANAlyte: A modular image analysis tool for ANA testing with indirect immunofluorescence.
Di Cataldo, Santa; Tonti, Simone; Bottino, Andrea; Ficarra, Elisa
2016-05-01
The automated analysis of indirect immunofluorescence (IIF) images for Anti-Nuclear Autoantibody (ANA) testing is a fairly recent field that is receiving ever-growing interest from the research community. ANA testing leverages the categorization of the intensity level and fluorescent pattern of IIF images of HEp-2 cells to perform a differential diagnosis of important autoimmune diseases. Nevertheless, it suffers from a tremendous lack of repeatability due to subjectivity in the visual interpretation of the images. The automation of the analysis is seen as the only valid solution to this problem. Several works in the literature address individual steps of the work-flow; nonetheless, integrating such steps and assessing their effectiveness as a whole is still an open challenge. We present a modular tool, ANAlyte, able to characterize an IIF image in terms of fluorescent intensity level and fluorescent pattern without any user interaction. For this purpose, ANAlyte integrates the following: (i) an Intensity Classifier module, that categorizes the intensity level of the input slide based on multi-scale contrast assessment; (ii) a Cell Segmenter module, that splits the input slide into individual HEp-2 cells; (iii) a Pattern Classifier module, that determines the fluorescent pattern of the slide based on the patterns of the individual cells. To demonstrate the accuracy and robustness of our tool, we experimentally validated ANAlyte on two different public benchmarks of IIF HEp-2 images with a rigorous leave-one-out cross-validation strategy. We obtained overall accuracies of fluorescent intensity and pattern classification of around 85% and above 90%, respectively. We assessed all results by comparison with some of the most representative state-of-the-art works. Unlike most other works in the recent literature, ANAlyte aims at the automation of all the major steps of ANA image analysis.
Results on public benchmarks demonstrate that the tool can characterize HEp-2 slides in terms of intensity and fluorescent pattern with accuracy better than or comparable to state-of-the-art techniques, even when such techniques are run on manually segmented cells. Hence, ANAlyte can be proposed as a valid solution to the problem of ANA testing automation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sheshkus, Alexander; Limonova, Elena; Nikolaev, Dmitry; Krivtsov, Valeriy
2017-03-01
In this paper, we propose an expansion of convolutional neural network (CNN) input features based on the Hough transform. We perform morphological contrasting of the source image followed by the Hough transform, and then use the result as input for some of the convolutional filters. Thus, the CNN's computational complexity and the number of units are not affected. Morphological contrasting and the Hough transform are the only additional computational expenses of the introduced CNN input feature expansion. The proposed approach is demonstrated on a CNN with a very simple structure. We considered two image recognition problems: object classification on CIFAR-10 and printed character recognition on a private dataset with symbols taken from Russian passports. Our approach allowed us to reach a noticeable accuracy improvement without much computational effort, which can be extremely important in industrial recognition systems or in difficult problems utilising CNNs, like pressure ridge analysis and classification.
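The extra input channel is just a line Hough accumulator of the (contrasted) image. A minimal sketch of that transform, with our own bin counts and a synthetic edge map rather than the paper's preprocessing:

```python
import math

def hough_accumulator(edge_points, height, width, n_theta=32, n_rho=32):
    """Line Hough transform of a binary edge map: every edge point
    (row, col) votes for all (theta, rho) lines passing through it.
    The resulting accumulator is the kind of array that would be fed,
    rescaled, to the extra convolutional filters."""
    diag = math.hypot(height, width)
    acc = [[0] * n_rho for _ in range(n_theta)]
    for r, c in edge_points:
        for ti in range(n_theta):
            theta = math.pi * ti / n_theta
            rho = c * math.cos(theta) + r * math.sin(theta)  # in [-diag, diag]
            ri = int((rho + diag) / (2.0 * diag) * (n_rho - 1))
            acc[ti][ri] += 1
    return acc

# 20 collinear points on the horizontal line row = 5 of a 20x20 image
points = [(5, c) for c in range(20)]
acc = hough_accumulator(points, 20, 20)
peak = max(max(row) for row in acc)
```

All twenty collinear points vote into one (theta, rho) bin (theta = pi/2, i.e. a horizontal line), so the accumulator shows a single sharp peak, which is exactly the structure a convolutional filter can pick up.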
Fuzzy Matching Based on Gray-scale Difference for Quantum Images
NASA Astrophysics Data System (ADS)
Luo, GaoFeng; Zhou, Ri-Gui; Liu, XingAo; Hu, WenWen; Luo, Jia
2018-05-01
Quantum image processing has recently emerged as an essential problem in practical tasks, e.g. real-time image matching. Previous studies have shown that quantum superposition and entanglement can greatly improve the efficiency of complex image processing. In this paper, a fuzzy quantum image matching scheme based on gray-scale difference is proposed to find the target region in a reference image which is very similar to the template image. Firstly, we employ the novel enhanced quantum representation (NEQR) to store digital images. Certain quantum operations are then used to evaluate the gray-scale difference between two quantum images by thresholding. If all of the obtained gray-scale differences are no greater than the threshold value, a fuzzy matching of the quantum images is declared. Theoretical analysis and experiments show that the proposed scheme performs fuzzy matching at low cost and also enables an exponentially significant speedup via quantum parallel computation.
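The matching criterion itself is easy to state classically: a position matches when every pixel-wise gray-scale difference is within the threshold. The sketch below is the classical analogue with made-up pixel values; the quantum scheme encodes the images in NEQR and evaluates all positions in superposition rather than by this explicit sliding loop.

```python
def fuzzy_match(reference, template, threshold):
    """Slide the template over the reference image and keep positions
    where every pixel-wise gray-scale difference is <= threshold."""
    H, W = len(reference), len(reference[0])
    th, tw = len(template), len(template[0])
    return [(r, c)
            for r in range(H - th + 1)
            for c in range(W - tw + 1)
            if all(abs(reference[r + i][c + j] - template[i][j]) <= threshold
                   for i in range(th) for j in range(tw))]

template = [[10, 12],
            [11, 13]]
reference = [[0] * 5 for _ in range(5)]
for i in range(2):               # plant a noisy copy (+1 per pixel) at (2, 1)
    for j in range(2):
        reference[2 + i][1 + j] = template[i][j] + 1

loose = fuzzy_match(reference, template, 2)   # tolerates the +1 noise
strict = fuzzy_match(reference, template, 0)  # exact match only
```

With threshold 2 the noisy copy at (2, 1) is found; with threshold 0 nothing matches, which is precisely the "fuzziness" the threshold controls.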
Cloud-based processing of multi-spectral imaging data
NASA Astrophysics Data System (ADS)
Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David
2017-03-01
Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multispectral imaging on a handheld mobile device would allow this technology, and with it expert knowledge, to be brought to low-resource settings to provide state-of-the-art classification of tissue health. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before it can be used. The data then need to be analyzed and logged without demanding too much of the system resources or requiring long computation times and heavy battery use on the end-point device. Cloud environments were designed for exactly this situation, allowing end-point devices (smartphones) to offload computationally hard tasks. To this end, we present a method in which a handheld device built around a smartphone captures a multispectral dataset in a movie file format (mp4), and we compare this format to other image formats in terms of size, noise, and correctness. We also present the cloud configuration used for segmenting the video into frames that can later be used for further analysis.
Segmentation of anatomical structures of the heart based on echocardiography
NASA Astrophysics Data System (ADS)
Danilov, V. V.; Skirnevskiy, I. P.; Gerget, O. M.
2017-01-01
Nowadays, many practical applications in the field of medical image processing require valid and reliable segmentation of images as input data. Some of the commonly used imaging techniques are ultrasound, CT, and MRI. The main difference between EchoCG and other medical imaging equipment, however, is that it is safer, low cost, non-invasive and non-traumatic. Three-dimensional EchoCG is a non-invasive imaging modality that is complementary and supplementary to two-dimensional imaging and can be used to examine cardiovascular function and anatomy in different medical settings. The challenging problems presented by EchoCG image processing, such as speckle phenomena, noise, temporal non-stationarity of processes, unsharp boundaries, attenuation, etc., forced us to consider and compare existing methods and then to develop an innovative approach that can tackle the problems connected with clinical applications. The present study concerns the analysis and development of an automatic EchoCG-based system for detecting cardiac parameters, which will provide new data on the dynamics of changes in cardiac parameters and improve the accuracy and reliability of diagnosis. Research in image segmentation has highlighted the capabilities of image-based methods for medical applications. The focus of the research is on both theoretical and practical aspects of the application of the methods. Some of the segmentation approaches may be of interest to the imaging and medical communities. Performance evaluation is carried out by comparing the borders obtained from the considered methods to those manually delineated by a medical specialist. Promising results demonstrate the possibilities and the limitations of each technique for image segmentation problems.
The developed approach makes it possible to eliminate errors in calculating the geometric parameters of the heart; to meet the necessary requirements, such as speed, accuracy, and reliability; and to build a master model that will be an indispensable assistant for operations on a beating heart.
NASA Astrophysics Data System (ADS)
Osipov, Gennady
2013-04-01
We propose a solution to the problem of exploration of various mineral resource deposits and determination of their forms / classification of types (oil, gas, minerals, gold, etc.) with the help of satellite photography of the region of interest. Images received from the satellite are processed and analyzed to reveal the presence of specific signs of deposits of various minerals. The course of data processing and forecasting can be divided into several stages: Pre-processing of images. Normalization of color and luminosity characteristics, determination of the necessary contrast level and integration of a great number of separate photos into a single map of the region are performed. Construction of a semantic map image. Recognition of the bitmapped image and allocation of objects and primitives known to the system are realized. Intelligent analysis. At this stage the acquired information is analyzed with the help of a knowledge base, which contains so-called "attention landscapes" of experts. Methods used for recognition and identification of images: a) a combined method of image recognition, b) semantic analysis of posterized images, c) reconstruction of three-dimensional objects from bitmapped images, d) cognitive technology of processing and interpretation of images. This stage is fundamentally new and distinguishes the suggested technology from all others. Automatic registration of the allocation of experts' attention - registration of the so-called "attention landscape" of experts - is the basis of the technology. Landscapes of attention are, essentially, highly effective filters that cut off unnecessary information and emphasize exactly the factors used by an expert for making a decision. The technology based on these principles involves the following stages, which are implemented in corresponding program agents: Training mode -> Creation of a base of ophthalmologic images (OI) -> Processing and making generalized OI (GOI) -> Mode of recognition and interpretation of unknown images.
Training mode includes non-contact registration of eye motion, reconstruction of the "attention landscape" fixed by the expert, recording the comments of the expert, who is a specialist in the field of image interpretation, and transfer of this information into the knowledge base. Creation of the base of ophthalmologic images (OI) includes building semantic content from a great number of OI, based on analysis of the OI and the experts' comments. Processing of OI and making generalized OI (GOI) is realized by inductive logic algorithms and consists in the synthesis of structural invariants of OI. The mode of recognition and interpretation of unknown images consists of several stages, which include: comparison of the unknown image with the base of structural invariants of OI; revealing of structural invariants in unknown images; synthesis of an interpretive message from the structural invariants base and the OI base (the experts' comments stored in it). We want to emphasize that the training mode does not assume special involvement of experts to teach the system - it is realized in the course of the experts' regular work on image interpretation and becomes possible after installation of a special apparatus for non-contact registration of experts' attention. Consequently, the technology whose principles are described here provides a fundamentally new and effective solution to the problem of exploration of mineral resource deposits based on computer analysis of aerial and satellite image data.
Image watermarking capacity analysis based on Hopfield neural network
NASA Astrophysics Data System (ADS)
Zhang, Fan; Zhang, Hongbin
2004-11-01
In watermarking schemes, watermarking can be viewed as a form of communication problem. Almost all previous work on image watermarking capacity is based on information theory, using the Shannon formula to calculate the capacity of watermarking. In this paper, we present a blind watermarking algorithm using a Hopfield neural network, and we analyze the watermarking capacity based on the neural network. In our watermarking algorithm, the watermarking capacity is determined by the attraction basin of the associative memory.
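The associative-memory mechanism behind the capacity argument can be sketched directly: Hebbian weights store a pattern, and asynchronous sign updates pull corrupted inputs back to it; the size of each pattern's attraction basin is what bounds how many distinguishable watermark states the network can hold. This is a generic one-pattern Hopfield toy, not the paper's watermark embedding itself.

```python
def hopfield_train(patterns):
    """Hebbian weight matrix for a Hopfield associative memory over
    +/-1 patterns (zero diagonal, averaged over patterns)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def hopfield_recall(w, state, n_sweeps=5):
    """Asynchronous sign updates; a state inside a pattern's attraction
    basin converges to that stored pattern."""
    s = list(state)
    for _ in range(n_sweeps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, 1, 1, 1, -1, -1, -1, -1]
w = hopfield_train([stored])
noisy = list(stored)
noisy[2] = -noisy[2]            # corrupt one bit
recovered = hopfield_recall(w, noisy)
```

The one-bit corruption lies well inside the stored pattern's basin, so recall restores the pattern exactly.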
Sub-pixel mapping of hyperspectral imagery using super-resolution
NASA Astrophysics Data System (ADS)
Sharma, Shreya; Sharma, Shakti; Buddhiraju, Krishna M.
2016-04-01
With the development of remote sensing technologies, it has become possible to obtain an overview of landscape elements, which helps in studying changes on the earth's surface due to climatic, geological, geomorphological and human activities. Remote sensing measures the electromagnetic radiation from the earth's surface and matches the spectral similarity between the observed signature and the known standard signatures of the various targets. However, problems arise when image classification techniques assume pixels to be pure. In hyperspectral imagery, images have high spectral resolution but poor spatial resolution. The spectra obtained are therefore often contaminated by the presence of mixed pixels, causing misclassification. To utilise this high spectral information, the spatial resolution has to be enhanced. Many factors make spatial resolution one of the most expensive and hardest characteristics to improve in imaging systems. To solve this problem, post-processing of hyperspectral images is done to retrieve more information from the already acquired images. The class of algorithms that enhances the spatial resolution of images by dividing pixels into sub-pixels is known as super-resolution, and considerable research has been done in this domain. In this paper, we propose a new method for super-resolution based on ant colony optimization and review popular methods of sub-pixel mapping of hyperspectral images along with a comparative analysis.
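The "mixed pixel" notion has a simple quantitative form under a linear mixing model, which is the usual starting point before any sub-pixel arrangement: with two endmember spectra, the abundance of each in a pixel has a closed-form least-squares solution. The endmember spectra below are hypothetical three-band examples, and the two-endmember case is a deliberate simplification.

```python
def unmix_fraction(pixel, end1, end2):
    """Least-squares abundance a of endmember end1 under the linear
    mixing model pixel ~ a*end1 + (1-a)*end2 -- the fractional
    composition that sub-pixel mapping then arranges spatially."""
    d = [a - b for a, b in zip(end1, end2)]
    num = sum((p - b) * di for p, b, di in zip(pixel, end2, d))
    den = sum(di * di for di in d)
    return num / den

end1 = [1.0, 0.0, 2.0]          # hypothetical pure spectrum (e.g. vegetation)
end2 = [0.0, 1.0, 1.0]          # hypothetical pure spectrum (e.g. soil)
mixed = [0.3 * a + 0.7 * b for a, b in zip(end1, end2)]
a = unmix_fraction(mixed, end1, end2)
```

A noise-free 30/70 mixture is recovered exactly; super-resolution methods then decide where inside the pixel those fractions sit.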
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metric of such a solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.
Lozier, Leah M; Cardinale, Elise M; VanMeter, John W; Marsh, Abigail A
2014-06-01
Among youths with conduct problems, callous-unemotional (CU) traits are known to be an important determinant of symptom severity, prognosis, and treatment responsiveness. But positive correlations between conduct problems and CU traits result in suppressor effects that may mask important neurobiological distinctions among subgroups of children with conduct problems. To assess the unique neurobiological covariates of CU traits and externalizing behaviors in youths with conduct problems and determine whether neural dysfunction linked to CU traits mediates the link between callousness and proactive aggression. This cross-sectional case-control study involved behavioral testing and neuroimaging that were conducted at a university research institution. Neuroimaging was conducted using a 3-T Siemens magnetic resonance imaging scanner. It included 46 community-recruited male and female juveniles aged 10 to 17 years, including 16 healthy control participants and 30 youths with conduct problems with both low and high levels of CU traits. Blood oxygenation level-dependent signal as measured via functional magnetic resonance imaging during an implicit face-emotion processing task and analyzed using whole-brain and region of interest-based analysis of variance and multiple-regression analyses. Analysis of variance revealed no group differences in the amygdala. By contrast, consistent with the existence of suppressor effects, multiple-regression analysis found amygdala responses to fearful expressions to be negatively associated with CU traits (x = 26, y = 0, z = -12; k = 1) and positively associated with externalizing behavior (x = 24, y = 0, z = -14; k = 8) when both variables were modeled simultaneously. Reduced amygdala responses mediated the relationship between CU traits and proactive aggression. 
The results linked proactive aggression in youths with CU traits to hypoactive amygdala responses to emotional distress cues, consistent with theories that externalizing behaviors, particularly proactive aggression, in youths with these traits stem from deficient empathic responses to distress. Amygdala hypoactivity may represent an intermediate phenotype, offering new insights into effective treatment strategies for conduct problems.
Dermoscopy for common skin problems in Chinese children using a novel Hong Kong-made dermoscope.
Luk, David C K; Lam, Sam Y Y; Cheung, Patrick C H; Chan, Bill H B
2014-12-01
To evaluate the dermoscopic features of common skin problems in Chinese children. A case series with retrospective qualitative analysis of dermoscopic features of common skin problems in Chinese children. A regional hospital in Hong Kong. Dermoscopic image database, from 1 May 2013 to 31 October 2013, of 185 Chinese children (aged 0 to 18 years). Dermoscopic features of common paediatric skin problems in Chinese children were identified. These features corresponded with the known dermoscopic features reported in the western medical literature. New dermoscopic features were identified in café-au-lait macules. Dermoscopic features of common skin problems in Chinese children were consistent with those reported in western medical literature. Dermoscopy has a role in managing children with skin problems.
Efficient processing of fluorescence images using directional multiscale representations.
Labate, D; Laezza, F; Negi, P; Ozcan, B; Papadakis, M
2014-01-01
Recent advances in high-resolution fluorescence microscopy have enabled the systematic study of morphological changes in large populations of cells induced by chemical and genetic perturbations, facilitating the discovery of signaling pathways underlying diseases and the development of new pharmacological treatments. In these studies, though, due to the complexity of the data, quantification and analysis of morphological features are for the most part handled manually, significantly slowing data processing and often limiting the information gained to a descriptive level. Thus, there is an urgent need for highly efficient automated analysis and processing tools for fluorescence images. In this paper, we present the application of a method based on the shearlet representation to confocal image analysis of neurons. The shearlet representation is a newly emerged framework designed to combine multiscale data analysis with superior directional sensitivity, making this approach particularly effective for the representation of objects defined over a wide range of scales and with highly anisotropic features. Here, we apply the shearlet representation to the problems of soma detection of neurons in culture and extraction of geometrical features of neuronal processes in brain tissue, and propose it as a new framework for large-scale fluorescence image analysis of biomedical data.
Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets
2010-01-01
Background Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results We evaluate the CBE-driven PlayStation 3 as a high-performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low-cost CBE-based platform offers an efficient alternative to conventional hardware for solving computational problems in image processing and bioinformatics. PMID:20064262
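The abstract does not name the similarity measure inside the alignment loop, but mutual information is the usual choice for multimodal registration. As an illustrative sketch only (not the paper's CBE code), here is a vectorised NumPy version of the kind of per-candidate-pose computation such optimisations target:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images via their joint histogram.

    A vectorised similarity measure of the kind used for multimodal
    registration: it needs no intensity correspondence between the two
    modalities, only statistical dependence of their intensities.
    """
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    mask = pxy > 0                        # avoid log(0)
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
# An image is maximally informative about itself...
self_mi = mutual_information(img, img)
# ...and nearly independent of unrelated noise.
noise_mi = mutual_information(img, rng.integers(0, 256, (64, 64)).astype(float))
print(self_mi > noise_mi)  # True
```

In a registration loop this function would be evaluated once per candidate transform, which is why vectorising it (rather than looping over pixels) dominates the runtime savings.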
NASA Astrophysics Data System (ADS)
Hunt, Gordon W.; Hemler, Paul F.; Vining, David J.
1997-05-01
Virtual colonoscopy (VC) is a minimally invasive alternative to conventional fiberoptic endoscopy for colorectal cancer screening. The VC technique involves bowel cleansing, gas distension of the colon, spiral computed tomography (CT) scanning of a patient's abdomen and pelvis, and visual analysis of multiplanar 2D and 3D images created from the spiral CT data. Despite the ability of interactive computer graphics to assist a physician in visualizing 3D models of the colon, a correct diagnosis hinges upon a physician's ability to properly identify small and sometimes subtle polyps or masses within hundreds of multiplanar and 3D images. Human visual analysis is time-consuming, tedious, and often prone to errors of interpretation. We have addressed the problem of visual analysis by creating a software system that automatically highlights potential lesions in the 2D and 3D images in order to expedite a physician's interpretation of the colon data.
Effects of 99mTc-TRODAT-1 drug template on image quantitative analysis
Yang, Bang-Hung; Chou, Yuan-Hwa; Wang, Shyh-Jen; Chen, Jyh-Cheng
2018-01-01
99mTc-TRODAT-1 is a type of drug that can bind to dopamine transporters in living organisms and is often used in SPECT imaging for observation of changes in the activity uptake of dopamine in the striatum. Therefore, it is currently widely used in studies on clinical diagnosis of Parkinson’s disease (PD) and movement-related disorders. In conventional 99mTc-TRODAT-1 SPECT image evaluation, visual inspection or manual selection of ROIs for semiquantitative analysis is mainly used to observe and evaluate the degree of striatal defects. However, these methods depend on the subjective judgement of observers, which leads to human error, and they have shortcomings such as long duration, increased effort and low reproducibility. To solve this problem, this study aimed to establish an automatic semiquantitative analytical method for 99mTc-TRODAT-1. This method combines three drug templates (one built-in SPECT template in SPM software and two self-generated MRI-based and HMPAO-based TRODAT-1 templates) for the semiquantitative analysis of the striatal phantom and clinical images. At the same time, the results of automatic analysis of the three templates were compared with results from a conventional manual analysis to examine the feasibility of automatic analysis and the effects of drug templates on automatic semiquantitative results. After comparison, it was found that the MRI-based TRODAT-1 template generated from MRI images is the most suitable template for 99mTc-TRODAT-1 automatic semiquantitative analysis. PMID:29543874
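The semiquantitative index such ROI analyses typically compute is a specific uptake ratio of striatal to background counts. A minimal sketch, with hypothetical masks and no claim to match the paper's implementation (in the automated method the masks would come from a registered template rather than manual drawing):

```python
import numpy as np

def specific_uptake_ratio(image, striatum_mask, background_mask):
    """Specific uptake ratio: (striatal mean - background mean) / background mean.

    The usual semiquantitative index in dopamine-transporter SPECT.
    """
    s = image[striatum_mask].mean()
    b = image[background_mask].mean()
    return (s - b) / b

# Toy phantom: uniform background of 10 counts, striatal "hot spot" of 30.
img = np.full((64, 64), 10.0)
img[20:30, 20:30] = 30.0
striatum = np.zeros_like(img, dtype=bool); striatum[20:30, 20:30] = True
background = np.zeros_like(img, dtype=bool); background[45:60, 45:60] = True
print(specific_uptake_ratio(img, striatum, background))  # 2.0
```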
Zöllner, Frank G; Daab, Markus; Sourbron, Steven P; Schad, Lothar R; Schoenberg, Stefan O; Weisser, Gerald
2016-01-14
Perfusion imaging has become an important image-based tool to derive physiological information in various applications, like tumor diagnostics and therapy, stroke, (cardio-)vascular diseases, or functional assessment of organs. However, even after 20 years of intense research in this field, perfusion imaging still remains a research tool without broad clinical usage. One problem is the lack of standardization in technical aspects which have to be considered for successful quantitative evaluation; the second problem is a lack of tools that allow a direct integration into the diagnostic workflow in radiology. Five compartment models, namely a one compartment model (1CP), a two compartment exchange model (2CXM), a two compartment uptake model (2CUM), a two compartment filtration model (2FM) and finally the extended Tofts model (ETM), were implemented as a plugin for the DICOM workstation OsiriX. Moreover, the plugin has a clean graphical user interface and provides means for quality management during perfusion data analysis. Based on reference test data, the implementation was validated against a reference implementation. No differences were found in the calculated parameters. We developed open source software to analyse DCE-MRI perfusion data. The software is designed as a plugin for the DICOM workstation OsiriX. It features a clean GUI and provides a simple workflow for data analysis, while it can also be seen as a toolbox providing an implementation of several recent compartment models to be applied in research tasks. Integration into the infrastructure of a radiology department is given via OsiriX. Results can be saved automatically, and reports generated automatically during data analysis ensure a certain level of quality control.
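Of the five models, the extended Tofts model has the most familiar closed form: Ct(t) = vp·Cp(t) + Ktrans·∫₀ᵗ Cp(τ)·exp(−(Ktrans/ve)(t−τ)) dτ. A simplified numerical sketch of how such a model can be evaluated (a stand-in for illustration, not the OsiriX plugin's code; the arterial input function here is hypothetical):

```python
import numpy as np

def extended_tofts(t, cp, ktrans, ve, vp):
    """Tissue concentration under the extended Tofts model:
    Ct(t) = vp*Cp(t) + Ktrans * int_0^t Cp(tau)*exp(-(Ktrans/ve)(t-tau)) dtau,
    evaluated by discrete convolution on a uniform time grid.
    """
    dt = t[1] - t[0]
    kernel = np.exp(-(ktrans / ve) * t)          # exponential residue function
    conv = np.convolve(cp, kernel)[: len(t)] * dt
    return vp * cp + ktrans * conv

# Hypothetical arterial input: a simple mono-exponential bolus washout.
t = np.linspace(0, 5, 501)           # minutes
cp = 5.0 * np.exp(-1.5 * t)          # plasma concentration, mM
ct = extended_tofts(t, cp, ktrans=0.25, ve=0.3, vp=0.05)
```

Fitting (ktrans, ve, vp) per voxel against measured Ct curves is then an ordinary nonlinear least-squares problem, which is where the reference-data validation mentioned in the abstract comes in.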
The Use of Mobile Devices in Aiding Dietary Assessment and Evaluation
Zhu, Fengqing; Bosch, Marc; Woo, Insoo; Kim, SungYe; Boushey, Carol J.; Ebert, David S.; Delp, Edward J.
2010-01-01
There is growing concern about chronic diseases and other health problems related to diet, including obesity and cancer. The need to accurately measure diet (what foods a person consumes) becomes imperative. Dietary intake provides valuable insights for mounting intervention programs for prevention of chronic diseases. Measuring accurate dietary intake is considered an open research problem in the nutrition and health fields. In this paper, we describe a novel mobile telephone food record that will provide an accurate account of daily food and nutrient intake. Our approach includes the use of image analysis tools for identification and quantification of food that is consumed at a meal. Images obtained before and after foods are eaten are used to estimate the amount and type of food consumed. The mobile device provides a unique vehicle for collecting dietary information that reduces the burden on respondents compared with more classical approaches to dietary assessment. We describe our approach to image analysis, which includes the segmentation of food items, features used to identify foods, a method for automatic portion estimation, and our overall system architecture for collecting the food intake information. PMID:20862266
L1-Based Approximations of PDEs and Applications
2012-09-05
the analysis of the Navier-Stokes equations. The early versions of artificial viscosities being overly dissipative, the interest in these techniques ... Guermond, and B. Popov. Stability analysis of explicit entropy viscosity methods for non-linear scalar conservation equations. Math. Comp., 2012 ... methods for solving mathematical models of nonlinear phenomena such as nonlinear conservation laws and surface/image/data reconstruction problems
Pánek, J; Vohradský, J
1997-06-01
The principal motivation was to design an environment for the development of image-analysis applications which would allow the integration of independent modules into one frame and make available tools for their build-up, running, management and mutual communication. The system was designed as modular, consisting of the core and work modules. The system core focuses on overall management and provides a library of classes for build-up of the work modules, their user interface and data communication. The work modules carry practical implementation of algorithms and data structures for the solution of a particular problem, and were implemented as dynamic-link libraries. They are mutually independent and run as individual threads, communicating with each other via a unified mechanism. The environment was designed to simplify the development and testing of new algorithms or applications. An example of implementation for the particular problem of the analysis of two-dimensional (2D) gel electrophoretograms is presented. The environment was designed for the Windows NT operating system with the use of Microsoft Foundation Class Library employing the possibilities of C++ programming language. Available on request from the authors.
Texture Analysis of Chaotic Coupled Map Lattices Based Image Encryption Algorithm
NASA Astrophysics Data System (ADS)
Khan, Majid; Shah, Tariq; Batool, Syeda Iram
2014-09-01
Recently, data security has become essential in many settings such as internet communication, multimedia systems, medical imaging, telemedicine and military communication. However, many existing schemes suffer from problems such as a lack of robustness and security. In this letter, after examining the fundamental properties of chaotic trigonometric maps and coupled map lattices, we present a chaos-based image encryption algorithm built on coupled map lattices. The proposed mechanism reduces the periodicity effects of the ergodic dynamical systems used in chaos-based image encryption. To assess the security of the encrypted images, the correlation of adjacent pixels and texture characteristics were analysed. The algorithm aims to minimize the problems arising in image encryption.
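The core idea of chaos-based image ciphers can be sketched with a single chaotic map generating a keystream, a deliberately simplified stand-in for the coupled-map-lattice generator of the paper (parameters here are arbitrary illustrative choices, and a real cipher would also permute pixel positions):

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Keystream bytes from iterating the logistic map x -> r*x*(1-x).

    One chaotic map instead of a lattice: a minimal sketch of how a
    chaotic orbit is quantised into pseudo-random bytes.
    """
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_cipher(image, key=(0.3579, 3.99)):
    """Encrypt or decrypt a uint8 image by XOR with the chaotic keystream."""
    ks = logistic_keystream(*key, image.size).reshape(image.shape)
    return image ^ ks

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (32, 32), dtype=np.uint8)
enc = xor_cipher(img)
dec = xor_cipher(enc)            # XOR is its own inverse
print(np.array_equal(dec, img))  # True
```

The security analyses mentioned in the abstract (adjacent-pixel correlation, texture statistics) would then be run on `enc` to check that spatial structure of the plaintext image has been destroyed.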
Optical time-of-flight and absorbance imaging of biologic media.
Benaron, D A; Stevenson, D K
1993-03-05
Imaging the interior of living bodies with light may assist in the diagnosis and treatment of a number of clinical problems, which include the early detection of tumors and hypoxic cerebral injury. An existing picosecond time-of-flight and absorbance (TOFA) optical system has been used to image a model biologic system and a rat. Model measurements confirmed TOFA principles in systems with a high degree of photon scattering; rat images, which were constructed from the variable time delays experienced by a fixed fraction of early-arriving transmitted photons, revealed identifiable internal structure. A combination of light-based quantitative measurement and TOFA localization may have applications in continuous, noninvasive monitoring for structural imaging and spatial chemometric analysis in humans.
Tracking prominent points in image sequences
NASA Astrophysics Data System (ADS)
Hahn, Michael
1994-03-01
Measuring image motion and inferring scene geometry and camera motion are main aspects of image sequence analysis. The determination of image motion and the structure-from-motion problem are tasks that can be addressed independently or in cooperative processes. In this paper we focus on tracking prominent points. High stability, reliability, and accuracy are the criteria for the extraction of prominent points. This implies that tracking should work quite well with those features; unfortunately, the reality looks quite different. In the experimental investigations we processed a long sequence of 128 images. This monocular sequence was taken in an outdoor environment at the experimental field of Mercedes Benz in Rastatt. Different tracking schemes are explored and the results with respect to stability and quality are reported.
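A minimal frame-to-frame point tracker of the kind whose fragility such experiments probe can be sketched with exhaustive sum-of-squared-differences matching (an illustrative scheme, not the paper's method; window sizes are arbitrary):

```python
import numpy as np

def track_point(prev, curr, pt, patch=5, search=10):
    """Track one prominent point by exhaustive SSD template matching.

    The patch around the point in the previous frame is compared
    against every shift inside a small search window in the current
    frame; the shift with minimal SSD wins.
    """
    y, x = pt
    tmpl = prev[y - patch:y + patch + 1, x - patch:x + patch + 1]
    best, best_pt = np.inf, pt
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            cand = curr[yy - patch:yy + patch + 1, xx - patch:xx + patch + 1]
            if cand.shape != tmpl.shape:
                continue  # window fell off the image border
            ssd = float(((cand - tmpl) ** 2).sum())
            if ssd < best:
                best, best_pt = ssd, (yy, xx)
    return best_pt

# Toy sequence: a bright blob shifted by (3, 2) pixels between frames.
prev = np.zeros((64, 64)); prev[30:34, 30:34] = 1.0
curr = np.zeros((64, 64)); curr[33:37, 32:36] = 1.0
print(track_point(prev, curr, (31, 31)))  # (34, 33)
```

Real sequences break this simple scheme exactly where the abstract says: occlusion, appearance change, and repetitive texture make the SSD minimum ambiguous, which is why stability of the extracted points matters.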
New techniques for imaging and analyzing lung tissue.
Roggli, V L; Ingram, P; Linton, R W; Gutknecht, W F; Mastin, P; Shelburne, J D
1984-01-01
The recent technological revolution in the field of imaging techniques has provided pathologists and toxicologists with an expanding repertoire of analytical techniques for studying the interaction between the lung and the various exogenous materials to which it is exposed. Analytical problems requiring elemental sensitivity or specificity beyond the range of that offered by conventional scanning electron microscopy and energy dispersive X-ray analysis are particularly appropriate for the application of these newer techniques. Electron energy loss spectrometry, Auger electron spectroscopy, secondary ion mass spectrometry, and laser microprobe mass analysis each offer unique advantages in this regard, but also possess their own limitations and disadvantages. Diffraction techniques provide crystalline structural information available through no other means. Bulk chemical techniques provide useful cross-checks on the data obtained by microanalytical approaches. It is the purpose of this review to summarize the methodology of these techniques, acknowledge situations in which they have been used in addressing problems in pulmonary toxicology, and comment on the relative advantages and disadvantages of each approach. It is necessary for an investigator to weigh each of these factors when deciding which technique is best suited for any given analytical problem; often it is useful to employ a combination of two or more of the techniques discussed. It is anticipated that there will be increasing utilization of these technologies for problems in pulmonary toxicology in the decades to come. PMID:6090115
Static sign language recognition using 1D descriptors and neural networks
NASA Astrophysics Data System (ADS)
Solís, José F.; Toxqui, Carina; Padilla, Alfonso; Santiago, César
2012-10-01
A framework for static sign language recognition using descriptors which represent 2D images as 1D data, together with artificial neural networks, is presented in this work. The 1D descriptors were computed by two methods: the first consists of a rotational correlation operator and the second is based on contour analysis of the hand shape. One of the main problems in sign language recognition is segmentation; most papers report using specially coloured gloves or backgrounds for hand shape analysis. In order to avoid the use of gloves or special clothing, a thermal imaging camera was used to capture images. Static signs for the digits 1 to 9 of American Sign Language were used; a multilayer perceptron reached 100% recognition with cross-validation.
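One common contour-analysis descriptor of this kind is the centroid-distance signature, which reduces a 2D silhouette to a fixed-length 1D vector a small neural network can classify. A sketch under that assumption (the paper's exact descriptor may differ):

```python
import numpy as np

def centroid_distance_signature(contour, samples=64):
    """1D shape descriptor: distances from the centroid to points
    sampled uniformly along a closed contour, normalised for scale."""
    contour = np.asarray(contour, dtype=float)
    c = contour.mean(axis=0)                       # contour centroid
    idx = np.linspace(0, len(contour) - 1, samples).astype(int)
    d = np.linalg.norm(contour[idx] - c, axis=1)
    return d / d.max()                             # scale invariance

# Hypothetical contour: a circle, whose signature is constant.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
sig = centroid_distance_signature(circle)
print(np.allclose(sig, 1.0))  # True
```

A hand silhouette produces a signature with peaks at the fingertips and valleys between fingers, which is what makes these 1D vectors separable by a multilayer perceptron.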
Midulla, Marco; Moreno, Ramiro; Baali, Adil; Chau, Ming; Negre-Salvayre, Anne; Nicoud, Franck; Pruvo, Jean-Pierre; Haulon, Stephan; Rousseau, Hervé
2012-10-01
In the last decade, there has been increasing interest in finding imaging techniques able to provide functional vascular imaging of the thoracic aorta. The purpose of this paper is to present an imaging method combining magnetic resonance imaging (MRI) and computational fluid dynamics (CFD) to obtain a patient-specific haemodynamic analysis of patients treated by thoracic endovascular aortic repair (TEVAR). MRI was used to obtain boundary conditions. MR angiography (MRA) was followed by cardiac-gated cine sequences which covered the whole thoracic aorta. Phase contrast imaging provided the inlet and outlet profiles. A CFD mesh generator was used to model the arterial morphology, and wall movements were imposed according to the cine imaging. CFD runs were processed using the finite volume (FV) method, assuming blood to be a homogeneous Newtonian fluid. Twenty patients (14 men; mean age 62.2 years) with different aortic lesions were evaluated. Four-dimensional mapping of velocity and wall shear stress was obtained, depicting different patterns of flow (laminar, turbulent, stenosis-like) and local alterations of parietal stress in-stent and along the native aorta. A computational method using a combined approach with MRI appears feasible and seems promising to provide detailed functional analysis of the thoracic aorta after stent-graft implantation. • Functional vascular imaging of the thoracic aorta offers new diagnostic opportunities • CFD can model vascular haemodynamics for clinical aortic problems • Combining CFD with MRI offers a patient-specific method of aortic analysis • Haemodynamic analysis of stent-grafts could improve clinical management and follow-up.
Research of generalized wavelet transformations of Haar correctness in remote sensing of the Earth
NASA Astrophysics Data System (ADS)
Kazaryan, Maretta; Shakhramanyan, Mihail; Nedkov, Roumen; Richter, Andrey; Borisova, Denitsa; Stankova, Nataliya; Ivanova, Iva; Zaharinova, Mariana
2017-10-01
In this paper, generalized Haar wavelet functions are applied to the problem of ecological monitoring by remote sensing of the Earth. We study generalized Haar wavelet series and suggest the use of Tikhonov's regularization method for investigating their correctness. In the solution of this problem, an important role is played by the classes of functions introduced and described in detail by I.M. Sobol for studying multidimensional quadrature formulas, which contain functions with rapidly convergent Haar wavelet series. A theorem on the stability and uniform convergence of the regularized sum of the generalized Haar wavelet series of a function from this class with approximate coefficients is proved. The article also examines the use of orthogonal transformations in Earth remote sensing technologies for environmental monitoring. Remote sensing of the Earth makes it possible to obtain imagery of medium and high spatial resolution from spacecraft and to conduct hyperspectral measurements; spacecraft carry tens or hundreds of spectral channels. To process the images, discrete orthogonal transforms, namely wavelet transforms, were used. The aim of the work is to apply the regularization method to one of the problems associated with remote sensing of the Earth and subsequently to process the satellite images with discrete orthogonal transformations, in particular generalized Haar wavelet transforms. General methods of research: Tikhonov's regularization method, elements of mathematical analysis, the theory of discrete orthogonal transformations, and methods for decoding satellite images are used. Scientific novelty: the processing of archival satellite images, in particular signal filtering, is investigated from the point of view of an ill-posed problem, and the regularization parameters for the discrete orthogonal transformations are determined.
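The classical (non-generalized) Haar transform that this work extends is a one-level split into pairwise averages and differences, an orthonormal map with perfect reconstruction. A minimal sketch of the classical case, for orientation only (the paper's generalized Haar wavelets and regularized summation go beyond this):

```python
import numpy as np

def haar_step(x):
    """One level of the discrete Haar transform: pairwise averages
    (approximation) and differences (detail), orthonormally scaled."""
    x = np.asarray(x, dtype=float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass coefficients
    return s, d

def haar_step_inv(s, d):
    """Inverse of one Haar level: perfect reconstruction."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
s, d = haar_step(signal)
rec = haar_step_inv(s, d)
print(np.allclose(rec, signal))  # True
```

Because the transform is orthonormal, energy is preserved between `signal` and `(s, d)`; filtering then amounts to damping selected coefficients before inversion, which is where the stability questions addressed by Tikhonov regularization arise when the coefficients are only known approximately.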
NASA Astrophysics Data System (ADS)
Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin
2017-01-01
We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.
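The correlation-analysis-based selection over autoencoder weight vectors can be read as keeping only hidden units whose learned filters are not near-duplicates of already-kept ones. A sketch under that reading (one plausible interpretation; the paper's exact rule may differ, and the weights here are synthetic):

```python
import numpy as np

def select_uncorrelated(weights, threshold=0.9):
    """Greedy selection of weight vectors (rows) whose absolute pairwise
    correlation with every already-kept vector stays below `threshold`."""
    corr = np.abs(np.corrcoef(weights))
    keep = []
    for i in range(len(weights)):
        if all(corr[i, j] < threshold for j in keep):
            keep.append(i)
    return keep

rng = np.random.default_rng(0)
w0 = rng.standard_normal(50)
weights = np.stack([w0,
                    w0 + 0.01 * rng.standard_normal(50),  # near-duplicate of w0
                    rng.standard_normal(50)])             # unrelated filter
selected = select_uncorrelated(weights)
print(selected)
```

Dropping redundant hidden units this way shrinks the number of convolution filters applied to the large target-domain images, which is the roughly 50% computational saving the abstract reports.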
Design and optimal control of multi-spacecraft interferometric imaging systems
NASA Astrophysics Data System (ADS)
Chakravorty, Suman
The objective of the proposed NASA Origins mission, Planet Imager, is the high-resolution imaging of exo-solar planets and similar high resolution astronomical imaging applications. The imaging is to be accomplished through the design of multi-spacecraft interferometric imaging systems (MSIIS). In this dissertation, we study the design of MSIIS. Assuming that the ultimate goal of imaging is the correct classification of the formed images, we formulate the design problem as minimization of some resource utilization of the system subject to the constraint that the probability of misclassification of any given image is below a pre-specified level. We model the process of image formation in an MSIIS and show that the Modulation Transfer function of and the noise corrupting the synthesized optical instrument are dependent on the trajectories of the constituent spacecraft. Assuming that the final goal of imaging is the correct classification of the formed image based on a given feature (a real valued function of the image variable), and a threshold on the feature, we find conditions on the noise corrupting the measurements such that the probability of misclassification is below some pre-specified level. These conditions translate into constraints on the trajectories of the constituent spacecraft. Thus, the design problem reduces to minimizing some resource utilization of the system, while satisfying the constraints placed on the system by the imaging requirements. We study the problem of designing minimum time maneuvers for MSIIS. We transform the time minimization problem into a "painting problem". The painting problem involves painting a large disk with smaller paintbrushes (coverage disks). We show that spirals form the dominant set for the solution to the painting problem. 
We frame the time minimization in the subspace of spirals and obtain a bilinear program, the double pantograph problem, in the design parameters of the spiral: the spiraling rate and the angular rate. We show that the solution of this problem is given by the solutions to two associated linear programs. We illustrate our results through a simulation in which the banded appearance of a fictitious exo-solar planet at a distance of 8 parsecs is detected.
Photogrammetry of the solar aureole
NASA Technical Reports Server (NTRS)
Deepak, A.
1978-01-01
This paper presents a photogrammetric analysis of the solar aureole for the purpose of making photographic sky radiance measurements for determining aerosol physical characteristics. A photograph is essentially a projection of a 3-D object space onto a 2-D image space. Photogrammetry deals with relations that exist between the object and the image spaces. The main problem of photogrammetry is the reconstruction of configurations in the object space by means of the image space data. It is shown that the almucantar projects onto the photographic plane as a conic section and the sun vertical as a straight line.
The Effects of Clock Drift on the Mars Exploration Rovers
NASA Technical Reports Server (NTRS)
Ali, Khaled S.; Vanelli, C. Anthony
2012-01-01
All clocks drift by some amount, and the mission clock on the Mars Exploration Rovers (MER) is no exception. The mission clock on both MER rovers drifted significantly since the rovers were launched, and it is still drifting on the Opportunity rover. The drift rate is temperature dependent. Clock drift causes problems for onboard behaviors and spacecraft operations, such as attitude estimation, driving, operation of the robotic arm, pointing for imaging, power analysis, and telecom analysis. The MER operations team has techniques to deal with some of these problems. There are a few techniques for reducing and eliminating the clock drift, but each has drawbacks. This paper presents an explanation of what is meant by clock drift on the rovers, its relationship to temperature, how we measure it, what problems it causes, how we deal with those problems, and techniques for reducing the drift.
Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri
2014-01-01
In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
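The server-side driver pattern described, Python gluing together heavy processing steps with multi-threaded execution and full logging, can be sketched in miniature as follows (all file names and the per-tile step are hypothetical stand-ins; in FARSIGHT the step would call into the C++ modules):

```python
import logging
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("pipeline")

def process_tile(path):
    """Stand-in for one per-tile step (mosaicking, artifact correction
    and segmentation would each be such a call in the real pipeline)."""
    log.info("processing %s", path)
    return path.stem, "ok"

def run_pipeline(tile_paths, workers=4):
    """Run every tile across a thread pool, logging each step --
    a toy version of the server-based batch driver described above."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(process_tile, tile_paths))

tiles = [Path(f"channel0_tile{i:03d}.tif") for i in range(8)]
results = run_pipeline(tiles)
print(len(results))  # 8
```

The logging side is not cosmetic at this scale: with 250 GB images and multi-day runs, the recorded step log is what makes partial re-runs and multi-user collaboration workable.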
Remote Sensing and Imaging Physics
2012-03-07
[Fragmentary slide text; report-form boilerplate removed.] The model analysis process uses a wire-frame shape model as a priori knowledge; no material BRDF library is employed in retrieval. Properties of local maxima in imaging estimation problems can be derived from the Kolmogorov model of atmospheric turbulence.
Chen, Qiang; Chen, Yunhao; Jiang, Weiguo
2016-01-01
In the field of multiple-feature Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features, and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple-feature OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experiments on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence and effectively avoid premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO achieves higher overall accuracy (84.17% and 83.59%) and Kappa coefficients (0.6771 and 0.6314) than the other algorithms. Moreover, the sensitivity analysis shows that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm do affect it. The comparison experiments reveal that RMV is more suitable than other functions as the fitness function of a GPSO-based feature selection algorithm. PMID:27483285
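As a rough illustration of PSO-driven feature selection with a mean-to-variance fitness, the sketch below runs a plain binary-decoded PSO on synthetic object features. The genetic crossover and mutation operators that distinguish GPSO, and the paper's exact RMV definition, are omitted; all parameters and the toy fitness are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature matrix: rows = image objects, columns = candidate features.
X = rng.normal(size=(100, 12))
X[:, :3] += 2.0            # three informative features with a raised mean

def rmv_fitness(mask):
    """Ratio of mean to variance over the selected feature columns
    (a simplified stand-in for the paper's RMV fitness)."""
    if not mask.any():
        return -np.inf
    sub = X[:, mask]
    return sub.mean() / (sub.var() + 1e-9)

n_particles, n_feat, iters = 20, X.shape[1], 40
pos = rng.random((n_particles, n_feat))          # continuous positions in [0, 1]
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.full(n_particles, -np.inf)
gbest, gbest_fit = None, -np.inf

for _ in range(iters):
    masks = pos > 0.5                            # binary decoding of positions
    fits = np.array([rmv_fitness(m) for m in masks])
    improved = fits > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
    if fits.max() > gbest_fit:
        gbest_fit, gbest = fits.max(), pos[fits.argmax()].copy()
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)

selected = np.flatnonzero(gbest > 0.5)           # indices of chosen features
```

With this toy fitness, subsets dominated by the high-mean columns score best, so the swarm tends to converge toward the three informative features.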
Biochemical imaging of tissues by SIMS for biomedical applications
NASA Astrophysics Data System (ADS)
Lee, Tae Geol; Park, Ji-Won; Shon, Hyun Kyong; Moon, Dae Won; Choi, Won Woo; Li, Kapsok; Chung, Jin Ho
2008-12-01
With the development of optimal surface cleaning techniques by cluster ion beam sputtering, certain applications of SIMS for analyzing cells and tissues have been actively investigated. For this report, we collaborated with biomedical scientists to study bio-SIMS analyses of skin and cancer tissues for biomedical diagnostics. We pay close attention to setting up a routine procedure for preparing tissue specimens and treating the surface before obtaining the bio-SIMS data. Bio-SIMS was used to study two biosystems: skin tissues, for understanding the effects of photoaging, and colon cancer tissues, for insight into the development of new cancer diagnostics. Time-of-flight SIMS imaging measurements were taken after surface cleaning with cluster ion bombardment by Bi_n or C60 under varying conditions. The imaging capability of bio-SIMS, with a spatial resolution of a few microns, combined with principal component analysis reveals biologically meaningful information, but the lack of high molecular weight peaks even with cluster ion bombardment was a problem. This, among other problems, shows that discourse with biologists and medical doctors is critical to glean meaningful information from SIMS mass spectrometric and imaging data. For SIMS to be accepted as a routine, daily analysis tool in biomedical laboratories, practical sample handling methodologies such as surface matrix treatment, including nano-metal particles and metal coating, in addition to cluster sputtering, should be studied.
Brain vascular image enhancement based on gradient adjust with split Bregman
NASA Astrophysics Data System (ADS)
Liang, Xiao; Dong, Di; Hui, Hui; Zhang, Liwen; Fang, Mengjie; Tian, Jie
2016-04-01
Light sheet microscopy (LSM) is a high-resolution fluorescence microscopy technique that makes it possible to observe the immunostained mouse brain vascular network clearly. However, micro-vessels are stained by few fluorescence antibodies and their signals are much weaker than those of large vessels, which makes micro-vessels unclear in LSM images. In this work, we developed a vascular image enhancement method to enhance micro-vessel details, which should be useful for vessel statistics analysis. Since the gradient describes the edge information of a vessel, the main idea of our method is to increase the gradient values of the enhanced image to improve micro-vessel contrast. Our method contains two steps: 1) calculate the gradient image of the LSM image, amplify high gradient values of the original image to enhance vessel edges, and suppress low gradient values to remove noise; we then formulate a new L1-norm regularization optimization problem to find an image with the expected gradient while keeping the main structural information of the original image. 2) The split Bregman iteration method is used to solve the L1-norm regularization problem and generate the final enhanced image. The main advantage of the split Bregman method is that it has both fast convergence and low memory cost. To verify the effectiveness of our method, we applied it to a series of mouse brain vascular images acquired from a commercial LSM system in our lab. The experimental results showed that our method can greatly enhance micro-vessel edges that were unclear in the original images.
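Step 1 of the scheme, amplifying strong gradients and suppressing weak ones, can be sketched as below on a synthetic "vessel" image. The thresholds and gains are invented for illustration, and the split Bregman reconstruction of step 2 is deliberately omitted:

```python
import numpy as np

def amplify_gradients(img, thresh=0.1, gain=2.0, suppress=0.3):
    """Boost strong gradients (vessel edges), damp weak ones (noise).
    thresh/gain/suppress are assumed values, not the paper's settings."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    scale = np.where(mag > thresh, gain, suppress)
    return gx * scale, gy * scale

# Synthetic "vessel": a bright ridge on a dark background plus mild noise.
rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[:, 30:34] = 1.0
img += 0.01 * rng.normal(size=img.shape)

gx2, gy2 = amplify_gradients(img)     # adjusted gradient field
gy, gx = np.gradient(img)             # original gradients for comparison
```

In the paper this adjusted gradient field becomes the target of an L1-regularized reconstruction problem; here it is simply returned as-is.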
Cost/benefit analysis for video security systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-01-01
Dr. Don Hush and Scott Chapman, in conjunction with the Electrical and Computer Engineering Department of the University of New Mexico (UNM), have been contracted by Los Alamos National Laboratory to perform research in the area of high security video analysis. The first phase of this research, presented in this report, is a cost/benefit analysis of various approaches to the problem in question. This discussion begins with a description of three architectures that have been used as solutions to the problem of high security surveillance. An overview of the relative merits and weaknesses of each of the proposed systems is included. These descriptions are followed directly by a discussion of the criteria chosen in evaluating the systems and the techniques used to perform the comparisons. The results are then given in graphical and tabular form, and their implications discussed. The project to this point has involved assessing hardware and software issues in image acquisition, processing and change detection. Future work is to leave these questions behind to consider the issues of change analysis - particularly the detection of human motion - and alarm decision criteria. The criteria for analysis in this report include: cost; speed; tradeoff issues in moving primitive operations from software to hardware; real time operation considerations; change image resolution; and computational requirements.
SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects
2014-01-01
Background Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. Results The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Conclusions Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. 
SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and corrections. Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems. PMID:24964954
SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects.
Kloster, Michael; Kauer, Gerhard; Beszteri, Bánk
2014-06-25
Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and corrections. 
Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems.
Digital Biomass Accumulation Using High-Throughput Plant Phenotype Data Analysis.
Rahaman, Md Matiur; Ahsan, Md Asif; Gillani, Zeeshan; Chen, Ming
2017-09-01
Biomass is an important phenotypic trait in functional ecology and growth analysis. The typical methods for measuring biomass are destructive, and they require numerous individuals to be cultivated for repeated measurements. With the advent of image-based high-throughput plant phenotyping facilities, non-destructive biomass measuring methods have attempted to overcome this problem. Thus, the estimation of the biomass of individual plants from their digital images is becoming more important. In this paper, we propose an approach to biomass estimation based on image-derived phenotypic traits. Several image-based biomass studies state that estimated plant biomass is simply a linear function of the projected plant area in images. However, we modeled plant volume as a function of plant area, plant compactness, and plant age to generalize the linear biomass model. The obtained results confirm the proposed model, which can explain most of the observed variance during image-derived biomass estimation. Moreover, only a small difference was observed between actual and estimated digital biomass, which indicates that our proposed approach can estimate digital biomass accurately.
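The generalized linear model described above, volume as a function of area, compactness, and age rather than area alone, can be illustrated with an ordinary least-squares fit on simulated traits. The coefficients, units, and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Simulated image-derived traits (illustrative units).
area        = rng.uniform(10, 100, n)     # projected plant area
compactness = rng.uniform(0.2, 0.9, n)    # e.g., area / convex-hull area
age         = rng.uniform(5, 40, n)       # days after sowing

# Assumed ground-truth generative model plus noise, standing in for
# destructively harvested biomass measurements.
biomass = 0.8 * area + 12.0 * compactness + 0.5 * age + rng.normal(0, 1.0, n)

# Generalized linear biomass model: fit all three traits plus an intercept.
X = np.column_stack([area, compactness, age, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((biomass - pred) ** 2) / np.sum((biomass - biomass.mean()) ** 2)
```

On this synthetic data the fit recovers the generating coefficients and explains nearly all of the variance, mirroring the paper's claim that the multi-trait model accounts for most of the observed variance.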
Shi, Peng; Zhong, Jing; Hong, Jinsheng; Huang, Rongfang; Wang, Kaijun; Chen, Yunbin
2016-08-26
Nasopharyngeal carcinoma (NPC) is a malignant neoplasm with high incidence in China and Southeast Asia. The Ki-67 protein is closely associated with cell proliferation and degree of malignancy. Cells with higher Ki-67 expression are generally sensitive to chemotherapy and radiotherapy, so assessing Ki-67 is beneficial to NPC treatment. It is still challenging to automatically analyze immunohistochemical Ki-67-stained nasopharyngeal carcinoma images because of the uneven color distributions across cell types. To solve this problem, an automated image processing pipeline based on clustering of local correlation features is proposed in this paper. Unlike traditional morphology-based methods, our algorithm segments cells by classifying image pixels on the basis of local pixel correlations in specifically selected color spaces, then characterizes cells with a set of grading criteria for reference in pathological analysis. Experimental results showed high accuracy and robustness in nucleus segmentation despite variance in the image data. The quantitative indicators obtained in this study provide reliable evidence for the analysis of Ki-67-stained nasopharyngeal carcinoma microscopic images, which should be helpful in related histopathological research.
Entropy based quantification of Ki-67 positive cell images and its evaluation by a reader study
NASA Astrophysics Data System (ADS)
Niazi, M. Khalid Khan; Pennell, Michael; Elkins, Camille; Hemminger, Jessica; Jin, Ming; Kirby, Sean; Kurt, Habibe; Miller, Barrie; Plocharczyk, Elizabeth; Roth, Rachel; Ziegler, Rebecca; Shana'ah, Arwa; Racke, Fred; Lozanski, Gerard; Gurcan, Metin N.
2013-03-01
Presence of Ki-67, a nuclear protein, is typically used to measure cell proliferation. The quantification of the Ki-67 proliferation index is performed visually by the pathologist; however, this is subject to inter- and intra-reader variability. Automated techniques utilizing digital image analysis by computers have emerged. The large variations in specimen preparation, staining, and imaging, as well as true biological heterogeneity of tumor tissue, often result in variable intensities in Ki-67-stained images. These variations affect the performance of currently developed methods. To optimize the segmentation of Ki-67-stained cells, one should define a data-dependent transformation that accounts for these color variations, instead of a fixed linear transformation to separate different hues. To address these issues in images of tissue stained with Ki-67, we propose a methodology that exploits the intrinsic properties of the CIE L*a*b* color space to translate this complex problem into an automatic entropy-based thresholding problem. The developed method was evaluated through two reader studies with pathology residents and expert hematopathologists. Agreement between the proposed method and the expert pathologists was good (CCC = 0.80).
Steganalysis based on reducing the differences of image statistical characteristics
NASA Astrophysics Data System (ADS)
Wang, Ran; Niu, Shaozhang; Ping, Xijian; Zhang, Tao
2018-04-01
Compared with the embedding process, image content has a more significant impact on the differences in image statistical characteristics. This makes image steganalysis a classification problem with larger within-class scatter distances and smaller between-class scatter distances. As a result, the steganalysis features become inseparable owing to the differences in image statistical characteristics. In this paper, a new steganalysis framework is proposed that reduces the differences in image statistical characteristics caused by varied content and processing methods. The given images are segmented into several sub-images according to texture complexity. Steganalysis features are extracted separately from each subset with the same or similar texture complexity to build a classifier. The final steganalysis result is obtained through a weighted fusion process. Theoretical analysis and experimental results demonstrate the validity of the framework.
Use of discrete chromatic space to tune the image tone in a color image mosaic
NASA Astrophysics Data System (ADS)
Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Zheng, Li
2003-09-01
Colour image processing is an important problem. However, the prevailing approach is to transfer the RGB colour space into another colour space, such as HSI (hue, saturation and intensity), YIQ or LUV. In fact, processing a colour airborne image in just one colour space may not be valid, because the electromagnetic wave is physically altered in every wave band, while the colour image is perceived through psychological vision. Therefore, it is necessary to propose an approach that accords with both the physical transformation and psychological perception. An analysis of how to use the relevant colour spaces to process colour airborne photographs is then discussed, and an application to tuning the image tone in a colour airborne image mosaic is introduced. As a practical matter, a complete approach to performing the mosaic on colour airborne images by taking full advantage of the relevant colour spaces is discussed in the application.
SKL algorithm based fabric image matching and retrieval
NASA Astrophysics Data System (ADS)
Cao, Yichen; Zhang, Xueqin; Ma, Guojian; Sun, Rongqing; Dong, Deping
2017-07-01
Intelligent computer image processing technology gives designers convenient tools for carrying out their designs. Shape analysis can be achieved by extracting SURF features. However, the high dimensionality of SURF features lowers matching speed. To solve this problem, this paper proposes a fast fabric image matching algorithm based on SURF features, K-means clustering, and the LSH algorithm. By constructing a bag of visual words with the K-means algorithm and forming a feature histogram for each image, the dimensionality of the SURF features is reduced in a first step. Then, with the help of the LSH algorithm, the features are encoded and the dimensionality is further reduced. In addition, indexes for each image and each class of images are created, and the number of candidate matches is decreased by the LSH hash buckets. Experiments on a fabric image database show that this algorithm speeds up the matching and retrieval process, and the results satisfy dress designers' requirements for accuracy and speed.
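The two-stage reduction described above (bag of visual words via k-means, then hash codes via LSH) can be sketched with random vectors standing in for SURF descriptors. Vocabulary size, code length, and descriptor dimension are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(data, k, iters=20):
    """Tiny k-means used to build the visual vocabulary."""
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(data[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = data[labels == j].mean(axis=0)
    return centers

def bow_histogram(desc, centers):
    """Quantize descriptors against the vocabulary -> normalized histogram."""
    d = np.linalg.norm(desc[:, None] - centers[None], axis=2)
    h = np.bincount(d.argmin(axis=1), minlength=len(centers)).astype(float)
    return h / h.sum()

# Fake 64-D "SURF" descriptors for a small database of 10 images.
descs = [rng.normal(size=(50, 64)) for _ in range(10)]
vocab = kmeans(np.vstack(descs), k=8)
hists = np.array([bow_histogram(d, vocab) for d in descs])

# LSH by random hyperplanes: each 8-bin histogram becomes a 16-bit code.
planes = rng.normal(size=(16, 8))
codes = hists @ planes.T > 0

def query(desc):
    """Hash the query, keep only hash-nearest candidates, then rank them."""
    h = bow_histogram(desc, vocab)
    code = h @ planes.T > 0
    ham = (codes != code).sum(axis=1)          # Hamming distances to codes
    cands = np.flatnonzero(ham <= ham.min())   # the LSH candidate bucket
    return min(cands, key=lambda i: np.linalg.norm(hists[i] - h))
```

The Hamming pre-filter is what decreases the number of full histogram comparisons, mirroring the hash-bucket pruning in the abstract.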
Efficient bias correction for magnetic resonance image denoising.
Mukherjee, Partha Sarathi; Qiu, Peihua
2013-05-30
Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. Copyright © 2012 John Wiley & Sons, Ltd.
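The Rician bias that motivates the paper can be demonstrated with a short Monte Carlo experiment: the observed magnitude is a nonlinear function of the true intensity and two independent Gaussian noise terms, and its mean overshoots the true intensity at low signal levels. This illustrates the bias only; it is not the paper's correction formula:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 1.0, 200_000

def observed_mean(s, sigma=sigma):
    """Mean observed magnitude under the Rician model:
    M = sqrt((s + n1)^2 + n2^2), with n1, n2 ~ N(0, sigma^2) independent."""
    n1 = rng.normal(0, sigma, n)
    n2 = rng.normal(0, sigma, n)
    return np.sqrt((s + n1) ** 2 + n2 ** 2).mean()

bias_dark   = observed_mean(0.0)           # true intensity 0: pure bias
bias_bright = observed_mean(20.0) - 20.0   # high SNR: nearly unbiased
```

At zero true intensity the expected magnitude is sigma * sqrt(pi/2), about 1.25 here, while at high SNR the bias nearly vanishes; this intensity-dependent offset is what reduces contrast after naive denoising.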
Signature detection and matching for document image retrieval.
Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan
2009-11-01
As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from clustered background is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as query in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Editor); Schenker, Paul (Editor)
1987-01-01
The papers presented in this volume provide an overview of current research in both optical and digital pattern recognition, with a theme of identifying overlapping research problems and methodologies. Topics discussed include image analysis and low-level vision, optical system design, object analysis and recognition, real-time hybrid architectures and algorithms, high-level image understanding, and optical matched filter design. Papers are presented on synthetic estimation filters for a control system; white-light correlator character recognition; optical AI architectures for intelligent sensors; interpreting aerial photographs by segmentation and search; and optical information processing using a new photopolymer.
Combination of Thin Lenses--A Computer Oriented Method.
ERIC Educational Resources Information Center
Flerackers, E. L. M.; And Others
1984-01-01
Suggests a method for treating geometric optics that uses a microcomputer to do the calculations of image formation. Calculations are based on the connection between the composition of lenses and the mathematics of fractional linear equations. The logic of the analysis and an example problem are included. (JM)
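The connection mentioned in the abstract can be made concrete with standard ray-transfer (ABCD) matrices: composing lenses and propagations multiplies 2x2 matrices, and the image condition is a fractional linear equation in the image distance. This is a generic sketch of that standard formalism, not the article's own program:

```python
import numpy as np

def propagate(d):
    """Free-space propagation over distance d (ray-transfer matrix)."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def image_distance(f, s_obj):
    """The system P(s_img) @ L(f) @ P(s_obj) images when its upper-right
    (B) element vanishes; solving B = 0 is a fractional linear equation."""
    core = thin_lens(f) @ propagate(s_obj)
    # B of P(s) @ core is core[0,1] + s * core[1,1]; set it to zero.
    return -core[0, 1] / core[1, 1]

# Single lens check against 1/s_obj + 1/s_img = 1/f:
si = image_distance(10.0, 30.0)            # expect s_img = 15

# Two thin lenses in contact combine like one lens with 1/f = 1/f1 + 1/f2;
# the combined power appears (negated) in the C element of the product.
combo = thin_lens(20.0) @ thin_lens(20.0)
```

Composing any chain of lenses and gaps is just more matrix multiplications, which is exactly what makes the problem well suited to a computer.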
Hyperspectral image analysis for water stress detection of apple trees
USDA-ARS?s Scientific Manuscript database
Plant stress significantly reduces plant productivity. Automated on-the-go mapping of plant stress would allow for a timely intervention and mitigation of the problem before critical thresholds are exceeded, thereby maximizing productivity. The spectral signature of plant leaves was analyzed by a ...
USDA-ARS?s Scientific Manuscript database
Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive....
The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.
Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre
2016-10-01
Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
A two-stage linear discriminant analysis via QR-decomposition.
Ye, Jieping; Li, Qi
2005-06-01
Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
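The two stages can be sketched as follows on a synthetic undersampled problem (dimension much larger than sample count, where classical LDA's scatter matrices are singular). This is an illustrative reading of the algorithm, QR of the class-centroid matrix followed by classical LDA in the reduced space, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# High-dimensional toy data: d >> n, three classes with shifted means.
d, per_class = 100, 15
means = [np.zeros(d), np.zeros(d), np.zeros(d)]
means[1][0], means[2][1] = 4.0, 4.0
X = np.vstack([m + rng.normal(size=(per_class, d)) for m in means])
y = np.repeat([0, 1, 2], per_class)

# Stage 1: QR decomposition of the small d x k class-centroid matrix.
C = np.stack([X[y == c].mean(axis=0) for c in range(3)], axis=1)
Q, _ = np.linalg.qr(C)             # orthonormal basis of the centroid span

# Stage 2: classical LDA inside the k-dimensional subspace span(Q).
Z = X @ Q                          # reduced data, n x k
Sw = sum(np.cov(Z[y == c].T, bias=True) * (y == c).sum() for c in range(3))
mu = Z.mean(axis=0)
Sb = sum((y == c).sum() * np.outer(Z[y == c].mean(0) - mu,
                                   Z[y == c].mean(0) - mu) for c in range(3))
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
W = Q @ evecs[:, np.argsort(evals)[::-1][:2]].real   # final d x 2 projection
```

Because the QR step involves only the k centroids rather than the full scatter matrices, the reduced within-class scatter is nonsingular and the whole procedure stays cheap, which is the efficiency argument the abstract makes.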
Lütz-Meindl, Ursula
2007-01-01
Energy filtering TEM (EFTEM) with modern spectrometers and software offers new possibilities for element analysis and image generation in plant cells. In the present review, applications of EFTEM in plant physiology, such as identification of light elements and ion transport, analyses of natural cell incrustations, determination of element exchange between fungi and rootlets during mycorrhiza development, heavy metal storage and detoxification, and employment in plant physiological experiments are summarized. In addition, it is demonstrated that EFTEM can be successfully used in more practical approaches, for example, in phytoremediation, food and wood industry, and agriculture. Preparation methods for plant material as prerequisites for EFTEM analysis are compared with respect to their suitability and technical problems are discussed.
NASA Astrophysics Data System (ADS)
Tang, Xiaoli; Lin, Tong; Jiang, Steve
2009-09-01
We propose a novel approach for potential online treatment verification using cine EPID (electronic portal imaging device) images for hypofractionated lung radiotherapy based on a machine learning algorithm. Hypofractionated radiotherapy requires high precision. It is essential to effectively monitor the target to ensure that the tumor is within the beam aperture. We modeled the treatment verification problem as a two-class classification problem and applied an artificial neural network (ANN) to classify the cine EPID images acquired during the treatment into corresponding classes—with the tumor inside or outside of the beam aperture. Training samples were generated for the ANN using digitally reconstructed radiographs (DRRs) with artificially added shifts in the tumor location—to simulate cine EPID images with different tumor locations. Principal component analysis (PCA) was used to reduce the dimensionality of the training samples and cine EPID images acquired during the treatment. The proposed treatment verification algorithm was tested on five hypofractionated lung patients in a retrospective fashion. On average, our proposed algorithm achieved a 98.0% classification accuracy, a 97.6% recall rate and a 99.7% precision rate. This work was first presented at the Seventh International Conference on Machine Learning and Applications, San Diego, CA, USA, 11-13 December 2008.
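The pipeline described, training images with artificial tumor shifts, PCA for dimensionality reduction, then a two-class classifier, can be sketched on synthetic patches. A one-neuron logistic model stands in for the paper's ANN, and all shapes, shifts, and rates are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for DRR training patches: class 0 = bright spot in
# place (tumor inside aperture), class 1 = spot shifted out of place.
def patch(shift):
    img = np.zeros((16, 16))
    img[6 + shift:10 + shift, 6:10] = 1.0
    return (img + 0.1 * rng.normal(size=img.shape)).ravel()

X = np.array([patch(0) for _ in range(40)] + [patch(5) for _ in range(40)])
y = np.array([0] * 40 + [1] * 40)

# PCA by SVD: keep a handful of components, as in the abstract.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

# Logistic-regression "one-neuron network" trained by gradient descent,
# a minimal stand-in for the ANN classifier.
w, b = np.zeros(5), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-np.clip(Z @ w + b, -30, 30)))
    g = p - y
    w -= 0.1 * Z.T @ g / len(y)
    b -= 0.1 * g.mean()

p = 1 / (1 + np.exp(-np.clip(Z @ w + b, -30, 30)))
acc = ((p > 0.5) == y).mean()
```

At treatment time, each cine EPID frame would be projected through the same PCA basis and scored by the trained classifier; here the two synthetic classes are cleanly separable, so training accuracy is essentially perfect.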
Maeda, Yoshiaki; Dobashi, Hironori; Sugiyama, Yui; Saeki, Tatsuya; Lim, Tae-kyu; Harada, Manabu; Matsunaga, Tadashi; Yoshino, Tomoko
2017-01-01
Detection and identification of microbial species are crucial in a wide range of industries, including production of beverages, foods, cosmetics, and pharmaceuticals. Traditionally, colony formation and its morphological analysis (e.g., size, shape, and color) with the naked eye have been employed for this purpose. However, such a conventional method is time consuming, labor intensive, and not very reproducible. To overcome these problems, we propose a novel method that detects microcolonies (diameter 10–500 μm) using a lensless imaging system. When colony images of five microorganisms from different genera (Escherichia coli, Salmonella enterica, Pseudomonas aeruginosa, Staphylococcus aureus, and Candida albicans) were compared, the images showed obviously different features. Being closely related species, S. aureus and S. epidermidis resembled each other, but the imaging analysis could extract substantial information (colony fingerprints), including morphological and physiological features, and linear discriminant analysis of the colony fingerprints distinguished these two species with 100% accuracy. Because this system may offer many advantages, such as high-throughput testing, lower costs, more compact equipment, and ease of automation, it holds promise for microbial detection and identification in various academic and industrial areas. PMID:28369067
Image analysis for material characterisation
NASA Astrophysics Data System (ADS)
Livens, Stefan
In this thesis, a number of image analysis methods are presented as solutions to two applications concerning the characterisation of materials. Firstly, we deal with the characterisation of corrosion images, which is handled using a multiscale texture analysis method based on wavelets. We propose a feature transformation that deals with the problem of rotation invariance. Classification is performed with a Learning Vector Quantisation neural network and with combination of outputs. In an experiment, 86.2% of the images showing either pit formation or cracking were correctly classified. Secondly, we develop an automatic system for the characterisation of silver halide microcrystals. These are flat crystals with a triangular or hexagonal base and a thickness in the 100 to 200 nm range. A light microscope is used to image them. A novel segmentation method is proposed, which makes it possible to separate agglomerated crystals. For the measurement of shape, the ratio between the largest and the smallest radius yields the best results. The thickness measurement is based on the interference colours that appear for light reflected by the crystals. The mean colour of different thickness populations is determined, from which a calibration curve is derived. With this, the thickness of new populations can be determined accurately.
Alignment error envelopes for single particle analysis.
Jensen, G J
2001-01-01
To determine the structure of a biological particle to high resolution by electron microscopy, image averaging is required to combine information from different views and to increase the signal-to-noise ratio. Starting from the number of noiseless views necessary to resolve features of a given size, four general factors are considered that increase the number of images actually needed: (1) the physics of electron scattering introduces shot noise, (2) thermal motion and particle inhomogeneity cause the scattered electrons to describe a mixture of structures, (3) the microscope system fails to usefully record all the information carried by the scattered electrons, and (4) image misalignment leads to information loss through incoherent averaging. The compound effect of factors 2-4 is approximated by the product of envelope functions. The problem of incoherent image averaging is developed in detail through derivation of five envelope functions that account for small errors in 11 "alignment" parameters describing particle location, orientation, defocus, magnification, and beam tilt. The analysis provides target error tolerances for single particle analysis to near-atomic (3.5 Å) resolution, and this prospect is shown to depend critically on image quality, defocus determination, and microscope alignment. Copyright 2001 Academic Press.
Object tracking using plenoptic image sequences
NASA Astrophysics Data System (ADS)
Kim, Jae Woo; Bae, Seong-Joon; Park, Seongjin; Kim, Do Hyung
2017-05-01
Object tracking is a very important problem in computer vision research. Among the difficulties of object tracking, the partial occlusion problem is one of the most serious and challenging. To address this problem, we propose novel approaches to object tracking on plenoptic image sequences. Our approaches take advantage of the refocusing capability that plenoptic images provide, taking as input sequences of focal stacks constructed from plenoptic image sequences. The proposed image selection algorithms select, from the sequence of focal stacks, the sequence of optimal images that maximizes tracking accuracy. A focus measure approach and a confidence measure approach were proposed for image selection, and both were validated by experiments using thirteen plenoptic image sequences that include heavily occluded target objects. The experimental results showed that the proposed approaches compared favorably with conventional 2D object tracking algorithms.
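A common focus measure of the kind a focus measure approach could use is the variance of the Laplacian; this sketch picks the sharpest slice of a toy focal stack (the checkerboard scene and blur kernels are illustrative, not the paper's data).

```python
import numpy as np

def variance_of_laplacian(img):
    # Discrete 4-neighbour Laplacian via circular shifts; a sharper image
    # has stronger high-frequency content and hence a larger variance.
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

# Build a toy "focal stack": one sharp checkerboard and two blurred copies.
sharp = np.indices((64, 64)).sum(axis=0) % 2 * 1.0

def box_blur(img, k):
    out = img.copy()
    for axis in (0, 1):
        out = sum(np.roll(out, s, axis) for s in range(-k, k + 1)) / (2 * k + 1)
    return out

stack = [box_blur(sharp, 2), sharp, box_blur(sharp, 1)]
scores = [variance_of_laplacian(img) for img in stack]
best = int(np.argmax(scores))
print("sharpest slice index:", best)
```

Applied per frame, such a measure selects the focal-stack slice on which the tracker then operates.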
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jurrus, Elizabeth R.; Hodas, Nathan O.; Baker, Nathan A.
Forensic analysis of nanoparticles is often conducted through the collection and identification of electron microscopy images to determine the origin of suspected nuclear material. Each image is carefully studied by experts for classification of materials based on texture, shape, and size. Manually inspecting large image datasets takes enormous amounts of time. However, automatic classification of large image datasets is a challenging problem due to the complexity involved in choosing image features, the lack of training data available for effective machine learning methods, and the availability of user interfaces to parse through images. Therefore, a significant need exists for automated and semi-automated methods to help analysts perform accurate image classification in large image datasets. We present INStINCt, our Intelligent Signature Canvas, as a framework for quickly organizing image data in a web-based canvas framework. Images are partitioned using small sets of example images, chosen by users, and presented in an optimal layout based on features derived from convolutional neural networks.
Submarine harbor navigation using image data
NASA Astrophysics Data System (ADS)
Stubberud, Stephen C.; Kramer, Kathleen A.
2017-01-01
The process of ingress and egress of a United States Navy submarine is a human-intensive process that requires numerous individuals to monitor locations and watch for hazards. Sailors pass vocal information to the bridge, where it is processed manually. There is interest in using video imaging of the periscope view to more automatically provide navigation within harbors and other points of ingress and egress. In this paper, video-based navigation is examined as a target-tracking problem. While some image-processing methods claim to provide range information, the moving-platform problem and weather concerns, such as fog, reduce the effectiveness of these range estimates. The video-navigation problem then becomes an angle-only tracking problem. Angle-only tracking is known to be fraught with difficulties, due to the fact that the unobservable space is not the null space. When a Kalman filter estimator is used to perform the tracking, significant errors can arise which could endanger the submarine. This work analyzes the performance of the Kalman filter when angle-only measurements are used to provide the target tracks. The paper addresses estimation unobservability and the minimal set of requirements needed to address it in this complex but real-world problem. Three major issues are addressed: knowledge of the locations of navigation beacons/landmarks, the minimal number of these beacons needed to maintain the course, and the update rates of the angles of the landmarks as the periscope rotates and landmarks become obscured due to blockage and weather. The goal is to address the problem of navigation to and from the docks, while maintaining the traversing of the harbor channel based on maritime rules, relying solely on the image-based data. For this effort, the image correlation from frame to frame is assumed to be achieved perfectly. Variation in the update rates and the dropping of data due to rotation and obscuration are considered. The analysis is based on a simple straight-line channel harbor entry to the dock, similar to a submarine entering the submarine port in San Diego.
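The geometry behind angle-only navigation with known landmarks can be sketched as a least-squares intersection of bearing lines; the beacon layout below is hypothetical, and noise, periscope rotation, and Kalman filtering are omitted.

```python
import numpy as np

# Hypothetical known beacon/landmark positions (x, y) along a harbor channel.
beacons = np.array([[0.0, 50.0], [200.0, -40.0], [400.0, 60.0]])
true_pos = np.array([120.0, 5.0])

# Periscope bearings: angle of each beacon as seen from the boat.
angles = np.arctan2(beacons[:, 1] - true_pos[1], beacons[:, 0] - true_pos[0])

# Each bearing constrains the boat to a line through the beacon:
#   sin(a) * x - cos(a) * y = sin(a) * bx - cos(a) * by
# Two non-collinear beacons suffice for a fix; a third adds redundancy
# against measurement noise.
A = np.column_stack([np.sin(angles), -np.cos(angles)])
b = A[:, 0] * beacons[:, 0] + A[:, 1] * beacons[:, 1]
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position:", est)
```

With noiseless bearings the three lines intersect in a single point, so the least-squares fix recovers the true position exactly.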
Content-based analysis of Ki-67 stained meningioma specimens for automatic hot-spot selection.
Swiderska-Chadaj, Zaneta; Markiewicz, Tomasz; Grala, Bartlomiej; Lorent, Malgorzata
2016-10-07
Hot-spot based examination of immunohistochemically stained histological specimens is one of the most important procedures in pathomorphological practice. The development of image acquisition equipment and computational units allows for the automation of this process. Moreover, many technical problems can occur in everyday histological material, which increases the complexity of the problem. Thus, a full context-based analysis of histological specimens is also needed in the quantification of immunohistochemically stained specimens. One of the most important reactions is the Ki-67 proliferation marker in meningiomas, the most frequent intracranial tumour. The aim of our study is to propose a context-based analysis of Ki-67 stained specimens of meningiomas for automatic selection of hot-spots. The proposed solution is based on textural analysis, mathematical morphology, feature ranking and classification, as well as on the proposed hot-spot gradual extinction algorithm, to allow for the proper detection of a set of hot-spot fields. The designed whole slide image processing scheme eliminates such artifacts as hemorrhages, folds or stained vessels from the region of interest. To validate the automatic results, a set of 104 meningioma specimens was selected and twenty hot-spots inside them were identified independently by two experts. The Spearman rho correlation coefficient was used to compare the results, which were also analyzed with the help of a Bland-Altman plot. The results show that most of the cases (84) were automatically examined properly, with at most two fields of view affected by a technical problem. Next, 13 had three such fields, and only seven specimens did not meet the requirement for automatic examination. Generally, the automatic system identifies hot-spot areas, and especially their maximum points, better. Analysis of the results confirms the very high concordance between an automatic Ki-67 examination and the experts' results, with a Spearman rho higher than 0.95. The proposed hot-spot selection algorithm, with an extended context-based analysis of whole slide images and the hot-spot gradual extinction algorithm, provides an efficient tool for simulating a manual examination. The presented results confirm that automatic examination of Ki-67 in meningiomas could be introduced in the near future.
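The Spearman rho used for validation is straightforward to compute from ranks; this sketch compares synthetic "expert" and "automatic" readings (the data are illustrative, not the study's).

```python
import numpy as np

def spearman_rho(x, y):
    # Rank-transform and apply the Pearson correlation to the ranks.
    # (Tied values would need average ranks; values here are distinct.)
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

rng = np.random.default_rng(3)
# Toy stand-ins for per-specimen Ki-67 indices: an "automatic" reading
# that is a monotone distortion of the "expert" value plus small noise.
expert = rng.uniform(5, 40, 20)
automatic = expert ** 1.5 + rng.normal(0, 1.0, 20)
rho = spearman_rho(expert, automatic)
print(f"Spearman rho: {rho:.3f}")
```

Because Spearman correlation depends only on ranks, a monotone distortion barely reduces it, which is why it suits comparing two measurement procedures.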
Spectral Unmixing Analysis of Time Series Landsat 8 Images
NASA Astrophysics Data System (ADS)
Zhuo, R.; Xu, L.; Peng, J.; Chen, Y.
2018-05-01
Temporal analysis of Landsat 8 images opens up new opportunities in the unmixing procedure. Although spectral analysis of time series Landsat imagery has its own advantages, it has rarely been studied. Nevertheless, using the temporal information can provide improved unmixing performance when compared to independent image analyses. Moreover, different land cover types may demonstrate different temporal patterns, which can aid discrimination between them. Therefore, this letter presents time series K-P-Means, a new solution to the problem of unmixing time series Landsat imagery. The proposed approach obtains "purified" pixels in order to achieve optimal unmixing performance. Vertex component analysis (VCA) is used to initialize the endmembers. First, nonnegative least squares (NNLS) is used to estimate abundance maps from the current endmembers. Then, each endmember is updated as the mean value of its "purified" pixels, where a purified pixel is the residual of a mixed pixel after excluding the contributions of all nondominant endmembers. Assembling the two main steps (abundance estimation and endmember update) into an iterative optimization framework yields the complete algorithm. Experiments using both simulated and real Landsat 8 images show that the proposed "joint unmixing" approach provides more accurate endmember and abundance estimates than a "separate unmixing" approach.
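The abundance estimation step (NNLS against a fixed endmember matrix) can be sketched with scipy; the endmember spectra below are made-up three-endmember, six-band values, not Landsat signatures.

```python
import numpy as np
from scipy.optimize import nnls

# Toy endmember signatures (columns): 6-band spectra of 3 "land covers".
E = np.array([[0.9, 0.1, 0.3],
              [0.8, 0.2, 0.3],
              [0.6, 0.3, 0.4],
              [0.4, 0.6, 0.5],
              [0.2, 0.8, 0.5],
              [0.1, 0.9, 0.6]])

true_abund = np.array([0.5, 0.3, 0.2])
pixel = E @ true_abund   # noiseless linearly mixed pixel

# Abundance estimation step: nonnegative least squares per pixel, then
# renormalisation so the abundances sum to one.
a, _ = nnls(E, pixel)
a /= a.sum()
print("estimated abundances:", np.round(a, 3))
```

In the full algorithm this per-pixel solve alternates with the endmember update until convergence.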
Shallow sea-floor reflectance and water depth derived by unmixing multispectral imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bierwirth, P.N.; Lee, T.J.; Burne, R.V.
1993-03-01
A major problem for mapping shallow water zones by the analysis of remotely sensed data is that contrast effects due to water depth obscure and distort the spectral nature of the substrate. This paper outlines a new method which unmixes the exponential influence of depth in each pixel by employing a mathematical constraint. This leaves a multispectral residual which represents relative substrate reflectance. Inputs to the process are the raw multispectral data and water attenuation coefficients derived by the co-analysis of known bathymetry and remotely sensed data. Outputs are substrate-reflectance images corresponding to the input bands and a greyscale depth image. The method has been applied in the analysis of Landsat TM data at Hamelin Pool in Shark Bay, Western Australia. Algorithm-derived substrate reflectance images for Landsat TM bands 1, 2, and 3 combined in color represent the optimum enhancement for mapping or classifying substrate types. As a result, this color image successfully delineated features which were obscured in the raw data, such as the distributions of sea-grasses, microbial mats, and sandy areas. 19 refs.
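A minimal sketch of removing the depth effect under a simplified exponential attenuation model (not the paper's exact constraint-based formulation, which also estimates depth): with known per-band attenuation coefficients and known depth, inverting the model leaves the substrate reflectance.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simplified two-flow model per band i (an illustrative assumption):
#   L_i = L_deep_i + (R_i - L_deep_i) * exp(-2 * k_i * z)
k = np.array([0.08, 0.12, 0.30])        # per-band attenuation coefficients
L_deep = np.array([0.05, 0.04, 0.02])   # optically deep water signal
z = rng.uniform(0.5, 8.0, 100)          # known depths (m) at 100 pixels
R = np.array([0.35, 0.30, 0.20])        # true substrate reflectance

L = L_deep + (R - L_deep) * np.exp(-2 * k * z[:, None])  # observed radiance

# Inverting the model "unmixes" the depth effect, leaving the relative
# substrate reflectance in every band.
R_hat = L_deep + (L - L_deep) * np.exp(2 * k * z[:, None])
print("recovered reflectance (band means):", R_hat.mean(axis=0).round(3))
```

In the noiseless case the inversion is exact; with real data, noise amplification grows with depth, which is why deep pixels carry little substrate information.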
Introduction to computer image processing
NASA Technical Reports Server (NTRS)
Moik, J. G.
1973-01-01
Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are also discussed.
Segmentation and learning in the quantitative analysis of microscopy images
NASA Astrophysics Data System (ADS)
Ruggiero, Christy; Ross, Amy; Porter, Reid
2015-02-01
In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing, and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real-world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. Recently, however, there has been increasing interest in using machine learning in segmentation to provide interactive tools that accumulate and learn from operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.
View synthesis using parallax invariance
NASA Astrophysics Data System (ADS)
Dornaika, Fadi
2001-06-01
View synthesis has become a focus of attention for both the computer vision and computer graphics communities. It consists of creating novel images of a scene as it would appear from novel viewpoints. View synthesis can be used in a wide variety of applications such as video compression, graphics generation, virtual reality and entertainment. This paper addresses the following problem: given a dense disparity map between two reference images, we would like to synthesize a novel view of the same scene associated with a novel viewpoint. Most of the existing work relies on building a set of 3D meshes which are then projected onto the new image (the rendering process is performed using texture mapping). The advantages of our view synthesis approach are as follows. First, the novel view is specified by a rotation and a translation, which are the most natural way to express the virtual location of the camera. Second, the approach is able to synthesize highly realistic images whose viewing position is significantly far away from the reference viewpoints. Third, the approach is able to handle the visibility problem during the synthesis process. Our framework has two main steps. The first step (analysis step) consists of computing the homography at infinity, the epipoles, and thus the parallax field associated with the reference images. The second step (synthesis step) consists of warping the reference image into a new one, based on the invariance of the computed parallax field. The analysis step works directly on the reference views, and needs to be performed only once. Examples of synthesizing novel views using either feature correspondences or a dense disparity map have demonstrated the feasibility of the proposed approach.
Chuan, He; Dishan, Qiu; Jin, Liu
2012-01-01
The cooperative scheduling problem on high-altitude airships for imaging observation tasks is discussed. A constraint programming model is established by analyzing the main constraints, taking the maximum task benefit and the minimum cruising distance as two optimization objectives. The cooperative scheduling problem of high-altitude airships is converted into a main problem and a subproblem by adopting a hierarchical architecture. The solution to the main problem constructs the preliminary matching between tasks and observation resources in order to reduce the search space of the original problem. Furthermore, the solution to the subproblem detects the key nodes that each airship needs to fly through in sequence, so as to obtain the cruising path. Firstly, the task set is divided by using the k-core neighborhood growth cluster algorithm (K-NGCA). Then, a novel swarm intelligence algorithm named the propagation algorithm (PA) is combined with the key node search algorithm (KNSA) to optimize the cruising path of each airship and determine the execution time interval of each task. Meanwhile, this paper also describes the realization of the above algorithms, with a detailed introduction to the encoding rules, search models, and propagation mechanism of the PA. Finally, the application results and comparison analysis show that the proposed models and algorithms are effective and feasible. PMID:23365522
NASA Astrophysics Data System (ADS)
Utomo, Edy Setiyo; Juniati, Dwi; Siswono, Tatag Yuli Eko
2017-08-01
The aim of this research was to describe the mathematical visualization process of junior high school students in solving contextual problems, based on cognitive style. The mathematical visualization process in this research was seen from the aspects of image generation, image inspection, image scanning, and image transformation. The research subjects were eighth-grade students, selected using the GEFT (Group Embedded Figures Test), adapted from Witkin, to determine each student's cognitive style category (field independent or field dependent), and who were communicative. The data were collected through a visualization test on a contextual problem and interviews. Validity was established through time triangulation. The data analysis referred to the aspects of mathematical visualization through the steps of categorization, reduction, discussion, and conclusion. The results showed that field-independent and field-dependent subjects differed in responding to contextual problems. The field-independent subject presented the problem in 2D and 3D forms, while the field-dependent subject presented it in 3D form. The two subjects also perceived the swimming pool differently: the field-independent subject viewed it from the top, while the field-dependent subject viewed it from the side. The field-independent subject chose a partition-object strategy, while the field-dependent subject chose a general-object strategy. Both subjects performed a transformation, an object rotation, to obtain the solution. This research is a reference for mathematics curriculum developers for junior high schools in Indonesia. Teachers could also develop students' mathematical visualization by using technology media or software, such as GeoGebra or portable Cabri, in learning.
A Markov model for blind image separation by a mean-field EM algorithm.
Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele
2006-02-01
This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited) and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even when the noise is space-variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated as well.
Velocity filtering applied to optical flow calculations
NASA Technical Reports Server (NTRS)
Barniv, Yair
1990-01-01
Optical flow is a method by which a stream of two-dimensional images obtained from a forward-looking passive sensor is used to map the three-dimensional volume in front of a moving vehicle. Passive ranging via optical flow is applied here to the helicopter obstacle-avoidance problem. Velocity filtering is used as a field-based method to determine range to all pixels in the initial image. The theoretical understanding and performance analysis of velocity filtering as applied to optical flow is expanded and experimental results are presented.
Three-Dimensional Medical Image Registration Using a Patient Space Correlation Technique
1991-12-01
...requirements (30:6). The context analysis for this development was conducted primarily to bound the image registration problem and to isolate the required... a series of 30 transverse slices. Each slice is composed of 240 voxels in the x-dimension and 164 voxels in the y-dimension. The dataset was provided
NASA Astrophysics Data System (ADS)
Zharinov, I. O.; Zharinov, O. O.
2017-12-01
This research quantitatively analyses the influence of technological variation in screen color profile parameters on the chromaticity coordinates of the displayed image. Mathematical expressions are proposed that approximate the two-dimensional distribution of the chromaticity coordinates of an image displayed on a screen with a three-component color formation principle. These expressions point the way toward correction techniques that improve the reproducibility of the colorimetric characteristics of displays.
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics in such a solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859
High contrast imaging through adaptive transmittance control in the focal plane
NASA Astrophysics Data System (ADS)
Dhadwal, Harbans S.; Rastegar, Jahangir; Feng, Dake
2016-05-01
High contrast imaging, in the presence of a bright background, is a challenging problem encountered in diverse applications ranging from the daily chore of driving into a sun-drenched scene to the in vivo use of biomedical imaging in various types of keyhole surgeries. Imaging in the presence of bright sources saturates the vision system, resulting in loss of scene fidelity, corresponding to low image contrast and reduced resolution. The problem is exacerbated in retro-reflective imaging systems where the light sources illuminating the object are unavoidably strong, typically masking the object features. This manuscript presents a novel theoretical framework, based on nonlinear analysis and adaptive focal plane transmittance, to selectively remove object-domain sources of background light from the image plane, resulting in local and global increases in image contrast. The background signal can either be of a global specular nature, giving rise to parallel illumination from the entire object surface, or can be represented by a mosaic of randomly orientated, small specular surfaces. The latter is more representative of real-world practical imaging systems. Thus, the background signal comprises groups of oblique rays corresponding to distributions of the mosaic surfaces. Through the imaging system, light from a group of like surfaces converges to a localized spot in the focal plane of the lens and then diverges to cast a localized bright spot in the image plane. Thus, the transmittance of a spatial light modulator, positioned in the focal plane, can be adaptively controlled to block a particular source of background light. Consequently, the image plane intensity is entirely due to the object features. Experimental image data is presented to verify the efficacy of the methodology.
Color model comparative analysis for breast cancer diagnosis using H and E stained images
NASA Astrophysics Data System (ADS)
Li, Xingyu; Plataniotis, Konstantinos N.
2015-03-01
Digital cancer diagnosis is a research realm where signal processing techniques are used to analyze and to classify color histopathology images. Different from grayscale image analysis of magnetic resonance imaging or X-ray, colors in histopathology images convey a large amount of histological information and thus play a significant role in cancer diagnosis. Though color information is widely used in histopathology work, to date there have been few studies on color model selection for feature extraction in cancer diagnosis schemes. This paper addresses the problem of color space selection for digital cancer classification using H and E stained images, and investigates the effectiveness of various color models (RGB, HSV, CIE L*a*b*, and the stain-dependent H and E decomposition model) in breast cancer diagnosis. In particular, we build a diagnosis framework as a comparison benchmark and take specific concerns of medical decision systems into account in the evaluation. The evaluation methodologies include feature discriminative power evaluation and final diagnosis performance comparison. Experimentation on a publicly accessible histopathology image set suggests that the H and E decomposition model outperforms the other assessed color spaces. As for the reasons behind the varying performance of the color spaces, our analysis via mutual information estimation demonstrates that color components in the H and E model are less dependent, and thus most feature discriminative power is collected in one channel instead of spreading out among channels as in other color spaces.
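The stain-dependent H and E decomposition can be sketched as Ruifrok–Johnston style color deconvolution in optical density space; the stain vectors and simulated pixels below are illustrative values, not calibrated stain profiles.

```python
import numpy as np

# Stain separation sketch: optical density is linear in stain
# concentrations, OD = -log10(I / I0) = C @ M, where the rows of M are
# (normalised) OD vectors of the H and E stains.  Values are illustrative.
M = np.array([[0.65, 0.70, 0.29],    # haematoxylin OD vector (RGB)
              [0.07, 0.99, 0.11]])   # eosin OD vector (RGB)
M = M / np.linalg.norm(M, axis=1, keepdims=True)

rng = np.random.default_rng(5)
C_true = rng.uniform(0.1, 1.0, (50, 2))   # per-pixel stain concentrations
OD = C_true @ M
rgb = 10.0 ** (-OD)                        # simulated H&E image pixels

# Deconvolution: least-squares inversion of the stain matrix recovers the
# per-channel H and E concentrations.
OD_obs = -np.log10(np.clip(rgb, 1e-6, 1.0))
C_hat = OD_obs @ np.linalg.pinv(M)
print("max concentration error:", np.abs(C_hat - C_true).max())
```

Because the two stain vectors are linearly independent, the pseudoinverse recovers the concentrations exactly in this noiseless simulation.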
Leveraging unsupervised training sets for multi-scale compartmentalization in renal pathology
NASA Astrophysics Data System (ADS)
Lutnick, Brendon; Tomaszewski, John E.; Sarder, Pinaki
2017-03-01
Clinical pathology relies on manual compartmentalization and quantification of biological structures, which is time consuming and often error-prone. Application of computer vision segmentation algorithms to histopathological image analysis, in contrast, can offer fast, reproducible, and accurate quantitative analysis to aid pathologists. Algorithms tunable to different biologically relevant structures can allow accurate, precise, and reproducible estimates of disease states. In this direction, we have developed a fast, unsupervised computational method for simultaneously separating all biologically relevant structures from histopathological images in multi-scale. Segmentation is achieved by solving an energy optimization problem. Representing the image as a graph, nodes (pixels) are grouped by minimizing a Potts model Hamiltonian, adopted from theoretical physics, where it models interacting electron spins. Pixel relationships (modeled as edges) are used to update the energy of the partitioned graph. By iteratively improving the clustering, the optimal number of segments is revealed. To reduce computational time, the graph is simplified using a Cantor pairing function to intelligently reduce the number of included nodes. The classified nodes are then used to train a multiclass support vector machine to apply the segmentation over the full image. Accurate segmentations of images with as many as 10^6 pixels can be completed in only 5 s, making multi-scale visualization attainable. To establish clinical potential, we employed our method on renal biopsies to quantitatively visualize, for the first time, scale-variant compartments of heterogeneous intra- and extraglomerular structures simultaneously. Implications of the utility of our method extend to fields such as oncology and genomics, and to non-biological problems.
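A data-plus-Potts energy, and a greedy label update in the spirit of energy minimization (iterated conditional modes here, not the paper's graph algorithm), can be sketched on a toy two-region image; class means and the coupling strength are assumed known for this sketch.

```python
import numpy as np

rng = np.random.default_rng(6)

# Tiny two-region "image": left half dark, right half bright, plus noise.
img = np.hstack([np.zeros((16, 8)), np.ones((16, 8))]) + rng.normal(0, 0.2, (16, 16))

means = np.array([0.0, 1.0])   # class means (assumed known here)
beta = 2.0                     # Potts interaction strength
labels = (img > 0.5).astype(int)

def potts_energy(labels, img):
    data = ((img - means[labels]) ** 2).sum()
    # Pairwise Potts term: penalise unequal 4-neighbour labels.
    pair = (labels[1:, :] != labels[:-1, :]).sum() + (labels[:, 1:] != labels[:, :-1]).sum()
    return data + beta * pair

# Iterated conditional modes: flip a pixel's label when it lowers the energy.
for _ in range(3):
    for i in range(16):
        for j in range(16):
            best, best_e = None, None
            for lab in (0, 1):
                labels[i, j] = lab
                e = potts_energy(labels, img)
                if best_e is None or e < best_e:
                    best, best_e = lab, e
            labels[i, j] = best

print("left-interior / right-interior label sums:",
      labels[:, :6].sum(), labels[:, 10:].sum())
```

The pairwise term smooths out isolated noisy labels, so the interior of each half converges to a single clean label.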
NASA Technical Reports Server (NTRS)
Kruse, Fred A.; Taranik, Dan L.; Kierein-Young, Kathryn S.
1988-01-01
Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data for sites in Nevada and Colorado were evaluated to determine their utility for mineralogical mapping in support of geologic investigations. Equal energy normalization is commonly used with imaging spectrometer data to reduce albedo effects. Spectra, profiles, and stacked, color-coded spectra were extracted from the AVIRIS data using an interactive analysis program (QLook), and these derivative data were compared to Airborne Imaging Spectrometer (AIS) results, field and laboratory spectra, and geologic maps. A feature extraction algorithm was used to extract and characterize absorption features from AVIRIS and laboratory spectra, allowing direct comparison of the position and shape of absorption features. Both muscovite and carbonate spectra were identified in the Nevada AVIRIS data by comparison with laboratory and AIS spectra, and an image was made that showed the distribution of these minerals for the entire site. In addition, distinctive spectra were located for an unknown mineral. For the two Colorado sites, the signal-to-noise problem was significantly worse and attempts to extract meaningful spectra were unsuccessful. Problems with the Colorado AVIRIS data were accentuated by the IAR reflectance technique because of moderate vegetation cover. Improved signal-to-noise performance and alternative calibration procedures will be required to produce satisfactory reflectance spectra from these data. Although the AVIRIS data were useful for mapping strong mineral absorption features and producing mineral maps at the Nevada site, it is clear that significant improvements to the instrument's performance are required before AVIRIS will be an operational instrument.
NASA Astrophysics Data System (ADS)
Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.
2018-01-01
Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus accurate processing of these data is required to get meaningful results from their analysis. Aims: In this paper we aim at developing an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at other wavelengths than Ca II K.
Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2012-01-01
Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures, Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ²), and GCV (which does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
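The SURE-type measures above need the Jacobian of the reconstruction operator, or at least its trace (the divergence). A standard way to approximate that trace without forming the Jacobian is a Monte Carlo perturbation scheme of the kind used in this line of work; the probe count and step size below are illustrative choices, and the sanity check uses a linear operator whose divergence is exactly trace(A):

```python
import numpy as np

def mc_divergence(f, y, eps=1e-4, n_probe=200, rng=0):
    """Estimate div_y f(y) = tr(Jacobian of f at y) via Monte Carlo:
    average b^T (f(y + eps*b) - f(y)) / eps over random probe vectors b."""
    rng = np.random.default_rng(rng)
    f0 = f(y)
    est = 0.0
    for _ in range(n_probe):
        b = rng.choice([-1.0, 1.0], size=y.shape)   # Rademacher probes
        est += b @ (f(y + eps * b) - f0) / eps
    return est / n_probe

# Sanity check on a linear "reconstruction" f(y) = A y
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
f = lambda y: A @ y
y = rng.standard_normal(50)
print(mc_divergence(f, y), np.trace(A))  # the two values should be close
```

For a nonlinear iterative reconstruction, `f` would be the full algorithm run to a fixed iteration count, which is exactly the quantity the paper differentiates analytically instead.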
NASA Astrophysics Data System (ADS)
Huang, Shih-Wei; Chen, Shih-Hua; Chen, Weichung; Wu, I.-Chen; Wu, Ming Tsang; Kuo, Chie-Tong; Wang, Hsiang-Chen
2016-03-01
This study presents a method to identify early esophageal cancer in endoscopy using hyperspectral imaging technology. The research samples are three kinds of endoscopic images, white-light, chromoendoscopic, and narrow-band imaging (NBI) endoscopic images, at different stages of pathological change (normal, dysplasia, dysplasia to esophageal cancer, and esophageal cancer). The research is divided into two parts: first, we analyzed the reflectance spectra of endoscopic images at different stages to characterize the spectral responses to pathological change. Second, we identified early cancerous lesions of the esophagus by principal component analysis (PCA) of the reflectance spectra of the endoscopic images. The results show that identification of early cancerous lesions can be achieved from all three kinds of images; the spectral characteristics of the NBI endoscopic images do not suffer from the gray-area overlap seen in the first two, and their trend is very clear. Therefore, if identification relies simply on differences in the reflectance spectra, chromoendoscopic images are suitable samples, while the best identification of early esophageal cancer is obtained from the NBI endoscopic images. Based on these results, the use of hyperspectral imaging technology in recognizing early esophageal cancer lesions in endoscopic images can help clinicians diagnose quickly. In the future, we hope to establish a hyperspectral imaging database from a relatively large number of endoscopic images using the system developed in this study, so that clinicians can use this repository for more efficient preliminary diagnosis.
Hyperspectral small animal fluorescence imaging: spectral selection imaging
NASA Astrophysics Data System (ADS)
Leavesley, Silas; Jiang, Yanan; Patsekin, Valery; Hall, Heidi; Vizard, Douglas; Robinson, J. Paul
2008-02-01
Molecular imaging is a rapidly growing area of research, fueled by needs in pharmaceutical drug-development for methods for high-throughput screening, pre-clinical and clinical screening for visualizing tumor growth and drug targeting, and a growing number of applications in the molecular biology fields. Small animal fluorescence imaging employs fluorescent probes to target molecular events in vivo, with a large number of molecular targeting probes readily available. The ease with which new targeting compounds can be developed, the short acquisition times, and the low cost (compared to microCT, MRI, or PET) make fluorescence imaging attractive. However, small animal fluorescence imaging suffers from high optical scattering, absorption, and autofluorescence. Many of these problems can be overcome through multispectral imaging techniques, which collect images at different fluorescence emission wavelengths, followed by analysis, classification, and spectral deconvolution methods to isolate signals from fluorescence emission. We present an alternative to the current method, using hyperspectral excitation scanning (spectral selection imaging), a technique that allows excitation at any wavelength in the visible and near-infrared wavelength range. In many cases, excitation imaging may be more effective at identifying specific fluorescence signals because of the higher complexity of the fluorophore excitation spectrum. Because the excitation is filtered and not the emission, the resolution limit and image shift imposed by acousto-optic tunable filters have no effect on imager performance. We will discuss design of the imager, optimizing the imager for use in small animal fluorescence imaging, and application of spectral analysis and classification methods for identifying specific fluorescence signals.
Image segmentation using association rule features.
Rushing, John A; Ranganath, Heggere; Hinke, Thomas H; Graves, Sara J
2002-01-01
A new type of texture feature based on association rules is described. Association rules have been used in applications such as market basket analysis to capture relationships present among items in large data sets. It is shown that association rules can be adapted to capture frequently occurring local structures in images. The frequency of occurrence of these structures can be used to characterize texture. Methods for segmentation of textured images based on association rule features are described. Simulation results using images consisting of man-made and natural textures show that association rule features perform well compared to other widely used texture features. Association rule features are used to detect cumulus cloud fields in GOES satellite images and are found to achieve higher accuracy than other statistical texture features for this problem.
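As a rough illustration of the idea (not the authors' exact feature definition), local windows can be treated as market-basket "transactions" of (pixel offset, quantized gray level) items, with the support of co-occurring item pairs serving as a texture descriptor:

```python
import numpy as np
from itertools import combinations
from collections import Counter

def pair_rule_support(img, levels=4, win=2):
    """Association-rule-style texture features: quantize the image, treat
    each win x win window as a 'transaction' of (offset, gray-level) items,
    and return the support (co-occurrence frequency) of each item pair.
    Frequent pairs capture recurring local structure."""
    q = np.minimum((img * levels).astype(int), levels - 1)
    counts, n = Counter(), 0
    H, W = q.shape
    for r in range(H - win + 1):
        for c in range(W - win + 1):
            items = [(i, q[r + i // win, c + i % win]) for i in range(win * win)]
            for pair in combinations(items, 2):
                counts[pair] += 1
            n += 1
    return {pair: cnt / n for pair, cnt in counts.items()}

# Vertically striped texture: "left pixel dark, right pixel bright"
# co-occurs in every window that starts on an even column.
stripes = np.tile([0.0, 0.9], (8, 4))   # 8x8 alternating columns
sup = pair_rule_support(stripes)
```

Two textures can then be compared (or a pixel classified) by the distance between their support vectors, which is the role these features play in the segmentation methods described above.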
Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique
NASA Astrophysics Data System (ADS)
Kalinovsky, A.; Liauchuk, V.; Tarasau, A.
2017-05-01
In this paper, the problem of automatic detection of tuberculosis lesion on 3D lung CT images is considered as a benchmark for testing out algorithms based on a modern concept of Deep Learning. For training and testing of the algorithms a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. The algorithms which are based on using Deep Convolutional Networks were implemented and applied in three different ways including slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using sliding window technique as well as straightforward detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.
A New Test Method of Circuit Breaker Spring Telescopic Characteristics Based Image Processing
NASA Astrophysics Data System (ADS)
Huang, Huimin; Wang, Feifeng; Lu, Yufeng; Xia, Xiaofei; Su, Yi
2018-06-01
This paper applies computer vision technology to the fatigue condition monitoring of springs: a new telescopic-characteristics test method based on image processing is proposed for the operating-mechanism spring of a circuit breaker. A high-speed camera captures spring movement image sequences while the high-voltage circuit breaker operates. An image-matching method is then used to obtain the deformation-time and speed-time curves, from which the spring expansion and deformation parameters are extracted, laying a foundation for subsequent spring force analysis and matching-state evaluation. Simulation tests at the experimental site show that this image-analysis method avoids the complex installation problems of traditional mechanical sensors and supports online monitoring and status assessment of the circuit breaker spring.
Luma-chroma space filter design for subpixel-based monochrome image downsampling.
Fang, Lu; Au, Oscar C; Cheung, Ngai-Man; Katsaggelos, Aggelos K; Li, Houqiang; Zou, Feng
2013-10-01
In general, subpixel-based downsampling can achieve higher apparent resolution of the down-sampled images on LCD or OLED displays than pixel-based downsampling. With the frequency domain analysis of subpixel-based downsampling, we discover special characteristics of the luma-chroma color transform choice for monochrome images. With these, we model the anti-aliasing filter design for subpixel-based monochrome image downsampling as a human visual system-based optimization problem with a two-term cost function and obtain a closed-form solution. One cost term measures the luminance distortion and the other term measures the chrominance aliasing in our chosen luma-chroma space. Simulation results suggest that the proposed method can achieve sharper down-sampled gray/font images compared with conventional pixel and subpixel-based methods, without noticeable color fringing artifacts.
[Imaging Mass Spectrometry in Histopathologic Analysis].
Yamazaki, Fumiyoshi; Seto, Mitsutoshi
2015-04-01
Matrix-assisted laser desorption/ionization (MALDI)-imaging mass spectrometry (IMS) enables visualization of the distribution of a range of biomolecules by integrating biochemical information from mass spectrometry with positional information from microscopy. IMS identifies a target molecule. In addition, IMS enables global analysis of biomolecules, including unknown molecules, by detecting the ratio of molecular weight to electric charge without any target, which makes it possible to identify novel molecules. IMS generates data on the distribution of lipids and small molecules in tissues, which is difficult to visualize with either conventional counter-staining or immunohistochemistry. In this review, we first introduce the principle of imaging mass spectrometry and recent advances in the sample preparation method. Second, we present findings regarding biological samples, especially pathological ones. Finally, we discuss the limitations and problems of the IMS technique and its clinical application, such as in drug development.
Application of image processing technology to problems in manuscript encapsulation. [Codex Hammer
NASA Technical Reports Server (NTRS)
Glackin, D. L.; Korsmo, E. P.
1983-01-01
The long-term effects of encapsulating individual sheets of the Codex Hammer were investigated. The manuscript was simulated with similar sheets of paper which were photographed under repeatable raking-light conditions to enhance their surface texture, encapsulated in plexiglas, cycled in an environmental test chamber, and rephotographed at selected intervals. The film images were digitized, contrast enhanced, geometrically registered, and apodized. An FFT analysis of a control sheet and two experimental sheets indicates no micro-burnishing, but reveals that the "mesoscale" deformations with sizes 8mm are degrading monotonically, which is of no concern. Difference image analysis indicates that the sheets were increasingly stressed with time and that the plexiglas did not provide a sufficient environmental barrier under the simulation conditions. The relationship of these results to the Codex itself is to be determined.
NASA Astrophysics Data System (ADS)
Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling
2018-01-01
We proposed a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extracted a number of raw patches from a given noisy image and took the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimation of the noise level. Next, the final estimation was directly obtained with a nonlinear mapping (rectification) function that was trained on representative noisy images corrupted with different known noise levels. Compared with state-of-the-art NLE algorithms, the experimental results show that the proposed NLE algorithm can reliably infer the noise level and has robust performance over a wide range of image contents and noise levels, showing a good compromise between speed and accuracy in general.
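The first step of the scheme, taking the smallest eigenvalue of the patch covariance as a preliminary noise estimate, can be sketched as follows. Patch size and sampling count are illustrative choices, and the paper's learned rectification step is omitted:

```python
import numpy as np

def pca_noise_level(img, patch=7, n_patches=3000, rng=0):
    """Preliminary PCA-based noise-level estimate: the smallest eigenvalue
    of the covariance of randomly drawn raw patches approximates the noise
    variance, since image structure concentrates in the leading components."""
    rng = np.random.default_rng(rng)
    H, W = img.shape
    rows = rng.integers(0, H - patch, n_patches)
    cols = rng.integers(0, W - patch, n_patches)
    X = np.stack([img[r:r + patch, c:c + patch].ravel()
                  for r, c in zip(rows, cols)])
    cov = np.cov(X, rowvar=False)
    return np.sqrt(max(np.linalg.eigvalsh(cov)[0], 0.0))  # smallest eigenvalue

# Smooth ramp image plus Gaussian noise of known sigma = 0.05;
# the estimate slightly underestimates with finite sampling.
rng = np.random.default_rng(1)
clean = np.linspace(0, 1, 256)[None, :] * np.ones((256, 1))
noisy = clean + rng.normal(0, 0.05, clean.shape)
print(pca_noise_level(noisy))
```

The systematic underestimation seen here is exactly the kind of bias the paper's trained rectification function is meant to correct.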
Rajpoot, Kashif; Grau, Vicente; Noble, J Alison; Becher, Harald; Szmigielski, Cezary
2011-08-01
Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis by dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor-quality nature of acquired images usually containing missing anatomical information, speckle noise, and limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced as acquiring multiple conventional single-view RT3DE images with small probe movements and fusing them together after alignment. This concept of multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We now take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were utilized to fully exploit image quality and anatomical information present in the image, thus purposely not including any high-level constraints like prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images. Copyright © 2011 Elsevier B.V. All rights reserved.
Formulation of image fusion as a constrained least squares optimization problem
Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge
2017-01-01
Abstract. Fusing a lower resolution color image with a higher resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm based on widely available robust and simple numerical methods that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885
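The paper's exact cost function is not given in the abstract; the deliberately minimal stand-in below only shows the flavor of pixel-level convex fusion, where a one-dimensional quadratic per pixel has a closed-form global minimizer:

```python
import numpy as np

def fuse_luma(mono, luma_up, lam=0.25):
    """Minimal pixel-wise convex fusion sketch (NOT the paper's exact
    formulation): choose fused luminance x minimizing, per pixel,
        (x - mono)^2 + lam * (x - luma_up)^2
    which has the closed-form minimizer returned below."""
    return (mono + lam * luma_up) / (1.0 + lam)

mono = np.array([[0.8, 0.2], [0.5, 0.9]])      # high-res monochrome
luma_up = np.array([[0.6, 0.6], [0.6, 0.6]])   # upsampled color luminance
fused = fuse_luma(mono, luma_up)
```

Because each pixel's subproblem is independent, the computation is embarrassingly parallel, which is the property the abstract highlights for its (more sophisticated) formulation.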
Less is More: Bigger Data from Compressive Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Andrew; Browning, Nigel D.
Compressive sensing approaches are beginning to take hold in (scanning) transmission electron microscopy (S/TEM) [1,2,3]. Compressive sensing is a mathematical theory about acquiring signals in a compressed form (measurements) and the probability of recovering the original signal by solving an inverse problem [4]. The inverse problem is underdetermined (more unknowns than measurements), so it is not obvious that recovery is possible. Compression is achieved by taking inner products of the signal with measurement weight vectors. Both Gaussian random weights and Bernoulli (0,1) random weights form a large class of measurement vectors for which recovery is possible. The measurements can also be designed through an optimization process. The key insight for electron microscopists is that compressive sensing can be used to increase acquisition speed and reduce dose. Building on work initially developed for optical cameras, this new paradigm will allow electron microscopists to solve more problems in the engineering and life sciences. We will be collecting orders of magnitude more data than previously possible. The reason is that we will have increased temporal/spatial/spectral sampling rates, and we will be able to interrogate larger classes of samples that were previously too beam-sensitive to survive the experiment. For example, consider an in-situ experiment that takes 1 minute. With traditional sensing, we might collect 5 images per second for a total of 300 images. With compressive sensing, each of those 300 images can be expanded into 10 more images, making the collection rate 50 images per second and the decompressed data a total of 3000 images [3]. But what are the implications, in terms of data, for this new methodology? Acquisition of compressed data will require downstream reconstruction to be useful.
The reconstructed data will be much larger than traditional data; we will need space to store the reconstructions during analysis, and the computational demands for analysis will be higher. Moreover, there will be time costs associated with reconstruction. Deep learning [5] is an approach to address these problems. Deep learning is a hierarchical approach to finding useful (for a particular task) representations of data. Each layer of the hierarchy is intended to represent higher levels of abstraction. For example, a deep model of faces might have sinusoids, edges, and gradients in the first layer; eyes, noses, and mouths in the second layer; and faces in the third layer. There has been significant effort recently in deep learning algorithms for tasks beyond image classification, such as compressive reconstruction [6] and image segmentation [7]. A drawback of deep learning, however, is that training the model requires large datasets and dedicated computational resources (to reduce training time to a few days). A second issue is that deep learning is not user-friendly and the meaning behind the results is usually not interpretable. We have shown it is possible to reduce the dataset size while maintaining model quality [8] and have developed interpretable models for image classification [9], but the demands are still significant. The key to addressing these problems is to NOT reconstruct the data. Instead, we should design computational sensors that give answers to specific problems. A simple version of this idea is compressive classification [10], where the goal is to classify signal type from a small number of compressed measurements. Classification is a much simpler problem than reconstruction, so (1) far fewer measurements will be necessary, and (2) these measurements will probably not be useful for reconstruction. Other simple examples of computational sensing include determining object volume or the number of objects present in the field of view [11].
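The measurement model described above (inner products with Gaussian random weight vectors) and a standard recovery solver can be sketched as follows. ISTA is used here as a generic l1 solver and is an illustrative choice, not necessarily the reconstruction used in the cited work:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Sparse recovery from compressed measurements y = A x by ISTA
    (iterative soft-thresholding) on l1-regularized least squares."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 50, 4                       # signal dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 3.0
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement weights
y = A @ x_true                             # compressed acquisition: 50 numbers for a 100-sample signal
x_hat = ista(A, y)
```

The system is underdetermined (50 equations, 100 unknowns), yet the sparse signal is recovered, which is the core claim of compressive sensing summarized in the abstract.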
Mixed Membership Distributions with Applications to Modeling Multiple Strategy Usage
ERIC Educational Resources Information Center
Galyardt, April
2012-01-01
This dissertation examines two related questions. "How do mixed membership models work?" and "Can mixed membership be used to model how students use multiple strategies to solve problems?". Mixed membership models have been used in thousands of applications from text and image processing to genetic microarray analysis. Yet…
The Alchemy of Mathematical Experience: A Psychoanalysis of Student Writings.
ERIC Educational Resources Information Center
Early, Robert E.
1992-01-01
Shares a psychological look at student images of mathematical learning and problem solving through students' writings about mathematical experiences. The analysis is done from a Jungian psychoanalytic orientation with the goal of assisting students develop a deeper perspective from which to view their mathematics experience. (MDH)
Using MATLAB software with Tomcat server and Java platform for remote image analysis in pathology.
Markiewicz, Tomasz
2011-03-30
Matlab is one of the most advanced development tools for applications in engineering practice. From our point of view the most important part is the image processing toolbox, offering many built-in functions, including mathematical morphology, and implementations of many artificial neural networks. It is a very popular platform for creating specialized programs for image analysis, including in pathology. Based on the latest version of the Matlab Builder Java toolbox, it is possible to create software serving as a remote system for image analysis in pathology via internet communication. The internet platform can be realized based on JavaServer Pages (JSP) with the Tomcat server as the servlet container. In the presented software implementation we propose remote image analysis realized by Matlab algorithms. These algorithms can be compiled to an executable jar file with the help of the Matlab Builder Java toolbox. The Matlab function must be declared with a set of input data, an output structure with numerical results, and a Matlab web figure. Any function prepared in this manner can be used as a Java function in JSP. The graphical user interface providing the input data and displaying the results (also in graphical form) must be implemented in JSP. Additionally, data storage to a database can be implemented within the algorithm written in Matlab, with the help of the Matlab Database Toolbox, directly alongside the image processing. The complete JSP page can be run by the Tomcat server. The proposed tool for remote image analysis was tested on the Computerized Analysis of Medical Images (CAMI) software developed by the author. The user provides the image and case information (diagnosis, staining, image parameters, etc.). When analysis is initialized, the input data and image are sent to the servlet on Tomcat. When analysis is done, the client obtains the graphical results as an image with the recognized cells marked, as well as the quantitative output.
Additionally, the results are stored in a server database. The internet platform was tested on a PC Intel Core2 Duo T9600 2.8 GHz, 4 GB RAM server with 768x576 pixel, 1.28 MB tiff-format images referring to meningioma tumour (x400, Ki-67/MIB-1). The time consumption was as follows: analysis by CAMI locally on the server took 3.5 seconds; remote analysis took 26 seconds, of which 22 seconds were used for data transfer via the internet connection. For a jpg-format image (102 KB) the time was reduced to 14 seconds. The results confirmed that the designed remote platform can be useful for pathology image analysis. The time consumption depends mainly on the image size and the speed of the internet connection. The presented implementation can be used for many types of analysis with different staining, tissue, and morphometry approaches, etc. A significant remaining problem is implementing the JSP page in multithreaded form so that it can be used in parallel by many users. The presented platform for image analysis in pathology can be especially useful for small laboratories without their own image analysis systems.
Using machine learning techniques to automate sky survey catalog generation
NASA Technical Reports Server (NTRS)
Fayyad, Usama M.; Roden, J. C.; Doyle, R. J.; Weir, Nicholas; Djorgovski, S. G.
1993-01-01
We describe the application of machine classification techniques to the development of an automated tool for the reduction of a large scientific data set. The 2nd Palomar Observatory Sky Survey provides comprehensive photographic coverage of the northern celestial hemisphere. The photographic plates are being digitized into images containing on the order of 10^7 galaxies and 10^8 stars. Since the size of this data set precludes manual analysis and classification of objects, our approach is to develop a software system which integrates independently developed techniques for image processing and data classification. Image processing routines are applied to identify and measure features of sky objects. Selected features are used to determine the classification of each object. GID3* and O-BTree, two inductive learning techniques, are used to automatically learn classification decision trees from examples. We describe the techniques used, the details of our specific application, and the initial encouraging results which indicate that our approach is well-suited to the problem. The benefits of the approach are increased data reduction throughput, consistency of classification, and the automated derivation of classification rules that will form an objective, examinable basis for classifying sky objects. Furthermore, astronomers will be freed from the tedium of an intensely visual task to pursue more challenging analysis and interpretation problems given automatically cataloged data.
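Inductive decision-tree learners such as GID3* grow trees by repeatedly choosing the feature split that maximizes information gain; a minimal sketch of that criterion follows (the actual systems differ in detail, and the "area" feature and classes are invented for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a class-label list, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_split(feature, labels):
    """Pick the threshold on one measured feature that maximizes
    information gain: entropy before the split minus the weighted
    average entropy of the two resulting subsets."""
    base = entropy(labels)
    best_t, best_gain = None, -1.0
    for t in sorted(set(feature))[:-1]:
        left = [l for f, l in zip(feature, labels) if f <= t]
        right = [l for f, l in zip(feature, labels) if f > t]
        gain = base - (len(left) * entropy(left)
                       + len(right) * entropy(right)) / len(labels)
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain

# Hypothetical sky-object feature: small image area -> star, large -> galaxy
area = [1.0, 1.2, 1.1, 5.0, 6.2, 5.5]
kind = ['star', 'star', 'star', 'galaxy', 'galaxy', 'galaxy']
t, gain = best_split(area, kind)
```

Applied recursively to the measured object features, this greedy criterion produces the examinable classification rules the abstract describes.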
GPR image analysis to locate water leaks from buried pipes by applying variance filters
NASA Astrophysics Data System (ADS)
Ocaña-Levario, Silvia J.; Carreño-Alvarado, Elizabeth P.; Ayala-Cabrera, David; Izquierdo, Joaquín
2018-05-01
Nowadays, there is growing interest in controlling and reducing the amount of water lost through leakage in water supply systems (WSSs). Leakage is, in fact, one of the biggest problems faced by the managers of these utilities. This work addresses the problem of leakage in WSSs by using GPR (Ground Penetrating Radar) as a non-destructive method. The main objective is to identify and extract features such as leaks and components from GPR images taken under controlled laboratory conditions, using a methodology based on second-order statistical parameters, and, from the obtained features, to create 3D models that allow quick visualization of components and leaks in WSSs from GPR image analysis and subsequent interpretation. This methodology has been used before in other fields and provided promising results. The results obtained with the proposed methodology are presented, analyzed, interpreted, and compared with the results obtained using a well-established multi-agent-based methodology. These results show that the variance filter is capable of highlighting the characteristics of components and anomalies in an intuitive manner, so that they can be identified by non-highly-qualified personnel using the 3D models we develop. This research intends to pave the way towards future intelligent detection systems that enable the automatic detection of leaks in WSSs.
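The second-order statistical parameter at the heart of the methodology, a sliding-window variance filter, can be sketched directly (the window size is an illustrative choice):

```python
import numpy as np

def variance_filter(img, win=3):
    """Local (sliding-window) variance over each win x win neighbourhood,
    computed on the valid region: var = E[x^2] - E[x]^2. Flat background
    gives zero response; anomalies and interfaces give high response."""
    H, W = img.shape
    out = np.zeros((H - win + 1, W - win + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = img[r:r + win, c:c + win].var()
    return out

# Flat background with one bright anomaly: the variance response
# peaks only in the windows that contain the anomaly.
img = np.zeros((9, 9))
img[4, 4] = 1.0
v = variance_filter(img)
```

Stacking such responses for each depth slice of the radargram is one straightforward way to build the 3D visualizations described above.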
Qian, Zhi-Ming; Wang, Shuo Hong; Cheng, Xi En; Chen, Yan Qiu
2016-06-23
Fish tracking is an important step for video-based analysis of fish behavior. Due to severe body deformation and mutual occlusion of multiple swimming fish, accurate and robust fish tracking from a video image sequence is a highly challenging problem. Current tracking methods based on motion information are not accurate and robust enough to track the waving body and to handle occlusion. In order to better overcome these problems, we propose a multiple fish tracking method based on fish head detection. The shape and gray-scale characteristics of the fish image are employed to locate the fish head position. For each detected fish head, we utilize the gray distribution of the head region to estimate the fish head direction. Both the position and direction information from fish detection are then combined to build a cost function of fish swimming. Based on the cost function, a global optimization method can be applied to associate targets between consecutive frames. Results show that our method can accurately detect the position and direction of the fish head, and has good tracking performance for dozens of fish. The proposed method can successfully obtain the motion trajectories of dozens of fish so as to provide more precise data to accommodate systematic analysis of fish behavior.
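The frame-to-frame association step described above can be sketched as a cost matrix combining head-position distance and heading difference, minimized globally with the Hungarian algorithm. The detections and the weight `w_dir` trading heading error against distance are assumptions; the paper's exact cost function is not reproduced here:

```python
# Sketch: associate fish-head detections between two consecutive frames by
# global minimization of a position+direction cost (Hungarian algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment

# (x, y, heading-in-radians) for detected fish heads in two frames (synthetic).
prev = np.array([[10.0, 10.0, 0.0], [50.0, 50.0, 1.5], [80.0, 20.0, 3.0]])
curr = np.array([[51.0, 49.0, 1.4], [11.0, 12.0, 0.1], [79.0, 22.0, 3.1]])

w_dir = 5.0   # assumed weight on heading mismatch
dist = np.linalg.norm(prev[:, None, :2] - curr[None, :, :2], axis=2)
ddir = np.abs(prev[:, None, 2] - curr[None, :, 2])
cost = dist + w_dir * np.minimum(ddir, 2 * np.pi - ddir)  # wrap-around angle

rows, cols = linear_sum_assignment(cost)   # globally optimal association
matches = dict(zip(rows, cols))
```

Because the assignment is solved globally rather than greedily per fish, nearby detections are disambiguated jointly, which is what makes the method robust when fish cross paths.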
NASA Technical Reports Server (NTRS)
Roscoe, Stanley N.
1989-01-01
For better or worse, virtual imaging displays are with us in the form of narrow-angle combining-glass presentations, head-up displays (HUD), and head-mounted projections of wide-angle sensor-generated or computer-animated imagery (HMD). All military and civil aviation services and a large number of aerospace companies are involved in one way or another in a frantic competition to develop the best virtual imaging display system. The success or failure of major weapon systems hangs in the balance, and billions of dollars in potential business are at stake. Because of the degree to which national defense is committed to the perfection of virtual imaging displays, a brief consideration of their status, an investigation and analysis of their problems, and a search for realistic alternatives are long overdue.
Different methods of image segmentation in the process of meat marbling evaluation
NASA Astrophysics Data System (ADS)
Ludwiczak, A.; Ślósarz, P.; Lisiak, D.; Przybylak, A.; Boniecki, P.; Stanisz, M.; Koszela, K.; Zaborowicz, M.; Przybył, K.; Wojcieszak, D.; Janczak, D.; Bykowska, M.
2015-07-01
Assessment of the level of marbling in meat based on digital images is very popular, as computer vision tools are becoming more and more advanced. However, when muscle cross-sections are used as the data source for marbling level evaluation, there are still a few problems to cope with. There is a need for an accurate method which would facilitate this evaluation procedure and increase its accuracy. The presented research was conducted in order to compare the effect of different image segmentation tools, considering their usefulness in evaluating meat marbling on anatomical cross-sections of muscle. However, this study is considered an initial trial in the presented field of research and an introduction to ultrasonic image processing and analysis.
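One representative segmentation tool in such a comparison is Otsu's global threshold, which separates bright intramuscular fat from darker muscle. The sketch below implements it directly in NumPy on a synthetic grayscale cross-section; the intensity distributions are assumptions, and the paper compares several tools, not only this one:

```python
# Sketch: Otsu thresholding of a synthetic muscle cross-section to isolate
# bright "marbling" pixels. Intensities are made-up stand-ins.
import numpy as np

rng = np.random.default_rng(2)
img = rng.normal(60, 8, (100, 100))                 # dark muscle background
img[40:50, 40:50] = rng.normal(200, 8, (10, 10))    # bright fat fleck (1%)
img = np.clip(img, 0, 255).astype(np.uint8)

def otsu_threshold(image):
    """Threshold maximizing between-class variance over the 8-bit histogram."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

t = otsu_threshold(img)
fat_fraction = (img > t).mean()             # estimated marbling area fraction
```

The marbling score then follows from the fat fraction; comparing such simple global methods against region- or edge-based segmenters is the kind of evaluation the abstract describes.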
Optical design of multi-multiple expander structure of laser gas analysis and measurement device
NASA Astrophysics Data System (ADS)
Fu, Xiang; Wei, Biao
2018-03-01
In the installation and debugging of the optical path structure for distributed laser gas analysis and measurement of carbon monoxide, there are difficult key technical problems. Based on three-component beam expansion theory, a multi-multiple expander structure with expansion ratios of 4, 5, 6 and 7 is adopted in the absorption chamber to enhance the adaptability of the gas analysis and measurement device to its installation environment. According to the basic theory of aberration, the optimal design of the multi-multiple beam expander structure is carried out. Using an image quality evaluation method, the difference in image quality under different magnifications is analyzed. The results show that the optical quality of the optical system with the beam expander structure is best when the expansion ratio is 5-7.
Real Time Intelligent Target Detection and Analysis with Machine Vision
NASA Technical Reports Server (NTRS)
Howard, Ayanna; Padgett, Curtis; Brown, Kenneth
2000-01-01
We present an algorithm for detecting a specified set of targets for an Automatic Target Recognition (ATR) application. ATR involves processing images for detecting, classifying, and tracking targets embedded in a background scene. We address the problem of discriminating between targets and nontarget objects in a scene by evaluating 40x40 image blocks belonging to an image. Each image block is first projected onto a set of templates specifically designed to separate images of targets embedded in a typical background scene from those background images without targets. These filters are found using directed principal component analysis which maximally separates the two groups. The projected images are then clustered into one of n classes based on a minimum distance to a set of n cluster prototypes. These cluster prototypes have previously been identified using a modified clustering algorithm based on prior sensed data. Each projected image pattern is then fed into the associated cluster's trained neural network for classification. A detailed description of our algorithm will be given in this paper. We outline our methodology for designing the templates, describe our modified clustering algorithm, and provide details on the neural network classifiers. Evaluation of the overall algorithm demonstrates that our detection rates approach 96% with a false positive rate of less than 0.03%.
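The middle stages of this pipeline, projecting a 40x40 image block onto a template bank and assigning it to the nearest of n cluster prototypes, can be sketched as below. The templates and prototypes here are random stand-ins for the ones the paper learns with directed principal component analysis and its modified clustering algorithm:

```python
# Sketch: project an image block onto learned templates, then pick the
# nearest cluster prototype. Templates/prototypes are random placeholders.
import numpy as np

rng = np.random.default_rng(3)
templates = rng.normal(size=(8, 40 * 40))    # stand-in projection filters
prototypes = rng.normal(size=(5, 8))         # stand-in for n = 5 cluster centers

def project_and_cluster(block):
    feat = templates @ block.ravel()         # 1600-dim block -> 8-dim feature
    d = np.linalg.norm(prototypes - feat, axis=1)
    return int(np.argmin(d)), feat           # minimum-distance cluster

block = rng.normal(size=(40, 40))
cluster_id, feature = project_and_cluster(block)
```

In the full system each cluster then routes the projected feature to its own trained neural network for the final target/non-target decision.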
Tsukamoto, Takafumi; Yasunaga, Takuo
2014-11-01
Eos (Extensible object-oriented system) is a powerful application for image processing of electron micrographs. Eos normally works only through character user interfaces (CUI) under operating systems (OS) such as OS X or Linux, which is not user-friendly. Users of Eos therefore need to be expert at image processing of electron micrographs, and to have some knowledge of computer science as well. However, not everyone who needs Eos is an expert with a CUI. We therefore extended Eos into an OS-independent web system with graphical user interfaces (GUI) by integrating a web browser. The advantage of using a web browser is not only extending Eos with a GUI, but also allowing Eos to work in a distributed computational environment. Using Ajax (Asynchronous JavaScript and XML) technology, we implemented a more comfortable user interface in the web browser. Eos has more than 400 commands related to image processing for electron microscopy, and each command has its own usage. Since the beginning of development, Eos has managed its user interfaces through interface definition files called "OptionControlFile", written in CSV (Comma-Separated Value) format; each command has an "OptionControlFile" that records the information needed to generate its interface and usage. Because this mechanism is mature and convenient, the developed GUI system, called "Zephyr" (Zone for Easy Processing of HYpermedia Resources), also accesses "OptionControlFile" and produces a web user interface automatically. The basic actions of the client-side system were implemented properly and support auto-generation of web forms, with functions for execution, image preview, and file uploading to a web server. Thus the system can execute Eos commands with the options unique to each command, and carry out image analysis.
There remain problems concerning the image file format for visualization and the workspace for analysis: the image file format information is useful to check whether an input/output file is correct, and we also need to provide a common workspace for analysis because the client is physically separated from the server. We solved the file format problem by extending the rules of the OptionControlFile of Eos. Furthermore, to solve the workspace problem, we have developed two types of systems. The first system uses only local environments: the user runs a web server provided by Eos, accesses a web client through a web browser, and manipulates local files with the GUI in the web browser. The second system employs PIONE (Process-rule for Input/Output Negotiation Environment), a platform we are developing that works in heterogeneous distributed environments. Users can put their resources, such as microscopic images and text files, into the server-side environment supported by PIONE, and experts can write PIONE rule definitions, which define a workflow of image processing. PIONE runs each image-processing step on suitable computers, following the defined rules. PIONE supports interactive manipulation, and users are able to try a command with various setting values. In this situation, we contribute auto-generation of a GUI for a PIONE workflow. As an advanced function, we have developed a module to log user actions. The logs include information such as setting values in image processing, sequences of commands, and so on. Used effectively, the logs offer many advantages: for example, when an expert discovers some image-processing know-how, other users can share the logs containing it, and by analyzing logs we may derive recommended workflows for image analysis. To implement a social platform of image processing for electron microscopists, we have developed the system infrastructure as well. © The Author 2014.
Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
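The CSV-driven interface generation described above can be sketched as follows. The field layout here is hypothetical, not the actual OptionControlFile schema of Eos; it only illustrates how a per-command CSV definition can be mapped mechanically to web-form widget descriptions:

```python
# Sketch: turn a CSV interface definition into widget descriptions that a
# web client could render. The column layout is an assumed example, not
# the real OptionControlFile format.
import csv
import io

option_control = io.StringIO(
    "-i,infile,filename,Input image,required\n"
    "-o,outfile,filename,Output image,required\n"
    "-m,mode,int,Processing mode,optional\n"
)

widgets = []
for flag, name, ftype, label, need in csv.reader(option_control):
    widgets.append({
        "flag": flag,
        "name": name,
        "widget": "file-upload" if ftype == "filename" else "text-input",
        "label": label,
        "required": need == "required",
    })
```

A Zephyr-like front end would render one widget per row and assemble the command line from the `flag` fields, which is why a single definition file can drive both CUI usage text and GUI generation.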
Spectral mapping tools from the earth sciences applied to spectral microscopy data.
Harris, A Thomas
2006-08-01
Spectral imaging, originating from the field of earth remote sensing, is a powerful tool that is being increasingly used in a wide variety of applications for material identification. Several workers have used techniques like linear spectral unmixing (LSU) to discriminate materials in images derived from spectral microscopy. However, many spectral analysis algorithms rely on assumptions that are often violated in microscopy applications. This study explores algorithms originally developed as improvements on early earth imaging techniques that can be easily translated for use with spectral microscopy. To best demonstrate the application of earth remote sensing spectral analysis tools to spectral microscopy data, earth imaging software was used to analyze data acquired with a Leica confocal microscope with mechanical spectral scanning. For this study, spectral training signatures (often referred to as endmembers) were selected with the ENVI (ITT Visual Information Solutions, Boulder, CO) "spectral hourglass" processing flow, a series of tools that use the spectrally over-determined nature of hyperspectral data to find the most spectrally pure (or spectrally unique) pixels within the data set. This set of endmember signatures was then used in the full range of mapping algorithms available in ENVI to determine locations, and in some cases subpixel abundances of endmembers. Mapping and abundance images showed a broad agreement between the spectral analysis algorithms, supported through visual assessment of output classification images and through statistical analysis of the distribution of pixels within each endmember class. The powerful spectral analysis algorithms available in COTS software, the result of decades of research in earth imaging, are easily translated to new sources of spectral data. 
Although the scale between earth imagery and spectral microscopy is radically different, the problem is the same: mapping material locations and abundances based on unique spectral signatures. (c) 2006 International Society for Analytical Cytology.
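The linear spectral unmixing (LSU) mentioned above models each pixel spectrum as a nonnegative mixture of endmember signatures and solves for abundances per pixel. The sketch below uses nonnegative least squares on synthetic Gaussian endmembers; the signatures and abundances are assumptions, not data from the study:

```python
# Sketch: linear spectral unmixing of one pixel via nonnegative least squares.
# Endmember signatures and true abundances are synthetic.
import numpy as np
from scipy.optimize import nnls

bands = 30
wl = np.linspace(0, 1, bands)
# Two assumed endmember signatures (e.g. two fluorophores or minerals).
E = np.column_stack([np.exp(-((wl - 0.3) ** 2) / 0.01),
                     np.exp(-((wl - 0.7) ** 2) / 0.01)])

true_ab = np.array([0.7, 0.3])
pixel = E @ true_ab + 0.001 * np.random.default_rng(4).normal(size=bands)

abund, resid = nnls(E, pixel)    # per-pixel subpixel abundance estimates
```

Running this over every pixel yields the abundance images the abstract refers to; the "spectral hourglass" workflow supplies the endmember matrix E from the data itself rather than assuming it.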
Adipose Tissue Quantification by Imaging Methods: A Proposed Classification
Shen, Wei; Wang, ZiMian; Punyanita, Mark; Lei, Jianbo; Sinav, Ahmet; Kral, John G.; Imielinska, Celina; Ross, Robert; Heymsfield, Steven B.
2007-01-01
Recent advances in imaging techniques and understanding of differences in the molecular biology of adipose tissue has rendered classical anatomy obsolete, requiring a new classification of the topography of adipose tissue. Adipose tissue is one of the largest body compartments, yet a classification that defines specific adipose tissue depots based on their anatomic location and related functions is lacking. The absence of an accepted taxonomy poses problems for investigators studying adipose tissue topography and its functional correlates. The aim of this review was to critically examine the literature on imaging of whole body and regional adipose tissue and to create the first systematic classification of adipose tissue topography. Adipose tissue terminology was examined in over 100 original publications. Our analysis revealed inconsistencies in the use of specific definitions, especially for the compartment termed “visceral” adipose tissue. This analysis leads us to propose an updated classification of total body and regional adipose tissue, providing a well-defined basis for correlating imaging studies of specific adipose tissue depots with molecular processes. PMID:12529479
Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications.
Shang, Fanhua; Cheng, James; Liu, Yuanyuan; Luo, Zhi-Quan; Lin, Zhouchen
2017-09-04
The heavy-tailed distributions of corrupted outliers and of the singular values of all channels in low-level vision have proven to be effective priors for many applications such as background modeling, photometric stereo and image alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Building on the analytic solutions to Lp-norm minimization for two specific values of p, namely p=1/2 and p=2/3, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-1/2 and 2/3 quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both our methods yield more accurate solutions than original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g. moving object detection, image alignment and inpainting, and show that our methods usually outperform the state-of-the-art methods.
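The classical building block that these factor-norm models generalize is singular value soft-thresholding, the proximal operator of the convex nuclear norm used in standard robust PCA. The sketch below shows that p = 1 baseline only; the paper's Schatten-1/2 and 2/3 penalties replace the soft shrinkage with analytic Lp shrinkage, which is not reproduced here:

```python
# Sketch: singular value soft-thresholding (nuclear-norm prox), the p = 1
# baseline that Schatten quasi-norm methods generalize. Data is synthetic.
import numpy as np

def svt(M, tau):
    """Shrink each singular value of M by tau and reconstruct."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

rng = np.random.default_rng(5)
L = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 20))   # rank-3 ground truth
noisy = L + 0.01 * rng.normal(size=(20, 20))              # small perturbation

recovered = svt(noisy, tau=0.5)          # noise-level singular values vanish
rank_after = np.linalg.matrix_rank(recovered, tol=1e-6)
```

Because the noise singular values fall below the threshold, the shrinkage recovers a low-rank estimate; sharper (nonconvex) shrinkages bias the surviving singular values less, which is the motivation for the quasi-norm penalties.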
On the convergence of nonconvex minimization methods for image recovery.
Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei
2015-05-01
Nonconvex nonsmooth regularization methods have been shown to be effective for restoring images with neat edges. Fast alternating minimization schemes have also been proposed and developed to solve the nonconvex nonsmooth minimization problem. The main contribution of this paper is to show the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme converge to a critical point of the nonconvex nonsmooth objective function. We also extend the analysis to a nonconvex nonsmooth regularization model with box constraints, and obtain similar convergence results for the related minimization algorithm. Numerical examples are given to illustrate our convergence analysis.
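The structure of an alternating minimization scheme can be illustrated on a toy smooth coupled objective: fix y and minimize over x in closed form, then fix x and minimize over y, and repeat until the iterates settle at a critical point. This is only a convex two-variable illustration of the iteration pattern, not the paper's nonconvex nonsmooth image model:

```python
# Sketch: alternating minimization of f(x, y) = (x-1)^2 + (y-2)^2 + 0.5*x*y.
# Each half-step is the exact one-variable minimizer (set partial deriv = 0).
import numpy as np

x, y = 0.0, 0.0
for _ in range(100):
    x = 1 - 0.25 * y        # argmin_x: 2(x - 1) + 0.5*y = 0
    y = 2 - 0.25 * x        # argmin_y: 2(y - 2) + 0.5*x = 0

# The stationary point solves the 2x2 linear system grad f = 0.
A = np.array([[2.0, 0.5], [0.5, 2.0]])
b = np.array([2.0, 4.0])
xstar, ystar = np.linalg.solve(A, b)
err = max(abs(x - xstar), abs(y - ystar))   # iterates reach the critical point
```

For nonconvex nonsmooth objectives the half-steps are proximal updates rather than closed-form minimizers, and the Kurdyka-Łojasiewicz property is what guarantees the whole sequence, not just subsequences, converges.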
Segmentation of touching mycobacterium tuberculosis from Ziehl-Neelsen stained sputum smear images
NASA Astrophysics Data System (ADS)
Xu, Chao; Zhou, Dongxiang; Liu, Yunhui
2015-12-01
Touching Mycobacterium tuberculosis objects in Ziehl-Neelsen stained sputum smear images present different shapes and invisible boundaries in the adhesion areas, which increases the difficulty of object recognition and counting. In this paper, we present a segmentation method combining hierarchy tree analysis with a gradient vector flow snake to address this problem. The skeletons of the objects are used for structure analysis based on the hierarchy tree, and the gradient vector flow snake is used to estimate the object edges. Experimental results show that the single objects composing the touching objects are successfully segmented by the proposed method. This work will improve the accuracy and practicability of computer-aided diagnosis of tuberculosis.
NASA Technical Reports Server (NTRS)
Iverson, L. R.; Cook, E. A.; Graham, R. L.; Olson, J. S.; Frank, T.; Ke, Y.; Treworgy, C.; Risser, P. G.
1986-01-01
Several hardware, software, and data-collection problems encountered were overcome. The Geographic Information System (GIS) data from other systems were converted to ERDAS format for incorporation with the image data. Statistical analysis of the relationship between spectral values and productivity is being pursued. Several project sites, including Jackson, Pope, Boulder, Smokies, and Huntington Forest, are evolving as the most intensively studied areas, primarily due to availability of data and time. Progress with data acquisition and quality checking, more details on experimental sites, and brief summaries of research results and future plans are discussed. Material on personnel, collaborators, facilities, site background, and meetings and publications of the investigators is included.
The semantic system is involved in mathematical problem solving.
Zhou, Xinlin; Li, Mengyi; Li, Leinian; Zhang, Yiyun; Cui, Jiaxin; Liu, Jie; Chen, Chuansheng
2018-02-01
Numerous studies have shown that the brain regions around bilateral intraparietal cortex are critical for number processing and arithmetical computation. However, the neural circuits for more advanced mathematics such as mathematical problem solving (with little routine arithmetical computation) remain unclear. Using functional magnetic resonance imaging (fMRI), this study (N = 24 undergraduate students) compared neural bases of mathematical problem solving (i.e., number series completion, mathematical word problem solving, and geometric problem solving) and arithmetical computation. Direct subject- and item-wise comparisons revealed that mathematical problem solving typically had greater activation than arithmetical computation in all 7 regions of the semantic system (which was based on a meta-analysis of 120 functional neuroimaging studies on semantic processing). Arithmetical computation typically had greater activation in the supplementary motor area and left precentral gyrus. The results suggest that the semantic system in the brain supports mathematical problem solving. Copyright © 2017 Elsevier Inc. All rights reserved.
Collaborative classification of hyperspectral and visible images with convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Mengmeng; Li, Wei; Du, Qian
2017-10-01
Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task, while using visible (VIS) images with high spatial resolution enables high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, the convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of the rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach eliminates the potential problems of the curse of dimensionality and excessive computation time. The experiments evaluated on two standard data sets demonstrate better classification performance offered by this framework.
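Decision-level fusion of the two branches can be sketched as combining per-class probability outputs and taking the fused argmax. The weighted-averaging rule and the weight `w` below are assumptions for illustration; the paper's exact fusion rule may differ:

```python
# Sketch: decision fusion of per-pixel class probabilities from an HSI
# (spectral) branch and a VIS (spatial) branch. Probabilities are synthetic.
import numpy as np

rng = np.random.default_rng(6)
n_pix, n_cls = 5, 4
p_hsi = rng.dirichlet(np.ones(n_cls), size=n_pix)   # spectral-branch softmax
p_vis = rng.dirichlet(np.ones(n_cls), size=n_pix)   # spatial-branch softmax

w = 0.6  # assumed weight favoring the spectral branch
p_fused = w * p_hsi + (1 - w) * p_vis               # convex combination
labels = p_fused.argmax(axis=1)                     # final fused labels
```

Because fusion happens at the decision level, each branch can use the feature representation best suited to its sensor, which is the point of the collaborative framework.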
Machine Learning and Radiology
Wang, Shijun; Summers, Ronald M.
2012-01-01
In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focus on six categories of applications in radiology: medical image segmentation, registration, computer-aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has been shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers. PMID:22465077
DeepInfer: open-source deep learning deployment toolkit for image-guided therapy
NASA Astrophysics Data System (ADS)
Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang
2017-03-01
Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.
An excitation wavelength-scanning spectral imaging system for preclinical imaging
NASA Astrophysics Data System (ADS)
Leavesley, Silas; Jiang, Yanan; Patsekin, Valery; Rajwa, Bartek; Robinson, J. Paul
2008-02-01
Small-animal fluorescence imaging is a rapidly growing field, driven by applications in cancer detection and pharmaceutical therapies. However, the practical use of this imaging technology is limited by image-quality issues related to autofluorescence background from animal tissues, as well as attenuation of the fluorescence signal due to scatter and absorption. To combat these problems, spectral imaging and analysis techniques are being employed to separate the fluorescence signal from background autofluorescence. To date, these technologies have focused on detecting the fluorescence emission spectrum at a fixed excitation wavelength. We present an alternative to this technique, an imaging spectrometer that detects the fluorescence excitation spectrum at a fixed emission wavelength. The advantages of this approach include increased available information for discrimination of fluorescent dyes, decreased optical radiation dose to the animal, and ability to scan a continuous wavelength range instead of discrete wavelength sampling. This excitation-scanning imager utilizes an acousto-optic tunable filter (AOTF), with supporting optics, to scan the excitation spectrum. Advanced image acquisition and analysis software has also been developed for classification and unmixing of the spectral image sets. Filtering has been implemented in a single-pass configuration with a bandwidth (full width at half maximum) of 16 nm at a 550 nm central diffracted wavelength. We have characterized AOTF filtering over a wide range of incident light angles, much wider than has been previously reported in the literature, and we show how changes in incident light angle can be used to attenuate AOTF side lobes and alter bandwidth. A new parameter, in-band to out-of-band ratio, was defined to assess the quality of the filtered excitation light. Additional parameters were measured to allow objective characterization of the AOTF and the imager as a whole. 
This is necessary for comparing the excitation-scanning imager to other spectral and fluorescence imaging technologies. The effectiveness of the hyperspectral imager was tested by imaging and analysis of mice with injected fluorescent dyes. Finally, a discussion of the optimization of spectral fluorescence imagers is given, relating the effects of filter quality on fluorescence images collected and the analysis outcome.
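The in-band to out-of-band ratio introduced above can be sketched as integrated filter transmission inside the FWHM passband divided by the integrated transmission outside it. The synthetic filter profile (Gaussian passband plus a small side lobe) and this exact definition are assumptions; the paper's formal definition may differ:

```python
# Sketch: in-band / out-of-band ratio for a synthetic AOTF-like transmission
# profile with a side lobe. Profile parameters are illustrative assumptions.
import numpy as np

wl = np.linspace(500, 600, 2001)                       # wavelength axis, nm
center, fwhm = 550.0, 16.0                             # 16 nm FWHM at 550 nm
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
passband = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
side_lobe = 0.02 * np.exp(-0.5 * ((wl - 575) / 3) ** 2)
T = passband + side_lobe

in_band = np.abs(wl - center) <= fwhm / 2
ratio = T[in_band].sum() / T[~in_band].sum()           # quality figure of merit
```

Attenuating side lobes (e.g. by tuning the incident light angle, as the abstract describes) raises this ratio, which is why it serves as a single figure of merit for filtered excitation quality.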
Lozier, Leah M.; Cardinale, Elise M.; VanMeter, John W.; Marsh, Abigail A.
2015-01-01
Importance Among youths with conduct problems, callous-unemotional (CU) traits are known to be an important determinant of symptom severity, prognosis, and treatment responsiveness. But positive correlations between conduct problems and CU traits result in suppressor effects that may mask important neurobiological distinctions among subgroups of children with conduct problems. Objective To assess the unique neurobiological covariates of CU traits and externalizing behaviors in youths with conduct problems and determine whether neural dysfunction linked to CU traits mediates the link between callousness and proactive aggression. Design, Setting, and Participants This cross-sectional case-control study involved behavioral testing and neuroimaging that were conducted at a university research institution. Neuroimaging was conducted using a 3-T Siemens magnetic resonance imaging scanner. It included 46 community-recruited male and female juveniles aged 10 to 17 years, including 16 healthy control participants and 30 youths with conduct problems with both low and high levels of CU traits. Main Outcomes and Measures Blood oxygenation level–dependent signal as measured via functional magnetic resonance imaging during an implicit face-emotion processing task and analyzed using whole-brain and region of interest–based analysis of variance and multiple-regression analyses. Results Analysis of variance revealed no group differences in the amygdala. By contrast, consistent with the existence of suppressor effects, multiple-regression analysis found amygdala responses to fearful expressions to be negatively associated with CU traits (x = 26, y = 0, z = −12; k = 1) and positively associated with externalizing behavior (x = 24, y = 0, z = −14; k = 8) when both variables were modeled simultaneously. Reduced amygdala responses mediated the relationship between CU traits and proactive aggression. 
Conclusions and Relevance The results linked proactive aggression in youths with CU traits to hypoactive amygdala responses to emotional distress cues, consistent with theories that externalizing behaviors, particularly proactive aggression, in youths with these traits stem from deficient empathic responses to distress. Amygdala hypoactivity may represent an intermediate phenotype, offering new insights into effective treatment strategies for conduct problems. PMID:24671141
Ahmed, Wamiq M; Lenz, Dominik; Liu, Jia; Paul Robinson, J; Ghafoor, Arif
2008-03-01
High-throughput biological imaging uses automated imaging devices to collect a large number of microscopic images for analysis of biological systems and validation of scientific hypotheses. Efficient manipulation of these datasets for knowledge discovery requires high-performance computational resources, efficient storage, and automated tools for extracting and sharing such knowledge among different research sites. Newly emerging grid technologies provide powerful means for exploiting the full potential of these imaging techniques. Efficient utilization of grid resources requires the development of knowledge-based tools and services that combine domain knowledge with analysis algorithms. In this paper, we first investigate how grid infrastructure can facilitate high-throughput biological imaging research, and present an architecture for providing knowledge-based grid services for this field. We identify two levels of knowledge-based services. The first level provides tools for extracting spatiotemporal knowledge from image sets and the second level provides high-level knowledge management and reasoning services. We then present cellular imaging markup language, an extensible markup language-based language for modeling of biological images and representation of spatiotemporal knowledge. This scheme can be used for spatiotemporal event composition, matching, and automated knowledge extraction and representation for large biological imaging datasets. We demonstrate the expressive power of this formalism by means of different examples and extensive experimental results.
Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques
2012-09-01
The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes the approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original algorithmic version of EMD, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, existing 2-D versions of EMD perform poorly and are very time consuming. In this paper, therefore, an extension of the PDE-based approach to 2-D space is described in detail. The approach has been applied to both signal and image decomposition, and the obtained results confirm the usefulness of the new PDE-based sifting process for decomposing various kinds of data. Results are provided for the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
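For reference, the classical algorithmic sifting step that the PDE-based method replaces can be sketched in 1-D: interpolate upper and lower envelopes through the local extrema and repeatedly subtract their mean. This is a minimal sketch of Huang's original scheme (endpoint handling and stopping criteria are simplified assumptions), not the paper's diffusion-based variant.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(x, t, n_iter=10):
    """Extract one IMF by classical sifting (1-D sketch): repeatedly
    subtract the mean of the upper/lower extrema envelopes -- the step
    the PDE-based method replaces with nonlinear diffusion filtering."""
    h = x.astype(float).copy()
    for _ in range(n_iter):
        # interior local maxima / minima
        maxi = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        mini = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(maxi) < 2 or len(mini) < 2:
            break
        # pad knot lists with the endpoints to tame edge effects
        maxi = np.r_[0, maxi, len(h) - 1]
        mini = np.r_[0, mini, len(h) - 1]
        upper = CubicSpline(t[maxi], h[maxi])(t)
        lower = CubicSpline(t[mini], h[mini])(t)
        h = h - (upper + lower) / 2.0   # remove the local mean envelope
    return h
```

On a two-tone test signal the first IMF recovered this way tracks the high-frequency component, which is what the decomposition is meant to isolate.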
Afshar, Yaser; Sbalzarini, Ivo F.
2016-01-01
Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers collectively solve the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
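The decomposition of the input image into sub-images can be sketched as follows: each worker receives its tile padded with a halo (ghost) border so that region fronts can be evaluated consistently near tile edges. This is an illustrative sketch of the general domain-decomposition idea, not the paper's actual MPI implementation; the tile grid and halo width are made-up parameters.

```python
import numpy as np

def decompose(img, tiles, halo):
    """Split a 2-D image into a grid of sub-images with overlapping halo
    (ghost) borders, as a distributed solver would hand to workers.

    Returns a list of (sub_image, (row_slice, col_slice)) pairs, where the
    slices locate the *core* (halo-free) region in the full image."""
    H, W = img.shape
    ty, tx = tiles
    pieces = []
    for i in range(ty):
        for j in range(tx):
            r0, r1 = i * H // ty, (i + 1) * H // ty
            c0, c1 = j * W // tx, (j + 1) * W // tx
            rr0, rr1 = max(0, r0 - halo), min(H, r1 + halo)
            cc0, cc1 = max(0, c0 - halo), min(W, c1 + halo)
            pieces.append((img[rr0:rr1, cc0:cc1].copy(),
                           (slice(r0, r1), slice(c0, c1))))
    return pieces
```

After each worker segments its padded tile, only the core region is written back to the global result; halos must be re-exchanged whenever a region front crosses a tile boundary.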
A sequential solution for anisotropic total variation image denoising with interval constraints
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Noo, Frédéric
2017-09-01
We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV-penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails first finding the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here, uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent linear attenuation coefficients in the patient body. Our results are simple yet appear to be previously unknown; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
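The sequential recipe (solve the unconstrained problem, then threshold onto the interval) can be sketched in 1-D. The unconstrained solver below is a simple smoothed-gradient stand-in chosen for self-containedness; the paper does not prescribe a particular solver, and the step size, smoothing parameter, and iteration count are assumptions.

```python
import numpy as np

def tv_denoise_unconstrained(y, lam=0.5, n_iter=500, step=0.02, eps=1e-3):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i sqrt(d_i^2 + eps),
    where d_i = x_{i+1} - x_i: a smoothed 1-D anisotropic TV denoiser."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)        # derivative of the smoothed |d|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= g                   # adjoint of the difference operator
        tv_grad[1:] += g
        x -= step * ((x - y) + lam * tv_grad)
    return x

def tv_denoise_interval(y, lo, hi, lam=0.5):
    """Sequential solution: solve the unconstrained TV problem first,
    then threshold (clip) onto the uniform interval [lo, hi]."""
    return np.clip(tv_denoise_unconstrained(y, lam=lam), lo, hi)
```

Clipping is element-wise and 1-Lipschitz, so it never increases the total variation of the unconstrained solution, which is the intuition behind why the two steps compose correctly under uniform interval constraints.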
Research on Wide-field Imaging Technologies for Low-frequency Radio Array
NASA Astrophysics Data System (ADS)
Lao, B. Q.; An, T.; Chen, X.; Wu, X. C.; Lu, Y.
2017-09-01
Wide-field imaging with low-frequency radio telescopes is subject to a number of difficult problems. One particularly pernicious problem is the non-coplanar baseline effect: when the phase term in the w direction (the w-term) is ignored, the final image is distorted, and the degradation is amplified for telescopes with a wide field of view. This paper summarizes and analyzes several w-term correction methods and their technical principles, comparing their advantages and disadvantages in terms of computational cost and complexity. We conduct simulations with two of these methods, faceting and w-projection, based on the configuration of the first-phase Square Kilometre Array (SKA) low-frequency array, and compare the resulting images with the two-dimensional Fourier transform method. The results show that both faceting and w-projection yield better image quality and correctness than the two-dimensional Fourier transform method in wide-field imaging. We also evaluate how the number of facets and the number of w steps affect image quality and run time; the results indicate that these numbers must be chosen carefully. Finally, we analyze the effect of data size on the run time of faceting and w-projection; the results show that both need to be optimized before processing massive amounts of data. The present paper initiates the analysis of wide-field imaging techniques and their application in existing and future low-frequency arrays, and fosters their application in much broader fields.
Joint graph cut and relative fuzzy connectedness image segmentation algorithm.
Ciesielski, Krzysztof Chris; Miranda, Paulo A V; Falcão, Alexandre X; Udupa, Jayaram K
2013-12-01
We introduce an image segmentation algorithm, called GC(sum)(max), which combines, in a novel manner, the strengths of two popular algorithms: Relative Fuzzy Connectedness (RFC) and (standard) Graph Cut (GC). We show, both theoretically and experimentally, that GC(sum)(max) preserves the robustness of RFC with respect to the seed choice (thus avoiding the "shrinking problem" of GC), while keeping GC's stronger control over the problem of "leaking through poorly defined boundary segments." The analysis of GC(sum)(max) is greatly facilitated by our recent theoretical results that RFC can be described within the framework of Generalized GC (GGC) segmentation algorithms. In our implementation of GC(sum)(max) we use, as a subroutine, a version of the RFC algorithm (based on the Image Foresting Transform) that provably runs in linear time with respect to the image size. This results in GC(sum)(max) running in time close to linear. Experimental comparison of GC(sum)(max) to GC, an iterative version of RFC (IRFC), and power watershed (PW), based on a variety of medical and non-medical images, indicates superior accuracy of GC(sum)(max) over these other methods, resulting in a rank ordering of GC(sum)(max)>PW∼IRFC>GC. Copyright © 2013 Elsevier B.V. All rights reserved.
Automatic MRI 2D brain segmentation using graph searching technique.
Pedoia, Valentina; Binaghi, Elisabetta
2013-09-01
Accurate and efficient segmentation of the whole brain in magnetic resonance (MR) images is a key task in many neuroscience and medical studies, either because the whole brain is the final anatomical structure of interest or because the automatic extraction facilitates further analysis. The problem of segmenting brain MRI images has been extensively addressed by many researchers. Despite the relevant achievements obtained, automated segmentation of brain MRI imagery is still a challenging problem whose solution has to cope with critical aspects such as anatomical variability and pathological deformation. In the present paper, we describe and experimentally evaluate a method for segmenting the brain in MRI images based on two-dimensional graph-searching principles for border detection. The segmentation of the whole brain over the entire volume is accomplished slice by slice, automatically detecting frames including the eyes. The method is fully automatic and easily reproducible, computing the internal main parameters directly from the image data. The segmentation procedure is conceived as a tool of general applicability, although design requirements are especially commensurate with the accuracy required in clinical tasks such as surgical planning and post-surgical assessment. Several experiments were performed to assess the performance of the algorithm on a varied set of MRI images, obtaining good results in terms of accuracy and stability. Copyright © 2012 John Wiley & Sons, Ltd.
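The core of graph-searching border detection is finding a minimum-cost path through a cost image (for instance, inverted gradient magnitude, so that likely boundary pixels are cheap). A minimal Dijkstra sketch for a left-to-right border, with the cost map and connectivity as illustrative assumptions rather than the paper's pipeline:

```python
import heapq
import numpy as np

def min_cost_border(cost):
    """Minimum-cost left-to-right path through a 2-D cost image
    (Dijkstra over a grid graph): low cost = likely boundary pixel."""
    H, W = cost.shape
    dist = np.full((H, W), np.inf)
    prev = {}
    pq = []
    for r in range(H):                     # any pixel in column 0 may start
        dist[r, 0] = cost[r, 0]
        heapq.heappush(pq, (dist[r, 0], r, 0))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r, c]:
            continue                       # stale queue entry
        if c == W - 1:                     # reached the right edge: backtrack
            path = [(r, c)]
            while (r, c) in prev:
                r, c = prev[(r, c)]
                path.append((r, c))
            return path[::-1]
        for dr in (-1, 0, 1):              # move exactly one column right
            nr, nc = r + dr, c + 1
            if 0 <= nr < H:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, nr, nc))
    return []
```

Restricting moves to the next column keeps the path a single-valued border, which is the usual formulation for slice-wise boundary tracing.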
DeepNeuron: an open deep learning toolbox for neuron tracing.
Zhou, Zhi; Kuo, Hsien-Chi; Peng, Hanchuan; Long, Fuhui
2018-06-06
Reconstructing the three-dimensional (3D) morphology of neurons is essential for understanding brain structures and functions. Over the past decades, a number of neuron tracing tools, including manual, semi-automatic, and fully automatic approaches, have been developed to extract and analyze 3D neuronal structures. Nevertheless, most of them were developed by coding specific rules to extract and connect the structural components of a neuron, and show limited performance on complicated neuron morphologies. Recently, deep learning has outperformed many other machine learning methods in a wide range of image analysis and computer vision tasks. Here we developed a new open-source toolbox, DeepNeuron, which uses deep learning networks to learn features and rules from data and trace neuron morphology in light microscopy images. DeepNeuron provides a family of modules to solve basic yet challenging problems in neuron tracing. These problems include, but are not limited to: (1) detecting neuron signal under different image conditions, (2) connecting neuronal signals into tree(s), (3) pruning and refining tree morphology, (4) quantifying the quality of morphology, and (5) classifying dendrites and axons in real time. We have tested DeepNeuron using light microscopy images, including bright-field and confocal images of human and mouse brain, on which DeepNeuron demonstrates robustness and accuracy in neuron tracing.
Digital mammography, cancer screening: Factors important for image compression
NASA Technical Reports Server (NTRS)
Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria
1993-01-01
The use of digital mammography for breast cancer screening poses several novel problems, such as the development of digital sensors; computer-assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition; and compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets, and image compression methods will therefore play a significant role in image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments in digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community in this medical application and to identify possible dual-use technologies within the NASA centers.
Landmark matching based retinal image alignment by enforcing sparsity in correspondence matrix.
Zheng, Yuanjie; Daniel, Ebenezer; Hunter, Allan A; Xiao, Rui; Gao, Jianbin; Li, Hongsheng; Maguire, Maureen G; Brainard, David H; Gee, James C
2014-08-01
Retinal image alignment is fundamental to many applications in the diagnosis of eye diseases. In this paper, we address the problem of landmark-matching-based retinal image alignment. We propose a novel landmark matching formulation that enforces sparsity in the correspondence matrix and offer solutions based on linear programming. The proposed formulation not only enables a joint estimation of the landmark correspondences and a predefined transformation model, but also combines the benefits of the softassign strategy (Chui and Rangarajan, 2003) and the combinatorial optimization of linear programming. We also introduce a set of reinforced self-similarity descriptors which better characterize local photometric and geometric properties of the retinal image. Theoretical analysis and experimental results with both fundus color images and angiogram images show the superior performance of our algorithms compared with several state-of-the-art techniques. Copyright © 2013 Elsevier B.V. All rights reserved.
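A toy version of landmark correspondence as a linear program: relax the permutation (correspondence) matrix to a doubly stochastic one and minimize the descriptor matching cost. This is an assignment-LP illustration of the general idea, not the paper's sparsity-regularized formulation; the cost matrix is made up.

```python
import numpy as np
from scipy.optimize import linprog

def match_landmarks(cost):
    """Solve a toy correspondence problem as a linear program:
    minimize <C, P> over doubly stochastic P. This LP relaxation of the
    assignment problem attains its optimum at a permutation matrix."""
    n = cost.shape[0]
    # Equality constraints: each row of P sums to 1, each column sums to 1.
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0      # row-sum constraints
        A_eq[n + i, i::n] = 1.0               # column-sum constraints
    b_eq = np.ones(2 * n)
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    return res.x.reshape(n, n).round().astype(int)
```

In the paper's setting the LP additionally couples the correspondences with a transformation model; the relaxation-to-a-vertex behaviour shown here is what makes linear programming attractive for matching.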
Color image analysis of contaminants and bacteria transport in porous media
NASA Astrophysics Data System (ADS)
Rashidi, Mehdi; Dehmeshki, Jamshid; Daemi, Mohammad F.; Cole, Larry; Dickenson, Eric
1997-10-01
Transport of contaminants and bacteria in aqueous heterogeneous saturated porous systems has been studied experimentally using a novel fluorescent microscopic imaging technique. The approach involves color visualization and quantification of bacterium and contaminant distributions within a transparent porous column. By introducing stained bacteria and an organic dye as a contaminant into the column and illuminating the porous regions with a planar sheet of laser light, contaminant and bacterial transport processes through the porous medium can be observed and measured microscopically. A computer-controlled color CCD camera is used to record the fluorescent images as a function of time. These images are recorded by a frame-accurate high-resolution VCR and are then analyzed using a color image analysis code written in our laboratories. The color images are digitized this way, and simultaneous concentration and velocity distributions of both contaminant and bacterium are evaluated as a function of time and pore characteristics. The approach provides a unique dynamic probe to observe these transport processes microscopically. These results are extremely valuable in in-situ bioremediation problems, since microscopic particle-contaminant-bacterium interactions are the key to understanding and optimizing these processes.
Environmental scanning electron microscope imaging examples related to particle analysis.
Wight, S A; Zeissler, C J
1993-08-01
This work provides examples of some of the imaging capabilities of environmental scanning electron microscopy applied to easily charged samples relevant to particle analysis. Environmental SEM (also referred to as high pressure or low vacuum SEM) can address uncoated samples that are known to be difficult to image. Most of these specimens are difficult to image by conventional SEM even when coated with a conductive layer. Another area where environmental SEM is particularly applicable is for specimens not compatible with high vacuum, such as volatile specimens. Samples from which images were obtained that otherwise may not have been possible by conventional methods included fly ash particles on an oiled plastic membrane impactor substrate, a one micrometer diameter fiber mounted on the end of a wire, uranium oxide particles embedded in oil-bearing cellulose nitrate, teflon and polycarbonate filter materials with collected air particulate matter, polystyrene latex spheres on cellulosic filter paper, polystyrene latex spheres "loosely" sitting on a glass slide, and subsurface tracks in an etched nuclear track-etch detector. Surface charging problems experienced in high vacuum SEMs are virtually eliminated in the low vacuum SEM, extending imaging capabilities to samples previously difficult to use or incompatible with conventional methods.
Performance characteristics of a visual-search human-model observer with sparse PET image data
NASA Astrophysics Data System (ADS)
Gifford, Howard C.
2012-02-01
As predictors of human performance in detection-localization tasks, statistical model observers can have problems with tasks that are primarily limited by target contrast or structural noise. Model observers with a visual-search (VS) framework may provide a more reliable alternative. This framework provides for an initial holistic search that identifies suspicious locations for analysis by a statistical observer. A basic VS observer for emission tomography focuses on hot "blobs" in an image and uses a channelized nonprewhitening (CNPW) observer for analysis. In [1], we investigated this model for a contrast-limited task with SPECT images; herein, a statistical-noise-limited task involving PET images is considered. An LROC study used 2D image slices with liver, lung and soft-tissue tumors. Human and model observers read the images in coronal, sagittal and transverse display formats. The study thus measured the detectability of tumors in a given organ as a function of display format. The model observers were applied under several task variants that tested their response to structural noise both at the organ boundaries alone and over the organs as a whole. As measured by correlation with the human data, the VS observer outperformed the CNPW scanning observer.
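A scanning nonprewhitening observer computes, at each candidate location, the inner product of the image with the expected signal template and reports the maximum as the rating and localization. The sketch below is a simplified, unchannelized stand-in for the CNPW observer used in the paper (the channel step is omitted, and the template is hypothetical):

```python
import numpy as np

def scanning_npw(image, template):
    """Scanning nonprewhitening observer: slide the signal template over
    the image; the test statistic at each location is the inner product,
    and the reported (rating, location) pair is the maximum."""
    th, tw = template.shape
    H, W = image.shape
    best, loc = -np.inf, (0, 0)
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            s = float(np.sum(image[r:r + th, c:c + tw] * template))
            if s > best:
                best, loc = s, (r, c)
    return best, loc
```

In an LROC study, `best` serves as the confidence rating and `loc` as the reported tumor location, which is scored correct if it falls within a tolerance radius of the true site.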
Piqueras, Sara; Bedia, Carmen; Beleites, Claudia; Krafft, Christoph; Popp, Jürgen; Maeder, Marcel; Tauler, Romà; de Juan, Anna
2018-06-05
Data fusion of different imaging techniques allows a comprehensive description of chemical and biological systems. Yet, joining images acquired with different spectroscopic platforms is complex because of the different sample orientation and image spatial resolution. Whereas matching sample orientation is often solved by performing suitable affine transformations of rotation, translation, and scaling among images, the main difficulty in image fusion is preserving the spatial detail of the highest spatial resolution image during multitechnique image analysis. In this work, a special variant of the unmixing algorithm Multivariate Curve Resolution Alternating Least Squares (MCR-ALS) for incomplete multisets is proposed to provide a solution for this kind of problem. This algorithm allows analyzing simultaneously images collected with different spectroscopic platforms without losing spatial resolution and ensuring spatial coherence among the images treated. The incomplete multiset structure concatenates images of the two platforms at the lowest spatial resolution with the image acquired with the highest spatial resolution. As a result, the constituents of the sample analyzed are defined by a single set of distribution maps, common to all platforms used and with the highest spatial resolution, and their related extended spectral signatures, covering the signals provided by each of the fused techniques. We demonstrate the potential of the new variant of MCR-ALS for multitechnique analysis on three case studies: (i) a model example of MIR and Raman images of pharmaceutical mixture, (ii) FT-IR and Raman images of palatine tonsil tissue, and (iii) mass spectrometry and Raman images of bean tissue.
Hybrid region merging method for segmentation of high-resolution remote sensing images
NASA Astrophysics Data System (ADS)
Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi; Wang, Jiangeng; Wang, Zuo
2014-12-01
Image segmentation remains a challenging problem for object-based image analysis. In this paper, a hybrid region merging (HRM) method is proposed to segment high-resolution remote sensing images. HRM integrates the advantages of global-oriented and local-oriented region merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region, which provides an elegant way to avoid the problem of starting point assignment and to enhance the optimization ability for local-oriented region merging. During the region growing procedure, the merging iterations are constrained within the local vicinity, so that the segmentation is accelerated and can reflect the local context, as compared with the global-oriented method. A set of high-resolution remote sensing images is used to test the effectiveness of the HRM method, and three region-based remote sensing image segmentation methods are adopted for comparison, including the hierarchical stepwise optimization (HSWO) method, the local-mutual best region merging (LMM) method, and the multiresolution segmentation (MRS) method embedded in eCognition Developer software. Both the supervised evaluation and visual assessment show that HRM performs better than HSWO and LMM by combining both their advantages. The segmentation results of HRM and MRS are visually comparable, but HRM can describe objects as single regions better than MRS, and the supervised and unsupervised evaluation results further prove the superiority of HRM.
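The global-best merging idea (merge the globally most-similar pair of adjacent regions at each step) can be sketched on a 1-D intensity profile. This illustrates the HSWO-style global strategy that HRM starts from, not HRM's local growing refinement; the mean-difference criterion and threshold are simplifying assumptions.

```python
import numpy as np

def region_merge(values, threshold):
    """Global-best merging of adjacent 1-D regions: repeatedly merge the
    pair of neighbouring regions with the smallest mean difference until
    the best candidate exceeds `threshold`. Returns a label per sample."""
    regions = [[i] for i in range(len(values))]
    means = [float(v) for v in values]
    while len(regions) > 1:
        diffs = [abs(means[i] - means[i + 1]) for i in range(len(means) - 1)]
        k = int(np.argmin(diffs))
        if diffs[k] > threshold:
            break                      # no remaining pair is similar enough
        merged = regions[k] + regions[k + 1]
        m = sum(values[j] for j in merged) / len(merged)
        regions[k:k + 2] = [merged]    # replace the pair by the merged region
        means[k:k + 2] = [m]
    labels = np.empty(len(values), dtype=int)
    for lab, reg in enumerate(regions):
        labels[reg] = lab
    return labels
```

HRM's contribution is to use such a globally most-similar pair only as the starting point of a growing region, then constrain subsequent merges to the local vicinity.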
NASA Astrophysics Data System (ADS)
Aghaei, A.
2017-12-01
Digital imaging and modeling of rocks and subsequent simulation of physical phenomena in digitally-constructed rock models are becoming an integral part of core analysis workflows. One of the inherent limitations of image-based analysis, at any given scale, is image resolution. This limitation becomes more evident when the rock has multiple scales of porosity such as in carbonates and tight sandstones. Multi-scale imaging and constructions of hybrid models that encompass images acquired at multiple scales and resolutions are proposed as a solution to this problem. In this study, we investigate the effect of image resolution and unresolved porosity on petrophysical and two-phase flow properties calculated based on images. A helical X-ray micro-CT scanner with a high cone-angle is used to acquire digital rock images that are free of geometric distortion. To remove subjectivity from the analyses, a semi-automated image processing technique is used to process and segment the acquired data into multiple phases. Direct and pore network based models are used to simulate physical phenomena and obtain absolute permeability, formation factor and two-phase flow properties such as relative permeability and capillary pressure. The effect of image resolution on each property is investigated. Finally a hybrid network model incorporating images at multiple resolutions is built and used for simulations. The results from the hybrid model are compared against results from the model built at the highest resolution and those from laboratory tests.
FFT-enhanced IHS transform method for fusing high-resolution satellite images
Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.
2007-01-01
Existing image fusion techniques such as the intensity-hue-saturation (IHS) transform and principal components analysis (PCA) methods may not be optimal for fusing the new generation commercial high-resolution satellite images such as Ikonos and QuickBird. One problem is color distortion in the fused image, which causes visual changes as well as spectral differences between the original and fused images. In this paper, a fast Fourier transform (FFT)-enhanced IHS method is developed for fusing new generation high-resolution satellite images. This method combines a standard IHS transform with FFT filtering of both the panchromatic image and the intensity component of the original multispectral image. Ikonos and QuickBird data are used to assess the FFT-enhanced IHS transform method. Experimental results indicate that the FFT-enhanced IHS transform method may improve upon the standard IHS transform and the PCA methods in preserving spectral and spatial information. ?? 2006 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).
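The core of the FFT-enhanced scheme can be sketched as: keep the low frequencies of the multispectral intensity (which carry the colours), add the high frequencies of the panchromatic band (which carry the spatial detail), and inject the intensity change back into each band. The ideal circular low-pass mask, the cutoff value, and the simple additive injection (rather than a full IHS round-trip) are assumptions of this sketch.

```python
import numpy as np

def fft_lowpass(img, cutoff):
    """Ideal low-pass filter: a centred circular mask in the FFT domain."""
    H, W = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    y, x = np.ogrid[:H, :W]
    mask = (y - H // 2) ** 2 + (x - W // 2) ** 2 <= cutoff ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def fft_ihs_fuse(ms, pan, cutoff=8):
    """FFT-enhanced intensity substitution: low frequencies from the
    multispectral intensity, high frequencies from the pan band."""
    intensity = ms.mean(axis=2)
    fused_i = fft_lowpass(intensity, cutoff) + (pan - fft_lowpass(pan, cutoff))
    # inject the intensity change additively into every band
    return ms + (fused_i - intensity)[:, :, None]
```

Because the low-frequency content of the fused intensity comes from the multispectral image rather than the pan band, this construction directly targets the colour-distortion problem the abstract describes.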
Single underwater image enhancement based on color cast removal and visibility restoration
NASA Astrophysics Data System (ADS)
Li, Chongyi; Guo, Jichang; Wang, Bo; Cong, Runmin; Zhang, Yan; Wang, Jian
2016-05-01
Images taken under underwater conditions usually have color casts and serious loss of contrast and visibility. Degraded underwater images are inconvenient for observation and analysis. In order to address these problems, an underwater image-enhancement method is proposed. A simple yet effective underwater image color cast removal algorithm is first presented based on optimization theory. Then, based on the minimum information loss principle and the inherent relationship of the medium transmission maps of the three color channels in an underwater image, an effective visibility restoration algorithm is proposed to recover the visibility, contrast, and natural appearance of degraded underwater images. To evaluate the performance of the proposed method, qualitative comparison, quantitative comparison, and color accuracy tests are conducted. Experimental results demonstrate that the proposed method can effectively remove color casts, improve contrast and visibility, and recover the natural appearance of degraded underwater images. Additionally, the proposed method is comparable to, and even better than, several state-of-the-art methods.
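The paper's color cast removal is optimization-based; the simplest baseline that conveys the idea is gray-world channel balancing, which scales each channel so that all channel means agree. This is a standard baseline sketch, not the authors' algorithm.

```python
import numpy as np

def gray_world(img):
    """Gray-world colour cast removal: scale each channel so all channel
    means match the overall mean. Input: float array of shape (H, W, C)."""
    means = img.reshape(-1, img.shape[2]).mean(axis=0)
    return img * (means.mean() / means)   # broadcast per-channel gains
```

Underwater scenes violate the gray-world assumption more strongly than terrestrial ones (red attenuates fastest with depth), which is precisely why the paper replaces this heuristic with an optimization formulation.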
Insight into efficient image registration techniques and the demons algorithm.
Vercauteren, Tom; Pennec, Xavier; Malis, Ezio; Perchant, Aymeric; Ayache, Nicholas
2007-01-01
As image registration becomes more and more central to many biomedical imaging applications, the efficiency of the algorithms becomes a key issue. Image registration is classically performed by optimizing a similarity criterion over a given spatial transformation space. Even if this problem is considered as almost solved for linear registration, we show in this paper that some tools that have recently been developed in the field of vision-based robot control can outperform classical solutions. The adequacy of these tools for linear image registration leads us to revisit non-linear registration and allows us to provide interesting theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage to the symmetric forces variant of the demons algorithm. We show that, on controlled experiments, this advantage is confirmed, and yields a faster convergence.
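One iteration of Thirion's demons algorithm computes a force from the intensity difference and gradient, smooths the accumulated displacement field, and re-warps the moving image. A 1-D sketch of the classic (non-symmetric) force, with the kernel width, step count, and interpolation scheme as assumptions:

```python
import numpy as np

def demons_1d(fixed, moving, n_iter=50, sigma=2.0):
    """1-D demons registration sketch: per iteration, compute the demons
    force (diff * grad) / (grad^2 + diff^2), add it to the displacement
    field, smooth the field with a Gaussian, and warp by interpolation."""
    n = len(fixed)
    x = np.arange(n, dtype=float)
    u = np.zeros(n)                              # displacement field
    k = np.exp(-0.5 * (np.arange(-7, 8) / sigma) ** 2)
    k /= k.sum()                                 # Gaussian smoothing kernel
    for _ in range(n_iter):
        warped = np.interp(x + u, x, moving)     # moving image under u
        diff = fixed - warped
        grad = np.gradient(warped)
        denom = grad ** 2 + diff ** 2
        force = np.where(denom > 1e-9, diff * grad / denom, 0.0)
        u = np.convolve(u + force, k, mode="same")
    return u, np.interp(x + u, x, moving)
```

The `diff**2` term in the denominator caps the per-iteration step at 1/2 pixel, which is what makes the plain demons update stable without an explicit line search; the symmetric-forces variant the paper analyzes averages the fixed and warped gradients in the force.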
Adapting the ISO 20462 softcopy ruler method for online image quality studies
NASA Astrophysics Data System (ADS)
Burns, Peter D.; Phillips, Jonathan B.; Williams, Don
2013-01-01
In this paper we address the problem of no-reference image quality assessment, focusing on JPEG-corrupted images. In general, no-reference metrics cannot measure distortions with uniform performance across their full range and across different image contents; the crosstalk between content and distortion signals influences human perception. We propose two strategies to improve the correlation between subjective and objective quality data. The first strategy groups the images according to their spatial complexity; the second is based on a frequency analysis. Both strategies are tested on two databases available in the literature. The results show an improvement in the correlations between no-reference metrics and psycho-visual data, evaluated in terms of the Pearson correlation coefficient.
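A typical no-reference cue for JPEG distortion, in the spirit of the frequency analysis mentioned above, is blockiness: discontinuities aligned with the 8x8 DCT block grid. A minimal score comparing jumps at block boundaries against jumps elsewhere (a common heuristic for illustration, not the paper's metric):

```python
import numpy as np

def blockiness(img, block=8):
    """No-reference JPEG blockiness score: ratio of the mean absolute
    horizontal jump across 8-pixel block boundaries to the mean jump at
    all other column transitions. ~1 for natural images, >1 when blocky."""
    d = np.abs(np.diff(img.astype(float), axis=1))   # column-to-column jumps
    at_boundary = np.zeros(d.shape[1], dtype=bool)
    at_boundary[block - 1::block] = True             # transitions 7|8, 15|16, ...
    eps = 1e-9                                        # guard against flat images
    return (d[:, at_boundary].mean() + eps) / (d[:, ~at_boundary].mean() + eps)
```

Normalizing boundary jumps by off-boundary jumps is one way to reduce the content/distortion crosstalk the abstract describes: a busy but uncompressed image raises both terms, leaving the ratio near one.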
A Method of Face Detection with Bayesian Probability
NASA Astrophysics Data System (ADS)
Sarker, Goutam
2010-10-01
The objective of face detection is to identify all images which contain a face, irrespective of its orientation, illumination conditions, etc. This is a hard problem, because faces are highly variable in size, shape, lighting conditions, and so on. Many methods have been designed and developed to detect faces in a single image. The present paper is based on an 'Appearance-Based Method' which relies on learning facial and non-facial features from image examples. This, in turn, is based on statistical analysis of examples and counter-examples of facial images and employs a Bayesian conditional classification rule to estimate the probability that an image frame contains a face (or non-face). The detection rate of the present system is very high, and the numbers of false positive and false negative detections are substantially low.
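The Bayesian conditional classification rule reduces to computing the posterior probability of the face class from the class-conditional likelihoods of a window's feature vector. A minimal numeric sketch (the likelihood values are illustrative; the real system estimates these densities from face and non-face training examples):

```python
def face_posterior(likelihood_face, likelihood_nonface, prior_face=0.5):
    """Bayes' rule for the two-class face / non-face decision:
    P(face | x) = P(x | face) P(face) / [ P(x | face) P(face)
                                          + P(x | non-face) P(non-face) ]."""
    num = likelihood_face * prior_face
    den = num + likelihood_nonface * (1.0 - prior_face)
    return num / den
```

Classifying a window as a face whenever the posterior exceeds 0.5 is the minimum-error decision rule under equal misclassification costs.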
Helicopter flights with night-vision goggles: Human factors aspects
NASA Technical Reports Server (NTRS)
Brickner, Michael S.
1989-01-01
Night-vision goggles (NVGs) and, in particular, the advanced, helmet-mounted Aviator's Night Vision Imaging System (ANVIS) allow helicopter pilots to perform low-level flight at night. The system consists of light intensifier tubes which amplify low-intensity ambient illumination (starlight and moonlight) and an optical system which together produce a bright image of the scene. However, these NVGs do not turn night into day, and, while they may often provide significant advantages over unaided night flight, they may also result in visual fatigue, high workload, and safety hazards. These problems reflect both system limitations and human-factors issues. A brief description of the technical characteristics of NVGs and of human night-vision capabilities is followed by a description and analysis of specific perceptual problems which occur with the use of NVGs in flight. Some of the issues addressed include: limitations imposed by a restricted field of view; problems related to binocular rivalry; the consequences of inappropriate focusing of the eye; the effects of ambient illumination levels and of various types of terrain on image quality; difficulties in distance and slope estimation; effects of dazzling; and visual fatigue and superimposed symbology. These issues are described and analyzed in terms of their possible consequences for helicopter pilot performance. The additional influence of individual differences among pilots is emphasized. Thermal imaging systems (forward-looking infrared (FLIR)) are described briefly and compared to light intensifier systems (NVGs). Many of the phenomena described are not readily understood. More research is required to better understand the human-factors problems created by the use of NVGs and other night-vision aids, to enhance system design, and to improve training methods and simulation techniques.
DiversePathsJ: diverse shortest paths for bioimage analysis.
Uhlmann, Virginie; Haubold, Carsten; Hamprecht, Fred A; Unser, Michael
2018-02-01
We introduce a formulation for the general task of finding diverse shortest paths between two end-points. Our approach is not linked to a specific biological problem and can be applied to a large variety of images thanks to its generic implementation as a user-friendly ImageJ/Fiji plugin. It relies on the introduction of additional layers in a Viterbi path graph, which requires slight modifications to the standard Viterbi algorithm rules. This layered graph construction allows for the specification of various constraints imposing diversity between solutions. The software allows the user to obtain a collection of diverse shortest paths under user-defined constraints through a convenient and user-friendly interface. It can be used alone or be integrated into larger image analysis pipelines. http://bigwww.epfl.ch/algorithms/diversepathsj. michael.unser@epfl.ch or fred.hamprecht@iwr.uni-heidelberg.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
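The underlying single-path Viterbi recursion on a cost image (before the diversity layers are added) can be sketched roughly as below; the layered-graph modifications that enforce diversity between solutions are not shown:

```python
import numpy as np

def viterbi_path(cost, max_step=1):
    # Minimal-cost left-to-right path through a 2-D cost image:
    # dynamic programming over columns, each node connecting to
    # neighbours within max_step rows in the previous column.
    H, W = cost.shape
    acc = cost.astype(float).copy()   # accumulated cost
    back = np.zeros((H, W), int)      # backpointers
    for x in range(1, W):
        for y in range(H):
            lo, hi = max(0, y - max_step), min(H, y + max_step + 1)
            prev = acc[lo:hi, x - 1]
            k = int(np.argmin(prev))
            acc[y, x] += prev[k]
            back[y, x] = lo + k
    # Backtrack from the cheapest node in the last column.
    y = int(np.argmin(acc[:, -1]))
    path = [y]
    for x in range(W - 1, 0, -1):
        y = back[y, x]
        path.append(y)
    return path[::-1]
```

The plugin's layered construction duplicates this graph so that subsequent shortest paths are forced away from (or related to) earlier ones.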
Collewet, Guylaine; Moussaoui, Saïd; Deligny, Cécile; Lucas, Tiphaine; Idier, Jérôme
2018-06-01
Multi-tissue partial volume estimation in MRI images is investigated from a viewpoint related to spectral unmixing as used in hyperspectral imaging. The main contribution of this paper is twofold. It firstly proposes a theoretical analysis of the statistical optimality conditions of the proportion estimation problem, which in the context of multi-contrast MRI data acquisition makes it possible to set the imaging sequence parameters appropriately. Secondly, an efficient proportion quantification algorithm is proposed, based on the minimisation of a penalised least-squares criterion incorporating a regularity constraint on the spatial distribution of the proportions. Furthermore, the resulting developments are discussed using empirical simulations. The practical usefulness of the spectral unmixing approach for partial volume quantification in MRI is illustrated through an application to food analysis on the proving of a Danish pastry. Copyright © 2018 Elsevier Inc. All rights reserved.
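Under the spectral-unmixing view, the penalised least-squares step can be sketched as below, with a simple 1-D neighbour-smoothness penalty standing in for the paper's spatial regularity constraint; the signature matrix, penalty weight, and step size are all illustrative:

```python
import numpy as np

def unmix_proportions(S, Y, lam=0.01, n_iter=500):
    # Minimise (up to constant factors) ||S P - Y||^2 plus a
    # lam-weighted smoothness penalty on neighbouring pixels of P
    # (n_tissues x n_pixels), by projected gradient descent with
    # proportions clipped to [0, 1] at each step.
    S, Y = np.asarray(S, float), np.asarray(Y, float)
    P = np.full((S.shape[1], Y.shape[1]), 0.5)
    lr = 1.0 / (np.linalg.norm(S, 2) ** 2 + 4.0 * lam)  # stable step
    for _ in range(n_iter):
        grad = S.T @ (S @ P - Y)
        # Gradient of the 1-D neighbour-difference penalty.
        lap = np.zeros_like(P)
        lap[:, 1:] += P[:, 1:] - P[:, :-1]
        lap[:, :-1] += P[:, :-1] - P[:, 1:]
        P = np.clip(P - lr * (grad + lam * lap), 0.0, 1.0)
    return P
```

With noise-free data and a spatially constant ground truth, the regularised solution coincides with the unregularised one, so the iteration recovers the true proportions.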
Photofragment image analysis using the Onion-Peeling Algorithm
NASA Astrophysics Data System (ADS)
Manzhos, Sergei; Loock, Hans-Peter
2003-07-01
With the growing popularity of the velocity map imaging technique, a need arose for the analysis of photoion and photoelectron images. Here, a computer program is presented that allows for the analysis of cylindrically symmetric images. It permits the inversion of the projection of the 3D charged particle distribution using the Onion Peeling Algorithm. Further analysis includes the determination of radial and angular distributions, from which velocity distributions and spatial anisotropy parameters are obtained. Identification and quantification of the different photolysis channels is therefore straightforward. In addition, the program features geometry correction, centering, and multi-Gaussian fitting routines, as well as a user-friendly graphical interface and the possibility of generating synthetic images using either the fitted or user-defined parameters. Program summary: Title of program: Glass Onion. Catalogue identifier: ADRY. Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADRY. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computer: IBM PC. Operating system under which the program has been tested: Windows 98, Windows 2000, Windows NT. Programming language used: Delphi 4.0. Memory required to execute with typical data: 18 Mwords. No. of bits in a word: 32. No. of bytes in distributed program, including test data, etc.: 9 911 434. Distribution format: zip file. Keywords: photofragment image, onion peeling, anisotropy parameters. Nature of physical problem: Information about velocity and angular distributions of photofragments is the basis on which the analysis of the photolysis process rests. Reconstructing the three-dimensional distribution from the photofragment image is the first step; further processing involves angular and radial integration of the inverted image to obtain velocity and angular distributions.
Provisions have to be made to correct for slight distortions of the image, and to verify the accuracy of the analysis process. Method of solution: The "Onion Peeling" algorithm described by Helm [Rev. Sci. Instrum. 67 (6) (1996)] is used to perform the image reconstruction. Angular integration with a subsequent multi-Gaussian fit supplies information about the velocity distribution of the photofragments, whereas radial integration with subsequent expansion of the angular distributions over Legendre polynomials gives the spatial anisotropy parameters. Fitting algorithms have been developed to centre the image and to correct for image distortion. Restrictions on the complexity of the problem: The maximum image size (1280×1280) and resolution (16 bit) are restricted by available memory and can be changed in the source code. Initial centre coordinates within 5 pixels may be required for the correction and centering algorithms to converge. Peaks on the velocity profile separated by less than the peak width may not be deconvolved. In the charged particle image reconstruction, it is assumed that the kinetic energy released in the dissociation process is small compared to the energy acquired in the electric field. For the fitting parameters to be physically meaningful, cylindrical symmetry of the image has to be assumed, but the actual inversion algorithm is stable to distortions of such symmetry in experimental images. Typical running time: The analysis procedure can be divided into three parts: inversion, fitting, and geometry correction. The inversion time grows approximately as R^3, where R is the radius of the region of interest: for R=200 pixels it is less than a minute, for R=400 pixels less than 6 min on a 400 MHz IBM personal computer. The time for the velocity fitting procedure to converge depends strongly on the number of peaks in the velocity profile and the convergence criterion. It ranges between less than a second for simple curves and a few minutes for profiles with up to twenty peaks. The time taken for the image correction scales as R^2 and depends on the curve profile. It is on the order of a few minutes for images with R=500 pixels. Unusual features of the program: Our centering and image correction algorithm is based on Fourier analysis of the radial distribution to ensure the sharpest velocity profile and is insensitive to an uneven intensity distribution. An angular averaging option exists to stabilize the inversion algorithm without losing resolution.
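The radial-integration step and the Legendre fit for the anisotropy parameter can be sketched as below for an already inverted and centred image; the geometry and fitting details are simplified relative to Glass Onion:

```python
import numpy as np

def anisotropy_beta(image, r, dr=2.0):
    # Fit the angular distribution on a thin annulus at radius r to
    # I(theta) = c * (1 + beta * P2(cos theta)), with theta measured
    # from the (vertical) symmetry axis, by linear least squares.
    H, W = image.shape
    y, x = np.indices(image.shape)
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    rad = np.hypot(x - cx, y - cy)
    cos_t = (y - cy) / np.maximum(rad, 1e-9)
    p2 = 0.5 * (3.0 * cos_t ** 2 - 1.0)   # Legendre polynomial P2
    mask = np.abs(rad - r) < dr           # thin annulus at radius r
    A = np.stack([np.ones(mask.sum()), p2[mask]], axis=1)
    c, cb = np.linalg.lstsq(A, image[mask], rcond=None)[0]
    return cb / c                         # the anisotropy parameter
```

Repeating this over all radii, together with the angular integration and multi-Gaussian velocity fit, gives the full set of distributions the program reports.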
Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations
Zhao, Liya; Jia, Kebin
2015-01-01
This paper proposes a new framework for capturing large and complex deformation in image registration. Traditionally, this challenging problem relies firstly on a preregistration, usually an affine matrix containing rotation, scale, and translation, and afterwards on a nonrigid transformation. In preregistration, the directly calculated affine matrix, which is obtained from limited pixel information, may misregister when large biases exist, thus subverting the registration that follows. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper firstly classifies the rotation parameter accurately through multilayer convolutional neural networks (CNNs) and then identifies scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is located through feature correspondences by triplanar 2D CNNs. Then deformation removal is done iteratively through preregistration and demons registration. By comparison with state-of-the-art registration frameworks, our method gains more accurate registration results on both synthetic and real datasets. Besides, principal component analysis (PCA) is combined with correlation measures such as Pearson and Spearman to form new similarity standards in 2D and 3D registration. Experimental results also show faster convergence speed. PMID:26120356
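A rough sketch of a PCA-plus-Pearson similarity standard of the kind the abstract mentions; the exact construction used in the paper may differ:

```python
import numpy as np

def pca_pearson(img_a, img_b, k=4):
    # Project both images onto the top-k principal components of
    # their pooled, mean-centred rows, then take the Pearson
    # correlation of the two projections as a similarity score.
    A = np.vstack([img_a, img_b]).astype(float)
    A -= A.mean(axis=0)
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    pa = (np.asarray(img_a, float) @ vt[:k].T).ravel()
    pb = (np.asarray(img_b, float) @ vt[:k].T).ravel()
    pa, pb = pa - pa.mean(), pb - pb.mean()
    return float(pa @ pb / np.sqrt((pa @ pa) * (pb @ pb)))
```

Such a score can be maximised over candidate transforms in place of plain intensity correlation, concentrating the comparison on the dominant image structure.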
Text recognition and correction for automated data collection by mobile devices
NASA Astrophysics Data System (ADS)
Ozarslan, Suleyman; Eren, P. Erhan
2014-03-01
Participatory sensing is an approach which allows mobile devices such as mobile phones to be used for data collection, analysis and sharing processes by individuals. Data collection is the first and most important part of a participatory sensing system, but it is time consuming for the participants. In this paper, we discuss automatic data collection approaches for reducing the time required for collection, and increasing the amount of collected data. In this context, we explore automated text recognition on images of store receipts which are captured by mobile phone cameras, and the correction of the recognized text. Accordingly, our first goal is to evaluate the performance of the Optical Character Recognition (OCR) method with respect to data collection from store receipt images. Images captured by mobile phones exhibit some typical problems, and common image processing methods cannot handle some of them. Consequently, the second goal is to address these types of problems through our proposed Knowledge Based Correction (KBC) method used in support of the OCR, and also to evaluate the KBC method with respect to the improvement on the accurate recognition rate. Results of the experiments show that the KBC method improves the accurate data recognition rate noticeably.
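A toy illustration of dictionary-backed OCR correction in the spirit of KBC; the lexicon, the character-confusion table, and the thresholds below are all hypothetical, and the paper's method is more elaborate:

```python
import difflib

# Hypothetical receipt lexicon; a real system would use a much
# larger, store-specific vocabulary.
LEXICON = {"MILK", "BREAD", "TOTAL", "BUTTER"}

def kbc_correct(token, cutoff=0.6):
    # Map a noisy OCR token to the closest known receipt word.
    # Common digit/letter confusions (0/O, 1/I, 5/S, 2/Z) are
    # normalised before the dictionary lookup.
    cleaned = token.upper().translate(str.maketrans("0152", "OISZ"))
    if cleaned in LEXICON:
        return cleaned
    match = difflib.get_close_matches(cleaned, LEXICON, n=1, cutoff=cutoff)
    return match[0] if match else token
```

Tokens with no sufficiently close dictionary entry are passed through unchanged, so numeric fields such as prices are not corrupted by the correction step.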
[Near infrared spectroscopy system structure with MOEMS scanning mirror array].
Luo, Biao; Wen, Zhi-Yu; Wen, Zhong-Quan; Chen, Li; Qian, Rong-Rong
2011-11-01
A method that uses a MOEMS mirror array optical structure to reduce the high cost of infrared spectrometers is presented in this paper. The method resolves the imaging-irregularity problem that previously prevented MOEMS mirror arrays from being used in simple infrared spectrometers, and a new structure for spectral imaging was designed. According to the requirements on the imaging spot, the optical structure was designed and optimized using the optical design software ZEMAX and an aberration-specific optimization algorithm. It works from 900 to 1 400 nm. The design analysis showed that, with a light-source slit width of 50 μm, the spectrophotometric system achieves a resolution better than the theoretical value of 6 nm, and the size of the available spot is 0.042 mm x 0.08 mm. Verification examples show that the design meets the requirements of imaging regularity and can be used for MOEMS mirror reflectance scanning; they also confirm that the new MOEMS mirror array spectrometer model is feasible. Finally, the relationship between the location of the detector and the maximum deflection angle of the micro-mirror was analyzed.
Image processing and 3D visualization in the interpretation of patterned injury of the skin
NASA Astrophysics Data System (ADS)
Oliver, William R.; Altschuler, Bruce R.
1995-09-01
The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, its use in the pathologic examination of wounding has been limited. We are investigating the use of image processing in the analysis of patterned injuries and tissue damage. Our interests are currently concentrated on 1) the use of image processing techniques to aid the investigator in observing and evaluating patterned injuries in photographs, 2) measurement of the 3D shape characteristics of surface lesions, and 3) correlation of patterned injuries with deep tissue injury as a problem in 3D visualization. We are beginning investigations in data-acquisition problems for performing 3D scene reconstructions from the pathology perspective of correlating tissue injury to scene features and trace evidence localization. Our primary tool for correlation of surface injuries with deep tissue injuries has been the comparison of processed surface injury photographs with 3D reconstructions from antemortem CT and MRI data. We have developed a prototype robot for the acquisition of 3D wound and scene data.
Yamamoto, Kyosuke; Togami, Takashi; Yamaguchi, Norio
2017-11-06
Unmanned aerial vehicles (UAVs or drones) are a very promising branch of technology, and they have been utilized in agriculture, in cooperation with image processing technologies, for phenotyping and vigor diagnosis. One of the problems in the utilization of UAVs for agricultural purposes is the limitation in flight time. It is necessary to fly at a high altitude to capture the maximum number of plants in the limited time available, but this reduces the spatial resolution of the captured images. In this study, we applied a super-resolution method to the low-resolution images of tomato diseases to recover detailed appearances, such as lesions on plant organs. We also conducted disease classification using high-resolution, low-resolution, and super-resolution images to evaluate the effectiveness of super-resolution methods in disease classification. Our results indicated that the super-resolution method outperformed conventional image scaling methods in spatial resolution enhancement of tomato disease images. The results of disease classification showed that the accuracy attained was also better by a large margin with super-resolution images than with low-resolution images. These results indicated that our approach not only recovered the information lost in low-resolution images, but also exerted a beneficial influence on further image analysis. The proposed approach will accelerate image-based phenotyping and vigor diagnosis in the field, because it not only saves time to capture images of a crop in a cultivation field but also secures the accuracy of these images for further analysis.
NASA Astrophysics Data System (ADS)
Alperovich, Leonid; Averbuch, Amir; Eppelbaum, Lev; Zheludev, Valery
2013-04-01
Karst areas occupy about 14% of the world's land. Karst terranes of different origin create difficult conditions for building, industrial activity and tourism, and are a source of heightened danger for the environment. Mapping of karst (sinkhole) hazards will obviously be one of the most significant problems of engineering geophysics in the XXI century. Taking into account the complexity of geological media, some unfavourable environments and the known ambiguity of geophysical data analysis, examination by a single geophysical method might be insufficient. Wavelet methodology as a whole has a significant impact on cardinal problems of geophysical signal processing, such as denoising of signals, enhancement of signals, distinguishing of signals with closely related characteristics, and integrated analysis of different geophysical fields (satellite, airborne, earth surface or underground observed data). We developed a three-phase approach to the integrated geophysical localization of subsurface karsts (the same approach could be used for subsequent monitoring of karst dynamics). The first phase consists of modeling devoted to computing various geophysical effects characterizing karst phenomena. The second phase develops signal processing approaches for analyzing profile or areal geophysical observations. Finally, the third phase provides integration of these methods in order to create a new method for the combined interpretation of different geophysical data. Our combined geophysical analysis builds on modern developments in wavelet techniques for signal and image processing. The development of the integrated methodology of geophysical field examination will make it possible to recognize karst terranes even at a small signal-to-noise ratio in complex geological environments. For analyzing the geophysical data, we used a technique based on an algorithm that characterizes a geophysical image by a limited number of parameters.
This set of parameters serves as a signature of the image and is utilized to discriminate images containing a karst cavity (K) from images not containing karst (N). The constructed algorithm consists of the following main phases: (a) collection of the database, (b) characterization of geophysical images, and (c) dimensionality reduction. Each image is then characterized by the histogram of its coherency directions. As a result of the previous steps we obtain two sets, K and N, of signature vectors for images from sections containing a karst cavity and non-karst subsurface, respectively.
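The coherency-direction histogram signature can be sketched as a magnitude-weighted orientation histogram; this is a simplification, and the paper's coherency measure may be computed differently:

```python
import numpy as np

def direction_histogram(img, n_bins=16):
    # Signature of a geophysical image: histogram of local gradient
    # (coherency) directions in [0, pi), weighted by gradient
    # magnitude and normalised to sum to one.
    gy, gx = np.gradient(img.astype(float))
    ang = np.arctan2(gy, gx) % np.pi   # orientation, axis-symmetric
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi),
                           weights=mag)
    s = hist.sum()
    return hist / s if s else hist
```

The resulting fixed-length vectors, one per image, are what the K and N signature sets contain before dimensionality reduction and discrimination.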
NASA Astrophysics Data System (ADS)
Hosani, E. Al; Zhang, M.; Abascal, J. F. P. J.; Soleimani, M.
2016-11-01
Electrical capacitance tomography (ECT) is an imaging technology used to reconstruct the permittivity distribution within the sensing region. So far, ECT has been used primarily to image non-conductive media, since if the conductivity of the imaged object is high, the capacitance measuring circuit is nearly short-circuited by the conductive path and a clear image cannot be produced using the standard image reconstruction approaches. This paper tackles the problem of imaging metallic samples using conventional ECT systems by investigating the two main aspects of image reconstruction algorithms, namely the forward problem and the inverse problem. For the forward problem, two different methods to model the region of high conductivity in ECT are presented. For the inverse problem, three different algorithms to reconstruct the high-contrast images are examined. The first two methods, the linear single-step Tikhonov method and the iterative total variation regularization method, use two sets of ECT data to reconstruct the image in time-difference mode. The third method, the level set method, uses absolute ECT measurements and was developed using a metallic forward model. The results indicate that the applications of conventional ECT systems can be extended to metal samples using the suggested algorithms and forward model, especially using a level set algorithm to find the boundary of the metal.
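The linear single-step Tikhonov reconstruction mentioned above reduces to one regularised solve; a minimal sketch, where the sensitivity matrix and regularisation weight are illustrative:

```python
import numpy as np

def tikhonov_step(J, dC, alpha=1e-2):
    # Linear single-step Tikhonov reconstruction: given the
    # sensitivity (Jacobian) matrix J and a time-difference
    # capacitance vector dC, solve
    #   min_x ||J x - dC||^2 + alpha ||x||^2
    # in closed form via the regularised normal equations.
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ dC)
```

The regularisation weight alpha trades noise suppression against spatial resolution; the iterative total-variation and level set methods replace this closed-form solve with iterative schemes.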
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations are performed for each algorithm. However, using the proposed nested approach convergence is significantly accelerated enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). 
Nevertheless, the optimal number of nested image-based EM iterations is hard to define, and it should be selected according to the given application.
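The image-based EM deconvolution sub-problem has the familiar Richardson-Lucy multiplicative form; a 1-D sketch, where the PSF and the iteration count are illustrative and the tomographic EM step that the paper interleaves with it is not shown:

```python
import numpy as np

def rl_deconvolve(blurred, psf, n_iter=50):
    # Image-based EM (Richardson-Lucy) update for the deconvolution
    # sub-problem: x <- x * correlate(blurred / convolve(x, psf), psf).
    # The nested scheme described above runs several of these cheap
    # updates per (expensive) tomographic EM iteration.
    x = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        est = np.convolve(x, psf, mode="same")
        ratio = blurred / np.maximum(est, 1e-12)
        x = x * np.convolve(ratio, psf_flip, mode="same")
    return x
```

Because each update is a couple of short convolutions, running many of them adds little to the overall reconstruction time, which is the motivation for the nesting.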
Measurement of relative density of tissue using wavelet analysis and neural nets
NASA Astrophysics Data System (ADS)
Suyatinov, Sergey I.; Kolentev, Sergey V.; Buldakova, Tatyana I.
2001-01-01
Development of methods for the indirect measurement of a substance's consistency and characteristics is a highly topical problem in medical diagnostics. Many diseases bring about changes in tissue density or the appearance of foreign bodies (e.g. stones in the kidneys or gallbladder). We propose to use wavelet analysis and neural nets for the indirect measurement of the relative density of tissue from images of internal organs. This should make it possible to reveal a disease at an early stage.
De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey
2016-01-01
Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that incorporates all these functionalities is unavailable. This paper presents IntraFace (IF), a publicly-available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases, FERA, CK+ and RU-FACS; measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/. PMID:27346987
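One simple way to flag synchrony between two per-frame expression signals is a sliding-window Pearson correlation; this is a sketch, and IntraFace's unsupervised method is different and more sophisticated:

```python
import numpy as np

def synchrony(sig_a, sig_b, win=30):
    # Windowed Pearson correlation between two per-frame expression
    # signals (e.g. smile intensity of parent and infant); values
    # near 1 flag windows of synchronous behaviour.
    out = []
    for t in range(len(sig_a) - win + 1):
        a = np.asarray(sig_a[t:t + win], float)
        b = np.asarray(sig_b[t:t + win], float)
        a, b = a - a.mean(), b - b.mean()
        denom = np.sqrt((a @ a) * (b @ b))
        out.append(float(a @ b / denom) if denom else 0.0)
    return out
```

Thresholding the resulting trace gives candidate episodes of correlated behaviour that can then be inspected in the video.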
Improved disparity map analysis through the fusion of monocular image segmentations
NASA Technical Reports Server (NTRS)
Perlant, Frederic P.; Mckeown, David M.
1991-01-01
The focus is to examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by the analysis of the original monocular imagery. Surface illumination information is utilized by segmenting the monocular image into fine surface patches of nearly homogeneous intensity, which are used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely with physical surfaces in the scene. Such a technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented on a complex urban scene containing various man-made and natural features. This scene contains a variety of problems including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. Improvements due to monocular fusion with a set of different region-based image segmentations are demonstrated. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.
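The patch-guided statistical cleanup can be sketched as rejecting, within each homogeneous-intensity patch, disparities that deviate strongly from the patch median; the statistic and threshold here are illustrative:

```python
import numpy as np

def filter_disparity(disp, segments, max_dev=2.0):
    # For each homogeneous-intensity patch, assume one physical
    # surface: disparities far from the patch median are marked
    # invalid (NaN), removing stereo mismatches as in the fusion
    # scheme described above.
    out = disp.astype(float).copy()
    for label in np.unique(segments):
        m = segments == label
        med = np.median(disp[m])
        bad = m & (np.abs(disp - med) > max_dev)
        out[bad] = np.nan
    return out
```

Running the same filter against several independent segmentations, and rejecting a disparity only when multiple segmentations agree, makes the cleanup less sensitive to any single segmentation's errors.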
NASA Astrophysics Data System (ADS)
Tong, Xiaojun; Cui, Minggen; Wang, Zhu
2009-07-01
The design of a new compound two-dimensional chaotic function is presented by exploiting two one-dimensional chaotic functions which switch randomly, and the design is used as a chaotic sequence generator, which is proved to be chaotic by Devaney's definition. The properties of compound chaotic functions are also proved rigorously. In order to improve the robustness against differential cryptanalysis and produce an avalanche effect, a new feedback image encryption scheme is proposed using the new compound chaos, selecting one of the two one-dimensional chaotic functions randomly, and a new method of image pixel permutation and substitution is designed in detail by random control of array rows and columns based on the compound chaos. The results of entropy analysis, differential analysis, statistical analysis, sequence randomness analysis, and cipher sensitivity analysis with respect to key and plaintext have proven that the compound chaotic sequence cipher can resist cryptanalytic, statistical and brute-force attacks; moreover, it accelerates encryption speed and achieves a higher level of security. By means of dynamical compound chaos and perturbation technology, the paper addresses the problem of the low computational precision of one-dimensional chaotic functions.
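A toy compound chaotic keystream in the same spirit: two 1-D chaotic maps (logistic and tent) are switched by the current state, and each iterate is quantised to a key byte. The maps, parameters, and switching rule are illustrative, not the paper's exact construction:

```python
def compound_chaotic_stream(x0=0.3141, n=16):
    # Iterate a compound map on (0, 1): the logistic map is applied
    # on the left half of the interval, the tent map on the right,
    # and each iterate is quantised to one keystream byte.
    x, out = x0, []
    for _ in range(n):
        if x < 0.5:
            x = 3.9999 * x * (1.0 - x)   # logistic map branch
        else:
            x = 1.9999 * (1.0 - x)       # tent map branch
        out.append(int(x * 256) & 0xFF)
    return out

def xor_pixels(pixels, key_stream):
    # Substitution step: XOR each pixel value with a keystream byte;
    # applying it twice with the same stream recovers the original.
    return [p ^ k for p, k in zip(pixels, key_stream)]
```

The initial value x0 acts as the key: decryption regenerates the same stream from the same x0 and XORs again. The paper's scheme adds feedback, permutation of rows and columns, and perturbation to counter finite-precision cycles.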
Image analysis for the detection of Barré
USDA-ARS's Scientific Manuscript database
Barré is a major problem for the textile industry. Barré is detectable after fabric is dyed and the detection of barré can depend upon the color of the dyed fabric, lighting conditions, fabric pattern, and/or the color perception of the person viewing the fabric. The standard method for measuring ...
USDA-ARS's Scientific Manuscript database
There is a growing need to combine DNA sequencing technologies to address complex problems in genome biology. These genomic studies routinely generate voluminous image, sequence, and mapping files that should be associated with quality control information (gels, spectra, etc.), and other important ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Chao
Sparx, a new environment for Cryo-EM image processing; Cryo-EM, Single particle reconstruction, principal component analysis; Hardware Req.: PC, MAC, Supercomputer, Mainframe, Multiplatform, Workstation. Software Req.: operating system is Unix; Compiler C++; type of files: source code, object library, executable modules, compilation instructions; sample problem input data. Location/transmission: http://sparx-em.org; User manual & paper: http://sparx-em.org;
Computer-aided diagnostics of screening mammography using content-based image retrieval
NASA Astrophysics Data System (ADS)
Deserno, Thomas M.; Soiron, Michael; de Oliveira, Júlia E. E.; de A. Araújo, Arnaldo
2012-03-01
Breast cancer is one of the main causes of death among women in occidental countries. In the last years, screening mammography has been established worldwide for early detection of breast cancer, and computer-aided diagnostics (CAD) is being developed to assist physicians reading mammograms. A promising method for CAD is content-based image retrieval (CBIR). Recently, we have developed a classification scheme of suspicious tissue pattern based on the support vector machine (SVM). In this paper, we continue moving towards automatic CAD of screening mammography. The experiments are based on a total of 10,509 radiographs collected from different sources. Of these, 3,375 images carry one chain-code annotation of a cancerous region and 430 radiographs carry more than one. In different experiments, these data are divided into 12 and 20 classes, distinguishing between four categories of tissue density, three categories of pathology, and, in the 20-class problem, two categories of different types of lesions. Balancing the number of images in each class yields 233 and 45 images remaining in each of the 12 and 20 classes, respectively. Using a two-dimensional principal component analysis, features are extracted from small patches of 128 x 128 pixels and classified by means of a SVM. Overall, the accuracy of the raw classification was 61.6 % and 52.1 % for the 12 and the 20 class problem, respectively. The confusion matrices are assessed for detailed analysis. Furthermore, an implementation of a SVM-based CBIR system for CADx in screening mammography is presented. In conclusion, with a smarter patch extraction, the CBIR approach might reach precision rates that are helpful for the physicians. This, however, needs more comprehensive evaluation on clinical data.
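The two-dimensional PCA feature-extraction step can be sketched as follows. The patch size, component count, and random training data are illustrative assumptions, and the SVM classification stage is omitted:

```python
import numpy as np

def twod_pca_basis(patches, k):
    """Two-dimensional PCA: learn a column-projection basis.

    patches : array of shape (n, h, w); k : number of components kept.
    Returns the patch mean and W of shape (w, k), so each patch projects
    to an (h, k) feature matrix instead of a flattened vector.
    """
    mean = patches.mean(axis=0)
    centered = patches - mean
    # image scatter matrix, averaged over the training patches
    G = sum(p.T @ p for p in centered) / len(patches)
    vals, vecs = np.linalg.eigh(G)            # eigenvalues ascending
    return mean, vecs[:, -k:]                 # keep the top-k eigenvectors

def project(patch, mean, W):
    # (h, w) patch -> (h, k) feature matrix
    return (patch - mean) @ W

# toy stand-in for 128 x 128 mammography patches
rng = np.random.default_rng(0)
train = rng.normal(size=(50, 16, 16))
mean, W = twod_pca_basis(train, k=4)
feat = project(train[0], mean, W)
```

In the paper's pipeline the resulting feature matrices would then be fed to the SVM classifier.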
Shin, Hoo-Chang; Roth, Holger R; Gao, Mingchen; Lu, Le; Xu, Ziyue; Nogues, Isabella; Yao, Jianhua; Mollura, Daniel; Summers, Ronald M
2016-05-01
Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully apply CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks. In this paper, we exploit three important, but previously understudied factors of applying deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.
Body image and sexual problems in young women with breast cancer.
Fobair, Pat; Stewart, Susan L; Chang, Subo; D'Onofrio, Carol; Banks, Priscilla J; Bloom, Joan R
2006-07-01
The purpose of this study was to determine the frequency of body image and sexual problems in the first months after treatment among women diagnosed with breast cancer at age 50 or younger. Breast cancer treatment may have severe effects on the bodies of younger women. Surgical treatment may be disfiguring, chemotherapy may cause abrupt menopause, and hormone replacement is not recommended. A multi-ethnic population-based sample of 549 women aged 22-50 who were married or in a stable unmarried relationship were interviewed within seven months of diagnosis with in situ, local, or regional breast cancer. Body image and sexual problems were experienced by a substantial proportion of women in the early months after diagnosis. Half of the 546 women experienced two or more body image problems some of the time (33%), or at least one problem much of the time (17%). Among sexually active women, greater body image problems were associated with mastectomy and possible reconstruction, hair loss from chemotherapy, concern with weight gain or loss, poorer mental health, lower self-esteem, and partner's difficulty understanding one's feelings. Among the 360 sexually active women, half (52%) reported having a little problem in two or more areas of sexual functioning (24%), or a definite or serious problem in at least one area (28%). Greater sexual problems were associated with vaginal dryness, poorer mental health, being married, partner's difficulty understanding one's feelings, and more body image problems, and there were significant ethnic differences in reported severity. Difficulties related to sexuality and sexual functioning were common and occurred soon after surgical and adjuvant treatment. Addressing these problems is essential to improve the quality of life of young women with breast cancer.
NASA Astrophysics Data System (ADS)
Maboudi, Mehdi; Amini, Jalal; Malihi, Shirin; Hahn, Michael
2018-04-01
Updated road network as a crucial part of the transportation database plays an important role in various applications. Thus, increasing the automation of the road extraction approaches from remote sensing images has been the subject of extensive research. In this paper, we propose an object based road extraction approach from very high resolution satellite images. Based on the object based image analysis, our approach incorporates various spatial, spectral, and textural objects' descriptors, the capabilities of the fuzzy logic system for handling the uncertainties in road modelling, and the effectiveness and suitability of ant colony algorithm for optimization of network related problems. Four VHR optical satellite images which are acquired by Worldview-2 and IKONOS satellites are used in order to evaluate the proposed approach. Evaluation of the extracted road networks shows that the average completeness, correctness, and quality of the results can reach 89%, 93% and 83% respectively, indicating that the proposed approach is applicable for urban road extraction. We also analyzed the sensitivity of our algorithm to different ant colony optimization parameter values. Comparison of the achieved results with the results of four state-of-the-art algorithms and quantifying the robustness of the fuzzy rule set demonstrate that the proposed approach is both efficient and transferable to other comparable images.
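The ant colony optimization step for network-related problems can be sketched on a toy shortest-path instance. The pheromone update rule and parameters below are generic textbook choices, not necessarily those of the paper:

```python
import random

def aco_shortest_path(graph, src, dst, n_ants=20, n_iter=40, rho=0.5, seed=1):
    """Toy ant colony optimisation for a shortest path (illustrative only).

    graph : dict of dicts, graph[u][v] = edge length.
    """
    random.seed(seed)
    # one pheromone value per directed edge
    tau = {(n, m): 1.0 for n in graph for m in graph[n]}
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            node, path, cost, seen = src, [src], 0.0, {src}
            while node != dst:
                nxt = [m for m in graph[node] if m not in seen]
                if not nxt:
                    break
                # choose proportionally to pheromone * heuristic (1 / length)
                w = [tau[(node, m)] / graph[node][m] for m in nxt]
                node = random.choices(nxt, weights=w)[0]
                cost += graph[path[-1]][node]
                path.append(node)
                seen.add(node)
            if node == dst:
                tours.append((path, cost))
                if cost < best_cost:
                    best, best_cost = path, cost
        # evaporate, then deposit pheromone along completed tours
        for e in tau:
            tau[e] *= (1.0 - rho)
        for path, cost in tours:
            for e in zip(path, path[1:]):
                tau[e] += 1.0 / cost
    return best, best_cost
```

In the road-extraction context the graph nodes would be candidate road segments rather than abstract vertices.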
Multiple directed graph large-class multi-spectral processor
NASA Technical Reports Server (NTRS)
Casasent, David; Liu, Shiaw-Dong; Yoneyama, Hideyuki
1988-01-01
Numerical analysis techniques for the interpretation of high-resolution imaging-spectrometer data are described and demonstrated. The method proposed involves the use of (1) a hierarchical classifier with a tree structure generated automatically by a Fisher linear-discriminant-function algorithm and (2) a novel multiple-directed-graph scheme which reduces the local maxima and the number of perturbations required. Results for a 500-class test problem involving simulated imaging-spectrometer data are presented in tables and graphs; 100-percent-correct classification is achieved with an improvement factor of 5.
Hafer, J C; Joiner, C
1984-01-01
This article addresses role conflict and image problems nurses have with role partners. If these problems were corrected, nurses could be valuable assets in a "team selling" effort to help hospitals build their images. This research integrates sales management concepts and cites literature alluding to sales management research on identical problems.
A benchmark for comparison of dental radiography analysis algorithms.
Wang, Ching-Wei; Huang, Cheng-Ta; Lee, Jia-Hong; Li, Chung-Hsing; Chang, Sheng-Wei; Siao, Ming-Jhih; Lai, Tat-Ming; Ibragimov, Bulat; Vrtovec, Tomaž; Ronneberger, Olaf; Fischer, Philipp; Cootes, Tim F; Lindner, Claudia
2016-07-01
Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical usages. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray image and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/). Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Garcia-Allende, P. Beatriz; Amygdalos, Iakovos; Dhanapala, Hiruni; Goldin, Robert D.; Hanna, George B.; Elson, Daniel S.
2012-01-01
Computer-aided diagnosis of ophthalmic diseases using optical coherence tomography (OCT) relies on the extraction of thickness and size measures from the OCT images, but such defined layers are usually not observed in emerging OCT applications aimed at "optical biopsy" such as pulmonology or gastroenterology. Mathematical methods such as Principal Component Analysis (PCA) or textural analyses, including both spatial textural analysis derived from the two-dimensional discrete Fourier transform (DFT) and statistical texture analysis obtained independently from center-symmetric auto-correlation (CSAC) and spatial grey-level dependency matrices (SGLDM), as well as quantitative measurements of the attenuation coefficient, have been previously proposed to overcome this problem. We recently proposed an alternative approach consisting of a region segmentation according to the intensity variation along the vertical axis and a purely statistical technique for feature quantification. OCT images were first segmented in the axial direction in an automated manner according to intensity. Afterwards, a morphological analysis of the segmented OCT images was employed for quantifying the features that served for tissue classification. In this study, a PCA processing of the extracted features is accomplished to combine their discriminative power in a lower number of dimensions. Ready discrimination of gastrointestinal surgical specimens is attained, demonstrating that the approach further surpasses the algorithms previously reported and is feasible for tissue classification in the clinical setting.
Visual analytics for semantic queries of TerraSAR-X image content
NASA Astrophysics Data System (ADS)
Espinoza-Molina, Daniela; Alonso, Kevin; Datcu, Mihai
2015-10-01
With the continuous image product acquisition of satellite missions, the size of the image archives is considerably increasing every day as well as the variety and complexity of their content, surpassing the end-user capacity to analyse and exploit them. Advances in the image retrieval field have contributed to the development of tools for interactive exploration and extraction of the images from huge archives using different parameters like metadata, key-words, and basic image descriptors. Even though we count on more powerful tools for automated image retrieval and data analysis, we still face the problem of understanding and analyzing the results. Thus, a systematic computational analysis of these results is required in order to provide to the end-user a summary of the archive content in comprehensible terms. In this context, visual analytics combines automated analysis with interactive visualizations analysis techniques for an effective understanding, reasoning and decision making on the basis of very large and complex datasets. Moreover, currently several researches are focused on associating the content of the images with semantic definitions for describing the data in a format to be easily understood by the end-user. In this paper, we present our approach for computing visual analytics and semantically querying the TerraSAR-X archive. Our approach is mainly composed of four steps: 1) the generation of a data model that explains the information contained in a TerraSAR-X product. The model is formed by primitive descriptors and metadata entries, 2) the storage of this model in a database system, 3) the semantic definition of the image content based on machine learning algorithms and relevance feedback, and 4) querying the image archive using semantic descriptors as query parameters and computing the statistical analysis of the query results. 
The experimental results show that, with the help of visual analytics and semantic definitions, we are able to explain the image content using semantic terms and the relations between them, answering questions such as "What is the percentage of urban area in a region?" or "What is the distribution of water bodies in a city?"
Students' Images of Problem Contexts when Solving Applied Problems
ERIC Educational Resources Information Center
Moore, Kevin C.; Carlson, Marilyn P.
2012-01-01
This article reports findings from an investigation of precalculus students' approaches to solving novel problems. We characterize the images that students constructed during their solution attempts and describe the degree to which they were successful in imagining how the quantities in a problem's context change together. Our analyses revealed…
Students’ Errors in Geometry Viewed from Spatial Intelligence
NASA Astrophysics Data System (ADS)
Riastuti, N.; Mardiyana, M.; Pramudya, I.
2017-09-01
Geometry is one of the more difficult topics because students must be able to visualize, describe images, draw shapes, and recognize kinds of shapes. The aim of this study is to describe students' errors, based on Newman's Error Analysis, in solving geometry problems, viewed from spatial intelligence. This research uses a descriptive qualitative method with a purposive sampling technique. The data in this research are the results of a geometry test and interviews with 8th graders of a junior high school in Indonesia. The results of this study show that each category of spatial intelligence exhibits a different type of error in solving geometry problems. Errors are mostly made by students with low spatial intelligence because they have deficiencies in visual abilities. Analysis of student errors viewed from spatial intelligence is expected to help students reflect when solving geometry problems.
2012-01-01
Background Dimensionality reduction (DR) enables the construction of a lower dimensional space (embedding) from a higher dimensional feature space while preserving object-class discriminability. However several popular DR approaches suffer from sensitivity to choice of parameters and/or presence of noise in the data. In this paper, we present a novel DR technique known as consensus embedding that aims to overcome these problems by generating and combining multiple low-dimensional embeddings, hence exploiting the variance among them in a manner similar to ensemble classifier schemes such as Bagging. We demonstrate theoretical properties of consensus embedding which show that it will result in a single stable embedding solution that preserves information more accurately as compared to any individual embedding (generated via DR schemes such as Principal Component Analysis, Graph Embedding, or Locally Linear Embedding). Intelligent sub-sampling (via mean-shift) and code parallelization are utilized to provide for an efficient implementation of the scheme. Results Applications of consensus embedding are shown in the context of classification and clustering as applied to: (1) image partitioning of white matter and gray matter on 10 different synthetic brain MRI images corrupted with 18 different combinations of noise and bias field inhomogeneity, (2) classification of 4 high-dimensional gene-expression datasets, (3) cancer detection (at a pixel-level) on 16 image slices obtained from 2 different high-resolution prostate MRI datasets. In over 200 different experiments concerning classification and segmentation of biomedical data, consensus embedding was found to consistently outperform both linear and non-linear DR methods within all applications considered. 
Conclusions We have presented a novel framework termed consensus embedding which leverages ensemble classification theory within dimensionality reduction, allowing for application to a wide range of high-dimensional biomedical data classification and segmentation problems. Our generalizable framework allows for improved representation and classification in the context of both imaging and non-imaging data. The algorithm offers a promising solution to problems that currently plague DR methods, and may allow for extension to other areas of biomedical data analysis. PMID:22316103
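One way to realize the idea of combining multiple embeddings can be sketched as follows, under the assumptions that the weak embeddings come from PCA on random feature subsets and that they are fused through scale-normalized pairwise-distance matrices followed by classical MDS; the paper's exact combination rule may differ:

```python
import numpy as np

def pca_embed(X, d, rng):
    # one "weak" embedding: PCA on a random subset of the features
    idx = rng.choice(X.shape[1], size=max(d, X.shape[1] // 2), replace=False)
    Xs = X[:, idx] - X[:, idx].mean(axis=0)
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    return Xs @ Vt[:d].T

def consensus_embedding(X, d=2, n_embeddings=10, seed=0):
    """Fuse several low-dimensional embeddings of X (n samples x p features)
    via their normalised pairwise-distance matrices, then re-embed the
    consensus distances with classical multidimensional scaling."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    D = np.zeros((n, n))
    for _ in range(n_embeddings):
        Y = pca_embed(X, d, rng)
        Dy = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
        D += Dy / Dy.max()                    # scale-normalise, then accumulate
    D /= n_embeddings
    # classical MDS on the consensus distance matrix
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    return vecs[:, -d:] * np.sqrt(np.maximum(vals[-d:], 0))
```

The ensemble flavour is the same as in Bagging: individual embeddings vary with the sampled features, and averaging their distance structure stabilizes the result.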
Combined optimization of image-gathering and image-processing systems for scene feature detection
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Arduini, Robert F.; Samms, Richard W.
1987-01-01
The relationship between the image gathering and image processing systems for minimum mean squared error estimation of scene characteristics is investigated. A stochastic optimization problem is formulated where the objective is to determine a spatial characteristic of the scene rather than a feature of the already blurred, sampled and noisy image data. An analytical solution for the optimal characteristic image processor is developed. The Wiener filter for the sampled image case is obtained as a special case, where the desired characteristic is scene restoration. Optimal edge detection is investigated using the Laplacian-of-Gaussian operator ∇²G as the desired characteristic, where G is a two-dimensional Gaussian distribution function. It is shown that the optimal edge detector compensates for the blurring introduced by the image gathering optics, and notably, that it is not circularly symmetric. The lack of circular symmetry is largely due to the geometric effects of the sampling lattice used in image acquisition. The optimal image gathering optical transfer function is also investigated and the results of a sensitivity analysis are shown.
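The Laplacian-of-Gaussian characteristic itself is easy to sketch; the kernel size and σ below are illustrative, and the optimal non-symmetric compensation filter derived in the paper is not reproduced here:

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian kernel, the 'desired characteristic'
    used for edge detection above."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()                       # zero-sum: no response on flat areas

def convolve2d(img, k):
    # naive edge-padded 2-D correlation, adequate for a demonstration
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * k)
    return out
```

Applied to a step-edge image, the response vanishes on flat regions and peaks near the edge, with a zero crossing at the edge itself.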
System Matrix Analysis for Computed Tomography Imaging
Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo
2015-01-01
In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high quality CT images can be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data. PMID:26575482
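A minimal version of Siddon's method for a single ray on a unit-pixel grid might look like the sketch below, simplified to 2-D; the paper's actual implementation details are not shown:

```python
import numpy as np

def siddon_ray(p0, p1, nx, ny):
    """Intersection lengths of the ray p0 -> p1 with an nx x ny grid of
    unit pixels covering [0, nx] x [0, ny]. Returns {(i, j): length},
    i.e. one row of the CT system matrix for this ray."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    # parametric values where the ray crosses pixel boundaries
    alphas = [0.0, 1.0]
    for axis, n in ((0, nx), (1, ny)):
        if d[axis] != 0:
            for k in range(n + 1):
                a = (k - p0[axis]) / d[axis]
                if 0 < a < 1:
                    alphas.append(a)
    alphas = np.unique(np.clip(alphas, 0, 1))
    lengths = {}
    L = np.linalg.norm(d)
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d      # midpoint identifies the pixel
        i, j = int(mid[0]), int(mid[1])
        if 0 <= i < nx and 0 <= j < ny:
            lengths[(i, j)] = (a1 - a0) * L
    return lengths
```

Stacking such rows over all detector rays yields the sparse system matrix used by the iterative reconstruction algorithms discussed in the abstract.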
AFM feature definition for neural cells on nanofibrillar tissue scaffolds.
Tiryaki, Volkan M; Khan, Adeel A; Ayres, Virginia M
2012-01-01
A diagnostic approach is developed and implemented that provides clear feature definition in atomic force microscopy (AFM) images of neural cells on nanofibrillar tissue scaffolds. Because the cellular edges and processes are on the same order as the background nanofibers, this imaging situation presents a feature definition problem. The diagnostic approach is based on analysis of discrete Fourier transforms of standard AFM section measurements. The diagnostic conclusion that the combination of dynamic range enhancement with low-frequency component suppression enhances feature definition is shown to be correct and to lead to clear-featured images that could change previously held assumptions about the cell-cell interactions present. Clear feature definition of cells on scaffolds extends the usefulness of AFM imaging for use in regenerative medicine. © Wiley Periodicals, Inc.
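The diagnostic conclusion, suppressing low-frequency components via the DFT, can be sketched for a 1-D section measurement; the cutoff is an assumed tuning parameter, not a value from the paper:

```python
import numpy as np

def suppress_low_freq(profile, cutoff=3):
    """High-pass filter an AFM section profile via the DFT: zero out the
    lowest `cutoff` spatial-frequency bins (the slow background
    undulation of the nanofibers), then return to the spatial domain."""
    F = np.fft.rfft(profile)
    F[:cutoff] = 0.0
    return np.fft.irfft(F, n=len(profile))
```

Dynamic range enhancement (e.g. contrast stretching) would then be applied to the filtered profile, per the combination the abstract recommends.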
NASA Astrophysics Data System (ADS)
Wu, Yu-Xia; Zhang, Xi; Xu, Xiao-Pan; Liu, Yang; Zhang, Guo-Peng; Li, Bao-Juan; Chen, Hui-Jun; Lu, Hong-Bing
2017-02-01
Ischemic stroke is strongly associated with carotid atherosclerosis and is mostly caused by vulnerable plaques. It is therefore particularly important to analyze plaque composition for the detection of vulnerable plaques. Recently, plaque analysis based on multi-contrast magnetic resonance (MR) imaging has attracted great attention. Although multi-contrast MR imaging can enhance the depiction of the carotid wall, its performance is hampered by misalignment between the different imaging sequences. In this study, a coarse-to-fine registration strategy based on cross-sectional images and wall boundaries is proposed to solve this problem. It includes two steps: a rigid step using iterative closest points to register the centerlines of the carotid artery extracted from the multi-contrast MR images, and a non-rigid step using thin-plate splines to register the lumen boundaries of the carotid artery. In the rigid step, the centerline is extracted by tracking the cross-sectional images along the vessel direction computed from the Hessian matrix. In the non-rigid step, a shape context descriptor is introduced to find corresponding points on two similar boundaries, and the deterministic annealing technique is used to find a globally optimized solution. The proposed strategy was evaluated on newly developed three-dimensional, fast, high-resolution multi-contrast black-blood MR images. Quantitative validation indicated that, after registration, the overlap of the two boundaries from different sequences is 95%, and their mean surface distance is 0.12 mm. In conclusion, the proposed algorithm effectively improves registration accuracy for further component analysis of carotid plaques.
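The rigid step (iterative closest points with a closed-form Procrustes fit) can be sketched in 2-D with NumPy; this is a generic textbook ICP, not the authors' exact centerline registration:

```python
import numpy as np

def icp_rigid_2d(src, dst, n_iter=20):
    """Minimal rigid ICP: align point set `src` to `dst` (both (n, 2)).

    Each iteration matches every source point to its nearest target point,
    then solves the best rotation/translation in closed form via SVD."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(n_iter):
        # nearest-neighbour correspondences
        d = np.linalg.norm(cur[:, None] - dst[None, :], axis=-1)
        match = dst[d.argmin(axis=1)]
        # closed-form rigid fit from the cross-covariance (Procrustes)
        mc, mm = cur.mean(axis=0), match.mean(axis=0)
        H = (cur - mc).T @ (match - mm)
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:             # avoid reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mm - Ri @ mc
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti            # accumulate the transform
    return R, t
```

In the paper the aligned centerlines then provide the initialization for the non-rigid thin-plate-spline step.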
Images of the future - Two decades in astronomy
NASA Technical Reports Server (NTRS)
Weistrop, D.
1982-01-01
Future instruments for the 100-10,000 A UV-wavelength region will require detectors with greater quantum efficiency, smaller picture elements, a greater wavelength range, and greater active area than those currently available. After assessing the development status and performance characteristics of vidicons, image tubes, electronographic cameras, digicons, silicon arrays and microchannel plate intensifiers presently employed by astronomical spacecraft, attention is given to such next-generation detectors as the Mosaicked Optical Self-scanned Array Imaging Camera, which consists of a photocathode deposited on the input side of a microchannel plate intensifier. The problems posed by the signal processing and data analysis requirements of the devices foreseen for the 21st century are noted.
NASA Astrophysics Data System (ADS)
Stewart, P. A. E.
1987-05-01
Present and projected applications of penetrating radiation techniques to gas turbine research and development are considered. Approaches discussed include the visualization and measurement of metal component movement using high energy X-rays, the measurement of metal temperatures using epithermal neutrons, the measurement of metal stresses using thermal neutron diffraction, and the visualization and measurement of oil and fuel systems using either cold neutron radiography or emitting isotope tomography. By selecting the radiation appropriate to the problem, the desired data can be probed for and obtained through imaging or signal acquisition, and the necessary information can then be extracted with digital image processing or knowledge based image manipulation and pattern recognition.
Emotion Recognition - the need for a complete analysis of the phenomenon of expression formation
NASA Astrophysics Data System (ADS)
Bobkowska, Katarzyna; Przyborski, Marek; Skorupka, Dariusz
2018-01-01
This article shows how complex emotions are, as demonstrated by an analysis of the changes that occur on the face. The authors present the problem of image analysis for the purpose of identifying emotions. In addition, they point out the importance of recording the development of emotions on the human face with high-speed cameras, which allows the detection of micro-expressions. The work prepared for this article was based on analyzing the parallax-pair correlation coefficients for specific faces. The authors propose dividing the facial image into 8 characteristic segments. With this approach, it was confirmed that, at different moments of an emotion, the pace of expression and the maximum change characteristic of that emotion differ for each part of the face.
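The segment-wise correlation analysis can be sketched as follows, assuming a simple 2 × 4 grid of segments and per-segment Pearson correlation between two frames; the authors' exact segmentation may differ:

```python
import numpy as np

def segment_correlations(frame_a, frame_b, n_rows=2, n_cols=4):
    """Correlation coefficient per face segment (8 segments in a 2 x 4
    grid), comparing corresponding regions of two frames."""
    h = frame_a.shape[0] // n_rows
    w = frame_a.shape[1] // n_cols
    corrs = np.zeros((n_rows, n_cols))
    for i in range(n_rows):
        for j in range(n_cols):
            a = frame_a[i*h:(i+1)*h, j*w:(j+1)*w].ravel()
            b = frame_b[i*h:(i+1)*h, j*w:(j+1)*w].ravel()
            corrs[i, j] = np.corrcoef(a, b)[0, 1]
    return corrs
```

A segment whose correlation drops between consecutive high-speed frames is one where the expression is changing fastest.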
High dynamic range algorithm based on HSI color space
NASA Astrophysics Data System (ADS)
Zhang, Jiancheng; Liu, Xiaohua; Dong, Liquan; Zhao, Yuejin; Liu, Ming
2014-10-01
This paper presents a high-dynamic-range algorithm based on the HSI color space. The first problem is to preserve the hue and saturation of the original image and conform to human visual perception; to do so, the input image data are converted to the HSI color space, which includes an intensity dimension. The second problem is to raise the speed of the algorithm; an integral image is used to compute the average intensity of every pixel at a certain scale, which serves as the local intensity component of the image, from which the detail intensity component is derived. The third problem is to adjust the overall image intensity; an S-shaped curve is derived from the original image information, and the local intensity component is adjusted according to this curve. The fourth problem is to enhance detail information; the detail intensity component is adjusted according to a curve designed in advance. The weighted sum of the adjusted local intensity component and the adjusted detail intensity component gives the final intensity. Converting the resulting intensity together with the other two dimensions back to the output color space yields the final processed image.
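The integral-image trick used for the local intensity component can be sketched as follows; the intensity definition (mean of R, G, B, the usual HSI convention) and the border handling are common conventions assumed here:

```python
import numpy as np

def rgb_to_intensity(img):
    # the I channel of HSI is simply the mean of R, G and B
    return img.mean(axis=-1)

def local_mean(intensity, radius):
    """Box mean around each pixel via an integral image: each window sum
    costs four lookups regardless of the window size, which is the speed
    trick noted in the abstract."""
    p = np.pad(intensity, ((1, 0), (1, 0)))
    S = p.cumsum(0).cumsum(1)                 # integral image, shape (h+1, w+1)
    h, w = intensity.shape
    out = np.zeros_like(intensity, dtype=float)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(i - radius, 0), min(i + radius + 1, h)
            j0, j1 = max(j - radius, 0), min(j + radius + 1, w)
            area = (i1 - i0) * (j1 - j0)
            out[i, j] = (S[i1, j1] - S[i0, j1] - S[i1, j0] + S[i0, j0]) / area
    return out
```

The detail component is then simply `intensity - local_mean(intensity, radius)`, and the two components are adjusted and recombined as the abstract describes.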
A linear programming approach to max-sum problem: a review.
Werner, Tomás
2007-07-01
The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
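As a concrete instance, the chain-structured special case of max-sum is exactly solvable by dynamic programming; this generic sketch illustrates the labeling problem itself, not Schlesinger et al.'s LP approach:

```python
def chain_max_sum(unary, pairwise):
    """Exact max-sum labelling on a chain by dynamic programming.

    unary    : list of dicts, unary[i][label] = score of label at node i
    pairwise : dict, pairwise[(a, b)] = score for neighbouring labels a, b
    Chains are a polynomially solvable special case of the generally
    NP-hard max-sum problem discussed above."""
    n = len(unary)
    # forward pass: best score of a prefix ending in each label
    score = [dict(unary[0])]
    back = []
    for i in range(1, n):
        cur, bp = {}, {}
        for b in unary[i]:
            best_a = max(score[-1],
                         key=lambda a: score[-1][a] + pairwise[(a, b)])
            cur[b] = score[-1][best_a] + pairwise[(best_a, b)] + unary[i][b]
            bp[b] = best_a
        score.append(cur)
        back.append(bp)
    # backtrack the argmax labelling
    last = max(score[-1], key=score[-1].get)
    labels = [last]
    for bp in reversed(back):
        labels.append(bp[labels[-1]])
    return list(reversed(labels)), score[-1][last]
```

On loopy graphs this recursion no longer applies, which is exactly where the LP relaxation and equivalent transformations reviewed in the paper come in.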
The new analysis method of PWQ in the DRAM pattern
NASA Astrophysics Data System (ADS)
Han, Daehan; Chang, Jinman; Kim, Taeheon; Lee, Kyusun; Kim, Yonghyeon; Kang, Jinyoung; Hong, Aeran; Choi, Bumjin; Lee, Joosung; Kim, Hyoung Jun; Lee, Kweonjae; Hong, Hyoungsun; Jin, Gyoyoung
2016-03-01
In sub-2X nm node processes, feedback on pattern weak points is increasingly significant. It is therefore very important to extract systematic defects in Double Patterning Technology (DPT); however, it is impossible to predict exact systematic defects with current photo simulation tools.[1] The Process Window Qualification (PWQ) method is therefore critical and essential. Conventional PWQ methods perform die-to-die image comparison using an e-beam or bright-field machine, and in some cases the results are evaluated by the person reviewing the images. However, conventional die-to-die comparison has a critical problem: if the reference die and the comparison die share the same pattern problem, the issue patterns are not detected by the current defect-detection approach. Aside from inspection accuracy, reviewing the wafer requires much effort and time to identify the genuine issue patterns. Therefore, our company adopted a die-to-database matching PWQ method using an NGR machine. The main features of the NGR are as follows: first, die-to-database matching; second, high speed; finally, the massive data volumes used for pattern inspection.[2] Even though our die-to-database matching PWQ method measures mass data, our margin decision process is based on image shape, which causes some significant problems. First, because of the long analysis time, the development period for a new device is increased. Moreover, because of resource limitations, the full chip area may not be examined; consequently, the PWQ weak points found cannot represent all possible defects. Finally, since the PWQ margin is not decided by a mathematical value, it is impossible to give a solid definition of a killing defect. To overcome these problems, we introduce a statistical-values-based process window qualification method that increases the accuracy of the process margin and reduces the review time.
Therefore, it is possible to see the genuine margin of the critical pattern issue which we cannot see on our conventional PWQ inspection; hence we can enhance the accuracy of PWQ margin.
Real-time computation of parameter fitting and image reconstruction using graphical processing units
NASA Astrophysics Data System (ADS)
Locans, Uldis; Adelmann, Andreas; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Günther; Wang, Qiulin
2017-06-01
In recent years, graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of μSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify the parts of the algorithms in need of optimization, and efficient GPU kernels were created to speed up those parts. Benchmarking tests were performed to measure the achieved speedup. During this work we focused on single-GPU systems to show that real-time data analysis of these problems can be achieved without the need for large computing clusters. The results show that the currently used application for parameter fitting, which uses OpenMP to parallelize calculations over multiple CPU cores, can be accelerated around 40 times through the use of a GPU; the speedup varies with the size and complexity of the problem. For PET image analysis, the GPU version achieved speedups of more than 40× compared to a single-core CPU implementation. These results show that it is possible to improve the execution time by orders of magnitude.
NASA Astrophysics Data System (ADS)
Kim, Sungho
2017-06-01
Automatic target recognition (ATR) is a traditionally challenging problem in military applications because of the wide range of infrared (IR) image variations and the limited number of training images. IR variations are caused by various three-dimensional target poses, noncooperative weather conditions (fog and rain), and difficult target acquisition environments. Recently, deep convolutional neural network-based approaches for RGB images (RGB-CNN) showed breakthrough performance in computer vision problems such as object detection and classification. Direct application of RGB-CNN to the IR ATR problem fails because of the IR database problems (limited database size and IR image variations). An IR variation-reduced deep CNN (IVR-CNN) is presented to cope with these problems. The problem of limited IR database size is solved by a commercial thermal simulator (OKTAL-SE). The second problem, IR variations, is mitigated by the proposed shifted ramp function-based intensity transformation, which can suppress the background and enhance the target contrast simultaneously. The experimental results on synthesized IR images generated by the thermal simulator (OKTAL-SE) validated the feasibility of IVR-CNN for military ATR applications.
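The abstract does not give the exact form of the shifted ramp transformation; a minimal sketch of the idea, assuming a simple clamp-and-stretch mapping with hypothetical `shift` and `slope` parameters, might look like:

```python
import numpy as np

def shifted_ramp(img, shift=0.3, slope=2.0):
    """Hypothetical shifted-ramp intensity transform: values below
    `shift` (background) are suppressed to 0, values above it are
    linearly stretched, and the result is clipped to [0, 1]."""
    out = slope * (img - shift)
    return np.clip(out, 0.0, 1.0)

# Toy IR frame in [0, 1]: cool background (~0.2) with a warm target (~0.7).
frame = np.full((8, 8), 0.2)
frame[3:5, 3:5] = 0.7

enhanced = shifted_ramp(frame)
# background is suppressed to 0 while the target is stretched upward,
# increasing target-to-background contrast
```

The two parameters would in practice be chosen per dataset; the sketch only illustrates the simultaneous background suppression and contrast enhancement the abstract describes.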
The Effects of Immigration and Media Influence on Body Image Among Pakistani Men
Saghir, Sheeba; Hyland, Lynda
2017-01-01
This study examined the role of media influence and immigration on body image among Pakistani men. Attitudes toward the body were compared between those living in Pakistan (n = 56) and those who had immigrated to the United Arab Emirates (n = 58). Results of a factorial analysis of variance demonstrated a significant main effect of immigrant status. Pakistani men living in the United Arab Emirates displayed poorer body image than those in the Pakistan sample. Results also indicated a second main effect of media influence. Those highly influenced by the media displayed poorer body image. No interaction effect was observed between immigrant status and media influence on body image. These findings suggest that media influence and immigration are important risk factors for the development of negative body image among non-Western men. Interventions designed to address the negative effects of the media and immigration may be effective at reducing body image disorders and other related health problems in this population. PMID:28625116
Groesz, Lisa M; Levine, Michael P; Murnen, Sarah K
2002-01-01
The effect of experimental manipulations of the thin beauty ideal, as portrayed in the mass media, on female body image was evaluated using meta-analysis. Data from 25 studies (43 effect sizes) were used to examine the main effect of mass media images of the slender ideal, as well as the moderating effects of pre-existing body image problems, the age of the participants, the number of stimulus presentations, and the type of research design. Body image was significantly more negative after viewing thin media images than after viewing images of either average size models, plus size models, or inanimate objects. This effect was stronger for between-subjects designs, participants less than 19 years of age, and for participants who are vulnerable to activation of a thinness schema. Results support the sociocultural perspective that mass media promulgate a slender ideal that elicits body dissatisfaction. Implications for prevention and research on social comparison processes are considered. Copyright 2002 by John Wiley & Sons, Inc.
Novel view synthesis by interpolation over sparse examples
NASA Astrophysics Data System (ADS)
Liang, Bodong; Chung, Ronald C.
2006-01-01
Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As such, the problem has a natural tie to interpolation, even though mainstream efforts on the problem have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(xi,fi):i=1,2,3,...,n} of the function. If the input x is the viewpoint and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt the example-based interpolation (EBI) mechanism, an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth with minimum oscillations between the examples. We point out that EBI, however, has difficulty in interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism to overcome the limitation. We also present how the extended interpolation mechanism can be used to synthesize images at novel viewpoints. Real-image results show that the mechanism has promising performance, even with very few example images.
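The abstract's defining property of EBI — every given input-output example is reproduced exactly, with smooth behavior between examples — is shared by standard kernel (radial basis function) interpolation. A minimal 1-D sketch of that property (not the paper's actual EBI formulation) is:

```python
import numpy as np

def rbf_interpolant(x_train, f_train, sigma=1.0):
    """Exact kernel interpolant f(x) = sum_i w_i * k(x, x_i) with a
    Gaussian kernel; weights are solved so that every training pair
    (x_i, f_i) is reproduced exactly."""
    k = lambda a, b: np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * sigma**2))
    K = k(x_train, x_train)          # kernel Gram matrix
    w = np.linalg.solve(K, f_train)  # enforce K @ w = f_train
    return lambda x: k(np.atleast_1d(x), x_train) @ w

# Toy examples: four (viewpoint, value) pairs standing in for images.
xs = np.array([0.0, 1.0, 2.0, 3.0])
fs = np.array([1.0, 3.0, 2.0, 0.5])
f = rbf_interpolant(xs, fs)
# the interpolant passes through every example exactly
```

In the NVS setting each f_i would be a whole image rather than a scalar, and the paper's extension addresses functions this plain scheme handles poorly.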
ANALYTIC MODELING OF STARSHADES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cash, Webster
2011-09-01
External occulters, otherwise known as starshades, have been proposed as a solution to one of the highest priority yet technically vexing problems facing astrophysics: the direct imaging and characterization of terrestrial planets around other stars. New apodization functions, developed over the past few years, now enable starshades of just a few tens of meters diameter to occult central stars so efficiently that the orbiting exoplanets can be revealed and other high-contrast imaging challenges addressed. In this paper, an analytic approach to the analysis of these apodization functions is presented. It is used to develop a tolerance analysis suitable for use in designing practical starshades. The results provide a mathematical basis for understanding starshades and a quantitative approach to setting tolerances.
NASA Technical Reports Server (NTRS)
Gordon, H. R.; Evans, R. H.
1993-01-01
In a recent paper Eckstein and Simpson describe what they believe to be serious difficulties and/or errors with the CZCS (Coastal Zone Color Scanner) processing algorithms based on their analysis of seven images. Here we point out that portions of their analysis, particularly those dealing with multiple scattered Rayleigh radiance, are incorrect. We also argue that other problems they discuss have already been addressed in the literature. Finally, we suggest that many apparent artifacts in CZCS-derived pigment fields are likely to be due to inadequacies in the sensor band set or to poor radiometric stability, both of which will be remedied with the next generation of ocean color sensors.
NASA Astrophysics Data System (ADS)
Cunningham, Cindy C.; Peloquin, Tracy D.
1999-02-01
Since late 1996 the Forensic Identification Services Section of the Ontario Provincial Police has been actively involved in state-of-the-art image capture and the processing of video images extracted from crime scene videos. The benefits and problems of this technology for video analysis are discussed. All analysis is being conducted on SUN Microsystems UNIX computers, networked to a digital disk recorder that is used for video capture. The primary advantage of this system over traditional frame grabber technology is reviewed. Examples from actual cases are presented and the successes and limitations of this approach are explored. Suggestions to companies implementing security technology plans for various organizations (banks, stores, restaurants, etc.) will be made. Future directions for this work and new technologies are also discussed.
2007-02-28
Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex Medium Response, International Journal of Imaging Systems and...1767-1782, 2006. 31. Z. Mu, R. Plemmons, and P. Santago. Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex...rigorous mathematical and computational research on inverse problems in optical imaging of direct interest to the Army and also the intelligence agencies
Detection of correlated fragments in a sequence of images by superimposed Fourier holograms
NASA Astrophysics Data System (ADS)
Pavlov, A. V.
2016-08-01
The problem of detecting correlated fragments in a sequence of images recorded by superimposing holograms within the Fourier holography scheme, with angular multiplication of a spatially modulated reference beam, is considered. The approach to the solution of this problem is based on the properties of the variance of the image sum. It is shown that this problem can be solved by providing a constant distance between the signal and reference images when recording superimposed holograms, together with partial mutual correlation of the reference images. The detection efficiency is analysed in terms of the estimated image data capacity, the degree of mutual correlation of the reference images, and the hologram recording conditions. The results of a numerical experiment under the most complicated conditions (representation of images by realisations of homogeneous random fields) confirm the theoretical conclusions.
Model based approach to UXO imaging using the time domain electromagnetic method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lavely, E.M.
1999-04-01
Time domain electromagnetic (TDEM) sensors have emerged as a field-worthy technology for UXO detection in a variety of geological and environmental settings. This success has been achieved with commercial equipment that was not optimized for UXO detection and discrimination. The TDEM response displays a rich spatial and temporal behavior which is not currently utilized. Therefore, in this paper the author describes a research program for enhancing the effectiveness of the TDEM method for UXO detection and imaging. Fundamental research is required in at least three major areas: (a) model based imaging capability i.e. the forward and inverse problem, (b) detector modeling and instrument design, and (c) target recognition and discrimination algorithms. These research problems are coupled and demand a unified treatment. For example: (1) the inverse solution depends on solution of the forward problem and knowledge of the instrument response; (2) instrument design with improved diagnostic power requires forward and inverse modeling capability; and (3) improved target recognition algorithms (such as neural nets) must be trained with data collected from the new instrument and with synthetic data computed using the forward model. Further, the design of the appropriate input and output layers of the net will be informed by the results of the forward and inverse modeling. A more fully developed model of the TDEM response would enable the joint inversion of data collected from multiple sensors (e.g., TDEM sensors and magnetometers). Finally, the author suggests that a complementary approach to joint inversions is the statistical recombination of data using principal component analysis. The decomposition into principal components is useful since the first principal component contains those features that are most strongly correlated from image to image.
Li, Xingyu; Plataniotis, Konstantinos N
2015-07-01
In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging conditions. Method: Different from existing normalization methods that either address a partial cause of color variation or lump all causes together, our method identifies causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining by an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method from the aspects of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness in terms of histological information preservation. As the saturation-weighted statistics proposed in this study generate stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods, as the proposed method is the only approach that succeeds in preserving histological information after normalization. The proposed color normalization solution would be useful to mitigate effects of color variation in pathology images on subsequent quantitative analysis.
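The paper's saturation-weighted statistics are not reproduced in the abstract; as a much simpler stand-in, the idea of an illuminant-normalization step can be sketched with the classic gray-world assumption (a hypothetical substitute, not the authors' module):

```python
import numpy as np

def gray_world_normalize(img):
    """Simple illuminant normalization under the gray-world assumption:
    scale each RGB channel so its mean matches the global mean,
    removing a global color cast. A stand-in for a proper
    illuminant-normalization module, not the paper's method."""
    img = img.astype(float)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    global_mean = channel_means.mean()
    return img * (global_mean / channel_means)

# Toy image with a yellowish cast (strong R and G, weak B).
img = np.ones((4, 4, 3)) * np.array([0.8, 0.8, 0.4])
balanced = gray_world_normalize(img)
# after normalization all three channel means are equal
```

Real stain normalization additionally separates the dye contributions (the spectral module in the paper); this sketch only illustrates the illuminant half of the problem.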
Performance evaluation of infrared imaging system in field test
NASA Astrophysics Data System (ADS)
Wang, Chensheng; Guo, Xiaodong; Ren, Tingting; Zhang, Zhi-jie
2014-11-01
Infrared imaging systems have been applied widely in both military and civilian fields. Since infrared imagers come in various types with different parameters, there is great demand from both system manufacturers and customers for evaluating the performance of IR imaging systems with a standard tool or platform. Since the first-generation IR imager was developed, the standard method to assess performance has been the MRTD or related improved methods, which are not perfectly suited to current linear scanning imagers or 2D staring imagers based on FPA detectors. To address this problem, this paper describes an evaluation method based on the triangular orientation discrimination (TOD) metric, which is considered an effective and emerging method to evaluate the overall performance of EO systems. To realize the evaluation in field tests, an experimental instrument was developed. Considering the importance of the operational environment, the field test was carried out in a practical atmospheric environment. The tested imagers include a panoramic imaging system and staring imaging systems with different optics and detector parameters (both cooled and uncooled). After describing the instrument and experiment setup, the experimental results are shown, and the target range performance is analyzed and discussed. In the data analysis part, the article gives the range prediction values obtained from the TOD method, the MRTD method and practical experiments, and presents the analysis and discussion of the results. The experimental results prove the effectiveness of this evaluation tool, and it can be taken as a platform to give a uniform performance prediction reference.
Designing a stable feedback control system for blind image deconvolution.
Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan
2018-05-01
Blind image deconvolution is one of the main low-level vision problems with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to the undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to keep the image restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors, so the kernel estimation used for image restoration becomes more precise. Experimental results show that our system is effective on image propagation and can perform favorably against the state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.
A novel content-based active contour model for brain tumor segmentation.
Sachdeva, Jainy; Kumar, Vinod; Gupta, Indra; Khandelwal, Niranjan; Ahuja, Chirag Kamal
2012-06-01
Brain tumor segmentation is a crucial step in surgical and treatment planning. Intensity-based active contour models such as gradient vector flow (GVF), magnetostatic active contour (MAC) and fluid vector flow (FVF) have been proposed to segment homogeneous objects/tumors in medical images. In this study, extensive experiments are done to analyze the performance of intensity-based techniques for homogeneous tumors on brain magnetic resonance (MR) images. The analysis shows that the state-of-the-art methods fail to segment homogeneous tumors against a similar background or when these tumors show partial diversity toward the background. They also have a preconvergence problem in the case of false edges/saddle points. Moreover, the presence of weak edges and diffused edges (due to edema around the tumor) leads to oversegmentation by intensity-based techniques. Therefore, the proposed content-based active contour (CBAC) method uses both intensity and texture information present within the active contour to overcome the above-stated problems, capturing a large range in an image. It also proposes a novel use of the Gray-Level Co-occurrence Matrix to define a texture space for tumor segmentation. The effectiveness of this method is tested on two different real data sets (55 patients - more than 600 images) containing five different types of homogeneous, heterogeneous and diffused tumors, and on synthetic images (non-MR benchmark images). Remarkable results are obtained in segmenting homogeneous tumors of uniform intensity, complex-content heterogeneous and diffused tumors on MR images (T1-weighted, postcontrast T1-weighted and T2-weighted) and synthetic images (non-MR benchmark images of varying intensity, texture, noise content and false edges). Further, tumor volume is efficiently extracted from 2-dimensional slices and is termed 2.5-dimensional segmentation. Copyright © 2012 Elsevier Inc. All rights reserved.
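The Gray-Level Co-occurrence Matrix that CBAC uses to define its texture space is a standard construction: for a chosen pixel displacement, count how often each pair of gray levels co-occurs. A minimal sketch (toy parameters, not the paper's implementation) is:

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one displacement (dx, dy):
    P[i, j] counts how often gray level i at (y, x) is paired with
    gray level j at (y + dy, x + dx)."""
    P = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P

# 2-level toy patch: uniform regions put mass on the diagonal of P,
# i.e. horizontally neighbouring pixels tend to share the same level.
patch = np.array([[0, 0, 0],
                  [0, 0, 0],
                  [1, 1, 1]])
P = glcm(patch, levels=2)
```

Texture features (contrast, homogeneity, energy, etc.) are then computed from the normalized P; libraries such as scikit-image provide an optimized equivalent.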
Yao, Tao; Yin, Shi-Min; Xiangli, Bin; Lü, Qun-Bo
2010-06-01
Based on an in-depth analysis of the relative radiation scaling theorem and acquired scaling data for pixel response nonuniformity correction of the CCD (charge-coupled device) in a spaceborne visible interferential imaging spectrometer, a pixel response nonuniformity correction method for CCDs adapted to visible and infrared interferential imaging spectrometer systems was developed, which effectively resolved the engineering problem of nonuniformity correction in detector arrays for interferential imaging spectrometer systems. The quantitative impact of CCD nonuniformity on interferogram correction and recovered spectrum accuracy is also given. Furthermore, an improved method, with calibration and nonuniformity correction done after the instrument is assembled, is proposed. The method saves time and manpower; it can correct nonuniformity caused by factors in the spectrometer system other than the CCD's own nonuniformity, can acquire recalibration data when the working environment changes, and can more effectively improve the nonuniformity calibration accuracy of the interferential imaging spectrometer system.
End-to-end imaging information rate advantages of various alternative communication systems
NASA Technical Reports Server (NTRS)
Rice, R. F.
1982-01-01
The efficiency of various deep space communication systems which are required to transmit both imaging and a typically error sensitive class of data called general science and engineering (gse) are compared. The approach jointly treats the imaging and gse transmission problems, allowing comparisons of systems which include various channel coding and data compression alternatives. Actual system comparisons include an advanced imaging communication system (AICS) which exhibits the rather significant advantages of sophisticated data compression coupled with powerful yet practical channel coding. For example, under certain conditions the improved AICS efficiency could provide as much as two orders of magnitude increase in imaging information rate compared to a single channel uncoded, uncompressed system while maintaining the same gse data rate in both systems. Additional details describing AICS compression and coding concepts as well as efforts to apply them are provided in support of the system analysis.
Efficient robust reconstruction of dynamic PET activity maps with radioisotope decay constraints.
Gao, Fei; Liu, Huafeng; Shi, Pengcheng
2010-01-01
Dynamic PET imaging performs a sequence of data acquisitions in order to provide visualization and quantification of physiological changes in specific tissues and organs. The reconstruction of activity maps is generally the first step in dynamic PET. State-space H∞ approaches have proved to be a robust method for PET image reconstruction; however, temporal constraints are not considered during the reconstruction process. In addition, state-space strategies for PET image reconstruction have been computationally prohibitive for practical usage because of the need for matrix inversion. In this paper, we present a minimax formulation of the dynamic PET imaging problem in which a radioisotope decay model is employed as a physics-based temporal constraint on the photon counts. Furthermore, a robust steady-state H∞ filter is developed to significantly improve the computational efficiency with minimal loss of accuracy. Experiments are conducted on Monte Carlo simulated image sequences for quantitative analysis and validation.
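The decay constraint rests on the standard mono-exponential decay law A(t) = A0·exp(-λt) with λ = ln 2 / T½. A minimal sketch, assuming an F-18 tracer (half-life ≈ 109.77 min, an illustrative choice, not stated in the abstract), of how expected activity links successive frames:

```python
import math

def expected_activity(a0, t_minutes, half_life_min=109.77):
    """Mono-exponential radioactive decay: A(t) = A0 * exp(-lambda * t)
    with lambda = ln 2 / T_half. Serves as a temporal consistency
    constraint between dynamic PET frames (F-18 half-life assumed)."""
    lam = math.log(2) / half_life_min
    return a0 * math.exp(-lam * t_minutes)

# After exactly one half-life, activity drops to 50% of its initial value.
a = expected_activity(1000.0, 109.77)
```

In the paper's formulation this relation constrains the photon counts inside the minimax reconstruction rather than being applied as a post-hoc check.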
A novel underwater dam crack detection and classification approach based on sonar images
Shi, Pengfei; Fan, Xinnan; Ni, Jianjun; Khan, Zubair; Li, Min
2017-01-01
Underwater dam crack detection and classification based on sonar images is a challenging task because underwater environments are complex and because cracks are quite random and diverse in nature. Furthermore, obtainable sonar images are of low resolution. To address these problems, a novel underwater dam crack detection and classification approach based on sonar imagery is proposed. First, the sonar images are divided into image blocks. Second, a clustering analysis of a 3-D feature space is used to obtain the crack fragments. Third, the crack fragments are connected using an improved tensor voting method. Fourth, a minimum spanning tree is used to obtain the crack curve. Finally, an improved evidence theory combined with fuzzy rule reasoning is proposed to classify the cracks. Experimental results show that the proposed approach is able to detect underwater dam cracks and classify them accurately and effectively under complex underwater environments. PMID:28640925
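The fourth step, linking crack fragments into a curve via a minimum spanning tree, can be sketched with Prim's algorithm over fragment centroids (a generic sketch; the paper's feature space and tensor-voting stage are not reproduced):

```python
import numpy as np

def mst_edges(points):
    """Prim's algorithm over crack-fragment centroids: returns the
    minimum-spanning-tree edges used to link fragments into a curve."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j in in_tree:
                    continue
                d = float(np.linalg.norm(points[i] - points[j]))
                if best is None or d < best[0]:
                    best = (d, i, j)
        edges.append(best[1:])
        in_tree.add(best[2])
    return edges

# Four fragment centroids lying roughly on a line: the MST links each
# fragment to its nearest neighbour, tracing the crack curve in order.
pts = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.0], [3.1, 0.2]])
edges = mst_edges(pts)
```

This O(n²) version is fine for a handful of fragments; for many fragments a heap-based Prim or Kruskal with a union-find would be the idiomatic choice.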
Images of war: using satellite images for human rights monitoring in Turkish Kurdistan.
de Vos, Hugo; Jongerden, Joost; van Etten, Jacob
2008-09-01
In areas of war and armed conflict it is difficult to get trustworthy and coherent information. Civil society and human rights groups often face problems of dealing with fragmented witness reports, disinformation of war propaganda, and difficult direct access to these areas. Turkish Kurdistan was used as a case study of armed conflict to evaluate the potential use of satellite images for verification of witness reports collected by human rights groups. The Turkish army was reported to be burning forests, fields and villages as a strategy in the conflict against guerrilla uprising. This paper concludes that satellite images are useful to validate witness reports of forest fires. Even though the use of this technology for human rights groups will depend on some feasibility factors such as prices, access and expertise, the images proved to be key for analysis of spatial aspects of conflict and valuable for reconstructing a more trustworthy picture.
A Nonlinear Diffusion Equation-Based Model for Ultrasound Speckle Noise Removal
NASA Astrophysics Data System (ADS)
Zhou, Zhenyu; Guo, Zhichang; Zhang, Dazhi; Wu, Boying
2018-04-01
Ultrasound images are contaminated by speckle noise, which brings difficulties in further image analysis and clinical diagnosis. In this paper, we address this problem in the view of nonlinear diffusion equation theories. We develop a nonlinear diffusion equation-based model by taking into account not only the gradient information of the image, but also the information of the gray levels of the image. By utilizing the region indicator as the variable exponent, we can adaptively control the diffusion type which alternates between the Perona-Malik diffusion and the Charbonnier diffusion according to the image gray levels. Furthermore, we analyze the proposed model with respect to the theoretical and numerical properties. Experiments show that the proposed method achieves much better speckle suppression and edge preservation when compared with the traditional despeckling methods, especially in the low gray level and low-contrast regions.
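The two diffusivities the model alternates between are standard: Perona-Malik, g(s) = 1/(1 + (s/k)²), and Charbonnier, g(s) = 1/√(1 + (s/k)²). Both can be written as one family with a variable exponent, which is the mechanism the abstract describes (the region indicator that selects the exponent per pixel is not reproduced here):

```python
import numpy as np

def diffusivity(grad_mag, alpha, k=1.0):
    """Variable-exponent diffusivity
        g(|grad u|) = (1 + (|grad u| / k)**2) ** (-alpha / 2).
    alpha = 2 recovers the Perona-Malik diffusivity, alpha = 1 the
    Charbonnier diffusivity; a region indicator would pick alpha per
    pixel from the local gray level (sketch, not the paper's model)."""
    s = (grad_mag / k) ** 2
    return 1.0 / (1.0 + s) ** (alpha / 2.0)

grads = np.array([0.0, 1.0, 3.0])       # flat region, moderate edge, strong edge
g_pm = diffusivity(grads, alpha=2)      # Perona-Malik
g_ch = diffusivity(grads, alpha=1)      # Charbonnier
# both equal 1 in flat regions; Perona-Malik decays faster at edges,
# so it smooths less across strong gradients
```

Interpolating alpha between 1 and 2 via the gray-level-based region indicator then blends the two diffusion behaviours adaptively.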
Fractional domain varying-order differential denoising method
NASA Astrophysics Data System (ADS)
Zhang, Yan-Shan; Zhang, Feng; Li, Bing-Zhao; Tao, Ran
2014-10-01
Removal of noise is an important step in the image restoration process, and it remains a challenging problem in image processing. Denoising is a process used to remove the noise from the corrupted image, while retaining the edges and other detailed features as much as possible. Recently, denoising in the fractional domain has become a hot research topic. The fractional-order anisotropic diffusion method can bring a less blocky effect and preserve edges in image denoising, a method that has received much interest in the literature. Based on this method, we propose a new method for image denoising, in which a fractional-varying-order differential, rather than a constant-order differential, is used. The theoretical analysis and experimental results show that compared with the state-of-the-art fractional-order anisotropic diffusion method, the proposed fractional-varying-order differential denoising model can preserve structure and texture well while quickly removing noise, and yields good visual effects and a better peak signal-to-noise ratio.
NASA Technical Reports Server (NTRS)
Johnson, Jeffrey R.
2006-01-01
This viewgraph presentation reviews the problems that non-mission researchers have in accessing data for their analysis of Mars. The increasing complexity of Mars datasets results in custom software developed by instrument teams often being the only means to visualize and analyze the data. The proposed solutions are to continue efforts toward synergizing data from multiple missions and making the data, software, and derived products available in standardized, easily accessible formats; to encourage release of "lite" versions of mission-related software prior to end-of-mission; and to process planetary image data systematically in a coordinated way and make it available in an easily accessed form. The recommendations of the Mars Environmental GIS Workshop are reviewed.
Evaluation of nucleus segmentation in digital pathology images through large scale image synthesis
NASA Astrophysics Data System (ADS)
Zhou, Naiyun; Yu, Xiaxia; Zhao, Tianhao; Wen, Si; Wang, Fusheng; Zhu, Wei; Kurc, Tahsin; Tannenbaum, Allen; Saltz, Joel; Gao, Yi
2017-03-01
Digital histopathology images of more than a gigapixel are drawing more and more attention in the clinical, biomedical research, and computer vision fields. Among the many observable features spanning multiple scales in pathology images, nuclear morphology is one of the central criteria for diagnosis and grading, and as a result it is also the most studied target in image computing. A large number of research papers have been devoted to the problem of extracting nuclei from digital pathology images, which is the foundation of any further correlation study. However, the validation and evaluation of nucleus extraction have not yet been formulated rigorously and systematically. Some studies report a human-verified segmentation with thousands of nuclei, whereas a single whole-slide image may contain up to a million. The main obstacle lies in the difficulty of obtaining such a large number of validated nuclei, which is essentially an impossible task for a pathologist. We propose a systematic validation and evaluation approach based on large-scale image synthesis. This could facilitate a more quantitatively validated study for the current and future histopathology image analysis field.
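The abstract does not specify the synthesis or scoring details; the sketch below illustrates the general idea under stated assumptions: generate a synthetic ground-truth mask of circular "nuclei" (a stand-in for the paper's realistic synthesis pipeline) and score a candidate segmentation against it with the Dice overlap, a common segmentation metric. The helper names and the circle model are illustrative, not the authors'.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def synthetic_nuclei(shape=(128, 128), n=20, radius=5, seed=0):
    """Toy ground truth: n random circular 'nuclei' in a binary mask."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    mask = np.zeros(shape, dtype=bool)
    for _ in range(n):
        cy, cx = rng.integers(radius, shape[0] - radius, 2)
        mask |= (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return mask
```

With synthesized images, the ground truth is known exactly for every nucleus, so an extraction algorithm can be scored at whole-slide scale without manual annotation.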
Racial and sex differences in "images of the future".
Torrance, E P; Allen, W R
1980-02-01
Scenarios of future careers were written by 454 senior high school students in a southeastern high school. Random samples of 40 black females, 40 black males, 40 white males, and 40 white females were scored for eight characteristics, and means were compared through analysis of variance. Only one sex difference was found: girls rated higher than boys on perception of self as changed in the future. The black students projected greater career satisfaction for the future, but the white students wrote longer scenarios and projected greater perceptions of changes in the world/mankind, greater awareness of future problems, more proposals of solutions to future problems, and stronger perceptions of self as a creative problem solver. There were no differences in commitments to making a better world or solving future problems.
Alphan, Hakan
2013-03-01
The aim of this study is (1) to quantify landscape changes in the easternmost Mediterranean deltas using a bi-temporal binary change detection approach and (2) to analyze relationships between conservation/management designations and various categories of change that indicate the type, degree and severity of human impact. For this purpose, image differencing and ratioing were applied to Landsat TM images of 1984 and 2006. A total of 136 candidate change images, including normalized difference vegetation index (NDVI) and principal component analysis (PCA) difference images, were tested to assess the performance of bi-temporal pre-classification analysis procedures in the Mediterranean delta ecosystems. Results showed that visible-band image algebra provided higher accuracies than NDVI and PCA differencing did. On the other hand, Band 5 differencing had one of the lowest change detection performances. Seven superclasses of change were identified using from/to change categories between the earlier and later dates. These classes were used to understand the spatial character of anthropogenic impacts in the study area and to derive qualitative and quantitative change information within and outside the conservation/management areas. Change analysis indicated that natural site and wildlife reserve designations fell short of protecting sand dunes from agricultural expansion in the west. The east of the study area, however, experienced the least human impact because its nature conservation status kept human interference at a minimum. Implications of these changes were discussed and solutions were proposed to deal with management problems leading to environmental change.
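The abstract does not give the thresholding details of its differencing procedures; the sketch below illustrates the general bi-temporal NDVI-differencing idea under stated assumptions: compute NDVI for each date, difference the two, and flag pixels whose change exceeds k standard deviations (a common, but here assumed, thresholding rule). The function names and the value of k are illustrative.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    # clip the denominator to avoid division by zero on dark pixels
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def ndvi_change_mask(red_t1, nir_t1, red_t2, nir_t2, k=2.0):
    """Binary change mask from bi-temporal NDVI differencing.

    A pixel is flagged as 'changed' when its NDVI difference deviates
    from the scene mean by more than k standard deviations.
    """
    d = ndvi(red_t2, nir_t2) - ndvi(red_t1, nir_t1)
    return np.abs(d - d.mean()) > k * d.std()
```

Band-algebra differencing of visible bands, which the study found more accurate here, follows the same pattern with the raw band values in place of NDVI.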