Science.gov

Sample records for state image analysis

  1. DPABI: Data Processing & Analysis for (Resting-State) Brain Imaging.

    PubMed

    Yan, Chao-Gan; Wang, Xin-Di; Zuo, Xi-Nian; Zang, Yu-Feng

    2016-07-01

    Brain imaging efforts are increasingly devoted to decoding the functioning of the human brain. Among neuroimaging techniques, resting-state fMRI (R-fMRI) is currently expanding exponentially. Beyond the general neuroimaging analysis packages (e.g., SPM, AFNI and FSL), REST and DPARSF were developed to meet the increasing need for user-friendly toolboxes for R-fMRI data processing. To address recently identified methodological challenges of R-fMRI, we introduce the newly developed toolbox, DPABI, which evolved from REST and DPARSF. DPABI incorporates recent research advances on head motion control and measurement standardization, thus allowing users to evaluate results using stringent control strategies. DPABI also emphasizes test-retest reliability and quality control of data processing. Furthermore, DPABI provides a user-friendly pipeline analysis toolkit for rat/monkey R-fMRI data analysis to reflect the rapid advances in animal imaging. In addition, DPABI includes preprocessing modules for task-based fMRI, voxel-based morphometry analysis, statistical analysis and results viewing. DPABI is designed to make data analysis require fewer manual operations, be less time-consuming, demand less skill, carry a smaller risk of inadvertent mistakes, and be more comparable across studies. We anticipate this open-source toolbox will assist novices and expert users alike and will continue to support the advancement of R-fMRI methodology and its application to clinical translational studies. PMID:27075850

  2. Cytological image analysis with a genetic fuzzy finite state machine.

    PubMed

    Estévez, J; Alayón, S; Moreno, L; Sigut, J; Aguilar, R

    2005-12-01

    The objective of this research is to design a pattern recognition system based on a Fuzzy Finite State Machine (FFSM), with the optimal FFSM found by Genetic Algorithms (GA). To validate the system, the classifier was applied to a real problem: distinguishing between normal and abnormal cells in cytological breast fine needle aspirate images and cytological peritoneal fluid images. The feature used to discriminate between normal and abnormal cells is a texture measurement of the chromatin distribution in cellular nuclei. Furthermore, the effectiveness of this method as a pattern classifier is compared with other existing supervised and unsupervised methods and evaluated with Receiver Operating Characteristic (ROC) methodology. PMID:16520142
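The ROC methodology used above for evaluating the cell classifier can be sketched in a few lines. The scores and labels below are illustrative toy data, not values from the paper:

```python
def roc_curve(scores, labels):
    """Compute ROC points (FPR, TPR) by sweeping a threshold over the scores.

    labels: 1 = abnormal cell, 0 = normal cell (illustrative convention).
    """
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _score, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# Toy chromatin-texture scores for 4 abnormal (1) and 4 normal (0) nuclei.
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
print(auc(roc_curve(scores, labels)))  # -> 0.9375
```

An AUC of 1.0 would mean the texture score separates abnormal from normal nuclei perfectly; 0.5 would mean chance performance.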

  3. Sea state monitoring over Socotra Rock (Ieodo) by dual polarization SAR image analysis

    NASA Astrophysics Data System (ADS)

    Choi, Y.; Kim, J.; Yun, H.; yun, H.

    2013-12-01

    SAR has been applied to sea state monitoring in a large number of fields, such as vessel tracing using wakes in SAR amplitude, measurement of sea wave height, and oil spill detection. The true merit of SAR for sea state monitoring is its full independence from climate conditions; it is therefore highly useful for securing the safety of anthropogenic activities in the ocean and for understanding the marine environment. In particular, the dual and full polarization modes of new L-band and X-band SAR, such as the Advanced Land Observing Satellite (ALOS) Phased Array type L-band Synthetic Aperture Radar (PALSAR) Fine Beam Dual polarization (FBD) and Polarimetry (PLR) modes and the TerraSAR-X polarization modes, provide innovative means to extract sea state information by exploiting the different amplitude and phase-angle responses of electromagnetic and sea wave interactions. A sample project for mining the maximum possible sea state information from ALOS PALSAR FBD SAR/InSAR pairs, compared with in situ observations of sea state, is therefore being conducted. A test site was established over Socotra Rock (Ieodo in Korean), located in the Western Sea of Korea. The first aim was the measurement of sea waves using ALOS PALSAR multi-polarization images and their Doppler-shift analysis. Together with sea state monitoring, auxiliary data analyses that combine the sea state outputs with other in-orbit sensing images and non-image information, to trace the influence of sea states on the marine environment, are actively under way. For instance, MERIS chlorophyll-a products are under investigation to identify their correlation with sea state. However, a significant obstacle to applying a SAR interpretation scheme for mining sea state is the temporal gap between SAR image acquisitions, in spite of the improved revisit times of contemporary in-orbit SAR sensors. To tackle this problem, we are also introducing the multi view angle optical sensor

  4. Comparative study of state-of-the-art algorithms for hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Rivera-Borrero, Carlos; Hunt, Shawn D.

    2007-04-01

    This work studies the end-to-end performance of hyperspectral classification and unmixing systems. Specifically, it compares widely used current state-of-the-art algorithms with those developed at the University of Puerto Rico. These include algorithms for image enhancement, band subset selection, feature extraction, supervised and unsupervised classification, and constrained and unconstrained abundance estimation. The end-to-end performance for different combinations of algorithms is evaluated. The classification algorithms are compared in terms of percent correct classification. This method, however, cannot be applied to abundance estimation, as the binary evaluation used for supervised and unsupervised classification is not directly applicable to unmixing performance analysis. A procedure to evaluate unmixing performance is described in this paper and tested using coregistered data acquired by various sensors at different spatial resolutions. Performance results are generally specific to the image used. In an effort to generalize the results, a formal description of the complexity of the images used for the evaluations is required. Techniques for image complexity analysis currently available for automatic target recognizers are included and adapted to quantify the performance of the classifiers for different image classes.

  5. Analysis of a multiple reception model for processing images from the solid-state imaging camera

    NASA Technical Reports Server (NTRS)

    Yan, T.-Y.

    1991-01-01

    A detection model to identify the presence of Galileo optical communications from an Earth-based Transmitter (GOPEX) signal by processing multiple signal receptions extracted from the camera images is described. The model decomposes a multi-signal reception camera image into a set of images so that the location of the pixel being illuminated is known a priori and the laser can illuminate only one pixel at each reception instance. Numerical results show that if effects on the pointing error due to atmospheric refraction can be controlled to between 20 and 30 microrad, the beam divergence of the GOPEX laser should be adjusted to between 30 and 40 microrad when the spacecraft is 30 million km away from Earth. Furthermore, increasing the number of receptions for processing beyond 5 will not produce a significant detection probability advantage.
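The diminishing return beyond 5 receptions is what one would expect from a simple "at least one detection" model. The sketch below assumes independent receptions with a fixed per-reception detection probability; this is my illustrative assumption, not the paper's actual detection model:

```python
def cumulative_detection_probability(p_single, n):
    """P(signal detected in at least one of n independent receptions),
    assuming each reception detects with probability p_single."""
    return 1.0 - (1.0 - p_single) ** n

# With an illustrative p_single = 0.6, the marginal gain shrinks quickly:
for n in range(1, 8):
    print(n, round(cumulative_detection_probability(0.6, n), 4))
```

For these numbers, the cumulative probability already exceeds 0.98 at n = 5, and each further reception adds less than one percentage point, consistent with the abstract's observation.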

  6. Cramer-Rao analysis of steady-state and time-domain fluorescence diffuse optical imaging

    PubMed Central

    Boffety, M.; Allain, M.; Sentenac, A.; Massonneau, M.; Carminati, R.

    2011-01-01

    Using a Cramer-Rao analysis, we study the theoretical performance of a time- and spatially resolved fDOT imaging system for jointly estimating the position and the concentration of a point-like fluorescent volume in a diffusive sample. We show that the fluorescence lifetime is a critical parameter for the precision of the technique. A time-resolved fDOT system that does not use spatial information is also considered. In certain cases, a simple steady-state configuration may be as efficient as this time-resolved fDOT system. PMID:21698024

  7. [Impact analysis of atmospheric state for target detection in hyperspectral radiance image].

    PubMed

    Zhang, Bing; Sha, Jian-jun; Wang, Xiang-wei; Gao, Lian-ru

    2012-08-01

    Target detection based on hyperspectral radiance images can improve data processing efficiency to meet the requirements of real-time processing. However, the spectral radiance acquired by the remote sensor will be affected by the atmosphere. In the present paper, the hyperspectral imaging process is simulated to analyze the effects of changes in atmospheric state on target detection in hyperspectral radiance images. The results show that hyperspectral radiance images can be directly used for target detection: different atmospheric states have little impact on RXD detection, whereas MF detection depends on the accuracy of the input spectrum, and good results are obtained by the MF detector only when the atmospheric state underlying the radiance spectrum of the target to be detected is similar to that of the simulated hyperspectral image. PMID:23156749

  8. Multi-State Transition Kinetics of Intracellular Signaling Molecules by Single-Molecule Imaging Analysis.

    PubMed

    Matsuoka, Satomi; Miyanaga, Yukihiro; Ueda, Masahiro

    2016-01-01

    The chemotactic signaling of eukaryotic cells is based on a chain of interactions between signaling molecules diffusing on the cell membrane and those shuttling between the membrane and cytoplasm. In this chapter, we describe methods to quantify lateral diffusion and reaction kinetics on the cell membrane. By direct visualization and statistical analyses of molecular Brownian movement achieved by single-molecule imaging techniques, multiple states of membrane-bound molecules are successfully revealed together with their state transition kinetics. Using PTEN, a phosphatidylinositol-3,4,5-trisphosphate (PI(3,4,5)P3) 3'-phosphatase, in Dictyostelium discoideum undergoing chemotaxis as a model, each step of the analysis is described in detail. The identified multiple-state kinetics provides an essential clue to elucidating the molecular mechanism of the chemoattractant-induced dynamic redistribution of the signaling molecule asymmetrically on the cell membrane. Quantitative parameters for molecular reactions and diffusion complement a conventional view of the chemotactic signaling system, where changing a static network of molecules connected by causal relationships into a spatiotemporally dynamic one permits a mathematical description of the stochastic migration of the cell along a shallow chemoattractant gradient. PMID:27271914

  9. State-of-the-art in retinal optical coherence tomography image analysis

    PubMed Central

    Yu, Zeyun; D’Souza, Roshan M.

    2015-01-01

    Optical coherence tomography (OCT) is an emerging imaging modality that has been widely used in the field of biomedical imaging. In the recent past, it has found uses as a diagnostic tool in dermatology, cardiology, and ophthalmology. In this paper we focus on its applications in the field of ophthalmology and retinal imaging. OCT is able to non-invasively produce cross-sectional volumetric images of the tissues which can be used for analysis of tissue structure and properties. Due to the underlying physics, OCT images suffer from a granular pattern, called speckle noise, which restricts the process of interpretation. This requires specialized noise reduction techniques to eliminate the noise while preserving image details. Another major step in OCT image analysis involves the use of segmentation techniques for distinguishing between different structures, especially in retinal OCT volumes. The outcome of this step is usually thickness maps of different retinal layers which are very useful in study of normal/diseased subjects. Lastly, movements of the tissue under imaging as well as the progression of disease in the tissue affect the quality and the proper interpretation of the acquired images which require the use of different image registration techniques. This paper reviews various techniques that are currently used to process raw image data into a form that can be clearly interpreted by clinicians. PMID:26435924
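The speckle-reduction step described above is often prototyped with a simple rank filter before the specialized techniques the review covers are applied. A minimal pure-Python median filter, one common baseline (not a method from the review itself), looks like this:

```python
import statistics

def median_filter(img, k=3):
    """k x k median filter, a simple baseline for suppressing speckle-like
    outliers in a gray-level image (list of lists); borders are handled by
    clamping coordinates to the image edge."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [
                img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in range(-r, r + 1)
                for dx in range(-r, r + 1)
            ]
            out[y][x] = statistics.median(window)
    return out

# A single bright speckle in a flat region is removed; the region is preserved.
noisy = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
print(median_filter(noisy))  # -> [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
```

Unlike linear smoothing, the median replaces the outlier without blurring the surrounding intensities, which is why rank filters are a common first pass on speckled OCT data.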

  10. State-of-the-art in retinal optical coherence tomography image analysis.

    PubMed

    Baghaie, Ahmadreza; Yu, Zeyun; D'Souza, Roshan M

    2015-08-01

    Optical coherence tomography (OCT) is an emerging imaging modality that has been widely used in the field of biomedical imaging. In the recent past, it has found uses as a diagnostic tool in dermatology, cardiology, and ophthalmology. In this paper we focus on its applications in the field of ophthalmology and retinal imaging. OCT is able to non-invasively produce cross-sectional volumetric images of the tissues which can be used for analysis of tissue structure and properties. Due to the underlying physics, OCT images suffer from a granular pattern, called speckle noise, which restricts the process of interpretation. This requires specialized noise reduction techniques to eliminate the noise while preserving image details. Another major step in OCT image analysis involves the use of segmentation techniques for distinguishing between different structures, especially in retinal OCT volumes. The outcome of this step is usually thickness maps of different retinal layers which are very useful in study of normal/diseased subjects. Lastly, movements of the tissue under imaging as well as the progression of disease in the tissue affect the quality and the proper interpretation of the acquired images which require the use of different image registration techniques. This paper reviews various techniques that are currently used to process raw image data into a form that can be clearly interpreted by clinicians. PMID:26435924

  11. Image processing in remote sensing data analysis - The state of the art

    NASA Technical Reports Server (NTRS)

    Rosenfeld, A.

    1983-01-01

    Image analysis techniques applicable to remote sensing data and covering image models, feature detection, segmentation and classification, texture analysis, and matching are studied. Model types for characterizing images examined include random-field, mosaic, and facet models. Edge and corner detection as well as global extraction of linear features are discussed. Pixel clustering and classification are covered in addition to the regional approach to segmentation. Autocorrelation, second-order gray level probability density, and the use of primitive element statistics are discussed in relation to texture analysis. Finally, reducing the cost of (sub)imaging matching methods (e.g., pixelwise comparison of gray levels and normalized cross-correlation between two images) as well as improving match sharpness is considered.
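The normalized cross-correlation matching mentioned above can be sketched for two small gray-level patches; this is a generic textbook formulation, not code from the survey:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-size gray-level patches.

    Returns a value in [-1, 1]; 1 means a perfect match up to a linear
    brightness change, -1 a perfectly inverted match.
    """
    flat_a = [v for row in a for v in row]
    flat_b = [v for row in b for v in row]
    mean_a = sum(flat_a) / len(flat_a)
    mean_b = sum(flat_b) / len(flat_b)
    da = [v - mean_a for v in flat_a]
    db = [v - mean_b for v in flat_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

patch = [[10, 20], [30, 40]]
brighter = [[60, 70], [80, 90]]   # same pattern, offset gray levels
print(ncc(patch, brighter))       # -> 1.0: NCC ignores the brightness offset
```

This offset invariance is exactly why NCC is preferred over pixelwise gray-level comparison when the two images differ in illumination; the survey's point about cost is that computing this score at every candidate offset is expensive.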

  12. A finite state model for respiratory motion analysis in image guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Wu, Huanmei; Sharp, Gregory C.; Salzberg, Betty; Kaeli, David; Shirato, Hiroki; Jiang, Steve B.

    2004-12-01

    Effective image guided radiation treatment of a moving tumour requires adequate information on respiratory motion characteristics. For margin expansion, beam tracking and respiratory gating, the tumour motion must be quantified for pretreatment planning and monitored on-line. We propose a finite state model for respiratory motion analysis that captures our natural understanding of breathing stages. In this model, a regular breathing cycle is represented by three line segments, exhale, end-of-exhale and inhale, while abnormal breathing is represented by an irregular breathing state. In addition, we describe an on-line implementation of this model in one dimension. We found that this model can accurately characterize a wide variety of patient breathing patterns. The model was used to describe the respiratory motion for 23 patients with peak-to-peak motion greater than 7 mm. The average root mean square error over all patients was less than 1 mm, and no patient had an error worse than 1.5 mm. Our model provides a convenient tool to quantify respiratory motion characteristics, such as patterns of frequency changes and amplitude changes, and can be applied to internal or external motion, including internal tumour position, abdominal surface, diaphragm, spirometry and other surrogates.
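The three regular-breathing segments of the model can be sketched as a slope-based labeller for a 1-D position trace. The slope threshold and sampling interval below are illustrative, and the irregular-breathing state of the full model is omitted for brevity:

```python
def classify_breathing(positions, dt=0.1, flat_tol=0.5):
    """Label each step of a 1-D tumour-position trace (mm) with one of the
    finite state model's regular-breathing segments:
    EXHALE (falling), EOE (near-zero slope, end-of-exhale), INHALE (rising).
    dt (s) and flat_tol (mm/s) are illustrative, not the paper's values.
    """
    states = []
    for prev, cur in zip(positions, positions[1:]):
        slope = (cur - prev) / dt
        if abs(slope) <= flat_tol:
            states.append("EOE")
        elif slope < 0:
            states.append("EXHALE")
        else:
            states.append("INHALE")
    return states

trace = [8.0, 6.0, 4.0, 3.98, 4.0, 6.0, 8.0]   # one regular cycle, in mm
print(classify_breathing(trace))
# -> ['EXHALE', 'EXHALE', 'EOE', 'EOE', 'INHALE', 'INHALE']
```

An on-line implementation such as the paper describes would additionally smooth the trace and fall back to an irregular state when the segment fit error grows too large.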

  13. Fluorescence, XPS, and TOF-SIMS surface chemical state image analysis of DNA microarrays.

    PubMed

    Lee, Chi-Ying; Harbers, Gregory M; Grainger, David W; Gamble, Lara J; Castner, David G

    2007-08-01

    Performance improvements in DNA-modified surfaces required for microarray and biosensor applications rely on improved capabilities to accurately characterize the chemistry and structure of immobilized DNA molecules on micropatterned surfaces. Recent innovations in imaging X-ray photoelectron spectroscopy (XPS) and time-of-flight secondary ion mass spectrometry (TOF-SIMS) now permit more detailed studies of micropatterned surfaces. We have exploited the complementary information provided by imaging XPS and imaging TOF-SIMS to detail the chemical composition, spatial distribution, and hybridization efficiency of amine-terminated single-stranded DNA (ssDNA) bound to commercial polyacrylamide-based, amine-reactive microarray slides, immobilized in both macrospot and microarray diagnostic formats. Combinations of XPS imaging and small spot analysis were used to identify micropatterned DNA spots within printed DNA arrays on slide surfaces and quantify DNA elements within individual microarray spots for determination of probe immobilization and hybridization efficiencies. This represents the first report of imaging XPS of DNA immobilization and hybridization efficiencies for arrays fabricated on commercial microarray slides. Imaging TOF-SIMS provided distinct analytical data on the lateral distribution of DNA within single array microspots before and after target hybridization. Principal component analysis (PCA) applied to TOF-SIMS imaging datasets demonstrated that the combination of these two techniques provides information not readily observable in TOF-SIMS images alone, particularly in identifying species associated with array spot nonuniformities (e.g., "halo" or "donut" effects often observed in fluorescence images). Chemically specific spot images were compared to conventional fluorescence scanned images in microarrays to provide new information on spot-to-spot DNA variations that affect current diagnostic reliability, assay variance, and sensitivity. PMID:17625851

  14. The state of the art in the analysis of two-dimensional gel electrophoresis images

    PubMed Central

    Berth, Matthias; Moser, Frank Michael; Kolbe, Markus

    2007-01-01

    Software-based image analysis is a crucial step in the biological interpretation of two-dimensional gel electrophoresis experiments. Recent significant advances in image processing methods combined with powerful computing hardware have enabled the routine analysis of large experiments. We cover the process starting with the imaging of 2-D gels, quantitation of spots, creation of expression profiles to statistical expression analysis followed by the presentation of results. Challenges for analysis software as well as good practices are highlighted. We emphasize image warping and related methods that are able to overcome the difficulties that are due to varying migration positions of spots between gels. Spot detection, quantitation, normalization, and the creation of expression profiles are described in detail. The recent development of consensus spot patterns and complete expression profiles enables one to take full advantage of statistical methods for expression analysis that are well established for the analysis of DNA microarray experiments. We close with an overview of visualization and presentation methods (proteome maps) and current challenges in the field. PMID:17713763

  15. Automatic image analysis methods for the determination of stereological parameters - application to the analysis of densification during solid state sintering of WC-Co compacts

    PubMed

    Missiaen; Roure

    2000-08-01

    Automatic image analysis methods used to determine microstructural parameters of sintered materials are presented. The estimation of stereological parameters at interfaces, when the system contains more than two phases, is detailed in particular. It is shown that the specific surface areas and mean curvatures of the various interfaces can be estimated in the numerical space of the images. The methods are applied to the analysis of densification during solid state sintering of WC-Co compacts, and the microstructural evolution is commented on. The application of microstructural measurements to the analysis of densification kinetics is also discussed. PMID:10947907

  16. Retinal Imaging and Image Analysis

    PubMed Central

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:21743764

  17. Picosecond Imaging Circuit Analysis

    NASA Astrophysics Data System (ADS)

    Kash, Jeffrey A.

    1998-03-01

    With ever-increasing complexity, probing the internal operation of a silicon IC becomes more challenging, and present methods of internal probing are becoming obsolete. We have discovered that a very weak picosecond pulse of light is emitted by each FET in a CMOS circuit whenever the circuit changes logic state. This pulsed emission can be simultaneously imaged and time resolved, using a technique we have named Picosecond Imaging Circuit Analysis (PICA). With a suitable imaging detector, PICA allows time-resolved measurement on thousands of devices simultaneously. Computer videos made from measurements on real ICs will be shown. These videos, along with a more quantitative evaluation of the light emission, permit the complete operation of an IC to be measured in a non-invasive way with picosecond time resolution.

  18. Technique based on LED multispectral imaging and multivariate analysis for monitoring the conservation state of the Dead Sea Scrolls.

    PubMed

    Marengo, Emilio; Manfredi, Marcello; Zerbinati, Orfeo; Robotti, Elisa; Mazzucco, Eleonora; Gosetti, Fabio; Bearman, Greg; France, Fenella; Shor, Pnina

    2011-09-01

    The aim of this project is the development of a noninvasive technique based on LED multispectral imaging (MSI) for monitoring the conservation state of the Dead Sea Scrolls (DSS) collection. It is well-known that changes in the parchment reflectance drive the transition of the scrolls from legible to illegible. Capitalizing on this fact, we will use spectral imaging to detect changes in the reflectance before they become visible to the human eye. The technique uses multivariate analysis and statistical process control theory. The present study was carried out on a "sample" parchment of calfskin. The monitoring of the surface of a commercial modern parchment, aged consecutively for 2 h and 6 h at 80 °C and 50% relative humidity (ASTM), was performed at the Imaging Lab of the Library of Congress (Washington, DC, U.S.A.). MSI is here carried out in the vis-NIR range limited to 1 μm, with 13 bands whose bandwidths range from about 10 nm in the UV to 40 nm in the IR. Results showed that we could detect and locate changing pixels, on the basis of reflectance changes, after only a few "hours" of aging. PMID:21777009

  19. Histopathological Image Analysis: A Review

    PubMed Central

    Gurcan, Metin N.; Boucheron, Laura; Can, Ali; Madabhushi, Anant; Rajpoot, Nasir; Yener, Bulent

    2010-01-01

    Over the past decade, dramatic increases in computational power and improvement in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has now become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology related problems being pursued in the United States and Europe. PMID:20671804

  20. Image-analysis library

    NASA Technical Reports Server (NTRS)

    1980-01-01

    MATHPAC image-analysis library is collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. MATHPAC library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.

  1. Comparison of Shielding Effect of Steady State Coherent Synchrotron Radiation Using the Image Charge Method and Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Li, R.; Bohn, C. L.; Bisognano, J. J.

    1997-05-01

    There have been several studies on the shielding effect of the coherent synchrotron radiation (CSR) emitted by a Gaussian bunch on a circular orbit in the center plane between two parallel plates ([1] J. S. Nodvick and D. S. Saxon, Phys. Rev. 96, 180 (1954); [2] S. A. Kheifets and B. Zotter, CERN SL-95-92 (AP), 1995; [3] B. Murphy, S. Krinsky, and R. L. Gluckstern, Phys. Rev. E 35, 2584 (1996)). A functional dependence on the beam and machine parameters was given in [2] using asymptotic analysis in the frequency domain. It indicates that the shielded CSR mainly arises from harmonics n_th < n < n_c, where the threshold harmonic is n_th = √(2/3)(πρ/h)^(3/2) and the harmonic cutoff is n_c = ρ/σ_s (ρ: bend radius, h: spacing between plates, σ_s: bunch rms length). In this paper, we extend the frequency domain analysis to the parameter regime n_th > n_c, and the result is compared with the steady-state CSR power obtained using the image charge method.
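The two harmonic numbers quoted in the abstract are simple closed forms and easy to evaluate. The parameter values below are illustrative round numbers of my choosing, not taken from the paper:

```python
import math

def threshold_harmonic(rho, h):
    """Shielding threshold harmonic: n_th = sqrt(2/3) * (pi * rho / h)**(3/2),
    with rho the bend radius and h the spacing between the parallel plates."""
    return math.sqrt(2.0 / 3.0) * (math.pi * rho / h) ** 1.5

def cutoff_harmonic(rho, sigma_s):
    """Bunch-length cutoff harmonic: n_c = rho / sigma_s,
    with sigma_s the rms bunch length."""
    return rho / sigma_s

# Illustrative: 1 m bend radius, 10 cm plate spacing, 1 mm rms bunch length.
rho, h, sigma_s = 1.0, 0.1, 0.001
n_th = threshold_harmonic(rho, h)    # ~144
n_c = cutoff_harmonic(rho, sigma_s)  # 1000
print(n_th < n_c)  # True: the shielded-CSR regime n_th < n < n_c applies
```

Shrinking the plate spacing h raises n_th as h^(-3/2), so for a sufficiently small gap n_th exceeds n_c, which is exactly the regime the paper extends the frequency-domain analysis to.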

  2. Electronic image analysis

    NASA Astrophysics Data System (ADS)

    Gahm, J.; Grosskopf, R.; Jaeger, H.; Trautwein, F.

    1980-12-01

    An electronic system for image analysis was developed on the basis of low and medium cost integrated circuits. The printed circuit boards were designed, using the principles of modern digital electronics and data processing. The system consists of modules for automatic, semiautomatic and visual image analysis. They can be used for microscopical and macroscopical observations. Photographs can be evaluated, too. The automatic version is controlled by software modules adapted to various applications. The result is a system for image analysis suitable for many different measurement problems. The features contained in large image areas can be measured. For automatic routine analysis controlled by processing calculators the necessary software and hardware modules are available.

  3. Basics of image analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral imaging technology has emerged as a powerful tool for quality and safety inspection of food and agricultural products and in precision agriculture over the past decade. Image analysis is a critical step in implementing hyperspectral imaging technology; it is aimed to improve the qualit...

  4. Image Analysis in Surgical Pathology.

    PubMed

    Lloyd, Mark C; Monaco, James P; Bui, Marilyn M

    2016-06-01

    Digitization of glass slides of surgical pathology samples facilitates a number of value-added capabilities beyond what a pathologist could previously do with a microscope. Image analysis is one of the most fundamental opportunities to leverage the advantages that digital pathology provides. The ability to quantify aspects of a digital image is an extraordinary opportunity to collect data with exquisite accuracy and reliability. In this review, we describe the history of image analysis in pathology and the present state of technology processes as well as examples of research and clinical use. PMID:27241112

  5. Solid state Raman image amplification

    NASA Astrophysics Data System (ADS)

    Calmes, Lonnie K.; Murray, James T.; Austin, William L.; Powell, Richard C.

    1998-07-01

    Lite Cycles has developed a new type of eye-safe, range-gated, lidar sensing element based on Solid-state Raman Image Amplification (SSRIA) in a solid-state optical crystal. SSRIA can amplify low-level infrared images with gains greater than 10^6 with the addition of only quantum-limited noise. The high gains from SSRIA can compensate for low quantum efficiency detectors and can reduce the need for detector cooling. The range-gate of SSRIA is controlled by the pulsewidth of the pump laser and can be as short as 30-100 cm for nanosecond pulses and less than 5 mm if picosecond pulses are used. SSRIA results in higher SNR images throughout a broad range of incident light levels, in contrast to the increasing noise factor with reduced gain in image intensified CCDs. A theoretical framework for the optical resolution of SSRIA is presented and it is shown that SSRIA can produce higher resolution than ICCDs. SSRIA is also superior in rejecting unwanted sunlight background, further increasing image SNR, and can be used for real-time optical signal processing. Applications for military use include eye-safe imaging lidars that can be used for autonomous vehicle identification and targeting.

  6. The presupplementary area within the language network: a resting state functional magnetic resonance imaging functional connectivity analysis.

    PubMed

    Ter Minassian, Aram; Ricalens, Emmanuel; Nguyen The Tich, Sylvie; Dinomais, Mickaël; Aubé, Christophe; Beydon, Laurent

    2014-08-01

    The presupplementary motor area (pre-SMA) is involved in volitional selection. Despite the lateralization of the language network and the different functions of the two pre-SMAs, few studies have reported the lateralization of pre-SMA activity, and very little is known about the possible lateralization of pre-SMA connectivity. Via functional connectivity analysis, we sought to understand how the language network may be connected to other intrinsic connectivity networks (ICNs) through the pre-SMA. We performed a spatial independent component analysis of resting state functional magnetic resonance imaging in 30 volunteers to identify the language network. Subsequently, we applied seed-to-voxel functional connectivity analyses centered on peaks detected in the pre-SMA. Three signal peaks were detected in the pre-SMA. The left rostral pre-SMA intrinsic connectivity network (LR ICN) was left lateralized, in contrast to the bilateral ICNs associated with the right pre-SMA peaks. The LR ICN was anticorrelated with the dorsal attention network, and the right caudal pre-SMA ICN (RC ICN) was anticorrelated with the default mode network. These two ICNs overlapped minimally. In contrast, the right rostral ICN overlapped the LR ICN. Both right ICNs overlapped in the ventral attention network (vATT). The bilateral connectivity of the right rostral pre-SMA may allow right hemispheric recruitment to process semantic ambiguities. Overlap between the right pre-SMA ICNs in the vATT may contribute to reorientation from internal thought to the external environment. Distinct ICNs connected to areas involved in lexico-syntactic selection and phonology converge in the pre-SMA, which may constitute the resolution space of competing condition-action associations for speech production. PMID:24939724

  7. State Analysis Database Tool

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Bennett, Matthew

    2006-01-01

    The State Analysis Database Tool software establishes a productive environment for collaboration among software and system engineers engaged in the development of complex interacting systems. The tool embodies State Analysis, a model-based system engineering methodology founded on a state-based control architecture (see figure). A state represents a momentary condition of an evolving system, and a model may describe how a state evolves and is affected by other states. The State Analysis methodology is a process for capturing system and software requirements in the form of explicit models and states, and defining goal-based operational plans consistent with the models. Requirements, models, and operational concerns have traditionally been documented in a variety of system engineering artifacts that address different aspects of a mission's lifecycle. In State Analysis, requirements, models, and operations information are State Analysis artifacts that are kept consistent and stored in a State Analysis Database. The tool includes a back-end database, a multi-platform front-end client, and Web-based administrative functions. The tool is structured to prompt an engineer to follow the State Analysis methodology, to encourage state discovery and model description, and to make software requirements and operations plans consistent with model descriptions.

  8. Image Analysis of Foods.

    PubMed

    Russ, John C

    2015-09-01

    The structure of foods, both natural and processed, is controlled by many variables ranging from biology to chemistry and mechanical forces. The structure also controls many of the properties of the food, including consumer acceptance, taste, mouthfeel, appearance, and nutrition. Imaging provides an important tool for measuring the structure of foods. This includes 2-dimensional (2D) images of surfaces and sections, for example, viewed in a microscope, as well as 3-dimensional (3D) images of internal structure as may be produced by confocal microscopy, or computed tomography and magnetic resonance imaging. The use of images also guides robotics for harvesting and sorting. Processing of images may be needed to calibrate colors, reduce noise, enhance detail, and delineate structure and dimensions. Measurement of structural information such as volume fraction and internal surface areas, as well as the analysis of object size, location, and shape in both 2- and 3-dimensional images is illustrated and described, with primary references and examples from a wide range of applications. PMID:26270611

  9. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  10. Medical Image Analysis Facility

    NASA Technical Reports Server (NTRS)

    1978-01-01

    To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  11. Digital Image Analysis of Cereals

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Image analysis is the extraction of meaningful information from images, mainly digital images by means of digital processing techniques. The field was established in the 1950s and coincides with the advent of computer technology, as image analysis is profoundly reliant on computer processing. As t...

  12. Radar image analysis utilizing junctive image metamorphosis

    NASA Astrophysics Data System (ADS)

    Krueger, Peter G.; Gouge, Sally B.; Gouge, Jim O.

    1998-09-01

    A feasibility study was initiated to investigate the ability of algorithms developed for medical sonogram image analysis to be trained for extraction of cartographic information from synthetic aperture radar imagery. BioComputer Research Inc. has applied proprietary `junctive image metamorphosis' algorithms to cancer cell recognition and identification in ultrasound prostate images. These algorithms have been shown to support automatic radar image feature detection and identification. Training set images were used to develop determinants for representative point, line, and area features, which were used on test images to identify and localize the features of interest. The software is computationally conservative, operating on a PC platform in real time. The algorithms are robust and can be trained for feature recognition on any digital imagery, not just imagery formed from reflected energy, such as sonograms and radar images. Applications include land mass characterization, feature identification, target recognition, and change detection.

  13. Statistical image analysis of longitudinal RAVENS images

    PubMed Central

    Lee, Seonjoo; Zipunnikov, Vadim; Reich, Daniel S.; Pham, Dzung L.

    2015-01-01

    Regional analysis of volumes examined in normalized space (RAVENS) produces transformation-based images used in the study of brain morphometry. In this paper, RAVENS images are analyzed using a longitudinal variant of voxel-based morphometry (VBM) and longitudinal functional principal component analysis (LFPCA) for high-dimensional images. We demonstrate that the latter overcomes the limitations of standard longitudinal VBM analyses, which do not separate registration errors from other longitudinal changes and baseline patterns. This is especially important in contexts where longitudinal changes are only a small fraction of the overall observed variability, which is typical in normal aging and many chronic diseases. Our simulation study shows that LFPCA effectively separates registration error from baseline and longitudinal signals of interest by decomposing RAVENS images measured at multiple visits into three components: a subject-specific imaging random intercept that quantifies the cross-sectional variability, a subject-specific imaging slope that quantifies the irreversible changes over multiple visits, and a subject-visit specific imaging deviation. We describe strategies to identify baseline/longitudinal variation and registration errors combined with covariates of interest. Our analysis suggests that specific regional brain atrophy and ventricular enlargement are associated with multiple sclerosis (MS) disease progression. PMID:26539071
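
    The subject-specific intercept and slope components described above reduce, at each voxel, to a linear fit across a subject's visits. A minimal stdlib-only sketch of that per-subject fit (illustrative only; this is not the authors' LFPCA implementation, and the function name is an assumption):

```python
def fit_intercept_slope(times, values):
    """Ordinary least squares y = a + b*t for one subject's serial visits.

    Returns the subject-specific intercept a, slope b, and per-visit
    residuals (the analogue of the subject-visit specific deviation).
    """
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    stt = sum((t - mt) ** 2 for t in times)
    b = sum((t - mt) * (v - mv) for t, v in zip(times, values)) / stt
    a = mv - b * mt
    residuals = [v - (a + b * t) for t, v in zip(times, values)]
    return a, b, residuals
```

    Running this voxel-wise over all subjects yields the intercept and slope images whose population variability LFPCA then decomposes.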

  14. Solid state image sensing arrays

    NASA Technical Reports Server (NTRS)

    Sadasiv, G.

    1972-01-01

    The fabrication of a photodiode transistor image sensor array in silicon, and tests on individual elements of the array are described along with design for a scanning system for an image sensor array. The spectral response of p-n junctions was used as a technique for studying the optical-absorption edge in silicon. Heterojunction structures of Sb2S3-Si were fabricated and a system for measuring C-V curves on MOS structures was built.

  15. Component analysis of a new Solid State X-ray Image Intensifier (SSXII) using photon transfer and Instrumentation Noise Equivalent Exposure (INEE) measurements

    PubMed Central

    Kuhls-Gilcrist, Andrew; Bednarek, Daniel R.; Rudin, Stephen

    2009-01-01

    The SSXII is a novel x-ray imager designed to improve upon the performance limitations of conventional dynamic radiographic/fluoroscopic imagers related to resolution, charge-trapping, frame-rate, and instrumentation-noise. The SSXII consists of a CsI:Tl phosphor coupled via a fiber-optic taper (FOT) to an electron-multiplying CCD (EMCCD). To facilitate investigational studies, initial designs enable interchangeability of such imaging components. Measurements of various component and configuration characteristics enable an optimization analysis with respect to overall detector performance. Photon transfer was used to characterize the EMCCD performance including ADC sensitivity, read-noise, full-well capacity and quantum efficiency. X-ray sensitivity was measured using RQA x-ray spectra. Imaging components were analyzed in terms of their MTF and transmission efficiency. The EMCCD was measured to have a very low effective read-noise of less than 1 electron rms at modest EMCCD gains, which is more than two orders-of-magnitude less than flat panel (FPD) and CMOS-based detectors. The variable signal amplification from 1 to 2000 times enables selectable sensitivities ranging from 8.5 (168) to over 15k (300k) electrons per incident x-ray photon with (without) a 4:1 FOT; these sensitivities could be readily increased with further component optimization. MTF and DQE measurements indicate the SSXII performance is comparable to current state-of-the-art detectors at low spatial frequencies and far exceeds them at higher spatial frequencies. The instrumentation noise equivalent exposure (INEE) was measured to be less than 0.3 μR out to 10 cycles/mm, which is substantially better than FPDs. Component analysis suggests that these improvements can be substantially increased with further detector optimization. PMID:19763251
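
    The photon transfer method used here to characterize the EMCCD fits the variance-versus-mean curve of flat-field signals; under the standard shot-noise model, variance = mean/K + read_noise². A hedged stdlib-only sketch of that fit (illustrative; function and variable names are assumptions, not the authors' code):

```python
def photon_transfer_gain(means, variances):
    """Fit the photon transfer curve: variance = mean/K + read_noise_dn**2.

    means, variances: flat-field signal means (DN) and temporal variances (DN^2).
    Returns (K, read_noise_e): conversion gain in e-/DN and read noise in electrons.
    """
    n = len(means)
    mx = sum(means) / n
    my = sum(variances) / n
    sxx = sum((m - mx) ** 2 for m in means)
    sxy = sum((m - mx) * (v - my) for m, v in zip(means, variances))
    slope = sxy / sxx                 # = 1/K under the shot-noise model
    intercept = my - slope * mx       # = read-noise variance in DN^2
    K = 1.0 / slope
    read_noise_e = (intercept ** 0.5) * K if intercept > 0 else 0.0
    return K, read_noise_e
```

    Feeding in mean/variance pairs measured at increasing exposure levels recovers the conversion gain from the slope and the read noise from the intercept, which is how quantities such as the sub-electron read noise quoted above are derived.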

  16. Component analysis of a new solid state x-ray image intensifier (SSXII) using photon transfer and instrumentation noise equivalent exposure (INEE) measurements

    NASA Astrophysics Data System (ADS)

    Kuhls-Gilcrist, Andrew; Bednarek, Daniel R.; Rudin, Stephen

    2009-02-01

    The SSXII is a novel x-ray imager designed to improve upon the performance limitations of conventional dynamic radiographic/fluoroscopic imagers related to resolution, charge-trapping, frame-rate, and instrumentation-noise. The SSXII consists of a CsI:Tl phosphor coupled via a fiber-optic taper (FOT) to an electron-multiplying CCD (EMCCD). To facilitate investigational studies, initial designs enable interchangeability of such imaging components. Measurements of various component and configuration characteristics enable an optimization analysis with respect to overall detector performance. Photon transfer was used to characterize the EMCCD performance including ADC sensitivity, read-noise, full-well capacity and quantum efficiency. X-ray sensitivity was measured using RQA x-ray spectra. Imaging components were analyzed in terms of their MTF and transmission efficiency. The EMCCD was measured to have a very low effective read-noise of less than 1 electron rms at modest EMCCD gains, which is more than two orders-of-magnitude less than flat panel (FPD) and CMOS-based detectors. The variable signal amplification from 1 to 2000 times enables selectable sensitivities ranging from 8.5 (168) to over 15k (300k) electrons per incident x-ray photon with (without) a 4:1 FOT; these sensitivities could be readily increased with further component optimization. MTF and DQE measurements indicate the SSXII performance is comparable to current state-of-the-art detectors at low spatial frequencies and far exceeds them at higher spatial frequencies. The instrumentation noise equivalent exposure (INEE) was measured to be less than 0.3 μR out to 10 cycles/mm, which is substantially better than FPDs. Component analysis suggests that these improvements can be substantially increased with further detector optimization.

  17. Environmental, scanning electron and optical microscope image analysis software for determining volume and occupied area of solid-state fermentation fungal cultures.

    PubMed

    Osma, Johann F; Toca-Herrera, José L; Rodríguez-Couto, Susana

    2011-01-01

    Here we propose software for estimating the occupied area and volume of fungal cultures. This software was developed on a MATLAB platform and allows analysis of high-definition images from optical, electron or atomic force microscopes. In a first step, a single hypha grown on potato dextrose agar was monitored using optical microscopy to estimate the change in occupied area and volume. Weight measurements were carried out for comparison with the estimated volume, revealing a difference of less than 1.5%. Similarly, samples from two different solid-state fermentation cultures were analyzed using images from a scanning electron microscope (SEM) and an environmental SEM (ESEM). Occupied area and volume were calculated for both samples, and the results obtained were correlated with the dry weight of the cultures. The estimated volume ratio and the dry weight ratio of the two cultures differed by 10%. Therefore, this software is a promising non-invasive technique for determining fungal biomass in solid-state cultures. PMID:21154435

  18. Reflections on ultrasound image analysis.

    PubMed

    Alison Noble, J

    2016-10-01

    Ultrasound (US) image analysis has advanced considerably in twenty years. Progress in ultrasound image analysis has always been fundamental to the advancement of image-guided interventions research due to the real-time acquisition capability of ultrasound and this has remained true over the two decades. But in quantitative ultrasound image analysis - which takes US images and turns them into more meaningful clinical information - thinking has perhaps more fundamentally changed. From roots as a poor cousin to Computed Tomography (CT) and Magnetic Resonance (MR) image analysis, both of which have richer anatomical definition and thus were better suited to the earlier eras of medical image analysis which were dominated by model-based methods, ultrasound image analysis has now entered an exciting new era, assisted by advances in machine learning and the growing clinical and commercial interest in employing low-cost portable ultrasound devices outside traditional hospital-based clinical settings. This short article provides a perspective on this change, and highlights some challenges ahead and potential opportunities in ultrasound image analysis which may both have high impact on healthcare delivery worldwide in the future but may also, perhaps, take the subject further away from CT and MR image analysis research with time. PMID:27503078

  19. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.
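
    At their core, grain statistics of the kind described (crystal count and size from a segmented thin-section image) rest on labeling connected regions. A stdlib-only sketch of 4-connected labeling, offered as a toy stand-in for the FoveaPro/Photoshop tooling the handbook actually uses (names are illustrative):

```python
from collections import deque

def label_grains(mask):
    """4-connected component labeling of a binary mask (nested lists of 0/1).

    Returns (labels, areas): a label image (0 = background) and the pixel
    area of each grain, indexed by label - 1.
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    areas = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not labels[i][j]:
                lab = len(areas) + 1        # start a new grain
                labels[i][j] = lab
                area = 0
                queue = deque([(i, j)])
                while queue:                # breadth-first flood fill
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = lab
                            queue.append((ny, nx))
                areas.append(area)
    return labels, areas
```

    The per-grain pixel areas can then be converted to physical crystal sizes using the image scale, the first step toward the size and shape statistics the document describes.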

  20. Penn State's Visual Image User Study

    ERIC Educational Resources Information Center

    Pisciotta, Henry A.; Dooris, Michael J.; Frost, James; Halm, Michael

    2005-01-01

    The Visual Image User Study (VIUS), an extensive needs assessment project at Penn State University, describes academic users of pictures and their perceptions. These findings outline the potential market for digital images and list the likely determinates of whether or not a system will be used. They also explain some key user requirements for…

  1. Solid-state NMR imaging system

    DOEpatents

    Gopalsami, Nachappa; Dieckman, Stephen L.; Ellingson, William A.

    1992-01-01

    An apparatus for use with a solid-state NMR spectrometer includes a special imaging probe with linear, high-field strength gradient fields and high-power broadband RF coils using a back projection method for data acquisition and image reconstruction, and a real-time pulse programmer adaptable for use by a conventional computer for complex high speed pulse sequences.

  2. Solid-state NMR imaging system

    SciTech Connect

    Gopalsami, N.; Dieckman, S.L.; Ellingson, W.A.

    1990-01-01

    An accessory for use with a solid-state NMR spectrometer includes a special imaging probe with linear, high-field strength gradient fields and high-power broadband RF coils using a back projection method for data acquisition and image reconstruction, and a real-time pulse programmer adaptable for use by a conventional computer for complex high speed pulse sequences.

  3. The Galileo Solid-State Imaging experiment

    USGS Publications Warehouse

    Belton, M.J.S.; Klaasen, K.P.; Clary, M.C.; Anderson, J.L.; Anger, C.D.; Carr, M.H.; Chapman, C.R.; Davies, M.E.; Greeley, R.; Anderson, D.; Bolef, L.K.; Townsend, T.E.; Greenberg, R.; Head, J. W., III; Neukum, G.; Pilcher, C.B.; Veverka, J.; Gierasch, P.J.; Fanale, F.P.; Ingersoll, A.P.; Masursky, H.; Morrison, D.; Pollack, James B.

    1992-01-01

    . The dynamic range is spread over 3 gain states and an exposure range from 4.17 ms to 51.2 s. A low-level of radial, third-order, geometric distortion has been measured in the raw images that is entirely due to the optical design. The distortion is of the pincushion type and amounts to about 1.2 pixels in the corners of the images. It is expected to be very stable. We discuss the measurement objectives of the SSI experiment in the Jupiter system and emphasize their relationships to those of other experiments in the Galileo project. We outline objectives for Jupiter atmospheric science, noting the relationship of SSI data to that to be returned by experiments on the atmospheric entry Probe. We also outline SSI objectives for satellite surfaces, ring structure, and 'darkside' (e.g., aurorae, lightning, etc.) experiments. Proposed cruise measurement objectives that relate to encounters at Venus, Moon, Earth, Gaspra, and, possibly, Ida are also briefly outlined. The article concludes with a description of a 'fully distributed' data analysis system (HIIPS) that SSI team members intend to use at their home institutions. We also list the nature of systematic data products that will become available to the scientific community. Finally, we append a short 'historical' note outlining the responsibilities and roles of institutions and individuals that have been involved in the 14 year development of the SSI experiment so far. © 1992 Kluwer Academic Publishers.

  4. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.

  5. Oncological image analysis: medical and molecular image analysis

    NASA Astrophysics Data System (ADS)

    Brady, Michael

    2007-03-01

    This paper summarises the work we have been doing on joint projects with GE Healthcare on colorectal and liver cancer, and with Siemens Molecular Imaging on dynamic PET. First, we recall the salient facts about cancer and oncological image analysis. Then we introduce some of the work that we have done on analysing clinical MRI images of colorectal and liver cancer, specifically the detection of lymph nodes and segmentation of the circumferential resection margin. In the second part of the paper, we shift attention to the complementary aspect of molecular image analysis, illustrating our approach with some recent work on: tumour acidosis, tumour hypoxia, and multiply drug resistant tumours.

  6. Children's Precocious Anticipatory Images of End States.

    ERIC Educational Resources Information Center

    Dean, Anne L.; Deist, Steven

    1980-01-01

    The processes by which children construct images of anticipated end states of a transposition movement were examined on two tasks. Results support Piaget's (1977) hypothesis that reasoning on the basis of state correspondence defines a developmental level which precedes the development of transformational thought. (Author/MP)

  7. Hyperspectral image analysis. A tutorial.

    PubMed

    Amigo, José Manuel; Babamoradi, Hamid; Elcoroaristizabal, Saioa

    2015-10-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics such as hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing are presented, and guidelines are given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper focuses on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using near-infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case. PMID:26481986
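
    Before a multivariate method such as PLS-DA can be applied, the hyperspectral cube (rows × columns × wavelengths) is conventionally unfolded into a two-dimensional pixels × bands matrix and mean-centered. A minimal stdlib-only sketch of that pre-processing step (illustrative; not the tutorial's own code, and the function names are assumptions):

```python
def unfold_cube(cube):
    """(rows x cols x bands) nested lists -> (rows*cols, bands) matrix,
    one spectrum per pixel, in row-major order."""
    return [spectrum for row in cube for spectrum in row]

def mean_center(X):
    """Subtract the per-band mean, as is standard before PLS/PCA modeling.
    Returns the centered matrix and the band means (needed to center new data)."""
    n, nb = len(X), len(X[0])
    means = [sum(x[b] for x in X) / n for b in range(nb)]
    return [[x[b] - means[b] for b in range(nb)] for x in X], means
```

    After classification, the predicted class per row is refolded back to the original row/column grid to produce a classification map.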

  8. Solid-state Raman image amplification

    NASA Astrophysics Data System (ADS)

    Calmes, Lonnie Kirkland

    Amplification of low-light-level optical images is important for extending the range of lidar systems that image and detect objects in the atmosphere and underwater. The use of range-gating to produce images of particular range bins is also important in minimizing the image degradation due to light that is scattered backward from aerosols, smoke, or water along the imaging path. For practical lidar systems that must be operated within sight of unprotected observers, eye safety is of the utmost importance. This dissertation describes a new type of eye-safe, range-gated lidar sensing element based on Solid-state Raman Image Amplification (SSRIA) in a solid-state optical crystal. SSRIA can amplify low-level images in the eye-safe infrared at 1.556 μm with gains up to 10^6, with the addition of only quantum-limited noise. The high gains from SSRIA can compensate for low quantum efficiency detectors and can reduce the need for detector cooling. The range-gate of SSRIA is controlled by the pulsewidth of the pump laser and can be as short as 30-100 cm, using pump pulses of 2-6.7 nsec FWHM. A rate-equation theoretical model is derived to help in the design of short-pulsed Raman lasers. A theoretical model for the quantum noise properties of SSRIA is presented. SSRIA results in higher-SNR images throughout a broad range of incident light levels, in contrast to the increasing noise factor with reduced gain in image-intensified CCDs (ICCDs). A theoretical framework for the optical resolution of SSRIA is presented and it is shown that SSRIA can produce higher resolution than ICCDs. SSRIA is also superior in rejecting unwanted sunlight background, further increasing image SNR. Lastly, SSRIA can be combined with optical pre-filtering to perform optical image processing functions such as high-pass filtering and automatic target detection/recognition. The application of this technology to underwater imaging, called Marine Raman Image Amplification (MARIA) is also discussed. MARIA

  9. Quantitative analysis of an enlarged area Solid State X-ray Image Intensifier (SSXII) detector based on Electron Multiplying Charge Coupled Device (EMCCD) technology

    PubMed Central

    Swetadri, Vasan S.N.; Sharma, P.; Singh, V.; Jain, A.; Ionita, Ciprian N.; Titus, A.H.; Cartwright, A.N.; Bednarek, D.R.; Rudin, S.

    2013-01-01

    Present-day treatment for neurovascular pathological conditions involves the use of devices with very small features such as stents, coils, and balloons; hence, these interventional procedures demand high resolution x-ray imaging under fluoroscopic conditions to provide the capability to guide the deployment of these fine endovascular devices. To address this issue, a high resolution x-ray detector based on EMCCD technology is being developed. The EMCCD field-of-view is enlarged using a fiber-optic taper so that the detector features an effective pixel size of 37 µm giving it a Nyquist frequency of 13.5 lp/mm, which is significantly higher than that of the state of the art Flat Panel Detectors (FPDs). Quantitative analysis of the detector, including gain calibration, instrumentation noise equivalent exposure (INEE) and modulation transfer function (MTF) determination, are presented in this work. The gain of the detector is a function of the detector temperature; with the detector cooled to 5° C, the highest relative gain that could be achieved was calculated to be 116 times. At this gain setting, the lowest INEE was measured to be 0.6 µR/frame. The MTF, measured using the edge method, was over 2% up to 7 cycles/mm. To evaluate the performance of the detector under clinical conditions, an aneurysm model was placed over an anthropomorphic head phantom and a coil was guided into the aneurysm under fluoroscopic guidance using the detector. Image sequences from the procedure are presented demonstrating the high resolution of this SSXII. PMID:24353386

  10. A Robust Actin Filaments Image Analysis Framework.

    PubMed

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-08-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least in some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) the input image is decomposed into a 'cartoon' part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the 'cartoon' image, we apply a multi-scale line detector coupled with a (iii) quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts grown in
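
    The line-detection step (ii) can be illustrated, in simplified single-scale form, by taking each pixel's best mean intensity along a short segment in a few orientations. This is a hedged sketch only, not the authors' implementation (which works at multiple scales and adds merging), and all names are illustrative:

```python
ORIENTATIONS = ((0, 1), (1, 0), (1, 1), (1, -1))  # E, S, SE, NE step vectors

def line_response(img, length=5):
    """Single-scale line detector: for each pixel, the maximum over four
    orientations of the mean intensity along a centered segment of
    `length` pixels (out-of-bounds samples are simply skipped)."""
    h, w = len(img), len(img[0])
    half = length // 2
    resp = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            best = 0.0
            for dy, dx in ORIENTATIONS:
                vals = [img[i + k * dy][j + k * dx]
                        for k in range(-half, half + 1)
                        if 0 <= i + k * dy < h and 0 <= j + k * dx < w]
                best = max(best, sum(vals) / len(vals))
            resp[i][j] = best
    return resp
```

    Pixels lying on a filament respond strongly in the orientation aligned with it, while isotropic noise is averaged down, which is why such detectors tolerate the blurring and artifacts the abstract describes.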

  11. A Robust Actin Filaments Image Analysis Framework

    PubMed Central

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-01-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least in some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) the input image is decomposed into a ‘cartoon’ part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the ‘cartoon’ image, we apply a multi-scale line detector coupled with a (iii) quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts

  12. Flightspeed Integral Image Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2009-01-01

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image," a caching method that permits fast summation of values within rectangular regions of an image and thereby facilitates a wide range of fast image-processing functions. This toolkit is applicable to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational restraints, and provides an order of magnitude speed increase over alternative software libraries currently in use by the research community. Commercially, FIIAT can support intelligent video cameras used in surveillance, and it is also useful for object recognition by robots or other autonomous vehicles.
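The integral-image trick at the core of FIIAT is easy to sketch. The library itself is C; this is an illustrative Python sketch of the data structure, not FIIAT's API:

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows then columns; ii[y, x] holds the
    sum of all pixels above and to the left of (y, x), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] from four lookups,
    independent of the rectangle's size."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()
```

Because every rectangular sum costs four array reads regardless of window size, box filters and texture statistics over sliding windows become constant-time per window, which is the speedup the abstract describes.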

  13. Confocal scanning laser microscopy with complementary 3D image analysis allows quantitative studies of functional state of ionoregulatory cells in the Nile tilapia (Oreochromis niloticus) following salinity challenge.

    PubMed

    Fridman, Sophie; Rana, Krishen J; Bron, James E

    2013-04-01

    The development of a novel three-dimensional image analysis technique for stacks generated by confocal laser scanning microscopy is described, allowing visualization of mitochondria-rich cells (MRCs) in the seawater-adapted Nile tilapia in relation to their spatial location. This method permits the assessment and classification of both active and nonactive MRCs based on the distance of the top of the immunopositive cell from the epithelial surface. In addition, this technique offers the potential for informative and quantitative studies, for example, densitometric and morphometric measurements based on MRC functional state. Confocal scanning laser microscopy combined with triple-staining whole-mount immunohistochemistry was used to detect integumental MRCs in the yolk-sac larval tail of the Nile tilapia following transfer from freshwater to elevated salinities, that is, 12.5 and 20 ppt. Mean active MRC volume was always significantly larger, and displayed a greater staining intensity (GLM; P<0.05), than that of nonactive MRCs. Following transfer, the percentage of active MRCs was seen to increase, as did MRC volume (GLM; P<0.05). PMID:23390074

  14. Design Criteria For Networked Image Analysis System

    NASA Astrophysics Data System (ADS)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

    Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special-purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance at low cost offered by advances in semiconductor technology. Another key issue is a maturing understanding of the problems and of the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with orientation toward the hospital environment. The three main areas are image data base management, viewing of image data and image data processing. This is followed by a survey of the current state of the art, covering image display systems, data base techniques, communications networks and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  15. Image potential states at metal-dielectric interfaces

    SciTech Connect

    Merry, W.R. Jr.

    1992-04-01

    Angle-resolved two-photon laser photoemission was used to observe the image potential electronic states on the (111) face of a silver single crystal. The transient image potential states were excited from the occupied bulk bands with photons whose energy was tunable around 4 eV. Photoemission of the image potential states was accomplished with photons of energy tunable around 2 eV. Image potential states were found to persist in the presence of physisorbed adlayers of xenon and cyclohexane. On clean Ag(111), the effective mass of the n=1 image potential state was found to be 1.4±0.1 times the mass of a free electron (m_e). A binding energy of 0.77 eV, measured by earlier workers, was assumed in analysis of the data for the clean surface. On Ag(111) at 75 K, covered by one monolayer of xenon, the binding energy of the n=1 image potential state was unchanged relative to its value on the clean surface. An effective mass of (1.00±0.05)·m_e was obtained. On Ag(111) at 167 K, covered by one monolayer of cyclohexane, the binding energy of the n=2 member of the image potential series was 0.30±0.05 eV. The energy of the n=1 state was again unchanged by deposition of the adsorbate. The effective masses of both states were (0.90±0.1)·m_e.
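As background (a textbook relation, not a claim from this record), image-potential binding energies follow a hydrogen-like series with quantum defect a, the prefactor being one sixteenth of the Rydberg energy:

```latex
E_n \;=\; -\,\frac{13.6\ \text{eV}}{16\,(n+a)^2} \;\approx\; -\,\frac{0.85\ \text{eV}}{(n+a)^2},
\qquad n = 1, 2, 3, \dots
```

The 0.77 eV binding energy assumed above for n=1 on clean Ag(111) corresponds to a small quantum defect, a ≈ 0.05, in this model.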

  17. Technique for improving solid state mosaic images

    NASA Technical Reports Server (NTRS)

    Saboe, J. M.

    1969-01-01

    Method identifies and corrects mosaic image faults in solid state visual displays and opto-electronic presentation systems. Composite video signals containing faults due to defective sensing elements are corrected by a memory unit that contains the stored fault pattern and supplies the appropriate fault word to the blanking circuit.

  18. Digital imaging with solid state x-ray image intensifiers

    NASA Astrophysics Data System (ADS)

    Damento, Michael A.; Radspinner, Rachel; Roehrig, Hans

    1999-10-01

    X-ray cameras in which a CCD is lens coupled to a large phosphor screen are known to suffer from a loss of x-ray signal due to poor light collection from conventional phosphors, making them unsuitable for most medical imaging applications. By replacing the standard phosphor with a solid-state image intensifier, it may be possible to improve the signal-to-noise ratio of the images produced with these cameras. The solid-state x-ray image intensifier is a multi-layer device in which a photoconductor layer controls the light output from an electroluminescent phosphor layer. While prototype devices have been used for direct viewing and video imaging, they are only now being evaluated in a digital imaging system. In the present work, the preparation and evaluation of intensifiers with a 65 mm square format are described. The intensifiers are prepared by screen-printing or doctor-blading the following layers onto an ITO-coated glass substrate: ZnS phosphor, opaque layer, CdS photoconductor, and carbon conductor. The total thickness of the layers is approximately 350 micrometers; 350 VAC at 400 Hz is applied to the device for operation. For a given x-ray dose, the intensifiers produce up to three times the intensity (after background subtraction) of Lanex Fast Front screens. X-ray images produced with the present intensifiers are somewhat noisy and their resolution is about half that of Lanex screens. Modifications are suggested which could improve the resolution and noise of the intensifiers.

  19. Mapping Image Potential States on Graphene Quantum Dots

    NASA Astrophysics Data System (ADS)

    Craes, Fabian; Runte, Sven; Klinkhammer, Jürgen; Kralj, Marko; Michely, Thomas; Busse, Carsten

    2013-08-01

    Free-electron-like image potential states are observed in scanning tunneling spectroscopy on graphene quantum dots on Ir(111) acting as potential wells. The spectrum strongly depends on the size of the nanostructure as well as on the spatial position on top, indicating lateral confinement. Analysis of the substructure of the first state by the spatial mapping of the constant energy local density of states reveals characteristic patterns of confined states. The most pronounced state is not the ground state, but an excited state with a favorable combination of the local density of states and parallel momentum transfer in the tunneling process. Chemical gating tunes the confining potential by changing the local work function. Our experimental determination of this work function allows us to deduce the associated shift of the Dirac point.

  20. Mapping image potential states on graphene quantum dots.

    PubMed

    Craes, Fabian; Runte, Sven; Klinkhammer, Jürgen; Kralj, Marko; Michely, Thomas; Busse, Carsten

    2013-08-01

    Free-electron-like image potential states are observed in scanning tunneling spectroscopy on graphene quantum dots on Ir(111) acting as potential wells. The spectrum strongly depends on the size of the nanostructure as well as on the spatial position on top, indicating lateral confinement. Analysis of the substructure of the first state by the spatial mapping of the constant energy local density of states reveals characteristic patterns of confined states. The most pronounced state is not the ground state, but an excited state with a favorable combination of the local density of states and parallel momentum transfer in the tunneling process. Chemical gating tunes the confining potential by changing the local work function. Our experimental determination of this work function allows us to deduce the associated shift of the Dirac point. PMID:23952430
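The size dependence reported in these two records is what lateral confinement predicts. As an illustrative textbook estimate (not the authors' analysis), a free-electron-like state of effective mass m* confined to a two-dimensional box of side L has energies

```latex
E_{n_x, n_y} \;=\; \frac{\hbar^2 \pi^2}{2\,m^{*} L^2}\,\bigl(n_x^2 + n_y^2\bigr),
\qquad n_x, n_y = 1, 2, \dots
```

so the level spacing grows as the quantum dot shrinks, consistent with the strong size dependence of the measured spectra.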

  1. Image analysis for DNA sequencing

    NASA Astrophysics Data System (ADS)

    Palaniappan, Kannappan; Huang, Thomas S.

    1991-07-01

    There is a great deal of interest in automating the process of DNA (deoxyribonucleic acid) sequencing to support the analysis of genomic DNA, such as the Human and Mouse Genome projects. In one class of gel-based sequencing protocols, autoradiograph images are generated in the final step and usually require manual interpretation to reconstruct the DNA sequence represented by the image. The need to handle a large volume of sequence information necessitates automation of the manual autoradiograph reading step through image analysis, in order to reduce the time required to obtain sequence data and to reduce transcription errors. Various adaptive image enhancement, segmentation and alignment methods were applied to autoradiograph images. The methods are adaptive to the local characteristics of the image, such as noise, background signal, or the presence of edges. Once the two-dimensional data are converted to a set of aligned one-dimensional profiles, waveform analysis is used to determine the location of each band, which represents one nucleotide in the sequence. Different classification strategies, including a rule-based approach, are investigated to map the profile signals, augmented with the original two-dimensional image data as necessary, to textual DNA sequence information.
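The band-location step on an aligned one-dimensional profile amounts to peak picking. A minimal numpy sketch (threshold and names are illustrative, not the paper's algorithm):

```python
import numpy as np

def find_bands(profile, threshold):
    """Return indices of local maxima above threshold in a 1-D
    lane profile; each maximum is a candidate sequencing band."""
    p = np.asarray(profile, dtype=float)
    interior = np.arange(1, len(p) - 1)
    is_peak = (p[interior] > p[interior - 1]) & (p[interior] >= p[interior + 1])
    return interior[is_peak & (p[interior] > threshold)]

# Synthetic lane profile with bands at positions 3 and 8.
profile = [0, 1, 5, 9, 4, 1, 2, 6, 10, 5, 1, 0]
print(find_bands(profile, threshold=3))  # -> [3 8]
```

A real pipeline would additionally subtract the background signal and smooth the profile before peak picking, as the abstract's adaptive enhancement steps suggest.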

  2. Anmap: Image and data analysis

    NASA Astrophysics Data System (ADS)

    Alexander, Paul; Waldram, Elizabeth; Titterington, David; Rees, Nick

    2014-11-01

    Anmap analyses and processes images and spectral data. Originally written for use in radio astronomy, much of its functionality is applicable to other disciplines; additional algorithms and analysis procedures allow direct use in, for example, NMR imaging and spectroscopy. Anmap emphasizes the analysis of data to extract quantitative results for comparison with theoretical models and/or other experimental data. To achieve this, Anmap provides a wide range of tools for analysis, fitting and modelling (including standard image and data processing algorithms). It also provides a powerful environment for users to develop their own analysis/processing tools either by combining existing algorithms and facilities with the very powerful command (scripting) language or by writing new routines in FORTRAN that integrate seamlessly with the rest of Anmap.

  3. Image analysis and quantitative morphology.

    PubMed

    Mandarim-de-Lacerda, Carlos Alberto; Fernandes-Santos, Caroline; Aguila, Marcia Barbosa

    2010-01-01

    Quantitative studies are increasingly found in the literature, particularly in the fields of development/evolution, pathology, and neurosciences. Image digitalization converts tissue images into a numeric form by dividing them into very small regions termed picture elements or pixels. Image analysis allows automatic morphometry of digitalized images, and stereology aims to understand the structural inner three-dimensional arrangement based on the analysis of slices showing two-dimensional information. To quantify morphological structures in an unbiased and reproducible manner, appropriate isotropic and uniform random sampling of sections, and updated stereological tools are needed. Through the correct use of stereology, a quantitative study can be performed with little effort; efficiency in stereology means as little counting as possible (little work), low cost (section preparation), but still good accuracy. This short text provides a background guide for non-expert morphologists. PMID:19960334
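As a concrete instance of the counting principle described above, the simplest morphometric estimate, the area fraction of a stained phase, reduces to point counting on a regular pixel grid (a generic sketch; names and the thresholding rule are ours, not from any stereology package):

```python
import numpy as np

def area_fraction(grey, threshold, step=4):
    """Stereological point count: overlay a regular grid of test
    points (every `step` pixels) and report the fraction of points
    that hit the phase of interest (grey > threshold)."""
    hits = grey[::step, ::step] > threshold
    return hits.mean()

img = np.zeros((64, 64))
img[:, :32] = 255          # left half 'stained'
assert area_fraction(img, threshold=128) == 0.5
```

With uniform random placement of the grid, the hit fraction is an unbiased estimator of the area fraction, which is the sense in which "as little counting as possible" still gives good accuracy.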

  4. Multispectral Imaging Broadens Cellular Analysis

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Amnis Corporation, a Seattle-based biotechnology company, developed ImageStream to produce sensitive fluorescence images of cells in flow. The company responded to an SBIR solicitation from Ames Research Center, and proposed to evaluate several methods of extending the depth of field for its ImageStream system and implement the best as an upgrade to its commercial products. This would allow users to view whole cells at the same time, rather than just one section of each cell. Through Phase I and II SBIR contracts, Ames provided Amnis the funding the company needed to develop this extended functionality. For NASA, the resulting high-speed image flow cytometry process made its way into Medusa, a life-detection instrument built to collect, store, and analyze sample organisms from erupting hydrothermal vents, and has the potential to benefit space flight health monitoring. On the commercial end, Amnis has implemented the process in ImageStream, combining high-resolution microscopy and flow cytometry in a single instrument, giving researchers the power to conduct quantitative analyses of individual cells and cell populations at the same time, in the same experiment. ImageStream is also built for many other applications, including cell signaling and pathway analysis; classification and characterization of peripheral blood mononuclear cell populations; quantitative morphology; apoptosis (cell death) assays; gene expression analysis; analysis of cell conjugates; molecular distribution; and receptor mapping and distribution.

  5. Data analysis for GOPEX image frames

    NASA Technical Reports Server (NTRS)

    Levine, B. M.; Shaik, K. S.; Yan, T.-Y.

    1993-01-01

    The data analysis based on the image frames received at the Solid State Imaging (SSI) camera of the Galileo Optical Experiment (GOPEX) demonstration conducted between 9-16 Dec. 1992 is described. Laser uplink was successfully established between the ground and the Galileo spacecraft during its second Earth-gravity-assist phase in December 1992. SSI camera frames were acquired which contained images of detected laser pulses transmitted from the Table Mountain Facility (TMF), Wrightwood, California, and the Starfire Optical Range (SOR), Albuquerque, New Mexico. Laser pulse data were processed using standard image-processing techniques at the Multimission Image Processing Laboratory (MIPL) for preliminary pulse identification and to produce public release images. Subsequent image analysis corrected for background noise to measure received pulse intensities. Data were plotted to obtain histograms on a daily basis and were then compared with theoretical results derived from applicable weak-turbulence and strong-turbulence considerations. Processing steps are described and the theories are compared with the experimental results. Quantitative agreement was found in both turbulence regimes, and better agreement would have been found given more received laser pulses. Future experiments should consider methods to reliably measure low-intensity pulses and, through experimental planning, to geometrically locate pulse positions with greater certainty.
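The background-correction step described above can be sketched generically: estimate the background from pixels bordering the detection window and subtract it before summing the pulse counts (an illustration under our own naming and window conventions, not the MIPL pipeline):

```python
import numpy as np

def pulse_intensity(frame, y0, y1, x0, x1):
    """Sum counts in the pulse window frame[y0:y1, x0:x1] after
    subtracting the mean of a one-pixel border ring around it
    (a local background estimate)."""
    outer = frame[y0 - 1:y1 + 1, x0 - 1:x1 + 1].astype(float)
    inner = frame[y0:y1, x0:x1].astype(float)
    ring_sum = outer.sum() - inner.sum()
    ring_n = outer.size - inner.size
    background = ring_sum / ring_n
    return inner.sum() - background * inner.size

frame = np.full((10, 10), 7.0)   # uniform background level
frame[4:6, 4:6] += 100.0         # injected laser pulse
assert np.isclose(pulse_intensity(frame, 4, 6, 4, 6), 400.0)
```

Intensities corrected this way can then be binned into the daily histograms the abstract compares with the turbulence models.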

  6. Quantitative histogram analysis of images

    NASA Astrophysics Data System (ADS)

    Holub, Oliver; Ferreira, Sérgio T.

    2006-11-01

    A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram, and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: the analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western blot. The possibilities and limitations of this kind of analysis are discussed. Program summary: Title of program: HAWGC. Catalogue identifier: ADXG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers: Mobile Intel Pentium III, AMD Duron. Installations: none necessary; executable file supplied together with the files needed for the LabVIEW run-time engine. Operating systems under which the program has been tested: Windows ME/2000/XP. Programming language used: LabVIEW 7.0. Memory required to execute with typical data: ~16 MB for starting and ~160 MB used for
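The conversion and moment calculations described are standard and translate directly to numpy (the default coefficients shown are the common Rec. 601 luma weights, a selectable assumption just as in the program; this is a sketch, not the HAWGC code):

```python
import numpy as np

def to_grey(rgb, coeffs=(0.299, 0.587, 0.114)):
    """Weighted RGB -> greyscale with selectable coefficients."""
    return np.tensordot(rgb.astype(float), np.asarray(coeffs), axes=([-1], [0]))

def histogram_stats(grey):
    """Mean, standard deviation, skewness and excess kurtosis of
    the intensity distribution of a greyscale image."""
    g = np.asarray(grey, dtype=float).ravel()
    mean, std = g.mean(), g.std()
    z = (g - mean) / std
    return {'mean': mean, 'std': std,
            'skewness': (z ** 3).mean(),
            'kurtosis': (z ** 4).mean() - 3.0}

rgb = np.zeros((2, 2, 3))
rgb[..., 1] = 100.0                      # pure green patch
assert np.allclose(to_grey(rgb), 58.7)   # 0.587 * 100
```

Thresholding before the moment calculation (as the ROI/threshold options allow) simply filters `g` prior to computing the statistics.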

  7. Curvelet Based Offline Analysis of SEM Images

    PubMed Central

    Shirazi, Syed Hamad; Haq, Nuhman ul; Hayat, Khizar; Naz, Saeeda; Haque, Ihsan ul

    2014-01-01

    Manual offline analysis of a scanning electron microscopy (SEM) image is a time-consuming process and requires continuous human intervention and effort. This paper presents an image processing based method for automated offline analysis of SEM images. To this end, our strategy relies on a two-stage process, viz. texture analysis and quantification. The method involves a preprocessing step aimed at noise removal, in order to avoid false edges. For texture analysis, the proposed method employs a state-of-the-art Curvelet transform followed by segmentation through a combination of entropy filtering, thresholding and mathematical morphology (MM). The quantification is carried out by the application of a box-counting algorithm for fractal dimension (FD) calculations, with the ultimate goal of measuring parameters like surface area and perimeter. The perimeter is estimated indirectly by counting the boundary boxes of the filled shapes. The proposed method, when applied to a representative set of SEM images, not only showed better results in image segmentation but also exhibited good accuracy in the calculation of surface area and perimeter. The proposed method outperforms the well-known Watershed segmentation algorithm. PMID:25089617
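The box-counting estimate of fractal dimension fits the slope of log N(s) against log(1/s), where N(s) is the number of boxes of side s that touch the structure. A generic sketch of that quantification step (the paper's pipeline adds Curvelet-based segmentation first):

```python
import numpy as np

def box_count(mask, size):
    """Number of size x size boxes containing at least one set pixel."""
    h, w = mask.shape
    count = 0
    for y in range(0, h, size):
        for x in range(0, w, size):
            if mask[y:y + size, x:x + size].any():
                count += 1
    return count

def fractal_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    counts = [box_count(mask, s) for s in sizes]
    # Slope of log N(s) vs log(1/s) is the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

line = np.zeros((64, 64), dtype=bool)
line[32, :] = True                 # a straight one-dimensional curve
assert 0.9 < fractal_dimension(line) < 1.1
```

For a straight line the estimate is close to 1 and for a filled region close to 2, which is the sanity check usually applied before trusting FD values on real segmentations.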

  8. Correlating two-photon excited fluorescence imaging of breast cancer cellular redox state with Seahorse flux analysis of normalized cellular oxygen consumption

    NASA Astrophysics Data System (ADS)

    Hou, Jue; Wright, Heather J.; Chan, Nicole; Tran, Richard; Razorenova, Olga V.; Potma, Eric O.; Tromberg, Bruce J.

    2016-06-01

    Two-photon excited fluorescence (TPEF) imaging of the cellular cofactors nicotinamide adenine dinucleotide and oxidized flavin adenine dinucleotide is widely used to measure cellular metabolism, both in normal and pathological cells and tissues. When dual-wavelength excitation is used, ratiometric TPEF imaging of the intrinsic cofactor fluorescence provides a metabolic index of cells: the "optical redox ratio" (ORR). With increased interest in understanding and controlling cellular metabolism in cancer, there is a need to evaluate the performance of the ORR in malignant cells. We compare TPEF metabolic imaging with Seahorse flux analysis of cellular oxygen consumption in two different breast cancer cell lines (MCF-7 and MDA-MB-231). We monitor the metabolic index in living cells under both normal culture conditions and, for MCF-7, in response to cell respiration inhibitors and uncouplers. We observe a significant correlation between the TPEF-derived ORR and the flux analyzer measurements (R=0.7901, p<0.001). Our results confirm that the ORR is a valid dynamic index of cell metabolism under a range of oxygen consumption conditions relevant for cancer imaging.
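The optical redox ratio itself is a per-pixel quotient of the two cofactor channels. One widely used convention is FAD/(FAD + NADH); variants exist, and this sketch is not the paper's exact pipeline:

```python
import numpy as np

def optical_redox_ratio(fad, nadh, eps=1e-9):
    """Per-pixel ORR = FAD / (FAD + NADH); eps avoids 0/0 in
    empty background pixels."""
    fad = np.asarray(fad, dtype=float)
    nadh = np.asarray(nadh, dtype=float)
    return fad / (fad + nadh + eps)

fad = np.array([[2.0, 1.0]])
nadh = np.array([[2.0, 3.0]])
orr = optical_redox_ratio(fad, nadh)
assert np.allclose(orr, [[0.5, 0.25]], atol=1e-6)
```

Averaging the per-pixel ratio over segmented cell regions gives the scalar metabolic index that is correlated against the flux analyzer's oxygen consumption rates.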

  9. Momentum space imaging of the FFLO state

    NASA Astrophysics Data System (ADS)

    Akbari, Alireza; Thalmeier, Peter

    2016-06-01

    In a magnetic field, superconductors (SC) with a small orbital effect exhibit the Fulde–Ferrell–Larkin–Ovchinnikov (FFLO) phase above the Pauli limiting field. It is characterized by Cooper pairs with finite center-of-mass momentum and is stabilized by the gain in Zeeman energy of depaired electrons in the imbalanced Fermi gas. The ground state is a coherent superposition of paired and depaired states. This concept, although central to the FFLO state, lacks a direct experimental confirmation. We propose that STM quasiparticle interference (QPI) can give a direct momentum space image of the depaired states in the FFLO wave function. For a proof of principle we investigate a 2D single orbital tight binding model with a SC s-wave order parameter. Using the equilibrium values of pair momentum and SC gap we calculate the spectral function of quasiparticles and the associated QPI spectrum as a function of magnetic field. We show that the characteristic depaired Fermi surface parts appear as a fingerprint in the QPI spectrum of the FFLO phase and we demonstrate its evolution with field strength. Its observation in STM experiments would constitute a direct proof of the FFLO ground state wave function.

  10. A computational image analysis glossary for biologists.

    PubMed

    Roeder, Adrienne H K; Cunha, Alexandre; Burl, Michael C; Meyerowitz, Elliot M

    2012-09-01

    Recent advances in biological imaging have resulted in an explosion in the quality and quantity of images obtained in a digital format. Developmental biologists are increasingly acquiring beautiful and complex images, thus creating vast image datasets. In the past, patterns in image data have been detected by the human eye. Larger datasets, however, necessitate high-throughput objective analysis tools to computationally extract quantitative information from the images. These tools have been developed in collaborations between biologists, computer scientists, mathematicians and physicists. In this Primer we present a glossary of image analysis terms to aid biologists and briefly discuss the importance of robust image analysis in developmental studies. PMID:22872081

  11. Target identification by image analysis.

    PubMed

    Fetz, V; Prochnow, H; Brönstrup, M; Sasse, F

    2016-05-01

    Covering: 1997 to the end of 2015. Each biologically active compound induces phenotypic changes in target cells that are characteristic for its mode of action. These phenotypic alterations can be directly observed under the microscope or made visible by labelling structural elements or selected proteins of the cells with dyes. A comparison of the cellular phenotype induced by a compound of interest with the phenotypes of reference compounds with known cellular targets allows its mode of action to be predicted. While this approach has been successfully applied to the characterization of natural products based on a visual inspection of images, recent studies used automated microscopy and analysis software to increase speed and to reduce subjective interpretation. In this review, we give a general outline of the workflow for manual and automated image analysis, and we highlight natural products whose bacterial and eukaryotic targets could be identified through such approaches. PMID:26777141

  12. Analysis of an interferometric Stokes imaging polarimeter

    NASA Astrophysics Data System (ADS)

    Murali, Sukumar

    Estimation of Stokes vector components from an interferometric fringe encoded image is a novel way of measuring the State Of Polarization (SOP) distribution across a scene. Imaging polarimeters employing interferometric techniques encode SOP information across a scene in a single image in the form of intensity fringes. The lack of moving parts and the use of a single image eliminate the problems of conventional polarimetry: vibration, spurious signal generation due to artifacts, beam wander, and the need for registration routines. However, interferometric polarimeters are limited by narrow-bandpass and short-exposure-time operation, which decreases the Signal to Noise Ratio (SNR), defined as the ratio of the mean photon count to the standard deviation in the detected image. A simulation environment for designing an Interferometric Stokes Imaging Polarimeter (ISIP) and a detector with noise effects is created and presented. Users of this environment are capable of imaging an object with a defined SOP through an ISIP onto a detector, producing a digitized image output. The simulation also includes bandpass imaging capabilities, control of detector noise, and object brightness levels. The Stokes images are estimated from a fringe encoded image of a scene by means of a reconstructor algorithm. A spatial domain methodology involving the idea of a unit cell and a slide approach is applied to the reconstructor model developed using Mueller calculus. The validation of this methodology and its effectiveness compared to a discrete approach is demonstrated with suitable examples. The pixel size required to sample the fringes and the minimum unit cell size required for reconstruction are investigated using condition numbers. The importance of the PSF of the fore-optics (telescope) used in imaging the object is investigated and analyzed using a point source imaging example, and a Nyquist criterion is presented. Reconstruction of fringe modulated images in the presence of noise involves choosing an
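For context, the conventional division-of-time measurement that interferometric polarimeters are designed to replace recovers the linear Stokes components from intensities behind a rotating analyzer. A sketch of that baseline (not of the interferometric reconstructor itself):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes components from intensities behind a polarizer
    at 0, 45, 90 and 135 degrees (S3 requires a retarder and is
    omitted here)."""
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = i45 - i135
    return np.array([s0, s1, s2])

# Horizontally polarized light of unit intensity obeys Malus's law,
# I(theta) = cos^2(theta), so the four measurements are:
print(linear_stokes(1.0, 0.5, 0.0, 0.5))  # -> [1. 1. 0.]
```

Each of the four exposures is taken at a different time, which is exactly the vibration/registration vulnerability that single-image fringe encoding avoids.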

  13. Planning applications in image analysis

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.

    1994-01-01

    We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.

  14. Quantitative multi-image analysis for biomedical Raman spectroscopic imaging.

    PubMed

    Hedegaard, Martin A B; Bergholt, Mads S; Stevens, Molly M

    2016-05-01

    Imaging by Raman spectroscopy enables unparalleled label-free insights into cell and tissue composition at the molecular level. With established approaches limited to single image analysis, there are currently no general guidelines or consensus on how to quantify biochemical components across multiple Raman images. Here, we describe a broadly applicable methodology for the combination of multiple Raman images into a single image for analysis. This is achieved by removing image specific background interference, unfolding the series of Raman images into a single dataset, and normalisation of each Raman spectrum to render comparable Raman images. Multivariate image analysis is finally applied to derive the contributing 'pure' biochemical spectra for relative quantification. We present our methodology using four independently measured Raman images of control cells and four images of cells treated with strontium ions from substituted bioactive glass. We show that the relative biochemical distribution per area of the cells can be quantified. In addition, using k-means clustering, we are able to discriminate between the two cell types over multiple Raman images. This study shows a streamlined quantitative multi-image analysis tool for improving cell/tissue characterisation and opens new avenues in biomedical Raman spectroscopic imaging. PMID:26833935
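The unfold, normalise and cluster sequence described above can be sketched with numpy alone (a schematic of the idea; the paper's pipeline also removes image-specific background and derives 'pure' component spectra by multivariate analysis, and all names here are ours):

```python
import numpy as np

def unfold_and_normalise(images):
    """Stack (h, w, n_wavenumbers) Raman images into one spectrum
    matrix and scale each spectrum to unit total intensity."""
    spectra = np.concatenate([im.reshape(-1, im.shape[-1]) for im in images])
    return spectra / spectra.sum(axis=1, keepdims=True)

def kmeans(X, k=2, iters=20):
    """Tiny k-means with deterministic farthest-point initialisation."""
    centroids = X[[0]]
    for _ in range(k - 1):
        d = ((X[:, None] - centroids[None]) ** 2).sum(-1).min(1)
        centroids = np.vstack([centroids, X[d.argmax()]])
    for _ in range(iters):
        labels = ((X[:, None] - centroids[None]) ** 2).sum(-1).argmin(1)
        centroids = np.vstack([X[labels == j].mean(0) for j in range(k)])
    return labels

rng = np.random.default_rng(1)
a = np.abs(rng.normal(1.0, 0.01, (4, 4, 8)))              # 'cell type A' image
b = np.abs(rng.normal(1.0, 0.01, (4, 4, 8)))
b[..., :4] += 5.0                                         # 'cell type B' image
X = unfold_and_normalise([a, b])
labels = kmeans(X, k=2)
assert labels[0] != labels[16]   # the two images fall in different clusters
```

Normalising each spectrum before clustering is what makes pixels from independently measured images comparable, which is the central point of the methodology.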

  15. Grid computing in image analysis

    PubMed Central

    2011-01-01

Diagnostic surgical pathology, or tissue-based diagnosis, still remains the most reliable and specific diagnostic medical procedure. The development of whole slide scanners permits the creation of virtual slides and work on so-called virtual microscopes. In addition to interactive work on virtual slides, approaches have been reported that introduce automated virtual microscopy, which is composed of several tools focusing on quite different tasks. These include evaluation of image quality and image standardization, analysis of potentially useful thresholds for object detection and identification (segmentation), dynamic segmentation procedures, adjustable magnification to optimize feature extraction, and texture analysis including image transformation and evaluation of elementary primitives. Grid technology seems to possess all features needed to efficiently target and control the specific tasks of image information and detection in order to obtain a detailed and accurate diagnosis. Grid technology is based upon so-called nodes that are linked together and share certain communication rules using open standards. Their number and functionality can vary according to the needs of a specific user at a given point in time. When implementing automated virtual microscopy with Grid technology, all five Grid functions have to be taken into account, namely 1) computation services, 2) data services, 3) application services, 4) information services, and 5) knowledge services. Although all mandatory tools of automated virtual microscopy can be implemented in a closed or standardized open system, Grid technology offers a new dimension to acquire, detect, classify, and distribute medical image information, and to assure quality in tissue-based diagnosis. PMID:21516880

  16. Automated image analysis of uterine cervical images

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Gu, Jia; Ferris, Daron; Poirson, Allen

    2007-03-01

Cervical cancer is the second most common cancer among women worldwide and the leading cause of cancer mortality of women in developing countries. If detected early and treated adequately, cervical cancer can be virtually prevented. Cervical precursor lesions and invasive cancer exhibit certain morphologic features that can be identified during a visual inspection exam. Digital imaging technologies allow us to assist the physician with a Computer-Aided Diagnosis (CAD) system. In colposcopy, epithelium that turns white after application of acetic acid is called acetowhite epithelium. Acetowhite epithelium is one of the major diagnostic features observed in detecting cancer and pre-cancerous regions. Automatic extraction of acetowhite regions from cervical images has been a challenging task due to specular reflection, various illumination conditions, and, most importantly, large intra-patient variation. This paper presents a multi-step acetowhite region detection system to analyze the acetowhite lesions in cervical images automatically. First, the system calibrates the color of the cervical images to be independent of screening devices. Second, the anatomy of the uterine cervix is analyzed in terms of cervix region, external os region, columnar region, and squamous region. Third, the squamous region is further analyzed and subregions based on three levels of acetowhite are identified. The extracted acetowhite regions are accompanied by color scores to indicate the different levels of acetowhite. The system has been evaluated on data from 40 human subjects and demonstrates high correlation with experts' annotations.

  17. Neural network ultrasound image analysis

    NASA Astrophysics Data System (ADS)

    Schneider, Alexander C.; Brown, David G.; Pastel, Mary S.

    1993-09-01

Neural network based analysis of ultrasound image data was carried out on liver scans of normal subjects and those diagnosed with diffuse liver disease. In a previous study, ultrasound images from a group of normal volunteers, Gaucher's disease patients, and hepatitis patients were obtained by Garra et al., who used classical statistical methods to distinguish among these three classes. In the present work, neural network classifiers were employed with the same image features found useful in the previous study for this task. Both standard backpropagation neural networks and a recently developed biologically-inspired network called Dystal were used. Classification performance as measured by the area under a receiver operating characteristic curve was generally excellent for the backpropagation networks and was roughly comparable to that of the classical statistical discriminators tested on the same data set and documented in the earlier study. Performance of the Dystal network was significantly inferior; however, this may be due to the choice of network parameters. Potential methods for enhancing network performance were identified.

  18. Object-Based Image Analysis of WorldView-2 Satellite Data for the Classification of Mangrove Areas in the City of São Luís, Maranhão State, Brazil

    NASA Astrophysics Data System (ADS)

    Kux, H. J. H.; Souza, U. D. V.

    2012-07-01

Taking into account the importance of mangrove environments for the biodiversity of coastal areas, the objective of this paper is to classify the different types of irregular human occupation in areas of mangrove vegetation in São Luís, capital of Maranhão State, Brazil, using the OBIA (Object-Based Image Analysis) approach with WorldView-2 satellite data and InterIMAGE, a free image analysis software package. A methodology for the study of the area covered by mangroves at the northern portion of the city was proposed to identify the main targets of this area, such as: marsh areas (known locally as Apicum), mangrove forests, tidal channels, blockhouses (irregular constructions), embankments, paved streets and different condominiums. Initially, a databank including information on the main types of occupation and environments was established for the area under study. An image fusion (multispectral bands with the panchromatic band) was performed to improve the information content of the WorldView-2 data. Next, an ortho-rectification was performed on the dataset in order to compare it with cartographic data from the municipality, using Ground Control Points (GCPs) collected during a field survey. Using the data mining software GEODMA, a series of attributes characterizing the targets of interest was established. Afterwards, the classes were structured, a knowledge model was created and the classification performed. The OBIA approach eased the mapping of such sensitive areas, showing the irregular occupations and embankments on mangrove forests, which reduce their area and damage the marine biodiversity.

  19. Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox

    PubMed Central

    Lacerda, Luis Miguel; Ferreira, Hugo Alexandre

    2015-01-01

Aim. In recent years, connectivity studies using neuroimaging data have increased the understanding of the organization of large-scale structural and functional brain networks. However, data analysis is time consuming as rigorous procedures must be assured, from structuring data and pre-processing to modality-specific data procedures. Until now, no single toolbox has been able to perform such investigations on truly multimodal image data from beginning to end, including the combination of different connectivity analyses. Thus, we have developed the Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox with the goal of reducing time wasted in data processing and allowing an innovative and comprehensive approach to brain connectivity. Materials and Methods. The MIBCA toolbox is a fully automated all-in-one connectivity toolbox that offers pre-processing, connectivity and graph theoretical analyses of multimodal image data such as diffusion-weighted imaging, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). It was developed in the MATLAB environment and pipelines well-known neuroimaging software such as Freesurfer, SPM, FSL, and Diffusion Toolkit. It further implements routines for the construction of structural, functional and effective or combined connectivity matrices, as well as routines for the extraction and calculation of imaging and graph-theory metrics, the latter also using functions from the Brain Connectivity Toolbox. Finally, the toolbox performs group statistical analysis and enables data visualization in the form of matrices, 3D brain graphs and connectograms. In this paper the MIBCA toolbox is presented by illustrating its capabilities using multimodal image data from a group of 35 healthy subjects (19–73 years old) with volumetric T1-weighted, diffusion tensor imaging, and resting-state fMRI data, and 10 subjects with 18F-Altanserin PET data. Results. It was observed both a high inter-hemispheric symmetry

  20. The Determination Of Titan's Rotational State From Cassini SAR Images

    NASA Astrophysics Data System (ADS)

    Persi Del Marmo, P.; Iess, L.; Picardi, G.; Seu, R.; Bertotti, B.

    2007-12-01

SAR images acquired by the Cassini spacecraft in overlapping strips have been used to determine the vectorial angular velocity of Titan. The method entails the tracking of surface landmarks at different times (and mean anomalies). Cassini radar observations have so far provided 14 high resolution image pairs of the same portions of Titan's surface, spanning the period from 2004 to 2007. Each image is referenced both in an inertial frame and in the IAU, Titan-centric, body-fixed reference frame. This referencing is quite precise, as the position of Cassini relative to Titan is known with an accuracy better than 100 m during each flyby. The IAU body-fixed frame assumes a spin axis different from the actual one; therefore, in this putative frame a landmark appears at different geographic coordinates in the two observations. By correlating the two images of the same surface region, one obtains a two-dimensional vector which retains information about the true spin axis. This vector provides the magnitude and direction of the displacement to be applied to a reference point of each image in order to produce maximum correlation. The correlation therefore results in a new Titan-centric, inertial referencing of the images, R(t1) and R(t2). The spin axis s is then obtained by requiring that [R(t2) - R(t1)] · s = 0 for each overlapping image pair. Due to experimental errors (dominated by image correlation errors and inaccuracies in the spacecraft orbit relative to Titan) the left-hand sides cannot be simultaneously zeroed, and the spin axis must be determined by means of a least-squares procedure. The magnitude of the angular velocity is then derived from the angle between the vectors R(t1) and R(t2) and the known time difference between the two observations. Our analysis indicates that the Titan pole coordinates are consistent with the occupancy of the fourth Cassini state. The uncertainties are obtained assuming a realistic error of 250 m in the reconstruction of the inertially
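The least-squares determination of the spin axis from the orthogonality condition on the displacement vectors can be sketched numerically: stacking one displacement vector per image pair into a matrix D, the axis is the right singular vector of D with the smallest singular value, i.e. the least-squares null direction. This is a hypothetical numpy illustration of that step, not the authors' code; the function name and array shapes are assumptions.

```python
import numpy as np

def estimate_spin_axis(r1, r2):
    """Estimate the spin axis s from overlapping image pairs.

    r1, r2: (n_pairs, 3) arrays holding the inertial positions of the
    same reference points at the two observation times.  Each row of
    D = r2 - r1 should be orthogonal to s, so s is taken as the right
    singular vector of D with the smallest singular value (the
    least-squares solution of D s = 0)."""
    D = np.asarray(r2, float) - np.asarray(r1, float)
    _, _, vt = np.linalg.svd(D)   # singular values sorted descending
    s = vt[-1]                    # direction of the smallest one
    return s / np.linalg.norm(s)
```

The sign of s is undetermined by the condition alone; the rotation sense fixes it in practice.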

  1. Abnormal regional homogeneity as potential imaging biomarker for psychosis risk syndrome: a resting-state fMRI study and support vector machine analysis.

    PubMed

    Wang, Shuai; Wang, Guodong; Lv, Hailong; Wu, Renrong; Zhao, Jingping; Guo, Wenbin

    2016-01-01

    Subjects with psychosis risk syndrome (PRS) have structural and functional abnormalities in several brain regions. However, regional functional synchronization of PRS has not been clarified. We recruited 34 PRS subjects and 37 healthy controls. Regional homogeneity (ReHo) of resting-state functional magnetic resonance scans was employed to analyze regional functional synchronization in these participants. Receiver operating characteristic curves and support vector machines were used to detect whether abnormal regional functional synchronization could be utilized to separate PRS subjects from healthy controls. We observed that PRS subjects showed significant ReHo decreases in the left inferior temporal gyrus and increases in the right inferior frontal gyrus and right putamen compared with the controls. No correlations between abnormal regional functional synchronization in these brain regions and clinical characteristics existed. A combination of the ReHo values in the three brain regions showed sensitivity, specificity, and accuracy of 88.24%, 91.89%, and 90.14%, respectively, for discriminating PRS subjects from healthy controls. We inferred that abnormal regional functional synchronization exists in the cerebrum of PRS subjects, and a combination of ReHo values in these abnormal regions could be applied as potential image biomarker to identify PRS subjects from healthy controls. PMID:27272341
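The reported diagnostic figures follow directly from the confusion matrix: with 34 PRS subjects and 37 controls, a sensitivity of 88.24% corresponds to 30 true positives, a specificity of 91.89% to 34 true negatives, and the accuracy is then 64/71 = 90.14%. A minimal sketch of the metric computation (with hypothetical labels, not the study's data):

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy for binary labels
    (1 = PRS subject, 0 = healthy control)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy
```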

  2. Abnormal regional homogeneity as potential imaging biomarker for psychosis risk syndrome: a resting-state fMRI study and support vector machine analysis

    PubMed Central

    Wang, Shuai; Wang, Guodong; Lv, Hailong; Wu, Renrong; Zhao, Jingping; Guo, Wenbin

    2016-01-01

    Subjects with psychosis risk syndrome (PRS) have structural and functional abnormalities in several brain regions. However, regional functional synchronization of PRS has not been clarified. We recruited 34 PRS subjects and 37 healthy controls. Regional homogeneity (ReHo) of resting-state functional magnetic resonance scans was employed to analyze regional functional synchronization in these participants. Receiver operating characteristic curves and support vector machines were used to detect whether abnormal regional functional synchronization could be utilized to separate PRS subjects from healthy controls. We observed that PRS subjects showed significant ReHo decreases in the left inferior temporal gyrus and increases in the right inferior frontal gyrus and right putamen compared with the controls. No correlations between abnormal regional functional synchronization in these brain regions and clinical characteristics existed. A combination of the ReHo values in the three brain regions showed sensitivity, specificity, and accuracy of 88.24%, 91.89%, and 90.14%, respectively, for discriminating PRS subjects from healthy controls. We inferred that abnormal regional functional synchronization exists in the cerebrum of PRS subjects, and a combination of ReHo values in these abnormal regions could be applied as potential image biomarker to identify PRS subjects from healthy controls. PMID:27272341

  3. Lineaments derived from analysis of linear features mapped from Landsat images of the Four Corners region of the Southwestern United States

    USGS Publications Warehouse

    Knepper, Daniel H.

    1982-01-01

Linear features are relatively short, distinct, non-cultural linear elements mappable on Landsat multispectral scanner (MSS) images. Most linear features are related to local topographic features, such as cliffs, slope breaks, narrow ridges, and stream valley segments, that are interpreted as reflecting aspects of local geologic structure including faults, zones of fracturing (joints), and the strike of tilted beds. 6,050 linear features were mapped on computer-enhanced Landsat MSS images of 11 Landsat scenes covering an area from the Rio Grande rift zone on the east to the Grand Canyon on the west, and from the San Juan Mountains, Colorado, on the north to the Mogollon Rim on the south. Computer-aided statistical analysis of the linear feature data revealed 5 statistically important trend intervals: 1) N10°W-N16°E, 2) N35°-72°E, 3) N33°-59°W, 4) N74°-83°W, and 5) N89°-90°W and N89°-90°E. Subsequent analysis of the distribution of the linear features indicated that only the first three trend intervals are of regional geologic significance. Computer-generated maps of the linear features in each important trend interval were prepared, as well as contour maps showing the relative concentrations of linear features in each trend interval. These maps were then analyzed for patterns suggestive of possible regional tectonic lines. 20 possible tectonic lines, or lineaments, were interpreted from the maps. One lineament is defined by an obvious change in overall linear feature concentrations along a northwest-trending line cutting across northeastern Arizona: linear features are abundant northeast of the line and relatively scarce to the southwest. The remaining 19 lineaments represent the axes of clusters of parallel linear features elongated in the direction of the linear feature trends. Most of these lineaments mark previously known structural zones controlled by linear features in the Precambrian basement or show newly recognized relationships to
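The tallying of linear features into trend intervals can be sketched as a simple azimuth histogram. This is a hypothetical illustration: the interval bounds follow the three regionally significant trends, with azimuths measured in degrees from north and negative values denoting west of north.

```python
def in_trend(azimuth, lo, hi):
    """True if an azimuth (degrees from north, range -90..90,
    negative = west of north) falls within the interval [lo, hi]."""
    return lo <= azimuth <= hi

# The three regionally significant trend intervals, as
# (label, lo, hi) tuples in the sign convention above:
TRENDS = [
    ("N10W-N16E", -10, 16),
    ("N35-72E", 35, 72),
    ("N33-59W", -59, -33),
]

def count_by_trend(azimuths):
    """Histogram mapped linear-feature azimuths into the trend
    intervals; features outside every interval are not counted."""
    return {label: sum(in_trend(a, lo, hi) for a in azimuths)
            for label, lo, hi in TRENDS}
```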

  4. Analysis of physical processes via imaging vectors

    NASA Astrophysics Data System (ADS)

    Volovodenko, V.; Efremova, N.; Efremov, V.

    2016-06-01

Practically all modeled processes are in one way or another random. The foremost theoretical foundation formulated for them is that of Markov processes, which can be represented in different forms. A Markov process is a random process that undergoes transitions from one state to another on a state space, where the probability distribution of the next state depends only on the current state and not on the sequence of events that preceded it. In a Markov process, the model's proposition about the future does not change when additional information about preceding times becomes available. Modeling physical fields generally involves processes changing in time, i.e., non-stationary processes. In this case, application of the Laplace transformation introduces unjustified complications into the description, whereas a transition to other representations results in explicit simplification. The method of imaging vectors provides constructive mathematical models and the necessary transitions in the modeling process and the analysis itself. The flexibility of a model built on a polynomial basis enables rapid transformation of the mathematical model and accelerates further analysis. It should be noted that the mathematical description permits an operator representation. Conversely, operator representation of the structures, algorithms, and data processing procedures significantly improves the flexibility of the modeling process.

  5. Spreadsheet-like image analysis

    NASA Astrophysics Data System (ADS)

    Wilson, Paul

    1992-08-01

This report describes the design of a new software system being built by the Army to support and augment automated nondestructive inspection (NDI) on-line equipment implemented by the Army for the detection of defective manufactured items. The new system recalls and post-processes (off-line) the NDI data sets archived by the on-line equipment, for the purposes of verifying the correctness of the inspection analysis paradigms, developing better analysis paradigms, and gathering statistics on the defects of the items inspected. The design of the system is similar to that of a spreadsheet, i.e., an array of cells which may be programmed to contain functions whose arguments are data from other cells and whose resultant is the output of that cell's function. Unlike a spreadsheet, the arguments and resultants of a cell may be a matrix, such as a two-dimensional matrix of picture elements (pixels). Functions include matrix mathematics, neural networks and image processing, as well as those ordinarily found in spreadsheets. The system employs all of the common environmental supports of the Macintosh computer, which is the hardware platform. The system allows the resultant of a cell to be displayed in any of multiple formats, such as a matrix of numbers, text, an image, or a chart. Each cell is a window onto its resultant. Like a spreadsheet, if the input value of any cell is changed, its effect is cascaded into the resultants of all cells whose functions use that value directly or indirectly. The system encourages the user to play what-if games, as ordinary spreadsheets do.
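The cascading recomputation described above is a dependency graph with dirty-flag invalidation. A minimal sketch of that mechanism (the class and method names are assumptions for illustration, not the Army system's design; cell values may equally be matrices of pixels, since a cell's function can be any matrix operation):

```python
class Cell:
    """A spreadsheet-like cell: holds either a constant value or a
    function of other cells.  Changing an input marks every dependent
    cell dirty, so recomputation cascades on the next read."""

    def __init__(self, value=None, func=None, args=()):
        self.func, self.args = func, tuple(args)
        self._value, self._dirty = value, func is not None
        self.dependents = []
        for a in self.args:           # register as a dependent of inputs
            a.dependents.append(self)

    def set(self, value):
        """Change an input value and invalidate everything downstream."""
        self._value = value
        self._invalidate_dependents()

    def _invalidate_dependents(self):
        for d in self.dependents:
            if not d._dirty:          # stop early on already-dirty cells
                d._dirty = True
                d._invalidate_dependents()

    def value(self):
        """Return the resultant, recomputing lazily if stale."""
        if self._dirty:
            self._value = self.func(*(a.value() for a in self.args))
            self._dirty = False
        return self._value
```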

  6. IMAGE ANALYSIS ALGORITHMS FOR DUAL MODE IMAGING SYSTEMS

    SciTech Connect

    Robinson, Sean M.; Jarman, Kenneth D.; Miller, Erin A.; Misner, Alex C.; Myjak, Mitchell J.; Pitts, W. Karl; Seifert, Allen; Seifert, Carolyn E.; Woodring, Mitchell L.

    2010-06-11

    The level of detail discernable in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes where information barriers are mandatory. However, if a balance can be struck between sufficient information barriers and feature extraction to verify or identify objects of interest, imaging may significantly advance verification efforts. This paper describes the development of combined active (conventional) radiography and passive (auto) radiography techniques for imaging sensitive items assuming that comparison images cannot be furnished. Three image analysis algorithms are presented, each of which reduces full image information to non-sensitive feature information and ultimately is intended to provide only a yes/no response verifying features present in the image. These algorithms are evaluated on both their technical performance in image analysis and their application with or without an explicitly constructed information barrier. The first algorithm reduces images to non-invertible pixel intensity histograms, retaining only summary information about the image that can be used in template comparisons. This one-way transform is sufficient to discriminate between different image structures (in terms of area and density) without revealing unnecessary specificity. The second algorithm estimates the attenuation cross-section of objects of known shape based on transition characteristics around the edge of the object’s image. The third algorithm compares the radiography image with the passive image to discriminate dense, radioactive material from point sources or inactive dense material. By comparing two images and reporting only a single statistic from the combination thereof, this algorithm can operate entirely behind an information barrier stage. Together with knowledge of the radiography system, the use of these algorithms in combination can be used to improve verification capability to inspection regimes and improve
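The first algorithm's one-way reduction to a pixel intensity histogram, followed by a yes/no template comparison, can be sketched as follows. This is a minimal numpy illustration with an assumed bin count, intensity range, and distance threshold; it is not the actual information-barrier implementation.

```python
import numpy as np

def intensity_histogram(image, bins=32):
    """One-way transform: reduce an image to a normalised pixel
    intensity histogram.  The histogram retains summary area/density
    information but cannot be inverted to recover the image."""
    hist, _ = np.histogram(np.asarray(image), bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def matches_template(image, template_hist, threshold=0.1, bins=32):
    """Yes/no verification: compare the image's histogram against a
    stored template and report only whether the L1 distance between
    the two histograms falls below the threshold."""
    d = np.abs(intensity_histogram(image, bins) - template_hist).sum()
    return bool(d < threshold)
```

Only the single yes/no answer would cross the information barrier; the histogram itself stays inside.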

  7. FFDM image quality assessment using computerized image texture analysis

    NASA Astrophysics Data System (ADS)

    Berger, Rachelle; Carton, Ann-Katherine; Maidment, Andrew D. A.; Kontos, Despina

    2010-04-01

Quantitative measures of image quality (IQ) are routinely obtained during the evaluation of imaging systems. These measures, however, do not necessarily correlate with the IQ of the actual clinical images, which can also be affected by factors such as patient positioning. No quantitative method currently exists to evaluate clinical IQ. Therefore, we investigated the potential of using computerized image texture analysis to quantitatively assess IQ. Our hypothesis is that image texture features can be used to assess IQ as a measure of the image signal-to-noise ratio (SNR). To test feasibility, the "Rachel" anthropomorphic breast phantom (Model 169, Gammex RMI) was imaged with a Senographe 2000D FFDM system (GE Healthcare) using 220 unique exposure settings (target/filter, kV, and mAs combinations). The mAs were varied from 10%-300% of that required for an average glandular dose (AGD) of 1.8 mGy. A 2.5 cm² retroareolar region of interest (ROI) was segmented from each image. The SNR was computed from the ROIs segmented from images linear with dose (i.e., raw images) after flat-field and off-set correction. Image texture features of skewness, coarseness, contrast, energy, homogeneity, and fractal dimension were computed from the Premium View™ postprocessed image ROIs. Multiple linear regression demonstrated a strong association between the computed image texture features and SNR (R² = 0.92, p ≤ 0.001). When including kV, target and filter as additional predictor variables, a stronger association with SNR was observed (R² = 0.95, p ≤ 0.001). The strong associations indicate that computerized image texture analysis can be used to measure image SNR and potentially aid in automating IQ assessment as a component of the clinical workflow. Further work is underway to validate our findings in larger clinical datasets.
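The multiple linear regression relating texture features to SNR, with R² as the figure of merit, can be sketched with ordinary least squares. This is a generic numpy illustration on synthetic data; the feature values and coefficients are not those of the study.

```python
import numpy as np

def fit_snr_model(features, snr):
    """Ordinary least squares: model SNR as a linear combination of
    texture features (plus an intercept) and report the coefficient
    of determination R^2."""
    features, snr = np.asarray(features, float), np.asarray(snr, float)
    X = np.column_stack([np.ones(len(snr)), features])  # intercept column
    beta, *_ = np.linalg.lstsq(X, snr, rcond=None)
    pred = X @ beta
    ss_res = ((snr - pred) ** 2).sum()          # residual sum of squares
    ss_tot = ((snr - snr.mean()) ** 2).sum()    # total sum of squares
    return beta, 1.0 - ss_res / ss_tot
```

Adding kV, target, and filter as extra predictors amounts to appending columns (e.g. dummy-coded categories) to `features`.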

  8. Image analysis applications for grain science

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Steele, James L.

    1991-02-01

Morphometrical features of single grain kernels or particles were used to discriminate between two visibly similar wheat varieties, foreign material in wheat, hard/soft and spring/winter wheat classes, and whole from broken corn kernels. Milled fractions of hard and soft wheat were evaluated using textural image analysis. Color image analysis of sound and mold-damaged corn kernels yielded high recognition rates. The studies collectively demonstrate the potential for automated classification and assessment of grain quality using image analysis.

  9. Automatic processing, analysis, and recognition of images

    NASA Astrophysics Data System (ADS)

    Abrukov, Victor S.; Smirnov, Evgeniy V.; Ivanov, Dmitriy G.

    2004-11-01

New approaches and computer codes (A&CC) for automatic processing, analysis, and recognition of images are offered. The A&CC are based on the presentation of an object image as a collection of pixels of various colours and the consecutive automatic painting of distinguishable parts of the image. The A&CC have technical objectives centred on directions such as: 1) image processing, 2) image feature extraction, and 3) image analysis, among others, in any sequence and combination. The A&CC allow various geometrical and statistical parameters of an object image and its parts to be obtained. Additional possibilities arise from combining the A&CC with artificial neural network technologies. We believe the A&CC can be used in building systems for testing and control in various fields of industry and military applications (airborne imaging systems, tracking of moving objects), in medical diagnostics, in new software for CCDs, in industrial vision and decision-making systems, etc. The capabilities of the A&CC have been tested on image analysis of model fires and plumes of sprayed fluid, ensembles of particles, decoding of interferometric images, digitization of paper charts of electrical signals, text recognition, image denoising and filtering, analysis of astronomical images and aerial photography, and detection of objects.

  10. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  11. Current state of imaging in dermatology.

    PubMed

    Hibler, Brian P; Qi, Qiaochu; Rossi, Anthony M

    2016-03-01

Medical imaging has dramatically transformed the practice of medicine, especially the field of dermatology. Imaging is used to facilitate the transfer of information between providers, document cutaneous disease, and assess response to therapy, and it plays a crucial role in monitoring and diagnosing skin cancer. Advancements in imaging technology and overall improved quality of imaging have augmented the utility of photography. We provide an overview of current imaging technologies used in dermatology with a focus on their role in skin cancer diagnosis. Future technologies include three-dimensional, total-body photography, mobile smartphone applications, and computer-assisted diagnostic devices. With these advancements, we are better equipped to capture and monitor skin conditions longitudinally and achieve improved diagnostic accuracy of skin cancer. PMID:26963110

  12. PIZZARO: Forensic analysis and restoration of image and video data.

    PubMed

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin, and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures, and thus fulfills the formal aspects of image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were developed in tight cooperation between scientists from the Institute of Criminalistics, the National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. PMID:27182830

  13. Cardiac-state-driven CT image reconstruction algorithm for cardiac imaging

    NASA Astrophysics Data System (ADS)

    Cesmeli, Erdogan; Edic, Peter M.; Iatrou, Maria; Hsieh, Jiang; Gupta, Rajiv; Pfoh, Armin H.

    2002-05-01

Multi-slice CT scanners use EKG gating to predict the cardiac phase during slice reconstruction from projection data. Cardiac phase is generally defined with respect to the RR interval. The implicit assumption made is that the duration of events in an RR interval scales linearly when the heart rate changes. Using a more detailed EKG analysis, we evaluate the impact of relaxing this assumption on image quality. We developed a reconstruction algorithm that analyzes the associated EKG waveform to extract the natural cardiac states. A wavelet transform was used to decompose each RR interval into P, QRS, and T waves. Subsequently, cardiac phase was defined with respect to these waves instead of as a percentage or time delay from the beginning or end of RR intervals. The projection data were then tagged with the cardiac phase and processed using temporal weights that are functions of their cardiac phase. Finally, the tagged projection data were combined from multiple cardiac cycles using a multi-sector algorithm to reconstruct images. The new algorithm was applied to clinical data, collected on a 4-slice (GE LightSpeed Qx/i) and an 8-slice CT scanner (GE LightSpeed Plus), with heart rates of 40 to 80 bpm. The quality of reconstruction was assessed by the visualization of the major arteries, e.g., RCA, LAD, and LC, in reformatted 3D images. Preliminary results indicate that the cardiac-state-driven reconstruction algorithm offers better image quality than its RR-based counterparts.

  14. Microscopy image segmentation tool: Robust image data analysis

    SciTech Connect

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images that contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible, allowing incorporation of specialized, user-developed analyses. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  15. Image registration with uncertainty analysis

    DOEpatents

    Simonson, Katherine M.

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
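
    The matching criterion described above can be sketched directly in NumPy. This is a minimal illustration only, assuming a simple gradient-threshold edge detector and an exhaustive integer-translation search; the patent's actual edge detector and its statistical criterion for "not significantly worse" points are not reproduced here.

```python
import numpy as np

def edge_map(img, thresh=0.2):
    """Binary edge map from gradient magnitude (a simple stand-in for
    a real edge detector; threshold is relative to the peak gradient)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()

def match_percentage(e1, e2, dy, dx):
    """Percentage of edge pixels in e2 that are also edges in e1
    shifted by the translation (dy, dx)."""
    shifted = np.roll(e1, (dy, dx), axis=(0, 1))
    n_edges = e2.sum()
    return 100.0 * (shifted & e2).sum() / n_edges if n_edges else 0.0

def best_registration(img1, img2, search=5):
    """Exhaustive search for the translation maximizing the percentage
    of matched edges; returns (best percentage, (dy, dx))."""
    e1, e2 = edge_map(img1), edge_map(img2)
    return max((match_percentage(e1, e2, dy, dx), (dy, dx))
               for dy in range(-search, search + 1)
               for dx in range(-search, search + 1))
```

    For a second image that is a pure translation of the first, the search recovers the shift with a 100% edge match.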

  16. Current state of ring imaging Cherenkov detectors

    SciTech Connect

    Coutrakon, G.B.

    1984-02-01

    This paper reviews several ring imaging Cherenkov detectors which are being used or developed to identify particles in high energy physics experiments. These detectors must have good detection efficiency for single photoelectrons and good spatial resolution over a large area. Emphasis is placed on the efficiencies and resolutions of these detectors as determined from ring imaging beam tests and other experiments. Following a brief review of the ring imaging technique, comparative evaluations are made of different forms of detectors and their respective materials.

  17. Millimeter-wave sensor image analysis

    NASA Technical Reports Server (NTRS)

    Wilson, William J.; Suess, Helmut

    1989-01-01

    Images from an airborne scanning radiometer operating at a frequency of 98 GHz have been analyzed. The mm-wave images were obtained in 1985/1986 using the JPL mm-wave imaging sensor. The goal of this study was to enhance the information content of these images and make their interpretation easier for human analysts. In this paper, a visual interpretative approach was used for information extraction from the images, including the application of nonlinear transform techniques for noise reduction and for color, contrast, and edge enhancement. Results of these techniques on selected mm-wave images are presented.

  18. Image processing software for imaging spectrometry data analysis

    NASA Technical Reports Server (NTRS)

    Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

  19. Quantitative analysis of digital microscope images.

    PubMed

    Wolf, David E; Samarasekera, Champika; Swedlow, Jason R

    2013-01-01

    This chapter discusses quantitative analysis of digital microscope images and presents several exercises that illustrate the concepts. The basic concepts in quantitative analysis for imaging rest on a well-established foundation of signal theory and quantitative data analysis. The chapter presents several examples for understanding the imaging process as a transformation from sample to image, along with the limits and considerations of quantitative analysis. It introduces the concept of digitally correcting images and focuses on some of the more critical types of data transformation and some frequently encountered issues in quantization. Image processing represents a form of data processing, of which there are many examples, such as fitting data to a theoretical curve. In all these cases, it is critical that care is taken during all steps of transformation, processing, and quantization. PMID:23931513

  20. Multiscale Analysis of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C. A.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is that there is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also cursed us with a greater volume of more complex data than previous missions. We have improved our view of the Sun, yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer; little quantitative and objective analysis is done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods are possibly suited to solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to model the human ability to view and comprehend phenomena on different scales, so they could be used to quantify the image processing done by the observer's eyes and brain. In this work we present a preliminary analysis of multiscale techniques applied to solar image data. Specifically, we explore the use of the 2-D wavelet transform and related transforms with EIT, LASCO and TRACE images. This work was supported by NASA contract NAS5-00220.
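
    As a toy illustration of the multiscale decomposition mentioned above, a single level of the 2-D Haar wavelet transform can be computed with plain NumPy. This is a minimal sketch only; actual analysis of EIT, LASCO or TRACE data would use more sophisticated transforms, and the even-dimension requirement here is an assumption of the sketch.

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2-D Haar transform (image dims must be even):
    returns approximation plus horizontal/vertical/diagonal details."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 4.0,  # approximation
            (a + b - c - d) / 4.0,  # horizontal detail
            (a - b + c - d) / 4.0,  # vertical detail
            (a - b - c + d) / 4.0)  # diagonal detail

def haar_pyramid(img, levels=3):
    """Repeatedly decompose the approximation to build a multiscale
    pyramid of detail bands."""
    bands = []
    for _ in range(levels):
        img, h, v, d = haar2d_level(img)
        bands.append((h, v, d))
    return img, bands
```

    Structures then show up as large detail coefficients at the scales where they live; a constant image produces all-zero detail bands.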

  1. A 3D image analysis tool for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

    2005-04-01

    We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we outline effectively the complex three dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed both with the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of the gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.

  2. Histology image analysis for carcinoma detection and grading

    PubMed Central

    He, Lei; Long, L. Rodney; Antani, Sameer; Thoma, George R.

    2012-01-01

    This paper presents an overview of the image analysis techniques in the domain of histopathology, specifically, for the objective of automated carcinoma detection and classification. As in other biomedical imaging areas such as radiology, many computer-assisted diagnosis (CAD) systems have been implemented to aid histopathologists and clinicians in cancer diagnosis and research; these systems attempt to significantly reduce the labor and subjectivity of traditional manual intervention with histology images. The task of automated histology image analysis is usually not simple due to the unique characteristics of histology imaging, including the variability in image preparation techniques, clinical interpretation protocols, and the complex structures and very large size of the images themselves. In this paper we discuss those characteristics, provide relevant background information about slide preparation and interpretation, and review the application of digital image processing techniques to the field of histology image analysis. In particular, emphasis is given to state-of-the-art image segmentation methods for feature extraction and disease classification. Four major carcinomas, of the cervix, prostate, breast, and lung, are selected to illustrate the functions and capabilities of existing CAD systems. PMID:22436890

  3. Image Reconstruction Using Analysis Model Prior.

    PubMed

    Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping

    2016-01-01

    The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator on the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum measurement numbers which are lower than those in cases without using analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171
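
    The cosparse analysis model referred to above can be illustrated with the specific 2-D finite-difference operator: applying it to a piecewise-constant image yields mostly zero coefficients, and this cosparsity is what makes reconstruction from undersampled data possible. The following is a minimal sketch of the operator and the cosparsity count only, not of the authors' ICD reconstruction algorithm.

```python
import numpy as np

def finite_difference_operator(img):
    """2-D finite-difference analysis operator: horizontal and vertical
    first differences stacked into one coefficient vector."""
    return np.concatenate([np.diff(img, axis=1).ravel(),
                           np.diff(img, axis=0).ravel()])

def cosparsity(img, tol=1e-12):
    """Cosparsity = number of (near-)zero analysis coefficients;
    piecewise-constant images are highly cosparse under this operator."""
    return int(np.sum(np.abs(finite_difference_operator(img)) <= tol))
```

    A two-region 8x8 image has 112 finite-difference coefficients of which only 8 are nonzero, while a noise image is essentially not cosparse at all.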

  4. Image Reconstruction Using Analysis Model Prior

    PubMed Central

    Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping

    2016-01-01

    The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator on the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum measurement numbers which are lower than those in cases without using analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171

  5. Analysis operator learning and its application to image reconstruction.

    PubMed

    Hawe, Simon; Kleinsteuber, Martin; Diepold, Klaus

    2013-06-01

    Exploiting a priori known structural information lies at the core of many image reconstruction methods that can be stated as inverse problems. The synthesis model, which assumes that images can be decomposed into a linear combination of very few atoms of some dictionary, is now a well established tool for the design of image reconstruction algorithms. An interesting alternative is the analysis model, where the signal is multiplied by an analysis operator and the outcome is assumed to be sparse. This approach has only recently gained increasing interest. The quality of reconstruction methods based on an analysis model depends heavily on the choice of a suitable operator. In this paper, we present an algorithm for learning an analysis operator from training images. Our method is based on l(p)-norm minimization on the set of full rank matrices with normalized columns. We carefully introduce the employed conjugate gradient method on manifolds, and explain the underlying geometry of the constraints. Moreover, we compare our approach to state-of-the-art methods for image denoising, inpainting, and single image super-resolution. Our numerical results show competitive performance of our general approach in all presented applications compared to the specialized state-of-the-art techniques. PMID:23412611

  6. Pathology imaging informatics for quantitative analysis of whole-slide images

    PubMed Central

    Kothari, Sonal; Phan, John H; Stokes, Todd H; Wang, May D

    2013-01-01

    Objectives With the objective of bringing clinical decision support systems to reality, this article reviews histopathological whole-slide imaging informatics methods, associated challenges, and future research opportunities. Target audience This review targets pathologists and informaticians who have a limited understanding of the key aspects of whole-slide image (WSI) analysis and/or a limited knowledge of state-of-the-art technologies and analysis methods. Scope First, we discuss the importance of imaging informatics in pathology and highlight the challenges posed by histopathological WSI. Next, we provide a thorough review of current methods for: quality control of histopathological images; feature extraction that captures image properties at the pixel, object, and semantic levels; predictive modeling that utilizes image features for diagnostic or prognostic applications; and data and information visualization that explores WSI for de novo discovery. In addition, we highlight future research directions and discuss the impact of large public repositories of histopathological data, such as the Cancer Genome Atlas, on the field of pathology informatics. Following the review, we present a case study to illustrate a clinical decision support system that begins with quality control and ends with predictive modeling for several cancer endpoints. Currently, state-of-the-art software tools only provide limited image processing capabilities instead of complete data analysis for clinical decision-making. We aim to inspire researchers to conduct more research in pathology imaging informatics so that clinical decision support can become a reality. PMID:23959844

  7. Trapping Image State Electrons on Graphene Layers and Islands

    NASA Astrophysics Data System (ADS)

    Dadap, Jerry; Niesner, Daniel; Fauster, Thomas; Zaki, Nader; Knox, Kevin; Yeh, Po-Chi; Bhandari, Rohan; Osgood, Richard M.; Petrovic, Marin; Kralj, Marko

    2012-02-01

    The understanding of graphene-metal interfaces is of utmost importance in graphene transport phenomena. To probe this interface we use time- and angle-resolved two-photon photoemission to map the bound, unoccupied electronic structure of the weakly coupled graphene/Ir(111) system. The energy, dispersion, and lifetime of the lowest three image-potential states are measured. In addition, the weak interaction between Ir and the smooth, epitaxial graphene permits observation of resonant transitions from an unquenched Shockley-type surface state of the Ir substrate to graphene/Ir image-potential states. The image-potential-state lifetimes are comparable to those of mid-gap clean metal surfaces. Evidence of localization of the excited image-state electrons on single-atom-layer graphene islands is provided by coverage-dependent measurements.

  8. Control of multiple excited image states around segmented carbon nanotubes.

    PubMed

    Knörzer, J; Fey, C; Sadeghpour, H R; Schmelcher, P

    2015-11-28

    Electronic image states around segmented carbon nanotubes can be confined and shaped along the nanotube axis by engineering the image potential. We show how several such image states can be prepared simultaneously along the same nanotube. The inter-electronic distance can be controlled a priori by engineering tubes of specific geometries. High sensitivity to external electric and magnetic fields can be exploited to manipulate these states and their mutual long-range interactions. These building blocks provide access to a new kind of tailored interacting quantum systems. PMID:26627961

  9. Control of multiple excited image states around segmented carbon nanotubes

    SciTech Connect

    Knörzer, J.; Fey, C.; Sadeghpour, H. R.; Schmelcher, P.

    2015-11-28

    Electronic image states around segmented carbon nanotubes can be confined and shaped along the nanotube axis by engineering the image potential. We show how several such image states can be prepared simultaneously along the same nanotube. The inter-electronic distance can be controlled a priori by engineering tubes of specific geometries. High sensitivity to external electric and magnetic fields can be exploited to manipulate these states and their mutual long-range interactions. These building blocks provide access to a new kind of tailored interacting quantum systems.

  10. State-of-the-art imaging of prostate cancer.

    PubMed

    Marko, Jamie; Gould, C Frank; Bonavia, Grant H; Wolfman, Darcy J

    2016-03-01

    Prostate cancer is the most common cancer in men. Modern medical imaging is intimately involved in the diagnosis and management of prostate cancer. Ultrasound is primarily used to guide prostate biopsy to establish the diagnosis of prostate carcinoma. Prostate magnetic resonance imaging uses a multiparametric approach, including anatomic and functional imaging sequences. Multiparametric magnetic resonance imaging can be used for detection and localization of prostate cancer and to evaluate for disease recurrence. Computed tomography and scintigraphic imaging are primarily used to detect regional lymph node spread and distant metastases. Recent advancements in ultrasound, multiparametric magnetic resonance imaging, and scintigraphic imaging have the potential to change the way prostate cancer is diagnosed and managed. This article addresses the major imaging modalities involved in the evaluation of prostate cancer and updates the reader on the state of the art for each modality. PMID:26087969

  11. Description, Recognition and Analysis of Biological Images

    SciTech Connect

    Yu Donggang; Jin, Jesse S.; Luo Suhuai; Pham, Tuan D.; Lai Wei

    2010-01-25

    The description, recognition and analysis of biological images play an important role in helping humans describe and understand the related biological information. Color images are separated by color reduction. A new and efficient linearization algorithm is introduced based on criteria defined over the difference chain code. A series of critical points is obtained from the linearized lines. The curvature angle, linearity, maximum linearity, convexity, concavity and bend angle of the linearized lines are calculated from the starting line to the end line along all smoothed contours. This method can be used for shape description and recognition. The analysis, decision and classification of the biological images are based on the description of morphological structures, color information and prior knowledge, which are associated with each other. The efficiency of the algorithms is demonstrated with two applications: the description, recognition and analysis of color flower images, and the dynamic description, recognition and analysis of cell-cycle images.
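
    The difference-chain-code idea mentioned above can be sketched as follows. This is an illustrative toy assuming an 8-connected contour given as an ordered pixel list; the paper's actual linearization criteria are more elaborate.

```python
# 8-connected chain codes: code k corresponds to step direction (dy, dx)
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(points):
    """Chain code of a contour given as an ordered list of (y, x) pixels."""
    return [DIRS.index((y1 - y0, x1 - x0))
            for (y0, x0), (y1, x1) in zip(points, points[1:])]

def difference_code(codes):
    """Difference chain code: change of direction between successive
    steps, folded into -3..4 so that straight runs become runs of zeros
    (which a linearization criterion can then merge into lines)."""
    return [((b - a + 3) % 8) - 3 for a, b in zip(codes, codes[1:])]
```

    Straight segments give zero difference codes, while bends give nonzero values whose sign and magnitude encode the turn.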

  12. Accuracy in Quantitative 3D Image Analysis

    PubMed Central

    Bassel, George W.

    2015-01-01

    Quantitative 3D imaging is becoming an increasingly popular and powerful approach to investigate plant growth and development. With the increased use of 3D image analysis, standards to ensure the accuracy and reproducibility of these data are required. This commentary highlights how image acquisition and postprocessing can introduce artifacts into 3D image data and proposes steps to increase both the accuracy and reproducibility of these analyses. It is intended to aid researchers entering the field of 3D image processing of plant cells and tissues and to help general readers in understanding and evaluating such data. PMID:25804539

  13. Characterization and analysis of infrared images

    NASA Astrophysics Data System (ADS)

    Raglin, Adrienne; Wetmore, Alan; Ligon, David

    2006-05-01

    Stokes images in the long-wave infrared (LWIR) and methods for processing polarimetric data continue to be areas of interest. Stokes images, which are sensitive to geometry and material differences, are acquired by measuring the polarization state of the received electromagnetic radiation. The polarimetric data from Stokes images may provide enhancements to conventional IR imagery data. It is generally agreed that polarimetric images can reveal information about objects or features within a scene that is not available through other imaging techniques. This additional information may generate different approaches to segmentation, detection, and recognition of objects or features. Previous research using horizontal and vertical polarization data supports the use of this type of data for image processing tasks. In this work we analyze a sample polarimetric image to show both improved segmentation of objects and derivation of their inherent 3-D geometry.

  14. Optical Analysis of Microscope Images

    NASA Astrophysics Data System (ADS)

    Biles, Jonathan R.

    Microscope images were analyzed with coherent and incoherent light using analog optical techniques. These techniques were found to be useful for analyzing large numbers of nonsymbolic, statistical microscope images. In the first part phase coherent transparencies having 20-100 human multiple myeloma nuclei were simultaneously photographed at 100 power magnification using high resolution holographic film developed to high contrast. An optical transform was obtained by focussing the laser onto each nuclear image and allowing the diffracted light to propagate onto a one dimensional photosensor array. This method reduced the data to the position of the first two intensity minima and the intensity of successive maxima. These values were utilized to estimate the four most important cancer detection clues of nuclear size, shape, darkness, and chromatin texture. In the second part, the geometric and holographic methods of phase incoherent optical processing were investigated for pattern recognition of real-time, diffuse microscope images. The theory and implementation of these processors was discussed in view of their mutual problems of dimness, image bias, and detector resolution. The dimness problem was solved by either using a holographic correlator or a speckle free laser microscope. The latter was built using a spinning tilted mirror which caused the speckle to change so quickly that it averaged out during the exposure. To solve the bias problem low image bias templates were generated by four techniques: microphotography of samples, creation of typical shapes by computer graphics editor, transmission holography of photoplates of samples, and by spatially coherent color image bias removal. The first of these templates was used to perform correlations with bacteria images. The aperture bias was successfully removed from the correlation with a video frame subtractor. 
To overcome the limited detector resolution it is necessary to discover some analog nonlinear intensity

  15. Objective analysis of image quality of video image capture systems

    NASA Astrophysics Data System (ADS)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images have been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast' image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images have been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give
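
    The line-pair test image described above is straightforward to synthesize. The following is a hedged sketch only; the dimensions and strip layout are illustrative assumptions, not the exact test images used in the study.

```python
import numpy as np

def line_pair_test_image(width=256, height=64, strip=10):
    """Alternating one-pixel black and white vertical lines, preceded
    by ten-pixel black and white equilibration strips (dimensions are
    illustrative). A capture system with an adequate slew rate resolves
    the lines; a slow one blurs them to an average gray."""
    img = np.zeros((height, width), dtype=np.uint8)
    img[:, strip:2 * strip] = 255                 # white equilibration strip
    cols = np.arange(width - 2 * strip)
    img[:, 2 * strip:] = np.where(cols % 2 == 0, 0, 255)
    return img
```

    Comparing the full black-to-white amplitude of the line region before and after capture gives a simple slew-rate figure of merit.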

  16. Scale-Specific Multifractal Medical Image Analysis

    PubMed Central

    Braverman, Boris

    2013-01-01

    Fractal geometry has been applied widely in the analysis of medical images to characterize the irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. In this study, we treat the nonfractal behaviour of medical images over large-scale ranges by considering their box-counting fractal dimension as a scale-dependent parameter rather than a single number. We describe this approach in the context of the more generalized Rényi entropy, in which we can also compute the information and correlation dimensions of images. In addition, we describe and validate a computational improvement to box-counting fractal analysis. This improvement is based on integral images, which allows the speedup of any box-counting or similar fractal analysis algorithm, including estimation of scale-dependent dimensions. Finally, we applied our technique to images of invasive breast cancer tissue from 157 patients to show a relationship between the fractal analysis of these images over certain scale ranges and pathologic tumour grade (a standard prognosticator for breast cancer). Our approach is general and can be applied to any medical imaging application in which the complexity of pathological image structures may have clinical value. PMID:24023588
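
    The integral-image speedup of box counting described above can be sketched as follows: a summed-area table gives the occupancy of any box in O(1), and the box-counting dimension is the slope of log(count) against log(1/size). This is a minimal illustration; the authors' scale-dependent, Rényi-entropy-based analysis is more general.

```python
import numpy as np

def integral_image(binary):
    """Summed-area table with a zero border row/column, so the sum of
    any box is four lookups."""
    return np.pad(binary.astype(np.int64).cumsum(0).cumsum(1),
                  ((1, 0), (1, 0)))

def box_count(binary, size):
    """Number of size x size boxes containing at least one set pixel,
    read off the integral image instead of scanning each box."""
    ii = integral_image(binary)
    h, w = binary.shape
    count = 0
    for y in range(0, h, size):
        for x in range(0, w, size):
            y1, x1 = min(y + size, h), min(x + size, w)
            if ii[y1, x1] - ii[y, x1] - ii[y1, x] + ii[y, x] > 0:
                count += 1
    return count

def box_counting_dimension(binary, sizes=(1, 2, 4, 8, 16)):
    """Slope of log(count) vs log(1/size). Fitting the slope over
    different subsets of sizes yields a scale-dependent dimension."""
    counts = [box_count(binary, s) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

    A filled region recovers dimension 2 and a one-pixel line recovers dimension 1, as expected.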

  17. Electronic aperture control devised for solid state imaging system

    NASA Technical Reports Server (NTRS)

    Anders, R. A.; Callahan, D. E.; Mc Cann, D. H.

    1968-01-01

    Electronic means of performing the equivalent of automatic aperture control has been devised for the new class of television cameras that incorporates a solid state imaging device in the form of phototransistor mosaic sensors.

  18. Materials characterization through quantitative digital image analysis

    SciTech Connect

    J. Philliber; B. Antoun; B. Somerday; N. Yang

    2000-07-01

    A digital image analysis system has been developed to allow advanced quantitative measurement of microstructural features. This capability is maintained as part of the microscopy facility at Sandia, Livermore. The system records images digitally, eliminating the use of film. Images obtained from other sources may also be imported into the system. Subsequent digital image processing enhances image appearance through contrast and brightness adjustments. The system measures a variety of user-defined microstructural features, including area fraction, particle size and spatial distributions, and grain sizes and orientations of elongated particles. These measurements are made in a semi-automatic mode through the use of macro programs and a computer-controlled translation stage. A routine has been developed to create large montages of 50+ separate images. Individual image frames are matched to the nearest pixel to create seamless montages. Results from three different studies are presented to illustrate the capabilities of the system.
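
    Nearest-pixel matching of overlapping frames, as used to build the seamless montages described above, can be illustrated with FFT cross-correlation. The method choice is an assumption for illustration; the paper does not state which matching algorithm the system actually uses.

```python
import numpy as np

def nearest_pixel_offset(frame_a, frame_b):
    """Estimate the integer translation aligning frame_b to frame_a by
    locating the peak of their circular FFT cross-correlation."""
    fa = np.fft.fft2(frame_a - frame_a.mean())
    fb = np.fft.fft2(frame_b - frame_b.mean())
    corr = np.fft.ifft2(fa * np.conj(fb)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = frame_a.shape
    # fold wrap-around offsets into the signed range
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

    In a montage pipeline, each frame would be placed at the accumulated offset relative to its neighbor.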

  19. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  20. Factor Analysis of the Image Correlation Matrix.

    ERIC Educational Resources Information Center

    Kaiser, Henry F.; Cerny, Barbara A.

    1979-01-01

    Whether to factor the image correlation matrix or to use a new model with an alpha factor analysis of it is mentioned, with particular reference to the determinacy problem. It is pointed out that the distribution of the images is sensibly multivariate normal, making for "better" factor analyses. (Author/CTM)

  1. Viewing angle analysis of integral imaging

    NASA Astrophysics Data System (ADS)

    Wang, Hong-Xia; Wu, Chun-Hong; Yang, Yang; Zhang, Lan

    2007-12-01

    Integral imaging (II) is a technique capable of displaying 3D images with continuous parallax in full natural color. Owing to its outstanding advantages, it is becoming one of the most promising techniques for next-generation three-dimensional TV (3DTV) and visualization. However, most conventional integral images are restricted by a narrow viewing angle. One reason is that the range in which a reconstructed integral image can be displayed with consistent parallax is limited. The other is that the aperture of the system is finite. So far, many methods to enhance the viewing angle of integral images have been proposed. Nevertheless, except for Ren's MVW (Maximum Viewing Width), most of these methods involve complex hardware and modifications of the optical system, which usually bring other disadvantages, make operation more difficult, and raise the cost of the system. In order to simplify optical systems, this paper systematically analyzes the viewing angle of traditional integral images instead of modified ones. For the sake of cost, the research was based on computer-generated integral images (CGII). The analysis results show clearly how the viewing angle can be enhanced and how image overlap or image flipping can be avoided. The results also promote the development of optical instruments. Based on the theoretical analysis, preliminary calculations were done to demonstrate how other viewing properties closely related to the viewing angle, such as viewing distance, viewing zone, and lens pitch, affect the viewing angle.

  2. Sea state variability observed by high resolution satellite radar images

    NASA Astrophysics Data System (ADS)

    Pleskachevsky, A.; Lehner, S.

    2012-04-01

    The spatial variability of wave parameters is measured and investigated using new TerraSAR-X (TS-X) satellite SAR (Synthetic Aperture Radar) images. Wave groupiness, refraction, and the breaking of individual waves are studied. Spaceborne SAR is a unique sensor providing two-dimensional information on the ocean surface. Because it operates independently of daylight and weather and offers global coverage, the TS-X radar is particularly suitable for many ocean and coastal observations; it acquires images of the sea surface with up to 1 m resolution, and individual ocean waves with wavelengths below 30 m are detectable. Two-dimensional information on the ocean surface retrieved from TS-X data is validated for different oceanographic applications: derivation of the finely resolved wind field (XMOD algorithm) and of integrated sea state parameters (XWAVE algorithm). The algorithms are capable of taking into account fine-scale effects in coastal areas. This two-dimensional information can be successfully applied to validate numerical models. For this, the wind field and sea state information retrieved from SAR images are given as input to a spectral numerical wave model (wind forcing and boundary conditions). The model runs and sensitivity studies are carried out at a fine spatial horizontal resolution of 100 m. The model results are compared to buoy time series at one location and with spatially distributed wave parameters obtained from SAR. The comparison shows the sensitivity of waves to local wind variations and the importance of local effects on wave behavior in coastal areas. Examples for the German Bight, North Sea, and Rottnest Island, Australia are shown. Wave refraction, rendered by high resolution SAR images, is also studied using a wave ray tracking technique. The wave rays show the propagation of the peak waves in the SAR scenes and are estimated using image spectral analysis by deriving peak wavelength and direction. The changing of wavelength and direction in the rays allows
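
    The image spectral analysis step, deriving peak wavelength and direction from a SAR subscene, can be sketched with a 2-D FFT. The scene, pixel spacing, and wave parameters below are synthetic assumptions for illustration.

```python
import numpy as np

def peak_wave_parameters(img, dx=1.0):
    """Estimate peak wavelength (m) and propagation direction (deg, with the
    usual 180-degree ambiguity) from the 2-D image power spectrum.
    dx is the pixel spacing in meters."""
    n, m = img.shape
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    ky = np.fft.fftshift(np.fft.fftfreq(n, d=dx))
    kx = np.fft.fftshift(np.fft.fftfreq(m, d=dx))
    iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
    k = np.hypot(kx[ix], ky[iy])           # peak wavenumber magnitude
    wavelength = 1.0 / k
    direction = np.degrees(np.arctan2(ky[iy], kx[ix])) % 180
    return wavelength, direction

# Synthetic 160 m swell propagating along the x axis, 10 m pixels
x = np.arange(128) * 10.0
xx, yy = np.meshgrid(x, x)
sea = np.sin(2 * np.pi * xx / 160.0)
wl, theta = peak_wave_parameters(sea, dx=10.0)
print(round(wl), round(theta))  # -> 160 0
```

    Applying this window by window along a transect is the essence of the ray tracking described in the abstract.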

  3. Depth-based selective image reconstruction using spatiotemporal image analysis

    NASA Astrophysics Data System (ADS)

    Haga, Tetsuji; Sumi, Kazuhiko; Hashimoto, Manabu; Seki, Akinobu

    1999-03-01

    In industrial plants, a remote monitoring system that eliminates the need for physical tour inspection is often considered desirable. However, the image sequence obtained from the mobile inspection robot is hard to interpret because objects of interest are often partially occluded by obstacles such as pillars or fences. Our aim is to improve the image sequence so as to increase the efficiency and reliability of remote visual inspection. We propose a new depth-based image processing technique, which removes needless objects from the foreground and recovers the occluded background electronically. Our algorithm is based on spatiotemporal analysis that enables fine and dense depth estimation, depth-based precise segmentation, and accurate interpolation. We apply this technique to a real time sequence obtained from the mobile inspection robot. The resulting image sequence is satisfactory in that the operator can make correct visual inspections with less fatigue.

  4. Strongly localized image states of spherical graphitic particles.

    PubMed

    Gumbs, Godfrey; Balassis, Antonios; Iurov, Andrii; Fekete, Paula

    2014-01-01

    We investigate the localization of charged particles by the image potential of spherical shells, such as fullerene buckyballs. These spherical image states exist within surface potentials formed by the competition between the attractive image potential and the repulsive centripetal force arising from the angular motion. The image potential has a power-law rather than a logarithmic behavior. This leads to fundamental differences in the nature of the effective potential for the two geometries. Our calculations show that the captured charge is more strongly localized close to the surface for fullerenes than for a cylindrical nanotube. PMID:24587747

  5. Strongly Localized Image States of Spherical Graphitic Particles

    PubMed Central

    Gumbs, Godfrey

    2014-01-01

    We investigate the localization of charged particles by the image potential of spherical shells, such as fullerene buckyballs. These spherical image states exist within surface potentials formed by the competition between the attractive image potential and the repulsive centripetal force arising from the angular motion. The image potential has a power-law rather than a logarithmic behavior. This leads to fundamental differences in the nature of the effective potential for the two geometries. Our calculations show that the captured charge is more strongly localized close to the surface for fullerenes than for a cylindrical nanotube. PMID:24587747

  6. Linear digital imaging system fidelity analysis

    NASA Technical Reports Server (NTRS)

    Park, Stephen K.

    1989-01-01

    The combined effects of image gathering, sampling, and reconstruction are analyzed in terms of image fidelity. The analysis is based upon a standard end-to-end linear system model which is sufficiently general that the results apply to most line-scan and sensor-array imaging systems. Shift-variant sampling effects are accounted for with an expected value analysis based upon the use of a fixed deterministic input scene which is randomly shifted (mathematically) relative to the sampling grid. This random sample-scene phase approach has been used successfully by the author and associates in several previous related papers.

  7. Infrared image processing and data analysis

    NASA Astrophysics Data System (ADS)

    Ibarra-Castanedo, C.; González, D.; Klein, M.; Pilla, M.; Vallerand, S.; Maldague, X.

    2004-12-01

    Infrared thermography in nondestructive testing provides images (thermograms) in which zones of interest (defects) sometimes appear only as subtle signatures. In this context, raw images are often not adequate, since most defects will be missed. In other cases, what is needed is a quantitative analysis, such as for defect detection and characterization. This paper presents various methods of data analysis required at the preprocessing and/or processing stages. References to the literature are provided for known methods, which are discussed briefly, while novel methods are elaborated in more detail in the text, together with experimental results.

  8. Malware Analysis Using Visualized Image Matrices

    PubMed Central

    Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities between the image matrices. In particular, the proposed methods are applicable to packed malware samples, by applying them to the execution traces extracted through dynamic analysis. When the images are generated, the overheads can be reduced by extracting opcode sequences only from the blocks that include instructions related to staple behaviors, such as function and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons required to classify unknown samples; the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families, both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively. PMID:25133202
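
    The opcode-to-pixel idea can be sketched as follows: hash each opcode to a color, fill an image matrix in sequence order, and compare matrices by a similarity measure. The hashing scheme and cosine similarity below are illustrative stand-ins, not the paper's exact encoding.

```python
import hashlib
import numpy as np

def opcodes_to_image(opcodes, size=8):
    """Map an opcode sequence onto an RGB pixel matrix: each opcode is hashed
    to a color and pixels are filled in sequence order (illustrative scheme)."""
    img = np.zeros((size, size, 3), dtype=np.uint8)
    for i, op in enumerate(opcodes[: size * size]):
        digest = hashlib.sha256(op.encode()).digest()
        img[i // size, i % size] = list(digest[:3])   # first 3 hash bytes as RGB
    return img

def image_similarity(a, b):
    """Cosine similarity between flattened pixel matrices."""
    va, vb = a.ravel().astype(float), b.ravel().astype(float)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12))

sample1 = ["mov", "push", "call", "ret"] * 16
sample2 = ["mov", "push", "call", "ret"] * 16      # same family
sample3 = ["xor", "jmp", "nop", "leave"] * 16      # different family
img1, img2, img3 = map(opcodes_to_image, (sample1, sample2, sample3))
print(image_similarity(img1, img2) > image_similarity(img1, img3))  # -> True
```

    A representative family image, as in the abstract, could then be the pixelwise mean of a family's matrices, compared against unknowns with the same similarity function.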

  9. Malware analysis using visualized image matrices.

    PubMed

    Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities between the image matrices. In particular, the proposed methods are applicable to packed malware samples, by applying them to the execution traces extracted through dynamic analysis. When the images are generated, the overheads can be reduced by extracting opcode sequences only from the blocks that include instructions related to staple behaviors, such as function and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons required to classify unknown samples; the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families, both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively. PMID:25133202

  10. Object based image analysis for the classification of the growth stages of Avocado crop, in Michoacán State, Mexico

    NASA Astrophysics Data System (ADS)

    Gao, Yan; Marpu, Prashanth; Morales Manila, Luis M.

    2014-11-01

    This paper assesses the suitability of 8-band Worldview-2 (WV2) satellite data and an object-based random forest algorithm for the classification of avocado growth stages in Mexico. We tested both pixel-based methods, with minimum distance (MD) and maximum likelihood (MLC) classifiers, and an object-based method with the Random Forest (RF) algorithm. Training samples and verification data were selected by visually interpreting the WV2 images for seven thematic classes: fully grown, middle stage, and early stage of avocado crops, bare land, two types of natural forest, and water bodies. To examine the contribution of the four new spectral bands of the WV2 sensor, all tested classifications were carried out both with and without the four new spectral bands. Classification accuracy assessment shows that object-based classification with the RF algorithm obtained higher overall accuracy (93.06%) than the pixel-based MD (69.37%) and MLC (64.03%) methods. For both pixel-based and object-based methods, the classifications with the four new spectral bands obtained higher accuracy than those without (object-based RF: 93.06% vs 83.59%; pixel-based MD: 69.37% vs 67.2%; pixel-based MLC: 64.03% vs 36.05%), suggesting that the four new spectral bands of the WV2 sensor contributed to the increase in classification accuracy.
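
    The with/without-band comparison can be illustrated on synthetic pixel spectra. For brevity this sketch substitutes a nearest-centroid classifier for the paper's Random Forest, and all data are simulated; only the experimental design (compare accuracy with 8 vs 4 bands) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_pixels(n, n_bands, class_means):
    """Synthetic pixel spectra: one row per pixel, one column per band."""
    X = np.vstack([m + 0.5 * rng.standard_normal((n, n_bands)) for m in class_means])
    y = np.repeat(np.arange(len(class_means)), n)
    return X, y

def nearest_centroid_accuracy(Xtr, ytr, Xte, yte):
    """Learn per-class mean spectra, classify test pixels by nearest centroid."""
    centroids = np.vstack([Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)])
    pred = np.argmin(((Xte[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
    return (pred == yte).mean()

# Two crop stages separable mainly in the four "new" bands (indices 4-7)
means = [np.array([1, 1, 1, 1, 0, 0, 0, 0], float),
         np.array([1, 1, 1, 1, 2, 2, 2, 2], float)]
Xtr, ytr = make_pixels(200, 8, means)
Xte, yte = make_pixels(200, 8, means)

acc_8 = nearest_centroid_accuracy(Xtr, ytr, Xte, yte)                 # all 8 bands
acc_4 = nearest_centroid_accuracy(Xtr[:, :4], ytr, Xte[:, :4], yte)   # without new bands
print(acc_8 > acc_4)  # -> True
```

    When the discriminating information sits in the extra bands, dropping them collapses accuracy toward chance, mirroring the MLC result (64.03% vs 36.05%) in the abstract.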

  11. Image texture analysis of crushed wheat kernels

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Martin, C. R.; Steele, James L.; Dempster, Richard E.

    1992-03-01

    The development of new approaches for wheat hardness assessment may impact the grain industry in marketing, milling, and breeding. This study used image texture features for wheat hardness evaluation. Application of digital imaging to grain for grading purposes is principally based on morphometrical (shape and size) characteristics of the kernels. A composite sample of 320 kernels from 17 wheat varieties was collected after testing and crushing with a single-kernel hardness characterization meter. Six wheat classes were represented: HRW, HRS, SRW, SWW, Durum, and Club. In this study, parameters that characterize the texture, or spatial distribution of gray levels, of an image were determined and used to classify images of crushed wheat kernels. The texture parameters of crushed wheat kernel images differed depending on the class, hardness, and variety of the wheat. Image texture analysis of crushed wheat kernels showed promise for use in class, hardness, milling quality, and variety discrimination.
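
    Texture parameters of this kind are commonly derived from a gray-level co-occurrence matrix (GLCM). The abstract does not list its exact features, so the sketch below computes two classic GLCM descriptors, contrast and energy, as an assumed example.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Gray-level co-occurrence matrix for horizontal neighbors, plus two
    classic texture descriptors: contrast and energy."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)       # count pairs
    glcm /= glcm.sum()                                              # to probabilities
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return contrast, energy

# A smooth ramp has low contrast; a checkerboard has high contrast
ramp = np.tile(np.arange(64), (64, 1))
checker = np.indices((64, 64)).sum(axis=0) % 2 * 255
print(glcm_features(ramp)[0] < glcm_features(checker)[0])  # -> True
```

    Feeding such descriptors into a classifier is the usual route from "texture parameters" to class, hardness, and variety discrimination.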

  12. Breast tomosynthesis imaging configuration analysis.

    PubMed

    Rayford, Cleveland E; Zhou, Weihua; Chen, Ying

    2013-01-01

    Traditional two-dimensional (2D) X-ray mammography is the most commonly used method for breast cancer diagnosis. Recently, a three-dimensional (3D) Digital Breast Tomosynthesis (DBT) system has been invented, which is likely to challenge the current mammography technology. The DBT system provides detailed 3D information, giving physicians increased anatomical detail while reducing the chance of false negative screening. In this research, two reconstruction algorithms, Back Projection (BP) and Shift-And-Add (SAA), were used to investigate and compare View Angle (VA) and the number of projection images (N) with parallel imaging configurations. In addition, in order to determine which algorithm produced better image quality, Modulation Transfer Function (MTF) analyses were conducted for both algorithms, ultimately producing results that support better breast cancer detection. Research studies find evidence that early detection of the disease is the best way to conquer breast cancer, and earlier detection increases the life span of the affected person. PMID:23900440
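
    The Shift-And-Add idea can be shown in one dimension: features in the plane of interest reinforce after shifting each projection by that plane's known displacement, while out-of-plane structures land at different positions and blur out. All geometry below is synthetic.

```python
import numpy as np

def shift_and_add(projections, shifts):
    """Reconstruct one tomosynthesis plane by shifting each projection so the
    plane of interest aligns, then averaging (the SAA idea, in 1-D for brevity)."""
    stack = [np.roll(p, -s) for p, s in zip(projections, shifts)]
    return np.mean(stack, axis=0)

n = 32
true_pos = 16
shifts = [-2, -1, 0, 1, 2]                 # per-angle displacement of this plane
projections = []
for s in shifts:
    p = np.zeros(n)
    p[true_pos + s] = 1.0                  # in-plane feature, moves with s
    p[(5 + 3 * s) % n] = 1.0               # out-of-plane feature, moves faster
    projections.append(p)

recon = shift_and_add(projections, shifts)
print(int(np.argmax(recon)))  # -> 16
```

    The in-plane feature reconstructs at full amplitude at its true position, while the out-of-plane feature is smeared across five positions at one fifth the amplitude; that amplitude ratio is what an MTF analysis quantifies versus spatial frequency.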

  13. Breast cancer histopathology image analysis: a review.

    PubMed

    Veta, Mitko; Pluim, Josien P W; van Diest, Paul J; Viergever, Max A

    2014-05-01

    This paper presents an overview of methods that have been proposed for the analysis of breast cancer histopathology images. This research area has become particularly relevant with the advent of whole slide imaging (WSI) scanners, which can perform cost-effective and high-throughput histopathology slide digitization, and which aim at replacing the optical microscope as the primary tool used by pathologists. Breast cancer is the most prevalent form of cancer among women, and image analysis methods that target this disease have a huge potential to reduce the workload in a typical pathology lab and to improve the quality of the interpretation. This paper is meant as an introduction for nonexperts. It starts with an overview of the tissue preparation, staining, and slide digitization processes, followed by a discussion of the different image processing techniques and applications, ranging from analysis of tissue staining to computer-aided diagnosis and prognosis of breast cancer patients. PMID:24759275

  14. Principal component analysis of scintimammographic images.

    PubMed

    Bonifazzi, Claudio; Cinti, Maria Nerina; Vincentis, Giuseppe De; Finos, Livio; Muzzioli, Valerio; Betti, Margherita; Nico, Lanconelli; Tartari, Agostino; Pani, Roberto

    2006-01-01

    The recent development of new gamma imagers based on scintillation arrays with high spatial resolution has strongly improved the possibility of detecting sub-centimeter cancers in scintimammography. However, Compton scattering contamination remains the main drawback, since it limits the sensitivity of tumor detection. Principal component image analysis (PCA), recently introduced in scintimammographic imaging, is a data reduction technique able to represent the radiation emitted from the chest, healthy breast tissue, and damaged tissue as separate images. From these images a scintimammogram can be obtained in which the Compton contamination is "removed". In the present paper we compared the PCA-reconstructed images with the conventional scintimammographic images resulting from the photopeak (Ph) energy window. Data coming from a clinical trial were used. For both kinds of images the tumor presence was quantified by evaluating Student's t statistic for independent samples as a measure of the signal-to-noise ratio (SNR). Owing to the absence of Compton scattering, the PCA-reconstructed images show better noise suppression and allow more reliable diagnosis than the images obtained from the photopeak energy window, reducing the tendency to produce false positives. PMID:17646004
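
    The PCA decomposition of an image stack can be sketched with an SVD of the mean-centered pixel matrix. The synthetic "signal plus contamination" stack below is an illustrative assumption, not scintimammographic data.

```python
import numpy as np

def pca_images(stack, n_components=2):
    """Decompose a stack of images (n_images, h, w) into principal-component
    images via SVD of the mean-centered pixel matrix."""
    n, h, w = stack.shape
    X = stack.reshape(n, -1)
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components].reshape(n_components, h, w)
    scores = U[:, :n_components] * S[:n_components]   # per-image weights
    return components, scores

# Synthetic "energy windows": a fixed pattern mixed with a varying background
rng = np.random.default_rng(1)
signal = np.zeros((16, 16)); signal[6:10, 6:10] = 1.0   # "tumor" region
background = rng.random((16, 16))                        # "contamination"
weights = np.linspace(0, 1, 12)
stack = np.array([w * signal + (1 - w) * background for w in weights])

comps, scores = pca_images(stack)
print(comps.shape, scores.shape)  # -> (2, 16, 16) (12, 2)
```

    Here the first component image recovers the signal-versus-background axis of variation, which is the sense in which PCA can separate emission sources into distinct images.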

  15. Image analysis in comparative genomic hybridization

    SciTech Connect

    Lundsteen, C.; Maahr, J.; Christensen, B.

    1995-01-01

    Comparative genomic hybridization (CGH) is a new technique by which genomic imbalances can be detected by combining in situ suppression hybridization of whole genomic DNA and image analysis. We have developed software for rapid, quantitative CGH image analysis by a modification and extension of the standard software used for routine karyotyping of G-banded metaphase spreads in the Magiscan chromosome analysis system. The DAPI-counterstained metaphase spread is karyotyped interactively. Corrections for image shifts between the DAPI, FITC, and TRITC images are made manually by moving the three images relative to each other. The fluorescence background is subtracted. A mean filter is applied to smooth the FITC and TRITC images before the fluorescence ratio between the individual FITC- and TRITC-stained chromosomes is computed pixel by pixel inside the area of the chromosomes determined by the DAPI boundaries. Fluorescence intensity ratio profiles are generated, and peaks and valleys indicating possible gains and losses of test DNA are marked if the ratio falls below 0.75 or rises above 1.25. By combining the analysis of several metaphase spreads, consistent findings of gains and losses in all or almost all spreads indicate chromosomal imbalance. Chromosomal imbalances are detected either by visual inspection of fluorescence ratio (FR) profiles or by a statistical approach that compares FR measurements of the individual case with measurements of normal chromosomes. The complete analysis of one metaphase can be carried out in approximately 10 minutes. 8 refs., 7 figs., 1 tab.
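
    The ratio-profile computation with the 0.75/1.25 thresholds described above can be sketched as follows; the mask shape and intensities are synthetic.

```python
import numpy as np

def cgh_ratio_calls(fitc, tritc, mask, low=0.75, high=1.25):
    """Pixelwise FITC/TRITC ratio inside the chromosome mask; flag candidate
    losses (< low) and gains (> high) along the chromosome axis."""
    ratio = np.where(mask, fitc / np.maximum(tritc, 1e-9), np.nan)
    profile = np.nanmean(ratio, axis=0)        # average across chromosome width
    calls = np.full(profile.shape, "normal", dtype=object)
    calls[profile < low] = "loss"
    calls[profile > high] = "gain"
    return profile, calls

# Synthetic chromosome: balanced except one gained segment (ratio 1.5)
mask = np.ones((5, 20), dtype=bool)
tritc = np.ones((5, 20))
fitc = np.ones((5, 20))
fitc[:, 12:16] = 1.5                            # extra test-DNA signal
profile, calls = cgh_ratio_calls(fitc, tritc, mask)
print(list(calls[10:14]))  # -> ['normal', 'normal', 'gain', 'gain']
```

    Combining such calls across several metaphase spreads, as the abstract describes, turns per-spread candidate peaks into consistent evidence of imbalance.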

  16. State Clean Energy Policies Analysis (SCEPA): State Tax Incentives

    SciTech Connect

    Lantz, E.; Doris, E.

    2009-10-01

    As a policy tool, state tax incentives can be structured to help states meet clean energy goals. Policymakers often use state tax incentives in concert with state and federal policies to support renewable energy deployment or reduce market barriers. This analysis used case studies of four states to assess the contributions of state tax incentives to the development of renewable energy markets. State tax incentives that are appropriately paired with complementary state and federal policies generally provide viable mechanisms to support renewable energy deployment. However, challenges to successful implementation of state tax incentives include serving project owners with limited state tax liability, assessing appropriate incentive levels, and differentiating levels of incentives for technologies with different costs. Additionally, state tax incentives may result in moderately higher federal tax burdens. These challenges notwithstanding, state tax incentives that consider certain policy design characteristics can support renewable energy markets and state clean energy goals. The scale of their impact, though, is directly related to the degree to which they support the renewable energy markets for targeted sectors and technologies. This report highlights important policy design considerations for policymakers using state tax incentives to meet clean energy goals.

  17. State estimation and absolute image registration for geosynchronous satellites

    NASA Technical Reports Server (NTRS)

    Nankervis, R.; Koch, D. W.; Sielski, H.

    1980-01-01

    Spacecraft state estimation and the absolute registration of Earth images acquired by cameras onboard geosynchronous satellites are described. The basic data type of the procedure consists of line and element numbers of image points, called landmarks, whose geodetic coordinates, relative to United States Geodetic Survey topographic maps, are known. A conventional least squares process is used to estimate navigational parameters and camera pointing biases from observed minus computed landmark line and element numbers. These estimated parameters, along with orbit and attitude dynamic models, are used to register images, using an automated grey level correlation technique, inside the span represented by the landmark data. In addition, the dynamic models can be employed to register images outside of the data span in a near real time mode. An important application of this mode is in support of meteorological studies where rapid data reduction is required for tracking and predicting dynamic phenomena.

  18. Characterization of Image States in Graphene on Ir(111)

    NASA Astrophysics Data System (ADS)

    Dadap, Jerry I.; Kralj, Marko; Petrovic, Marin; Knox, Kevin; Zaki, Nader; Bhandari, Rohan; Yeh, Po-Chun; Osgood, Richard M., Jr.

    2011-03-01

    Two-dimensional electron systems involving graphene and graphene/metal interfaces are increasingly of interest in condensed matter physics. Here, we use two-photon photoemission to map the image states of highly perfect and weakly bonded graphene on an Ir(111) substrate, revealing the effects of interaction with the underlying metal substrate. We observe a monotonic decrease in the work function with increasing graphene coverage, from 5.6 +/- 0.1 eV for clean Ir to 4.5 +/- 0.1 eV for a full graphene monolayer. We observe n = 1, 2, 3 image states with nearly free electron dispersion. Despite the minimal coupling between the graphene and Ir, the energy spacing of the image states is consistent with a single Rydberg series description, in contrast to the expected bifurcation of the image states into odd and even states for a pure graphene layer. At large k∥, we observe a weak state deviating from the n = 1 dispersion. We explain this effect in terms of scattering from the Ir substrate. This work is supported by DOE Contract No. DEFG 02-04-ER-46157.

  19. Quantitative analysis of qualitative images

    NASA Astrophysics Data System (ADS)

    Hockney, David; Falco, Charles M.

    2005-03-01

    We show optical evidence that demonstrates artists as early as Jan van Eyck and Robert Campin (c1425) used optical projections as aids for producing their paintings. We also have found optical evidence within works by later artists, including Bermejo (c1475), Lotto (c1525), Caravaggio (c1600), de la Tour (c1650), Chardin (c1750) and Ingres (c1825), demonstrating a continuum in the use of optical projections by artists, along with an evolution in the sophistication of that use. However, even for paintings where we have been able to extract unambiguous, quantitative evidence of the direct use of optical projections for producing certain of the features, this does not mean that paintings are effectively photographs. Because the hand and mind of the artist are intimately involved in the creation process, understanding these complex images requires more than can be obtained from only applying the equations of geometrical optics.

  20. Hybrid µCT-FMT imaging and image analysis

    PubMed Central

    Zafarnia, Sara; Babler, Anne; Jahnen-Dechent, Willi; Lammers, Twan; Lederle, Wiltrud; Kiessling, Fabian

    2015-01-01

    Fluorescence-mediated tomography (FMT) enables longitudinal and quantitative determination of the fluorescence distribution in vivo and can be used to assess the biodistribution of novel probes and to assess disease progression using established molecular probes or reporter genes. The combination with an anatomical modality, e.g., micro computed tomography (µCT), is beneficial for image analysis and for fluorescence reconstruction. We describe a protocol for multimodal µCT-FMT imaging including the image processing steps necessary to extract quantitative measurements. After preparing the mice and performing the imaging, the multimodal data sets are registered. Subsequently, an improved fluorescence reconstruction is performed, which takes into account the shape of the mouse. For quantitative analysis, organ segmentations are generated based on the anatomical data using our interactive segmentation tool. Finally, the biodistribution curves are generated using a batch-processing feature. We show the applicability of the method by assessing the biodistribution of a well-known probe that binds to bones and joints. PMID:26066033

  1. Particle Pollution Estimation Based on Image Analysis.

    PubMed

    Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian

    2016-01-01

    Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images and used, together with other relevant data, such as the position of the sun, date, time, geographic information, and weather conditions, to predict the PM2.5 index. The results demonstrate that the image analysis method provides good prediction of PM2.5 indexes, and that different features have different significance levels in the prediction. PMID:26828757
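
    The feature-plus-regression pipeline can be sketched with two toy image features and a least-squares fit. The features and the haze model below are illustrative assumptions, not the paper's six features or its dataset.

```python
import numpy as np

def haze_features(img):
    """Two simple image features plausibly related to particle pollution:
    global RMS contrast and mean brightness (illustrative choices)."""
    return np.array([img.std(), img.mean()])

# Synthetic outdoor scenes: heavier haze -> lower contrast, brighter veil
rng = np.random.default_rng(7)
pm_true, feats = [], []
for pm in np.linspace(10, 300, 30):
    haze = pm / 300.0
    scene = (1 - haze) * rng.random((64, 64)) + haze * 0.8
    pm_true.append(pm)
    feats.append(haze_features(scene))

# Linear least-squares fit from features (plus intercept) to PM level
X = np.column_stack([np.array(feats), np.ones(len(pm_true))])
coef, *_ = np.linalg.lstsq(X, np.array(pm_true), rcond=None)
pred = X @ coef
corr = np.corrcoef(pred, pm_true)[0, 1]
print(corr > 0.95)  # -> True
```

    The paper additionally conditions on sun position, time, location, and weather; in this framework those would simply be extra columns of X.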

  2. Particle Pollution Estimation Based on Image Analysis

    PubMed Central

    Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian

    2016-01-01

    Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images and used, together with other relevant data, such as the position of the sun, date, time, geographic information, and weather conditions, to predict the PM2.5 index. The results demonstrate that the image analysis method provides good prediction of PM2.5 indexes, and that different features have different significance levels in the prediction. PMID:26828757

  3. Lincoln Laboratory high-speed solid-state imager technology

    NASA Astrophysics Data System (ADS)

    Reich, R. K.; Rathman, D. D.; O'Mara, D. M.; Young, D. J.; Loomis, A. H.; Osgood, R. M.; Murphy, R. A.; Rose, M.; Berger, R.; Tyrrell, B. M.; Watson, S. A.; Ulibarri, M. D.; Perry, T.; Weber, F.; Robey, H.

    2007-01-01

    Massachusetts Institute of Technology, Lincoln Laboratory (MIT LL) has been developing both continuous and burst solid-state focal-plane-array technology for a variety of high-speed imaging applications. For continuous imaging, a 128 × 128-pixel charge coupled device (CCD) has been fabricated with multiple output ports for operating rates greater than 10,000 frames per second with readout noise of less than 10 e- rms. An electronic shutter has been integrated into the pixels of the back-illuminated (BI) CCD imagers that give snapshot exposure times of less than 10 ns. For burst imaging, a 5 cm × 5 cm, 512 × 512-element, multi-frame CCD imager that collects four sequential image frames at megahertz rates has been developed for the Los Alamos National Laboratory Dual Axis Radiographic Hydrodynamic Test (DARHT) facility. To operate at fast frame rates with high sensitivity, the imager uses the same electronic shutter technology as the continuously framing 128 × 128 CCD imager. The design concept and test results are described for the burst-frame-rate imager. Also discussed is an evolving solid-state imager technology that has interesting characteristics for creating large-format x-ray detectors with ultra-short exposure times (100 to 300 ps). The detector will consist of CMOS readouts for high speed sampling (tens of picoseconds transistor switching times) that are bump bonded to deep-depletion silicon photodiodes. A 64 × 64-pixel CMOS test chip has been designed, fabricated and characterized to investigate the feasibility of making large-format detectors with short, simultaneous exposure times.

  4. Membrane composition analysis by imaging mass spectrometry

    SciTech Connect

    Boxer, S G; Kraft, M L; Longo, M; Hutcheon, I D; Weber, P K

    2006-03-29

    Membranes on solid supports offer an ideal format for imaging. Secondary ion mass spectrometry (SIMS) can be used to obtain composition information on membrane-associated components. Using the NanoSIMS50, images of composition variations in membrane domains can be obtained with a lateral resolution better than 100 nm. By suitable calibration, these variations in composition can be translated into a quantitative analysis of the membrane composition. Progress towards imaging small phase-separated lipid domains, membrane-associated proteins and natural biological membranes will be described.

  5. VAICo: visual analysis for image comparison.

    PubMed

    Schmidt, Johanna; Gröller, M Eduard; Bruckner, Stefan

    2013-12-01

    Scientists, engineers, and analysts are confronted with ever larger and more complex sets of data, whose analysis poses special challenges. In many situations it is necessary to compare two or more datasets. Hence there is a need for comparative visualization tools to help analyze differences or similarities among datasets. In this paper an approach for comparative visualization for sets of images is presented. Well-established techniques for comparing images frequently place them side-by-side. A major drawback of such approaches is that they do not scale well. Other image comparison methods encode differences in images by abstract parameters like color. In this case information about the underlying image data gets lost. This paper introduces a new method for visualizing differences and similarities in large sets of images which preserves contextual information, but also allows the detailed analysis of subtle variations. Our approach identifies local changes and applies cluster analysis techniques to embed them in a hierarchy. The results of this process are then presented in an interactive web application which allows users to rapidly explore the space of differences and drill-down on particular features. We demonstrate the flexibility of our approach by applying it to multiple distinct domains. PMID:24051775

  6. A pairwise image analysis with sparse decomposition

    NASA Astrophysics Data System (ADS)

    Boucher, A.; Cloppet, F.; Vincent, N.

    2013-02-01

    This paper aims to detect the evolution between two images representing the same scene. The evolution detection problem has many practical applications, especially in medical imaging. Indeed, the concept of a patient "file" implies the joint analysis of different acquisitions taken at different times, and the detection of significant modifications. The research presented in this paper is carried out within the application context of the development of computer-assisted diagnosis (CAD) applied to mammograms. It is performed on already registered pairs of images. As the registration is never perfect, we must develop a comparison method sensitive enough to detect small real differences between comparable tissues. In many applications, the similarity measure used during the registration step is also used in the interpretation step to prompt suspicious regions. In our case registration is assumed to match the spatial coordinates of similar anatomical elements. In this paper, in order to process the medical images at the tissue level, the image representation is based on elementary patterns, therefore seeking patterns, not pixels. Besides, as the studied images have low entropy, the decomposed signal is expressed in a parsimonious way. Parsimonious representations are known to help extract the significant structures of a signal and to generate a compact version of the data. This change of representation should allow us to compare the studied images quickly, thanks to the compactness of the representation, while maintaining good representativeness. The good precision of our results demonstrates the efficiency of the approach.
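    The parsimonious (sparse) decomposition idea above can be illustrated with a minimal sparse-coding sketch. This is not the authors' algorithm: it uses orthogonal matching pursuit over an invented orthonormal dictionary, and the signal, atom indices, and sparsity level are chosen purely for the example.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedy k-sparse code of x over dictionary D."""
    residual, idx = x.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))    # best-matching atom
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)  # refit on chosen atoms
        residual = x - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((64, 64)))  # orthonormal dictionary (columns)
x = 3.0 * D[:, 5] + 2.0 * D[:, 17]                  # signal built from two atoms
code = omp(D, x, 2)                                 # recovers atoms 5 and 17
```

    Because the dictionary here is orthonormal, the greedy selection recovers the true support exactly; with the overcomplete pattern dictionaries used in practice, recovery is only approximate.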

  7. Image analysis of insulation mineral fibres.

    PubMed

    Talbot, H; Lee, T; Jeulin, D; Hanton, D; Hobbs, L W

    2000-12-01

    We present two methods for measuring the diameter and length of man-made vitreous fibres based on the automated image analysis of scanning electron microscopy images. The fibres we want to measure are used in materials such as glass wool, which in turn are used for thermal and acoustic insulation. The measurement of the diameters and lengths of these fibres is used by the glass wool industry for quality control purposes. To obtain reliable quality estimators, the measurement of several hundred images is necessary. These measurements are usually obtained manually by operators. Manual measurements, although reliable when performed by skilled operators, are slow due to the need for the operators to rest often to retain their ability to spot faint fibres on noisy backgrounds. Moreover, the task of measuring thousands of fibres every day, even with the help of semi-automated image analysis systems, is dull and repetitive. The need for an automated procedure which could replace manual measurements is quite real. For each of the two methods that we propose to accomplish this task, we present the sample preparation, the microscope setting and the image analysis algorithms used for the segmentation of the fibres and for their measurement. We also show how a statistical analysis of the results can alleviate most measurement biases, and how we can estimate the true distribution of fibre lengths by diameter class by measuring only the lengths of the fibres visible in the field of view. PMID:11106965

  8. State-Space Formulation for Circuit Analysis

    ERIC Educational Resources Information Center

    Martinez-Marin, T.

    2010-01-01

    This paper presents a new state-space approach for temporal analysis of electrical circuits. The method systematically obtains the state-space formulation of nondegenerate linear networks without using concepts of topology. It employs nodal/mesh systematic analysis to reduce the number of undesired variables. This approach helps students to…

  9. Automated eXpert Spectral Image Analysis

    Energy Science and Technology Software Center (ESTSC)

    2003-11-25

    AXSIA performs automated factor analysis of hyperspectral images. In such images, a complete spectrum is collected at each point in a 1-, 2- or 3-dimensional spatial array. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful information. Multivariate factor analysis techniques have proven effective for extracting the essential information from high-dimensional data sets into a limited number of factors that describe the spectral characteristics and spatial distributions of the pure components comprising the sample. AXSIA provides tools to estimate different types of factor models including Singular Value Decomposition (SVD), Principal Component Analysis (PCA), PCA with factor rotation, and Alternating Least Squares-based Multivariate Curve Resolution (MCR-ALS). As part of the analysis process, AXSIA can automatically estimate the number of pure components that comprise the data and can scale the data to account for Poisson noise. The data analysis methods are fundamentally based on eigenanalysis of the data crossproduct matrix coupled with orthogonal eigenvector rotation and constrained alternating least squares refinement. A novel method for automatically determining the number of significant components, based on the eigenvalues of the crossproduct matrix, has also been devised and implemented. The data can be compressed spectrally via PCA and spatially through wavelet transforms, and algorithms have been developed that perform factor analysis in the transform domain while retaining full spatial and spectral resolution in the final result. These latter innovations enable the analysis of larger-than-core-memory spectrum-images. AXSIA was designed to perform automated chemical phase analysis of spectrum-images acquired by a variety of chemical imaging techniques. Successful applications include Energy Dispersive X-ray Spectroscopy, X
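    The eigenanalysis-based component counting described above can be sketched in a few lines. The sketch below is a generic PCA/SVD illustration on synthetic data, not AXSIA's actual implementation; the spectra, abundance maps, and noise level are invented, and the largest-eigenvalue-gap rule is one simple way to automate the rank estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic spectrum image: 2 pure components on a 16x16 spatial grid, 50 channels.
spectra = np.stack([np.exp(-0.5 * ((np.arange(50) - c) / 3.0) ** 2)
                    for c in (15, 35)])              # (2, 50) pure-component spectra
maps = rng.random((2, 16 * 16))                      # (2, 256) abundance maps
data = maps.T @ spectra                              # (256, 50) mixed spectra
data += 0.01 * rng.standard_normal(data.shape)       # detector noise

# Eigenanalysis of the data crossproduct matrix (equivalently, SVD of the data).
u, s, vt = np.linalg.svd(data - data.mean(axis=0), full_matrices=False)
eig = s ** 2
# Estimate the number of significant components from the largest eigenvalue gap.
n_components = int(np.argmax(eig[:-1] / eig[1:])) + 1   # -> 2
```

    The rows of `vt[:n_components]` then span the spectral subspace that rotation or MCR-ALS refinement would turn into physically interpretable pure-component spectra.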

  10. Objective facial photograph analysis using imaging software.

    PubMed

    Pham, Annette M; Tollefson, Travis T

    2010-05-01

    Facial analysis is an integral part of the surgical planning process. Clinical photography has long been an invaluable tool in the surgeon's practice not only for accurate facial analysis but also for enhancing communication between the patient and surgeon, for evaluating postoperative results, for medicolegal documentation, and for educational and teaching opportunities. From 35-mm slide film to the digital technology of today, clinical photography has benefited greatly from technological advances. With the development of computer imaging software, objective facial analysis becomes easier to perform and less time consuming. Thus, while the original purpose of facial analysis remains the same, the process becomes much more efficient and allows for some objectivity. Although clinical judgment and artistry of technique is never compromised, the ability to perform objective facial photograph analysis using imaging software may become the standard in facial plastic surgery practices in the future. PMID:20511080

  11. Indium tin oxide for solid-state image sensors

    NASA Astrophysics Data System (ADS)

    Weijtens, Christianus Hermanus L.

    Solid State Image Sensors (SSIS), which convert light into an electrical signal, are introduced, and transparent conductive materials and their deposition methods are reviewed as a solution to imager problems. The development of basic tools to enable replacement of poly-Si by Indium Tin Oxide (ITO) in SSIS is addressed. The installation and optimization of deposition equipment, the development of deposition and process technology of ITO, and the implementation and application of ITO in an image sensor are studied. Deposition rate, homogeneity, and morphology are considered, along with parameters such as gas composition, power, pressure and substrate temperature. The scope is limited to a first-generation frame-transfer imager with only one ITO layer, although some concepts of an all-ITO imager are discussed. The sensor used is a redesign of the accordion imager. All requirements imposed on ITO were met, and the usefulness of the developed technology was demonstrated by implementing ITO in an imager. The characteristics of a constructed frame-transfer image sensor in which half the gates in the light-sensitive part were replaced by ITO gates are discussed.

  12. Motion Analysis From Television Images

    NASA Astrophysics Data System (ADS)

    Silberberg, George G.; Keller, Patrick N.

    1982-02-01

    The Department of Defense ranges have relied on photographic instrumentation for gathering data on firings of all types of ordnance. A large inventory of cameras is available on the market for these tasks. A new set of optical instrumentation is beginning to appear which, in many cases, can directly replace photographic cameras for a great deal of the work being performed now. These are television cameras modified so they can stop motion, see in the dark, perform under hostile environments, and provide real-time information. This paper discusses techniques for modifying television cameras so they can be used for motion analysis.

  13. Solid-state flat panel imager with avalanche amorphous selenium

    NASA Astrophysics Data System (ADS)

    Scheuermann, James R.; Howansky, Adrian; Goldan, Amir H.; Tousignant, Olivier; Levéille, Sébastien; Tanioka, K.; Zhao, Wei

    2016-03-01

    Active matrix flat panel imagers (AMFPI) have become the dominant detector technology for digital radiography and fluoroscopy. For low dose imaging, electronic noise from the amorphous silicon thin film transistor (TFT) array degrades imaging performance. We have fabricated the first prototype solid-state AMFPI using a uniform layer of avalanche amorphous selenium (a-Se) photoconductor to amplify the signal to eliminate the effect of electronic noise. We have previously developed a large area solid-state avalanche a-Se sensor structure referred to as High Gain Avalanche Rushing Photoconductor (HARP) capable of achieving gains of 75. In this work we successfully deposited this HARP structure onto a 24 × 30 cm² TFT array with a pixel pitch of 85 μm. An electric field (ESe) up to 105 V μm⁻¹ was applied across the a-Se layer without breakdown. Using the HARP layer as a direct detector, an X-ray avalanche gain of 15 ± 3 was achieved at ESe = 105 V μm⁻¹. In indirect mode with a 150 μm thick structured CsI scintillator, an optical gain of 76 ± 5 was measured at ESe = 105 V μm⁻¹. Image quality at low dose increases with the avalanche gain until the electronic noise is overcome at a constant exposure level of 0.76 mR. We demonstrate the success of a solid-state HARP X-ray imager as well as the largest active area HARP sensor to date.

  14. Endoscopic image analysis in semantic space.

    PubMed

    Kwitt, R; Vasconcelos, N; Rasiwasia, N; Uhl, A; Davis, B; Häfner, M; Wrba, F

    2012-10-01

    A novel approach to the design of a semantic, low-dimensional, encoding for endoscopic imagery is proposed. This encoding is based on recent advances in scene recognition, where semantic modeling of image content has gained considerable attention over the last decade. While the semantics of scenes are mainly comprised of environmental concepts such as vegetation, mountains or sky, the semantics of endoscopic imagery are medically relevant visual elements, such as polyps, special surface patterns, or vascular structures. The proposed semantic encoding differs from the representations commonly used in endoscopic image analysis (for medical decision support) in that it establishes a semantic space, where each coordinate axis has a clear human interpretation. It is also shown to establish a connection to Riemannian geometry, which enables principled solutions to a number of problems that arise in both physician training and clinical practice. This connection is exploited by leveraging results from information geometry to solve problems such as (1) recognition of important semantic concepts, (2) semantically-focused image browsing, and (3) estimation of the average-case semantic encoding for a collection of images that share a medically relevant visual detail. The approach can provide physicians with an easily interpretable, semantic encoding of visual content, upon which further decisions, or operations, can be naturally carried out. This is contrary to the prevalent practice in endoscopic image analysis for medical decision support, where image content is primarily captured by discriminative, high-dimensional, appearance features, which possess discriminative power but lack human interpretability. PMID:22717411

  16. Unsupervised hyperspectral image analysis using independent component analysis (ICA)

    SciTech Connect

    S. S. Chiang; I. W. Ginsberg

    2000-06-30

    In this paper, an ICA-based approach is proposed for hyperspectral image analysis. It can be viewed as a random version of the commonly used linear spectral mixture analysis, in which the abundance fractions in a linear mixture model are considered to be unknown independent signal sources. It does not require the full rank of the separating matrix or orthogonality as most ICA methods do. More importantly, the learning algorithm is designed based on the independence of the material abundance vector rather than the independence of the separating matrix generally used to constrain the standard ICA. As a result, the designed learning algorithm is able to converge to non-orthogonal independent components. This is particularly useful in hyperspectral image analysis since many materials extracted from a hyperspectral image may have similar spectral signatures and may not be orthogonal. The AVIRIS experiments have demonstrated that the proposed ICA provides an effective unsupervised technique for hyperspectral image classification.
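    A numpy-only sketch of the general ICA idea (a standard FastICA-style fixed point, not the authors' constraint-free algorithm) is shown below: whiten the mixed observations, then iterate the tanh-contrast update with symmetric decorrelation. The two synthetic sources stand in for per-pixel material abundances, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two non-Gaussian sources mixed by an unknown matrix.
n = 5000
sources = np.stack([np.sign(rng.standard_normal(n)) * rng.random(n),  # sub-Gaussian
                    rng.laplace(size=n)])                             # super-Gaussian
mixing = np.array([[0.8, 0.3], [0.4, 0.9]])
x = mixing @ sources

# Whitening: decorrelate and normalize the observations.
x = x - x.mean(axis=1, keepdims=True)
d, e = np.linalg.eigh(x @ x.T / n)
z = (e @ np.diag(d ** -0.5) @ e.T) @ x

# FastICA-style symmetric fixed-point iteration with tanh nonlinearity.
w = np.linalg.qr(rng.standard_normal((2, 2)))[0]
for _ in range(200):
    g = np.tanh(w @ z)
    w_new = g @ z.T / n - np.diag((1 - g ** 2).mean(axis=1)) @ w
    u, _, vt = np.linalg.svd(w_new)   # symmetric decorrelation
    w = u @ vt
recovered = w @ z                     # matches sources up to order and sign
```

    The recovered components match the true sources up to permutation and sign, which is the usual ICA indeterminacy.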

  17. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. PMID:20713305

  18. Fourier analysis: from cloaking to imaging

    NASA Astrophysics Data System (ADS)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we briefly review recent work applying the Fourier approach to the analysis of invisibility cloaks and of optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constitutive materials with extreme properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and creating illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.

  19. Fractal image analysis - Application to the topography of Oregon and synthetic images.

    NASA Technical Reports Server (NTRS)

    Huang, Jie; Turcotte, Donald L.

    1990-01-01

    Digitized topography for the state of Oregon has been used to obtain maps of fractal dimension and roughness amplitude. The roughness amplitude correlates well with variations in relief and is a promising parameter for the quantitative classification of landforms. The spatial variations in fractal dimension are low and show no clear correlation with different tectonic settings. For Oregon the mean fractal dimension from a two-dimensional spectral analysis is D = 2.586, and for a one-dimensional spectral analysis the mean fractal dimension is D = 1.487, which is close to the Brown noise value D = 1.5. Synthetic two-dimensional images have also been generated for a range of D values. For D = 2.6, the synthetic image has a mean one-dimensional spectral fractal dimension D = 1.58, which is consistent with the results for Oregon. This approach can be easily applied to any digitized image that obeys fractal statistics.
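    The one-dimensional spectral estimate of fractal dimension can be sketched as follows: synthesize a profile whose power spectrum falls off as f^(-beta), then recover D = (5 - beta)/2 from the log-log slope of the periodogram. With beta = 2 this reproduces the Brown noise value D = 1.5 quoted above; the synthesis parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Spectral synthesis of a 1-D fractal profile: power P(f) ~ f^-beta,
# with fractal dimension D = (5 - beta) / 2, so beta = 2 gives Brown noise, D = 1.5.
n = 4096
beta = 2.0
freqs = np.fft.rfftfreq(n, d=1.0)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-beta / 2)                  # amplitude ~ f^(-beta/2)
phases = rng.uniform(0, 2 * np.pi, len(freqs))      # random phases
profile = np.fft.irfft(amp * np.exp(1j * phases), n)

# Estimate D back from the periodogram slope via a log-log least-squares fit.
power = np.abs(np.fft.rfft(profile)) ** 2
slope, _ = np.polyfit(np.log(freqs[1:]), np.log(power[1:]), 1)
d_est = (5 + slope) / 2        # slope ~ -beta, so d_est ~ 1.5
```

    The same spectral-slope fit applied row by row to a digitized topography grid yields the one-dimensional fractal dimension maps described in the abstract.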

  20. Imaging of HCC-Current State of the Art.

    PubMed

    Schraml, Christina; Kaufmann, Sascha; Rempp, Hansjoerg; Syha, Roland; Ketelsen, Dominik; Notohamiprodjo, Mike; Nikolaou, Konstantin

    2015-01-01

    Early diagnosis of hepatocellular carcinoma (HCC) is crucial for optimizing treatment outcome. Ongoing advances are being made in imaging of HCC regarding detection, grading, staging, and also treatment monitoring. This review gives an overview of the current international guidelines for diagnosing HCC and their discrepancies as well as critically summarizes the role of magnetic resonance imaging (MRI) and computed tomography (CT) techniques for imaging in HCC. The diagnostic performance of MRI with nonspecific and hepatobiliary contrast agents and the role of functional imaging with diffusion-weighted imaging will be discussed. On the other hand, CT as a fast, cheap and easily accessible imaging modality plays a major role in the clinical routine work-up of HCC. Technical advances in CT, such as dual energy CT and volume perfusion CT, are currently being explored for improving detection, characterization and staging of HCC with promising results. Cone beam CT can provide a three-dimensional analysis of the liver with tumor and vessel characterization comparable to cross-sectional imaging so that this technique is gaining an increasing role in the peri-procedural imaging of HCC treated with interventional techniques. PMID:26854169

  2. Measuring toothbrush interproximal penetration using image analysis

    NASA Astrophysics Data System (ADS)

    Hayworth, Mark S.; Lyons, Elizabeth K.

    1994-09-01

    An image analysis method of measuring the effectiveness of a toothbrush in reaching the interproximal spaces of teeth is described. Artificial teeth are coated with a stain that approximates real plaque and then brushed with a toothbrush on a brushing machine. The teeth are then removed and turned sideways so that the interproximal surfaces can be imaged. The areas of stain that have been removed within masked regions that define the interproximal regions are measured and reported. These areas correspond to the interproximal areas of the tooth reached by the toothbrush bristles. The image analysis method produces more precise results (10-fold decrease in standard deviation) in a fraction (22%) of the time as compared to our prior visual grading method.
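    The area measurement itself reduces to thresholding and pixel counting within a mask. The sketch below is a hypothetical stand-in for the described system: the image values, the threshold, and the mask geometry are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(5)

# Grayscale image of the tooth's interproximal surface:
# dark pixels = stain still present, bright pixels = stain removed by brushing.
img = rng.integers(0, 60, (100, 100))                  # dark background (stain)
img[40:60, 20:80] = rng.integers(200, 256, (20, 60))   # bright brushed region

# Mask defining the interproximal region of interest.
mask = np.zeros(img.shape, bool)
mask[30:70, 10:90] = True

# Threshold and count: fraction of the masked region reached by the bristles.
removed = (img > 128) & mask
fraction = removed.sum() / mask.sum()                  # 1200 / 3200 = 0.375
```

    Repeating this over many brushed samples gives the area statistics whose reduced standard deviation the abstract reports.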

  3. Morphometry of spermatozoa using semiautomatic image analysis.

    PubMed Central

    Jagoe, J R; Washbrook, N P; Hudson, E A

    1986-01-01

    Human sperm heads were detected and tracked using semiautomatic image analysis. Measurements of size and shape on two specimens from each of 26 men showed that the major component of variability both within and between subjects was the number of small elongated sperm heads. Variability of the computed features between subjects was greater than that between samples from the same subject. PMID:3805320

  4. Scale Free Reduced Rank Image Analysis.

    ERIC Educational Resources Information Center

    Horst, Paul

    In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…

  5. Using Image Analysis to Build Reading Comprehension

    ERIC Educational Resources Information Center

    Brown, Sarah Drake; Swope, John

    2010-01-01

    Content area reading remains a primary concern of history educators. In order to better prepare students for encounters with text, the authors propose the use of two image analysis strategies tied with a historical theme to heighten student interest in historical content and provide a basis for improved reading comprehension.

  6. Expert system for imaging spectrometer analysis results

    NASA Technical Reports Server (NTRS)

    Borchardt, Gary C.

    1985-01-01

    Information on an expert system for imaging spectrometer analysis results is outlined. Implementation requirements, the Simple Tool for Automated Reasoning (STAR) program that provides a software environment for the development and operation of rule-based expert systems, STAR data structures, and rule-based identification of surface materials are among the topics outlined.

  7. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory

    T Martonen1 and J Schroeter2

    1Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and 2Curriculum in Toxicology, Unive...

  8. Analysis of Combinatorial Epigenomic States.

    PubMed

    Soloway, Paul D

    2016-03-18

    Hundreds of distinct chemical modifications to DNA and histone amino acids have been described. Regulation exerted by these so-called epigenetic marks is vital to normal development, stability of cell identity through mitosis, and nongenetic transmission of traits between generations through meiosis. Loss of this regulation contributes to many diseases. Evidence indicates epigenetic marks function in combinations, whereby a given modification has distinct effects on local genome control, depending on which additional modifications are locally present. This review summarizes emerging methods for assessing combinatorial epigenomic states, as well as challenges and opportunities for their refinement. PMID:26555135

  9. Using Brain Imaging to Track Problem Solving in a Complex State Space

    PubMed Central

    Anderson, John R.; Fincham, Jon M.; Schneider, Darryl W.; Yang, Jian

    2011-01-01

    This paper describes how behavioral and imaging data can be combined with a Hidden Markov Model (HMM) to track participants’ trajectories through a complex state space. Participants completed a problem-solving variant of a memory game that involved 625 distinct states, 24 operators, and an astronomical number of paths through the state space. Three sources of information were used for classification purposes. First, an Imperfect Memory Model was used to estimate transition probabilities for the HMM. Second, behavioral data provided information about the timing of different events. Third, multivoxel pattern analysis of the imaging data was used to identify features of the operators. By combining the three sources of information, an HMM algorithm was able to efficiently identify the most probable path that participants took through the state space, achieving over 80% accuracy. These results support the approach as a general methodology for tracking mental states that occur during individual problem-solving episodes. PMID:22209783

  10. XANES: Solid state mineral analysis

    NASA Astrophysics Data System (ADS)

    Bell, Peter M.

    Researchers in the field of mineral physics have become aware of new analytical techniques for studying the electronic structure of solids; one such technique is the X ray absorption fine structure (XAFS) method. In this technique the fine structure of the X ray K-edge, for example, can be employed as a critical probe of the intricacies of a crystal structure (P. A. Lee, P. H. Citrin, P. Eisenberger, and B. M. Kincaid, Rev. Mod. Phys., 53, 799, 1981). A similar, related technique, X ray absorption near-edge spectroscopy (XANES), is a relatively unknown method of studying the electronic structure of solids. XANES is new, and due to its complex nature it has not yet been applied rigorously to any but very simple solids. Among the first XANES results on minerals is the recent study reported by G. Knapp, B. Veal, H. Pan, and T. Klipper (Solid State Comm. 44, 1343, 1982) on perovskites, magnesiowustites, and other 3d oxides in the zircon and spinel groups. The interpretation of these results is still semiquantitative, being based on ground state and basic selection rule considerations. The results show, however, a strong correlation between near-edge spectra and crystal structure.

  11. Good relationships between computational image analysis and radiological physics

    SciTech Connect

    Arimura, Hidetaka; Kamezawa, Hidemi; Jin, Ze; Nakamoto, Takahiro; Soufi, Mazen

    2015-09-30

    Good relationships between computational image analysis and radiological physics have been constructed to increase the accuracy of medical diagnostic imaging and radiation therapy. Computational image analysis has been established on the basis of applied mathematics, physics, and engineering. This review paper introduces how computational image analysis is useful in radiation therapy with respect to radiological physics.

  13. Texture Analysis of Medical Images Using the Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Fernández, Margarita; Mavilio, Adriana

    2002-08-01

    Texture analysis of images can contribute to a better interpretation of medical images. This type of analysis provides not only qualitative but also quantitative information about the degree of tissue affection. In this work an algorithm is developed which uses the wavelet transform to carry out the supervised segmentation of echographic images corresponding to injured Achilles tendons of athletes. To construct the pattern, the image corresponding to the athlete's healthy tendon tissue is taken as a reference, exploiting the bilateral duplication of this structure. Texture features are calculated on the wavelet expansion coefficients of the images. The Mahalanobis distance between texture samples of the injured tissue and the pattern texture is computed and used as the discriminating function. It is concluded that this distance, after appropriate medical calibration, can offer quantitative information about the injury degree at every point along the damaged tissue. Further, its behavior along the segmented image can serve as a measure of the degree of change in tissue properties. A similarity-degree parameter is defined and obtained by taking into account the correlation between distance histograms for the healthy tissue and the damaged one. It is also shown that this parameter, when properly calibrated, can offer a quantitative global evaluation of the state of the injured tissue.
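    A minimal sketch of this pipeline, assuming (for illustration only) a one-level Haar transform, log-energy texture features, and synthetic fine- versus coarse-textured patches in place of real echographic data:

```python
import numpy as np

rng = np.random.default_rng(3)

def haar2d(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2] + img[1::2]) / 2                  # row averages
    d = (img[0::2] - img[1::2]) / 2                  # row details
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def texture_features(img):
    """Log-energy of the detail subbands: a simple wavelet texture descriptor."""
    _, lh, hl, hh = haar2d(img)
    return np.log([np.mean(lh**2), np.mean(hl**2), np.mean(hh**2)])

def patch(smooth):
    """'Healthy' = fine texture; 'injured' = coarser (smoothed) texture."""
    noise = rng.standard_normal((32, 32))
    if smooth:
        noise = (noise + np.roll(noise, 1, 0) + np.roll(noise, 1, 1)) / 3
    return noise

# Pattern statistics from healthy-tissue samples.
healthy = np.array([texture_features(patch(False)) for _ in range(200)])
mu, cov_inv = healthy.mean(axis=0), np.linalg.inv(np.cov(healthy.T))

def mahalanobis(f):
    d = f - mu
    return float(np.sqrt(d @ cov_inv @ d))

d_healthy = mahalanobis(texture_features(patch(False)))   # small distance
d_injured = mahalanobis(texture_features(patch(True)))    # large distance
```

    Sliding this distance over a segmented image, then correlating the resulting distance histograms, yields a similarity-degree measure in the spirit of the one described above.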

  14. Frequency domain analysis of knock images

    NASA Astrophysics Data System (ADS)

    Qi, Yunliang; He, Xin; Wang, Zhi; Wang, Jianxin

    2014-12-01

    High speed imaging-based knock analysis has mainly focused on time domain information, e.g. the spark triggered flame speed, the time when end gas auto-ignition occurs and the end gas flame speed after auto-ignition. This study presents a frequency domain analysis on the knock images recorded using a high speed camera with direct photography in a rapid compression machine (RCM). To clearly visualize the pressure wave oscillation in the combustion chamber, the images were high-pass-filtered to extract the luminosity oscillation. The luminosity spectrum was then obtained by applying fast Fourier transform (FFT) to three basic colour components (red, green and blue) of the high-pass-filtered images. Compared to the pressure spectrum, the luminosity spectra better identify the resonant modes of pressure wave oscillation. More importantly, the resonant mode shapes can be clearly visualized by reconstructing the images based on the amplitudes of luminosity spectra at the corresponding resonant frequencies, which agree well with the analytical solutions for mode shapes of gas vibration in a cylindrical cavity.
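The processing chain described above (high-pass filtering of the luminosity, then FFT to locate resonant modes) can be sketched on a one-dimensional synthetic trace. All frequencies and amplitudes below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Synthetic luminosity trace from one colour channel: a slow flame-growth
# trend plus a pressure-wave oscillation at an assumed resonant frequency.
fs = 100_000.0                     # assumed camera frame rate, Hz
t = np.arange(0, 0.01, 1.0 / fs)  # 10 ms record
f_mode = 6_000.0                  # assumed resonant mode frequency, Hz
lum = 0.5 * t + 0.1 * np.sin(2 * np.pi * f_mode * t)

# "High-pass" step: remove the slow trend to isolate the oscillation.
osc = lum - np.polyval(np.polyfit(t, lum, 1), t)

# FFT gives the luminosity spectrum; the peak marks the resonant mode.
spectrum = np.abs(np.fft.rfft(osc))
freqs = np.fft.rfftfreq(len(osc), d=1.0 / fs)
peak_freq = freqs[np.argmax(spectrum)]
```

In the paper this is done per pixel and per colour component, so the spectral amplitudes can also be mapped back onto the image to visualize the mode shapes.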

  15. The Determination Of Titan'S Rotational State From Cassini Sar Images

    NASA Astrophysics Data System (ADS)

    Persi Del Marmo, Paolo; Iess, L.; Picardi, G.; Seu, R.; Bertotti, B.

    2007-10-01

    SAR images acquired by the spacecraft Cassini in overlapping strips have been used to determine the vectorial angular velocity of Titan. The method entails the tracking of surface landmarks at different times (and mean anomalies), spanning a period from 2004 to 2007. Each image is referenced both in an inertial frame and in the IAU, Titan-centric, body-fixed reference frame. This referencing is quite precise (the position of Cassini relative to Titan is known to better than 100 m). The IAU body-fixed frame assumes a spin axis different from the actual one. By correlating the two images of the same surface region, one gets a two-dimensional vector which retains information about the true spin axis. This vector provides the magnitude and direction of the displacement to be applied to a reference point of each image in order to produce maximum correlation. The correlation therefore results in a new Titan-centric, inertial referencing of the images, R(t1) and R(t2). The spin axis s is then obtained by requiring that [R(t2) - R(t1)] • s = 0 for each overlapping image pair. The left-hand sides cannot all be zeroed simultaneously, because Titan does not rotate about the assumed IAU polar axis; the real spin axis must therefore be determined by means of a least squares procedure. The magnitude of the angular velocity is then derived from the angle and time between two observations. The estimated position of Titan's pole is consistent with the occupancy of a Cassini state. If Titan were a rigid body in a Cassini state (with an icy crust anchored to the mantle), one could use theoretical arguments to derive the moment of inertia from the obliquity and the second degree gravity field. However, our results suggest that those theoretical arguments cannot be straightforwardly applied to Titan, whose rotational state is more complex than expected.
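The least squares step can be sketched directly: the unit vector s that best satisfies [R(t2) - R(t1)] • s = 0 for all pairs is the singular vector of the displacement matrix with the smallest singular value. The data below are synthetic stand-ins for the landmark displacements, with an assumed noise level:

```python
import numpy as np

rng = np.random.default_rng(1)
s_true = np.array([0.1, -0.05, 1.0])
s_true /= np.linalg.norm(s_true)

# Synthetic displacement vectors R(t2) - R(t1): for the true spin axis they
# lie in the plane orthogonal to s; small referencing errors are added.
raw = rng.normal(size=(50, 3))
D = raw - np.outer(raw @ s_true, s_true)   # project onto the plane normal to s
D += 0.01 * rng.normal(size=D.shape)       # assumed measurement noise

# Least squares: the unit s minimizing sum_i (d_i . s)^2 is the right
# singular vector of D with the smallest singular value.
_, _, vt = np.linalg.svd(D)
s_est = vt[-1]
if s_est @ s_true < 0:                     # resolve the sign ambiguity
    s_est = -s_est
```

With many overlapping pairs the estimate is well conditioned even for small per-pair displacements.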

  16. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  17. Multiscale likelihood analysis and image reconstruction

    NASA Astrophysics Data System (ADS)

    Willett, Rebecca M.; Nowak, Robert D.

    2003-11-01

    The nonparametric multiscale polynomial and platelet methods presented here are powerful new tools for signal and image denoising and reconstruction. Unlike traditional wavelet-based multiscale methods, these methods are both well suited to processing Poisson or multinomial data and capable of preserving image edges. At the heart of these new methods lie multiscale signal decompositions based on polynomials in one dimension and multiscale image decompositions based on what the authors call platelets in two dimensions. Platelets are localized functions at various positions, scales and orientations that can produce highly accurate, piecewise linear approximations to images consisting of smooth regions separated by smooth boundaries. Polynomial and platelet-based maximum penalized likelihood methods for signal and image analysis are both tractable and computationally efficient. Polynomial methods offer near minimax convergence rates for broad classes of functions including Besov spaces. Upper bounds on the estimation error are derived using an information-theoretic risk bound based on squared Hellinger loss. Simulations establish the practical effectiveness of these methods in applications such as density estimation, medical imaging, and astronomy.

  18. Recent advances in morphological cell image analysis.

    PubMed

    Chen, Shengyong; Zhao, Mingzhu; Wu, Guang; Yao, Chunyan; Zhang, Jianwei

    2012-01-01

    This paper summarizes recent advances in image processing methods for morphological cell analysis. The topic has received much attention with the increasing demands in both bioinformatics and biomedical applications. Among the many factors that affect the diagnosis of a disease, morphological cell analysis and statistics make a great contribution to a doctor's findings. Morphological cell analysis covers cellular shape, cellular regularity, classification, statistics, diagnosis, and so forth. In the last 20 years, about 1000 publications have reported the use of morphological cell analysis in biomedical research. Relevant solutions encompass a rather wide application area, such as cell clump segmentation, morphological characteristic extraction, 3D reconstruction, abnormal cell identification, and statistical analysis. These reports are summarized in this paper to enable easy referral to suitable methods for practical solutions. Representative contributions and future research trends are also addressed. PMID:22272215

  19. Analysis of imaging system performance capabilities

    NASA Astrophysics Data System (ADS)

    Haim, Harel; Marom, Emanuel

    2013-06-01

    Performance analyses of optical imaging systems based on results obtained with classic one-dimensional (1D) resolution targets (such as the USAF resolution chart) differ significantly from those obtained with a newly proposed 2D target [1]. We prove this claim and show how the novel 2D target should be used to correctly characterize optical imaging systems in terms of resolution and contrast. We then apply the consequences of these observations to the optimal design of some two-dimensional barcode structures.

  20. A UML Profile for State Analysis

    NASA Technical Reports Server (NTRS)

    Murray, Alex; Rasmussen, Robert

    2010-01-01

    State Analysis is a systems engineering methodology for the specification and design of control systems, developed at the Jet Propulsion Laboratory. The methodology emphasizes an analysis of the system under control in terms of States and their properties and behaviors and their effects on each other, a clear separation of the control system from the controlled system, cognizance in the control system of the controlled system's State, goal-based control built on constraining the controlled system's States, and disciplined techniques for State discovery and characterization. State Analysis (SA) introduces two key diagram types: State Effects and Goal Network diagrams. The team at JPL developed a tool for performing State Analysis. The tool includes a drawing capability, backed by a database that supports the diagram types and the organization of the elements of the SA models. But the tool does not support the usual activities of software engineering and design - a disadvantage, since systems to which State Analysis can be applied tend to be very software-intensive. This motivated the work described in this paper: the development of a preliminary Unified Modeling Language (UML) profile for State Analysis. Having this profile would enable systems engineers to specify a system using the methods and graphical language of State Analysis, which is easily linked with a larger system model in SysML (Systems Modeling Language), while also giving software engineers engaged in implementing the specified control system immediate access to and use of the SA model, in the same language, UML, used for other software design. That is, a State Analysis profile would serve as a shared modeling bridge between system and software models for the behavior aspects of the system. This paper begins with an overview of State Analysis and its underpinnings, followed by an overview of the mapping of SA constructs to the UML metamodel. It then delves into the details of these mappings and the

  1. Autonomous Image Analysis for Future Mars Missions

    NASA Technical Reports Server (NTRS)

    Gulick, V. C.; Morris, R. L.; Ruzon, M. A.; Bandari, E.; Roush, T. L.

    1999-01-01

    To explore high priority landing sites and to prepare for eventual human exploration, future Mars missions will involve rovers capable of traversing tens of kilometers. However, the current process by which scientists interact with a rover does not scale to such distances. Specifically, numerous command cycles are required to complete even simple tasks, such as pointing the spectrometer at a variety of nearby rocks. In addition, the time required by scientists to interpret image data before new commands can be given and the limited amount of data that can be downlinked during a given command cycle constrain rover mobility and achievement of science goals. Experience with rover tests on Earth supports these concerns. As a result, traverses to science sites identified in orbital images would require numerous science command cycles over a period of many weeks, months or even years, perhaps exceeding rover design life and other constraints. Autonomous onboard science analysis can address these problems in two ways. First, it will allow the rover to preferentially transmit "interesting" images, defined as those likely to have higher science content. Second, the rover will be able to anticipate future commands. For example, a rover might autonomously acquire and return spectra of "interesting" rocks along with a high-resolution image of those rocks, in addition to returning the context images in which they were detected. Such approaches, coupled with appropriate navigational software, help to address both the data volume and command cycle bottlenecks that limit rover mobility and science yield. We are developing fast, autonomous algorithms to enable such intelligent onboard decision making by spacecraft. Autonomous algorithms developed to date have the ability to identify rocks and layers in a scene, locate the horizon, and compress multi-spectral image data. We are currently investigating the possibility of reconstructing a 3D surface from a sequence of images.

  2. Morphological analysis of infrared images for waterjets

    NASA Astrophysics Data System (ADS)

    Gong, Yuxin; Long, Aifang

    2013-03-01

    High-speed waterjets have been widely used in industry and investigated as a model of free shearing turbulence. This paper presents an investigation involving the flow visualization of a high speed water jet; noise reduction of the raw thermogram using a high-pass morphological filter ? and a median filter; image enhancement using a white top-hat filter; and image segmentation using the multiple thresholding method. The image processing results produced by the designed morphological filters, ? - top-hat, proved ideal for further quantitative and in-depth analysis, and they can be used as a new morphological filter bank of potentially general relevance to analogous work.

  3. Lifetimes of electronic excitations in unoccupied surface states and the image potential states on Pd(110)

    SciTech Connect

    Tsirkin, S. S. Eremeev, S. V.; Chulkov, E. V.

    2012-10-15

    The contribution of inelastic electron-electron scattering to the decay rate of excitations in the surface states and first two image potential states at the Y-bar point on the surface is calculated in the GW approximation, and the quasi-momentum dependence of the corresponding contribution for the surface states is analyzed. The mechanisms of electron scattering in these states are studied, and the temperature dependence of the excitation lifetime is analyzed with allowance for the contribution of the electron-phonon interaction calculated earlier.

  4. A simple method for labeling CT images with respiratory states

    SciTech Connect

    Berlinger, Kajetan; Sauer, Otto; Vences, Lucia; Roth, Michael

    2006-09-15

    A method is described for labeling CT images with their respiratory state by means of a needle attached to the patient's chest/abdomen. Through a lever mechanism, the needle follows the abdominal respiratory motion and is visible as a blurred spot in every CT slice. The method was tested with nine patients. A series of volume scans during free breathing was performed. The detected positions of the moving needle in every single slice were compared to each other, thus enabling respiratory state assignment. The tool is an inexpensive alternative to complex respiratory measuring systems for four-dimensional (4D) CT and was well accepted in the clinic due to its simplicity.

  5. Pain related inflammation analysis using infrared images

    NASA Astrophysics Data System (ADS)

    Bhowmik, Mrinal Kanti; Bardhan, Shawli; Das, Kakali; Bhattacharjee, Debotosh; Nath, Satyabrata

    2016-05-01

    Medical Infrared Thermography (MIT) offers a potential non-invasive, non-contact and radiation-free imaging modality for assessment of abnormal, painful inflammation in the human body. The assessment of inflammation mainly depends on the emission of heat from the skin surface. Arthritis is a disease of joint damage that generates inflammation in one or more anatomical joints of the body. Osteoarthritis (OA) is the most frequently appearing form of arthritis, and rheumatoid arthritis (RA) is the most threatening form. In this study, inflammatory analysis has been performed on infrared images of patients suffering from RA and OA. For the analysis, a dataset of 30 bilateral knee thermograms was captured from patients with RA and OA by following a thermogram acquisition standard. The thermograms are pre-processed, and areas of interest are extracted for further processing. The investigation of the spread of inflammation is performed along with statistical analysis of the pre-processed thermograms. The objectives of the study include: i) generation of a novel thermogram acquisition standard for inflammatory pain diseases; ii) analysis of the spread of inflammation related to RA and OA using K-means clustering; iii) first- and second-order statistical analysis of pre-processed thermograms. The conclusion is that, in most cases, RA-related inflammation affects both knees, whereas inflammation related to OA is present in a single knee. Also, due to the spread of inflammation in OA, contralateral asymmetries are detected through the statistical analysis.
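The K-means step mentioned above can be sketched on one-dimensional temperature data. The implementation and the temperature values below are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50):
    """Minimal k-means for 1-D data such as thermogram pixel temperatures."""
    centers = np.linspace(values.min(), values.max(), k)  # spread initial centers
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Synthetic pixel temperatures: a cool background near 30 degC and a
# smaller inflamed region near 36 degC (values are illustrative).
rng = np.random.default_rng(2)
temps = np.concatenate([rng.normal(30.0, 0.5, 500), rng.normal(36.0, 0.5, 100)])
labels, centers = kmeans_1d(temps, k=2)
```

The cluster with the higher center temperature corresponds to the inflamed region; comparing its extent between the two knees gives the contralateral asymmetry measure.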

  6. Image analysis for measuring rod network properties

    NASA Astrophysics Data System (ADS)

    Kim, Dongjae; Choi, Jungkyu; Nam, Jaewook

    2015-12-01

    In recent years, metallic nanowires have been attracting significant attention for next-generation flexible transparent conductive films. The performance of such films depends on the network structure created by the nanowires. Understanding that structure, including the connectivity, coverage, and alignment of nanowires, requires knowledge of the individual nanowires inside microscopic images taken from the film. Although nanowires are flexible to some extent, they are usually depicted as rigid rods in many analytical and computational studies. Herein, we propose a simple and straightforward algorithm based on filtering in the frequency domain for detecting rod-shaped objects inside binary images. The proposed algorithm uses a specially designed filter in the frequency domain to detect image segments, namely the connected components aligned in a certain direction. Those components are post-processed and combined under a given merging rule into single rod objects. In this study, microscopic properties of rod networks relevant to the analysis of nanowire networks, namely the alignment distribution, length distribution, and area fraction, were measured to investigate the opto-electric performance of transparent conductive films. To verify the algorithm and find its optimum parameters, numerical experiments were performed on synthetic images with predefined properties. With proper parameters selected, the algorithm was used to investigate silver nanowire transparent conductive films fabricated by the dip coating method.
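The frequency-domain idea, that a rod's power spectrum concentrates along the frequency axis perpendicular to the rod, can be demonstrated on a tiny synthetic image. This is a sketch of the principle only, not the paper's filter design:

```python
import numpy as np

# Binary test image containing one horizontal rod (illustrative).
img = np.zeros((64, 64))
img[32, 10:54] = 1.0

# A rod long in x and thin in y is narrow in kx and extended in ky, so
# its spectral energy concentrates along the ky axis of the 2-D spectrum.
F = np.fft.fftshift(np.abs(np.fft.fft2(img)) ** 2)
cy, cx = F.shape[0] // 2, F.shape[1] // 2
energy_along_ky = F[:, cx].sum()   # kx = 0 column
energy_along_kx = F[cy, :].sum()   # ky = 0 row
is_horizontal = bool(energy_along_ky > energy_along_kx)
```

A direction-selective frequency mask built on this observation passes only the components aligned with a chosen orientation, which is how candidate rod segments can be isolated.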

  7. Scalable histopathological image analysis via active learning.

    PubMed

    Zhu, Yan; Zhang, Shaoting; Liu, Wei; Metaxas, Dimitris N

    2014-01-01

    Training an effective and scalable system for medical image analysis usually requires a large amount of labeled data, which incurs a tremendous annotation burden for pathologists. Recent progress in active learning can alleviate this issue, leading to a great reduction in labeling cost without sacrificing much predictive accuracy. However, most existing active learning methods disregard the "structured information" that may exist in medical images (e.g., data from individual patients), and make the simplifying assumption that unlabeled data are independently and identically distributed. Neither may be suitable for real-world medical images. In this paper, we propose a novel batch-mode active learning method which explores and leverages such structured information in annotations of medical images to enforce diversity among the selected data, thereby maximizing the information gain. We formulate the active learning problem as an adaptive submodular function maximization problem subject to a partition matroid constraint, and further present an efficient greedy algorithm to achieve a good solution with a theoretically proven bound. We demonstrate the efficacy of our algorithm on thousands of histopathological images of breast microscopic tissues. PMID:25320821
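A greedy pass for submodular maximization under a partition matroid constraint can be sketched as follows. The facility-location objective and the "one image per patient" constraint are illustrative assumptions standing in for the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(6)
feats = rng.normal(size=(60, 4))                 # unlabeled image features (synthetic)
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
sim = feats @ feats.T                            # cosine similarity matrix
patient = np.repeat(np.arange(6), 10)            # partition matroid: one image per patient

def coverage(subset):
    """Facility-location style submodular objective: how well the
    selected images represent the whole unlabeled pool."""
    if not subset:
        return 0.0
    return float(sim[:, subset].max(axis=1).sum())

selected, used = [], set()
for _ in range(4):                               # greedily build a batch of 4
    best, best_gain = None, -np.inf
    for c in range(len(feats)):
        if c in selected or patient[c] in used:  # respect the matroid constraint
            continue
        gain = coverage(selected + [c]) - coverage(selected)
        if gain > best_gain:
            best, best_gain = c, gain
    selected.append(best)
    used.add(int(patient[best]))
```

For monotone submodular objectives, this greedy rule carries the kind of constant-factor approximation guarantee the abstract alludes to.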

  8. Multiresolution simulated annealing for brain image analysis

    NASA Astrophysics Data System (ADS)

    Loncaric, Sven; Majcenic, Zoran

    1999-05-01

    Analysis of biomedical images is an important step in quantification of various diseases such as human spontaneous intracerebral brain hemorrhage (ICH). In particular, the study of outcome in patients having ICH requires measurements of various ICH parameters such as hemorrhage volume and their change over time. A multiresolution probabilistic approach for segmentation of CT head images is presented in this work. This method views segmentation as a pixel labeling problem; here the labels are background, skull, brain tissue, and ICH. The proposed method is based on Maximum A-Posteriori (MAP) estimation of the unknown pixel labels. The MAP method maximizes the a posteriori probability of the segmented image given the observed (input) image. A Markov random field (MRF) model has been used for the posterior distribution. The MAP estimate of the segmented image is determined using the simulated annealing (SA) algorithm, which minimizes the energy function associated with the MRF posterior distribution. A multiresolution SA (MSA) has been developed to speed up the annealing process and is presented in detail in this work. A knowledge-based classification based on brightness, size, shape and relative position toward other regions is performed at the end of the procedure. The regions are identified as background, skull, brain, ICH and calcifications.
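A single-resolution version of MAP labeling by simulated annealing can be sketched with a two-label Potts MRF. The energy weights, noise level, and annealing schedule below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "CT" image: two regions with different means plus Gaussian noise.
truth = np.zeros((20, 20), dtype=int)
truth[:, 10:] = 1
means = np.array([0.0, 2.0])
img = means[truth] + rng.normal(0.0, 0.7, truth.shape)

beta = 1.5                          # Potts smoothness weight (assumed)
labels = (img > 1.0).astype(int)    # initial labeling by thresholding

def local_energy(y, x, value):
    # Data term (Gaussian likelihood) plus Potts prior over 4-neighbors.
    data = (img[y, x] - means[value]) ** 2
    smooth = 0
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < 20 and 0 <= nx < 20:
            smooth += int(labels[ny, nx] != value)
    return data + beta * smooth

T = 2.0
for sweep in range(30):             # geometric cooling schedule
    for y in range(20):
        for x in range(20):
            cur = labels[y, x]
            flip = 1 - cur
            dE = local_energy(y, x, flip) - local_energy(y, x, cur)
            if dE < 0 or rng.random() < np.exp(-dE / T):
                labels[y, x] = flip
    T *= 0.85

accuracy = float((labels == truth).mean())
```

The multiresolution variant in the paper runs this kind of annealing on a coarse grid first and uses the result to initialize finer levels, which is what speeds up convergence.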

  9. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, Brian A.

    1987-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images, and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.
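The basis-expansion idea can be sketched as a small linear-algebra exercise: with a low-dimensional reflectance basis and a known illuminant, a handful of broadband sensor responses suffice to recover the reflectance weights. The polynomial basis, Gaussian sensors, and illuminant below are illustrative assumptions, not the paper's choices:

```python
import numpy as np

wl = np.linspace(400.0, 700.0, 31)             # visible wavelengths, nm

# Smooth low-dimensional basis for surface reflectance (assumed; real
# systems use bases fitted to measured reflectance datasets).
basis = np.vstack([np.ones_like(wl),
                   (wl - 550.0) / 150.0,
                   ((wl - 550.0) / 150.0) ** 2]).T   # shape (31, 3)

w_true = np.array([0.5, 0.2, -0.1])
reflectance = basis @ w_true

illuminant = 1.0 + 0.3 * np.sin(wl / 50.0)     # known ambient light SPD (assumed)
color_signal = illuminant * reflectance        # light reaching the sensors

# Three broadband sensor channels (Gaussian sensitivities, assumed).
centers = np.array([[450.0], [550.0], [650.0]])
sensors = np.exp(-0.5 * ((wl[None, :] - centers) / 40.0) ** 2)  # shape (3, 31)
responses = sensors @ color_signal             # the 3 measured values

# Analysis step: responses = sensors @ diag(illuminant) @ basis @ w
A = sensors @ (illuminant[:, None] * basis)    # 3x3 linear system
w_est = np.linalg.solve(A, responses)
```

Because both reflectance and illuminant live in low-dimensional subspaces, the inverse problem becomes a small, well-posed linear system rather than a per-wavelength deconvolution.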

  10. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images, and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.

  11. Using ultrasound image analysis to evaluate the role of elastography imaging in the diagnosis of carotid atherosclerosis.

    PubMed

    Xenikou, Monika-Filitsa; Golemati, Spyretta; Gastounioti, Aimilia; Tzortzi, Marianna; Moraitis, Nektarios; Charalampopulos, Georgios; Liasis, Nicolaos; Dedes, Athanasios; Besias, Nicolaos; Nikita, Konstantina S

    2015-01-01

    Valid characterization of carotid atherosclerosis (CA) is a crucial public health issue, which would limit the major risk CA poses to both patient safety and state economies. CA is typically diagnosed and assessed using duplex ultrasonography (US). Elastography Imaging (EI) is a promising US technique for quantifying tissue elasticity (ES). In this work, we investigated the association between the ES of carotid atherosclerotic lesions, derived from EI, and texture indices calculated from US image analysis. US and EI images of 23 atherosclerotic plaques (16 patients) were analyzed. Texture features derived from US image analysis (Gray-Scale Median (GSM), plaque area (A) and co-occurrence-matrix-derived features) were calculated. Statistical analysis revealed associations between US texture features and EI-measured indices. This result indicates agreement between the US and EI techniques and supports the promising role of EI in the diagnosis of CA. PMID:26737736

  12. Quantitative image analysis of celiac disease

    PubMed Central

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-01-01

    We outline the quantitative techniques that are currently used for analysis of celiac disease. Image processing techniques can be useful to statistically analyze the pixel data of endoscopic images acquired with standard or videocapsule endoscopy. It is shown how current techniques have evolved to become more useful for gastroenterologists who seek to understand celiac disease and to screen for it in suspected patients. New directions of focus for the development of methodology for diagnosis and treatment of this disease are suggested. It is evident that there are still broad areas with potential to expand the use of quantitative techniques for improved analysis in suspected or known celiac disease patients. PMID:25759524

  13. Imaging hydrated microbial extracellular polymers: Comparative analysis by electron microscopy

    SciTech Connect

    Dohnalkova, A.C.; Marshall, M. J.; Arey, B. W.; Williams, K. H.; Buck, E. C.; Fredrickson, J. K.

    2011-01-01

    Microbe-mineral and -metal interactions represent a major intersection between the biosphere and geosphere but require high-resolution imaging and analytical tools for investigating microscale associations. Electron microscopy has been used extensively for geomicrobial investigations and although used bona fide, the traditional methods of sample preparation do not preserve the native morphology of microbiological components, especially extracellular polymers. Herein, we present a direct comparative analysis of microbial interactions using conventional electron microscopy approaches of imaging at room temperature and a suite of cryogenic electron microscopy methods providing imaging in the close-to-natural hydrated state. In situ, we observed an irreversible transformation of the hydrated bacterial extracellular polymers during the traditional dehydration-based sample preparation that resulted in their collapse into filamentous structures. Dehydration-induced polymer collapse can lead to inaccurate spatial relationships and hence could subsequently affect conclusions regarding nature of interactions between microbial extracellular polymers and their environment.

  14. Imaging Hydrated Microbial Extracellular Polymers: Comparative Analysis by Electron Microscopy

    SciTech Connect

    Dohnalkova, Alice; Marshall, Matthew J.; Arey, Bruce W.; Williams, Kenneth H.; Buck, Edgar C.; Fredrickson, Jim K.

    2011-02-01

    Microbe-mineral and -metal interactions represent a major intersection between the biosphere and geosphere but require high-resolution imaging and analytical tools for investigating microscale associations. Electron microscopy has been used extensively for geomicrobial investigations and although used bona fide, the traditional methods of sample preparation do not preserve the native morphology of microbiological components, especially extracellular polymers. Herein, we present a direct comparative analysis of microbial interactions using conventional electron microscopy approaches of imaging at room temperature and a suite of cryo-electron microscopy methods providing imaging in the close-to-natural hydrated state. In situ, we observed an irreversible transformation of bacterial extracellular polymers during the traditional dehydration-based sample preparation that resulted in the collapse of hydrated gel-like EPS into filamentous structures. Dehydration-induced polymer collapse can lead to inaccurate spatial relationships and hence could subsequently affect conclusions regarding nature of interactions between microbial extracellular polymers and their environment.

  15. Imaging hydrated microbial extracellular polymers: comparative analysis by electron microscopy.

    PubMed

    Dohnalkova, Alice C; Marshall, Matthew J; Arey, Bruce W; Williams, Kenneth H; Buck, Edgar C; Fredrickson, James K

    2011-02-01

    Microbe-mineral and -metal interactions represent a major intersection between the biosphere and geosphere but require high-resolution imaging and analytical tools for investigation of microscale associations. Electron microscopy has been used extensively for geomicrobial investigations, and although used bona fide, the traditional methods of sample preparation do not preserve the native morphology of microbiological components, especially extracellular polymers. Herein, we present a direct comparative analysis of microbial interactions by conventional electron microscopy approaches with imaging at room temperature and a suite of cryogenic electron microscopy methods providing imaging in the close-to-natural hydrated state. In situ, we observed an irreversible transformation of the hydrated bacterial extracellular polymers during the traditional dehydration-based sample preparation that resulted in their collapse into filamentous structures. Dehydration-induced polymer collapse can lead to inaccurate spatial relationships and hence could subsequently affect conclusions regarding the nature of interactions between microbial extracellular polymers and their environment. PMID:21169451

  16. Positron emission tomography: physics, instrumentation, and image analysis.

    PubMed

    Porenta, G

    1994-01-01

    Positron emission tomography (PET) is a noninvasive diagnostic technique that permits reconstruction of cross-sectional images of the human body which depict the biodistribution of PET tracer substances. A large variety of physiological PET tracers, mostly based on isotopes of carbon, nitrogen, oxygen, and fluorine, is available and allows the in vivo investigation of organ perfusion, metabolic pathways and biomolecular processes in normal and diseased states. PET cameras utilize the physical characteristics of positron decay to derive quantitative measurements of tracer concentrations, a capability that has so far been elusive for conventional SPECT (single photon emission computed tomography) imaging techniques. Due to the short half-lives of most PET isotopes, an on-site cyclotron and a radiochemistry unit are necessary to provide an adequate supply of PET tracers. While operating a PET center in the past was a complex undertaking restricted to a few academic centers with ample resources, PET technology has rapidly advanced in recent years and has entered the commercial nuclear medicine market. To date, the availability of compact cyclotrons with remote computer control, automated synthesis units for PET radiochemistry, high-performance PET cameras, and user-friendly analysis workstations permits installation of a clinical PET center within most nuclear medicine facilities. This review provides simple descriptions of important aspects of physics, instrumentation, and image analysis in PET imaging which should be understood by medical personnel involved in the clinical operation of a PET imaging center. PMID:7941595

  17. Towards Cognitive Coherence In Physics Learning: Image-ability Of Undergraduate Solid State Physics Course

    NASA Astrophysics Data System (ADS)

    Sharma, S.; Ahluwalia, P. K.

    2010-07-01

    Based on the famous work of K. Lynch [7] on the image-ability of a cityscape, a "city of physics" analogy has recently been proposed by A. E. Tabor et al. [8] to enhance the cognitive coherence of physics as a subject. The idea of both Lynch and Tabor et al. is extended in this paper to the image-ability of an undergraduate Solid State Physics course, to bring forth cognitive coherence of the subject in a global manner. An image-ability map of the course is presented in both pictorial and tabular format, with sections of the syllabus recognized as districts and sub-districts. Further, in each district and sub-district, key concepts are identified as landmarks, variables involved as nodes, key physical equations as paths, and limits on variables as edges or boundaries, through peer discussion among a group of teachers who have been teaching this course for the last several years. This exercise has helped not only in the mental mapping of the subject but also in viewing hitherto isolated and advanced topics of the syllabus as distinctive recreational spots in the cityscape of undergraduate Solid State Physics.

  18. Imaging Brain Dynamics Using Independent Component Analysis

    PubMed Central

    Jung, Tzyy-Ping; Makeig, Scott; McKeown, Martin J.; Bell, Anthony J.; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    The analysis of electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings is important both for basic brain research and for medical diagnosis and treatment. Independent component analysis (ICA) is an effective method for removing artifacts and separating sources of the brain signals from these recordings. A similar approach is proving useful for analyzing functional magnetic resonance brain imaging (fMRI) data. In this paper, we outline the assumptions underlying ICA and demonstrate its application to a variety of electrical and hemodynamic recordings from the human brain. PMID:20824156
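The blind source separation described in this abstract can be illustrated with a short sketch (not from the paper): the standard FastICA fixed-point iteration with a tanh contrast function, applied to a synthetic two-channel mixture. The signal shapes, mixing matrix, and iteration limits are all illustrative assumptions.

```python
import numpy as np

# Synthetic sources: a square wave and a sinusoid (illustrative, not EEG data)
rng = np.random.default_rng(1)
n = 5000
t = np.linspace(0, 8, n)
S = np.vstack([np.sign(np.sin(3 * t)),   # source 1: square wave
               np.sin(5 * t)])           # source 2: sinusoid
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])               # hypothetical mixing matrix
X = A @ S                                # observed mixtures ("channels")

# Center and whiten the mixtures
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# FastICA deflation: estimate one unmixing row at a time (tanh contrast)
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(300):
        g = np.tanh(w @ Z)
        w_new = (Z * g).mean(axis=1) - (1 - g ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # decorrelate from earlier rows
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1) < 1e-10
        w = w_new
        if converged:
            break
    W[i] = w

recovered = W @ Z   # each row matches one source, up to sign and scale
```

Each recovered row correlates (up to sign) with exactly one of the original sources; this is the property that lets artifact components be identified and subtracted in EEG practice.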

  19. Theoretical analysis of multispectral image segmentation criteria.

    PubMed

    Kerfoot, I B; Bresler, Y

    1999-01-01

    Markov random field (MRF) image segmentation algorithms have been extensively studied, and have gained wide acceptance. However, almost all of the work on them has been experimental. This provides a good understanding of the performance of existing algorithms, but not a unified explanation of the significance of each component. To address this issue, we present a theoretical analysis of several MRF image segmentation criteria. Standard methods of signal detection and estimation are used in the theoretical analysis, which quantitatively predicts the performance at realistic noise levels. The analysis is decoupled into the problems of false alarm rate, parameter selection (Neyman-Pearson and receiver operating characteristics), detection threshold, expected a priori boundary roughness, and supervision. Only the performance inherent to a criterion, with perfect global optimization, is considered. The analysis indicates that boundary and region penalties are very useful, while distinct-mean penalties are of questionable merit. Region penalties are far more important for multispectral segmentation than for greyscale. This observation also holds for Gauss-Markov random fields, and for many separable within-class PDFs. To validate the analysis, we present optimization algorithms for several criteria. Theoretical and experimental results agree fairly well. PMID:18267494

  20. Photoacoustic imaging of the excited state lifetime of fluorophores

    NASA Astrophysics Data System (ADS)

    Märk, Julia; Schmitt, Franz-Josef; Laufer, Jan

    2016-05-01

    Photoacoustic (PA) imaging using pump-probe excitation has been shown to allow the detection and visualization of fluorescent contrast agents. The technique relies upon inducing stimulated emission using pump and probe pulses at excitation wavelengths that correspond to the absorption and fluorescence spectra. By changing the time delay between the pulses, the excited state lifetime of the fluorophore is modulated to vary the amount of thermalized energy, and hence PA signal amplitude, to provide fluorophore-specific PA contrast. In this study, this approach was extended to the detection of differences in the excited state lifetime of fluorophores. PA waveforms were measured in solutions of a near-infrared fluorophore using simultaneous and time-delayed pump-probe excitation. The lifetime of the fluorophore solutions was varied by using different solvents and quencher concentrations. By calculating difference signals and by plotting their amplitude as a function of pump-probe time delay, a correlation with the excited state lifetime of the fluorophore was observed. The results agreed with the output of a forward model of the PA signal generation in fluorophores. The application of this method to tomographic PA imaging of differences in the excited state lifetime was demonstrated in tissue phantom experiments.

  1. Integrated wavelets for medical image analysis

    NASA Astrophysics Data System (ADS)

    Heinlein, Peter; Schneider, Wilfried

    2003-11-01

    Integrated wavelets are a new method for discretizing the continuous wavelet transform (CWT). Independent of the choice of discrete scale and orientation parameters they yield tight families of convolution operators. Thus these families can easily be adapted to specific problems. After presenting the fundamental ideas, we focus primarily on the construction of directional integrated wavelets and their application to medical images. We state an exact algorithm for implementing this transform and present applications from the field of digital mammography. The first application covers the enhancement of microcalcifications in digital mammograms. Further, we exploit the directional information provided by integrated wavelets for better separation of microcalcifications from similar structures.

  2. High-resolution slice imaging of quantum state-to-state photodissociation of methyl bromide

    SciTech Connect

    Lipciuc, M. Laura; Janssen, Maurice H. M.

    2007-12-14

    The photodissociation of rotationally state-selected methyl bromide is studied in the wavelength region between 213 and 235 nm using slice imaging. A hexapole state selector is used to focus a single (JK = 11) rotational quantum state of the parent molecule, and a high-speed slice imaging detector measures directly the three-dimensional recoil distribution of the methyl fragment. Experiments were performed on both normal (CH₃Br) and deuterated (CD₃Br) parent molecules. The velocity distribution of the methyl fragment shows a rich structure, especially for the CD₃ photofragment, assigned to the formation of vibrationally excited methyl fragments in the ν₁ and ν₄ vibrational modes. The CH₃ fragment formed with ground state Br(²P₃/₂) is observed to be rotationally more excited, by some 230-340 cm⁻¹, compared to the methyl fragment formed with spin-orbit excited Br(²P₁/₂). Branching ratios and angular distributions are obtained for various methyl product states, and they are observed to vary with photodissociation energy. The nonadiabatic transition probability for the ³Q₀₊ → ¹Q₁ transition is calculated from the images, and differences between the isotopes are observed. Comparison with previous non-state-selected experiments indicates an enhanced nonadiabatic transition probability for state-selected K = 1 methyl bromide parent molecules. From the state-to-state photodissociation experiments the dissociation energy for both isotopes was determined, D₀(CH₃Br) = 23 400 ± 133 cm⁻¹ and D₀(CD₃Br) = 23 827 ± 94 cm⁻¹.

  3. Image analysis of Renaissance copperplate prints

    NASA Astrophysics Data System (ADS)

    Hedges, S. Blair

    2008-02-01

    From the fifteenth to the nineteenth centuries, prints were a common form of visual communication, analogous to photographs. Copperplate prints have many finely engraved black lines which were used to create the illusion of continuous tone. Line densities generally are 100-2000 lines per square centimeter and a print can contain more than a million total engraved lines 20-300 micrometers in width. Because hundreds to thousands of prints were made from a single copperplate over decades, variation among prints can have historical value. The largest variation is plate-related, which is the thinning of lines over successive editions as a result of plate polishing to remove time-accumulated corrosion. Thinning can be quantified with image analysis and used to date undated prints and books containing prints. Print-related variation, such as over-inking of the print, is a smaller but significant source. Image-related variation can introduce bias if images were differentially illuminated or not in focus, but improved imaging technology can limit this variation. The Print Index, the percentage of an area composed of lines, is proposed as a primary measure of variation. Statistical methods also are proposed for comparing and identifying prints in the context of a print database.
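The proposed Print Index (the percentage of an area composed of lines) can be computed directly from a thresholded grayscale scan. A minimal sketch, not from the paper; the threshold separating ink from paper is an assumption and would need calibration per scan:

```python
import numpy as np

def print_index(gray, threshold=128):
    """Percentage of the region covered by engraved (dark) lines.

    `gray` is an 8-bit grayscale scan; pixels darker than `threshold`
    are counted as line. The default threshold is a hypothetical value.
    """
    return 100.0 * (np.asarray(gray) < threshold).mean()
```

Comparing the Print Index of two impressions from the same copperplate would then quantify line thinning between editions.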

  4. Multispectral laser imaging for advanced food analysis

    NASA Astrophysics Data System (ADS)

    Senni, L.; Burrascano, P.; Ricci, M.

    2016-07-01

    A hardware-software apparatus for food inspection capable of realizing multispectral NIR laser imaging at four different wavelengths is herein discussed. The system was designed to operate in a through-transmission configuration to detect the presence of unwanted foreign bodies inside samples, whether packed or unpacked. A modified Lock-In technique was employed to counterbalance the significant signal intensity attenuation due to transmission across the sample and to extract the multispectral information more efficiently. The NIR laser wavelengths used to acquire the multispectral images can be varied to deal with different materials and to focus on specific aspects. In the present work the wavelengths were selected after a preliminary analysis to enhance the image contrast between foreign bodies and food in the sample, thus identifying the location and nature of the defects. Experimental results obtained from several specimens, with and without packaging, are presented and the multispectral image processing as well as the achievable spatial resolution of the system are discussed.

  5. Quantifying biodiversity using digital cameras and automated image analysis.

    NASA Astrophysics Data System (ADS)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and information that is being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600m. Rainfall is high, and in most areas the soil consists of deep peat (1m to 3m), populated by a mix of heather, mosses and sedges. The cameras have been continuously in operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into statistical composite data it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions.

  6. Nursing image: an evolutionary concept analysis.

    PubMed

    Rezaei-Adaryani, Morteza; Salsali, Mahvash; Mohammadi, Eesa

    2012-12-01

    A long-term challenge to the nursing profession is the concept of image. In this study, we used the Rodgers' evolutionary concept analysis approach to analyze the concept of nursing image (NI). The aim of this concept analysis was to clarify the attributes, antecedents, consequences, and implications associated with the concept. We performed an integrative internet-based literature review to retrieve English literature published from 1980-2011. Findings showed that NI is a multidimensional, all-inclusive, paradoxical, dynamic, and complex concept. The media, invisibility, clothing style, nurses' behaviors, gender issues, and professional organizations are the most important antecedents of the concept. We found that NI is pivotal in staff recruitment and nursing shortage, resource allocation to nursing, nurses' job performance, workload, burnout and job dissatisfaction, violence against nurses, public trust, and salaries available to nurses. An in-depth understanding of the NI concept would assist nurses to eliminate negative stereotypes and build a more professional image for the nurse and the profession. PMID:23343236

  7. Shannon information and ROC analysis in imaging.

    PubMed

    Clarkson, Eric; Cushing, Johnathan B

    2015-07-01

    Shannon information (SI) and the ideal-observer receiver operating characteristic (ROC) curve are two different methods for analyzing the performance of an imaging system for a binary classification task, such as the detection of a variable signal embedded within a random background. In this work we describe a new ROC curve, the Shannon information receiver operator curve (SIROC), that is derived from the SI expression for a binary classification task. We then show that the ideal-observer ROC curve and the SIROC have many properties in common, and are equivalent descriptions of the optimal performance of an observer on the task. This equivalence is described mathematically by an integral transform that maps the ideal-observer ROC curve onto the SIROC. This then leads to an integral transform relating the minimum probability of error, as a function of the odds against a signal, to the conditional entropy, as a function of the same variable. This last relation then gives us the complete mathematical equivalence between ideal-observer ROC analysis and SI analysis of the classification task for a given imaging system. We also find that there is a close relationship between the area under the ideal-observer ROC curve, which is often used as a figure of merit for imaging systems and the area under the SIROC. Finally, we show that the relationships between the two curves result in new inequalities relating SI to ROC quantities for the ideal observer. PMID:26367158
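The paper treats the ideal-observer ROC curve analytically; as a concrete companion (unrelated to the paper's SIROC derivation), the empirical area under any ROC curve equals the probability that a randomly chosen signal-present score exceeds a randomly chosen signal-absent score. A minimal sketch of that rank (Mann-Whitney) identity:

```python
def roc_auc(scores, labels):
    """Empirical area under the ROC curve via the rank (Mann-Whitney) identity.

    labels: 1 = signal present, 0 = signal absent. Tied scores count as half.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 corresponds to perfectly separable classes; 0.5 corresponds to a statistic with no discriminating power.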

  8. Simple Low Level Features for Image Analysis

    NASA Astrophysics Data System (ADS)

    Falcoz, Paolo

    As human beings, we perceive the world around us mainly through our eyes, and give what we see the status of “reality”; as such we historically tried to create ways of recording this reality so we could augment or extend our memory. From early attempts in photography like the image produced in 1826 by the French inventor Nicéphore Niépce (Figure 2.1) to the latest high definition camcorders, the number of recorded pieces of reality has increased exponentially, posing the problem of managing all that information. Most of the raw video material produced today has lost its memory augmentation function, as it will hardly ever be viewed by any human; pervasive CCTVs are an example. They generate an enormous amount of data each day, but there is not enough “human processing power” to view them. Therefore the need for effective automatic image analysis tools is great, and a lot of effort has been put into it, both from academia and industry. In this chapter, a review of some of the most important image analysis tools is presented.

  9. Multiport solid-state imager characterization at variable pixel rates

    SciTech Connect

    Yates, G.J.; Albright, K.A.; Turko, B.T.

    1993-08-01

    The imaging performance of an 8-port Full Frame Transfer Charge Coupled Device (FFT CCD) as a function of several parameters including pixel clock rate is presented. The device, model CCD-13, manufactured by English Electric Valve (EEV), is a 512 × 512 pixel array designed with four individual programmable bidirectional serial registers and eight output amplifiers permitting simultaneous readout of eight segments (128 horizontal × 256 vertical pixels) of the array. The imager was evaluated in Los Alamos National Laboratory's High-Speed Solid-State Imager Test Station at true pixel rates as high as 50 MHz, for effective imager pixel rates approaching 400 MHz from multiporting. Key response characteristics measured include absolute responsivity, Charge-Transfer-Efficiency (CTE), dynamic range, resolution, signal-to-noise ratio, and electronic and optical crosstalk among the eight video channels. Preliminary test results and data obtained from the CCD-13 will be presented and the versatility/capabilities of the test station will be reviewed.

  10. Wavelet-based image analysis system for soil texture analysis

    NASA Astrophysics Data System (ADS)

    Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John

    2003-05-01

    Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer), or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet frames-based features representing texture contents of soil images are extracted and categorized by applying a maximum likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil textures. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.
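The paper's wavelet-frame features are not reproduced here, but its maximum-likelihood decision rule can be sketched over generic feature vectors. The following assumes per-class Gaussian statistics with diagonal covariance; the class names and numbers are hypothetical, not the paper's data:

```python
import math

def ml_classify(features, class_stats):
    """Assign `features` to the class maximizing a diagonal-Gaussian log-likelihood.

    `class_stats` maps class name -> (means, variances), one entry per feature.
    """
    best_cls, best_ll = None, float("-inf")
    for cls, (mu, var) in class_stats.items():
        ll = sum(-0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
                 for x, m, v in zip(features, mu, var))
        if ll > best_ll:
            best_cls, best_ll = cls, ll
    return best_cls

# Hypothetical per-class statistics for two texture features
STATS = {
    "sand": ([0.8, 0.1], [0.01, 0.02]),
    "silt": ([0.5, 0.4], [0.01, 0.02]),
    "clay": ([0.2, 0.7], [0.01, 0.02]),
}
```

In the paper's pipeline, the feature vector would come from wavelet-frame texture measures of the soil-surface image, and the class statistics from labeled training samples.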

  11. Rydberg and valence state excitation dynamics: a velocity map imaging study involving the E-V state interaction in HBr.

    PubMed

    Zaouris, Dimitris; Kartakoullis, Andreas; Glodic, Pavle; Samartzis, Peter C; Rafn Hróðmarsson, Helgi; Kvaran, Ágúst

    2015-04-28

    Photoexcitation dynamics of the E(¹Σ⁺) (v′ = 0) Rydberg state and the V(¹Σ⁺) (v′) ion-pair vibrational states of HBr are investigated by velocity map imaging (VMI). H⁺ photoions, produced through a number of vibrational and rotational levels of the two states, were imaged, and kinetic energy release (KER) and angular distributions were extracted from the data. In agreement with previous work, we found the photodissociation channels forming H*(n = 2) + Br(²P₃/₂)/Br*(²P₁/₂) to be dominant. Autoionization pathways leading to H⁺ + Br(²P₃/₂)/Br*(²P₁/₂) via either HBr⁺(²Π₃/₂) or HBr⁺*(²Π₁/₂) formation were also present. The analysis of KER and angular distributions and comparison with rotationally and mass resolved resonance enhanced multiphoton ionization (REMPI) spectra revealed the excitation transition mechanisms and characteristics of the states involved, as well as the involvement of the E-V state interactions and their v′ and J′ dependence. PMID:25801122

  12. PAMS photo image retrieval prototype alternatives analysis

    SciTech Connect

    Conner, M.L.

    1996-04-30

    Photography and Audiovisual Services uses a system called the Photography and Audiovisual Management System (PAMS) to perform order entry and billing services. The PAMS system utilizes Revelation Technologies database management software, AREV. Work is currently in progress to link the PAMS AREV system to a Microsoft SQL Server database engine to provide photograph indexing and query capabilities. The link between AREV and SQL Server will use a technique called "bonding." This photograph imaging subsystem will interface to the PAMS system and handle the image capture and retrieval portions of the project. The intent of this alternatives analysis is to examine the software and hardware alternatives available to meet the requirements for this project, and to identify a cost-effective solution.

  13. Radar image with color as height, Bahia State, Brazil

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This radar image is the first to show the full 240-kilometer-wide (150-mile) swath collected by the Shuttle Radar Topography Mission (SRTM). The area shown is in the state of Bahia in Brazil. The semi-circular mountains along the left side of the image are the Serra Da Jacobin, which rise to 1100 meters (3600 feet) above sea level. The total relief shown is approximately 800 meters (2600 feet). The top part of the image is the Sertão, a semi-arid region that is subject to severe droughts during El Niño events. A small portion of the São Francisco River, the longest river (1609 kilometers or 1000 miles) entirely within Brazil, cuts across the upper right corner of the image. This river is a major source of water for irrigation and hydroelectric power. Mapping such regions will allow scientists to better understand the relationships between flooding cycles, drought and human influences on ecosystems.

    This image combines two types of data from the Shuttle Radar Topography Mission. The image brightness corresponds to the strength of the radar signal reflected from the ground, while colors show the elevation as measured by SRTM. The three dark vertical stripes show the boundaries where four segments of the swath are merged to form the full scanned swath. These will be removed in later processing. Colors range from green at the lowest elevations to reddish at the highest elevations.

    The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA) and the National Imagery and Mapping Agency (NIMA).
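The caption's rendering scheme (image brightness from radar backscatter, color from SRTM elevation) can be sketched as an HSV composite. This is an illustrative reconstruction, not JPL's processing code; the green-to-red hue ramp and the assumption that radar values are pre-scaled to [0, 1] are mine:

```python
import colorsys

def composite(radar, elev):
    """Map elevation to hue (green -> red) and radar backscatter to brightness.

    `radar` and `elev` are equal-shaped 2-D lists; radar values are assumed
    already normalized to [0, 1]. Returns per-pixel (r, g, b) tuples.
    """
    flat = [e for row in elev for e in row]
    lo, hi = min(flat), max(flat)
    out = []
    for rrow, erow in zip(radar, elev):
        row = []
        for v, e in zip(rrow, erow):
            frac = (e - lo) / (hi - lo) if hi > lo else 0.0
            hue = (1.0 - frac) / 3.0      # 1/3 = green (low), 0 = red (high)
            row.append(colorsys.hsv_to_rgb(hue, 1.0, v))
        out.append(row)
    return out
```

The lowest elevation in the scene renders pure green and the highest pure red, with radar reflectivity modulating pixel brightness, mirroring the caption's description.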

  14. Polarization state imaging in long-wave infrared for object detection

    NASA Astrophysics Data System (ADS)

    Bieszczad, Grzegorz; Gogler, Sławomir; Krupiński, Michał

    2013-10-01

    The article discusses the use of modern imaging polarimetry from the visible range of the spectrum to the far infrared, and analyzes the potential of imaging polarimetry in the far infrared for remote sensing applications. A measurement stand for examining the polarization state in the LWIR is described, consisting of an infrared detector array with electronic circuitry, a polarizer plate, and software implementing the detection method. The article also describes the first measurement results from this test bed. Based on these measurements it was possible to calculate some of the Stokes parameters of the radiation from the scene. The analysis of the measurement results shows that measurement of the polarization state can be used to detect certain types of objects. Measuring the degree of polarization may allow the detection of objects in an infrared image that are not detectable by other techniques or in other spectral ranges. In order to at least partially characterize the polarization state of the scene, the radiation intensity must be measured in several configurations of the polarizing filter. Due to the additional filtering elements in the optical path of the camera, the NETD of the camera with the polarizer in the proposed measurement stand was about 240 mK. To visualize the polarization characteristics of objects in the infrared image, the polarization measurements are superimposed on the thermal image as color and saturation, while brightness corresponds to the intensity of the infrared radiation.
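The Stokes-parameter computation mentioned above can be sketched for the common case of four ideal linear-polarizer orientations (0°, 45°, 90°, 135°). This specific filter configuration is an assumption; the abstract does not state the paper's exact measurement sequence:

```python
import math

def stokes_linear(i0, i45, i90, i135):
    """Linear Stokes parameters from intensities behind an ideal polarizer
    at 0, 45, 90, and 135 degrees (an assumed measurement scheme).
    """
    s0 = i0 + i90                                  # total intensity
    s1 = i0 - i90                                  # horizontal vs vertical
    s2 = i45 - i135                                # +45 vs -45 degrees
    dolp = math.hypot(s1, s2) / s0                 # degree of linear polarization
    aop = 0.5 * math.degrees(math.atan2(s2, s1))   # angle of polarization
    return s0, s1, s2, dolp, aop
```

Applied per pixel, the degree of linear polarization (DoLP) is the quantity that can reveal smooth man-made surfaces against natural backgrounds that look identical in plain thermal intensity.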

  15. Developing behavior analysis at the state level

    PubMed Central

    Johnston, J. M.; Shook, Gerald L.

    1987-01-01

    Over the past fifteen years, behavior analysts in Florida have worked together to develop the discipline with a multifaceted system of contingencies. Basing their effort in the area of retardation and with the cooperation of the state's Developmental Services Program Office, they have gradually developed a regulatory manual of programming policy and procedures, a hierarchical system of responsibilities for programming approval and monitoring, a state-sponsored certification program, a professional association, and an active university community. These components are described and discussed in terms of suggested principles for developing the field of behavior analysis within a state. PMID:22477979

  16. Uses of software in digital image analysis: a forensic report

    NASA Astrophysics Data System (ADS)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. It has wide applications in forensic science, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. Through this paper the authors have tried to explain these tasks, which are divided into three categories: image compression, image enhancement and restoration, and measurement extraction, illustrated with examples such as signature comparison, counterfeit currency comparison and footwear sole impressions using the software Canvas and Corel Draw.

  17. Imaging electronic trap states in perovskite thin films with combined fluorescence and femtosecond transient absorption microscopy

    DOE PAGESBeta

    Xiao, Kai; Ma, Ying -Zhong; Simpson, Mary Jane; Doughty, Benjamin; Yang, Bin

    2016-04-22

    Charge carrier trapping degrades the performance of organometallic halide perovskite solar cells. To characterize the locations of electronic trap states in a heterogeneous photoactive layer, a spatially resolved approach is essential. Here, we report a comparative study on methylammonium lead tri-iodide perovskite thin films subject to different thermal annealing times using a combined photoluminescence (PL) and femtosecond transient absorption microscopy (TAM) approach to spatially map trap states. This approach coregisters the initially populated electronic excited states with the regions that recombine radiatively. Although the TAM images are relatively homogeneous for both samples, the corresponding PL images are highly structured. The remarkable variation in the PL intensities as compared to transient absorption signal amplitude suggests spatially dependent PL quantum efficiency, indicative of trapping events. Furthermore, detailed analysis enables identification of two trapping regimes: a densely packed trapping region and a sparse trapping area that appear as unique spatial features in scaled PL maps.

  18. Imaging Electronic Trap States in Perovskite Thin Films with Combined Fluorescence and Femtosecond Transient Absorption Microscopy.

    PubMed

    Simpson, Mary Jane; Doughty, Benjamin; Yang, Bin; Xiao, Kai; Ma, Ying-Zhong

    2016-05-01

    Charge carrier trapping degrades the performance of organometallic halide perovskite solar cells. To characterize the locations of electronic trap states in a heterogeneous photoactive layer, a spatially resolved approach is essential. Here, we report a comparative study on methylammonium lead tri-iodide perovskite thin films subject to different thermal annealing times using a combined photoluminescence (PL) and femtosecond transient absorption microscopy (TAM) approach to spatially map trap states. This approach coregisters the initially populated electronic excited states with the regions that recombine radiatively. Although the TAM images are relatively homogeneous for both samples, the corresponding PL images are highly structured. The remarkable variation in the PL intensities as compared to transient absorption signal amplitude suggests spatially dependent PL quantum efficiency, indicative of trapping events. Detailed analysis enables identification of two trapping regimes: a densely packed trapping region and a sparse trapping area that appear as unique spatial features in scaled PL maps. PMID:27103096

  19. Analysis of katabatic flow using infrared imaging

    NASA Astrophysics Data System (ADS)

    Grudzielanek, M.; Cermak, J.

    2013-12-01

    We present a novel high-resolution IR method which is developed, tested and used for the analysis of katabatic flow. Modern thermal imaging systems allow for the recording of infrared picture sequences and thus the monitoring and analysis of dynamic processes. In order to identify, visualize and analyze dynamic air flow using infrared imaging, a highly reactive 'projection' surface is needed along the air flow. Here, a design for these types of analysis is proposed and evaluated. Air flow situations with strong air temperature gradients and fluctuations, such as katabatic flow, are particularly suitable for this new method. The method is applied here to analyze nocturnal cold air flows on gentle slopes. In combination with traditional methods the vertical and temporal dynamics of cold air flow are analyzed. Several assumptions on cold air flow dynamics can be confirmed explicitly for the first time. By observing the cold air flow in terms of frequency, size and period of the cold air fluctuations, drops are identified and organized in a newly derived classification system of cold air flow phases. In addition, new flow characteristics are detected, like sharp cold air caps and turbulence inside the drops. Vertical temperature gradients inside cold air drops and their temporal evolution are presented in high resolution Hovmöller-type diagrams and sequenced time lapse infrared videos.

  20. State-space formulations for flutter analysis

    NASA Technical Reports Server (NTRS)

    Weiss, S. J.; Tseng, K.; Morino, L.

    1976-01-01

    Various methods are presented and assessed for approximating the aerodynamic forces so that the State Space formulation and off-the-imaginary axis analysis are retained. The advantages of retaining these features are considerable, not only in simplifying the flutter analysis, but especially for more advanced applications such as optimal design of active control in which the flutter is merely a constraint to the optimization problem.

  1. Teleseismic receiver function imaging of the Pacific Northwest, United States

    NASA Astrophysics Data System (ADS)

    Eager, Kevin Charles

    The origins of widespread Cenozoic tectonomagmatism in the Pacific Northwest, United States likely involve complex dynamics including subduction of the Juan de Fuca plate and mantle upwelling processes, all of which are reflected in the crust and upper mantle. To provide an improved understanding of these processes, I analyze P-to-S converted phases using the receiver function method to image topographic variations on regional seismic discontinuities in the upper mantle, which provides constraints on mantle thermal structure, and the crust-mantle interface, which provides constraints on crustal thickness and composition. My results confirm complexity in the Juan de Fuca slab structure as found by regional tomographic studies, including limited evidence of the slab penetrating the transition zone between the 410 and 660 km discontinuities. Evidence is inconclusive for a simple mantle plume beneath the central Oregon High Lava Plains, but indicates a regional increase in mantle temperatures stretching to the east. This result implies the inflow of warm material, either from around the southern edge of the Juan de Fuca plate as it descends into the mantle, or from a regional upwelling to the east related to the Yellowstone hotspot. Results for regional crustal structure reveal thin (~31 km) crust beneath the High Lava Plains relative to surrounding regions that exhibit thicker (35+ km) crust. The thick (≥ 40 km) crust of the Owyhee Plateau has a sharp western boundary and normal Poisson's ratio, a measure of crustal composition. I find a slightly thickened crust and low Poisson's ratio between Steens Mountain and the Owyhee Plateau, consistent with residuum from source magma of the Steens flood basalts. Central and southern Oregon exhibit very high Poisson's ratios and low velocity zones within the crust, suggesting a degree of intracrustal partial melt not seen along the center of the age-progressive High Lava Plains magmatic track, perhaps due to crustal melt

  2. Machine learning for medical images analysis.

    PubMed

    Criminisi, A

    2016-10-01

    This article discusses the application of machine learning for the analysis of medical images. Specifically: (i) We show how a special type of learning model can be thought of as an automatically optimized, hierarchically-structured, rule-based algorithm, and (ii) We discuss how the issue of collecting large labelled datasets applies to both conventional algorithms as well as machine learning techniques. The size of the training database is a function of model complexity rather than a characteristic of machine learning methods. PMID:27374127

  3. Research on automatic human chromosome image analysis

    NASA Astrophysics Data System (ADS)

    Ming, Delie; Tian, Jinwen; Liu, Jian

    2007-11-01

    Human chromosome karyotyping is one of the essential tasks in cytogenetics, especially in genetic syndrome diagnoses. In this thesis, an automatic procedure is introduced for human chromosome image analysis. According to the different states of touching and overlapping chromosomes, several segmentation methods are proposed to achieve the best results. The medial axis is extracted by the middle point algorithm. Chromosome bands are enhanced by an algorithm based on multiscale B-spline wavelets, extracted from the average gray profile, gradient profile and shape profile, and characterized by WDD (Weighted Density Distribution) descriptors. A multilayer classifier is used in classification. Experimental results demonstrate that the algorithms perform well.

  4. Time-Domain Fluorescence Lifetime Imaging Techniques Suitable for Solid-State Imaging Sensor Arrays

    PubMed Central

    Li, David Day-Uei; Ameer-Beg, Simon; Arlt, Jochen; Tyndall, David; Walker, Richard; Matthews, Daniel R.; Visitkul, Viput; Richardson, Justin; Henderson, Robert K.

    2012-01-01

    We have successfully demonstrated video-rate CMOS single-photon avalanche diode (SPAD)-based cameras for fluorescence lifetime imaging microscopy (FLIM) by applying innovative FLIM algorithms. We also review and compare several time-domain techniques and solid-state FLIM systems, and adapt the proposed algorithms for massive CMOS SPAD-based arrays and hardware implementations. The theoretical error equations are derived and their performances are demonstrated on the data obtained from 0.13 μm CMOS SPAD arrays and the multiple-decay data obtained from scanning PMT systems. In vivo two photon fluorescence lifetime imaging data of FITC-albumin labeled vasculature of a P22 rat carcinosarcoma (BD9 rat window chamber) are used to test how different algorithms perform on bi-decay data. The proposed techniques are capable of producing lifetime images with enough contrast. PMID:22778606
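As a concrete illustration of the time-domain algorithm family reviewed in this work, below is a minimal sketch of rapid lifetime determination (RLD), which estimates a mono-exponential lifetime from the counts in two equal-width time gates. The function name and synthetic decay are illustrative, not taken from the paper.

```python
import numpy as np

def rld_lifetime(decay, t_gate, dt):
    """Rapid Lifetime Determination: tau = t_gate / ln(D0/D1), where D0 and
    D1 are the summed counts in two consecutive gates of width t_gate."""
    g = round(t_gate / dt)          # samples per gate
    d0 = decay[:g].sum()            # counts in first gate
    d1 = decay[g:2 * g].sum()       # counts in second gate
    return t_gate / np.log(d0 / d1)

# Synthetic mono-exponential decay with tau = 2.0 ns, sampled every 0.05 ns
dt, tau = 0.05, 2.0
t = np.arange(0.0, 10.0, dt)
decay = np.exp(-t / tau)
tau_est = rld_lifetime(decay, t_gate=2.5, dt=dt)
print(round(tau_est, 3))  # → 2.0 (exact for an ideal mono-exponential)
```

For an ideal single-exponential the gate-sum ratio is exactly exp(t_gate/tau), which is why RLD-style estimators are cheap enough for the massive SPAD arrays discussed above; bi-exponential data, as in the paper, needs more elaborate variants.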

  5. Image reconstruction from Pulsed Fast Neutron Analysis

    NASA Astrophysics Data System (ADS)

    Bendahan, Joseph; Feinstein, Leon; Keeley, Doug; Loveman, Rob

    1999-06-01

    Pulsed Fast Neutron Analysis (PFNA) has been demonstrated to detect drugs and explosives in trucks and large cargo containers. PFNA uses a collimated beam of nanosecond-pulsed fast neutrons that interact with the cargo contents to produce gamma rays characteristic of their elemental composition. By timing the arrival of the emitted radiation to an array of gamma-ray detectors, a three-dimensional elemental density map or image of the cargo is created. The process to determine the elemental densities is complex and requires a number of steps. The first step consists of extracting from the characteristic gamma-ray spectra the counts associated with the elements of interest. Other steps are needed to correct for physical quantities such as gamma-ray production cross sections and angular distributions. The image processing also includes phenomenological corrections that take into account the neutron attenuation through the cargo, and the attenuation of the gamma rays from the point where they were generated to the gamma-ray detectors. Additional processing is required to map the elemental densities from the data-acquisition coordinate system to a rectilinear system. This paper describes the image processing used to compute the elemental densities from the counts observed in the gamma-ray detectors.
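The gamma-ray attenuation correction mentioned above can be sketched, to first order, with a simple Beer-Lambert factor along the voxel-to-detector path. This is an illustration of the physics, not the authors' reconstruction code; the coefficients and path lengths are made up.

```python
import numpy as np

def corrected_counts(measured, mu, path_lengths):
    """Undo exp(-sum(mu_i * x_i)) attenuation accumulated along the path
    from the emitting voxel to the detector (Beer-Lambert, first order)."""
    attenuation = np.exp(-np.sum(mu * path_lengths))
    return measured / attenuation

mu = np.array([0.02, 0.05])   # attenuation coefficients (1/cm) of two materials
x = np.array([10.0, 4.0])     # thickness (cm) of each traversed on the way out
print(round(corrected_counts(100.0, mu, x), 1))  # counts scaled up by exp(0.4)
```

In the real system this factor varies per voxel-detector pair and itself depends on the (initially unknown) cargo densities, which is one reason the paper describes the full processing chain as iterative and multi-step.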

  6. Image reconstruction from Pulsed Fast Neutron Analysis

    SciTech Connect

    Bendahan, Joseph; Feinstein, Leon; Keeley, Doug; Loveman, Rob

    1999-06-10

    Pulsed Fast Neutron Analysis (PFNA) has been demonstrated to detect drugs and explosives in trucks and large cargo containers. PFNA uses a collimated beam of nanosecond-pulsed fast neutrons that interact with the cargo contents to produce gamma rays characteristic of their elemental composition. By timing the arrival of the emitted radiation to an array of gamma-ray detectors, a three-dimensional elemental density map or image of the cargo is created. The process to determine the elemental densities is complex and requires a number of steps. The first step consists of extracting from the characteristic gamma-ray spectra the counts associated with the elements of interest. Other steps are needed to correct for physical quantities such as gamma-ray production cross sections and angular distributions. The image processing also includes phenomenological corrections that take into account the neutron attenuation through the cargo, and the attenuation of the gamma rays from the point where they were generated to the gamma-ray detectors. Additional processing is required to map the elemental densities from the data-acquisition coordinate system to a rectilinear system. This paper describes the image processing used to compute the elemental densities from the counts observed in the gamma-ray detectors.

  7. Soil Surface Roughness through Image Analysis

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Saa-Requejo, A.; Valencia, J. L.; Moratiel, R.; Paz-Gonzalez, A.; Agro-Environmental Modeling

    2011-12-01

    Soil erosion is a complex phenomenon involving the detachment and transport of soil particles, storage and runoff of rainwater, and infiltration. The relative magnitude and importance of these processes depends on several factors, one of them being surface micro-topography, usually quantified through soil surface roughness (SSR). SSR greatly affects surface sealing and runoff generation, yet little information is available about the effect of roughness on the spatial distribution of runoff and on flow concentration. The methods commonly used to measure SSR involve measuring point elevation using a pin roughness meter or laser, both of which are labor intensive and expensive. Lately, a simple and inexpensive technique based on the percentage of shadow in soil surface images has been developed to determine SSR in the field, making widespread measurement practical. One of the first steps in this technique is image de-noising and thresholding to estimate the percentage of black pixels in the studied area. In this work, a series of soil surface images have been analyzed applying several wavelet de-noising and thresholding algorithms to study the variation in the percentage of shadows and the shadow size distribution. Funding provided by the Spanish Ministerio de Ciencia e Innovación (MICINN) through project no. AGL2010-21501/AGR and by Xunta de Galicia through project no. INCITE08PXIB1621 is greatly appreciated.
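The thresholding step described above can be sketched as follows: binarize a grayscale soil image and report the percentage of dark (shadow) pixels. This is an assumed minimal pipeline, not the authors' implementation; in practice the threshold would come from a method such as Otsu's after wavelet de-noising.

```python
import numpy as np

def shadow_fraction(img, threshold=None):
    """Percentage of pixels darker than the threshold (taken here as the
    image mean when not given -- a crude stand-in for Otsu's method)."""
    if threshold is None:
        threshold = img.mean()
    return (img < threshold).mean() * 100.0

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, size=(64, 64))   # stand-in soil surface image
print(round(shadow_fraction(img, threshold=0.5), 1))  # roughly 50 for uniform noise
```

The SSR estimate then follows from how this shadow percentage, and the size distribution of the connected shadow regions, varies across images.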

  8. Noise analysis in laser speckle contrast imaging

    NASA Astrophysics Data System (ADS)

    Yuan, Shuai; Chen, Yu; Dunn, Andrew K.; Boas, David A.

    2010-02-01

    Laser speckle contrast imaging (LSCI) is becoming an established method for full-field imaging of blood flow dynamics in animal models. A reliable quantitative model with comprehensive noise analysis is necessary to fully utilize this technique in biomedical applications and clinical trials. In this study, we investigated several major noise sources in LSCI: periodic physiology noise, shot noise and statistical noise. (1) We observed periodic physiology noise in our experiments and found that its sources consist principally of motions induced by heartbeats and/or ventilation. (2) We found that shot noise caused an offset of speckle contrast (SC) values, and this offset is directly related to the incident light intensity. (3) A mathematical model of statistical noise was also developed. The model indicated that statistical noise in speckle contrast imaging is related to the SC values and the total number of pixels used in the SC calculation. Our experimental results are consistent with theoretical predictions, as well as with other published works.
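The SC quantity whose noise is analyzed above is the local ratio of standard deviation to mean intensity. A minimal sketch of its spatial computation (illustrative code, not the authors'):

```python
import numpy as np

def speckle_contrast(img, w=7):
    """Local speckle contrast K = sigma / mean over sliding w x w windows.
    Lower K means more blurring of the speckle pattern, i.e. faster flow."""
    h, hw = img.shape[0] - w + 1, img.shape[1] - w + 1
    out = np.empty((h, hw))
    for i in range(h):
        for j in range(hw):
            win = img[i:i + w, j:j + w]
            out[i, j] = win.std() / win.mean()
    return out

# A perfectly static, noise-free field has zero speckle contrast everywhere.
static = np.full((32, 32), 100.0)
print(float(speckle_contrast(static).max()))  # → 0.0
```

The paper's statistical-noise result corresponds to the fact that each K value here is estimated from only w*w pixels, so its variance shrinks as the window (or the number of averaged frames) grows.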

  9. Difference Image Analysis of Galactic Microlensing. I. Data Analysis

    SciTech Connect

    Alcock, C.; Allsman, R. A.; Alves, D.; Axelrod, T. S.; Becker, A. C.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Griest, K.

    1999-08-20

    This is a preliminary report on the application of Difference Image Analysis (DIA) to Galactic bulge images. The aim of this analysis is to increase the sensitivity to the detection of gravitational microlensing. We discuss how the DIA technique simplifies the process of discovering microlensing events by detecting only objects that have variable flux. We illustrate how the DIA technique is not limited to detection of so-called ''pixel lensing'' events but can also be used to improve photometry for classical microlensing events by removing the effects of blending. We will present a method whereby DIA can be used to reveal the true unblended colors, positions, and light curves of microlensing events. We discuss the need for a technique to obtain the accurate microlensing timescales from blended sources and present a possible solution to this problem using the existing Hubble Space Telescope color-magnitude diagrams of the Galactic bulge and LMC. The use of such a solution with both classical and pixel microlensing searches is discussed. We show that one of the major causes of systematic noise in DIA is differential refraction. A technique for removing this systematic by effectively registering images to a common air mass is presented. Improvements to commonly used image differencing techniques are discussed. (c) 1999 The American Astronomical Society.

  10. GRETNA: a graph theoretical network analysis toolbox for imaging connectomics

    PubMed Central

    Wang, Jinhui; Wang, Xindi; Xia, Mingrui; Liao, Xuhong; Evans, Alan; He, Yong

    2015-01-01

    Recent studies have suggested that the brain’s structural and functional networks (i.e., connectomics) can be constructed by various imaging technologies (e.g., EEG/MEG; structural, diffusion and functional MRI) and further characterized by graph theory. Given the huge complexity of network construction, analysis and statistics, toolboxes incorporating these functions are largely lacking. Here, we developed the GRaph thEoreTical Network Analysis (GRETNA) toolbox for imaging connectomics. GRETNA contains several key features as follows: (i) an open-source, Matlab-based, cross-platform (Windows and UNIX OS) package with a graphical user interface (GUI); (ii) allowing topological analyses of global and local network properties with parallel computing ability, independent of imaging modality and species; (iii) providing flexible manipulations in several key steps during network construction and analysis, which include network node definition, network connectivity processing, network type selection and choice of thresholding procedure; (iv) allowing statistical comparisons of global, nodal and connectional network metrics and assessments of relationship between these network metrics and clinical or behavioral variables of interest; and (v) including functionality in image preprocessing and network construction based on resting-state functional MRI (R-fMRI) data. After applying GRETNA to a publicly released R-fMRI dataset of 54 healthy young adults, we demonstrated that human brain functional networks exhibit efficient small-world, assortative, hierarchical and modular organizations and possess highly connected hubs and that these findings are robust against different analytical strategies. With these efforts, we anticipate that GRETNA will accelerate imaging connectomics in an easy, quick and flexible manner. GRETNA is freely available on the NITRC website. PMID:26175682
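To make the "graph theoretical" part concrete, here is a minimal sketch (not GRETNA code) of one global metric of the kind such toolboxes compute: the mean clustering coefficient of a binary, undirected connectivity matrix.

```python
import numpy as np

def mean_clustering(A):
    """Mean clustering coefficient of a binary undirected graph given its
    adjacency matrix A: triangles through each node over possible triangles."""
    A = (A > 0).astype(float)
    np.fill_diagonal(A, 0)
    triangles = np.diag(A @ A @ A) / 2.0       # closed triangles per node
    k = A.sum(axis=1)                          # node degrees
    possible = k * (k - 1) / 2.0
    with np.errstate(invalid="ignore", divide="ignore"):
        c = np.where(possible > 0, triangles / possible, 0.0)
    return float(c.mean())

# Fully connected 4-node network: every pair of neighbors is itself linked.
A = np.ones((4, 4))
print(mean_clustering(A))  # → 1.0
```

High clustering combined with short characteristic path length is what underlies the "small-world" organization reported in the abstract; GRETNA computes these and many further nodal and connectional metrics after its thresholding and network-construction steps.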

  11. Analysis of image quality based on perceptual preference

    NASA Astrophysics Data System (ADS)

    Xue, Liqin; Hua, Yuning; Zhao, Guangzhou; Qi, Yaping

    2007-11-01

    This paper deals with image quality analysis considering the impact of psychological factors involved in assessment. The attributes of image quality requirements were partitioned according to visual perception characteristics, and preferences for image quality were obtained by the factor analysis method. The features of image quality that support subjective preference were identified. Image adequacy is shown to be the top requirement for display image quality improvement. The approach will benefit research on subjective quantitative assessment methods for image quality.

  12. Chlorophyll fluorescence analysis and imaging in plant stress and disease

    SciTech Connect

    Daley, P.F.

    1994-12-01

    Quantitative analysis of chlorophyll fluorescence transients and quenching has evolved rapidly in the last decade. Instrumentation capable of fluorescence detection in bright actinic light has been used in conjunction with gas exchange analysis to build an empirical foundation relating quenching parameters to photosynthetic electron transport, the state of the photoapparatus, and carbon fixation. We have developed several instruments that collect video images of chlorophyll fluorescence. Digitized versions of these images can be manipulated as numerical data arrays, supporting generation of quenching maps that represent the spatial distribution of photosynthetic activity in leaves. We have applied this technology to analysis of fluorescence quenching during application of stress hormones, herbicides, physical stresses including drought and sudden changes in humidity of the atmosphere surrounding leaves, and during stomatal oscillations in high CO₂. We describe a recently completed portable fluorescence imaging system utilizing LED illumination and a consumer-grade camcorder that will be used in long-term, non-destructive field studies of plant virus infections.
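The quenching maps described above are per-pixel arithmetic on fluorescence images. As one hedged example (standard fluorometry notation, not the authors' code), a non-photochemical quenching (NPQ) map is NPQ = (Fm − Fm′)/Fm′, computed from a dark-adapted maximal-fluorescence image Fm and its light-adapted counterpart Fm′:

```python
import numpy as np

def npq_map(fm, fm_prime):
    """Per-pixel non-photochemical quenching from dark-adapted (fm) and
    light-adapted (fm_prime) maximal fluorescence images."""
    return (fm - fm_prime) / fm_prime

# Toy 2x2 "images": identical dark-adapted signal, varying quenching.
fm = np.array([[3000.0, 3000.0], [3000.0, 3000.0]])
fm_p = np.array([[1500.0, 2000.0], [1000.0, 3000.0]])
print(npq_map(fm, fm_p))
```

Because the operation is element-wise, spatially heterogeneous stress responses (e.g. patchy stomatal closure) show up directly as structure in the resulting map.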

  13. Wavelet Analysis of Space Solar Telescope Images

    NASA Astrophysics Data System (ADS)

    Zhu, Xi-An; Jin, Sheng-Zhen; Wang, Jing-Yu; Ning, Shu-Nian

    2003-12-01

    The scientific satellite SST (Space Solar Telescope) is an important research project strongly supported by the Chinese Academy of Sciences. Every day, SST acquires 50 GB of data (after processing), but only 10 GB can be transmitted to the ground because of the limited time of satellite passage and limited channel volume. Therefore, the data must be compressed before transmission. Wavelet analysis is a technique developed over the last 10 years, with great potential for application. We start with a brief introduction to the essential principles of wavelet analysis, and then describe the main idea of embedded zerotree wavelet coding, used for compressing the SST images. The results show that this coding is adequate for the job.
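For orientation, here is one level of the orthonormal Haar transform, the simplest member of the wavelet family on which zerotree coders such as EZW build (illustrative code, not the SST pipeline):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar wavelet transform of a 1-D signal
    of even length: pairwise averages (s) and differences (d)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return s, d

def haar_inverse(s, d):
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

x = np.array([4.0, 4.0, 5.0, 7.0])
s, d = haar_step(x)
print(np.allclose(haar_inverse(s, d), x))  # → True (perfect reconstruction)
```

Compression comes from the fact that for smooth data most detail coefficients are near zero; embedded zerotree coding exploits the additional observation that insignificant coefficients cluster across scales, so whole subtrees can be coded with a single symbol.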

  14. The Scientific Image in Behavior Analysis.

    PubMed

    Keenan, Mickey

    2016-05-01

    Throughout the history of science, the scientific image has played a significant role in communication. With recent developments in computing technology, there has been an increase in the kinds of opportunities now available for scientists to communicate in more sophisticated ways. Within behavior analysis, though, we are only just beginning to appreciate the importance of going beyond the printing press to elucidate basic principles of behavior. The aim of this manuscript is to stimulate appreciation of both the role of the scientific image and the opportunities provided by a quick response code (QR code) for enhancing the functionality of the printed page. I discuss the limitations of imagery in behavior analysis ("Introduction"), and I show examples of what can be done with animations and multimedia for teaching philosophical issues that arise when teaching about private events ("Private Events 1 and 2"). Animations are also useful for bypassing ethical issues when showing examples of challenging behavior ("Challenging Behavior"). Each of these topics can be accessed only by scanning the QR code provided. This contingency has been arranged to help the reader embrace this new technology. In so doing, I hope to show its potential for going beyond the limitations of the printing press. PMID:27606187

  15. Monotonic correlation analysis of image quality measures for image fusion

    NASA Astrophysics Data System (ADS)

    Kaplan, Lance M.; Burks, Stephen D.; Moore, Richard K.; Nguyen, Quang

    2008-04-01

    The next generation of night vision goggles will fuse image-intensified and long-wave infrared imagery to create a hybrid image that will enable soldiers to better interpret their surroundings during nighttime missions. Paramount to the development of such goggles is the exploitation of image quality (IQ) measures to automatically determine the best image fusion algorithm for a particular task. This work introduces a novel monotonic correlation coefficient to investigate how well candidate IQ features correlate with actual human performance, which is measured by a perception study. The paper demonstrates how monotonic correlation can identify worthy features that would be overlooked by traditional correlation values.
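The paper's coefficient is its own construction, but the idea of scoring monotonic rather than linear association has a familiar baseline: the Spearman rank correlation. A minimal sketch (distinct feature values assumed; not the paper's definition):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Equals 1.0 for any strictly increasing relationship, linear or not."""
    rx = np.argsort(np.argsort(x)).astype(float)   # ranks of x (ties not handled)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

x = np.array([0.1, 0.4, 0.9, 1.6, 2.5])   # hypothetical IQ feature values
y = x ** 3                                # nonlinear but perfectly monotonic "performance"
print(spearman(x, y))  # → 1.0, where Pearson's r would be below 1
```

A plain Pearson correlation on these data would penalize the curvature even though the feature ranks observers' performance perfectly, which is exactly the failure mode a monotonic coefficient is designed to avoid.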

  16. Natural Language Processing Versus Content-Based Image Analysis for Medical Document Retrieval

    PubMed Central

    Névéol, Aurélie; Deserno, Thomas M.; Darmoni, Stéfan J.; Güld, Mark Oliver; Aronson, Alan R.

    2009-01-01

    One of the most significant recent advances in health information systems has been the shift from paper to electronic documents. While research on automatic text and image processing has taken separate paths, there is a growing need for joint efforts, particularly for electronic health records and biomedical literature databases. This work aims at comparing text-based versus image-based access to multimodal medical documents using state-of-the-art methods of processing text and image components. A collection of 180 medical documents containing an image accompanied by a short text describing it was divided into training and test sets. Content-based image analysis and natural language processing techniques are applied individually and combined for multimodal document analysis. The evaluation consists of an indexing task and a retrieval task based on the “gold standard” codes manually assigned to corpus documents. The performance of text-based and image-based access, as well as combined document features, is compared. Image analysis proves more adequate for both the indexing and retrieval of the images. In the indexing task, multimodal analysis outperforms both independent image and text analysis. This experiment shows that text describing images can be usefully analyzed in the framework of a hybrid text/image retrieval system. PMID:19633735

  17. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2012-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
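For readers unfamiliar with the filter to which the equivalence is established, here is a single Kalman measurement update in generic textbook notation (an illustration, not the paper's formulation):

```python
import numpy as np

def kalman_update(x, P, H, R, z):
    """One Kalman measurement update: innovation covariance S, gain K,
    then state and covariance corrections."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - H @ x)            # state update
    P_new = (np.eye(len(x)) - K @ H) @ P   # covariance update
    return x_new, P_new

# 2-state system, scalar measurement of the first state, equal prior and
# measurement variances: the estimate moves halfway toward the measurement.
x = np.zeros((2, 1)); P = np.eye(2)
H = np.array([[1.0, 0.0]]); R = np.array([[1.0]])
z = np.array([[2.0]])
x1, P1 = kalman_update(x, P, H, R, z)
print(x1.ravel())
```

An epoch-state (batch) estimator instead maps all measurements back to a single epoch; per the abstract, it reproduces the filter's behavior only under specific conditions on how process noise enters the gains.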

  18. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  19. Practical steganalysis of digital images: state of the art

    NASA Astrophysics Data System (ADS)

    Fridrich, Jessica; Goljan, Miroslav

    2002-04-01

    Steganography is the art of hiding the very presence of communication by embedding secret messages into innocuous looking cover documents, such as digital images. Detection of steganography, estimation of message length, and its extraction belong to the field of steganalysis. Steganalysis has recently received a great deal of attention both from law enforcement and the media. In our paper, we classify and review current stego-detection algorithms that can be used to trace popular steganographic products. We recognize several qualitatively different approaches to practical steganalysis - visual detection, detection based on first order statistics (histogram analysis), dual statistics methods that use spatial correlations in images and higher-order statistics (RS steganalysis), universal blind detection schemes, and special cases, such as JPEG compatibility steganalysis. We also present some new results regarding our previously proposed detection of LSB embedding using sensitive dual statistics. The recent steganalytic methods indicate that the most common paradigm in image steganography - the bit-replacement or bit substitution - is inherently insecure with safe capacities far smaller than previously thought.
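A toy illustration of the first-order (histogram) idea surveyed above: full-capacity LSB replacement tends to equalize the counts of each pair of values (2i, 2i+1), leaving a trace that chi-square-style tests detect. The statistic below is a simplified stand-in, not any published detector.

```python
import numpy as np

def pair_asymmetry(pixels):
    """Mean absolute count difference between the members of each
    pair-of-values (2i, 2i+1), normalized by the number of pixels."""
    hist = np.bincount(pixels, minlength=256)
    even, odd = hist[0::2], hist[1::2]
    return float(np.abs(even - odd).sum()) / max(pixels.size, 1)

rng = np.random.default_rng(2)
cover = 2 * rng.integers(0, 128, size=100_000)              # contrived cover: even values only
stego = (cover & ~1) | rng.integers(0, 2, size=cover.size)  # LSBs replaced by message bits
print(pair_asymmetry(cover) > pair_asymmetry(stego))        # → True
```

Real steganalysis (RS analysis, chi-square attacks, universal blind detectors) is far more careful about natural-image statistics; the point here is only why bit replacement is structurally detectable, which is the insecurity the abstract's conclusion refers to.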

  20. Characterization of Surface Chemical States of a Thick Insulator: Chemical State Imaging on MgO Surface

    NASA Astrophysics Data System (ADS)

    Yi, Yeonjin; Cho, Sangwan; Noh, Myungkeun; Whang, Chung-Nam; Jeong, Kwangho; Shin, Hyun-Joon

    2005-02-01

    We report a surface characterization tool that can be effectively used to investigate the chemical state and subtle radiation damage on a thick insulator surface. It has been used to examine the MgO surface of a plasma display panel (PDP) consisting of a stack of insulator layers of approximately 51 μm thickness on a 2-mm-thick glass plate. The scanning photoelectron microscopy (SPEM) image of the insulating MgO surface was obtained by using the difference in Au 4f peak shift due to the surface charging at each pixel, where a Au adlayer of approximately 15 Å thickness was formed on the surface to overcome the serious charging shift of the peak position and the spectral deterioration in the photoelectron spectra. The observed contrast in the SPEM image reveals the chemical modification of the underlying MgO surface induced by the plasma discharge damage. The chemical state analysis of the MgO surface was carried out by comparing the Mg 2p, C 1s and O 1s photoemission spectra collected at each pixel of the SPEM image. We assigned four suboxide phases, MgO, MgCO3, Mg(OH)2 and Mg1+, on the initial MgO surface, where the Mg(OH)2 and Mg1+ phases vanished rapidly as the discharge-induced surface damage began.

  1. Imaging Excited State Dynamics with 2d Electronic Spectroscopy

    NASA Astrophysics Data System (ADS)

    Engel, Gregory S.

    2012-06-01

    Excited states in the condensed phase have extremely high chemical potentials making them highly reactive and difficult to control. Yet in biology, excited state dynamics operate with exquisite precision driving solar light harvesting in photosynthetic complexes through excitonic transport and photochemistry through non-radiative relaxation to photochemical products. Optimized by evolution, these biological systems display manifestly quantum mechanical behaviors including coherent energy transfer, steering wavepacket trajectories through conical intersections and protection of long-lived quantum coherence. To image the underlying excited state dynamics, we have developed a new spectroscopic method allowing us to capture excitonic structure in real time. Through this method and other ultrafast multidimensional spectroscopies, we have captured coherent dynamics within photosynthetic antenna complexes. The data not only reveal how biological systems operate, but these same spectral signatures can be exploited to create new spectroscopic tools to elucidate the underlying Hamiltonian. New data on the role of the protein in photosynthetic systems indicates that the chromophores mix strongly with some bath modes within the system. The implications of this mixing for excitonic transport will be discussed along with prospects for transferring underlying design principles to synthetic systems.

  2. Analysis of autostereoscopic three-dimensional images using multiview wavelets.

    PubMed

    Saveljev, Vladimir; Palchikova, Irina

    2016-08-10

    We propose that multiview wavelets can be used in processing multiview images. The reference functions for the synthesis/analysis of multiview images are described. The synthesized binary images were observed experimentally as three-dimensional visual images. The symmetric multiview B-spline wavelets are proposed. The locations recognized in the continuous wavelet transform correspond to the layout of the test objects. The proposed wavelets can be applied to the multiview, integral, and plenoptic images. PMID:27534470

  3. Vector processing enhancements for real-time image analysis.

    SciTech Connect

    Shoaf, S.; APS Engineering Support Division

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  4. MaZda--a software package for image texture analysis.

    PubMed

    Szczypiński, Piotr M; Strzelecki, Michał; Materka, Andrzej; Klepaczko, Artur

    2009-04-01

    MaZda, a software package for 2D and 3D image texture analysis, is presented. It provides a complete path for quantitative analysis of image textures, including computation of texture features, procedures for feature selection and extraction, algorithms for data classification, and various data visualization and image segmentation tools. Initially, MaZda was aimed at the analysis of magnetic resonance image textures. However, it has proven effective in the analysis of other types of textured images, including X-ray and camera images. The software has been utilized by numerous researchers in diverse applications and has proven to be an efficient and reliable tool for quantitative image analysis, supporting more accurate and objective medical diagnosis. MaZda has also been used successfully in the food industry to assess food product quality. MaZda can be downloaded for public use from the Institute of Electronics, Technical University of Lodz webpage. PMID:18922598
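One classical family of features that texture packages of this kind compute is based on the gray-level co-occurrence matrix (GLCM). A hedged sketch of a GLCM for the "one pixel to the right" offset and its contrast feature (illustrative code, not MaZda's implementation):

```python
import numpy as np

def glcm_contrast(img, levels=4):
    """Contrast feature of the normalized GLCM for the horizontal
    (dx=1, dy=0) offset; img must hold integer gray levels < levels."""
    glcm = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[a, b] += 1                       # count co-occurring level pairs
    glcm /= glcm.sum()                        # normalize to probabilities
    i, j = np.indices(glcm.shape)
    return float((glcm * (i - j) ** 2).sum()) # weight pairs by squared level gap

flat = np.zeros((8, 8), dtype=int)            # uniform texture
stripes = np.tile([0, 3], (8, 4))             # alternating bright/dark columns
print(glcm_contrast(flat), glcm_contrast(stripes))  # → 0.0 9.0
```

Uniform regions yield zero contrast while a striped pattern scores the maximum squared level difference, which is why such features discriminate textures that first-order histograms cannot.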

  5. Automated Imaging and Analysis of the Hemagglutination Inhibition Assay.

    PubMed

    Nguyen, Michael; Fries, Katherine; Khoury, Rawia; Zheng, Lingyi; Hu, Branda; Hildreth, Stephen W; Parkhill, Robert; Warren, William

    2016-04-01

    The hemagglutination inhibition (HAI) assay quantifies the level of strain-specific influenza virus antibody present in serum and is the standard by which influenza vaccine immunogenicity is measured. The HAI assay endpoint requires real-time monitoring of rapidly evolving red blood cell (RBC) patterns for signs of agglutination at a rate of potentially thousands of patterns per day to meet the throughput needs for clinical testing. This analysis is typically performed manually through visual inspection by highly trained individuals. However, concordant HAI results across different labs are challenging to demonstrate due to analyst bias and variability in analysis methods. To address these issues, we have developed a bench-top, standalone, high-throughput imaging solution that automatically determines the agglutination states of up to 9600 HAI assay wells per hour and assigns HAI titers to 400 samples in a single unattended 30-min run. Images of the tilted plates are acquired as a function of time and analyzed using algorithms that were developed through comprehensive examination of manual classifications. Concordance testing of the imaging system with eight different influenza antigens demonstrates 100% agreement between automated and manual titer determination with a percent difference of ≤3.4% for all cases. PMID:26464422

  6. Thermal image analysis for detecting facemask leakage

    NASA Astrophysics Data System (ADS)

    Dowdall, Jonathan B.; Pavlidis, Ioannis T.; Levine, James

    2005-03-01

    Due to the modern advent of near-ubiquitous access to rapid international transportation, the epidemiologic trends of highly communicable diseases can be devastating. With the recent emergence of diseases matching this pattern, such as Severe Acute Respiratory Syndrome (SARS), an area of overt concern has been the transmission of infection through respiratory droplets. Approved facemasks are typically effective physical barriers for preventing the spread of viruses through droplets, but breaches in a mask's integrity can lead to an elevated risk of exposure and subsequent infection. Quality control mechanisms in place during the manufacturing process ensure that masks are defect-free when leaving the factory, but there remains little to detect damage caused by transportation or during usage. A system that could monitor masks in real-time while they were in use would facilitate a more secure environment for treatment and screening. To fulfill this necessity, we have devised a touchless method to detect mask breaches in real-time by utilizing the emissive properties of the mask in the thermal infrared spectrum. Specifically, we use a specialized thermal imaging system to detect minute air leakage in masks based on the principles of heat transfer and thermodynamics. The advantage of this passive modality is that thermal imaging does not require contact with the subject and can provide instant visualization and analysis. These capabilities can prove invaluable for protecting personnel in scenarios with elevated levels of transmission risk such as hospital clinics, border check points, and airports.

  7. Vision-sensing image analysis for GTAW process control

    SciTech Connect

    Long, D.D.

    1994-11-01

    Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.

  8. Image analysis by integration of disparate information

    NASA Technical Reports Server (NTRS)

    Lemoigne, Jacqueline

    1993-01-01

    Image analysis often starts with some preliminary segmentation which provides a representation of the scene needed for further interpretation. Segmentation can be performed in several ways, categorized as pixel-based, edge-based, and region-based. Each of these approaches is affected differently by various factors, and the final result may be improved by integrating several or all of these methods, thus taking advantage of their complementary nature. In this paper, we propose an approach that integrates pixel-based and edge-based results by utilizing an iterative relaxation technique. This approach has been implemented on a massively parallel computer and tested on remotely sensed imagery from the Landsat Thematic Mapper (TM) sensor.

  9. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    SciTech Connect

    STOYANOVA,R.S.; OCHS,M.F.; BROWN,T.R.; ROONEY,W.D.; LI,X.; LEE,J.H.; SPRINGER,C.S.

    1999-05-22

    Standard analysis methods for processing inversion recovery MR images have traditionally used single-pixel techniques, in which each pixel is independently fit to an exponential recovery and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They interpret the three images as spatial representations of gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) content.
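    A minimal sketch of applying PCA to an image series, assuming NumPy and treating each recovery-time image as one observation; the function name and shapes are illustrative, not from the paper:

```python
# Illustrative PCA of a relaxographic-style image series: each recovery
# time contributes one image, and SVD of the centred pixel matrix yields
# spatial "eigen-images" with per-time scores.
import numpy as np

def pca_image_series(stack):
    """stack: array of shape (n_times, height, width), one image per
    recovery time. Returns (components, scores, explained_ratio)."""
    n, h, w = stack.shape
    X = stack.reshape(n, h * w)          # each row = one flattened image
    mean = X.mean(axis=0)
    Xc = X - mean                        # centre across the time series
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt.reshape(-1, h, w)    # spatial component images
    scores = U * S                       # temporal weights per component
    explained = S**2 / np.sum(S**2)      # variance captured per component
    return components, scores, explained
```

With a series built from a small number of tissue-like spatial patterns, the leading components capture essentially all of the variance, which is the property exploited for segmentation.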

  10. Some selected quantitative methods of thermal image analysis in Matlab.

    PubMed

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images, and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for areas of the skin of a human foot and face. The full source code of the developed application is provided as an attachment. [Graphical abstract: the main window of the program during dynamic analysis of a foot thermal image.] PMID:26556680

  11. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    NASA Astrophysics Data System (ADS)

    Liang, Jianming; Järvi, Timo; Kiuru, Aaro; Kormano, Martti; Svedström, Erkki

    2003-12-01

    The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. According to perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information may be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective for detecting pulmonary embolism even without using contrast media, demonstrating consistent correlations with computed tomography (CT) and nuclear medicine (NM) studies. This fluoroscopic examination takes only about 2 seconds for a perfusion study, with only a low radiation dose to the patient, and involves no preparation, no radioactive isotopes, and no contrast media.

  12. Image potential states mediated STM imaging of cobalt phthalocyanine on NaCl/Cu(100)

    NASA Astrophysics Data System (ADS)

    Qinmin, Guo; Zhihui, Qin; Min, Huang; Vladimir, N. Mantsevich; Gengyu, Cao

    2016-03-01

    The adsorption and electronic properties of an isolated cobalt phthalocyanine (CoPc) molecule on an ultrathin layer of NaCl have been investigated. High-resolution STM images give a detailed picture of the lowest unoccupied molecular orbital (LUMO) of an isolated CoPc. It is shown that the NaCl ultrathin layer efficiently decouples the molecules from the underlying metal substrate, which makes it an ideal substrate for studying the properties of single molecules. Moreover, the strong dependence of the appearance of the molecules on the sample bias at relatively high bias (> 3.1 V) is ascribed to the image potential states (IPSs) of NaCl/Cu(100), which may provide a possible route to fabricating quantum storage devices. Project supported by the National Natural Science Foundation of China (Grant Nos. 21203239 and 21311120059) and RFBR (Grant No. 13-02-91180).

  13. State analysis of nonlinear systems using local canonical variate analysis

    SciTech Connect

    Hunter, N.F.

    1997-01-01

    There are many instances in which time series measurements are used to derive an empirical model of a dynamical system. State space reconstruction from time series measurements has applications in many scientific and engineering disciplines, including structural engineering, biology, chemistry, climatology, control theory, and physics. Prediction of future time series values from empirical models was attempted as early as 1927 by Yule, who applied linear prediction methods to the sunspot values. More recently, efforts in this area have centered on two related aspects of time series analysis, namely prediction and modeling. In prediction, future time series values are estimated from past values; in modeling, fundamental characteristics of the state model underlying the measurements, such as dimension and eigenvalues, are estimated. In either approach a measured time series y(t_i), i = 1, ..., N (potentially multivariate) is assumed to derive from the action of a smooth dynamical system, s(t + tau) = a(s(t)), and to be related to the state evolution via a measurement function c: y(t) = c(s(t)). In general the states s(t), the state evolution function a, and the measurement function c are unknown and must be inferred from the time series measurements. We approach this problem from the standpoint of time series analysis. We review the principles of state space reconstruction. The specific model formulation used in the local canonical variate analysis algorithm and a detailed description of the state space reconstruction algorithm are included. The application of the algorithm to a single-degree-of-freedom Duffing-like oscillator, and the difficulties involved in reconstructing an unmeasured degree of freedom in a four-degree-of-freedom nonlinear oscillator, are presented. The advantages and current limitations of state space reconstruction are summarized.
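    A common first step in state space reconstruction is delay embedding, in which reconstructed state vectors are built from time-lagged copies of the measured series; this small sketch (function name and parameters ours, not the report's code) illustrates the idea:

```python
# Illustrative Takens-style delay embedding: each reconstructed state is
# [y(t), y(t+tau), ..., y(t+(dim-1)*tau)] built from a scalar series.

def delay_embed(series, dim, tau):
    """Return the list of reconstructed state vectors of dimension `dim`
    with delay `tau` samples."""
    n = len(series) - (dim - 1) * tau
    return [[series[t + k * tau] for k in range(dim)] for t in range(n)]
```

For example, embedding the series 0, 1, 2, 3, 4, 5 with dim = 3 and tau = 2 yields the two state vectors [0, 2, 4] and [1, 3, 5].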

  14. High resolution ultraviolet imaging spectrometer for latent image analysis.

    PubMed

    Lyu, Hang; Liao, Ningfang; Li, Hongsong; Wu, Wenmin

    2016-03-21

    In this work, we present a close-range ultraviolet imaging spectrometer with high spatial resolution and reasonably high spectral resolution. As transmissive optical components cause chromatic aberration in the ultraviolet (UV) spectral range, an all-reflective imaging scheme is introduced to improve image quality. The proposed instrument consists of an oscillating mirror, a Cassegrain objective, a Michelson structure, an Offner relay, and a UV-enhanced CCD. The finished spectrometer has a spatial resolution of 29.30 μm on the target plane; its spectral range covers both the near and middle UV bands, with approximately 100 wavelength samples over 240-370 nm. The control computer coordinates all the components of the instrument and enables capturing a series of images, which can be reconstructed into an interferogram datacube. The datacube can be converted into a spectrum datacube, which contains spectral information for each pixel at many wavelength samples. A spectral calibration is carried out using a high-pressure mercury discharge lamp. A test run demonstrated that this interferometric configuration can obtain a high-resolution spectrum datacube. A pattern recognition algorithm is introduced to analyze the datacube and distinguish latent traces from the base materials. This design is particularly good at identifying latent traces in the field of forensic imaging. PMID:27136837

  15. A framework for joint image-and-shape analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Tannenbaum, Allen; Bouix, Sylvain

    2014-03-01

    Techniques in medical image analysis are often used for comparison or regression on image intensities. In general, the domain of the image is a given Cartesian grid. Shape analysis, on the other hand, studies the similarities and differences among spatial objects of arbitrary geometry and topology. Usually, there is no function defined on the domain of shapes. Recently, there has been a growing need for defining and analyzing functions on the shape space, and for a coupled analysis of both the shapes and the functions defined on them. Following this direction, in this work we present a coupled analysis for both images and shapes. As a result, statistically significant discrepancies in both the image intensities and the underlying shapes are detected. The method is applied to brain images of schizophrenia patients and heart images of atrial fibrillation patients.

  16. 3D imaging of particle tracks in Solid State Nuclear Track Detectors

    NASA Astrophysics Data System (ADS)

    Wertheim, D.; Gillmore, G.; Brown, L.; Petford, N.

    2009-04-01

    Inhalation of radon gas (222Rn) and associated ionizing decay products is known to cause lung cancer in humans. In the U.K., it has been suggested that 3 to 5% of total lung cancer deaths can be linked to elevated radon concentrations in the home and/or workplace. Radon monitoring in buildings is therefore routinely undertaken in areas of known risk. Indeed, some organisations, such as the Radon Council in the UK and the Environmental Protection Agency in the USA, advocate a 'to test is best' policy. Radon gas occurs naturally, emanating from the decay of 238U in rock and soils. Its concentration can be measured using CR-39 plastic detectors, which conventionally are assessed by 2D image analysis of the surface; however, there can be some variation in outcomes/readings even between closely spaced detectors. A number of radon measurement methods are currently in use (for example, activated carbon and electrets), but the most widely used are CR-39 solid state nuclear track-etch detectors (SSNTDs). In this technique, heavily ionizing alpha particles leave tracks in the form of radiation damage (via interaction between alpha particles and the atoms making up the CR-39 polymer). 3D imaging of the tracks has the potential to provide information relating to the angle and energy of alpha particles, but this could be time consuming. Here we describe a new method for rapid high resolution 3D imaging of SSNTDs. A 'LEXT' OLS3100 confocal laser scanning microscope was used in confocal mode to successfully obtain 3D image data on four CR-39 plastic detectors. 3D visualisation and image analysis enabled characterisation of track features. This method may provide a means of rapid and detailed 3D analysis of SSNTDs. Keywords: Radon; SSNTDs; confocal laser scanning microscope; 3D imaging; LEXT

  17. LANDSAT-4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E. (Principal Investigator)

    1982-01-01

    Work done on evaluating the geometric and radiometric quality of early LANDSAT-4 sensor data is described. Band-to-band and channel-to-channel registration evaluations were carried out using a line correlator. Visual blink comparisons were run on an image display to observe band-to-band registration over 512 x 512 pixel blocks. The results indicate a 0.5 pixel line misregistration between the 1.55 to 1.75 and 2.08 to 2.35 micrometer bands and the first four bands. A misregistration of the thermal IR band by four 30 m lines and columns was also observed. Radiometric evaluation included mean and variance analysis of individual detectors and principal components analysis. Results indicate that detector bias for all bands is very close to or within tolerance. Bright spots were observed in the thermal IR band on an 18 line by 128 pixel grid; no explanation for this was pursued. The general overall quality of the TM was judged to be very high.
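    A line-correlator style registration check can be sketched as a search for the integer shift that maximises the normalised correlation between two scan lines from different bands; this toy version is our own illustration, not the evaluation code used in the report:

```python
# Hypothetical integer-shift estimator for band-to-band registration:
# slide one scan line over the other and keep the shift with the highest
# normalised cross-correlation.

def best_shift(ref, tgt, max_shift=5):
    """Return the shift s maximising correlation between ref and tgt;
    negative s means tgt is displaced to the right relative to ref."""
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db) if da and db else 0.0
    best, best_c = 0, -2.0
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = ref[s:], tgt[:len(tgt) - s]
        else:
            a, b = ref[:s], tgt[-s:]
        c = corr(a, b)
        if c > best_c:
            best, best_c = s, c
    return best
```

A sub-pixel version would interpolate the correlation peak, which is closer to how fractional misregistrations such as 0.5 pixel are measured.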

  18. Multifractal analysis of 2D gray soil images

    NASA Astrophysics Data System (ADS)

    González-Torres, Ivan; Losada, Juan Carlos; Heck, Richard; Tarquis, Ana M.

    2015-04-01

    Soil structure, understood as the spatial arrangement of soil pores, is one of the key factors in soil modelling processes. Geometric properties of individual pores and interpretation of their morphological parameters can be estimated from thin sections or 3D computed tomography images (Tarquis et al., 2003), but there is no satisfactory method to binarize these images and quantify the complexity of their spatial arrangement (Tarquis et al., 2008; Tarquis et al., 2009; Baveye et al., 2010). The objective of this work was to apply a multifractal technique, the singularity (α) and f(α) spectra, to quantify it without applying any threshold (González-Torres, 2014). Intact soil samples were collected from four horizons of an Argisol, formed on the Tertiary Barreiras group of formations in Pernambuco state, Brazil (Itapirema Experimental Station). The natural vegetation of the region is tropical coastal rainforest. From each horizon, showing different porosities and spatial arrangements, three adjacent samples were taken, giving a set of twelve samples. The intact soil samples were imaged using an EVS (now GE Medical, London, Canada) MS-8 MicroCT scanner with 45 μm pixel-1 resolution (256x256 pixels). Though some samples required paring to fit the 64 mm diameter imaging tubes, field orientation was maintained. References: Baveye, P.C., M. Laba, W. Otten, L. Bouckaert, P. Dello, R.R. Goswami, D. Grinev, A. Houston, Yaoping Hu, Jianli Liu, S. Mooney, R. Pajor, S. Sleutel, A. Tarquis, Wei Wang, Qiao Wei, Mehmet Sezgin. Observer-dependent variability of the thresholding step in the quantitative analysis of soil images and X-ray microtomography data. Geoderma, 157, 51-63, 2010. González-Torres, Iván. Theory and application of multifractal analysis methods in images for the study of soil structure. Master thesis, UPM, 2014. Tarquis, A.M., R.J. Heck, J.B. Grau, J. Fabregat, M.E. Sanchez and J.M. Antón. Influence of Thresholding in Mass and Entropy Dimension of 3-D
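    The threshold-free multifractal idea can be sketched by treating grey-level intensity as a measure and estimating generalized dimensions D_q from the scaling of the partition function with box size; the function names and box sizes below are our own assumptions, not the authors' implementation:

```python
# Illustrative box-counting multifractal analysis of a grayscale image:
# no binarization, the image intensity itself is the measure.
import math

def partition_function(image, box, q):
    """Chi(q, box) = sum over boxes of p_i**q, where p_i is the box's
    share of the total image intensity."""
    rows, cols = len(image), len(image[0])
    total = sum(sum(r) for r in image)
    chi = 0.0
    for r0 in range(0, rows, box):
        for c0 in range(0, cols, box):
            m = sum(image[r][c]
                    for r in range(r0, min(r0 + box, rows))
                    for c in range(c0, min(c0 + box, cols)))
            p = m / total
            if p > 0:
                chi += p ** q
    return chi

def generalized_dimension(image, q, boxes=(1, 2, 4)):
    """Estimate D_q = (1/(q-1)) * slope of log Chi vs log box size
    (valid for q != 1), via least-squares over the given box sizes."""
    xs = [math.log(b) for b in boxes]
    ys = [math.log(partition_function(image, b, q)) for b in boxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope / (q - 1)
```

As a sanity check, a perfectly uniform image yields D_q = 2 for every q, since the measure fills the plane homogeneously.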

  19. Dynamic chest image analysis: model-based pulmonary perfusion analysis with pyramid images

    NASA Astrophysics Data System (ADS)

    Liang, Jianming; Haapanen, Arto; Jaervi, Timo; Kiuru, Aaro J.; Kormano, Martti; Svedstrom, Erkki; Virkki, Raimo

    1998-07-01

    The aim of the study 'Dynamic Chest Image Analysis' is to develop computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected at different phases of the respiratory/cardiac cycles in a short period of time. We have proposed a framework for ventilation study with an explicit ventilation model based on pyramid images. In this paper, we extend the framework to pulmonary perfusion study. A perfusion model and the truncated pyramid are introduced. The perfusion model aims at extracting accurate, geographic perfusion parameters, and the truncated pyramid helps in understanding perfusion at multiple resolutions and speeding up the convergence process in optimization. Three cases are included to illustrate the experimental results.

  20. Medical Image Analysis by Cognitive Information Systems - a Review.

    PubMed

    Ogiela, Lidia; Takizawa, Makoto

    2016-10-01

    This publication presents a review of medical image analysis systems. The paradigms of cognitive information systems are presented through examples of medical image analysis systems, and the semantic processes involved are described as they apply to different types of medical images. Cognitive information systems were defined on the basis of methods for the semantic analysis and interpretation of information - medical images - applied to the cognitive meaning of the medical images contained in the analyzed data sets. Semantic analysis was proposed to analyze the meaning of data; meaning is contained in information, for example in medical images. Medical image analysis is presented and discussed as applied to various types of medical images, showing selected human organs with different pathologies, analyzed using different classes of cognitive information systems. Cognitive information systems dedicated to medical image analysis were also defined for decision-support tasks. This process is important, for example, in diagnostic and therapeutic processes and in the selection of semantic aspects/features from analyzed data sets; those features allow a new way of analysis to be created. PMID:27526188

  1. Image based SAR product simulation for analysis

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new product simulation method is described that also employs a real SAR input image; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented, and their advantages and disadvantages are discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used to verify the concept: input images from ascending orbits were converted into images from a descending orbit, and the results were compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  2. Imaging biomarkers in multiple Sclerosis: From image analysis to population imaging.

    PubMed

    Barillot, Christian; Edan, Gilles; Commowick, Olivier

    2016-10-01

    The production of imaging data in medicine increases more rapidly than the capacity of computing models to extract information from it. The grand challenges of better understanding the brain, offering better care for neurological disorders, and stimulating new drug design will not be achieved without significant advances in computational neuroscience. The road to success is to develop a new, generic, computational methodology and to confront and validate this methodology on relevant diseases with adapted computational infrastructures. This new concept sustains the need to build new research paradigms to better understand the natural history of the pathology at the early phase; to better aggregate data that will provide the most complete representation of the pathology in order to better correlate imaging with other relevant features such as clinical, biological or genetic data. In this context, one of the major challenges of neuroimaging in clinical neurosciences is to detect quantitative signs of pathological evolution as early as possible to prevent disease progression, evaluate therapeutic protocols or even better understand and model the natural history of a given neurological pathology. Many diseases encompass brain alterations often not visible on conventional MRI sequences, especially in normal appearing brain tissues (NABT). MRI has often a low specificity for differentiating between possible pathological changes which could help in discriminating between the different pathological stages or grades. The objective of medical image analysis procedures is to define new quantitative neuroimaging biomarkers to track the evolution of the pathology at different levels. This paper illustrates this issue in one acute neuro-inflammatory pathology: Multiple Sclerosis (MS). It exhibits the current medical image analysis approaches and explains how this field of research will evolve in the next decade to integrate larger scale of information at the temporal, cellular

  3. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  4. Atomic force microscope, molecular imaging, and analysis.

    PubMed

    Chen, Shu-wen W; Teulon, Jean-Marie; Godon, Christian; Pellequer, Jean-Luc

    2016-01-01

    Image visibility is a central issue in analyzing all kinds of microscopic images. An increase of intensity contrast helps to raise the image visibility, thereby to reveal fine image features. Accordingly, a proper evaluation of results with current imaging parameters can be used for feedback on future imaging experiments. In this work, we have applied the Laplacian function of image intensity as either an additive component (Laplacian mask) or a multiplying factor (Laplacian weight) for enhancing image contrast of high-resolution AFM images of two molecular systems, an unknown protein imaged in air, provided by AFM COST Action TD1002 (http://www.afm4nanomedbio.eu/), and tobacco mosaic virus (TMV) particles imaged in liquid. Based on both visual inspection and quantitative representation of contrast measurements, we found that the Laplacian weight is more effective than the Laplacian mask for the unknown protein, whereas for the TMV system the strengthened Laplacian mask is superior to the Laplacian weight. The present results indicate that a mathematical function, as exemplified by the Laplacian function, may yield varied processing effects with different operations. To interpret the diversity of molecular structure and topology in images, an explicit expression for processing procedures should be included in scientific reports alongside instrumental setups. PMID:26224520
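    The two operations compared in the paper can be sketched roughly as follows, with the Laplacian either combined additively (mask) or used as a multiplying factor (weight); the exact formulas below are simplified assumptions of ours, not the authors' definitions:

```python
# Illustrative contrast enhancement with a 4-neighbour discrete Laplacian,
# in additive ("mask") and multiplicative ("weight") variants.

def laplacian(img, r, c):
    """4-neighbour discrete Laplacian with edge clamping."""
    rows, cols = len(img), len(img[0])
    def at(i, j):
        return img[min(max(i, 0), rows - 1)][min(max(j, 0), cols - 1)]
    return at(r-1, c) + at(r+1, c) + at(r, c-1) + at(r, c+1) - 4 * at(r, c)

def enhance(img, k=1.0, mode="mask"):
    """mode='mask': additive sharpening I - k*Lap(I).
    mode='weight': multiplicative weighting I * (1 + k*|Lap(I)|)."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            L = laplacian(img, r, c)
            if mode == "mask":
                out[r][c] = img[r][c] - k * L
            else:
                out[r][c] = img[r][c] * (1 + k * abs(L))
    return out
```

Across an intensity step, the mask variant pushes the two sides apart, which is the contrast increase the paper evaluates quantitatively.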

  5. Wndchrm – an open source utility for biological image analysis

    PubMed Central

    Shamir, Lior; Orlov, Nikita; Eckley, D Mark; Macura, Tomasz; Johnston, Josiah; Goldberg, Ilya G

    2008-01-01

    Background Biological imaging is an emerging field, covering a wide range of applications in biological and clinical research. However, while machinery for automated experimenting and data acquisition has been developing rapidly in the past years, automated image analysis often introduces a bottleneck in high content screening. Methods Wndchrm is an open source utility for biological image analysis. The software works by first extracting image content descriptors from the raw image, image transforms, and compound image transforms. Then, the most informative features are selected, and the feature vector of each image is used for classification and similarity measurement. Results Wndchrm has been tested using several publicly available biological datasets, and provided results which are favorably comparable to the performance of task-specific algorithms developed for these datasets. The simple user interface allows researchers who are not knowledgeable in computer vision methods and have no background in computer programming to apply image analysis to their data. Conclusion We suggest that wndchrm can be effectively used for a wide range of biological image analysis tasks. Using wndchrm can allow scientists to perform automated biological image analysis while avoiding the costly challenge of implementing computer vision and pattern recognition algorithms. PMID:18611266

  6. Myocardial Perfusion Imaging with a Solid State Camera: Simulation of a Very Low Dose Imaging Protocol

    PubMed Central

    Nakazato, Ryo; Berman, Daniel S.; Hayes, Sean W.; Fish, Mathews; Padgett, Richard; Xu, Yuan; Lemley, Mark; Baavour, Rafael; Roth, Nathaniel; Slomka, Piotr J.

    2012-01-01

    High-sensitivity dedicated cardiac camera systems provide an opportunity to lower injected doses for SPECT myocardial perfusion imaging (MPI), but the exact limits for lowering doses have not been determined. List-mode data acquisition allows for reconstruction of various fractions of the acquired counts, allowing a simulation of gradually lower administered dose. We aimed to determine the feasibility of very low dose MPI by exploring the minimal count level in the myocardium for accurate MPI. Methods: Seventy-nine patients were studied (mean body mass index 30.0 ± 6.6, range 20.2-54.0 kg/m2) who underwent 1-day standard-dose 99mTc-sestamibi exercise or adenosine rest/stress MPI for clinical indications employing a Cadmium Zinc Telluride dedicated cardiac camera. Imaging time was 14 min with 803 ± 200 MBq (21.7 ± 5.4 mCi) of 99mTc injected at stress. To simulate clinical scans with a lower dose at that imaging time, we reframed the list-mode raw data to contain count fractions of the original scan. Accordingly, 6 stress-equivalent datasets were reconstructed corresponding to each fraction of the original scan. Automated QPS/QGS software was used to quantify total perfusion deficit (TPD) and ejection fraction (EF) for all 553 datasets. The minimal acceptable count was determined based on a previous report on the repeatability of same-day, same-injection Anger camera studies. Pearson correlation coefficients and the SD of differences in TPD for all scans were calculated. Results: The correlations of quantitative perfusion and function analysis were excellent for both global and regional analysis on all simulated low-count scans (all r ≥ 0.95, p < 0.0001). The minimal acceptable count was determined to be 1.0 million counts for the left ventricular region. At this count level, the SD of differences was 1.7% for TPD and 4.2% for EF. This count level would correspond to a 92.5 MBq (2.5 mCi) injected dose for the 14 min acquisition. Conclusion: 1.0 million myocardial count images appear to be
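    Reconstructing a fraction of list-mode counts can be mimicked by binomial thinning: keep each recorded event independently with the desired probability. A hypothetical sketch, not the study's reframing software:

```python
# Illustrative binomial thinning of list-mode events to simulate a lower
# injected dose at a fixed acquisition time.
import random

def thin_events(events, fraction, seed=0):
    """Keep each event independently with probability `fraction`.
    A fixed seed makes the simulation reproducible."""
    rng = random.Random(seed)
    return [e for e in events if rng.random() < fraction]
```

Thinning at 0.5, for instance, leaves roughly half of the events, statistically equivalent to halving the administered activity for a Poisson-counting detector.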

  7. Wave-Optics Analysis of Pupil Imaging

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.; Bos, Brent J.

    2006-01-01

    Pupil imaging performance is analyzed from the perspective of physical optics. A multi-plane diffraction model is constructed by propagating the scalar electromagnetic field, surface by surface, along the optical path comprising the pupil imaging optical system. Modeling results are compared with pupil images collected in the laboratory. The experimental setup, although generic for pupil imaging systems in general, has application to the James Webb Space Telescope (JWST) optical system characterization where the pupil images are used as a constraint to the wavefront sensing and control process. Practical design considerations follow from the diffraction modeling which are discussed in the context of the JWST Observatory.

  8. Fan fault diagnosis based on symmetrized dot pattern analysis and image matching

    NASA Astrophysics Data System (ADS)

    Xu, Xiaogang; Liu, Haixiao; Zhu, Hao; Wang, Songling

    2016-07-01

    To detect the mechanical failure of fans, a new diagnostic method based on the symmetrized dot pattern (SDP) analysis and image matching is proposed. Vibration signals of 13 kinds of running states are acquired on a centrifugal fan test bed and reconstructed by the SDP technique. The SDP pattern templates of each running state are established. An image matching method is performed to diagnose the fault. In order to improve the diagnostic accuracy, the single template, multiple templates and clustering fault templates are used to perform the image matching.
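    A symmetrized dot pattern maps each signal sample to mirrored polar points whose radius and angle encode amplitudes at lag-separated instants; the following sketch uses commonly cited SDP parameters (symmetry angle, angular gain) as assumptions rather than the paper's exact settings:

```python
# Illustrative symmetrized dot pattern (SDP) transform of a vibration
# signal: each sample pair (x_i, x_{i+lag}) becomes mirrored polar dots
# repeated around the symmetry axes.
import math

def sdp_points(signal, lag=1, phi_deg=60.0, zeta_deg=30.0):
    """Return a list of (radius, angle_rad) dots. Radius is the
    normalised amplitude at i; the angular offset encodes the
    normalised amplitude at i+lag, mirrored about each axis."""
    lo, hi = min(signal), max(signal)
    span = (hi - lo) or 1.0          # guard against a constant signal
    pts = []
    n_axes = int(360 / phi_deg)      # number of symmetry axes
    for i in range(len(signal) - lag):
        r = (signal[i] - lo) / span
        dtheta = zeta_deg * (signal[i + lag] - lo) / span
        for m in range(n_axes):
            base = m * phi_deg
            pts.append((r, math.radians(base + dtheta)))
            pts.append((r, math.radians(base - dtheta)))
    return pts
```

Different running states produce visually distinct petal-like patterns, which is what the template-matching stage then compares.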

  9. Advanced image analysis for the preservation of cultural heritage

    NASA Astrophysics Data System (ADS)

    France, Fenella G.; Christens-Barry, William; Toth, Michael B.; Boydston, Kenneth

    2010-02-01

    The Library of Congress' Preservation Research and Testing Division has established an advanced preservation studies scientific program for research and analysis of the diverse range of cultural heritage objects in its collection. Using this system, the Library is currently developing specialized integrated research methodologies for extending preservation analytical capacities through non-destructive hyperspectral imaging of cultural objects. The research program has revealed key information to support preservation specialists, scholars and other institutions. The approach requires close and ongoing collaboration between a range of scientific and cultural heritage personnel - imaging and preservation scientists, art historians, curators, conservators and technology analysts. A research project on the Pierre L'Enfant Plan of Washington DC, 1791, was undertaken to implement and advance the image analysis capabilities of the imaging system. Innovative imaging options and analysis techniques allow greater processing and analysis capacities to establish the imaging technique as the initial non-invasive analysis and documentation step in all cultural heritage analyses. Mapping spectral responses, collecting organic and inorganic data, topographic semi-microscopic imaging, and creating full-spectrum images have greatly extended this capacity beyond a simple image capture technique. Linking hyperspectral data with other non-destructive analyses has further enhanced the research potential of this image analysis technique.

  10. Three modality image registration of brain SPECT/CT and MR images for quantitative analysis of dopamine transporter imaging

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Yuzuho; Takeda, Yuta; Hara, Takeshi; Zhou, Xiangrong; Matsusako, Masaki; Tanaka, Yuki; Hosoya, Kazuhiko; Nihei, Tsutomu; Katafuchi, Tetsuro; Fujita, Hiroshi

    2016-03-01

    Important features in Parkinson's disease (PD) are degeneration and loss of dopamine neurons in the corpus striatum. 123I-FP-CIT can visualize the activity of the dopamine neurons. The activity ratio of background to corpus striatum is used for the diagnosis of PD and dementia with Lewy bodies (DLB). The specific activity can be observed in the corpus striatum on SPECT images, but the location and shape of the corpus striatum are often lost on SPECT images alone because of low uptake. In contrast, MR images can visualize the location of the corpus striatum. The purpose of this study was to realize a quantitative image analysis for SPECT images by using an image registration technique with brain MR images that can determine the region of the corpus striatum. In this study, an image fusion technique was used to fuse SPECT and MR images via an intervening CT image taken by SPECT/CT. Mutual information (MI) was used for the registration between the CT and MR images. Six SPECT/CT and four MR scans of phantom materials were taken with varying orientations. As a result of the image registrations, 16 of 24 combinations were registered within 1.3 mm. Applying the approach to 32 clinical SPECT/CT and MR cases, all of the cases were registered within 0.86 mm. In conclusion, our registration method has potential for superimposing MR images on SPECT images.
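    Mutual information between two images can be estimated from a joint grey-level histogram; this simplified sketch (bin count and names are our own choices, not the study's implementation) returns a score that rises as the images come into alignment:

```python
# Illustrative histogram-based mutual information between two images of
# equal size; MI is maximised when the intensity patterns correspond.
import math

def mutual_information(img_a, img_b, bins=8):
    """MI in nats from a joint histogram of quantised grey levels."""
    a = [v for row in img_a for v in row]
    b = [v for row in img_b for v in row]
    def bin_of(v, lo, hi):
        if hi == lo:
            return 0                  # constant image: single bin
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)
    la, ha, lb, hb = min(a), max(a), min(b), max(b)
    joint = {}
    for x, y in zip(a, b):
        key = (bin_of(x, la, ha), bin_of(y, lb, hb))
        joint[key] = joint.get(key, 0) + 1
    n = len(a)
    pa, pb = {}, {}
    for (i, j), c in joint.items():
        pa[i] = pa.get(i, 0) + c
        pb[j] = pb.get(j, 0) + c
    mi = 0.0
    for (i, j), c in joint.items():
        pij = c / n
        mi += pij * math.log(pij / ((pa[i] / n) * (pb[j] / n)))
    return mi
```

In a registration loop, a transform of one image is searched for the pose that maximises this score against the other image.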

  11. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of it on NASA's Massively Parallel Processor (MPP), are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
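
    The order-independent "globally best merges first" idea can be illustrated with a toy implementation (not Tilton's MPP algorithm): start with one region per pixel and always merge the most similar adjacent pair next, regardless of image position.

```python
import numpy as np

def best_merge_segment(img, n_regions):
    """Toy 'globally best merge first' region growing on a 2-D image:
    start with one region per pixel, then repeatedly merge the adjacent
    pair whose region means are closest, until n_regions remain.
    O(n^2) per merge - fine for a demonstration, not for real images."""
    h, w = img.shape
    label = np.arange(h * w).reshape(h, w)            # one region per pixel
    mean = {i: float(v) for i, v in enumerate(img.ravel())}
    size = {i: 1 for i in range(h * w)}
    while len(mean) > n_regions:
        best = None                                    # (criterion, a, b)
        for y in range(h):
            for x in range(w):
                for dy, dx in ((0, 1), (1, 0)):        # right and down neighbors
                    if y + dy < h and x + dx < w:
                        a, b = label[y, x], label[y + dy, x + dx]
                        if a != b:
                            d = abs(mean[a] - mean[b])
                            if best is None or d < best[0]:
                                best = (d, a, b)
        _, a, b = best
        # merge region b into a, updating the running mean
        mean[a] = (mean[a] * size[a] + mean[b] * size[b]) / (size[a] + size[b])
        size[a] += size[b]
        del mean[b], size[b]
        label[label == b] = a
    return label
```

Because the next merge is always the globally cheapest one, the result does not depend on raster order.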

  12. Femtoelectron-Based Terahertz Imaging of Hydration State in a Proton Exchange Membrane Fuel Cell

    NASA Astrophysics Data System (ADS)

    Buaphad, P.; Thamboon, P.; Kangrang, N.; Rhodes, M. W.; Thongbai, C.

    2015-08-01

    Imbalanced water management in a proton exchange membrane (PEM) fuel cell significantly reduces the cell's performance and durability. Visualization of water distribution and transport can provide greater insight toward optimization of the PEM fuel cell. In this work, we are interested in water flooding issues that occur in the flow channels on the cathode side of the PEM fuel cell. The sample cell was fabricated with a transparent acrylic window that allows optical access, and the process of flooding formation was observed in situ via a CCD camera. We then explore the potential use of terahertz (THz) imaging, consisting of a femtoelectron-based THz source and off-angle reflective-mode imaging, to identify water presence in the sample cell. We present simulations of two hydration states (water and non-water areas), which are in agreement with the THz imaging results. A line-scan plot is utilized for quantitative analysis and for defining the spatial resolution of the image. Implementing metal-mesh filtering can improve the spatial resolution of our THz imaging system.

  13. Analysis of scanning probe microscope images using wavelets.

    PubMed

    Gackenheimer, C; Cayon, L; Reifenberger, R

    2006-03-01

    The utility of wavelet transforms for analysis of scanning probe images is investigated. Simulated scanning probe images are analyzed using wavelet transforms and compared to a parallel analysis using more conventional Fourier transform techniques. The wavelet method introduced in this paper is particularly useful as an image recognition algorithm to enhance nanoscale objects of a specific scale that may be present in scanning probe images. In its present form, the applied wavelet is optimal for detecting objects with rotational symmetry. The wavelet scheme is applied to the analysis of scanning probe data to better illustrate the advantages that this new analysis tool offers. The wavelet algorithm developed for analysis of scanning probe microscope (SPM) images has been incorporated into the WSxM software which is a versatile freeware SPM analysis package. PMID:16439061
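
    A minimal sketch of a rotationally symmetric wavelet filter of the kind described: convolving the image with a 2-D "Mexican hat" kernel enhances blob-like objects of a chosen scale. The kernel shape and normalization here are assumptions, not the paper's exact wavelet.

```python
import numpy as np
from scipy.signal import fftconvolve

def mexican_hat_response(img, scale):
    """Convolve a 2-D image with a rotationally symmetric 'Mexican hat'
    (Laplacian-of-Gaussian shaped) wavelet to enhance blob-like
    features of width ~scale."""
    r = int(4 * scale)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    rho2 = (x**2 + y**2) / scale**2
    kernel = (1 - rho2 / 2) * np.exp(-rho2 / 2)   # 2-D Mexican hat
    kernel -= kernel.mean()                       # enforce zero mean
    return fftconvolve(img, kernel, mode='same')
```

The response peaks where the image contains round features whose size matches the wavelet scale, which is how scale-selective object recognition works.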

  14. Biomedical image analysis using Markov random fields & efficient linear programming.

    PubMed

    Komodakis, Nikos; Besbes, Ahmed; Glocker, Ben; Paragios, Nikos

    2009-01-01

    Computer-aided diagnosis through biomedical image analysis is increasingly considered in the health sciences. This is due to progress made on the acquisition side as well as on the processing side. In vivo visualization of human tissues, from which one can determine both anatomical and functional information, is now possible. The use of these images with efficient, intelligent mathematical and processing tools allows the interpretation of the tissues' state and facilitates the task of physicians. Segmentation and registration are the two most fundamental tools in bioimaging. The first aims to provide automatic tools for organ delineation from images, while the second focuses on establishing correspondences between observations, both inter- and intra-subject and across modalities. In this paper, we present some recent results towards a common formulation addressing these problems, based on Markov Random Fields. Such an approach is modular with respect to the application context, can be easily extended to deal with various modalities, provides guarantees on the optimality properties of the obtained solution, and is computationally efficient. PMID:19963682
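
    As a simple illustration of an MRF labeling problem of the kind described, the sketch below minimizes a unary-plus-Potts energy with iterated conditional modes (ICM). This is only a stand-in optimizer: the paper itself uses efficient linear-programming techniques on this type of energy.

```python
import numpy as np

def icm_segment(img, means, beta=1.0, iters=5):
    """Minimal MRF labeling: a unary term (squared distance to a class
    mean) plus a Potts smoothness prior, optimized with iterated
    conditional modes. Illustrative only."""
    labels = np.argmin((img[..., None] - np.asarray(means)) ** 2, axis=-1)
    h, w = img.shape
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best_l, best_e = labels[y, x], np.inf
                for l, m in enumerate(means):
                    e = (img[y, x] - m) ** 2              # unary (data) term
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy < h and 0 <= xx < w and labels[yy, xx] != l:
                            e += beta  # Potts penalty per disagreeing neighbor
                    if e < best_e:
                        best_l, best_e = l, e
                labels[y, x] = best_l
    return labels
```

The smoothness prior removes isolated noisy labels that the unary term alone would keep.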

  15. Working to make an image: an analysis of three Philip Morris corporate image media campaigns

    PubMed Central

    Szczypka, Glen; Wakefield, Melanie A; Emery, Sherry; Terry‐McElrath, Yvonne M; Flay, Brian R; Chaloupka, Frank J

    2007-01-01

    Objective To describe the nature and timing of, and population exposure to, Philip Morris USA's three explicit corporate image television advertising campaigns and explore the motivations behind each campaign. Methods : Analysis of television ratings from the largest 75 media markets in the United States, which measure the reach and frequency of population exposure to advertising; copies of all televised commercials produced by Philip Morris; and tobacco industry documents, which provide insights into the specific goals of each campaign. Findings Household exposure to the “Working to Make a Difference: the People of Philip Morris” averaged 5.37 ads/month for 27 months from 1999–2001; the “Tobacco Settlement” campaign averaged 10.05 ads/month for three months in 2000; and “PMUSA” averaged 3.11 ads/month for the last six months in 2003. The percentage of advertising exposure that was purchased in news programming in order to reach opinion leaders increased over the three campaigns from 20%, 39% and 60%, respectively. These public relations campaigns were designed to counter negative images, increase brand recognition, and improve the financial viability of the company. Conclusions Only one early media campaign focused on issues other than tobacco, whereas subsequent campaigns have been specifically concerned with tobacco issues, and more targeted to opinion leaders. The size and timing of the advertising buys appeared to be strategically crafted to maximise advertising exposure for these population subgroups during critical threats to Philip Morris's public image. PMID:17897994

  16. Evaluation of 3D multimodality image registration using receiver operating characteristic (ROC) analysis

    NASA Astrophysics Data System (ADS)

    Holton Tainter, Kerrie S.; Robb, Richard A.; Taneja, Udita; Gray, Joel E.

    1995-04-01

    Receiver operating characteristic analysis has evolved as a useful method for evaluating the discriminatory capability and efficacy of visualization. The ability of such analysis to account for the variance in decision criteria of multiple observers, multiple reading, and a wide range of difficulty in detection among case studies makes ROC especially useful for interpreting the results of a viewing experiment. We are currently using ROC analysis to evaluate the effectiveness of using fused multispectral, or complementary multimodality imaging data in the diagnostic process. The use of multispectral image recordings, gathered from multiple imaging modalities, to provide advanced image visualization and quantization capabilities in evaluating medical images is an important challenge facing medical imaging scientists. Such capabilities would potentially significantly enhance the ability of clinicians to extract scientific and diagnostic information from images. A first step in the effective use of multispectral information is the spatial registration of complementary image datasets so that a point-to-point correspondence exists between them. We are developing a paradigm of measuring the accuracy of existing image registration techniques which includes the ability to relate quantitative measurements, taken from the images themselves, to the decisions made by observers about the state of registration (SOR) of the 3D images. We have used ROC analysis to evaluate the ability of observers to discriminate between correctly registered and incorrectly registered multimodality fused images. We believe this experience is original and represents the first time that ROC analysis has been used to evaluate registered/fused images. We have simulated low-resolution and high-resolution images from real patient MR images of the brain, and fused them with the original MR to produce colorwash superposition images whose exact SOR is known.
We have also attempted to extend this analysis to
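
    The ROC methodology underlying such experiments can be illustrated with a minimal AUC computation, a generic sketch rather than the authors' analysis: the area under the ROC curve equals the probability that a randomly chosen positive case receives a higher observer score than a randomly chosen negative one.

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank-sum identity: the fraction
    of (positive, negative) pairs where the positive outscores the
    negative, with ties counted as half."""
    sp = np.asarray(scores_pos, float)[:, None]
    sn = np.asarray(scores_neg, float)[None, :]
    wins = (sp > sn).sum() + 0.5 * (sp == sn).sum()
    return float(wins / (sp.size * sn.size))
```

For a registration experiment, `scores_pos` could be observer confidence ratings for misregistered images and `scores_neg` those for correctly registered ones; AUC = 0.5 means observers cannot tell the two apart.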

  17. Image analysis for dental bone quality assessment using CBCT imaging

    NASA Astrophysics Data System (ADS)

    Suprijanto; Epsilawati, L.; Hajarini, M. S.; Juliastuti, E.; Susanti, H.

    2016-03-01

    Cone beam computerized tomography (CBCT) is one of the X-ray imaging modalities applied in dentistry. This modality can visualize the oral region in 3D at high resolution. The CBCT jaw image carries potential information for the assessment of bone quality that is often used for pre-operative implant planning. We propose a comparison method based on the normalized histogram (NH) of the region of the inter-dental septum and premolar teeth. Furthermore, the NH characteristics from normal and abnormal bone conditions are compared and analyzed. Four test parameters are proposed, i.e. the difference between teeth and bone average intensity (s), the ratio between bone and teeth average intensity (n) of the NH, the difference between teeth and bone peak value (Δp) of the NH, and the ratio between teeth and bone of the NH range (r). The results showed that n, s, and Δp have potential to be classification parameters of dental calcium density.
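
    A sketch of how the four test parameters might be computed from region-of-interest intensities. The exact NH conventions (bin count, intensity range, definition of "peak" and "range") are assumptions here, not the paper's definitions.

```python
import numpy as np

def nh_parameters(teeth, bone, bins=64):
    """Illustrative s, n, Δp, r from normalized histograms (NH) of
    teeth and bone ROI intensities, assumed scaled to [0, 1]."""
    teeth, bone = np.asarray(teeth, float), np.asarray(bone, float)
    def nh(x):
        h, edges = np.histogram(x, bins=bins, range=(0.0, 1.0))
        return h / h.sum(), edges
    ht, et = nh(teeth)
    hb, eb = nh(bone)
    s = teeth.mean() - bone.mean()          # mean-intensity difference
    n = bone.mean() / teeth.mean()          # mean-intensity ratio
    dp = ht.max() - hb.max()                # NH peak-height difference
    def span(h, e):                         # width of the occupied bins
        nz = np.nonzero(h)[0]
        return e[nz[-1] + 1] - e[nz[0]]
    r = span(ht, et) / span(hb, eb)         # NH intensity-range ratio
    return s, n, dp, r
```

With denser (brighter) teeth and normal bone, one expects s > 0 and n < 1, matching the paper's use of these quantities as classification features.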

  18. Analysis of Anechoic Chamber Testing of the Hurricane Imaging Radiometer

    NASA Technical Reports Server (NTRS)

    Fenigstein, David; Ruf, Chris; James, Mark; Simmons, David; Miller, Timothy; Buckley, Courtney

    2010-01-01

    The Hurricane Imaging Radiometer System (HIRAD) is a new airborne passive microwave remote sensor developed to observe hurricanes. HIRAD incorporates synthetic thinned array radiometry technology, which uses Fourier synthesis to reconstruct images from an array of correlated antenna elements. The HIRAD system response to a point emitter has been measured in an anechoic chamber. With these data, a Fourier inversion image reconstruction algorithm has been developed. Performance analysis of the apparatus is presented, along with an overview of the image reconstruction algorithm.

  19. Image acquisitions, processing and analysis in the process of obtaining characteristics of horse navicular bone

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Włodarek, J.; Przybylak, A.; Przybył, K.; Wojcieszak, D.; Czekała, W.; Ludwiczak, A.; Boniecki, P.; Koszela, K.; Przybył, J.; Skwarcz, J.

    2015-07-01

    The aim of this study was to investigate the possibility of using methods of computer image analysis for the assessment and classification of morphological variability and the state of health of the horse navicular bone. The assumption was that classification could be based on information contained in two-dimensional digital images of the navicular bone together with information on the horse's health. The first step in the research was to define the classes of analyzed bones, and then to use methods of computer image analysis to obtain characteristics from these images. These characteristics were correlated with data concerning the animal, such as: side of hooves, number of navicular syndrome (scale 0-3), type, sex, age, weight, information about lace, information about heel. This paper is an introduction to the study of the use of neural image analysis in the diagnosis of navicular bone syndrome. The prepared method can serve as a starting point for a non-invasive way to assess the condition of the horse navicular bone.

  20. Image and Data-analysis Tools For Paleoclimatic Reconstructions

    NASA Astrophysics Data System (ADS)

    Pozzi, M.

    Proposed here is a directory of instruments and computing resources chosen to address the problems involved in paleoclimatic reconstructions. The following points are discussed in particular: 1) Numerical analysis of paleo-data (fossil abundances, species analyses, isotopic signals, chemical-physical parameters, biological data): a) statistical analyses (univariate, diversity, rarefaction, correlation, ANOVA, F and T tests, Chi^2) b) multidimensional analyses (principal components, correspondence, cluster analysis, seriation, discriminant, autocorrelation, spectral analysis) c) neural analyses (backpropagation net, Kohonen feature map, Hopfield net, genetic algorithms). 2) Graphical analysis (visualization tools) of paleo-data (quantitative and qualitative fossil abundances, species analyses, isotopic signals, chemical-physical parameters): a) 2-D data analyses (graph, histogram, ternary, survivorship) b) 3-D data analyses (direct volume rendering, isosurfaces, segmentation, surface reconstruction, surface simplification, generation of tetrahedral grids). 3) Quantitative and qualitative digital image analysis (macro- and microfossil image analysis, Scanning Electron Microscope and Optical Polarized Microscope image capture and analysis, morphometric data analysis, 3-D reconstructions): a) 2D image analysis (correction of image defects, enhancement of image detail, converting texture and directionality to grey scale or colour differences, visual enhancement using pseudo-colour, pseudo-3D, thresholding of image features, binary image processing, measurements, stereological measurements, measuring features on a white background) b) 3D image analysis (basic stereological procedures, two-dimensional structures; area fraction from the point count, volume fraction from the point count, three-dimensional structures: surface area and the line intercept count, three-dimensional microstructures; line length and the

  1. Image analysis of neuropsychological test responses

    NASA Astrophysics Data System (ADS)

    Smith, Stephen L.; Hiller, Darren L.

    1996-04-01

    This paper reports recent advances in the development of an automated approach to neuropsychological testing. High performance image analysis algorithms have been developed as part of a convenient and non-invasive computer-based system to provide an objective assessment of patient responses to figure-copying tests. Tests of this type are important in determining the neurological function of patients following stroke through evaluation of their visuo-spatial performance. Many conventional neuropsychological tests suffer from the serious drawback that subjective judgement on the part of the tester is required in the measurement of the patient's response, which leads to a qualitative neuropsychological assessment that can be both inconsistent and inaccurate. Results for this automated approach are presented for three clinical populations: patients who have suffered right-hemisphere stroke are compared with adults with no known neurological disorder, and a population of normal 11-year-old school children is included to demonstrate the sensitivity of the technique. As well as providing a more reliable and consistent diagnosis, this technique is sufficiently sensitive to monitor a patient's progress over a period of time and will provide the neuropsychologist with a practical means of evaluating the effectiveness of therapy or medication administered as part of a rehabilitation program.

  2. Imaging of spatiotemporal coincident states by DC optical tomography.

    PubMed

    Graber, Harry L; Pei, Yaling; Barbour, Randall L

    2002-08-01

    The utility of optical tomography as a practical imaging modality has, thus far, been limited by its intrinsically low spatial resolution and quantitative accuracy. Recently, we have argued that a broad range of physiological phenomena might be accurately studied by adopting this technology to investigate dynamic states (Schmitz et al., 2000; Barbour et al., 2000; Graber et al., 2000; Barbour et al., 2001; and Barbour et al., 1999). One such phenomenon holding considerable significance is the dynamics of the vasculature, which has been well characterized as being both spatially and temporally heterogeneous. In this paper, we have modeled such heterogeneity in the limiting case of spatiotemporal coincident behavior involving optical contrast features, in an effort to define the expected limits with which dynamic states can be characterized using two newly described reconstruction methods that evaluate normalized detector data: the normalized difference method (NDM) and the normalized constraint method (NCM). Influencing the design of these studies is the expectation that spatially coincident temporal variations in both the absorption and scattering properties of tissue can occur in vivo. We have also chosen to model dc illumination techniques, in recognition of their favorable performance and cost for practical systems. This choice was made with full knowledge of theoretical findings arguing that separation of the optical absorption and scattering coefficients under these conditions is not possible. Results obtained show that the NDM algorithm provides for good spatial resolution and excellent characterization of the temporal behavior of optical properties but is subject to interparameter crosstalk. The NCM algorithm, while also providing excellent characterization of temporal behavior, provides for improved spatial resolution, as well as for improved separation of absorption and scattering coefficients. A discussion is provided to reconcile these findings with

  3. Resting-state blood oxygen level-dependent functional magnetic resonance imaging for presurgical planning.

    PubMed

    Kamran, Mudassar; Hacker, Carl D; Allen, Monica G; Mitchell, Timothy J; Leuthardt, Eric C; Snyder, Abraham Z; Shimony, Joshua S

    2014-11-01

    Resting-state functional MR imaging (rsfMR imaging) measures spontaneous fluctuations in the blood oxygen level-dependent (BOLD) signal and can be used to elucidate the brain's functional organization. It is used to simultaneously assess multiple distributed resting-state networks. Unlike task-based functional MR imaging, rsfMR imaging does not require task performance. This article presents a brief introduction of rsfMR imaging processing methods followed by a detailed discussion on the use of rsfMR imaging in presurgical planning. Example cases are provided to highlight the strengths and limitations of the technique. PMID:25441506

  4. EVALUATION OF COLOR ALTERATION ON FABRICS BY IMAGE ANALYSIS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Evaluation of color changes is usually done manually and is often inconsistent. Image analysis provides a method in which to evaluate color-related testing that is not only simple, but also consistent. Image analysis can also be used to measure areas that were considered too large for the colorimet...

  5. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    PubMed

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution. PMID:26554504

  6. Slide Set: reproducible image analysis and batch processing with ImageJ

    PubMed Central

    Nanes, Benjamin A.

    2015-01-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets that are common in biology. This paper introduces Slide Set, a framework for reproducible image analysis and batch processing with ImageJ. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution. PMID:26554504

  7. Analysis of state of vehicular scars on Arctic Tundra, Alaska

    NASA Technical Reports Server (NTRS)

    Lathram, E. H.

    1974-01-01

    Identification on ERTS images of severe vehicular scars in the northern Alaska tundra suggests that, if such scars are of an intensity or have spread to a dimension such that they can be resolved by ERTS sensors (20 meters), they can be identified and their state monitored by the use of ERTS images. Field review of the state of vehicular scars in the Umiat area indicates that all are revegetating at varying rates and are approaching a stable state.

  8. DETERMINING TITAN'S SPIN STATE FROM CASSINI RADAR IMAGES

    SciTech Connect

    Stiles, Bryan W.; Hensley, Scott; Ostro, Steven J.; Callahan, Philip S.; Gim, Yonggyu; Hamilton, Gary; Johnson, William T. K.; West, Richard D.; Kirk, Randolph L.; Lee, Ella; Lorenz, Ralph D.; Allison, Michael D.; Iess, Luciano; Del Marmo, Paolo Perci

    2008-05-15

    For some 19 areas of Titan's surface, the Cassini RADAR instrument has obtained synthetic aperture radar (SAR) images during two different flybys. The time interval between flybys varies from several weeks to two years. We have used the apparent misregistration (by 10-30 km) of features between separate flybys to construct a refined model of Titan's spin state, estimating six parameters: north pole right ascension and declination, spin rate, and these quantities' first time derivatives. We determine a pole location with right ascension of 39.48 degrees and declination of 83.43 degrees corresponding to a 0.3 degree obliquity. We determine the spin rate to be 22.5781 deg day{sup -1} or 0.001 deg day{sup -1} faster than the synchronous spin rate. Our estimated corrections to the pole and spin rate exceed their corresponding standard errors by factors of 80 and 8, respectively. We also found that the rate of change in the pole right ascension is -30 deg century{sup -1}, ten times faster than right ascension rate of change for the orbit normal. The spin rate is increasing at a rate of 0.05 deg day{sup -1} per century. We observed no significant change in pole declination over the period for which we have data. Applying our pole correction reduces the feature misregistration from tens of km to 3 km. Applying the spin rate and derivative corrections further reduces the misregistration to 1.2 km.

  9. Determining titan's spin state from cassini radar images

    USGS Publications Warehouse

    Stiles, B.W.; Kirk, R.L.; Lorenz, R.D.; Hensley, S.; Lee, E.; Ostro, S.J.; Allison, M.D.; Callahan, P.S.; Gim, Y.; Iess, L.; Del Marmo, P.P.; Hamilton, G.; Johnson, W.T.K.; West, R.D.

    2008-01-01

    For some 19 areas of Titan's surface, the Cassini RADAR instrument has obtained synthetic aperture radar (SAR) images during two different flybys. The time interval between flybys varies from several weeks to two years. We have used the apparent misregistration (by 10-30 km) of features between separate flybys to construct a refined model of Titan's spin state, estimating six parameters: north pole right ascension and declination, spin rate, and these quantities' first time derivatives. We determine a pole location with right ascension of 39.48 degrees and declination of 83.43 degrees corresponding to a 0.3 degree obliquity. We determine the spin rate to be 22.5781 deg day^-1 or 0.001 deg day^-1 faster than the synchronous spin rate. Our estimated corrections to the pole and spin rate exceed their corresponding standard errors by factors of 80 and 8, respectively. We also found that the rate of change in the pole right ascension is -30 deg century^-1, ten times faster than right ascension rate of change for the orbit normal. The spin rate is increasing at a rate of 0.05 deg day^-1 per century. We observed no significant change in pole declination over the period for which we have data. Applying our pole correction reduces the feature misregistration from tens of km to 3 km. Applying the spin rate and derivative corrections further reduces the misregistration to 1.2 km. © 2008. The American Astronomical Society. All rights reserved.

  10. Global pattern analysis and classification of dermoscopic images using textons

    NASA Astrophysics Data System (ADS)

    Sadeghi, Maryam; Lee, Tim K.; McLean, David; Lui, Harvey; Atkins, M. Stella

    2012-02-01

    Detecting and classifying global dermoscopic patterns are crucial steps for distinguishing melanocytic lesions from non-melanocytic ones. An important stage of melanoma diagnosis uses pattern analysis methods such as the 7-point checklist, the Menzies method, etc. In this paper, we present a novel approach to the texture analysis and classification of 5 classes of global lesion patterns (reticular, globular, cobblestone, homogeneous, and parallel) in dermoscopic images. Our statistical approach models texture by the joint probability distribution of filter responses using a comprehensive set of state-of-the-art filter banks. This distribution is represented by the frequency histogram of filter-response cluster centers, called textons. We have also examined two other methods, the Joint Distribution of Intensities (JDI) and the Convolutional Restricted Boltzmann Machine (CRBM), to learn pattern-specific features to be used for textons. The classification performance is compared over the Leung and Malik (LM), Root Filter Set (RFS), Maximum Response (MR8), Schmid, and Laws filters and our proposed filter set, as well as CRBM and JDI. We analyzed 375 images of the 5 classes of patterns. Our experiments show that the joint distribution of color (JDC) in the L*a*b* color space outperforms the other color spaces with a correct classification rate of 86.8%.
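
    The texton representation, assigning each pixel's filter-response vector to the nearest cluster center and histogramming the assignments, can be sketched as follows. The textons themselves are assumed to have been learned beforehand (e.g. by k-means over training responses); this is not the authors' pipeline.

```python
import numpy as np

def texton_histogram(responses, textons):
    """Texture descriptor: assign each pixel's filter-response vector to
    its nearest texton (cluster center) and return the normalized
    frequency histogram of assignments.
    responses: (n_pixels, n_filters); textons: (k, n_filters)."""
    d = ((responses[:, None, :] - textons[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)                       # nearest texton per pixel
    hist = np.bincount(idx, minlength=len(textons)).astype(float)
    return hist / hist.sum()
```

Classification then compares such histograms (e.g. by chi-squared distance) between a query lesion and labeled training images.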

  11. Image analysis for discrimination of cervical neoplasia

    NASA Astrophysics Data System (ADS)

    Pogue, Brian W.; Mycek, Mary-Ann; Harper, Diane

    2000-01-01

    Colposcopy involves visual imaging of the cervix for patients who have exhibited some prior indication of abnormality, and the major goals are to visually inspect for any malignancies and to guide biopsy sampling. Currently, colposcopy equipment is being upgraded in many health care centers to incorporate digital image acquisition and archiving. These permanent images can be analyzed for characteristic features and color patterns which may enhance the specificity and objectivity of the routine exam. In this study a series of images from patients with biopsy-confirmed cervical intraepithelial neoplasia stage 2/3 are compared with images from patients with biopsy-confirmed immature squamous metaplasia, with the goal of determining optimal criteria for automated discrimination between them. All images were separated into their red, green, and blue channels, and comparisons were made between relative intensity, intensity variation, spatial frequencies, fractal dimension, and Euler number. This study indicates that computer-based processing of cervical images can provide some discrimination of the type of tissue features which are important for clinical evaluation, with the Euler number being the most clinically useful feature to discriminate metaplasia from neoplasia. Also there was a strong indication that morphology observed in the blue channel of the image provided more information about epithelial cell changes. Further research in this field can lead to advances in computer-aided diagnosis as well as the potential for online image enhancement in digital colposcopy.
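
    The Euler number of a binary (thresholded) image is the number of connected objects minus the number of holes. The sketch below is illustrative only - connectivity conventions vary, and this is not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def euler_number(binary):
    """2-D Euler number sketch: connected foreground objects minus
    holes, where a hole is a background component not touching the
    image border. Uses scipy's default 4-connectivity labeling."""
    _, n_obj = ndimage.label(binary)
    bg, n_bg = ndimage.label(~binary.astype(bool))
    # background labels present on the border are not holes
    border = set(bg[0]) | set(bg[-1]) | set(bg[:, 0]) | set(bg[:, -1])
    n_holes = sum(1 for l in range(1, n_bg + 1) if l not in border)
    return n_obj - n_holes
```

A ring-shaped region (one object, one hole) thus has Euler number 0, while a solid region has 1; the study exploits the fact that metaplastic and neoplastic tissue produce differently punctured thresholded patterns.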

  12. Sensor for real-time determining the polarization state distribution in the object images

    NASA Astrophysics Data System (ADS)

    Kilosanidze, Barbara; Kakauridze, George; Kvernadze, Teimuraz; Kurkhuli, Georgi

    2015-10-01

    An innovative real-time polarimetric method is presented based on the integral polarization-holographic diffraction element developed by us. This element is suggested for use in real-time analysis of the polarization state of light, for example to help highlight military equipment in a scene. In the process of diffraction, the element decomposes incoming light onto orthogonal circular and linear bases. The simultaneous measurement of the intensities of the four diffracted beams by means of photodetectors, together with appropriate software, enables the polarization state of the analyzed light (all four Stokes parameters) and its change to be obtained in real time. The element with photodetectors and software constitutes a sensor of the polarization state. Such a sensor allows the point-by-point distribution of the polarization state in the images of objects to be determined. The spectral working range of the element is 530-1600 nm. The sensor is compact, lightweight, and relatively cheap, and it can be easily installed on any space or airborne platform. It has no mechanically moving or electronically controlled elements, and the speed of its operation is limited only by computer processing. Such a sensor is proposed for use in determining the characteristics of object surfaces in optical remote sensing, through the distribution of the polarization state of light in the image of the recognizable object and the dispersion of this distribution, which provides additional information for identifying an object. The detection of a useful signal of predetermined polarization against a background of statistically random noise from an underlying surface is also possible. The application of the sensor is also considered for the non-destructive determination of the distribution of the stressed state in different constructions, based on the distribution of the polarization state of light reflected from the object under
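
    A textbook four-measurement mapping from analyzer-channel intensities to Stokes parameters is sketched below. The real element's channel-to-Stokes mapping would come from its calibration matrix, so the channel assignments here are assumptions.

```python
import numpy as np

def stokes_from_channels(i_r, i_l, i_0, i_45):
    """Stokes vector from four analyzer channels: right/left circular
    and 0-degree / 45-degree linear (a standard four-measurement
    polarimetry scheme, not the element's calibrated mapping)."""
    s0 = i_r + i_l        # total intensity
    s3 = i_r - i_l        # right- vs left-circular component
    s1 = 2 * i_0 - s0     # horizontal vs vertical linear
    s2 = 2 * i_45 - s0    # +45 vs -45 linear
    return np.array([s0, s1, s2, s3])
```

Applying this per pixel yields the point-by-point distribution of the polarization state across an object's image.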

  13. A linear mixture analysis-based compression for hyperspectral image analysis

    SciTech Connect

    C. I. Chang; I. W. Ginsberg

    2000-06-30

    In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly target detection and classification. Unlike most compression techniques that directly deal with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis. On some occasions, it even improves analysis performance. Airborne visible infrared imaging spectrometer (AVIRIS) data experiments demonstrate that it can effectively detect and classify targets while achieving very high compression ratios.
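
    Fully constrained least squares unmixing (nonnegative abundances that sum to one) can be sketched with the standard trick of appending a heavily weighted sum-to-one row to a nonnegative least squares solve. Details such as the weight `delta` are assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(pixel, endmembers, delta=1e3):
    """Fully constrained least squares unmixing sketch: solve for
    abundances a >= 0 with sum(a) ~= 1 by augmenting the system with a
    weighted sum-to-one row and calling nonnegative least squares.
    endmembers: (n_bands, n_targets); pixel: (n_bands,)."""
    A = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    b = np.append(pixel, delta)
    a, _ = nnls(A, b)
    return a
```

The resulting abundance fractional image (one such vector per pixel) is what the compression scheme encodes instead of raw gray levels.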

  14. Analysis of airborne MAIS imaging spectrometric data for mineral exploration

    SciTech Connect

    Wang Jinnian; Zheng Lanfen; Tong Qingxi

    1996-11-01

    The high spectral resolution of imaging spectrometer systems makes quantitative analysis and mapping of surface composition possible. The key issue is the quantitative approach for analyzing surface parameters from imaging spectrometer data. This paper describes the methods and stages of quantitative analysis: (1) extracting surface reflectance from the imaging spectrometer image, where laboratory and in-flight field measurements are conducted to calibrate the imaging spectrometer data, and atmospheric correction is used to obtain ground reflectance via the empirical line method and radiative transfer modeling; (2) determining the quantitative relationship between absorption band parameters from the imaging spectrometer data and the chemical composition of minerals; (3) spectral comparison between spectra from a spectral library and spectra derived from the imagery. A wavelet-analysis-based spectrum-matching technique for quantitative analysis of imaging spectrometer data has been developed. Airborne MAIS imaging spectrometer data were used for analysis, and the results have been applied to mineral and petroleum exploration in the Tarim Basin area, China. 8 refs., 8 figs.

  15. A blind dual color images watermarking based on IWT and state coding

    NASA Astrophysics Data System (ADS)

    Su, Qingtang; Niu, Yugang; Liu, Xianxi; Zhu, Yu

    2012-04-01

    In this paper, a state-coding based blind watermarking algorithm is proposed to embed a color image watermark into a color host image. The technique of state coding, which makes the state code of a data set equal to the hidden watermark information, is introduced. When embedding the watermark, using the Integer Wavelet Transform (IWT) and the rules of state coding, the R, G and B components of the color image watermark are embedded into the Y, Cr and Cb components of the color host image. Moreover, the rules of state coding are also used to extract the watermark from the watermarked image without resorting to the original watermark or the original host image. Experimental results show that the proposed watermarking algorithm not only meets the demands of invisibility and robustness of the watermark, but also performs well compared with the other methods considered in this work.

  16. Low-cost image analysis system

    SciTech Connect

    Lassahn, G.D.

    1995-01-01

    The author has developed an Automatic Target Recognition system based on parallel processing using transputers. This approach gives a powerful, fast image processing system at relatively low cost. This system scans multi-sensor (e.g., several infrared bands) image data to find any identifiable target, such as a physical object or a type of vegetation.

  17. Analysis of Images from Experiments Investigating Fragmentation of Materials

    SciTech Connect

    Kamath, C; Hurricane, O

    2007-09-10

    Image processing techniques have been used extensively to identify objects of interest in image data and extract representative characteristics for these objects. However, this can be a challenge due to the presence of noise in the images and the variation across images in a dataset. When the number of images to be analyzed is large, the algorithms used must also be relatively insensitive to the choice of parameters and lend themselves to partial or full automation. This not only avoids manual analysis which can be time consuming and error-prone, but also makes the analysis reproducible, thus enabling comparisons between images which have been processed in an identical manner. In this paper, we describe our approach to extracting features for objects of interest in experimental images. Focusing on the specific problem of fragmentation of materials, we show how we can extract statistics for the fragments and the gaps between them.
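
    As a toy illustration of extracting fragment and gap statistics (a drastic simplification of the paper's 2-D image pipeline), consider a single thresholded scan line: fragments are runs of foreground pixels and gaps are the runs of background between them. All names and the sample profile below are invented for illustration.

```python
import numpy as np

def run_lengths(mask):
    # Lengths of consecutive runs of True (fragments) and False (gaps)
    # along a 1-D binary profile, e.g. one scan line of a thresholded image.
    mask = np.asarray(mask, dtype=bool)
    # indices where the value changes (cast to int8: diff on bools is unsupported)
    edges = np.flatnonzero(np.diff(mask.astype(np.int8))) + 1
    starts = np.concatenate(([0], edges))
    lengths = np.diff(np.concatenate((starts, [mask.size])))
    values = mask[starts]
    return lengths[values], lengths[~values]   # (fragment lengths, gap lengths)

profile = [1, 1, 0, 0, 0, 1, 1, 1, 0, 1]
frags, gaps = run_lengths(profile)
```

    Summary statistics (mean fragment size, gap distribution, counts) then follow directly from the two arrays, which is the kind of per-object characterization the abstract describes.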

  18. Online evaluation of a commercial video image analysis system (Computer Vision System) to predict beef carcass red meat yield and for augmenting the assignment of USDA yield grades. United States Department of Agriculture.

    PubMed

    Cannell, R C; Belk, K E; Tatum, J D; Wise, J W; Chapman, P L; Scanga, J A; Smith, G C

    2002-05-01

    Objective quantification of differences in wholesale cut yields of beef carcasses at plant chain speeds is important for the application of value-based marketing. This study was conducted to evaluate the ability of a commercial video image analysis system, the Computer Vision System (CVS), to 1) predict commercially fabricated beef subprimal yield and 2) augment USDA yield grading, in order to improve the accuracy of grade assessment. The CVS was evaluated as a fully installed production system, operating on a full-time basis at chain speeds. Steer and heifer carcasses (n = 296) were evaluated using CVS, as well as by USDA expert and online graders, before the fabrication of carcasses into industry-standard subprimal cuts. Expert yield grade (YG), online YG, CVS estimated carcass yield, and CVS measured ribeye area in conjunction with expert grader estimates of the remaining YG factors (adjusted fat thickness, percentage of kidney-pelvic-heart fat, hot carcass weight) accounted for 67, 39, 64, and 65% of the observed variation in fabricated yields of closely trimmed subprimals. The dual-component CVS predicted wholesale cut yields more accurately than current online yield grading. In an augmentation system, CVS ribeye measurement replaced estimated ribeye area in the determination of USDA yield grade, and the accuracy of cutability prediction was improved, under packing plant conditions and speeds, to a level close to that of expert graders applying grades at a comfortable rate of speed offline. PMID:12019606

  19. Multimodal digital color imaging system for facial skin lesion analysis

    NASA Astrophysics Data System (ADS)

    Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo

    2008-02-01

    In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment effect on skin lesions. Cross-polarization color images have been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization images to evaluate skin texture information. In addition, UV-A induced fluorescent images have been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. In order to maximize the evaluation efficacy for various skin lesions, it is necessary to integrate the various imaging modalities into a single imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A induced fluorescent color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions simultaneously and comparably. In conclusion, the multimodal color imaging system can be utilized as an important assistive tool in dermatology.

  20. Dehazing method through polarimetric imaging and multi-scale analysis

    NASA Astrophysics Data System (ADS)

    Cao, Lei; Shao, Xiaopeng; Liu, Fei; Wang, Lin

    2015-05-01

    An approach to haze removal utilizing polarimetric imaging and multi-scale analysis has been developed to address the problem that hazy weather weakens the interpretation of remote sensing images because of poor visibility and short detection distance. On the one hand, the polarization effects of the airlight and the object radiance in the imaging process are considered. On the other hand, the fact that objects and haze possess different frequency-distribution properties is exploited: multi-scale analysis through the wavelet transform makes it possible to process separately the low-frequency components, where haze dominates, and the high-frequency coefficients, which carry image details and edges. Based on the measurement of the polarization feature by Stokes parameters, three linearly polarized images (0°, 45°, and 90°) are taken in hazy weather, from which the best polarized image Imin and the worst one Imax can be synthesized. These two haze-contaminated polarized images are then decomposed into different spatial layers with wavelet analysis; the low-frequency images are processed via a polarization dehazing algorithm, while the high-frequency components are manipulated with a nonlinear transform. The final haze-free image is reconstructed by inverse wavelet reconstruction. Experimental results verify that the dehazing method proposed in this study can strongly improve image visibility and increase detection distance through haze for imaging warning and remote sensing systems.
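
    The synthesis of the best and worst polarized images from three linear-polarizer measurements can be sketched via the linear Stokes parameters. This is a hedged illustration of that one step only (the wavelet decomposition and dehazing algorithm are not reproduced); the function name is invented.

```python
import numpy as np

def best_worst_polarized(i0, i45, i90):
    # Linear Stokes parameters from three polarizer orientations.
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = 2.0 * i45 - s0
    ip = np.sqrt(s1**2 + s2**2)     # linearly polarized intensity
    i_min = 0.5 * (s0 - ip)         # least-polarized ("best") image
    i_max = 0.5 * (s0 + ip)         # most-polarized ("worst") image
    return i_min, i_max

# Fully horizontally polarized light: all intensity ends up in I_max.
lo, hi = best_worst_polarized(1.0, 0.5, 0.0)
```

    Applied per pixel to the three captured frames, this yields the Imin/Imax pair that the subsequent wavelet-domain dehazing operates on.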

  1. Spatio-spectral image analysis using classical and neural algorithms

    SciTech Connect

    Roberts, S.; Gisler, G.R.; Theiler, J.

    1996-12-31

    Remote imaging at high spatial resolution has a number of environmental, industrial, and military applications. Analysis of high-resolution multi-spectral images usually involves either spectral analysis of single pixels in a multi- or hyper-spectral image or spatial analysis of multiple pixels in a panchromatic or monochromatic image. Although individually insufficient for some pattern recognition applications, the combination of spatial and spectral analytical techniques may allow the identification of more complex signatures that might not otherwise be manifested in the individual spatial or spectral domains. We report on a preliminary investigation of unsupervised classification methodologies (using both "classical" and "neural" algorithms) to identify potentially revealing features in these images. We apply dimension-reduction preprocessing to the images, cluster them, and compare the clusterings obtained by different algorithms. Our classification results are analyzed both visually and with a suite of objective, quantitative measures.
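
    The dimension-reduction-then-cluster pipeline might be sketched as follows on synthetic stand-in data: PCA via SVD followed by a plain k-means loop. The data, seeds, and two-class structure are invented for illustration and do not reflect the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for multi-spectral pixels: 200 pixels x 6 bands, two classes.
X = np.vstack([rng.normal(0.2, 0.02, (100, 6)),
               rng.normal(0.8, 0.02, (100, 6))])

# Dimension-reduction preprocessing: project onto the top 2 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# "Classical" unsupervised clustering: plain k-means with k = 2.
centers = Z[[0, -1]]                      # deterministic seeds, one per extreme
for _ in range(20):
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)            # assign each pixel to nearest center
    centers = np.array([Z[labels == k].mean(axis=0) for k in range(2)])
```

    Comparing the `labels` produced by different algorithms (here k-means; a "neural" alternative would be e.g. a self-organizing map) is the comparison step the abstract describes.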

  2. Dynamic infrared imaging in identification of breast cancer tissue with combined image processing and frequency analysis.

    PubMed

    Joro, R; Lääperi, A-L; Soimakallio, S; Järvenpää, R; Kuukasjärvi, T; Toivonen, T; Saaristo, R; Dastidar, P

    2008-01-01

    Five combinations of image-processing algorithms were applied to dynamic infrared (IR) images of six breast cancer patients preoperatively to establish optimal enhancement of cancer tissue before frequency analysis. Mid-wave photovoltaic (PV) IR cameras with 320x254 and 640x512 pixels were used. The signal-to-noise ratio and the specificity for breast cancer were evaluated with the image-processing combinations from the image series of each patient. Before image processing and frequency analysis, the effect of patient movement was minimized with a stabilization program developed and tested in the study, which stabilizes image slices using surface markers set as measurement points on the skin of the imaged breast. A mathematical equation for a superiority value was developed for comparison of the key ratios of the image-processing combinations. The ability of each combination to locate the mammography finding of breast cancer in each patient was compared. Our results show that data collected with a 640x512-pixel mid-wave PV camera, applying image-processing methods that optimize the signal-to-noise ratio, morphological image processing and linear image restoration before frequency analysis, possess the greatest superiority value, showing the cancer area most clearly, also at the match centre of the mammography estimation. PMID:18666012

  3. Analysis of cardiac interventricular septum motion in different respiratory states

    NASA Astrophysics Data System (ADS)

    Tautz, Lennart; Feng, Li; Otazo, Ricardo; Hennemuth, Anja; Axel, Leon

    2016-03-01

    The interaction between the left and right heart ventricles (LV and RV) depends on load and pressure conditions that are affected by cardiac contraction and respiration cycles. A novel MRI sequence, XD-GRASP, allows the acquisition of multi-dimensional, respiration-sorted and cardiac-synchronized free-breathing image data. In these data, effects of the cardiac and respiratory cycles on the LV/RV interaction can be observed independently. To enable the analysis of such data, we developed a semi-automatic exploration workflow. After tracking a cross-sectional line positioned over the heart across all motion states, the septum and heart-wall border locations are detected by analyzing the grey-value profile under the line. These data are used to quantify septum motion, both in absolute units and as a fraction of the heart size, to compare values for different subjects. In addition to conventional visualization techniques, we used color maps for intuitive exploration of the variable values for this multi-dimensional data set. We acquired short-axis image data of nine healthy volunteers, to analyze the position and the motion of the interventricular septum in different breathing states and different cardiac cycle phases. The results indicate a consistent range of normal septum motion values, and also suggest that respiratory phase-dependent septum motion is greatest near end-diastolic phases. These new methods are a promising tool to assess LV/RV ventricle interaction and the effects of respiration on this interaction.

  4. Vector sparse representation of color image using quaternion matrix analysis.

    PubMed

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color-image pixel as a scalar, representing color channels separately or concatenating the channels as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, and a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (K-QSVD, generalized K-means clustering for QSVD) method. It conducts sparse basis selection in quaternion space, which uniformly transforms the channel images into an orthogonal color space. In this new color space, it is significant that the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient compared with current sparse models for image restoration tasks, owing to lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model successfully avoids the hue bias issue and shows its potential as a general and powerful tool in the color image analysis and processing domain. PMID:25643407

  5. An approach to multi-temporal MODIS image analysis using image classification and segmentation

    NASA Astrophysics Data System (ADS)

    Senthilnath, J.; Bajpai, Shivesh; Omkar, S. N.; Diwakar, P. G.; Mani, V.

    2012-11-01

    This paper discusses an approach for river mapping and flood evaluation based on multi-temporal time series analysis of satellite images, utilizing pixel spectral information for image classification and region-based segmentation for extracting water-covered regions. Analysis of MODIS satellite images is applied in three stages: before flood, during flood and after flood. Water regions are extracted from the MODIS images using image classification (based on spectral information) and image segmentation (based on spatial information). Multi-temporal MODIS images from "normal" (non-flood) and flood time-periods are processed in two steps. In the first step, image classifiers such as Support Vector Machines (SVM) and Artificial Neural Networks (ANN) separate the image pixels into water and non-water groups based on their spectral features. The classified image is then segmented using spatial features of the water pixels to remove misclassified water pixels. From the results obtained, we evaluate the performance of the method and conclude that the use of image classification (SVM and ANN) and region-based image segmentation is an accurate and reliable approach for the extraction of water-covered regions.

  6. Multiple sclerosis medical image analysis and information management.

    PubMed

    Liu, Lifeng; Meier, Dominik; Polgar-Turcsanyi, Mariann; Karkocha, Pawel; Bakshi, Rohit; Guttmann, Charles R G

    2005-01-01

    Magnetic resonance imaging (MRI) has become a central tool for patient management, as well as research, in multiple sclerosis (MS). Measurements of disease burden and activity derived from MRI through quantitative image analysis techniques are increasingly being used. There are many complexities and challenges in building computerized processing pipelines to ensure efficiency, reproducibility, and quality control for MRI scans from MS patients. Such paradigms require advanced image processing and analysis technologies, as well as integrated database management systems to ensure the most utility for clinical and research purposes. This article reviews pipelines available for quantitative clinical MRI research in MS, including image segmentation, registration, time-series analysis, performance validation, visualization techniques, and advanced medical imaging software packages. To address the complex demands of the sequential processes, the authors developed a workflow management system that uses a centralized database and distributed computing system for image processing and analysis. The implementation of their system includes a web-form-based Oracle database application for information management and event dispatching, and multiple modules for image processing and analysis. The seamless integration of processing pipelines with the database makes it more efficient for users to navigate complex, multistep analysis protocols, reduces the user's learning curve, reduces the time needed for combining and activating different computing modules, and allows for close monitoring for quality-control purposes. The authors' system can be extended to general applications in clinical trials and to routine processing for image-based clinical research. PMID:16385023

  7. Development of a quantitative autoradiography image analysis system

    SciTech Connect

    Hoffman, T.J.; Volkert, W.A.; Holmes, R.A.

    1986-03-01

    A low cost image analysis system suitable for quantitative autoradiography (QAR) analysis has been developed. Autoradiographs can be digitized using a conventional Newvicon television camera interfaced to an IBM-XT microcomputer. Software routines for image digitization and capture permit the acquisition of thresholded or windowed images with graphic overlays that can be stored on storage devices. Image analysis software performs all background and non-linearity corrections prior to display as black/white or pseudocolor images. The relationship of pixel intensity to a standard radionuclide concentration allows the production of quantitative maps of tissue radiotracer concentrations. An easily modified subroutine is provided for adaptation to use appropriate operational equations when parameters such as regional cerebral blood flow or regional cerebral glucose metabolism are under investigation. This system could provide smaller research laboratories with the capability of QAR analysis at relatively low cost.
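
    The core calibration step, relating pixel intensity to a standard radionuclide concentration, can be sketched with a simple fit. The standard values, units, and linear response below are invented for illustration; real film response may require a nonlinear (e.g. logarithmic) model, which is what the modifiable subroutine in the system accommodates.

```python
import numpy as np

# Hypothetical co-exposed standards: known activity vs. measured mean pixel value.
known_conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0])   # assumed units, e.g. nCi/g
pixel_val = np.array([5.0, 25.0, 45.0, 85.0, 165.0])   # digitized intensity

# Fit the calibration curve (linear here, for the sake of the sketch).
slope, intercept = np.polyfit(pixel_val, known_conc, 1)

def to_concentration(img):
    # Map a background-corrected image to tissue tracer concentration.
    return slope * np.asarray(img, dtype=float) + intercept
```

    Applying `to_concentration` to the whole digitized autoradiograph produces the quantitative map of tissue radiotracer concentration described in the abstract.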

  8. An image analysis system for near-infrared (NIR) fluorescence lymph imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdan; Zhou, Shaohua Kevin; Xiang, Xiaoyan; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-03-01

    Quantitative analysis of lymphatic function is crucial for understanding the lymphatic system and diagnosing associated diseases. Recently, a near-infrared (NIR) fluorescence imaging system was developed for real-time imaging of lymphatic propulsion by intradermal injection of a microdose of an NIR fluorophore distal to the lymphatics of interest. However, the previous analysis software [3, 4] is underdeveloped, requiring extensive time and effort to analyze an NIR image sequence. In this paper, we develop a number of image processing techniques to automate the data analysis workflow, including an object tracking algorithm to stabilize the subject and remove motion artifacts, an image representation named the flow map to characterize lymphatic flow more reliably, and an automatic algorithm to compute lymph velocity and frequency of propulsion. By integrating all these techniques into a system, the analysis workflow significantly reduces the amount of required user interaction and improves the reliability of the measurement.

  9. Automated thermal mapping techniques using chromatic image analysis

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    1989-01-01

    Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.

  10. Timing high-speed microprocessor circuits using picosecond imaging circuit analysis

    NASA Astrophysics Data System (ADS)

    Steen, Steven E.; McManus, Moyra K.; Manzer, Dennis G.

    2001-04-01

    IBM Research has developed a time-resolved imaging technique, Picosecond Imaging Circuit Analysis (PICA), which uses single photon events to analyze signals in modern microprocessors on a picosecond time scale. This paper describes the experimental setup as well as the data management software. A case study of a particularly hard debug problem on a state-of-the-art microprocessor demonstrates the application of the PICA method.

  11. Evolution of mammographic image quality in the state of Rio de Janeiro

    PubMed Central

    Villar, Vanessa Cristina Felippe Lopes; Seta, Marismary Horsth De; de Andrade, Carla Lourenço Tavares; Delamarque, Elizabete Vianna; de Azevedo, Ana Cecília Pedrosa

    2015-01-01

    Objective To evaluate the evolution of mammographic image quality in the state of Rio de Janeiro on the basis of parameters measured and analyzed during health surveillance inspections in the period from 2006 to 2011. Materials and Methods Descriptive study analyzing parameters connected with the imaging quality of 52 mammography apparatuses inspected at least twice with a one-year interval. Results Among the 16 analyzed parameters, 7 presented more than 70% conformity, namely: compression paddle pressure intensity (85.1%), film development (72.7%), film response (72.7%), low-contrast fine detail (92.2%), tumor mass visualization (76.5%), absence of image artifacts (94.1%), and availability of mammography-specific developers (88.2%). On the other hand, relevant parameters were below 50% conformity, namely: monthly image quality control testing (28.8%) and visualization of high-contrast details with respect to microcalcifications (47.1%). Conclusion The analysis revealed critical situations in terms of compliance with health surveillance standards. Priority should be given to those mammography apparatuses that remained non-compliant at the second inspection performed within the one-year interval. PMID:25987749

  12. Feasibility test of a solid state spin-scan photo-imaging system

    NASA Technical Reports Server (NTRS)

    Laverty, N. P.

    1973-01-01

    The feasibility of using a solid-state photo-imaging system to obtain high-resolution imagery from a Pioneer-type spinning spacecraft in future exploratory missions to the outer planets is discussed. Evaluation of the photo-imaging system performance, based on analysis of the electrical video signal recorded on magnetic tape, shows that the signal-to-noise (S/N) ratios obtained at low spatial frequencies exceed the anticipated performance, and that the measured modulation transfer functions exhibited some degradation in comparison with the estimated values, primarily owing to the difficulty of obtaining a precise focus of the optical system in the laboratory with the test patterns in close proximity to the objective lens. A preliminary flight-model design of the photo-imaging system is developed based on the use of currently available phototransistor arrays. Estimates of the image quality that will be obtained are presented in terms of S/N ratios and spatial resolution for the various planets and satellites. Parametric design tradeoffs are also defined.

  13. Rapid analysis and exploration of fluorescence microscopy images.

    PubMed

    Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason M; Steininger, Robert J; Wu, Lani F; Altschuler, Steven J

    2014-01-01

    Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard. Here we present an alternate, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, ensure response to perturbations and check reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just as a first pass analysis for quality control, but also may be used as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image based screens. PMID:24686220

  14. Research of second harmonic generation images based on texture analysis

    NASA Astrophysics Data System (ADS)

    Liu, Yao; Li, Yan; Gong, Haiming; Zhu, Xiaoqin; Huang, Zufang; Chen, Guannan

    2014-09-01

    Texture analysis plays a crucial role in identifying objects or regions of interest in an image. It has been applied to a variety of medical image processing tasks, ranging from the detection of disease and the segmentation of specific anatomical structures to differentiation between healthy and pathological tissues. Second harmonic generation (SHG) microscopy, a potential noninvasive tool for imaging biological tissues with reduced phototoxicity and photobleaching, has been widely used in medicine. In this paper, we clarify the principles of texture analysis, including statistical, transform, structural and model-based methods, and give examples of its applications, reviewing studies of the technique. Moreover, we apply texture analysis to SHG images for the differentiation of human skin scar tissues. A texture analysis method based on local binary patterns (LBP) and the wavelet transform was used to extract texture features of SHG images of collagen in normal and abnormal scars, and the scar SHG images were then classified as normal or abnormal. Compared with other texture analysis methods with respect to receiver operating characteristic analysis, LBP combined with the wavelet transform was demonstrated to achieve higher accuracy. It can provide a new way for the clinical diagnosis of scar types. Finally, future developments of texture analysis of SHG images are discussed.
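
    A minimal sketch of the basic 3x3 local binary pattern operator (the simplest LBP variant, not necessarily the exact one used in the paper): each interior pixel's eight neighbours are thresholded against the centre and the resulting bits packed into a code in 0..255.

```python
import numpy as np

def lbp8(img):
    # Basic 3x3 LBP: compare the 8 neighbours of each interior pixel with
    # the centre value and pack the comparison bits into a uint8 code.
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]
    # neighbour offsets, clockwise from the top-left corner
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    return code
```

    The histogram of the resulting codes over an image (or image patch) serves as the texture feature vector that, combined with wavelet coefficients, feeds the normal/abnormal scar classifier described above.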

  15. Multistage hierarchy for fast image analysis

    NASA Astrophysics Data System (ADS)

    Grudin, Maxim A.; Harvey, David M.; Timchenko, Leonid I.

    1996-12-01

    In this paper, a novel approach is proposed which allows for an efficient reduction of the amount of visual data required for representing structural information in the image. This is a multistage architecture which investigates partial correlations between structural image components. A mathematical description of the multistage hierarchical processing is provided, together with the network architecture. Initially the image is partitioned to be processed in parallel channels. In each channel, the structural components are transformed and subsequently separated, depending on their structural significance, to then be combined with the components from other channels for further processing. The output result is represented as a pattern vector, whose components are computed one at a time to allow the quickest possible response. The input gray-scale image is transformed before processing begins, so that each pixel contains information about the spatial structure of its neighborhood. The most correlated information is extracted first, making the algorithm tolerant to minor structural changes.

  16. Introducing PLIA: Planetary Laboratory for Image Analysis

    NASA Astrophysics Data System (ADS)

    Peralta, J.; Hueso, R.; Barrado, N.; Sánchez-Lavega, A.

    2005-08-01

    We present a graphical software tool developed under IDL to navigate, process and analyze planetary images. The software has a complete Graphical User Interface and is cross-platform. It can also run under the IDL Virtual Machine without the need to own an IDL license. The set of tools included allows image navigation (orientation, centring and automatic limb determination), dynamical and photometric atmospheric measurements (winds and cloud albedos), cylindrical and polar projections, as well as image treatment under several procedures. Being written in IDL, it is modular and easy to modify and extend with new capabilities. We show several examples of the software capabilities with Galileo-Venus observations: image navigation, photometric corrections, wind profiles obtained by cloud tracking, cylindrical projections and cloud photometric measurements. Acknowledgements: This work has been funded by Spanish MCYT PNAYA2003-03216, FEDER funds and Grupos UPV 15946/2004. R. Hueso acknowledges a post-doc fellowship from the Gobierno Vasco.

  17. Decision-problem state analysis methodology

    NASA Technical Reports Server (NTRS)

    Dieterly, D. L.

    1980-01-01

    A methodology for analyzing a decision-problem state is presented. The methodology is based on the analysis of an incident in terms of the set of decision-problem conditions encountered. By decomposing the events that preceded an unwanted outcome, such as an accident, into the set of decision-problem conditions that were resolved, a more comprehensive understanding is possible. Not all human-error accidents are caused by faulty decision-problem resolutions, but this appears to be one of the major areas of accidents cited in the literature. A three-phase methodology is presented which accommodates a wide spectrum of events. It allows for a systems content analysis of the available data to establish: (1) the resolutions made, (2) alternatives not considered, (3) resolutions missed, and (4) possible conditions not considered. The product is a map of the decision-problem conditions that were encountered, as well as a projected, assumed set of conditions that should have been considered. The application of this methodology introduces a systematic approach to decomposing the events that transpired prior to the accident. The initial emphasis is on decision and problem resolution. The technique allows for a standardized method of decomposing an accident into a scenario which may be used for review or for the development of a training simulation.

  18. MR brain image analysis in dementia: From quantitative imaging biomarkers to ageing brain models and imaging genetics.

    PubMed

    Niessen, Wiro J

    2016-10-01

    MR brain image analysis has been a consistently active research area within medical image analysis over the past two decades. In this article, it is discussed how the field developed from the construction of tools for automatic quantification of brain morphology, function, connectivity and pathology, to creating models of the ageing brain in normal ageing and disease, and tools for integrated analysis of imaging and genetic data. The current and future role of the field in improved understanding of the development of neurodegenerative disease is discussed, and its potential for aiding in early and differential diagnosis and prognosis of different types of dementia. For the latter, the use of reference imaging data and reference models derived from large clinical and population imaging studies, and the application of machine learning techniques on these reference data, are expected to play a key role. PMID:27344937

  19. Localised manifold learning for cardiac image analysis

    NASA Astrophysics Data System (ADS)

    Bhatia, Kanwal K.; Price, Anthony N.; Hajnal, Jo V.; Rueckert, Daniel

    2012-02-01

    Manifold learning is increasingly being used to discover the underlying structure of medical image data. Traditional approaches operate on whole images with a single measure of similarity used to compare entire images. In this way, information on the locality of differences is lost and smaller trends may be masked by dominant global differences. In this paper, we propose the use of multiple local manifolds to analyse regions of images without any prior knowledge of which regions are important. Localised manifolds are created by partitioning images into regular subsections with a manifold constructed for each patch. We propose a framework for incorporating information from the neighbours of each patch to calculate a coherent embedding. This generates a simultaneous dimensionality reduction of all patches and results in the creation of embeddings which are spatially-varying. Additionally, a hierarchical method is presented to enable a multi-scale embedding solution. We use this to extract spatially-varying respiratory and cardiac motions from cardiac MRI. Although there is a complex interplay between these motions, we show how they can be separated on a regional basis. We demonstrate the utility of the localised joint embedding over a global embedding of whole images and over embedding individual patches independently.

  20. Radar images analysis for scattering surfaces characterization

    NASA Astrophysics Data System (ADS)

    Piazza, Enrico

    1998-10-01

    According to the different problems and techniques related to the detection and recognition of airplanes and vehicles moving on the airport surface, the present work mainly deals with the processing of images gathered by a high-resolution radar sensor. The radar images used to test the investigated algorithms come from sequences acquired in field experiments carried out by the Electronic Engineering Department of the University of Florence. The radar is the Ka-band radar operating in the 'Leonardo da Vinci' Airport in Fiumicino (Rome). The images obtained from the radar scan converter are digitized and expressed in x, y (pixel) coordinates. For a correct matching of the images, these are corrected into true geometrical coordinates (meters) on the basis of fixed points on an airport map. By correlating the airplane 2-D multipoint template with actual radar images, the value of the signal at the points covered by the template can be extracted. Results from a large number of observations show a typical response for the main sections of the fuselage and the wings. For the fuselage, the back-scattered echo is low at the prow, becomes larger near the center of the aircraft and then decreases again toward the tail. For the wings, the signal grows with a fairly regular slope from the fuselage to the tips, where the signal is strongest.

  1. State of the Art Laryngeal Imaging: Research and Clinical Implications

    PubMed Central

    Deliyski, Dimitar D.; Hillman, Robert E.

    2010-01-01

    Purpose of Review: This paper provides a review of the latest advances in videostroboscopy, videokymography and high-speed videoendoscopy, and outlines the development of new laryngeal imaging modalities based on optical coherence tomography, laser depth-kymography, and magnetic resonance imaging, published in the past 2 years. Recent Findings: Videostroboscopy and videokymography: image quality has improved and several image processing and measurement techniques have been published. High-speed videoendoscopy: significant progress has been made through increased sensitivity and frame rates of the cameras, and the development of facilitative playbacks, phonovibrography and several image segmentation and measurement methods. Clinical evidence was presented through applications in phonosurgery, comparisons with videostroboscopy, normative data, and better understanding of voice production. Optical coherence tomography: the latest developments allow for the capture of dynamic high-resolution cross-sectional images of the vibrating vocal fold mucosa during phonation. Depth-kymography: a new laser technique allows recording of the vertical movements of the vocal folds during phonation in calibrated spatial units. Laryngeal magnetic resonance imaging: new methods allow high-resolution imaging of laryngeal tissue microstructure, or measurement of dynamic laryngeal structures during phonation. Summary: The endoscopic laryngeal imaging techniques have made significant advances increasing their clinical value, while techniques providing new types of potentially clinically-relevant information have emerged. PMID:20463479

  2. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  3. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  4. Unsupervised analysis of small animal dynamic Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Spinelli, Antonello E.; Boschi, Federico

    2011-12-01

    Clustering analysis (CA) and principal component analysis (PCA) were applied to dynamic Cerenkov luminescence images (dCLI). In order to investigate the performance of the proposed approaches, two distinct dynamic data sets obtained by injecting mice with 32P-ATP and 18F-FDG were acquired using the IVIS 200 optical imager. The k-means clustering algorithm was applied to dCLI and implemented using Interactive Data Language 8.1. We show that cluster analysis allows us to obtain good agreement between the clustered regions and the corresponding emission regions, such as the bladder, the liver, and the tumor. We also show a good correspondence between the time-activity curves of the different regions obtained by using CA and manual region-of-interest analysis on dCLI and PCA images. We conclude that CA provides an automatic unsupervised method for the analysis of preclinical dynamic Cerenkov luminescence image data.
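The clustering step described above (implemented by the authors in Interactive Data Language 8.1) can be sketched in NumPy; the function name, the deterministic farthest-point seeding, and the synthetic array shapes below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cluster_time_activity(frames, k=3, n_iter=50):
    """k-means over per-pixel time-activity curves of a dynamic image stack.

    frames: (T, H, W) array; each pixel contributes a length-T curve.
    Returns an (H, W) label map and the (k, T) cluster-mean curves.
    Farthest-point seeding keeps this sketch deterministic.
    """
    T, H, W = frames.shape
    X = frames.reshape(T, -1).T                      # (H*W, T) time courses
    centers = [X[0].copy()]
    for _ in range(1, k):                            # farthest-point seeding
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[d.argmax()].copy())
    centers = np.array(centers, dtype=float)
    for _ in range(n_iter):                          # Lloyd iterations
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels.reshape(H, W), centers
```

The returned cluster-mean curves play the role of the time-activity curves that the paper compares against manual region-of-interest analysis.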

  5. Digital Image Analysis for DETECHIP(®) Code Determination.

    PubMed

    Lyon, Marcus; Wilson, Mark V; Rouhier, Kerry A; Symonsbergen, David J; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E; Sikich, Sharmin M; Jackson, Abby

    2012-08-01

    DETECHIP(®) is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP(®) used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP(®). Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods. PMID:25267940
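The RGB measurement the paper performs in GIMP, Photoshop or ImageJ can be mimicked directly on an image array; `mean_rgb`, `color_change`, and the box convention are hypothetical names introduced for illustration:

```python
import numpy as np

def mean_rgb(image, box):
    """Mean red-green-blue values inside a rectangular spot of an RGB image.

    image: (H, W, 3) uint8 array; box: (row0, row1, col0, col1) slice bounds.
    Analogous to measuring RGB values over a selection in ImageJ.
    """
    r0, r1, c0, c1 = box
    patch = image[r0:r1, c0:c1].astype(float)
    return patch.reshape(-1, 3).mean(axis=0)

def color_change(before, after, box):
    """Per-channel RGB shift between before/after images of the same spot."""
    return mean_rgb(after, box) - mean_rgb(before, box)
```

A positive shift in one channel and a negative shift in another is the kind of color-change signature the array's codes are built from.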

  6. State Teacher Salary Schedules. Policy Analysis

    ERIC Educational Resources Information Center

    Griffith, Michael

    2016-01-01

    In the United States most teacher compensation issues are decided at the school district level. However, a group of states have chosen to play a role in teacher pay decisions by instituting statewide teacher salary schedules. Education Commission of the States has found that 17 states currently make use of teacher salary schedules. This education…

  7. Anima: modular workflow system for comprehensive image data analysis.

    PubMed

    Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa

    2014-01-01

    Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps starting from data import and pre-processing to segmentation and statistical analysis; and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies focusing on testing different algorithms developed in different imaging platforms and an automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is a fully open source and available with documentation at www.anduril.org/anima. PMID:25126541

  8. Anima: Modular Workflow System for Comprehensive Image Data Analysis

    PubMed Central

    Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa

    2014-01-01

    Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps starting from data import and pre-processing to segmentation and statistical analysis; and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies focusing on testing different algorithms developed in different imaging platforms and an automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is a fully open source and available with documentation at www.anduril.org/anima. PMID:25126541

  9. Energy minimization in medical image analysis: Methodologies and applications.

    PubMed

    Zhao, Feng; Xie, Xianghua

    2016-02-01

    Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous method and discrete method. The former includes Newton-Raphson method, gradient descent method, conjugate gradient method, proximal gradient method, coordinate descent method, and genetic algorithm-based method, while the latter covers graph cuts method, belief propagation method, tree-reweighted message passing method, linear programming method, maximum margin learning method, simulated annealing method, and iterated conditional modes method. We also discuss the minimal surface method, primal-dual method, and the multi-objective optimization method. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview on those applications as well. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26186171
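As a concrete instance of the continuous methods surveyed, the sketch below runs plain gradient descent on a quadratic smoothing energy for a 1-D signal; the energy form, step size, and function name are illustrative choices, not taken from the survey:

```python
import numpy as np

def denoise_gradient_descent(y, lam=1.0, step=0.1, n_iter=500):
    """Minimize E(x) = ||x - y||^2 + lam * sum_i (x[i+1] - x[i])^2
    by gradient descent, a toy denoising energy on a 1-D signal."""
    x = np.asarray(y, dtype=float).copy()
    for _ in range(n_iter):
        grad = 2.0 * (x - y)            # gradient of the data term
        d = np.diff(x)                  # forward differences x[i+1] - x[i]
        grad[:-1] -= 2.0 * lam * d      # gradient of the smoothness term
        grad[1:] += 2.0 * lam * d
        x -= step * grad
    return x
```

For convergence the step size must stay below 2 divided by the Lipschitz constant of the gradient, which here is bounded by 2 + 8*lam; with lam = 1 a step of 0.1 is safely inside that range.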

  10. The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cooke, Diane J.; Ketkar, Nikhil; Aksoy, Selim

    2008-01-01

    Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.

  11. Basic research planning in mathematical pattern recognition and image analysis

    NASA Technical Reports Server (NTRS)

    Bryant, J.; Guseman, L. F., Jr.

    1981-01-01

    Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization; computer architectures and parallel processing; and the applicability of "expert systems" to interactive analysis.

  12. An Analysis of the Magneto-Optic Imaging System

    NASA Technical Reports Server (NTRS)

    Nath, Shridhar

    1996-01-01

    The Magneto-Optic Imaging system is being used for the detection of defects in airframes and other aircraft structures. The system has been successfully applied to detecting surface cracks, but has difficulty in the detection of sub-surface defects such as corrosion. The intent of the grant was to understand the physics of the MOI better, in order to use it effectively for detecting corrosion and for classifying surface defects. Finite element analysis, image classification, and image processing are addressed.

  13. Uncooled LWIR imaging: applications and market analysis

    NASA Astrophysics Data System (ADS)

    Takasawa, Satomi

    2015-05-01

    The evolution of infrared (IR) imaging sensor technology for the defense market has played an important role in developing the commercial market, as dual use of the technology has expanded. In particular, technologies for both pixel pitch reduction and vacuum packaging have evolved drastically in the area of uncooled long-wave IR (LWIR; 8-14 μm wavelength region) imaging sensors, increasing the opportunity to create new applications. From a macroscopic point of view, the uncooled LWIR imaging market is divided into two areas. One is a high-end market, where uncooled LWIR imaging sensors with sensitivity as close to that of cooled ones as possible are required, while the other is a low-end market driven by miniaturization and price reduction. In the latter case especially, approaches toward the consumer market have recently appeared, such as applications of uncooled LWIR imaging sensors to night vision for automobiles and smart phones. The appearance of such commodity products is changing existing business models. Further technological innovation is necessary for creating a consumer market, and there will be room for other companies, supplying components and materials such as lens and getter materials, to enter the consumer market.

  14. Continuous-wave terahertz scanning image resolution analysis and restoration

    NASA Astrophysics Data System (ADS)

    Li, Qi; Yin, Qiguo; Yao, Rui; Ding, Shenghui; Wang, Qi

    2010-03-01

    Resolution of continuous-wave (CW) terahertz scanning images is limited by many factors, among which the aperture effect of the finite focus diameter is very important. We have investigated the factors that affect terahertz (THz) image resolution in detail through theoretical analysis and simulation. On the other hand, in order to enhance THz image resolution, the Richardson-Lucy algorithm has been introduced as a promising approach to recover image details. By analyzing the imaging theory, it is proposed that the intensity distribution function of the actual THz laser focal spot can be used as an approximate point spread function (PSF) in the restoration algorithm. The focal spot image is obtained with a pyroelectric camera, and the mean-filtered focal spot image is used as the PSF. Simulation and experiment show that the implemented algorithm is comparatively effective.

  15. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    1999-01-01

    Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart show a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
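The fractal dimension underlying this analysis can be estimated by box counting; the function name and box sizes below are an illustrative sketch (the study itself measures fractal dimension of NDVI images across sensor resolutions):

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary pattern by box counting.

    Counts occupied s x s boxes for each box size s, then fits
    log N(s) = -D log s + c; the slope magnitude D is the estimate.
    """
    counts = []
    for s in sizes:
        H, W = mask.shape
        # Tile the mask into s x s blocks and count blocks containing any pixel.
        view = mask[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s)
        counts.append(view.any(axis=(1, 3)).sum())
    coeffs = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -coeffs[0]
```

A filled region scores close to 2 and a thin curve close to 1, matching the intuition that rougher, space-filling patterns have higher fractal dimension.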

  16. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    1999-01-01

    Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart show a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.

  17. REST: a toolkit for resting-state functional magnetic resonance imaging data processing.

    PubMed

    Song, Xiao-Wei; Dong, Zhang-Ye; Long, Xiang-Yu; Li, Su-Fang; Zuo, Xi-Nian; Zhu, Chao-Zhe; He, Yong; Yan, Chao-Gan; Zang, Yu-Feng

    2011-01-01

    Resting-state fMRI (RS-fMRI) has been drawing more and more attention in recent years. However, a publicly available, systematically integrated and easy-to-use tool for RS-fMRI data processing is still lacking. We developed a toolkit for the analysis of RS-fMRI data, namely the RESting-state fMRI data analysis Toolkit (REST). REST was developed in MATLAB with graphical user interface (GUI). After data preprocessing with SPM or AFNI, a few analytic methods can be performed in REST, including functional connectivity analysis based on linear correlation, regional homogeneity, amplitude of low frequency fluctuation (ALFF), and fractional ALFF. A few additional functions were implemented in REST, including a DICOM sorter, linear trend removal, bandpass filtering, time course extraction, regression of covariates, image calculator, statistical analysis, and slice viewer (for result visualization, multiple comparison correction, etc.). REST is an open-source package and is freely available at http://www.restfmri.net. PMID:21949842
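Among the REST functions listed, the bandpass filtering step can be illustrated with an ideal frequency-domain filter; the function name and the default 0.01-0.08 Hz band (a typical resting-state choice) are illustrative assumptions, not REST's exact implementation:

```python
import numpy as np

def bandpass(ts, tr, low=0.01, high=0.08):
    """Ideal frequency-domain band-pass filter for one voxel time series.

    ts: 1-D signal sampled every `tr` seconds; frequency components outside
    [low, high] Hz (including the DC mean) are zeroed.
    """
    ts = np.asarray(ts, dtype=float)
    spec = np.fft.rfft(ts - ts.mean())
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    spec[(freqs < low) | (freqs > high)] = 0
    return np.fft.irfft(spec, n=ts.size)
```

In a real pipeline this step follows detrending and covariate regression, so that slow scanner drifts are not aliased into the passband.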

  18. Chemical Imaging of Ambient Aerosol Particles: Observational Constraints on Mixing State Parameterization

    SciTech Connect

    O'Brien, Rachel; Wang, Bingbing; Laskin, Alexander; Riemer, Nicole; West, Matthew; Zhang, Qi; Sun, Yele; Yu, Xiao-Ying; Alpert, Peter A.; Knopf, Daniel A.; Gilles, Mary K.; Moffet, Ryan

    2015-09-28

    A new parameterization for quantifying the mixing state of aerosol populations has been applied for the first time to samples of ambient particles analyzed using spectro-microscopy techniques. Scanning transmission x-ray microscopy/near edge x-ray absorption fine structure (STXM/NEXAFS) and computer controlled scanning electron microscopy/energy dispersive x-ray spectroscopy (CCSEM/EDX) were used to probe the composition of the organic and inorganic fraction of individual particles collected on June 27th and 28th during the 2010 Carbonaceous Aerosols and Radiative Effects (CARES) study in the Central Valley, California. The first field site, T0, was located in downtown Sacramento, while T1 was located near the Sierra Nevada Mountains. Mass estimates of the aerosol particle components were used to calculate mixing state metrics, such as the particle-specific diversity, bulk population diversity, and mixing state index, for each sample. Both microscopy imaging techniques showed more changes over these two days in the mixing state at the T0 site than at the T1 site. The STXM data showed evidence of changes in the mixing state associated with a build-up of organic matter confirmed by collocated measurements, and the largest impact on the mixing state was due to an increase in soot-dominant particles during this build-up. The CCSEM/EDX analysis showed the presence of two types of particle populations: the first was dominated by aged sea salt particles and had a higher mixing state index (indicating a more homogeneous population); the second was dominated by carbonaceous particles and had a lower mixing state index.

  19. Optical image acquisition system for colony analysis

    NASA Astrophysics Data System (ADS)

    Wang, Weixing; Jin, Wenbiao

    2006-02-01

    For counting of both colonies and plaques, there is a large number of applications including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Recently, many researchers and developers have worked on systems of this kind. Investigation shows that some existing systems have problems, as they belong to a new class of technology products. One of the main problems is image acquisition. In order to acquire colony images with good quality, an illumination box was constructed as follows: the box includes front lighting and back lighting, which can be selected by users based on the properties of the colony dishes. With the illumination box, lighting is uniform, and the colony dish can be placed in the same position every time, which makes image processing easier. A digital camera at the top of the box is connected to a PC with a USB cable, and all camera functions are controlled by the computer.

  20. System Matrix Analysis for Computed Tomography Imaging

    PubMed Central

    Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo

    2015-01-01

    In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high quality CT images can be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data. PMID:26575482

  1. System Matrix Analysis for Computed Tomography Imaging.

    PubMed

    Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo

    2015-01-01

    In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high quality CT images can be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data. PMID:26575482

  2. [Analysis of bone tissues by intravital imaging].

    PubMed

    Mizuno, Hiroki; Yamashita, Erika; Ishii, Masaru

    2016-05-01

    In recent years, fluorescence imaging techniques have made rapid advances, and it has become possible to observe the dynamics of living cells in individuals or tissues. It had been considered extremely difficult to observe living bone marrow directly because bone marrow is surrounded by hard calcified bone. We have now established a method for observing the cells constituting the bone marrow of living mice in real time by using an intravital two-photon imaging system. In this article, we present the latest data and reports on hematopoietic stem cells and leukemia cells obtained using intravital imaging techniques, and we also discuss further applications. PMID:27117619

  3. Texture Analysis for Classification of Risat-Ii Images

    NASA Astrophysics Data System (ADS)

    Chakraborty, D.; Thakur, S.; Jeyaram, A.; Krishna Murthy, Y. V. N.; Dadhwal, V. K.

    2012-08-01

    RISAT-II, or Radar Imaging Satellite-II, is a microwave imaging satellite launched by ISRO to image the earth during day and night and under all weather conditions. This satellite enhances ISRO's capability for disaster management applications together with forestry, agricultural, urban and oceanographic applications. Conventional pixel-based classification techniques cannot classify these types of images since they do not take into account the texture information of the image. This paper presents a method to classify high-resolution RISAT-II microwave images based on texture analysis. It suppresses the speckle noise from the microwave image before analyzing its texture, since speckle is essentially a form of noise that degrades the quality of an image and makes interpretation (visual or digital) more difficult. A local adaptive median filter is developed that uses local statistics to detect speckle noise in the microwave image and replace it with a local median value. A Local Binary Pattern (LBP) operator is proposed to measure the texture around each pixel of the speckle-suppressed microwave image. It considers a series of circles (2D) centered on the pixel with incremental radius values, and the pixels intersected by the perimeter of the circles of radius r (where r = 1, 3 and 5) are used for measuring the LBP of the center pixel. The significance of LBP is that it measures the texture around each pixel of the image and is computationally simple. The ISODATA method is used to cluster the transformed LBP image. The proposed method adequately classifies RISAT-II X band microwave images without human intervention.
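A fixed-neighborhood (radius-1, 8-neighbor) variant of the LBP computation described above can be sketched as follows; the paper's multi-radius sampling (r = 1, 3, 5) would additionally interpolate pixels on larger circle perimeters, and the function name is an illustrative assumption:

```python
import numpy as np

def lbp_radius1(img):
    """8-neighbor, radius-1 Local Binary Pattern for each interior pixel.

    Each neighbor >= center contributes one bit; the resulting 8-bit code
    is the texture descriptor of the center pixel (borders stay zero).
    """
    img = np.asarray(img, dtype=float)
    out = np.zeros(img.shape, dtype=np.uint8)
    # Clockwise neighbor offsets starting from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    H, W = img.shape
    center = img[1:-1, 1:-1]
    code = np.zeros(center.shape, dtype=np.uint8)
    for bit, (dr, dc) in enumerate(offsets):
        neighbor = img[1 + dr:H - 1 + dr, 1 + dc:W - 1 + dc]
        code |= (neighbor >= center).astype(np.uint8) << bit
    out[1:-1, 1:-1] = code
    return out
```

Histograms of these codes over local windows are what a clustering step such as ISODATA would then group into texture classes.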

  4. Multiple view image analysis of freefalling U.S. wheat grains for damage assessment

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Currently, inspection of wheat in the United States for grade and class is performed by human visual analysis. This is a time consuming operation typically taking several minutes for each sample. Digital imaging research has addressed this issue over the past two decades, with success in recognition...

  5. Four challenges in medical image analysis from an industrial perspective.

    PubMed

    Weese, Jürgen; Lorenz, Cristian

    2016-10-01

    Today's medical imaging systems produce a huge amount of images containing a wealth of information. However, the information is hidden in the data and image analysis algorithms are needed to extract it, to make it readily available for medical decisions and to enable an efficient workflow. Advances in medical image analysis over the past 20 years mean there are now many algorithms and ideas available that make it possible to address medical image analysis tasks in commercial solutions with sufficient performance in terms of accuracy, reliability and speed. At the same time new challenges have arisen. Firstly, there is a need for more generic image analysis technologies that can be efficiently adapted for a specific clinical task. Secondly, efficient approaches for ground truth generation are needed to match the increasing demands regarding validation and machine learning. Thirdly, algorithms for analyzing heterogeneous image data are needed. Finally, anatomical and organ models play a crucial role in many applications, and algorithms to construct patient-specific models from medical images with a minimum of user interaction are needed. These challenges are complementary to the on-going need for more accurate, more reliable and faster algorithms, and dedicated algorithmic solutions for specific applications. PMID:27344939

  6. Disability in Physical Education Textbooks: An Analysis of Image Content

    ERIC Educational Resources Information Center

    Taboas-Pais, Maria Ines; Rey-Cao, Ana

    2012-01-01

    The aim of this paper is to show how images of disability are portrayed in physical education textbooks for secondary schools in Spain. The sample was composed of 3,316 images published in 36 textbooks by 10 publishing houses. A content analysis was carried out using a coding scheme based on categories employed in other similar studies and adapted…

  7. An Online Image Analysis Tool for Science Education

    ERIC Educational Resources Information Center

    Raeside, L.; Busschots, B.; Waddington, S.; Keating, J. G.

    2008-01-01

    This paper describes an online image analysis tool developed as part of an iterative, user-centered development of an online Virtual Learning Environment (VLE) called the Education through Virtual Experience (EVE) Portal. The VLE provides a Web portal through which schoolchildren and their teachers create scientific proposals, retrieve images and…

  8. Ringed impact craters on Venus: An analysis from Magellan images

    NASA Technical Reports Server (NTRS)

    Alexopoulos, Jim S.; Mckinnon, William B.

    1992-01-01

    We have analyzed cycle 1 Magellan images covering approximately 90 percent of the venusian surface and have identified 55 unequivocal peak-ring craters and multiringed impact basins. This comprehensive study (52 peak-ring craters and at least 3 multiringed impact basins) complements our earlier independent analysis of Arecibo and Venera images and initial Magellan data and that of the Magellan team.

  9. Higher Education Institution Image: A Correspondence Analysis Approach.

    ERIC Educational Resources Information Center

    Ivy, Jonathan

    2001-01-01

    Investigated how marketing is used to convey higher education institution type image in the United Kingdom and South Africa. Using correspondence analysis, revealed the unique positionings created by old and new universities and technikons in these countries. Also identified which marketing tools they use in conveying their image. (EV)

  10. Geopositioning Precision Analysis of Multiple Image Triangulation Using Lro Nac Lunar Images

    NASA Astrophysics Data System (ADS)

    Di, K.; Xu, B.; Liu, B.; Jia, M.; Liu, Z.

    2016-06-01

    This paper presents an empirical analysis of the geopositioning precision of multiple image triangulation using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images at the Chang'e-3 (CE-3) landing site. Nine LROC NAC images are selected for comparative analysis of geopositioning precision. Rigorous sensor models of the images are established based on collinearity equations with interior and exterior orientation elements retrieved from the corresponding SPICE kernels. Rational polynomial coefficients (RPCs) of each image are derived by least squares fitting using a vast number of virtual control points generated according to the rigorous sensor models. Experiments with different combinations of images are performed for comparison. The results demonstrate that the plane coordinates can achieve a precision of 0.54 m to 2.54 m, with a height precision of 0.71 m to 8.16 m, when only two images are used for three-dimensional triangulation. There is a general trend that the geopositioning precision, especially the height precision, improves as the convergence angle of the two images increases from several degrees to about 50°. However, the image matching precision should also be taken into consideration when choosing image pairs for triangulation. The precisions of using all 9 images are 0.60 m, 0.50 m and 1.23 m in the along-track, cross-track and height directions, which are better than most combinations of two or more images. However, triangulation with a careful selection of fewer images can produce better precision than using all the images.

  11. Analysis of PETT images in psychiatric disorders

    SciTech Connect

    Brodie, J.D.; Gomez-Mont, F.; Volkow, N.D.; Corona, J.F.; Wolf, A.P.; Wolkin, A.; Russell, J.A.G.; Christman, D.; Jaeger, J.

    1983-01-01

    A quantitative method is presented for studying the pattern of metabolic activity in a set of Positron Emission Transaxial Tomography (PETT) images. Using complex Fourier coefficients as a feature vector for each image, cluster, principal components, and discriminant function analyses are used to empirically describe metabolic differences between control subjects and patients with DSM III diagnosis for schizophrenia or endogenous depression. We also present data on the effects of neuroleptic treatment on the local cerebral metabolic rate of glucose utilization (LCMRGI) in a group of chronic schizophrenics using the region of interest approach. 15 references, 4 figures, 3 tables.
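    The feature-extraction idea in this record, low-order complex Fourier coefficients of each image used as a feature vector for group discrimination, can be illustrated with a toy sketch; the nearest-class-mean rule below is a simple stand-in for the cluster, principal-components and discriminant-function analyses actually used, and all function names are hypothetical.

```python
import numpy as np

def fourier_features(img, n=4):
    """Feature vector from the n x n lowest-frequency complex Fourier
    coefficients of an image, split into real and imaginary parts."""
    coeffs = np.fft.fft2(img)[:n, :n].ravel()
    return np.concatenate([coeffs.real, coeffs.imag])

def nearest_mean_classify(train_feats, train_labels, feat):
    """Toy discriminant: assign to the class with the nearest mean feature."""
    labels = sorted(set(train_labels))
    means = [np.mean([f for f, l in zip(train_feats, train_labels) if l == lab],
                     axis=0) for lab in labels]
    dists = [np.linalg.norm(feat - m) for m in means]
    return labels[int(np.argmin(dists))]
```

On synthetic data with a clear intensity difference between groups, the DC Fourier coefficient alone separates the classes, which is the kind of gross metabolic difference the PETT analysis aims to describe quantitatively.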

  12. SLAR image interpretation keys for geographic analysis

    NASA Technical Reports Server (NTRS)

    Coiner, J. C.

    1972-01-01

    A means for side-looking airborne radar (SLAR) imagery to become a more widely used data source in geoscience and agriculture is suggested by providing interpretation keys as an easily implemented interpretation model. Interpretation problems faced by the researcher wishing to employ SLAR are specifically described, and the use of various types of image interpretation keys to overcome these problems is suggested. With examples drawn from agriculture and vegetation mapping, direct and associate dichotomous image interpretation keys are discussed and methods of constructing keys are outlined. Initial testing of the keys, key-based automated decision rules, and the role of the keys in an information system for agriculture are developed.

  13. Challenges and opportunities for quantifying roots and rhizosphere interactions through imaging and image analysis.

    PubMed

    Downie, H F; Adu, M O; Schmidt, S; Otten, W; Dupuy, L X; White, P J; Valentine, T A

    2015-07-01

    The morphology of roots and root systems influences the efficiency by which plants acquire nutrients and water, anchor themselves and provide stability to the surrounding soil. Plant genotype and the biotic and abiotic environment significantly influence root morphology, growth and ultimately crop yield. The challenge for researchers interested in phenotyping root systems is, therefore, not just to measure roots and link their phenotype to the plant genotype, but also to understand how the growth of roots is influenced by their environment. This review discusses progress in quantifying root system parameters (e.g. in terms of size, shape and dynamics) using imaging and image analysis technologies and also discusses their potential for providing a better understanding of root:soil interactions. Significant progress has been made in image acquisition techniques; however, trade-offs exist between sample throughput, sample size, image resolution and information gained. All of these factors impact on downstream image analysis processes. While there have been significant advances in computation power, limitations still exist in the statistical processes involved in image analysis. Utilizing and combining different imaging systems, integrating measurements and image analysis where possible, and amalgamating data will allow researchers to gain a better understanding of root:soil interactions. PMID:25211059

  14. Resting-State Functional Magnetic Resonance Imaging for Language Preoperative Planning

    PubMed Central

    Branco, Paulo; Seixas, Daniela; Deprez, Sabine; Kovacs, Silvia; Peeters, Ronald; Castro, São L.; Sunaert, Stefan

    2016-01-01

    Functional magnetic resonance imaging (fMRI) is a well-known non-invasive technique for the study of brain function. One of its most common clinical applications is preoperative language mapping, essential for the preservation of function in neurosurgical patients. Typically, fMRI is used to track task-related activity, but poor task performance and movement artifacts can be critical limitations in clinical settings. Recent advances in resting-state protocols open new possibilities for pre-surgical mapping of language potentially overcoming these limitations. To test the feasibility of using resting-state fMRI instead of conventional active task-based protocols, we compared results from fifteen patients with brain lesions while performing a verb-to-noun generation task and while at rest. Task-activity was measured using a general linear model analysis and independent component analysis (ICA). Resting-state networks were extracted using ICA and further classified in two ways: manually by an expert and by using an automated template matching procedure. The results revealed that the automated classification procedure correctly identified language networks as compared to the expert manual classification. We found a good overlay between task-related activity and resting-state language maps, particularly within the language regions of interest. Furthermore, resting-state language maps were as sensitive as task-related maps, and had higher specificity. Our findings suggest that resting-state protocols may be suitable to map language networks in a quick and clinically efficient way. PMID:26869899

  15. Resting-State Functional Magnetic Resonance Imaging for Language Preoperative Planning.

    PubMed

    Branco, Paulo; Seixas, Daniela; Deprez, Sabine; Kovacs, Silvia; Peeters, Ronald; Castro, São L; Sunaert, Stefan

    2016-01-01

    Functional magnetic resonance imaging (fMRI) is a well-known non-invasive technique for the study of brain function. One of its most common clinical applications is preoperative language mapping, essential for the preservation of function in neurosurgical patients. Typically, fMRI is used to track task-related activity, but poor task performance and movement artifacts can be critical limitations in clinical settings. Recent advances in resting-state protocols open new possibilities for pre-surgical mapping of language potentially overcoming these limitations. To test the feasibility of using resting-state fMRI instead of conventional active task-based protocols, we compared results from fifteen patients with brain lesions while performing a verb-to-noun generation task and while at rest. Task-activity was measured using a general linear model analysis and independent component analysis (ICA). Resting-state networks were extracted using ICA and further classified in two ways: manually by an expert and by using an automated template matching procedure. The results revealed that the automated classification procedure correctly identified language networks as compared to the expert manual classification. We found a good overlay between task-related activity and resting-state language maps, particularly within the language regions of interest. Furthermore, resting-state language maps were as sensitive as task-related maps, and had higher specificity. Our findings suggest that resting-state protocols may be suitable to map language networks in a quick and clinically efficient way. PMID:26869899

  16. State-selected imaging studies of formic acid photodissociation dynamics

    SciTech Connect

    Huang Cunshun; Yang Xueming; Zhang Cuimei

    2010-04-21

    The photodissociation dynamics of formic acid have been studied using velocity map ion imaging in the UV region. The measurements were made with resonance-enhanced multiphoton ionization (REMPI) spectroscopy and dc slice ion imaging. The OH REMPI spectrum from the photodissociation of formic acid at 244 nm has been recorded. The spectrum shows low rotational excitation (N ≤ 4). By fixing the probe laser on specific rotational transitions, the resulting OH images from various dissociation wavelengths have been accumulated. The translational energy distributions derived from the OH images imply that about half of the available energy goes into the internal excitation of the photofragments. The dissociation dynamics of formic acid are also discussed in view of recent theoretical calculations.

  17. Analysis of state-energy-program capabilities

    SciTech Connect

    Tatar, J.; Clifford, D.; Gunnison, F.; Humphrey, B.

    1981-05-01

    This report assesses the potential effects on state energy programs of a reduction in the financial assistance available through the State and Local Assistance Programs and the distribution of those effects. The assessment is based on a survey of nine state energy offices (SEOs), which were selected on the basis of state support of energy programs weighted by state energy consumption. The nine SEOs surveyed were the Arizona Energy Office, Arkansas Department of Energy, California Energy Commission, Florida Governor's Energy Office, Illinois Institute of Natural Resources, Minnesota Energy Agency, New Jersey Department of Energy, South Carolina Governor's Division of Energy Resources, and Washington State Energy Office.

  18. Spatially Weighted Principal Component Analysis for Imaging Classification

    PubMed Central

    Guo, Ruixin; Ahn, Mihye; Zhu, Hongtu

    2014-01-01

    The aim of this paper is to develop a supervised dimension reduction framework, called Spatially Weighted Principal Component Analysis (SWPCA), for high dimensional imaging classification. Two main challenges in imaging classification are the high dimensionality of the feature space and the complex spatial structure of imaging data. In SWPCA, we introduce two sets of novel weights including global and local spatial weights, which enable a selective treatment of individual features and incorporation of the spatial structure of imaging data and class label information. We develop an efficient two-stage iterative SWPCA algorithm and its penalized version along with the associated weight determination. We use both simulation studies and real data analysis to evaluate the finite-sample performance of our SWPCA. The results show that SWPCA outperforms several competing principal component analysis (PCA) methods, such as supervised PCA (SPCA), and other competing methods, such as sparse discriminant analysis (SDA). PMID:26089629
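    A minimal sketch of the core idea, rescaling features by weights before PCA, is shown below. The actual SWPCA derives its global and local weights from the spatial structure of the imaging data and the class labels; here a weight vector is simply assumed to be given.

```python
import numpy as np

def weighted_pca(X, weights, n_components=2):
    """PCA after rescaling each feature by a relevance/spatial weight.
    X: (n_samples, n_features); weights: (n_features,)."""
    Xw = (X - X.mean(axis=0)) * weights          # centre, then weight features
    U, S, Vt = np.linalg.svd(Xw, full_matrices=False)
    components = Vt[:n_components]               # principal directions
    scores = Xw @ components.T                   # low-dim projections
    return scores, components
```

Down-weighting an uninformative feature to zero removes it from the decomposition, so the leading components concentrate on the discriminative structure, which is the selective treatment SWPCA formalises.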

  19. Electron Microscopy and Image Analysis for Selected Materials

    NASA Technical Reports Server (NTRS)

    Williams, George

    1999-01-01

    This particular project was completed in collaboration with the metallurgical diagnostics facility. The objective of this research had four major components. First, we required training in the operation of the environmental scanning electron microscope (ESEM) for imaging of selected materials including biological specimens. The types of materials range from cyanobacteria and diatoms to cloth, metals, sand, composites and other materials. Second, to obtain training in surface elemental analysis technology using energy dispersive x-ray (EDX) analysis, and in the preparation of x-ray maps of these same materials. Third, to provide training for the staff of the metallurgical diagnostics and failure analysis team in the area of image processing and image analysis technology using NIH Image software. Finally, we were to assist in the sample preparation, observing, imaging, and elemental analysis for Mr. Richard Hoover, one of NASA MSFC's solar physicists and Marshall's principal scientist for the agency-wide virtual Astrobiology Institute. These materials have been collected from various places around the world including the Fox Tunnel in Alaska, Siberia, Antarctica, ice core samples from near Lake Vostoc, thermal vents in the ocean floor, hot springs and many others. We were successful in our efforts to obtain high quality, high resolution images of various materials including selected biological ones. Surface analyses (EDX) and x-ray maps were easily prepared with this technology. We also discovered and used some applications for NIH Image software in the metallurgical diagnostics facility.

  20. Feasible logic Bell-state analysis with linear optics.

    PubMed

    Zhou, Lan; Sheng, Yu-Bo

    2016-01-01

    We describe a feasible logic Bell-state analysis protocol by employing the logic entanglement to be the robust concatenated Greenberger-Horne-Zeilinger (C-GHZ) state. This protocol only uses polarization beam splitters and half-wave plates, which are available in current experimental technology. We can conveniently identify two of the logic Bell states. This protocol can be easily generalized to the arbitrary C-GHZ state analysis. We can also distinguish two N-logic-qubit C-GHZ states. As the previous theory and experiment both showed that the C-GHZ state has the robustness feature, this logic Bell-state analysis and C-GHZ state analysis may be essential for linear-optical quantum computation protocols whose building blocks are logic-qubit entangled state. PMID:26877208

  1. Feasible logic Bell-state analysis with linear optics

    NASA Astrophysics Data System (ADS)

    Zhou, Lan; Sheng, Yu-Bo

    2016-02-01

    We describe a feasible logic Bell-state analysis protocol by employing the logic entanglement to be the robust concatenated Greenberger-Horne-Zeilinger (C-GHZ) state. This protocol only uses polarization beam splitters and half-wave plates, which are available in current experimental technology. We can conveniently identify two of the logic Bell states. This protocol can be easily generalized to the arbitrary C-GHZ state analysis. We can also distinguish two N-logic-qubit C-GHZ states. As the previous theory and experiment both showed that the C-GHZ state has the robustness feature, this logic Bell-state analysis and C-GHZ state analysis may be essential for linear-optical quantum computation protocols whose building blocks are logic-qubit entangled state.

  2. Feasible logic Bell-state analysis with linear optics

    PubMed Central

    Zhou, Lan; Sheng, Yu-Bo

    2016-01-01

    We describe a feasible logic Bell-state analysis protocol by employing the logic entanglement to be the robust concatenated Greenberger-Horne-Zeilinger (C-GHZ) state. This protocol only uses polarization beam splitters and half-wave plates, which are available in current experimental technology. We can conveniently identify two of the logic Bell states. This protocol can be easily generalized to the arbitrary C-GHZ state analysis. We can also distinguish two N-logic-qubit C-GHZ states. As the previous theory and experiment both showed that the C-GHZ state has the robustness feature, this logic Bell-state analysis and C-GHZ state analysis may be essential for linear-optical quantum computation protocols whose building blocks are logic-qubit entangled state. PMID:26877208

  3. Automated Analysis of Mammography Phantom Images

    NASA Astrophysics Data System (ADS)

    Brooks, Kenneth Wesley

    The present work stems from the hypothesis that humans are inconsistent when making subjective analyses of images and that human decisions for moderately complex images may be performed by a computer with complete objectivity, once a human acceptance level has been established. The following goals were established to test the hypothesis: (1) investigate observer variability within the standard mammographic phantom evaluation process; (2) evaluate options for high-resolution image digitization and utilize the most appropriate technology for standard mammographic phantom film digitization; (3) develop a machine-based vision system for evaluating standard mammographic phantom images to eliminate effects of human variabilities; and (4) demonstrate the completed system's performance against human observers for accreditation and for manufacturing quality control of standard mammographic phantom images. The following methods and procedures were followed to achieve the goals of the research: (1) human variabilities in the American College of Radiology accreditation process were simulated by observer studies involving 30 medical physicists and these were compared to the same number of diagnostic radiologists and an untrained control group of observers; (2) current digitization technologies were presented and performance test procedures were developed; three devices were tested which represented commercially available high, intermediate and low-end contrast and spatial resolution capabilities; (3) optimal image processing schemes were applied and tested which performed low, intermediate and high-level computer vision tasks; and (4) the completed system's performance was tested against human observers for accreditation and for manufacturing quality control of standard mammographic phantom images. The results from application of the procedures were as follows: (1) the simulated American College of Radiology mammography accreditation program phantom evaluation process demonstrated

  4. Image analysis in dual modality tomography for material classification

    NASA Astrophysics Data System (ADS)

    Basarab-Horwath, I.; Daniels, A. T.; Green, R. G.

    2001-08-01

    A dual modality tomographic system is described for material classification in a simulated multi-component flow regime. It combines two tomographic modalities, electrical current and light, to image the interrogated area. Derived image parameters did not allow material classification. PCA analysis was performed on this data set producing a new parameter set, which allowed material classification. This procedure reduces the dimensionality of the data set and also offers a pre-processing technique prior to analysis by another classifier.

  5. Non-Imaging Software/Data Analysis Requirements

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The analysis software needs of the non-imaging planetary data user are discussed. Assumptions as to the nature of the planetary science data centers where the data are physically stored are advanced, the scope of the non-imaging data is outlined, and facilities that users are likely to need to define and access data are identified. Data manipulation and analysis needs and display graphics are discussed.

  6. Analysis on correlation imaging based on fractal interpolation

    NASA Astrophysics Data System (ADS)

    Li, Bailing; Zhang, Wenwen; Chen, Qian; Gu, Guohua

    2015-10-01

    A fractal interpolation algorithm is discussed in detail, and the statistical self-similarity characteristics of the light field in a correlation experiment are analyzed. For correlation imaging experiments under low sampling frequency, an image analysis approach based on a fractal interpolation algorithm is proposed. This approach aims to improve the resolution of the original image, which contains few pixels, and to highlight fuzzy image contour features. Using this method, a new model for the light field has been established. For different moments of the intensity in the receiving plane, a local field division is also established, and the iterated function system based on the experimental data set can then be obtained by choosing an appropriate compression ratio under a scientific error estimate. On the basis of the iterated function system, an explicit fractal interpolation function expression is given in this paper. The simulation results show that the correlation image reconstructed by fractal interpolation approximates the original image well. The number of image pixels after interpolation is significantly increased. This method effectively addresses the problem of image pixel deficiency and significantly improves the outlines of objects in the image. The rate of deviation is adopted as a parameter to evaluate the effect of the algorithm objectively. In summary, the fractal interpolation method proposed in this paper not only preserves the overall image but also enriches the local information of the original image.
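    The construction described, an iterated function system (IFS) whose attractor is the graph of a fractal interpolation function, can be sketched in one dimension as follows. The affine-map coefficients follow the standard fractal interpolation formulation; the vertical scaling factors d_i (the analogue of the compression ratio above) are assumed to be given.

```python
import numpy as np

def fif_maps(x, y, d):
    """Affine IFS maps w_i(t, v) = (a_i t + e_i, c_i t + d_i v + f_i) for the
    fractal interpolation function through points (x_i, y_i) with vertical
    scaling factors d_i (|d_i| < 1)."""
    n = len(x) - 1
    span = x[-1] - x[0]
    maps = []
    for i in range(1, n + 1):
        a = (x[i] - x[i - 1]) / span
        e = (x[-1] * x[i - 1] - x[0] * x[i]) / span
        c = (y[i] - y[i - 1] - d[i - 1] * (y[-1] - y[0])) / span
        f = (x[-1] * y[i - 1] - x[0] * y[i]
             - d[i - 1] * (x[-1] * y[0] - x[0] * y[-1])) / span
        maps.append((a, e, c, d[i - 1], f))
    return maps

def fif_points(x, y, d, n_iter=6):
    """Iterate the maps on the interpolation points; the accumulated point set
    approaches the attractor, i.e. the graph of the interpolation function."""
    pts = np.column_stack([x, y])
    for _ in range(n_iter):
        pts = np.vstack([
            np.column_stack([a * pts[:, 0] + e,
                             c * pts[:, 0] + dd * pts[:, 1] + f])
            for a, e, c, dd, f in fif_maps(x, y, d)
        ])
    return pts
```

With all d_i = 0 the attractor collapses to the ordinary piecewise linear interpolant, and each iteration multiplies the number of generated points by the number of maps, which is how the method increases the pixel count of the reconstructed image.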

  7. Image analysis of dye stained patterns in soils

    NASA Astrophysics Data System (ADS)

    Bogner, Christina; Trancón y Widemann, Baltasar; Lange, Holger

    2013-04-01

    Quality of surface water and groundwater is directly affected by flow processes in the unsaturated zone. In general, it is difficult to measure or model water flow. Indeed, parametrization of hydrological models is problematic and often no unique solution exists. To visualise flow patterns in soils directly, dye tracer studies can be done. These experiments provide images of stained soil profiles, and their evaluation demands knowledge in hydrology as well as in image analysis and statistics. First, these photographs are converted to binary images, classifying the pixels into dye-stained and non-stained ones. Then, some feature extraction is necessary to discern relevant hydrological information. In our study we propose to use several index functions to extract different (ideally complementary) features. We associate each image row with a feature vector (i.e. a certain number of image function values) and use these features to cluster the image rows to identify similar image areas. Because images of stained profiles might have different reasonable clusterings, we calculate multiple consensus clusterings. An expert can explore these different solutions and base his/her interpretation of predominant flow mechanisms on quantitative (objective) criteria. The complete workflow from reading in binary images to final clusterings has been implemented in the free R system, a language and environment for statistical computing. The calculation of image indices is part of our own package Indigo; manipulation of binary images, clustering and visualization of results are done using either built-in facilities in R, additional R packages or the LaTeX system.
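    The workflow described (per-row index functions on a binary image, then clustering of the row feature vectors) was implemented by the authors in R; a minimal Python sketch of the same idea, with simple stand-in index functions and a tiny deterministic k-means, might look like this:

```python
import numpy as np

def row_features(binary_img):
    """Per-row feature vector: stained fraction, number of stained runs,
    and mean run length (simple stand-ins for the paper's index functions)."""
    feats = []
    for row in binary_img:
        frac = row.mean()
        edges = np.diff(np.concatenate([[0], row, [0]]))
        n_runs = int((edges == 1).sum())          # each run starts with a 0->1 edge
        mean_run = row.sum() / n_runs if n_runs else 0.0
        feats.append([frac, n_runs, mean_run])
    return np.array(feats)

def kmeans_rows(feats, k=2, n_iter=20):
    """Minimal k-means to group image rows with similar staining patterns.
    Centres are seeded deterministically, spread along the feature-norm order."""
    order = np.argsort(np.linalg.norm(feats, axis=1))
    centres = feats[order[np.linspace(0, len(feats) - 1, k).astype(int)]]
    for _ in range(n_iter):
        labels = np.argmin(((feats[:, None] - centres) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = feats[labels == j].mean(axis=0)
    return labels
```

Running several such clusterings (different index functions, different k) and combining them would correspond to the consensus-clustering step the paper describes.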

  8. The ImageJ ecosystem: An open platform for biomedical image analysis.

    PubMed

    Schindelin, Johannes; Rueden, Curtis T; Hiner, Mark C; Eliceiri, Kevin W

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available, from commercial to academic, special-purpose to Swiss army knife, small to large, but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. PMID:26153368

  9. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.

  10. Hyperspectral image analysis using artificial color

    NASA Astrophysics Data System (ADS)

    Fu, Jian; Caulfield, H. John; Wu, Dongsheng; Tadesse, Wubishet

    2010-03-01

    By definition, HSC (HyperSpectral Camera) images are much richer in spectral data than, say, a COTS (Commercial-Off-The-Shelf) color camera. But data are not information. If we do the task right, useful information can be derived from the data in HSC images. Nature faced essentially the identical problem. The incident light is so complex spectrally that measuring it with high resolution would provide far more data than animals can handle in real time. Nature's solution was to do irreversible POCS (Projections Onto Convex Sets) to achieve huge reductions in data with minimal reduction in information. Thus we can arrange for our man-made systems to do what nature did: project the HSC image onto two or more broad, overlapping curves. The task we have undertaken in the last few years is to develop this idea, which we call Artificial Color. What we report here is the use of the measured HSC image data projected onto two or three convex, overlapping, broad curves in analogy with the sensitivity curves of human cone cells. Testing two quite different HSC images in that manner produced the desired result: good discrimination or segmentation that can be done very simply and hence is likely to be doable in real time with specialized computers. Using POCS on the HSC data to reduce the processing complexity produced excellent discrimination in those two cases. For technical reasons discussed here, the figures of merit for the kind of pattern recognition we use are incommensurate with the figures of merit of conventional pattern recognition. We used some force fitting to make a comparison nevertheless, because it shows what is also obvious qualitatively. In our tasks our method works better.
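    The Artificial Color projection, mapping each pixel spectrum onto a few broad, overlapping response curves, can be sketched as below; the Gaussian curves, their centres and their width are illustrative stand-ins for the cone-like sensitivity curves described above, not the authors' actual curves.

```python
import numpy as np

def artificial_color(cube, centres=(450.0, 550.0, 650.0), width=60.0,
                     wavelengths=None):
    """Project each pixel spectrum of a hyperspectral cube (H, W, B) onto
    broad overlapping Gaussian response curves, giving an (H, W, n_curves)
    'artificial colour' image. This projection is irreversible: it discards
    spectral detail while retaining discriminative information."""
    h, w, b = cube.shape
    if wavelengths is None:
        wavelengths = np.linspace(400.0, 700.0, b)   # assumed band positions (nm)
    curves = np.exp(-0.5 * ((wavelengths[None, :]
                             - np.array(centres)[:, None]) / width) ** 2)
    curves /= curves.sum(axis=1, keepdims=True)      # normalise each response curve
    return (cube.reshape(-1, b) @ curves.T).reshape(h, w, -1)
```

Pixels whose spectra concentrate at short versus long wavelengths end up dominated by different output channels, so simple thresholding in the reduced space can already discriminate or segment them, as the record describes.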

  11. Analysis of Multipath Pixels in SAR Images

    NASA Astrophysics Data System (ADS)

    Zhao, J. W.; Wu, J. C.; Ding, X. L.; Zhang, L.; Hu, F. M.

    2016-06-01

    Because the received radar signal is the sum of all signal contributions overlaid in a single pixel regardless of travel path, the multipath effect must be taken seriously: multiple-bounce returns are added to direct scatter echoes, producing ghost scatterers. Most existing solutions to the multipath problem attempt to recover the signal propagation path. To simulate signal propagation, many inputs must be specified in advance, such as sensor parameters, the geometry of the objects (shape, location, orientation, mutual position between adjacent buildings) and the physical parameters of the surface (roughness, correlation length, permittivity), which together determine the strength of the radar signal backscattered to the SAR sensor. However, obtaining a highly detailed object model of an unfamiliar area by field survey is not practical, as it is laborious and time-consuming. In this paper, SAR imaging simulation based on RaySAR is first conducted to gain a basic understanding of multipath effects and to enable further comparison. In addition to the pre-imaging simulation, the post-imaging products, i.e. the radar images, are also considered. Both Cosmo-SkyMed ascending and descending SAR images of Lupu Bridge in Shanghai are used in the experiment. As a result, the reflectivity map and the signal distribution map of different bounce levels are simulated and validated against a 3D real model. Statistical indices such as phase stability, mean amplitude, amplitude dispersion, coherence and mean-sigma ratio in the case of layover are analyzed in combination with the RaySAR output.

  12. Two-photon imaging and analysis of neural network dynamics

    NASA Astrophysics Data System (ADS)

    Lütcke, Henry; Helmchen, Fritjof

    2011-08-01

    The glow of a starry night sky, the smell of a freshly brewed cup of coffee or the sound of ocean waves breaking on the beach are representations of the physical world that have been created by the dynamic interactions of thousands of neurons in our brains. How the brain mediates perceptions, creates thoughts, stores memories and initiates actions remains one of the most profound puzzles in biology, if not all of science. A key to a mechanistic understanding of how the nervous system works is the ability to measure and analyze the dynamics of neuronal networks in the living organism in the context of sensory stimulation and behavior. Dynamic brain properties have been fairly well characterized on the microscopic level of individual neurons and on the macroscopic level of whole brain areas largely with the help of various electrophysiological techniques. However, our understanding of the mesoscopic level comprising local populations of hundreds to thousands of neurons (so-called 'microcircuits') remains comparably poor. Predominantly, this has been due to the technical difficulties involved in recording from large networks of neurons with single-cell spatial resolution and near-millisecond temporal resolution in the brain of living animals. In recent years, two-photon microscopy has emerged as a technique which meets many of these requirements and thus has become the method of choice for the interrogation of local neural circuits. Here, we review the state-of-research in the field of two-photon imaging of neuronal populations, covering the topics of microscope technology, suitable fluorescent indicator dyes, staining techniques, and in particular analysis techniques for extracting relevant information from the fluorescence data. We expect that functional analysis of neural networks using two-photon imaging will help to decipher fundamental operational principles of neural microcircuits.

  13. Independent component analysis applications on THz sensing and imaging

    NASA Astrophysics Data System (ADS)

    Balci, Soner; Maleski, Alexander; Nascimento, Matheus Mello; Philip, Elizabath; Kim, Ju-Hyung; Kung, Patrick; Kim, Seongsin M.

    2016-05-01

    We report the Independent Component Analysis (ICA) technique applied to THz spectroscopy and imaging to achieve blind source separation. A reference water vapor absorption spectrum was extracted via ICA; ICA was then utilized on a THz spectroscopic image in order to remove the absorption of water molecules from each pixel. For this purpose, silica gel was chosen as the material of interest for its strong water absorption. The resulting image clearly showed that ICA effectively removed the water content in the detected signal, allowing us to image the silica gel beads distinctly even though they were totally embedded in water before ICA was applied.
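A pure-NumPy sketch of the blind source separation idea, using a textbook FastICA iteration on synthetic mixtures rather than the authors' THz data or toolchain:

```python
import numpy as np

# Minimal FastICA sketch (pure NumPy) of blind source separation: two mixed
# signals are separated back into independent components. Synthetic signals
# stand in for the "sample" and "water" contributions in the THz data.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
S = np.vstack([np.sin(2 * np.pi * 5 * t),            # "sample" signal
               np.sign(np.sin(2 * np.pi * 3 * t))])  # "water" interference
X = np.array([[1.0, 0.6], [0.5, 1.0]]) @ S           # observed mixtures

# Center and whiten the observations.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# One-unit FastICA with deflation, tanh nonlinearity.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    for _ in range(200):
        wx = w @ Z
        g, gp = np.tanh(wx), 1 - np.tanh(wx) ** 2
        w = (Z * g).mean(axis=1) - gp.mean() * w
        w -= W[:i].T @ (W[:i] @ w)       # decorrelate from earlier components
        w /= np.linalg.norm(w)
    W[i] = w

S_hat = W @ Z  # recovered components (up to sign, scale and order)
# Each recovered row correlates strongly with one original source.
print(np.abs(np.corrcoef(np.vstack([S, S_hat]))[:2, 2:]).round(2))
```

In the paper's setting, each pixel's spectrum plays the role of a mixture and the water-vapor component is discarded after separation.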

  14. Synthetic aperture sonar imaging using joint time-frequency analysis

    NASA Astrophysics Data System (ADS)

    Wang, Genyuan; Xia, Xiang-Gen

    1999-03-01

    The non-ideal motion of the hydrophone usually induces aperture error in the synthetic aperture sonar (SAS), which is one of the most important factors degrading SAS imaging quality. In SAS imaging, the return signals are usually nonstationary due to the non-ideal hydrophone motion. In this paper, joint time-frequency analysis (JTFA), a good technique for analyzing nonstationary signals, is used for SAS imaging. Based on the JTFA of the sonar return signals, a novel SAS imaging algorithm is proposed. The algorithm is verified by simulation examples.
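The abstract does not specify which transform is used, but the role of JTFA can be illustrated with a short-time Fourier transform, one common JTFA tool, applied to a synthetic chirp standing in for a nonstationary return:

```python
import numpy as np

# Sketch of joint time-frequency analysis via a short-time Fourier transform
# (one common JTFA tool; not necessarily the paper's transform). A chirp
# stands in for a nonstationary sonar return: its instantaneous frequency
# drifts over time, which a single FFT cannot localize.
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * (50 * t + 100 * t ** 2))   # 50 Hz -> 250 Hz chirp

win, hop = 128, 64
frames = [x[i:i + win] * np.hanning(win)
          for i in range(0, len(x) - win + 1, hop)]
spec = np.abs(np.fft.rfft(frames, axis=1))        # time x frequency magnitude

peak_hz = spec.argmax(axis=1) * fs / win          # ridge of the chirp
print(peak_hz)   # climbs frame by frame, tracking the instantaneous frequency
```

The time-frequency ridge exposes motion-induced frequency drift that a stationary spectrum would smear together.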

  15. Fiji - an Open Source platform for biological image analysis

    PubMed Central

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2013-01-01

    Fiji is a distribution of the popular Open Source software ImageJ focused on biological image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image processing algorithms. Fiji facilitates the transformation of novel algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities. PMID:22743772

  16. High-speed, electronically shuttered solid-state imager technology (invited)

    NASA Astrophysics Data System (ADS)

    Reich, R. K.; Rathman, D. D.; O'Mara, D. M.; Young, D. J.; Loomis, A. H.; Kohler, E. J.; Osgood, R. M.; Murphy, R. A.; Rose, M.; Berger, R.; Watson, S. A.; Ulibarri, M. D.; Perry, T.; Kosicki, B. B.

    2003-03-01

    Electronically shuttered solid-state imagers are being developed for high-speed imaging applications. A 5 cm×5 cm, 512×512-element, multiframe charge-coupled device (CCD) imager has been fabricated for the Los Alamos National Laboratory DARHT facility that collects four sequential image frames at megahertz rates. To operate at fast frame rates with high sensitivity, the imager uses an electronic shutter technology designed for back-illuminated CCDs. The design concept and test results are described for the burst-frame-rate imager. Also discussed is an evolving solid-state imager technology with interesting characteristics for creating large-format x-ray detectors with short integration times (100 ps to 1 ns). Proposed device architectures use CMOS technology for high-speed sampling (tens of picoseconds transistor switching times). Techniques for parallel clock distribution that trigger the sampling of x-ray photoelectrons are described, exploiting features of CMOS technology.

  17. Parameter-Based Performance Analysis of Object-Based Image Analysis Using Aerial and Quikbird-2 Images

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz, M.

    2014-09-01

    Opening new possibilities for research, very high resolution (VHR) imagery acquired by recent commercial satellites and aerial systems requires advanced approaches and techniques that can handle large volumes of data with high local variance. Delineation of land use/cover information from VHR images is a hot research topic in remote sensing. In recent years, object-based image analysis (OBIA) has become a popular solution for image analysis tasks as it considers shape, texture and content information associated with the image objects. The most important stage of OBIA is the image segmentation process applied prior to classification. Determination of optimal segmentation parameters is of crucial importance for the performance of the selected classifier. In this study, the effectiveness and applicability of the segmentation method in relation to its parameters was analysed using two VHR images, an aerial photo and a Quickbird-2 image. The multi-resolution segmentation technique was employed with its optimal parameters of scale, shape and compactness, defined after an extensive trial process on the data sets. A nearest neighbour classifier was applied to the segmented images, followed by accuracy assessment. Results show that segmentation parameters have a direct effect on the classification accuracy, and low values of scale-shape combinations produce the highest classification accuracies. Also, the compactness parameter was found to have minimal effect on the construction of image objects, hence it can be set to a constant value in image classification.

  18. Applications of Aptamers in Targeted Imaging: State of the Art

    PubMed Central

    Dougherty, Casey A.; Cai, Weibo; Hong, Hao

    2015-01-01

    Aptamers are single-stranded oligonucleotides with high affinity and specificity to the target molecules or cells, thus they can serve as an important category of molecular targeting ligand. Since their discovery, aptamers have been rapidly translated into clinical practice. The strong target affinity/selectivity, cost-effectiveness, chemical versatility and safety of aptamers are superior to those of traditional peptide- or protein-based ligands, which makes them unique choices for molecular imaging. Therefore, aptamers are considered to be extremely useful to guide various imaging contrast agents to the target tissues or cells for optical, magnetic resonance, nuclear, computed tomography, ultrasound and multimodality imaging. This review aims to provide an overview of aptamers' advantages as targeting ligands and their application in targeted imaging. Further research in the synthesis of new types of aptamers and their conjugation with new categories of contrast agents is required to develop clinically translatable aptamer-based imaging agents, which will eventually result in improved patient care. PMID:25866268

  19. Texture analysis on MRI images of non-Hodgkin lymphoma.

    PubMed

    Harrison, L; Dastidar, P; Eskola, H; Järvenpää, R; Pertovaara, H; Luukkaala, T; Kellokumpu-Lehtinen, P-L; Soimakallio, S

    2008-04-01

    The aim here is to show that texture parameters of magnetic resonance imaging (MRI) data change in lymphoma tissue during chemotherapy. Ten patients with non-Hodgkin lymphoma masses in the abdomen were imaged for chemotherapy response evaluation at three consecutive times. The analysis was performed with the MaZda texture analysis (TA) application. The best discrimination in lymphoma MRI texture was obtained within T2-weighted images between the pre-treatment and the second response evaluation stage. TA proved to be a promising quantitative means of representing lymphoma tissue changes during medication follow-up. PMID:18342845
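One classic family of texture parameters of the kind MaZda computes is grey-level co-occurrence (GLCM) features; below is a minimal NumPy sketch of the contrast feature for a one-pixel horizontal offset (MaZda's actual feature set is far broader):

```python
import numpy as np

# Sketch of one classic texture parameter: a grey-level co-occurrence matrix
# (GLCM) and its contrast feature, for a horizontal offset of one pixel.

def glcm_contrast(img, levels):
    glcm = np.zeros((levels, levels))
    # Count co-occurrences of grey levels in horizontally adjacent pixels.
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()                     # normalize to joint probabilities
    i, j = np.indices(glcm.shape)
    return ((i - j) ** 2 * glcm).sum()     # contrast: expected (i-j)^2

smooth = np.zeros((8, 8), dtype=int)           # uniform region: zero contrast
checker = np.indices((8, 8)).sum(axis=0) % 2   # alternating pixels: contrast 1
print(glcm_contrast(smooth, 2), glcm_contrast(checker, 2))
```

Tracking such features over the treatment time points is the kind of quantitative follow-up the abstract describes.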

  20. The Land Analysis System (LAS) for multispectral image processing

    USGS Publications Warehouse

    Wharton, S. W.; Lu, Y. C.; Quirk, Bruce K.; Oleson, Lyndon R.; Newcomer, J. A.; Irani, Frederick M.

    1988-01-01

    The Land Analysis System (LAS) is an interactive software system available in the public domain for the analysis, display, and management of multispectral and other digital image data. LAS provides over 240 application functions and utilities, a flexible user interface, complete online and hard-copy documentation, extensive image-data file management, reformatting and conversion utilities, and high-level device-independent access to image display hardware. The authors summarize the capabilities of the current release of LAS (version 4.0) and discuss plans for future development. Particular emphasis is given to the issue of system portability and the importance of removing and/or isolating hardware and software dependencies.

  1. (Hyper)-graphical models in biomedical image analysis.

    PubMed

    Paragios, Nikos; Ferrante, Enzo; Glocker, Ben; Komodakis, Nikos; Parisot, Sarah; Zacharaki, Evangelia I

    2016-10-01

    Computational vision, visual computing and biomedical image analysis have made tremendous progress over the past two decades. This is mostly due to the development of efficient learning and inference algorithms which allow better and richer modeling of image and visual understanding tasks. Hyper-graph representations are among the most prominent tools to address such perception tasks through the casting of perception as a graph optimization problem. In this paper, we briefly introduce the importance of such representations, discuss their strengths and limitations, provide appropriate strategies for their inference and present their application to a variety of problems in biomedical image analysis. PMID:27377331

  2. Infrared thermal facial image sequence registration analysis and verification

    NASA Astrophysics Data System (ADS)

    Chen, Chieh-Li; Jian, Bo-Lin

    2015-03-01

    To study the emotional responses of subjects to the International Affective Picture System (IAPS), infrared thermal facial image sequences are preprocessed for registration before further analysis, so that the variance caused by minor and irregular subject movements is reduced. Without affecting the comfort level and inducing minimal harm, this study proposes an infrared thermal facial image sequence registration process that reduces the deviations caused by the unconscious head shaking of the subjects. A fixed image for registration is produced through the localization of the centroid of the eye region as well as image translation and rotation processes. The thermal image sequence is then automatically registered using the proposed two-stage genetic algorithm. The deviation before and after image registration is demonstrated by image quality indices. The results show that the infrared thermal image sequence registration process proposed in this study is effective in localizing facial images accurately, which will be beneficial to the correlation analysis of psychological information related to the facial area.
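The centroid-localization and translation step can be sketched as follows; the rotation handling and the two-stage genetic algorithm of the actual pipeline are omitted, and the "warm patch" is synthetic:

```python
import numpy as np

# Sketch of the fixed-image idea: localize a bright region's intensity
# centroid and translate the frame so the centroid lands at a reference
# position. Only the centroid/translation step is illustrated here.

def centroid(img):
    ys, xs = np.indices(img.shape)
    m = img.sum()
    return ys.ravel() @ img.ravel() / m, xs.ravel() @ img.ravel() / m

def register_to(img, ref_yx):
    cy, cx = centroid(img)
    dy, dx = int(round(ref_yx[0] - cy)), int(round(ref_yx[1] - cx))
    return np.roll(img, (dy, dx), axis=(0, 1))   # integer-pixel translation

frame = np.zeros((32, 32))
frame[10:13, 12:15] = 1.0            # a warm 3x3 patch, centroid (11, 13)
aligned = register_to(frame, (16, 16))
print(centroid(aligned))             # centroid now at the reference position
```

In the study, the reference position comes from the eye-region centroid of the fixed image, and residual rotation is optimized separately.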

  3. Cloud based toolbox for image analysis, processing and reconstruction tasks.

    PubMed

    Bednarz, Tomasz; Wang, Dadong; Arzhaeva, Yulia; Lagerstrom, Ryan; Vallotton, Pascal; Burdett, Neil; Khassapov, Alex; Szul, Piotr; Chen, Shiping; Sun, Changming; Domanski, Luke; Thompson, Darren; Gureyev, Timur; Taylor, John A

    2015-01-01

    This chapter describes a novel way of carrying out image analysis, reconstruction and processing tasks using a cloud-based service provided on the Australian National eResearch Collaboration Tools and Resources (NeCTAR) infrastructure. The toolbox gives users free access to a wide range of useful blocks of functionality (imaging functions) that can be connected together into workflows, allowing creation of even more complex algorithms that can be re-run on different data sets, shared with others, or further adjusted. The functions provided are in the areas of cellular imaging, advanced X-ray image analysis, computed tomography, and 3D medical imaging and visualisation. The service is currently available on the website www.cloudimaging.net.au. PMID:25381109

  4. Segmented infrared image analysis for rotating machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Duan, Lixiang; Yao, Mingchao; Wang, Jinjiang; Bai, Tangbo; Zhang, Laibin

    2016-07-01

    As a noncontact and non-intrusive technique, infrared image analysis becomes promising for machinery defect diagnosis. However, the insignificant information and strong noise in infrared image limit its performance. To address this issue, this paper presents an image segmentation approach to enhance the feature extraction in infrared image analysis. A region selection criterion named dispersion degree is also formulated to discriminate fault representative regions from unrelated background information. Feature extraction and fusion methods are then applied to obtain features from selected regions for further diagnosis. Experimental studies on a rotor fault simulator demonstrate that the presented segmented feature enhancement approach outperforms the one from the original image using both Naïve Bayes classifier and support vector machine.

  5. Image analysis and compression: renewed focus on texture

    NASA Astrophysics Data System (ADS)

    Pappas, Thrasyvoulos N.; Zujovic, Jana; Neuhoff, David L.

    2010-01-01

    We argue that a key to further advances in the fields of image analysis and compression is a better understanding of texture. We review a number of applications that critically depend on texture analysis, including image and video compression, content-based retrieval, visual to tactile image conversion, and multimodal interfaces. We introduce the idea of "structurally lossless" compression of visual data that allows significant differences between the original and decoded images, which may be perceptible when they are viewed side-by-side, but do not affect the overall quality of the image. We then discuss the development of objective texture similarity metrics, which allow substantial point-by-point deviations between textures that according to human judgment are essentially identical.

  6. STEM: Science Technology Engineering Mathematics. State-Level Analysis

    ERIC Educational Resources Information Center

    Carnevale, Anthony P.; Smith, Nicole; Melton, Michelle

    2011-01-01

    The science, technology, engineering, and mathematics (STEM) state-level analysis provides policymakers, educators, state government officials, and others with details on the projections of STEM jobs through 2018. This report delivers a state-by-state snapshot of the demand for STEM jobs, including: (1) The number of forecast net new and…

  7. Analysis of human aorta using fluorescence lifetime imaging microscopy (FLIM)

    NASA Astrophysics Data System (ADS)

    Vieira-Damiani, Gislaine; Adur, J.; Ferro, D. P.; Adam, R. L.; Pelegati, V.; Thomáz, A.; Cesar, C. L.; Metze, K.

    2012-03-01

    The use of photonics has improved our understanding of biologic phenomena. For the study of the normal and pathologic architecture of the aorta, the use of Two-Photon Excited Fluorescence (TPEF) and Second Harmonic Generation showed interesting details of morphologic changes in the elastin-collagen architecture during aging or the development of hypertension in previous studies. In this investigation we applied fluorescence lifetime imaging (FLIM) to the morphologic analysis of human aortas. The aim of our study was to use FLIM on non-stained, formalin-fixed and paraffin-embedded samples of the ascending aorta in hypertensive and normotensive patients of various ages, examining two different topographical regions. The FLIM spectra of collagen and elastic fibers were clearly distinguishable, thus permitting an exact analysis of unstained material at the microscopic level. Moreover, the FLIM spectrum of elastic fibers revealed variations between individual cases, which indicate modifications on a molecular level and might be related to age or disease states.

  8. Automated fine structure image analysis method for discrimination of diabetic retinopathy stage using conjunctival microvasculature images

    PubMed Central

    Khansari, Maziyar M; O’Neill, William; Penn, Richard; Chau, Felix; Blair, Norman P; Shahidi, Mahnaz

    2016-01-01

    The conjunctiva is a densely vascularized mucus membrane covering the sclera of the eye with a unique advantage of accessibility for direct visualization and non-invasive imaging. The purpose of this study is to apply an automated quantitative method for discrimination of different stages of diabetic retinopathy (DR) using conjunctival microvasculature images. Fine structural analysis of conjunctival microvasculature images was performed by ordinary least square regression and Fisher linear discriminant analysis. Conjunctival images between groups of non-diabetic and diabetic subjects at different stages of DR were discriminated. The automated method’s discriminate rates were higher than those determined by human observers. The method allowed sensitive and rapid discrimination by assessment of conjunctival microvasculature images and can be potentially useful for DR screening and monitoring. PMID:27446692

  9. Automated fine structure image analysis method for discrimination of diabetic retinopathy stage using conjunctival microvasculature images.

    PubMed

    Khansari, Maziyar M; O'Neill, William; Penn, Richard; Chau, Felix; Blair, Norman P; Shahidi, Mahnaz

    2016-07-01

    The conjunctiva is a densely vascularized mucus membrane covering the sclera of the eye with a unique advantage of accessibility for direct visualization and non-invasive imaging. The purpose of this study is to apply an automated quantitative method for discrimination of different stages of diabetic retinopathy (DR) using conjunctival microvasculature images. Fine structural analysis of conjunctival microvasculature images was performed by ordinary least square regression and Fisher linear discriminant analysis. Conjunctival images between groups of non-diabetic and diabetic subjects at different stages of DR were discriminated. The automated method's discriminate rates were higher than those determined by human observers. The method allowed sensitive and rapid discrimination by assessment of conjunctival microvasculature images and can be potentially useful for DR screening and monitoring. PMID:27446692
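A minimal NumPy sketch of the Fisher linear discriminant step, on synthetic 2-D features standing in for the conjunctival microvasculature measurements (not the study's data or feature set):

```python
import numpy as np

# Sketch of Fisher linear discriminant analysis: project two classes onto
# w = Sw^{-1}(m1 - m0) and threshold at the midpoint of the class means.
rng = np.random.default_rng(1)
X0 = rng.normal([0, 0], 0.5, size=(200, 2))   # class 0 ("non-diabetic")
X1 = rng.normal([2, 1], 0.5, size=(200, 2))   # class 1 ("DR")

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0.T) + np.cov(X1.T)              # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)              # discriminant direction

thresh = w @ (m0 + m1) / 2                    # midpoint decision threshold
pred = np.concatenate([X0, X1]) @ w > thresh
truth = np.r_[np.zeros(200, bool), np.ones(200, bool)]
print((pred == truth).mean())                 # near-perfect on separable data
```

The study combines this discriminant step with ordinary least squares regression on fine structural features; only the LDA component is sketched here.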

  10. Multispectral image analysis for algal biomass quantification.

    PubMed

    Murphy, Thomas E; Macon, Keith; Berberoglu, Halil

    2013-01-01

    This article reports a novel multispectral image processing technique for rapid, noninvasive quantification of biomass concentration in attached and suspended algae cultures. Monitoring the biomass concentration is critical for efficient production of biofuel feedstocks, food supplements, and bioactive chemicals. Particularly, noninvasive and rapid detection techniques can significantly aid in providing delay-free process control feedback in large-scale cultivation platforms. In this technique, three-band spectral images of Anabaena variabilis cultures were acquired and separated into their red, green, and blue components. A correlation between the magnitude of the green component and the areal biomass concentration was generated. The correlation predicted the biomass concentrations of independently prepared attached and suspended cultures with errors of 7 and 15%, respectively, and the effect of varying lighting conditions and background color were investigated. This method can provide necessary feedback for dilution and harvesting strategies to maximize photosynthetic conversion efficiency in large-scale operation. PMID:23554374
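The calibration step can be sketched in a few lines; the numbers below are invented for illustration, not the paper's measurements, and the units are assumed:

```python
import numpy as np

# Sketch of the calibration idea: correlate the mean green-channel magnitude
# with known biomass concentration, then invert the fit to predict unknowns.
# All values are synthetic; denser cultures absorb more, so green intensity
# falls as biomass rises.
biomass = np.array([5.0, 10.0, 20.0, 40.0, 80.0])      # g m^-2 (assumed units)
green   = np.array([210.0, 190.0, 155.0, 95.0, 30.0])  # mean green intensity

slope, intercept = np.polyfit(green, biomass, 1)       # linear calibration

def predict(mean_green):
    return slope * mean_green + intercept

print(predict(125.0))   # biomass estimate for a new culture image
```

In practice the correlation would be built from the red/green/blue decomposition of calibrated three-band images, as the abstract describes.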

  11. Microtomographic imaging of multiphase flow in porous media: Validation of image analysis algorithms, and assessment of data representativeness and quality

    NASA Astrophysics Data System (ADS)

    Wildenschild, D.; Porter, M. L.

    2009-04-01

    Significant strides have been made in recent years in imaging fluid flow in porous media using x-ray computerized microtomography (CMT) with 1-20 micron resolution; however, difficulties remain in combining representative sample sizes with optimal image resolution and data quality; and in precise quantification of the variables of interest. Tomographic imaging was for many years focused on volume rendering and the more qualitative analyses necessary for rapid assessment of the state of a patient's health. In recent years, many highly quantitative CMT-based studies of fluid flow processes in porous media have been reported; however, many of these analyses are made difficult by the complexities in processing the resulting grey-scale data into reliable applicable information such as pore network structures, phase saturations, interfacial areas, and curvatures. Yet, relatively few rigorous tests of these analysis tools have been reported so far. The work presented here was designed to evaluate the effect of image resolution and quality, as well as the validity of segmentation and surface generation algorithms as they were applied to CMT images of (1) a high-precision glass bead pack and (2) gas-fluid configurations in a number of glass capillary tubes. Interfacial areas calculated with various algorithms were compared to actual interfacial geometries and we found very good agreement between actual and measured surface and interfacial areas. (The test images used are available for download at the website listed below). http://cbee.oregonstate.edu/research/multiphase_data/index.html

  12. PET/MRI in Oncological Imaging: State of the Art

    PubMed Central

    Bashir, Usman; Mallia, Andrew; Stirling, James; Joemon, John; MacKewn, Jane; Charles-Edwards, Geoff; Goh, Vicky; Cook, Gary J.

    2015-01-01

    Positron emission tomography (PET) combined with magnetic resonance imaging (MRI) is a hybrid technology which has recently gained interest as a potential cancer imaging tool. Compared with CT, MRI is advantageous due to its lack of ionizing radiation, superior soft-tissue contrast resolution, and wider range of acquisition sequences. Several studies have shown PET/MRI to be equivalent to PET/CT in most oncological applications, possibly superior in certain body parts, e.g., head and neck, pelvis, and in certain situations, e.g., cancer recurrence. This review will update the readers on recent advances in PET/MRI technology and review key literature, while highlighting the strengths and weaknesses of PET/MRI in cancer imaging. PMID:26854157

  13. PET/MRI in Oncological Imaging: State of the Art.

    PubMed

    Bashir, Usman; Mallia, Andrew; Stirling, James; Joemon, John; MacKewn, Jane; Charles-Edwards, Geoff; Goh, Vicky; Cook, Gary J

    2015-01-01

    Positron emission tomography (PET) combined with magnetic resonance imaging (MRI) is a hybrid technology which has recently gained interest as a potential cancer imaging tool. Compared with CT, MRI is advantageous due to its lack of ionizing radiation, superior soft-tissue contrast resolution, and wider range of acquisition sequences. Several studies have shown PET/MRI to be equivalent to PET/CT in most oncological applications, possibly superior in certain body parts, e.g., head and neck, pelvis, and in certain situations, e.g., cancer recurrence. This review will update the readers on recent advances in PET/MRI technology and review key literature, while highlighting the strengths and weaknesses of PET/MRI in cancer imaging. PMID:26854157

  14. Computerized microscopic image analysis of follicular lymphoma

    NASA Astrophysics Data System (ADS)

    Sertel, Olcay; Kong, Jun; Lozanski, Gerard; Catalyurek, Umit; Saltz, Joel H.; Gurcan, Metin N.

    2008-03-01

    Follicular Lymphoma (FL) is a cancer arising from the lymphatic system. Originating from follicle center B cells, FL is mainly comprised of centrocytes (usually middle-to-small sized cells) and centroblasts (relatively large malignant cells). According to the World Health Organization's recommendations, there are three histological grades of FL characterized by the number of centroblasts per high-power field (hpf) of area 0.159 mm2. In current practice, these cells are manually counted from ten representative fields of follicles after visual examination of hematoxylin and eosin (H&E) stained slides by pathologists. Several studies clearly demonstrate the poor reproducibility of this grading system, with very low inter-reader agreement. In this study, we are developing a computerized system to assist pathologists with this process. A hybrid approach that combines information from several slides with different stains has been developed. Thus, follicles are first detected from digitized microscopy images with immunohistochemistry (IHC) stains (i.e., CD10 and CD20). The average sensitivity and specificity of the follicle detection tested on 30 images at 2×, 4× and 8× magnifications are 85.5+/-9.8% and 92.5+/-4.0%, respectively. Since the centroblast detection is carried out in the H&E-stained slides, the follicles in the IHC-stained images are mapped to their H&E-stained counterparts. To evaluate the centroblast differentiation capabilities of the system, 11 hpf images were marked by an experienced pathologist who identified 41 centroblast cells and 53 non-centroblast cells. An unsupervised clustering process differentiates the centroblast cells from non-centroblast cells, resulting in 92.68% sensitivity and 90.57% specificity.
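The reported sensitivity and specificity follow directly from the stated cell counts; the split into 38 correctly identified centroblasts and 48 correctly identified non-centroblasts is inferred from the percentages, not stated in the abstract:

```python
# Sensitivity is the fraction of the 41 centroblasts correctly identified,
# specificity the fraction of the 53 non-centroblasts. The 38/48 breakdown
# below is inferred from the reported percentages.
tp, fn = 38, 3     # 41 centroblasts
tn, fp = 48, 5     # 53 non-centroblasts

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"{sensitivity:.2%}, {specificity:.2%}")   # 92.68%, 90.57%
```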

  15. Measurement and analysis of image sensors

    NASA Astrophysics Data System (ADS)

    Vitek, Stanislav

    2005-06-01

    Astronomical applications require high precision in sensing and processing image data. At present, large CCD sensors are used for various reasons. Before CCD sensors can be replaced with CMOS sensing devices, it is important to know the transfer characteristics of the CCD sensors in use. In special applications such as robotic telescopes (fully automatic, without human interaction), specially designed smart sensors, which integrate more functions and offer more features than CCDs, appear to be a good choice.

  16. Analysis of pregerminated barley using hyperspectral image analysis.

    PubMed

    Arngren, Morten; Hansen, Per Waaben; Eriksen, Birger; Larsen, Jan; Larsen, Rasmus

    2011-11-01

    Pregermination is one of many serious degradations to barley when used for malting. A pregerminated barley kernel can, under certain conditions, fail to regerminate and is then reduced to animal feed of lower quality. Identifying pregermination at an early stage is therefore essential in order to segregate the barley kernels into low or high quality. Current standard methods to quantify pregerminated barley include visual approaches, e.g. identifying the root sprout, or an embryo staining method, which uses a time-consuming procedure. We present an approach using a near-infrared (NIR) hyperspectral imaging system in a mathematical modeling framework to identify pregerminated barley at an early stage of approximately 12 h of pregermination. Our model attributes a single kernel's lack of germination only to pregermination and is unable to identify dormancy, kernel damage, etc. The analysis is based on more than 750 Rosalina barley kernels pregerminated for 8 different durations between 0 and 60 h based on the BRF method. Regerminating the kernels reveals a grouping of the pregerminated kernels into three categories: normal, delayed and limited germination. Our model employs a supervised classification framework based on a set of extracted features insensitive to the kernel orientation. An out-of-sample classification error of 32% (CI(95%): 29-35%) is obtained for single kernels when grouped into the three categories, and an error of 3% (CI(95%): 0-15%) is achieved at a bulk kernel level. The model provides class probabilities for each kernel, which can assist in achieving homogeneous germination profiles. This research can be developed further to establish an automated and faster procedure as an alternative to the standard procedures for pregerminated barley. PMID:21932866
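The reported single-kernel interval is consistent with a simple normal-approximation binomial confidence interval (our assumption; the authors' exact method is not stated), using the roughly 750-kernel sample size from the abstract:

```python
import math

# Normal-approximation 95% confidence interval for a binomial proportion.
# The formula choice is an assumption; the sample size comes from the
# abstract's "more than 750" kernels.
def ci95(p_hat, n):
    half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

lo, hi = ci95(0.32, 750)
print(f"{lo:.0%} - {hi:.0%}")   # close to the reported 29-35%
```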

  17. Performance analysis for geometrical attack on digital image watermarking

    NASA Astrophysics Data System (ADS)

    Jayanthi, VE.; Rajamani, V.; Karthikayen, P.

    2011-11-01

We present a technique for an irreversible watermarking approach robust to affine transform attacks in camera, biomedical and satellite images stored in the form of monochrome bitmap images. The watermarking approach is based on image normalisation, in which both watermark embedding and extraction are carried out with respect to an image normalised to meet a set of predefined moment criteria. The normalisation procedure is invariant to affine transform attacks. The resulting watermarking scheme is suitable for public watermarking applications, where the original image is not available for watermark extraction. Here, a direct-sequence code division multiple access approach is used to embed multibit text information in the DCT and DWT transform domains. The proposed watermarking schemes are robust against various types of attacks such as Gaussian noise, shearing, scaling, rotation, flipping, affine transform, signal processing and JPEG compression. Performance analysis results are measured using image processing metrics.
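As a hedged illustration of the direct-sequence spread-spectrum idea (a 1-D toy, not the authors' exact scheme), a single bit can be embedded in a vector of transform coefficients by adding a key-seeded pseudo-noise pattern, and recovered blind, i.e. without the original image, by correlation:

```python
import random

def embed_bit(coeffs, bit, key, alpha=0.05):
    # Add a key-seeded pseudo-random +/-1 pattern, sign-modulated by the
    # bit, to the (e.g. mid-band DCT) coefficients.
    rng = random.Random(key)
    pn = [rng.choice((-1.0, 1.0)) for _ in coeffs]
    sign = 1.0 if bit else -1.0
    return [c + sign * alpha * p for c, p in zip(coeffs, pn)]

def extract_bit(coeffs, key):
    # Correlate with the same key-seeded pattern; the sign of the
    # correlation recovers the embedded bit (blind extraction).
    rng = random.Random(key)
    pn = [rng.choice((-1.0, 1.0)) for _ in coeffs]
    corr = sum(c * p for c, p in zip(coeffs, pn))
    return corr > 0
```

The embedding strength `alpha` trades robustness against visibility; the names and parameters here are illustrative.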

  18. Multispectral/hyperspectral image enhancement for biological cell analysis

    SciTech Connect

    Nuffer, Lisa L.; Medvick, Patricia A.; Foote, Harlan P.; Solinsky, James C.

    2006-08-01

The paper presents new techniques for analyzing cell images taken with a microscope using multiple filters to form a datacube of spectral image planes. Because of the many neighboring spectral samples, much of the datacube appears as redundant, similar tissue. The analysis is based on the non-Gaussian statistics of the image data, allowing the data to be remapped into image components that are dissimilar, and hence to isolate subtle spatial object regions of interest in the tissues. This individual component image set can be recombined into a single RGB color image useful for real-time location of regions of interest. The algorithms are amenable to parallelization using Field Programmable Gate Array hardware processing.

  19. Method for measuring anterior chamber volume by image analysis

    NASA Astrophysics Data System (ADS)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

Anterior chamber volume (ACV) is very important for an oculist making a rational pathological diagnosis in patients with optic diseases such as glaucoma, yet it is always difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volume based on JPEG-formatted image files converted from medical images acquired with an anterior-chamber optical coherence tomographer (AC-OCT) and the corresponding image-processing software. The algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients are analyzed; the calculated anterior chamber volumes are verified to be in accord with clinical observation. This shows that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, some measures should be taken to simplify the manual preprocessing of the images.
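The volume computation itself reduces to summing segmented cross-sectional areas over image slices; a minimal sketch, assuming the segmentation into binary masks (which the paper performs in VC++) has already been done:

```python
def chamber_volume(masks, pixel_area_mm2, slice_thickness_mm):
    # masks: one 2-D list of 0/1 pixels per image slice, where 1 marks
    # a pixel segmented as anterior chamber.
    # Volume = sum over slices of (cross-section area * slice spacing).
    volume = 0.0
    for mask in masks:
        npix = sum(sum(row) for row in mask)
        volume += npix * pixel_area_mm2 * slice_thickness_mm
    return volume
```

Pixel area and slice spacing would come from the scanner calibration; the values used below are illustrative.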

  20. Seismoelectric beamforming imaging: a sensitivity analysis

    NASA Astrophysics Data System (ADS)

    El Khoury, P.; Revil, A.; Sava, P.

    2015-06-01

The electrical current density generated by the propagation of a seismic wave at an interface characterized by a drop in electrical, hydraulic or mechanical properties produces an electrical field of electrokinetic nature. This field can be measured remotely with a signal-to-noise ratio depending on the background noise and signal attenuation. The seismoelectric beamforming approach is an emerging imaging technique based on scanning a porous material using appropriately delayed seismic sources. The idea is to focus the hydromechanical energy on a regular spatial grid and measure the converted electric field remotely at each focus time. This method can be used to image heterogeneities with high definition and to provide structural information to classical geophysical methods. A numerical experiment is performed to investigate the resolution of the seismoelectric beamforming approach with respect to the main wavelength of the seismic waves. The 2-D model consists of a fictitious water-filled bucket in which a cylindrical sandstone core sample is set up vertically. The hydrophones/seismic sources are located on a 50-cm diameter circle in the bucket and the seismic energy is focused on the grid points in order to scan the medium and determine the geometry of the porous plug using the output electric potential image. We observe that the resolution of the method is given by a density of eight scanning points per wavelength. Additional numerical tests were also performed to assess the impact of an incorrect velocity model upon the seismoelectric map displaying the heterogeneities of the material.
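The per-source firing delays used to focus the seismic energy on a grid point can be sketched as follows; this is a 2-D homogeneous-velocity toy, not the authors' numerical code:

```python
import math

def focusing_delays(sources, focus, velocity):
    # Delay each source so that all wavefronts arrive at the focus
    # point simultaneously (delay-and-sum focusing): the farthest
    # source fires first (zero delay), the nearest fires last.
    times = [math.dist(s, focus) / velocity for s in sources]
    t_max = max(times)
    return [t_max - t for t in times]
```

Scanning the medium then means repeating this for every point of the regular spatial grid and recording the converted electric field at each focus time.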

  1. Estimating zero-strain states of very soft tissue under gravity loading using digital image correlation.

    PubMed

    Gao, Zhan; Desai, Jaydev P

    2010-04-01

This paper presents several experimental techniques and concepts in the process of measuring mechanical properties of very soft tissue in an ex vivo tensile test. Gravitational body force on very soft tissue causes pre-compression and results in a non-uniform initial deformation. The global digital image correlation technique is used to measure the full-field deformation behavior of liver tissue in uniaxial tension testing. A maximum stretching band is observed in the incremental strain field when a region of tissue passes from compression and enters a state of tension. A new method for estimating the zero-strain state is proposed: the zero-strain position is close to, but ahead of, the position of the maximum stretching band; in other words, the tangent of the nominal stress-stretch curve reaches a minimum at λ ≳ 1. This approach of identifying zero strain from the maximum incremental strain can be implemented in other types of image-based soft tissue analysis. The experimental results of 10 samples from seven porcine livers are presented and material parameters for the Ogden model fit are obtained. The finite element simulation based on the fitted model confirms the effect of gravity on the deformation of very soft tissue and validates our approach. PMID:20015676

  2. Technical guidance for the development of a solid state image sensor for human low vision image warping

    NASA Technical Reports Server (NTRS)

    Vanderspiegel, Jan

    1994-01-01

This report surveys different technologies and approaches to realize sensors for image warping. The goal is to study the feasibility, technical aspects, and limitations of making an electronic camera with special geometries which implements certain transformations for image warping. This work was inspired by the research done by Dr. Juday at NASA Johnson Space Center on image warping. The study has looked into different solid-state technologies to fabricate image sensors. It is found that among the available technologies, CMOS is preferred over CCD technology. CMOS provides more flexibility to design different functions into the sensor, is more widely available, and is a lower-cost solution. By using an architecture with row and column decoders, one has the added flexibility of addressing the pixels at random or of reading out only part of the image.

  3. [Interventional MR imaging: state of the art and technological advances].

    PubMed

    Viard, R; Rousseau, J

    2008-01-01

Due to its excellent soft tissue contrast and lack of ionizing radiation, MR imaging is well suited for interventional procedures. MRI is being increasingly used for guidance during percutaneous procedures or surgery. Technical advances in interventional MR imaging are reviewed in this paper. Ergonomic factors, with improved access to patients, as well as advances in informatics, electronics and robotics, largely explain this increasing role. Different elements are discussed, from improved patient access in the scanners to improved acquisition pulse sequences. Selected clinical applications and recent publications are presented to illustrate the current status of this technique. PMID:18288022

  4. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

An objective and standardized approach to assessing the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis toolkit to analyze CT-generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that, unlike standard principal component analysis (PCA), generates components with sparse loadings; it is used in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.
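The Hotelling T² comparison of metric vectors can be sketched with NumPy; the variable names and the toy data are illustrative, since the paper's MATLAB toolkit is not reproduced here:

```python
import numpy as np

def hotelling_t2(samples, mu0):
    # samples: (n, p) array, one row of p image-quality metrics per
    # repeated phantom scan; mu0: (p,) reference mean vector.
    # T^2 = n * (xbar - mu0)^T S^{-1} (xbar - mu0), with S the sample
    # covariance; large values flag a system deviating from reference.
    x = np.asarray(samples, dtype=float)
    n = x.shape[0]
    diff = x.mean(axis=0) - np.asarray(mu0, dtype=float)
    S = np.cov(x, rowvar=False)          # (p, p) sample covariance
    return float(n * diff @ np.linalg.solve(S, diff))
```

In practice the statistic would be compared against an F-distribution threshold to decide whether the tested scanner is faulty.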

  5. A guide to human in vivo microcirculatory flow image analysis.

    PubMed

    Massey, Michael J; Shapiro, Nathan I

    2016-01-01

    Various noninvasive microscopic camera technologies have been used to visualize the sublingual microcirculation in patients. We describe a comprehensive approach to bedside in vivo sublingual microcirculation video image capture and analysis techniques in the human clinical setting. We present a user perspective and guide suitable for clinical researchers and developers interested in the capture and analysis of sublingual microcirculatory flow videos. We review basic differences in the cameras, optics, light sources, operation, and digital image capture. We describe common techniques for image acquisition and discuss aspects of video data management, including data transfer, metadata, and database design and utilization to facilitate the image analysis pipeline. We outline image analysis techniques and reporting including video preprocessing and image quality evaluation. Finally, we propose a framework for future directions in the field of microcirculatory flow videomicroscopy acquisition and analysis. Although automated scoring systems have not been sufficiently robust for widespread clinical or research use to date, we discuss promising innovations that are driving new development. PMID:26861691

  6. New approach to gallbladder ultrasonic images analysis and lesions recognition.

    PubMed

    Bodzioch, Sławomir; Ogiela, Marek R

    2009-03-01

This paper presents a new approach to gallbladder ultrasonic image processing and analysis aimed at detecting disease symptoms on processed images. First, the paper presents a new method of extracting gallbladder contours from USG images. A major stage in this filtration is to segment and section off the areas occupied by the organ. In most cases this procedure is based on filtration, which plays a key role in the process of diagnosing pathological changes. Unfortunately, ultrasound images are among the most troublesome to analyze owing to the echogenic inconsistency of the structures under observation. This paper provides an inventive algorithm for the holistic extraction of gallbladder image contours, based on rank filtration as well as on the analysis of histogram sections of the examined organs. The second part concerns detecting lesion symptoms of the gallbladder. Automating a process of diagnosis always comes down to developing algorithms that analyze the object of such diagnosis and verify the occurrence of symptoms related to a given affliction. Usually the final stage is to make a diagnosis based on the detected symptoms. This last stage can be carried out through either dedicated expert systems or a more classic pattern analysis approach, such as using rules to determine the illness based on the detected symptoms. This paper discusses pattern analysis algorithms for gallbladder image interpretation aimed at classifying the most frequent illness symptoms of this organ. PMID:19124224

  7. Imaging the Impact of Impurities on Topological Surface States

    NASA Astrophysics Data System (ADS)

    Hoffman, Jennifer

    2013-03-01

    Harnessing the technological potential of the spin-polarized surface states on topological insulators requires a detailed understanding of the impact of nanoscale disorder on those surface states. We employ spectroscopic scanning tunneling microscopy (STM) in the presence of a magnetic field to visualize the impact of intrinsic impurities on topological surface states in Sb and Bi2Se3. We find a variety of impurities with different energy profiles that elastically scatter surface states through dispersive quasiparticle interference (QPI), that inelastically scatter surface states into the bulk, that locally destroy the extended surface state Landau level wavefunctions, or that form local resonant states interacting with the Dirac quasiparticles. By identifying impurities that strongly interact with and limit the mobility of the topological surface states, our impurity studies can directly advise the growth and development of future topological materials. Measurements carried out by Anjan Soumyanarayanan, Michael Yee, Yang He. Samples grown by Dillon Gardner & Young Lee; Zahir Salman & Amit Kanigel; Zhi Ren & Kouji Segawa & Yoichi Ando.

  8. Recurrence quantification analysis of chimera states

    NASA Astrophysics Data System (ADS)

    Santos, M. S.; Szezech, J. D.; Batista, A. M.; Caldas, I. L.; Viana, R. L.; Lopes, S. R.

    2015-10-01

    Chimera states, characterised by coexistence of coherence and incoherence in coupled dynamical systems, have been found in various physical systems, such as mechanical oscillator networks and Josephson-junction arrays. We used recurrence plots to provide graphical representations of recurrent patterns and identify chimera states. Moreover, we show that recurrence plots can be used as a diagnostic of chimera states and also to identify the chimera collapse.
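A recurrence plot is built from a thresholded distance matrix over the states of the system; a minimal sketch for a scalar time series (snapshots of coupled-oscillator phases would be handled the same way, with a suitable distance):

```python
def recurrence_matrix(series, eps):
    # R[i][j] = 1 when states i and j are closer than eps.
    # The diagonal and block structures of this binary matrix are what
    # recurrence quantification analysis measures; coherent and
    # incoherent regions of a chimera state produce distinct patterns.
    n = len(series)
    return [[1 if abs(series[i] - series[j]) < eps else 0
             for j in range(n)] for i in range(n)]
```

The threshold `eps` is a free parameter of the method and is typically chosen relative to the attractor size.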

  9. [State of the art: new developments in cardiac imaging].

    PubMed

    Albertí, José F Forteza; de Diego, José J Gómez; Delgado, Ricardo Vivancos; Riera, Jaume Candell; Torres, Río Aguilar

    2012-01-01

    Cardiac imaging continues to reveal new anatomical and functional insights into heart disease. In echocardiography, both transesophageal and transthoracic three-dimensional imaging have been fully developed and optimized, and the value of the techniques that have increased our understanding of cardiac mechanics and ventricular function is well established. At the same time, the healthcare industry has released new devices onto the market which, although they are easier to use, have limitations that restrict their use for routine assessment. Tomography's diagnostic and prognostic value in coronary artery disease continues to increase while radiation exposure becomes progressively lower. With cardiac magnetic resonance imaging, myocardial injury and recovery in ischemic heart disease and following acute coronary syndrome can be monitored in exquisite detail. The emergence of new combined tomographic and gamma camera techniques, exclusively developed for nuclear cardiology, have improved the quality of investigations and reduced radiation exposure. The hybrid or fusion images produced by combining different techniques, such as nuclear cardiology techniques and tomography, promise an exciting future. PMID:22269837

  10. Solid state imagers and their applications; Proceedings of the Meeting, Cannes, France, November 26, 27, 1985

    NASA Technical Reports Server (NTRS)

    Declerck, Gilbert J. (Editor)

    1986-01-01

    Topics treated include the use of semiconductor imagers in high energy particle physics, an X-ray image sensor based on an optical TDI-CCD imager, and an electron-sensitive CCD readout array for a circular-scan streak tube. Papers are presented on the pan-imager, high resolution linear arrays, the reduction of reflection losses in solid-state image sensors, a high resolution CCD imager module with swing operation, large area CCD image sensors for scientific applications, and new readout techniques for frame transfer CCDs. Consideration is given to advanced optoelectronical sensors for autonomous rendezvous/docking and proximity operations in space, the testing and characterization of CCDs for the Rosat star sensors, an advanced radial camera for the Hubble Space Telescope, and scanning or staring infrared imagers.

  11. Image Analysis to Estimate Mulch Residual on Soil

    NASA Astrophysics Data System (ADS)

    Moreno Valencia, Carmen; Moreno Valencia, Marta; Tarquis, Ana M.

    2014-05-01

Organic farmers are currently allowed to use conventional polyethylene mulch, provided it is removed from the field at the end of the growing or harvest season. To some, such use represents a contradiction between the resource conservation goals of sustainable, organic agriculture and the waste generated by polyethylene mulch. One possible solution is to use biodegradable plastic or paper as mulch, which could present an alternative to polyethylene, reducing non-recyclable waste and decreasing the associated environmental pollution. Determining the mulch residue on the ground is one of the basic requisites for estimating the potential of each material to degrade. Determining the extent of mulch residue in the field is an exhausting job, and there is no distinct and accurate criterion for its measurement. Several indices exist for estimating residue cover, but most are not only laborious and time-consuming but also affected by human error. The human visual system is fast and accurate enough for this task, but the magnitude must be stated numerically to be reported and to be used for comparison between several mulches or between mulches at different times. Translating the extent perceived by the visual system into numbers is possible by simulating human vision, and a machine vision system incorporating image processing can perform these tasks. This study aimed to evaluate the residue of mulch materials over a crop campaign in a processing tomato (Solanum lycopersicon L.) crop in Central Spain through image analysis. The mulch materials compared were standard black polyethylene (PE), two biodegradable plastic mulches (BD1 and BD2), and one paper (PP1). While the initial appearance of most of the mulches resembled black PE, at the end of the experiment the materials appeared somewhat discoloured and were impregnated with soil and/or crop residue, making them very difficult to remove completely. A digital camera
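The image-analysis estimate described above amounts to counting mulch-classified pixels in a field photograph; a toy greyscale-threshold sketch, where the threshold value and the assumption that mulch is darker than bare soil are hypothetical:

```python
def mulch_cover_fraction(image, threshold):
    # image: 2-D list of greyscale values from a plot photograph.
    # Pixels darker than the threshold are classified as mulch residue;
    # the residue cover is the fraction of such pixels.
    total = 0
    mulch = 0
    for row in image:
        for px in row:
            total += 1
            if px < threshold:
                mulch += 1
    return mulch / total
```

Comparing this fraction across sampling dates gives the degradation curve of each mulch material.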

  12. Skin lesions image analysis utilizing smartphones and cloud platforms.

    PubMed

    Doukas, Charalampos; Stagkopoulos, Paris; Maglogiannis, Ilias

    2015-01-01

This chapter presents the state of the art on mobile teledermoscopy applications, utilizing smartphones able to store digital images of skin areas depicting regions of interest (lesions) and perform self-assessment or communicate the captured images to expert physicians. Mobile teledermoscopy systems consist of a mobile application that can acquire and identify moles in skin images and classify them according to their severity, and a Cloud infrastructure providing computational and storage resources. The chapter presents some indicative mobile applications for skin lesion assessment and describes a proposed system developed by our team that can perform skin lesion evaluation both on the phone and on the Cloud, depending on network availability. PMID:25626556

  13. Towards a Quantitative OCT Image Analysis

    PubMed Central

    Garcia Garrido, Marina; Beck, Susanne C.; Mühlfriedel, Regine; Julien, Sylvie; Schraermeyer, Ulrich; Seeliger, Mathias W.

    2014-01-01

Background Optical coherence tomography (OCT) is an invaluable diagnostic tool for the detection and follow-up of retinal pathology in patients and experimental disease models. However, as morphological structures and layering in health as well as their alterations in disease are complex, segmentation procedures have not yet reached a satisfactory level of performance. Therefore, raw images and qualitative data are commonly used in clinical and scientific reports. Here, we assess the value of OCT reflectivity profiles as a basis for a quantitative characterization of the retinal status in a cross-species comparative study. Methods Spectral-Domain Optical Coherence Tomography (OCT), confocal Scanning-Laser Ophthalmoscopy (SLO), and Fluorescein Angiography (FA) were performed in mice (Mus musculus), gerbils (Gerbillus perpadillus), and cynomolgus monkeys (Macaca fascicularis) using the Heidelberg Engineering Spectralis system, and additional SLOs and FAs were obtained with the HRA I (same manufacturer). Reflectivity profiles were extracted from 8-bit greyscale OCT images using the ImageJ software package (http://rsb.info.nih.gov/ij/). Results Reflectivity profiles obtained from OCT scans of all three animal species correlated well with ex vivo histomorphometric data. Each of the retinal layers showed a typical pattern that varied in relative size and degree of reflectivity across species. In general, plexiform layers showed a higher level of reflectivity than nuclear layers. A comparison of reflectivity profiles from specialized retinal regions (e.g. visual streak in gerbils, fovea in non-human primates) with respective regions of human retina revealed multiple similarities. In a model of Retinitis Pigmentosa (RP), the value of reflectivity profiles for the follow-up of therapeutic interventions was demonstrated. 
Conclusions OCT reflectivity profiles provide a detailed, quantitative description of retinal layers and structures including specialized retinal regions
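Extracting a reflectivity profile from an 8-bit greyscale OCT B-scan reduces to averaging intensities across laterally adjacent A-scans at each depth, much as the authors do in ImageJ; a minimal sketch:

```python
def reflectivity_profile(image):
    # image: 2-D list of 8-bit greyscale values, rows ordered by depth,
    # columns corresponding to laterally adjacent A-scans within an ROI.
    # Averaging each row yields the mean reflectivity at that depth.
    return [sum(row) / len(row) for row in image]
```

Plotting the returned list against depth reproduces the layer-by-layer reflectivity pattern discussed in the abstract.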

  14. Nonclassicality thresholds for multiqubit states: Numerical analysis

    SciTech Connect

    Gruca, Jacek; Zukowski, Marek; Laskowski, Wieslaw; Kiesel, Nikolai; Wieczorek, Witlef; Weinfurter, Harald; Schmid, Christian

    2010-07-15

    States that strongly violate Bell's inequalities are required in many quantum-informational protocols as, for example, in cryptography, secret sharing, and the reduction of communication complexity. We investigate families of such states with a numerical method which allows us to reveal nonclassicality even without direct knowledge of Bell's inequalities for the given problem. An extensive set of numerical results is presented and discussed.

  15. Imaging for dismantlement verification: information management and analysis algorithms

    SciTech Connect

    Seifert, Allen; Miller, Erin A.; Myjak, Mitchell J.; Robinson, Sean M.; Jarman, Kenneth D.; Misner, Alex C.; Pitts, W. Karl; Woodring, Mitchell L.

    2010-09-01

    The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes. An image will almost certainly contain highly sensitive information, and storing a comparison image will almost certainly violate a cardinal principle of information barriers: that no sensitive information be stored in the system. To overcome this problem, some features of the image might be reduced to a few parameters suitable for definition as an attribute. However, this process must be performed with care. Computing the perimeter, area, and intensity of an object, for example, might reveal sensitive information relating to shape, size, and material composition. This paper presents three analysis algorithms that reduce full image information to non-sensitive feature information. Ultimately, the algorithms are intended to provide only a yes/no response verifying the presence of features in the image. We evaluate the algorithms on both their technical performance in image analysis, and their application with and without an explicitly constructed information barrier. The underlying images can be highly detailed, since they are dynamically generated behind the information barrier. We consider the use of active (conventional) radiography alone and in tandem with passive (auto) radiography.

  16. Image analysis tools and emerging algorithms for expression proteomics

    PubMed Central

    English, Jane A.; Lisacek, Frederique; Morris, Jeffrey S.; Yang, Guang-Zhong; Dunn, Michael J.

    2012-01-01

    Since their origins in academic endeavours in the 1970s, computational analysis tools have matured into a number of established commercial packages that underpin research in expression proteomics. In this paper we describe the image analysis pipeline for the established 2-D Gel Electrophoresis (2-DE) technique of protein separation, and by first covering signal analysis for Mass Spectrometry (MS), we also explain the current image analysis workflow for the emerging high-throughput ‘shotgun’ proteomics platform of Liquid Chromatography coupled to MS (LC/MS). The bioinformatics challenges for both methods are illustrated and compared, whilst existing commercial and academic packages and their workflows are described from both a user’s and a technical perspective. Attention is given to the importance of sound statistical treatment of the resultant quantifications in the search for differential expression. Despite wide availability of proteomics software, a number of challenges have yet to be overcome regarding algorithm accuracy, objectivity and automation, generally due to deterministic spot-centric approaches that discard information early in the pipeline, propagating errors. We review recent advances in signal and image analysis algorithms in 2-DE, MS, LC/MS and Imaging MS. Particular attention is given to wavelet techniques, automated image-based alignment and differential analysis in 2-DE, Bayesian peak mixture models and functional mixed modelling in MS, and group-wise consensus alignment methods for LC/MS. PMID:21046614

  17. A TSVD Analysis of Microwave Inverse Scattering for Breast Imaging

    PubMed Central

    Shea, Jacob D.; Van Veen, Barry D.; Hagness, Susan C.

    2013-01-01

    A variety of methods have been applied to the inverse scattering problem for breast imaging at microwave frequencies. While many techniques have been leveraged toward a microwave imaging solution, they are all fundamentally dependent on the quality of the scattering data. Evaluating and optimizing the information contained in the data are, therefore, instrumental in understanding and achieving optimal performance from any particular imaging method. In this paper, a method of analysis is employed for the evaluation of the information contained in simulated scattering data from a known dielectric profile. The method estimates optimal imaging performance by mapping the data through the inverse of the scattering system. The inverse is computed by truncated singular-value decomposition of a system of scattering equations. The equations are made linear by use of the exact total fields in the imaging volume, which are available in the computational domain. The analysis is applied to anatomically realistic numerical breast phantoms. The utility of the method is demonstrated for a given imaging system through the analysis of various considerations in system design and problem formulation. The method offers an avenue for decoupling the problem of data selection from the problem of image formation from that data. PMID:22113770
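The truncated singular-value decomposition at the core of this analysis can be sketched with NumPy; the linearized scattering matrix `A` and data vector `b` are placeholders for the paper's system of scattering equations:

```python
import numpy as np

def tsvd_solve(A, b, k):
    # Invert the linear system A x = b keeping only the k largest
    # singular values; truncating the small ones suppresses the
    # noise-dominated modes of the scattering operator.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]              # singular values come sorted descending
    return Vt.T @ (s_inv * (U.T @ b))
```

Sweeping the truncation level `k` is one way to map how much image information the data actually supports, which is the kind of analysis the paper performs.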

  18. Ballistics projectile image analysis for firearm identification.

    PubMed

    Li, Dongguang

    2006-10-01

This paper is based upon the observation that, when a bullet is fired, it creates characteristic markings on the cartridge case and projectile. From these markings, over 30 different features can be distinguished, which, in combination, produce a "fingerprint" for a firearm. By analyzing features within such a set of firearm fingerprints, it will be possible to identify not only the type and model of a firearm, but also each and every individual weapon, just as effectively as human fingerprint identification. A new analytic system based on the fast Fourier transform for identifying projectile specimens by the line-scan imaging technique is proposed in this paper. This paper develops optical, photonic, and mechanical techniques to map the topography of the surfaces of forensic projectiles for the purpose of identification. Experiments discussed in this paper are performed on images acquired from 16 different weapons. Experimental results show that the proposed system can be used for firearm identification efficiently and precisely through digitizing and analyzing the fired projectile specimens. PMID:17022254
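A hedged sketch of FFT-based comparison of line-scan intensity profiles: the real system maps surface topography optically, while here a 1-D profile and a small spectral feature vector stand in, with illustrative names and sizes:

```python
import numpy as np

def spectral_signature(line_scan, n_features=8):
    # Magnitudes of the lowest non-DC FFT bins of a line-scan intensity
    # profile; periodic tool marks left by the barrel show up as peaks
    # in these bins.
    spec = np.abs(np.fft.rfft(line_scan - np.mean(line_scan)))
    return spec[1:1 + n_features]

def signature_distance(sig_a, sig_b):
    # Euclidean distance between signatures; a small distance suggests
    # the same barrel produced both specimens.
    return float(np.linalg.norm(np.asarray(sig_a) - np.asarray(sig_b)))
```

Matching a questioned projectile against a database would then amount to ranking reference signatures by this distance.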

  19. The Spectral Image Processing System (SIPS) - Interactive visualization and analysis of imaging spectrometer data

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.

    1993-01-01

    The Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, has developed a prototype interactive software system called the Spectral Image Processing System (SIPS) using IDL (the Interactive Data Language) on UNIX-based workstations. SIPS is designed to take advantage of the combination of high spectral resolution and spatial data presentation unique to imaging spectrometers. It streamlines analysis of these data by allowing scientists to rapidly interact with entire datasets. SIPS provides visualization tools for rapid exploratory analysis and numerical tools for quantitative modeling. The user interface is X-Windows-based, user friendly, and provides 'point and click' operation. SIPS is being used for multidisciplinary research concentrating on use of physically based analysis methods to enhance scientific results from imaging spectrometer data. The objective of this continuing effort is to develop operational techniques for quantitative analysis of imaging spectrometer data and to make them available to the scientific community prior to the launch of imaging spectrometer satellite systems such as the Earth Observing System (EOS) High Resolution Imaging Spectrometer (HIRIS).

  20. Subcellular chemical and morphological analysis by stimulated Raman scattering microscopy and image analysis techniques

    PubMed Central

    D’Arco, Annalisa; Brancati, Nadia; Ferrara, Maria Antonietta; Indolfi, Maurizio; Frucci, Maria; Sirleto, Luigi

    2016-01-01

    The visualization of heterogeneous morphology, and the segmentation and quantification of image features, are crucial points for nonlinear optics microscopy applications, spanning from imaging of living cells or tissues to biomedical diagnostics. In this paper, a methodology combining stimulated Raman scattering microscopy and image analysis techniques is presented. The basic idea is to join the vibrational contrast of stimulated Raman scattering with the strength of image analysis techniques in order to delineate subcellular morphology with chemical specificity. Validation tests on label-free imaging of polystyrene beads and of adipocyte cells are reported and discussed. PMID:27231626

  1. Subcellular chemical and morphological analysis by stimulated Raman scattering microscopy and image analysis techniques.

    PubMed

    D'Arco, Annalisa; Brancati, Nadia; Ferrara, Maria Antonietta; Indolfi, Maurizio; Frucci, Maria; Sirleto, Luigi

    2016-05-01

    The visualization of heterogeneous morphology, and the segmentation and quantification of image features, are crucial points for nonlinear optics microscopy applications, spanning from imaging of living cells or tissues to biomedical diagnostics. In this paper, a methodology combining stimulated Raman scattering microscopy and image analysis techniques is presented. The basic idea is to join the vibrational contrast of stimulated Raman scattering with the strength of image analysis techniques in order to delineate subcellular morphology with chemical specificity. Validation tests on label-free imaging of polystyrene beads and of adipocyte cells are reported and discussed. PMID:27231626

  2. A parallel solution for high resolution histological image analysis.

    PubMed

    Bueno, G; González, R; Déniz, O; García-Rojo, M; González-García, J; Fernández-Carrobles, M M; Vállez, N; Salido, J

    2012-10-01

    This paper describes a general methodology for developing parallel image processing algorithms based on message passing for high resolution images (on the order of several gigabytes). These algorithms have been applied to histological images and must be executed on massively parallel processing architectures. Advances in new technologies for whole-slide digitization in pathology have been combined with developments in biomedical informatics. However, the efficient use of these digital slide systems is still a challenge, and the image processing applied to these slides remains limited both in the amount of data processed and in the processing methods used. The work presented here focuses on the need to design and develop parallel image processing tools capable of obtaining and analyzing the entire gamut of information included in digital slides. Tools have been developed to assist pathologists in image analysis and diagnosis, covering low- and high-level image processing methods applied to histological images. Code portability, reusability and scalability have been tested on the following parallel computing architectures: distributed memory with massively parallel processors and two networks, InfiniBand and Myrinet, composed of 17 and 1024 nodes, respectively. The proposed parallel framework is a flexible, high-performance solution, and it shows that the efficient processing of digital microscopic images is possible and may offer important benefits to pathology laboratories. PMID:22522064
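A minimal sketch of the tile-parallel idea, using Python threads as a shared-memory stand-in for the paper's message-passing (MPI) design; the per-tile operation, threshold, and strip layout are assumed for illustration:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_tile(tile):
    """Placeholder per-tile analysis: count bright pixels (threshold assumed)."""
    return int((tile > 128).sum())

def parallel_bright_count(image, n_strips=4, workers=4):
    """Split a large image into horizontal strips, process each strip
    concurrently, then merge the partial results -- the scatter/gather
    pattern a message-passing implementation would distribute across nodes."""
    strips = np.array_split(image, n_strips, axis=0)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partial = list(pool.map(process_tile, strips))
    return sum(partial)

# Synthetic "slide": each row holds intensities 0..255
image = np.arange(256 * 256).reshape(256, 256) % 256
parallel_total = parallel_bright_count(image)
serial_total = process_tile(image)          # same answer, single pass
```

The merge step is a plain sum here; in the real framework partial results would travel over InfiniBand or Myrinet between nodes rather than between threads.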

  3. SNR analysis of 3D magnetic resonance tomosynthesis (MRT) imaging

    NASA Astrophysics Data System (ADS)

    Kim, Min-Oh; Kim, Dong-Hyun

    2012-03-01

    In conventional 3D Fourier transform (3DFT) MR imaging, the signal-to-noise ratio (SNR) is governed by the well-known relationship of being proportional to the voxel size and to the square root of the imaging time. Here, we introduce an alternative 3D imaging approach, termed MRT (Magnetic Resonance Tomosynthesis), which can generate a set of tomographic MR images similar to the multiple 2D projection images of x-ray tomosynthesis. A multiple-oblique-view (MOV) pulse sequence is designed to acquire the tomography-like images used in the tomosynthesis process, and an iterative back-projection (IBP) reconstruction method is used to reconstruct the 3D images. SNR analysis shows that the resolution-SNR tradeoff does not follow the typical 3DFT MR imaging relationship: the proposed method provides a higher SNR than the conventional 3D imaging method at the cost of a partial loss of slice-direction resolution. It is expected that this method can be useful for extremely low SNR cases.
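The conventional relationship the abstract contrasts against can be made concrete in a few lines (the proportionality constant is arbitrary; this is the textbook 3DFT rule, not the MRT analysis):

```python
def snr_3dft(voxel_volume_mm3, scan_time_s, k=1.0):
    """Relative SNR under the conventional 3DFT rule:
    SNR is proportional to voxel volume times sqrt(total imaging time)."""
    return k * voxel_volume_mm3 * scan_time_s ** 0.5

base = snr_3dft(1.0, 100.0)     # 1 mm^3 voxel, 100 s scan
halved = snr_3dft(0.5, 100.0)   # halving voxel volume halves SNR
longer = snr_3dft(1.0, 400.0)   # 4x scan time only doubles SNR
```

The point of the MRT approach is precisely that its SNR is not tied to this voxel-volume/scan-time product in the same way.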

  4. Advanced Imaging in Femoroacetabular Impingement: Current State and Future Prospects

    PubMed Central

    Bittersohl, Bernd; Hosalkar, Harish S.; Hesper, Tobias; Tiderius, Carl Johan; Zilkens, Christoph; Krauspe, Rüdiger

    2015-01-01

    Symptomatic femoroacetabular impingement (FAI) is now a known precursor of early osteoarthritis (OA) of the hip. In terms of clinical intervention, the decision between joint preservation and joint replacement hinges on the severity of articular cartilage degeneration. The exact threshold during the course of disease progression when the cartilage damage is irreparable remains elusive. The intention behind radiographic imaging is to accurately identify the morphology of osseous structural abnormalities and to accurately characterize the chondrolabral damage as much as possible. However, both plain radiographs and computed tomography (CT) are insensitive for articular cartilage anatomy and pathology. Advanced magnetic resonance imaging (MRI) techniques include magnetic resonance arthrography and biochemically sensitive techniques of delayed gadolinium-enhanced MRI of cartilage (dGEMRIC), T1rho (T1ρ), T2/T2* mapping, and several others. The diagnostic performance of these techniques to evaluate cartilage degeneration could improve the ability to predict an individual patient-specific outcome with non-surgical and surgical care. This review discusses the facts and current applications of biochemical MRI for hip joint cartilage assessment covering the roles of dGEMRIC, T2/T2*, and T1ρ mapping. The basics of each technique and their specific role in FAI assessment are outlined. Current limitations and potential pitfalls as well as future directions of biochemical imaging are also outlined. PMID:26258129

  5. Advanced Imaging in Femoroacetabular Impingement: Current State and Future Prospects.

    PubMed

    Bittersohl, Bernd; Hosalkar, Harish S; Hesper, Tobias; Tiderius, Carl Johan; Zilkens, Christoph; Krauspe, Rüdiger

    2015-01-01

    Symptomatic femoroacetabular impingement (FAI) is now a known precursor of early osteoarthritis (OA) of the hip. In terms of clinical intervention, the decision between joint preservation and joint replacement hinges on the severity of articular cartilage degeneration. The exact threshold during the course of disease progression when the cartilage damage is irreparable remains elusive. The intention behind radiographic imaging is to accurately identify the morphology of osseous structural abnormalities and to accurately characterize the chondrolabral damage as much as possible. However, both plain radiographs and computed tomography (CT) are insensitive for articular cartilage anatomy and pathology. Advanced magnetic resonance imaging (MRI) techniques include magnetic resonance arthrography and biochemically sensitive techniques of delayed gadolinium-enhanced MRI of cartilage (dGEMRIC), T1rho (T1ρ), T2/T2* mapping, and several others. The diagnostic performance of these techniques to evaluate cartilage degeneration could improve the ability to predict an individual patient-specific outcome with non-surgical and surgical care. This review discusses the facts and current applications of biochemical MRI for hip joint cartilage assessment covering the roles of dGEMRIC, T2/T2*, and T1ρ mapping. The basics of each technique and their specific role in FAI assessment are outlined. Current limitations and potential pitfalls as well as future directions of biochemical imaging are also outlined. PMID:26258129

  6. Vibration analysis using digital image processing for in vitro imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Zhonghua; Wang, Shaohong; Gonzalez, Carlos

    2011-09-01

    A non-invasive self-measurement method for analyzing vibrations within a biological imaging system is presented. This method uses the system's imaging sensor, digital image processing and a custom dot-matrix calibration target for in-situ vibration measurements. By taking a series of images of the target within a fixed field of view and time interval, and averaging the dot profiles in each image, the in-plane coherent spacing of each dot can be identified in both the horizontal and vertical directions. The incoherent movement in the pattern spacing caused by vibration is then resolved from each image. To account for the CMOS imager's rolling shutter, vibrations are measured with two different sampling times, intra-frame and inter-frame; the former provides the frame time and the latter the image sampling time. Power spectral density (PSD) analysis is then performed using both measurements to provide the incoherent system displacements and identify potential vibration sources. The PSD plots provide descriptive statistics of the displacement distribution due to random vibration content. This approach has been successful in identifying vibration sources and measuring vibration geometric moments in imaging systems.
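A minimal sketch of the PSD step, applying a plain periodogram to a synthetic inter-frame displacement series; the sampling rate, vibration frequency, and noise level are assumptions of this sketch:

```python
import numpy as np

def psd(displacements, fs):
    """One-sided power spectral density of a displacement series
    sampled at rate fs (simple periodogram; illustrative only)."""
    n = len(displacements)
    x = displacements - np.mean(displacements)
    power = np.abs(np.fft.rfft(x)) ** 2 / (fs * n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, power

# Synthetic inter-frame displacements: a 60 Hz vibration plus sensor noise
fs = 500.0                                  # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
x = 0.3 * np.sin(2 * np.pi * 60.0 * t) + 0.02 * rng.standard_normal(t.size)

freqs, p = psd(x, fs)
peak_hz = freqs[np.argmax(p)]               # dominant vibration frequency
```

The location of the spectral peak identifies the vibration source's frequency; integrating the PSD over bands would give the displacement moments the abstract mentions.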

  7. Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging

    PubMed Central

    Dong, Siyuan; Shiradkar, Radhika; Nanda, Pariksheet; Zheng, Guoan

    2014-01-01

    Information multiplexing is important for biomedical imaging and chemical sensing. In this paper, we report a microscopy imaging technique, termed state-multiplexed Fourier ptychography (FP), for information multiplexing and coherent-state decomposition. As in a typical Fourier ptychographic setting, we use an array of light sources to illuminate the sample from different incident angles and acquire corresponding low-resolution images using a monochromatic camera. In the reported technique, however, multiple light sources are lit simultaneously for information multiplexing, and the acquired images thus represent incoherent summations of the sample transmission profiles corresponding to different coherent states. We show that, by using the state-multiplexed FP recovery routine, we can decompose the incoherent mixture of the FP acquisitions to recover a high-resolution sample image. We also show that color-multiplexed imaging can be performed by simultaneously turning on R/G/B LEDs for data acquisition. The reported technique may provide a solution for handling the partially coherent nature of light sources used in Fourier ptychographic imaging platforms. It can also be used to replace spectral filters, gratings, or other optical components for spectral multiplexing and demultiplexing. With the availability of cost-effective broadband LEDs, the reported technique may open up exciting opportunities for computational multispectral imaging. PMID:24940538

  8. Microscope-on-Chip Using Micro-Channel and Solid State Image Sensors

    NASA Technical Reports Server (NTRS)

    Wang, Yu

    2000-01-01

    Recently, the Jet Propulsion Laboratory has invented and developed a miniature optical microscope: a microscope-on-chip using a micro-channel and solid state image sensors. It is a lightweight, low-power, high-speed instrument: it has no imaging lens, needs no focus adjustment, and its total mass is less than 100 g. A prototype has been built and demonstrated at JPL.

  9. Influence of Appearance-Related TV Commercials on Body Image State

    ERIC Educational Resources Information Center

    Legenbauer, Tanja; Ruhl, Ilka; Vocks, Silja

    2008-01-01

    This study investigates the influence of media exposure on body image state in eating-disordered (ED) patients. The attitudinal and perceptual components of body image are assessed, as well as any associations with dysfunctional cognitions and behavioral consequences. Twenty-five ED patients and 25 non-ED controls (ND) viewed commercials either…

  10. Towards large-scale histopathological image analysis: hashing-based image retrieval.

    PubMed

    Zhang, Xiaofan; Liu, Wei; Dundar, Murat; Badve, Sunil; Zhang, Shaoting

    2015-02-01

    Automatic analysis of histopathological images has been widely utilized leveraging computational image-processing methods and modern machine learning techniques. Both computer-aided diagnosis (CAD) and content-based image-retrieval (CBIR) systems have been successfully developed for diagnosis, disease detection, and decision support in this area. Recently, with the ever-increasing amount of annotated medical data, large-scale and data-driven methods have emerged to offer a promise of bridging the semantic gap between images and diagnostic information. In this paper, we focus on developing scalable image-retrieval techniques to cope intelligently with massive histopathological images. Specifically, we present a supervised kernel hashing technique which leverages a small amount of supervised information in learning to compress a 10 000-dimensional image feature vector into only tens of binary bits with the informative signatures preserved. These binary codes are then indexed into a hash table that enables real-time retrieval of images in a large database. Critically, the supervised information is employed to bridge the semantic gap between low-level image features and high-level diagnostic information. We build a scalable image-retrieval framework based on the supervised hashing technique and validate its performance on several thousand histopathological images acquired from breast microscopic tissues. Extensive evaluations are carried out in terms of image classification (i.e., benign versus actionable categorization) and retrieval tests. Our framework achieves about 88.1% classification accuracy as well as promising time efficiency. For example, the framework can execute around 800 queries in only 0.01 s, comparing favorably with other commonly used dimensionality reduction and feature selection methods. PMID:25314696
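The indexing-and-retrieval side of the pipeline can be sketched as follows. Note that random-hyperplane hashing (LSH) is used here only as a simple stand-in for the paper's supervised kernel hashing, and the dimensions, bit count, and database size are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the learned hash: random hyperplanes, NOT the paper's
# supervised kernel hashing -- this shows only the binary-code indexing
# and Hamming-distance retrieval machinery.
dim, n_bits = 64, 32
planes = rng.standard_normal((n_bits, dim))

def binarize(features):
    """Compress a feature vector into an n_bits binary code."""
    return tuple(int(b) for b in (planes @ features > 0))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Index a small "database" of image feature vectors by their codes
database = rng.standard_normal((200, dim))
codes = [binarize(v) for v in database]

# Query with a slightly perturbed copy of item 17; the smallest
# Hamming distance should point back to item 17.
query = database[17] + 0.005 * rng.standard_normal(dim)
q_code = binarize(query)
best = min(range(len(codes)), key=lambda i: hamming(codes[i], q_code))
```

Because comparisons are bitwise, retrieval over a large table is extremely fast, which is what enables the sub-second query throughput reported in the abstract.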

  11. Rapid enumeration of viable bacteria by image analysis

    NASA Technical Reports Server (NTRS)

    Singh, A.; Pyle, B. H.; McFeters, G. A.

    1989-01-01

    A direct viable counting method for enumerating viable bacteria was modified and made compatible with image analysis. A comparison was made between viable cell counts determined by the spread plate method and direct viable counts obtained using epifluorescence microscopy either manually or by automatic image analysis. Cultures of Escherichia coli, Salmonella typhimurium, Vibrio cholerae, Yersinia enterocolitica and Pseudomonas aeruginosa were incubated at 35 degrees C in a dilute nutrient medium containing nalidixic acid. Filtered samples were stained for epifluorescence microscopy and analysed manually as well as by image analysis. Cells enlarged after incubation were considered viable. The viable cell counts determined using image analysis were higher than those obtained by either the direct manual count of viable cells or spread plate methods. The volume of sample filtered or the number of cells in the original sample did not influence the efficiency of the method. However, the optimal concentration of nalidixic acid (2.5-20 micrograms ml-1) and length of incubation (4-8 h) varied with the culture tested. The results of this study showed that under optimal conditions, the modification of the direct viable count method in combination with image analysis microscopy provided an efficient and quantitative technique for counting viable bacteria in a short time.

  12. Automated Analysis of Dynamic Ca2+ Signals in Image Sequences

    PubMed Central

    Francis, Michael; Waldrup, Josh; Qian, Xun; Taylor, Mark S.

    2014-01-01

    Intracellular Ca2+ signals are commonly studied with fluorescent Ca2+ indicator dyes and microscopy techniques. However, quantitative analysis of Ca2+ imaging data is time consuming and subject to bias. Automated signal analysis algorithms based on region of interest (ROI) detection have been implemented for one-dimensional line scan measurements, but there is no current algorithm which integrates optimized identification and analysis of ROIs in two-dimensional image sequences. Here an algorithm for rapid acquisition and analysis of ROIs in image sequences is described. It utilizes ellipses fit to noise filtered signals in order to determine optimal ROI placement, and computes Ca2+ signal parameters of amplitude, duration and spatial spread. This algorithm was implemented as a freely available plugin for ImageJ (NIH) software. Together with analysis scripts written for the open source statistical processing software R, this approach provides a high-capacity pipeline for performing quick statistical analysis of experimental output. The authors suggest that use of this analysis protocol will lead to a more complete and unbiased characterization of physiologic Ca2+ signaling. PMID:24962784
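The per-ROI signal parameters the algorithm computes (amplitude and duration) can be illustrated on a synthetic single-transient trace; the FWHM definition of duration and all trace parameters are assumptions of this sketch, not the plugin's exact implementation:

```python
import numpy as np

def signal_params(trace, dt):
    """Amplitude (peak above baseline) and duration (full width at
    half-maximum) of a single Ca2+ transient -- simplified sketch."""
    baseline = np.median(trace)
    amplitude = trace.max() - baseline
    half = baseline + amplitude / 2.0
    above = np.flatnonzero(trace >= half)      # assumes one transient
    duration = (above[-1] - above[0] + 1) * dt
    return amplitude, duration

# Synthetic fluorescence trace: baseline 1.0 plus one Gaussian event
dt = 0.05                                      # seconds per frame
t = np.arange(0, 10, dt)
trace = 1.0 + 2.0 * np.exp(-((t - 5.0) ** 2) / (2 * 0.5 ** 2))

amp, dur = signal_params(trace, dt)
```

In the full pipeline these per-event numbers, computed within each automatically placed ROI, are what feed the downstream statistical analysis in R.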

  13. Finite-state residual vector quantizer for image coding

    NASA Astrophysics Data System (ADS)

    Huang, Steve S.; Wang, Jia-Shung

    1993-10-01

    Finite state vector quantization (FSVQ) has been proven in recent years to be a high-quality, low-bit-rate coding scheme. An FSVQ achieves the efficiency of a small-codebook (state codebook) VQ while maintaining the quality of a large-codebook (master codebook) VQ. However, the large master codebook becomes a primary limitation of FSVQ when implementation is taken into account: a large amount of memory is required to store the master codebook, and much effort is spent maintaining the state codebooks if the master codebook grows too large. This problem can be partially solved by the mean/residual technique (MRVQ), in which the block means and the residual vectors are coded separately. A new hybrid coding scheme, called finite state residual vector quantization (FSRVQ), is proposed in this paper to exploit the advantages of both FSVQ and MRVQ. The codewords in FSRVQ are designed by removing the block means so as to reduce the codebook size. The block means are predicted from neighboring blocks to reduce the bit rate, and the predicted means are added to the residual vectors so that the state codebooks can be generated entirely. Experimental results indicate that FSRVQ uniformly outperforms both ordinary FSVQ and MRVQ.
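The mean-removal idea behind MRVQ and FSRVQ can be sketched in a few lines; the codebook size and block shape are assumed, and the state-prediction machinery of FSRVQ is omitted:

```python
import numpy as np

def encode_block(block, residual_codebook):
    """Mean/residual coding of one image block: the block mean is coded
    separately, and the nearest zero-mean codeword indexes the residual."""
    mean = block.mean()
    residual = (block - mean).ravel()
    dists = ((residual_codebook - residual) ** 2).sum(axis=1)
    return mean, int(np.argmin(dists))

def decode_block(mean, idx, residual_codebook, shape):
    """Reconstruct a block from its coded mean and residual index."""
    return residual_codebook[idx].reshape(shape) + mean

# Tiny illustrative codebook of eight zero-mean 4x4 codewords
rng = np.random.default_rng(3)
codebook = rng.standard_normal((8, 16))
codebook -= codebook.mean(axis=1, keepdims=True)   # enforce zero mean

block = 100.0 + codebook[5].reshape(4, 4)   # bright block, known residual
mean, idx = encode_block(block, codebook)
recon = decode_block(mean, idx, codebook, (4, 4))
```

Because every codeword is zero-mean, one small codebook serves blocks of any brightness; FSRVQ additionally predicts `mean` from neighboring blocks instead of transmitting it.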

  14. Police witness identification images: a geometric morphometric analysis.

    PubMed

    Hayes, Susan; Tullberg, Cameron

    2012-11-01

    Research into witness identification images typically occurs within the laboratory and involves subjective likeness and recognizability judgments. This study analyzed whether actual witness identification images systematically alter the facial shapes of the suspects described. The shape analysis tool of geometric morphometrics was applied to 46 homologous facial landmarks displayed on 50 witness identification images and their corresponding arrest photographs, using principal component analysis and multivariate regressions. The results indicate that, compared with arrest photographs, witness identification images systematically depict suspects with lowered and medially located eyebrows (p < 0.000001). This effect occurred independently of the police artist and did not occur with composites produced under laboratory conditions. There are several possible explanations for this finding, including any or all of the following: the suspect was frowning at the time of the incident, the witness had negative feelings toward the suspect, it is an effect of unfamiliar face processing, or the suspect displayed fear at the time of their arrest photograph. PMID:22536846
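Geometric morphometrics compares landmark configurations only after removing position, scale, and rotation. A minimal ordinary (pairwise) Procrustes superimposition for 2-D landmarks is sketched below; the full method layers PCA and multivariate regression on top of the aligned shapes, and the square of landmarks here is invented for illustration:

```python
import numpy as np

def procrustes_align(source, target):
    """Ordinary Procrustes: translate, scale, and rotate `source`
    landmarks onto `target` (both (n, 2) arrays)."""
    s = source - source.mean(axis=0)        # remove position
    t = target - target.mean(axis=0)
    s = s / np.linalg.norm(s)               # remove scale
    t = t / np.linalg.norm(t)
    u, _, vt = np.linalg.svd(t.T @ s)       # optimal rotation (orthogonal
    rotation = u @ vt                       # Procrustes solution)
    return s @ rotation.T, t

# Target: a square of landmarks; source: the same square rotated 30
# degrees and shifted -- alignment should leave near-zero residual.
theta = np.pi / 6
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
target = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
source = target @ rot.T + np.array([3.0, -2.0])

aligned, t_norm = procrustes_align(source, target)
residual = np.linalg.norm(aligned - t_norm)
```

After superimposition, any remaining landmark differences are pure shape differences, which is what makes the eyebrow displacement in the study interpretable.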

  15. Validating retinal fundus image analysis algorithms: issues and a proposal.

    PubMed

    Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; Al-Diri, Bashir; Cheung, Carol Y; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M; Jelinek, Herbert F; Meriaudeau, Fabrice; Quellec, Gwénolé; Macgillivray, Tom; Dhillon, Bal

    2013-05-01

    This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison. PMID:23794433

  16. A hyperspectral image analysis workbench for environmental science applications

    SciTech Connect

    Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.; Slater, J.C.

    1992-10-01

    A significant challenge to the information sciences is to provide more powerful and accessible means to exploit the enormous wealth of data available from high-resolution imaging spectrometry, or "hyperspectral" imagery, for analysis, for mapping purposes, and for input to environmental modeling applications. As an initial response to this challenge, Argonne's Advanced Computer Applications Center has developed a workstation-based prototype software workbench which employs AI techniques and other advanced approaches to deduce surface characteristics and extract features from the hyperspectral images. Among its current capabilities, the prototype system can classify pixels by abstract surface type. The classification process employs neural network analysis of inputs which include pixel spectra and a variety of processed image metrics, including image "texture spectra" derived from fractal signatures computed for subimage tiles at each wavelength.

  17. A hyperspectral image analysis workbench for environmental science applications

    SciTech Connect

    Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.; Slater, J.C.

    1992-01-01

    A significant challenge to the information sciences is to provide more powerful and accessible means to exploit the enormous wealth of data available from high-resolution imaging spectrometry, or "hyperspectral" imagery, for analysis, for mapping purposes, and for input to environmental modeling applications. As an initial response to this challenge, Argonne's Advanced Computer Applications Center has developed a workstation-based prototype software workbench which employs AI techniques and other advanced approaches to deduce surface characteristics and extract features from the hyperspectral images. Among its current capabilities, the prototype system can classify pixels by abstract surface type. The classification process employs neural network analysis of inputs which include pixel spectra and a variety of processed image metrics, including image "texture spectra" derived from fractal signatures computed for subimage tiles at each wavelength.

  18. Validating Retinal Fundus Image Analysis Algorithms: Issues and a Proposal

    PubMed Central

    Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; al-Diri, Bashir; Cheung, Carol Y.; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M.; Jelinek, Herbert F.; Meriaudeau, Fabrice; Quellec, Gwénolé; MacGillivray, Tom; Dhillon, Bal

    2013-01-01

    This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison. PMID:23794433

  19. Classification of Korla fragrant pears using NIR hyperspectral imaging analysis

    NASA Astrophysics Data System (ADS)

    Rao, Xiuqin; Yang, Chun-Chieh; Ying, Yibin; Kim, Moon S.; Chao, Kuanglin

    2012-05-01

    Korla fragrant pears are small oval pears characterized by light green skin, crisp texture, and a pleasant perfume for which they are named. Anatomically, the calyx of a fragrant pear may be either persistent or deciduous; the deciduous-calyx fruits are considered more desirable due to taste and texture attributes. Chinese packaging standards require that packed cases of fragrant pears contain 5% or less of the persistent-calyx type. Near-infrared hyperspectral imaging was investigated as a potential means for automated sorting of pears according to calyx type. Hyperspectral images spanning the 992-1681 nm region were acquired using an EMCCD-based laboratory line-scan imaging system. The hyperspectral images were analyzed to select wavebands useful for distinguishing persistent-calyx fruits from deciduous-calyx fruits. Based on the selected wavebands, an image-processing algorithm was developed that targets automated classification of Korla fragrant pears into the two categories for packaging purposes.

  20. Image analysis of ocular fundus for retinopathy characterization

    SciTech Connect

    Ushizima, Daniela; Cuadros, Jorge

    2010-02-05

    Automated analysis of ocular fundus images is a common procedure in countries such as England, covering both nonemergency examination and retinal screening of patients with diabetes mellitus. This involves digital image capture and transmission of the images to a digital reading center for evaluation and treatment referral. In collaboration with the Optometry Department, University of California, Berkeley, we have tested computer vision algorithms to segment vessels and lesions in ground-truth data (the DRIVE database) and in hundreds of non-macula-centered, nonuniformly illuminated views of the eye fundus from the EyePACS program. Methods under investigation involve mathematical morphology (Figure 1) for image enhancement and pattern matching. Recently, we have focused on more efficient techniques to model the ocular fundus vasculature (Figure 2) using deformable contours. Preliminary results show accurate segmentation of vessels and a high rate of true-positive microaneurysm detection.

  1. Peripheral blood smear image analysis: A comprehensive review.

    PubMed

    Mohammed, Emad A; Mohamed, Mostafa M A; Far, Behrouz H; Naugler, Christopher

    2014-01-01

    Peripheral blood smear image examination is part of the routine work of every laboratory. The manual examination of these images is tedious and time-consuming and suffers from interobserver variation. This has motivated researchers to develop different algorithms and methods to automate peripheral blood smear image analysis. Image analysis itself consists of a sequence of steps: image segmentation, feature extraction and selection, and pattern classification. The image segmentation step addresses the problem of extracting the object or region of interest from the complicated peripheral blood smear image. Support vector machines (SVMs) and artificial neural networks (ANNs) are two common approaches to image segmentation. Feature extraction and selection aims to derive descriptive characteristics of the extracted object that are similar within the same object class and different between different objects. This facilitates the last step of the image analysis process: pattern classification. The goal of pattern classification is to assign a class to the selected features from a group of known classes. There are two types of classifier learning algorithms: supervised and unsupervised. Supervised learning algorithms predict the class of the object under test using training data of known classes; the training data have a predefined label for every class, and the learning algorithm can use these data to predict the class of a test object. Unsupervised learning algorithms use unlabeled training data and divide them into groups using similarity measurements; they predict the group to which a new test object belongs based on the training data, without assigning an explicit class to that object. ANN, SVM, decision tree and K-nearest neighbor are possible approaches to classification. Increased discrimination may be obtained by combining several classifiers. PMID:24843821
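The final classification step can be sketched with one of the classifiers the review names, K-nearest neighbor; the toy features (area, perimeter) and labels below are invented for illustration:

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training feature vectors (Euclidean distance)."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return int(values[np.argmax(counts)])

# Toy cell features: [area, perimeter] for two labeled classes
train_x = np.array([[10.0, 12.0], [11.0, 13.0], [9.5, 11.5],    # class 0
                    [25.0, 30.0], [26.0, 31.0], [24.5, 29.0]])  # class 1
train_y = np.array([0, 0, 0, 1, 1, 1])

label = knn_predict(train_x, train_y, np.array([10.5, 12.5]))
```

This is a supervised learner in the review's terminology: the predefined labels in `train_y` are what let it assign an explicit class to the test object.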

  2. Peripheral blood smear image analysis: A comprehensive review

    PubMed Central

    Mohammed, Emad A.; Mohamed, Mostafa M. A.; Far, Behrouz H.; Naugler, Christopher

    2014-01-01

    Peripheral blood smear image examination is part of the routine work of every laboratory. The manual examination of these images is tedious and time-consuming and suffers from interobserver variation. This has motivated researchers to develop different algorithms and methods to automate peripheral blood smear image analysis. Image analysis itself consists of a sequence of steps: image segmentation, feature extraction and selection, and pattern classification. The image segmentation step addresses the problem of extracting the object or region of interest from the complicated peripheral blood smear image. Support vector machines (SVMs) and artificial neural networks (ANNs) are two common approaches to image segmentation. Feature extraction and selection aims to derive descriptive characteristics of the extracted object that are similar within the same object class and different between different objects. This facilitates the last step of the image analysis process: pattern classification. The goal of pattern classification is to assign a class to the selected features from a group of known classes. There are two types of classifier learning algorithms: supervised and unsupervised. Supervised learning algorithms predict the class of the object under test using training data of known classes; the training data have a predefined label for every class, and the learning algorithm can use these data to predict the class of a test object. Unsupervised learning algorithms use unlabeled training data and divide them into groups using similarity measurements; they predict the group to which a new test object belongs based on the training data, without assigning an explicit class to that object. ANN, SVM, decision tree and K-nearest neighbor are possible approaches to classification. Increased discrimination may be obtained by combining several classifiers. PMID:24843821

  3. Combining multiset resolution and segmentation for hyperspectral image analysis of biological tissues.

    PubMed

    Piqueras, S; Krafft, C; Beleites, C; Egodage, K; von Eggeling, F; Guntinas-Lichius, O; Popp, J; Tauler, R; de Juan, A

    2015-06-30

    Hyperspectral images can provide useful biochemical information about tissue samples. Fourier transform infrared (FTIR) images, in particular, have often been used to distinguish different tissue elements and changes caused by pathology. The spectral variation between tissue types and pathological states is very small, and multivariate analysis methods are required to adequately describe these subtle changes. In this work, a strategy combining multivariate curve resolution-alternating least squares (MCR-ALS), a resolution (unmixing) method that recovers distribution maps and pure spectra of image constituents, and K-means clustering, a segmentation method that identifies groups of similar pixels in an image, is used to extract efficient information from tissue samples. First, multiset MCR-ALS analysis is performed on the set of images related to a particular pathology status to provide the basic spectral signatures and distribution maps of the biological contributions needed to describe the tissues. Multiset segmentation analysis is then applied to the obtained MCR scores (concentration profiles), used as compressed initial information for segmentation purposes. The multiset idea is transferred to perform image segmentation across different tissue samples. In this way, a distinction can be made between clusters associated with relevant biological parts common to all images, linked to general trends of the type of samples analyzed, and sample-specific clusters that reflect the natural biological sample-to-sample variability. The last step consists of performing separate multiset MCR-ALS analyses on the pixels of each of the relevant segmentation clusters for the pathology studied to obtain a finer description of the related tissue parts. The potential of the strategy combining multiset resolution on complete images, multiset segmentation and multiset local resolution analysis is shown in a study of FTIR images of tissue sections recorded on inflamed and non-inflamed tissue.
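
    The MCR-ALS step described above factors the pixel-by-channel data matrix as D ≈ C S, alternating least-squares fits for concentrations C and spectra S under a nonnegativity constraint. Below is a minimal numpy sketch on a synthetic two-component mixture; the spectra, noise level and stopping rule are invented for illustration, and real MCR-ALS implementations add further constraints and convergence checks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "image": 100 pixels x 30 spectral channels, a mixture of two
# pure spectra (Gaussian bands) with nonnegative concentrations.
channels = np.arange(30)
S_true = np.vstack([np.exp(-(channels - 8) ** 2 / 10),
                    np.exp(-(channels - 20) ** 2 / 10)])   # (2, 30)
C_true = rng.uniform(0, 1, (100, 2))                        # (100, 2)
D = C_true @ S_true + rng.normal(0, 0.01, (100, 30))        # D ≈ C S

def mcr_als(D, n_comp=2, n_iter=50, seed=0):
    """Bilinear unmixing D ≈ C @ S by alternating least squares, with a
    nonnegativity constraint applied by clipping after each half-step."""
    rng = np.random.default_rng(seed)
    S = rng.uniform(0, 1, (n_comp, D.shape[1]))  # random initial spectra
    for _ in range(n_iter):
        C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)
    return C, S

C, S = mcr_als(D)
residual = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
```

    The recovered C plays the role of the "MCR scores" that the abstract feeds into the segmentation step.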

  4. MIXING QUANTIFICATION BY VISUAL IMAGING ANALYSIS

    EPA Science Inventory

    The paper reports on development of a method for quantifying two measures of mixing, the scale and intensity of segregation, through flow visualization, video recording, and software analysis. This non-intrusive method analyzes a planar cross section of a flowing system from an in...

  5. MIXING QUANTIFICATION BY VISUAL IMAGING ANALYSIS

    EPA Science Inventory

    This paper reports on development of a method for quantifying two measures of mixing, the scale and intensity of segregation, through flow visualization, video recording, and software analysis. This non-intrusive method analyzes a planar cross section of a flowing system from an ...

  6. Computer vision algorithms in DNA ploidy image analysis

    NASA Astrophysics Data System (ADS)

    Alexandratou, Eleni; Sofou, Anastasia; Papasaika, Haris; Maragos, Petros; Yova, Dido; Kavantzas, Nikolaos

    2006-02-01

    The high incidence and mortality rates of prostate cancer have stimulated research for prevention, early diagnosis and appropriate treatment. DNA ploidy status of tumour cells is an important parameter with diagnostic and prognostic significance. In the current study, DNA ploidy analysis was performed using image cytometry and digital image processing and analysis. Tissue samples from prostate patients were stained using the Feulgen method. Images were acquired using a digital imaging microscopy system consisting of an Olympus BX-50 microscope equipped with a color CCD camera. Segmentation of such images is not a trivial problem because of the uneven background, intensity variations within the nuclei and cell clustering. In this study specific algorithms were developed in Matlab based on the most prominent image segmentation approaches from the field of Mathematical Morphology, focusing on region-based watershed segmentation. First, biomedical images were simplified by non-linear filtering (alternating sequential filters, levelings); next, image features such as gradient information and markers were extracted to guide the segmentation process. The extracted markers were used as seeds, and the watershed transformation was applied to the gradient of the filtered image. Image flooding was performed isotropically from the markers using hierarchical queues, following the Beucher and Meyer methodology. The developed algorithms successfully segmented cells from the background and from cell clusters. To characterize the nuclei, we attempted to derive a set of effective color features. By analyzing more than 50 color features, we found that a set of color features, hue, saturation-weighted hue, I1=(R+G+B)/3, I2=(R-B), I3=(2G-R-B)/2, the Karhunen-Loeve transformation and an energy operator, is effective.
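
    The intensity-style features listed at the end of the abstract are simple per-pixel combinations of the R, G and B channels. A small numpy sketch on a hypothetical patch (the hue-based features, Karhunen-Loeve transform and energy operator are omitted here):

```python
import numpy as np

# Hypothetical 4x4 RGB nucleus patch with 8-bit channel values.
rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, (4, 4, 3)).astype(float)
R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# Per-pixel colour features quoted in the abstract:
I1 = (R + G + B) / 3       # overall intensity
I2 = R - B                 # red-blue opponent channel
I3 = (2 * G - R - B) / 2   # green-magenta opponent channel

# One possible per-nucleus descriptor: the mean of each feature map.
nucleus_features = np.array([I1.mean(), I2.mean(), I3.mean()])
```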

  7. An advanced image analysis tool for the quantification and characterization of breast cancer in microscopy images.

    PubMed

    Goudas, Theodosios; Maglogiannis, Ilias

    2015-03-01

    The paper presents an advanced image analysis tool for the accurate and fast characterization and quantification of cancer and apoptotic cells in microscopy images. The proposed tool utilizes adaptive thresholding and a Support Vector Machines classifier. The segmentation results are enhanced through a Majority Voting and a Watershed technique, while an object labeling algorithm has been developed for the fast and accurate validation of the recognized cells. Expert pathologists evaluated the tool and the reported results are satisfying and reproducible. PMID:25681102

  8. Theoretical analysis of quantum ghost imaging through turbulence

    SciTech Connect

    Chan, Kam Wai Clifford; Simon, D. S.; Sergienko, A. V.; Hardy, Nicholas D.; Shapiro, Jeffrey H.; Dixon, P. Ben; Howland, Gregory A.; Howell, John C.; Eberly, Joseph H.; O'Sullivan, Malcolm N.; Rodenburg, Brandon; Boyd, Robert W.

    2011-10-15

    Atmospheric turbulence generally affects the resolution and visibility of an image in long-distance imaging. In a recent quantum ghost imaging experiment [P. B. Dixon et al., Phys. Rev. A 83, 051803 (2011)], it was found that the effect of the turbulence can nevertheless be mitigated under certain conditions. This paper gives a detailed theoretical analysis to the setup and results reported in the experiment. Entangled photons with a finite correlation area and a turbulence model beyond the phase screen approximation are considered.

  9. Fundus image change analysis: geometric and radiometric normalization

    NASA Astrophysics Data System (ADS)

    Shin, David S.; Kaiser, Richard S.; Lee, Michael S.; Berger, Jeffrey W.

    1999-06-01

    Image change analysis will potentiate fundus feature quantitation in natural history and intervention studies for major blinding diseases such as age-related macular degeneration and diabetic retinopathy. Geometric and radiometric normalization of fundus images acquired at two points in time are required for accurate change detection, but existing methods are unsatisfactory for change analysis. We have developed and explored algorithms for correction of image misalignment (geometric) and inter- and intra-image brightness variation (radiometric) in order to facilitate highly accurate change detection. Thirty-five millimeter color fundus photographs were digitized at 500 to 1000 dpi. Custom-developed registration algorithms correcting for translation only; translation and rotation; translation, rotation, and scale; and polynomial based image-warping algorithms allowed for exploration of registration accuracy required for change detection. Registration accuracy beyond that offered by rigid body transformation is required for accurate change detection. Radiometric correction required shade-correction and normalization of inter-image statistical parameters. Precise geometric and radiometric normalization allows for highly accurate change detection. To our knowledge, these results are the first demonstration of the combination of geometric and radiometric normalization offering sufficient accuracy to allow for accurate fundus image change detection potentiating longitudinal study of retinal disease.

  10. Image-guided breast biopsy: state-of-the-art.

    PubMed

    O'Flynn, E A M; Wilson, A R M; Michell, M J

    2010-04-01

    Percutaneous image-guided breast biopsy is widely practised to evaluate predominantly non-palpable breast lesions. There has been steady development in percutaneous biopsy techniques. Fine-needle aspiration cytology was the original method of sampling, followed in the early 1990s by large core needle biopsy. The accuracy of both has been improved by ultrasound and stereotactic guidance. Larger bore vacuum-assisted biopsy devices became available in the late 1990s and are now commonplace in most breast units. We review the different types of breast biopsy devices currently available together with various localization techniques used, focusing on their advantages, limitations and current controversial clinical management issues. PMID:20338392

  11. Passive detection of copy-move forgery in digital images: state-of-the-art.

    PubMed

    Al-Qershi, Osamah M; Khoo, Bee Ee

    2013-09-10

    Currently, digital images and videos have high importance because they have become the main carriers of information. However, the relative ease of tampering with images and videos makes their authenticity difficult to trust. Digital image forensics addresses the problem of authenticating images and their origins. One main branch of image forensics is passive image forgery detection. Images can be forged using different techniques; the most common forgery is the copy-move, in which a region of an image is duplicated and placed elsewhere in the same image. Active techniques, such as watermarking, have been proposed to solve the image authenticity problem, but they have limitations because they require human intervention or specially equipped cameras. To overcome these limitations, several passive authentication methods have been proposed. In contrast to active methods, passive methods do not require any previous information about the image; instead, they take advantage of specific detectable changes that forgeries introduce into the image. In this paper, we describe the current state-of-the-art of passive copy-move forgery detection methods. The key issues in developing a robust copy-move forgery detector are then identified, and trends in tackling those issues are addressed. PMID:23890651
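
    The copy-move forgery described above can be illustrated with a deliberately naive detector: slide a fixed-size block over the image and flag coordinate pairs whose pixel content matches exactly. Detectors in the surveyed literature compare robust block features (e.g. DCT coefficients or moments) rather than raw pixels, so this sketch and its toy image are illustrative only.

```python
import numpy as np

def find_duplicate_blocks(img, b=4):
    """Naive copy-move check: hash every b x b pixel block and report
    coordinate pairs whose blocks match exactly."""
    h, w = img.shape
    seen = {}
    matches = []
    for i in range(h - b + 1):
        for j in range(w - b + 1):
            key = img[i:i + b, j:j + b].tobytes()
            if key in seen:
                matches.append((seen[key], (i, j)))
            else:
                seen[key] = (i, j)
    return matches

# Forged toy image: the block at (0, 0) is copied to (8, 8).
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (16, 16))
img[8:12, 8:12] = img[0:4, 0:4]
print(find_duplicate_blocks(img))  # reports the duplicated region
```

    Exact matching breaks as soon as the copied region is compressed, blurred or rescaled, which is precisely why the robustness issues identified in the paper matter.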

  12. Direct imaging of topological edge states at a bilayer graphene domain wall.

    PubMed

    Yin, Long-Jing; Jiang, Hua; Qiao, Jia-Bin; He, Lin

    2016-01-01

    The AB-BA domain wall in gapped graphene bilayers is a rare naked structure hosting topological electronic states. Although it has been extensively studied in theory, direct imaging of its topological edge states has been missing. Here we image the topological edge states at the graphene bilayer domain wall using a scanning tunnelling microscope. The simultaneously obtained atomic-resolution images of the domain wall provide us with unprecedented opportunities to measure the spatially varying edge states within it. The one-dimensional conducting channels are observed to be located mainly around the two edges of the domain wall, which is reproduced quite well by our theoretical calculations. Our experiment further demonstrates that the one-dimensional topological states are quite robust even in the presence of high magnetic fields. The result reported here may raise hopes of graphene-based electronics with ultra-low dissipation. PMID:27312315

  13. Direct imaging of topological edge states at a bilayer graphene domain wall

    NASA Astrophysics Data System (ADS)

    Yin, Long-Jing; Jiang, Hua; Qiao, Jia-Bin; He, Lin

    2016-06-01

    The AB-BA domain wall in gapped graphene bilayers is a rare naked structure hosting topological electronic states. Although it has been extensively studied in theory, direct imaging of its topological edge states has been missing. Here we image the topological edge states at the graphene bilayer domain wall using a scanning tunnelling microscope. The simultaneously obtained atomic-resolution images of the domain wall provide us with unprecedented opportunities to measure the spatially varying edge states within it. The one-dimensional conducting channels are observed to be located mainly around the two edges of the domain wall, which is reproduced quite well by our theoretical calculations. Our experiment further demonstrates that the one-dimensional topological states are quite robust even in the presence of high magnetic fields. The result reported here may raise hopes of graphene-based electronics with ultra-low dissipation.

  14. Direct imaging of topological edge states at a bilayer graphene domain wall

    PubMed Central

    Yin, Long-Jing; Jiang, Hua; Qiao, Jia-Bin; He, Lin

    2016-01-01

    The AB–BA domain wall in gapped graphene bilayers is a rare naked structure hosting topological electronic states. Although it has been extensively studied in theory, direct imaging of its topological edge states has been missing. Here we image the topological edge states at the graphene bilayer domain wall using a scanning tunnelling microscope. The simultaneously obtained atomic-resolution images of the domain wall provide us with unprecedented opportunities to measure the spatially varying edge states within it. The one-dimensional conducting channels are observed to be located mainly around the two edges of the domain wall, which is reproduced quite well by our theoretical calculations. Our experiment further demonstrates that the one-dimensional topological states are quite robust even in the presence of high magnetic fields. The result reported here may raise hopes of graphene-based electronics with ultra-low dissipation. PMID:27312315

  15. Proceedings of the Airborne Imaging Spectrometer Data Analysis Workshop

    NASA Technical Reports Server (NTRS)

    Vane, G. (Editor); Goetz, A. F. H. (Editor)

    1985-01-01

    The Airborne Imaging Spectrometer (AIS) Data Analysis Workshop was held at the Jet Propulsion Laboratory on April 8 to 10, 1985. It was attended by 92 people who heard reports on 30 investigations currently under way using AIS data that have been collected over the past two years. Written summaries of 27 of the presentations are in these Proceedings. Many of the results presented at the Workshop are preliminary because most investigators have been working with this fundamentally new type of data for only a relatively short time. Nevertheless, several conclusions can be drawn from the Workshop presentations concerning the value of imaging spectrometry to Earth remote sensing. First, work with AIS has shown that direct identification of minerals through high spectral resolution imaging is a reality for a wide range of materials and geological settings. Second, there are strong indications that high spectral resolution remote sensing will enhance the ability to map vegetation species. There are also good indications that imaging spectrometry will be useful for biochemical studies of vegetation. Finally, there are a number of new data analysis techniques under development which should lead to more efficient and complete information extraction from imaging spectrometer data. The results of the Workshop indicate that as experience is gained with this new class of data, and as new analysis methodologies are developed and applied, the value of imaging spectrometry should increase.

  16. Image-based histologic grade estimation using stochastic geometry analysis

    NASA Astrophysics Data System (ADS)

    Petushi, Sokol; Zhang, Jasper; Milutinovic, Aladin; Breen, David E.; Garcia, Fernando U.

    2011-03-01

    Background: Low reproducibility of histologic grading of breast carcinoma due to its subjectivity has traditionally diminished the prognostic value of histologic breast cancer grading. The objective of this study is to assess the effectiveness and reproducibility of grading breast carcinomas with automated computer-based image processing that utilizes stochastic geometry shape analysis. Methods: We used histology images stained with Hematoxylin & Eosin (H&E) from invasive mammary carcinoma, no special type cases as a source domain and study environment. We developed a customized hybrid semi-automated segmentation algorithm to cluster the raw image data and reduce the image domain complexity to a binary representation with the foreground representing regions of high density of malignant cells. A second algorithm was developed to apply stochastic geometry and texture analysis measurements to the segmented images and to produce shape distributions, transforming the original color images into a histogram representation that captures their distinguishing properties between various histological grades. Results: Computational results were compared against known histological grades assigned by the pathologist. The Earth Mover's Distance (EMD) similarity metric and the K-Nearest Neighbors (KNN) classification algorithm provided correlations between the high-dimensional set of shape distributions and a priori known histological grades. Conclusion: Computational pattern analysis of histology shows promise as an effective software tool in breast cancer histological grading.
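
    The EMD-plus-KNN classification step from the Results section can be sketched in a few lines: for unit-spaced 1-D histograms of equal total mass, the Earth Mover's Distance reduces to the L1 distance between cumulative distributions, and a 1-NN rule assigns the grade of the closest reference shape distribution. The histograms and grade labels below are hypothetical, not the paper's data.

```python
import numpy as np

def emd_1d(h1, h2):
    """Earth Mover's Distance between two 1-D histograms of equal total
    mass: for unit-spaced bins it equals the L1 distance between CDFs."""
    p = np.cumsum(h1 / h1.sum())
    q = np.cumsum(h2 / h2.sum())
    return np.abs(p - q).sum()

# Hypothetical shape distributions (histograms of a stochastic-geometry
# measure) for reference images of known grade; labels are illustrative.
reference = {"grade1": np.array([8.0, 4, 2, 1, 0]),
             "grade3": np.array([0.0, 1, 2, 4, 8])}

def nearest_grade(hist):
    """1-NN under EMD: assign the grade of the closest reference histogram."""
    return min(reference, key=lambda g: emd_1d(reference[g], hist))

print(nearest_grade(np.array([7.0, 5, 2, 1, 0])))  # → grade1
```

    For the high-dimensional, non-uniform shape distributions in the paper, a general EMD solver (a transportation-problem optimization) would replace the CDF shortcut.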

  17. SIMA: Python software for analysis of dynamic fluorescence imaging data

    PubMed Central

    Kaifosh, Patrick; Zaremba, Jeffrey D.; Danielson, Nathan B.; Losonczy, Attila

    2014-01-01

    Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/. PMID:25295002

  18. Automatic quantitative analysis of cardiac MR perfusion images

    NASA Astrophysics Data System (ADS)

    Breeuwer, Marcel M.; Spreeuwers, Luuk J.; Quist, Marcel J.

    2001-07-01

    Magnetic Resonance Imaging (MRI) is a powerful technique for imaging cardiovascular diseases. The introduction of cardiovascular MRI into clinical practice is however hampered by the lack of efficient and accurate image analysis methods. This paper focuses on the evaluation of blood perfusion in the myocardium (the heart muscle) from MR images, using contrast-enhanced ECG-triggered MRI. We have developed an automatic quantitative analysis method, which works as follows. First, image registration is used to compensate for translation and rotation of the myocardium over time. Next, the boundaries of the myocardium are detected and for each position within the myocardium a time-intensity profile is constructed. The time interval during which the contrast agent passes for the first time through the left ventricle and the myocardium is detected and various parameters are measured from the time-intensity profiles in this interval. The measured parameters are visualized as color overlays on the original images. Analysis results are stored, so that they can later on be compared for different stress levels of the heart. The method is described in detail in this paper and preliminary validation results are presented.
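
    The parameters measured from each time-intensity profile typically include semi-quantitative first-pass measures such as peak enhancement, time to peak and maximum upslope. A sketch under assumed definitions (the exact parameter set and baseline handling in the paper may differ, and the curve below is invented):

```python
import numpy as np

# Hypothetical time-intensity profile at one myocardial position:
# baseline, first-pass upslope of the contrast agent, then washout.
t = np.arange(0.0, 30.0, 1.0)  # seconds, one frame per cardiac cycle
intensity = np.array([10.0] * 5
                     + [15, 25, 40, 55, 65, 70, 68, 64, 60]
                     + [55.0] * 16)

def perfusion_parameters(t, y, baseline_frames=5):
    """Semi-quantitative first-pass measures (assumed definitions):
    enhancement above baseline, time to peak, and maximum upslope."""
    base = y[:baseline_frames].mean()
    peak = int(np.argmax(y))
    upslopes = np.diff(y[:peak + 1]) / np.diff(t[:peak + 1])
    return {"peak_enhancement": y[peak] - base,
            "time_to_peak": t[peak] - t[baseline_frames - 1],
            "max_upslope": float(upslopes.max())}

params = perfusion_parameters(t, intensity)
```

    Computing these values per pixel, after the registration and boundary-detection steps, yields the colour-overlay parameter maps described above.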

  19. In-situ imaging sensors for bioprocess monitoring: state of the art.

    PubMed

    Bluma, Arne; Höpfner, Tim; Lindner, Patrick; Rehbock, Christoph; Beutel, Sascha; Riechers, Daniel; Hitzmann, Bernd; Scheper, Thomas

    2010-11-01

    Over the last two decades, more and more applications of sophisticated sensor technology have been described in the literature on upstream and downstream processing in biotechnology (Middendorf et al. J Biotechnol 31:395-403, 1993; Lausch et al. J Chromatogr A 654:190-195, 1993; Scheper et al. Ann NY Acad Sci 506:431-445, 1987), in order to improve the quality and stability of these processes. Generally, biotechnological processes are complex three-phase systems: the cells (solid phase) are suspended in medium (liquid phase) and aerated by a gas phase. The chemical analysis of such processes has to cover all three phases. Furthermore, the bioanalytical methods used must monitor physical process values (e.g. temperature, shear force), chemical process values (e.g. pH), and biological process values (metabolic state of the cells, morphology). In particular, for monitoring and estimation of relevant biological process variables, image-based inline sensors are increasingly used. Of special interest are sensors which can be installed in a bioreactor as probes (like a pH probe). The cultivation medium is monitored directly in the process, without any need for withdrawal of samples or a bypass. Important variables for the control of such processes are cell count, cell-size distribution (CSD), and the morphology of cells (Höpfner et al. Bioprocess Biosyst Eng 33:247-256, 2010). A major impetus for the development of these image-based techniques is the process analytical technology (PAT) initiative of the US Food and Drug Administration (FDA) (Scheper et al. Anal Chim Acta 163:111-118, 1984; Reardon and Scheper 1995; Schügerl et al. Trends Biotechnol 4:11-15, 1986). This contribution gives an overview of non-invasive, image-based, in-situ systems and their applications. The main focus is directed at the wide application area of in-situ microscopes. These inline image analysis systems enable the determination of indirect and direct cell

  20. Imaging the dynamics of free-electron Landau states

    PubMed Central

    Schattschneider, P.; Schachinger, Th.; Stöger-Pollach, M.; Löffler, S.; Steiger-Thirsfeld, A.; Bliokh, K. Y.; Nori, Franco

    2014-01-01

    Landau levels and states of electrons in a magnetic field are fundamental quantum entities underlying the quantum Hall and related effects in condensed matter physics. However, the real-space properties and observation of Landau wave functions remain elusive. Here we report the real-space observation of Landau states and the internal rotational dynamics of free electrons. States with different quantum numbers are produced using nanometre-sized electron vortex beams, with a radius chosen to match the waist of the Landau states, in a quasi-uniform magnetic field. Scanning the beams along the propagation direction, we reconstruct the rotational dynamics of the Landau wave functions with angular frequency ~100 GHz. We observe that Landau modes with different azimuthal quantum numbers belong to three classes, which are characterized by rotations with zero, Larmor and cyclotron frequencies, respectively. This is in sharp contrast to the uniform cyclotron rotation of classical electrons, and in perfect agreement with recent theoretical predictions. PMID:25105563

  1. Imaging the evolution of metallic states in a correlated iridate

    NASA Astrophysics Data System (ADS)

    Okada, Yoshinori; Walkup, Daniel; Lin, Hsin; Dhital, Chetan; Chang, Tay-Rong; Khadka, Sovit; Zhou, Wenwen; Jeng, Horng-Tay; Paranjape, Mandar; Bansil, Arun; Wang, Ziqiang; Wilson, Stephen D.; Madhavan, Vidya

    2013-08-01

    The Ruddlesden-Popper series of iridates (Srn+1IrnO3n+1) have been the subject of much recent attention due to the anticipation of emergent phenomena arising from the cooperative action of spin-orbit-driven band splitting and Coulomb interactions. However, an ongoing debate over the role of correlations in the formation of the charge gap and a lack of understanding of the effects of doping on the low-energy electronic structure have hindered experimental progress in realizing many of the predicted states. Using scanning tunnelling spectroscopy we map out the spatially resolved density of states in Sr3Ir2O7 (Ir327). We show that its parent compound, argued to exist only as a weakly correlated band insulator, in fact possesses a substantial ~ 130 meV charge excitation gap driven by an interplay between structure, spin-orbit coupling and correlations. We find that single-atom defects are associated with a strong electronic inhomogeneity, creating an important distinction between the intrinsic and spatially averaged electronic structure. Combined with first-principles calculations, our measurements reveal how defects at specific atomic sites transfer spectral weight from higher energies to the gap energies, providing a possible route to obtaining metallic electronic states from the parent insulating states in the iridates.

  2. Imaging the dynamics of free-electron Landau states.

    PubMed

    Schattschneider, P; Schachinger, Th; Stöger-Pollach, M; Löffler, S; Steiger-Thirsfeld, A; Bliokh, K Y; Nori, Franco

    2014-01-01

    Landau levels and states of electrons in a magnetic field are fundamental quantum entities underlying the quantum Hall and related effects in condensed matter physics. However, the real-space properties and observation of Landau wave functions remain elusive. Here we report the real-space observation of Landau states and the internal rotational dynamics of free electrons. States with different quantum numbers are produced using nanometre-sized electron vortex beams, with a radius chosen to match the waist of the Landau states, in a quasi-uniform magnetic field. Scanning the beams along the propagation direction, we reconstruct the rotational dynamics of the Landau wave functions with angular frequency ~100 GHz. We observe that Landau modes with different azimuthal quantum numbers belong to three classes, which are characterized by rotations with zero, Larmor and cyclotron frequencies, respectively. This is in sharp contrast to the uniform cyclotron rotation of classical electrons, and in perfect agreement with recent theoretical predictions. PMID:25105563

  3. Images for NCA's Future: Perceptions in One State.

    ERIC Educational Resources Information Center

    Kirkpatrick, Kathryn; Brainard, Edward

    1995-01-01

    Describes a study examining the attitudes of K-12 educators toward their state's North Central Association (NCA) and its future. Indicates that external peer reviews were cited as the most valuable programs, while many suggestions for the future were related to the NCA-university connection, emphasizing a dynamic role for universities serving as…

  4. Simulation study comparing the imaging performance of a solid state detector with a rotating slat collimator versus parallel beam collimator setups

    NASA Astrophysics Data System (ADS)

    Staelens, Steven; Vandenberghe, Stefaan; De Beenhouwer, Jan; De Clercq, Stijn; D'Asseler, Yves; Lemahieu, Ignace; Van de Walle, Rik

    2004-05-01

    The main goal of this work is to assess the overall imaging performance of dedicated new solid state devices compared to a traditional scintillation camera for use in SPECT imaging. A solid state detector with a rotating slat collimator is compared with the same detector mounted with a classical collimator, as opposed to a traditional Anger camera. The solid state materials are characterized by a better energy resolution, while the rotating slat collimator promises a better sensitivity-resolution tradeoff. The evaluation of the different imaging modalities is done using GATE, a recently developed Monte Carlo code. Several aspects of imaging performance were addressed: spatial resolution, energy resolution and sensitivity, and a ROC analysis was performed to evaluate hot spot detectability. In this way a difference in performance was established among the imaging techniques, which allows task-dependent application of these modalities in future clinical practice.

  5. Digital interactive image analysis by array processing

    NASA Technical Reports Server (NTRS)

    Sabels, B. E.; Jennings, J. D.

    1973-01-01

    An attempt is made to draw a parallel between the existing geophysical data processing service industries and the emerging earth resources data support requirements. The relationship of seismic data analysis to ERTS data analysis is natural because in either case data is digitally recorded in the same format, resulting from remotely sensed energy which has been reflected, attenuated, shifted and degraded on its path from the source to the receiver. In the seismic case the energy is acoustic, ranging in frequencies from 10 to 75 cps, for which the lithosphere appears semi-transparent. In earth survey remote sensing through the atmosphere, visible and infrared frequency bands are being used. Yet the hardware and software required to process the magnetically recorded data from the two realms of inquiry are identical and similar, respectively. The resulting data products are similar.

  6. Rheumatoid arthritis: Nuclear Medicine state-of-the-art imaging

    PubMed Central

    Rosado-de-Castro, Paulo Henrique; Lopes de Souza, Sergio Augusto; Alexandre, Dângelo; Barbosa da Fonseca, Lea Mirian; Gutfilen, Bianca

    2014-01-01

    Rheumatoid arthritis (RA) is an autoimmune disease, which is associated with systemic and chronic inflammation of the joints, resulting in synovitis and pannus formation. For several decades, the assessment of RA has been limited to conventional radiography, assisting in the diagnosis and monitoring of disease. Nevertheless, conventional radiography has poor sensitivity in the detection of the inflammatory process that happens in the initial stages of RA. In the past years, new drugs that significantly decrease the progression of RA have allowed a more efficient treatment. Nuclear Medicine provides functional assessment of physiological processes and therefore has significant potential for timely diagnosis and adequate follow-up of RA. Several single photon emission computed tomography (SPECT) and positron emission tomography (PET) radiopharmaceuticals have been developed and applied in this field. The use of hybrid imaging, which permits computed tomography (CT) and nuclear medicine data to be acquired and fused, has increased even more the diagnostic accuracy of Nuclear Medicine by providing anatomical localization in SPECT/CT and PET/CT studies. More recently, fusion of PET with magnetic resonance imaging (PET/MRI) was introduced in some centers and demonstrated great potential. In this article, we will review studies that have been published using Nuclear Medicine for RA and examine key topics in the area. PMID:25035834

  7. Image analysis of chest radiographs. Final report

    SciTech Connect

    Hankinson, J.L.

    1982-06-01

    The report demonstrates the feasibility of using a computer for automated interpretation of chest radiographs for pneumoconiosis. The primary goal of this project was to continue testing and evaluating the prototype system with a larger set of films. After review of the final contract report and of the current literature, it was clear that several modifications to the prototype system were needed before the project could continue. These modifications fell into two general areas. The first was improving the stability of the system and compensating for the diversity of film quality found in films obtained in a surveillance program; since the system was to be tested with a large number of films, it was impractical to be extremely selective about film quality. The second was processing time: with a large set of films, total processing time becomes much more significant. An image display was added to the system so that the computer-determined lung boundaries could be verified for each film. A film-handling system was also added, enabling the system to scan films continuously without attendance.

  8. Image analysis of optic nerve disease.

    PubMed

    Burgoyne, C F

    2004-11-01

    Existing methodologies for imaging the optic nerve head surface topography and measuring the retinal nerve fibre layer thickness include confocal scanning laser ophthalmoscopy (Heidelberg retinal tomograph), optical coherence tomography, and scanning laser polarimetry. For cross-sectional screening of patient populations, all three approaches have achieved sensitivities and specificities in the range of 60-80% in various studies, with occasional specificities greater than 90% in select populations. Nevertheless, these methods are not likely to provide useful assistance for the experienced examiner at their present level of performance. For longitudinal change detection in individual patients, strategies for clinically specific change detection have been rigorously evaluated for confocal scanning laser tomography only. While these initial studies are encouraging, applying these algorithms in larger numbers of patients is now necessary. Future directions for these technologies are likely to include ultra-high resolution optical coherence tomography, the use of neural network/machine learning classifiers to improve clinical decision-making, and the ability to evaluate the susceptibility of individual optic nerve heads to potential damage from a given level of intraocular pressure or systemic blood pressure. PMID:15534606

  9. Two-photon photoemission from image-potential states of epitaxial graphene

    NASA Astrophysics Data System (ADS)

    Gugel, Dieter; Niesner, Daniel; Eickhoff, Christian; Wagner, Stefanie; Weinelt, Martin; Fauster, Thomas

    2015-12-01

    Using angle- and time-resolved two-photon photoelectron spectroscopy we observe a single series of image-potential states of graphene on monolayer (MLG) and bilayer graphene (BLG) on SiC(0001). The first image-potential state on MLG (BLG) has a binding energy of 0.93 eV (0.84 eV). Lifetimes of the first three image-potential states of MLG are 9, 44 and 110 fs. On hydrogen-intercalated, quasi-freestanding graphene no unoccupied states are observed. We attribute this to the absence of occupied initial states for direct transitions into image-potential states at photon energies below the work function used in two-photon photoemission. The work function varies between 4.14 and 4.79 eV, but the vacuum level stays ∼4.5 eV above the Dirac point for all surfaces studied. This finding suggests that direct excitation of image-potential states cannot be achieved by doping and the electron dynamics for free-standing graphene is not accessible by two-photon photoemission using photon energies below the work function.

  10. Image Analysis of DNA Fiber and Nucleus in Plants.

    PubMed

    Ohmido, Nobuko; Wako, Toshiyuki; Kato, Seiji; Fukui, Kiichi

    2016-01-01

    Advances in cytology have led to the application of a wide range of visualization methods in plant genome studies. Image analysis methods are indispensable tools where morphology, density, and color play important roles in biological systems. Visualization and image analysis methods are useful techniques in the analyses of the detailed structure and function of extended DNA fibers (EDFs) and interphase nuclei. The EDF offers the highest spatial resolving power for revealing genome structure, and it can be used for physical mapping, especially of closely located genes and tandemly repeated sequences. On the other hand, analyzing nuclear DNA and proteins reveals nuclear structure and functions. In this chapter, we describe the image analysis protocol for quantitatively analyzing two types of plant genome material: EDFs and interphase nuclei. PMID:27557694

  11. Imaging for dismantlement verification: information management and analysis algorithms

    SciTech Connect

    Robinson, Sean M.; Jarman, Kenneth D.; Pitts, W. Karl; Seifert, Allen; Misner, Alex C.; Woodring, Mitchell L.; Myjak, Mitchell J.

    2012-01-11

    The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes. An image will almost certainly contain highly sensitive information, and storing a comparison image will almost certainly violate a cardinal principle of information barriers: that no sensitive information be stored in the system. To overcome this problem, some features of the image might be reduced to a few parameters suitable for definition as an attribute, which must be non-sensitive to be acceptable in an Information Barrier regime. However, this process must be performed with care. Features like the perimeter, area, and intensity of an object, for example, might reveal sensitive information. Any data-reduction technique must provide sufficient information to discriminate a real object from a spoofed or incorrect one, while avoiding disclosure (or storage) of any sensitive object qualities. Ultimately, algorithms are intended to provide only a yes/no response verifying the presence of features in the image. We discuss the utility of imaging for arms control applications and present three image-based verification algorithms in this context. The algorithms reduce full image information to non-sensitive feature information, in a process that is intended to enable verification while eliminating the possibility of image reconstruction. The underlying images can be highly detailed, since they are dynamically generated behind an information barrier. We consider the use of active (conventional) radiography alone and in tandem with passive (auto) radiography. We study these algorithms in terms of technical performance in image analysis and application to an information barrier scheme.

  12. Functional imaging of auditory scene analysis.

    PubMed

    Gutschalk, Alexander; Dykstra, Andrew R

    2014-01-01

    Our auditory system is constantly faced with the task of decomposing the complex mixture of sound arriving at the ears into perceptually independent streams constituting accurate representations of individual sound sources. This decomposition, termed auditory scene analysis, is critical for both survival and communication, and is thought to underlie both speech and music perception. The neural underpinnings of auditory scene analysis have been studied utilizing invasive experiments with animal models as well as non-invasive (MEG, EEG, and fMRI) and invasive (intracranial EEG) studies conducted with human listeners. The present article reviews human neurophysiological research investigating the neural basis of auditory scene analysis, with emphasis on two classical paradigms termed streaming and informational masking. Other paradigms - such as the continuity illusion, mistuned harmonics, and multi-speaker environments - are briefly addressed thereafter. We conclude by discussing the emerging evidence for the role of auditory cortex in remapping incoming acoustic signals into a perceptual representation of auditory streams, which are then available for selective attention and further conscious processing. This article is part of a Special Issue entitled Human Auditory Neuroimaging. PMID:23968821

  13. Perfect imaging analysis of the spherical geodesic waveguide

    NASA Astrophysics Data System (ADS)

    González, Juan C.; Benítez, Pablo; Miñano, Juan C.; Grabovičkić, Dejan

    2012-12-01

    The Negative Refractive Lens (NRL) has shown that an optical system can produce images with details below the classic Abbe diffraction limit. This optical system transmits the electromagnetic fields, emitted by an object plane, towards an image plane, producing the same field distribution in both planes. In particular, a Dirac delta electric field in the object plane is focused without diffraction limit to a Dirac delta electric field in the image plane. Two devices with positive refraction, the Maxwell Fish Eye lens (MFE) and the Spherical Geodesic Waveguide (SGW), have also been claimed to break the diffraction limit, although with a different meaning of imaging. In these cases, the analysis considered the power transmission from a point source to a point receptor, which falls drastically when the receptor is displaced from the focus by a distance much smaller than the wavelength. Although these systems can detect displacements up to λ/3000, they cannot be compared to the NRL, since the concept of image is different: the SGW deals only with a point source and drain, while in the case of the NRL there is an object and an image surface. Here, an analysis of the SGW with defined object and image surfaces (both conical) is presented, similar to the NRL case. The results show that a Dirac delta electric field on the object surface produces an image below the diffraction limit on the image surface.

  14. Congruence analysis of point clouds from unstable stereo image sequences

    NASA Astrophysics Data System (ADS)

    Jepping, C.; Bethmann, F.; Luhmann, T.

    2014-06-01

    This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings, where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focuses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough to handle occlusions and other disturbances reliably. The second objective is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
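The RANSAC-style congruence test described in this record can be sketched in a few lines: repeatedly fit a 3D similarity transformation to a randomly selected point group, transfer all points from one epoch to the other, and keep the largest set of points with small residuals as the stable (congruent) area. The sketch below assumes NumPy and uses the Umeyama algorithm for the similarity fit; the paper does not specify its estimator, so that choice is an assumption.

```python
import numpy as np

def estimate_similarity(src, dst):
    """Umeyama estimate of (s, R, t) with dst ≈ s * R @ src + t (rows are points)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    cov = B.T @ A / len(src)                       # cross-covariance dst/src
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])                     # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t

def congruent_points(p1, p2, trials=200, thresh=0.01, rng=None):
    """RANSAC-style search for the largest point set that moves rigidly
    (up to a similarity transform) between epochs p1 and p2."""
    rng = np.random.default_rng(rng)
    best = np.zeros(len(p1), dtype=bool)
    for _ in range(trials):
        idx = rng.choice(len(p1), size=4, replace=False)
        s, R, t = estimate_similarity(p1[idx], p2[idx])
        resid = np.linalg.norm(p2 - (s * (p1 @ R.T) + t), axis=1)
        inliers = resid < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

With noiseless synthetic data, points on the stable part of the object transfer with near-zero residual, while deformed points stand out by their large residual and are excluded from the congruent set.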

  15. A unified noise analysis for iterative image estimation.

    PubMed

    Qi, Jinyi

    2003-11-01

    Iterative image estimation methods have been widely used in emission tomography. Accurate estimation of the uncertainty of the reconstructed images is essential for quantitative applications. While both iteration-based noise analysis and fixed-point noise analysis have been developed, current iteration-based results are limited to only a few algorithms that have an explicit multiplicative update equation and some may not converge to the fixed-point result. This paper presents a theoretical noise analysis that is applicable to a wide range of preconditioned gradient-type algorithms. Under a certain condition, the proposed method does not require an explicit expression of the preconditioner. By deriving the fixed-point expression from the iteration-based result, we show that the proposed iteration-based noise analysis is consistent with fixed-point analysis. Examples in emission tomography and transmission tomography are shown. The results are validated using Monte Carlo simulations. PMID:14653559

  16. New York State Special Education Enrollment Analysis

    ERIC Educational Resources Information Center

    Lake, Robin; Gross, Betheny; Denice, Patrick

    2012-01-01

    Responding to concerns that charter schools do not provide equal access to students with disabilities, advocates in districts, states, and courts across the country have sought to improve such access. Adding to these concerns, the U.S. Government Accountability Office recently released a report showing that charter schools, on average, serve a…

  17. A Practical and Portable Solid-State Electronic Terahertz Imaging System.

    PubMed

    Smart, Ken; Du, Jia; Li, Li; Wang, David; Leslie, Keith; Ji, Fan; Li, Xiang Dong; Zeng, Da Zhang

    2016-01-01

    A practical compact solid-state terahertz imaging system is presented. Various beam guiding architectures were explored and hardware performance assessed to improve its compactness, robustness, multi-functionality and simplicity of operation. The system performance in terms of image resolution, signal-to-noise ratio, the electronic signal modulation versus optical chopper, is evaluated and discussed. The system can be conveniently switched between transmission and reflection mode according to the application. A range of imaging application scenarios was explored and images of high visual quality were obtained in both transmission and reflection mode. PMID:27110791

  19. Visualization and quantitative analysis of lung microstructure using micro CT images

    NASA Astrophysics Data System (ADS)

    Yamamoto, Tetsuo; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Fujii, Masashi; Nakaya, Yoshihiro; Matsui, Eisuke; Ohmatsu, Hironobu; Moriyama, Noriyuki

    2005-04-01

    Micro CT system is developed for lung function analysis at a high resolution of the micrometer order (up to 5 μm in spatial resolution). This system reveals the lung distal structures such as interlobular septa, terminal bronchiole, respiratory bronchiole, alveolar duct, and alveolus. In order to visualize and analyze lung 3-D microstructures using micro CT images, this research presents a computerized approach. The approach is applied to micro CT images of human lung tissue specimens that were obtained by surgical excision and were kept in the state of the inflated fixed lung. This report describes a technique for extracting wall regions, such as bronchus and alveolus walls, using a surface thinning process to analyze the lung microstructures in micro CT images measured by the new micro CT system.

  20. Pathway Analysis: State of the Art

    PubMed Central

    García-Campos, Miguel A.; Espinal-Enríquez, Jesús; Hernández-Lemus, Enrique

    2015-01-01

    Pathway analysis is a set of widely used tools for research in life sciences intended to give meaning to high-throughput biological data. The methodology of these tools rests on the gathering and use of knowledge about biomolecular functioning, coupled with statistical testing and other algorithms. Despite their wide employment, the foundations and overall background of pathway analysis may not be fully understood, leading to misinterpretation of analysis results. This review attempts to compile the fundamental knowledge to take into consideration when using pathway analysis as a hypothesis generation tool. We discuss the key elements that are part of these methodologies, their capabilities and current deficiencies. We also present an overview of current and all-time popular methods, highlighting different classes across them. In doing so, we show the exploding diversity of methods that pathway analysis encompasses, point out commonly overlooked caveats, and direct attention to a potential new class of methods that attempt to zoom the analysis scope to the sample scale. PMID:26733877
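One of the oldest classes of methods surveyed in reviews like this is over-representation analysis, which reduces to a hypergeometric tail test: given a gene universe, a pathway, and an input gene list, how surprising is the observed overlap? A minimal standard-library sketch (the gene counts passed in are invented for illustration):

```python
from math import comb

def ora_pvalue(n_universe, n_pathway, n_list, n_overlap):
    """P(X >= n_overlap) under the hypergeometric null: drawing n_list genes
    at random from a universe of n_universe genes, n_pathway of which belong
    to the pathway of interest."""
    upper = min(n_pathway, n_list)
    tail = sum(comb(n_pathway, k) * comb(n_universe - n_pathway, n_list - k)
               for k in range(n_overlap, upper + 1))
    return tail / comb(n_universe, n_list)
```

In practice this test is run once per pathway and the resulting p-values are corrected for multiple testing (e.g. with the Benjamini-Hochberg procedure), one of the commonly overlooked caveats the review alludes to.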

  1. 2D wavelet-analysis-based calibration technique for flat-panel imaging detectors: application in cone beam volume CT

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Ning, Ruola; Yu, Rongfeng; Conover, David L.

    1999-05-01

    The application of the newly developed flat panel x-ray imaging detector in cone beam volume CT has attracted increasing interest recently. Due to an imperfect solid state array manufacturing process, however, defective elements, gain non-uniformity and offset image unavoidably exist in all kinds of flat panel x-ray imaging detectors, causing severe streak and ring artifacts in cone beam reconstruction images and severely degrading image quality. A calibration technique is presented in this paper in which the artifacts resulting from the defective elements, gain non-uniformity and offset image can be reduced significantly. The detection of defective elements is distinctively based upon two-dimensional (2D) wavelet analysis. Because of its inherent localizability in recognizing singularities or discontinuities, wavelet analysis possesses the capability of detecting defective elements over a rather large x-ray exposure range, e.g., 20% to approximately 60% of the dynamic range of the detector used. Three-dimensional (3D) images of a low-contrast CT phantom have been reconstructed from projection images acquired by a flat panel x-ray imaging detector with and without the calibration process applied. The artifacts caused individually by defective elements, gain non-uniformity and offset image have been separated and investigated in detail, and their mutual correlations have also been examined explicitly. The investigation is supported by quantitative analysis of the signal-to-noise ratio (SNR) and the image uniformity of the cone beam reconstruction image. It has been demonstrated that the ring and streak artifacts resulting from the imperfect performance of a flat panel x-ray imaging detector can be reduced dramatically, and that image qualities of the cone beam reconstruction image, such as contrast resolution and image uniformity, are improved significantly. Furthermore, with little modification, the calibration technique presented here is also applicable
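The core idea of wavelet-based defect detection is that a defective element shows up as a localized singularity with large detail coefficients. The record does not name the wavelet used, so the sketch below substitutes the simplest choice, a single-level 2D Haar transform, and flags 2x2 blocks whose detail energy exceeds a threshold; the threshold and flat-field values are illustrative only.

```python
def haar_detail_defects(img, thresh):
    """One-level 2-D Haar transform over 2x2 blocks; returns (row, col) of
    blocks whose combined detail energy exceeds thresh (defect candidates)."""
    flags = []
    for y in range(0, len(img) - 1, 2):
        for x in range(0, len(img[0]) - 1, 2):
            a, b = img[y][x], img[y][x + 1]
            c, d = img[y + 1][x], img[y + 1][x + 1]
            lh = (a + b - c - d) / 4.0   # horizontal detail
            hl = (a - b + c - d) / 4.0   # vertical detail
            hh = (a - b - c + d) / 4.0   # diagonal detail
            if lh * lh + hl * hl + hh * hh > thresh:
                flags.append((y, x))
    return flags
```

On a flat-field exposure, a stuck pixel dominates the detail energy of its block, which is why a single threshold works across a wide exposure range.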

  2. Terahertz spectroscopy and imaging for cultural heritage management: state of art and perspectives

    NASA Astrophysics Data System (ADS)

    Catapano, Ilaria; Soldovieri, Francesco

    2014-05-01

    Non-invasive diagnostic tools able to provide information on the materials and preservation state of artworks are crucial to help conservators, archaeologists and anthropologists plan and carry out their tasks properly. In this frame, technological solutions exploiting Terahertz (THz) radiation, i.e., working at frequencies ranging from 0.1 to 10 THz, are currently attracting considerable attention as complementary techniques to classical analysis methodologies based on electromagnetic radiation from X-rays to mid infrared [1]. The main advantage offered by THz spectroscopy and imaging systems lies in their capability of providing information useful to determine the construction method, life history and conservation state of artworks, as well as to identify previous restoration actions [1,2]. In particular, unlike mid- and near-infrared spectroscopy, which provides fingerprint absorption spectra depending on intramolecular behavior, THz spectroscopy is related to the structure of the molecules of the investigated object. Hence, it can discriminate, for instance, the different materials mixed in a paint [1,2]. Moreover, THz radiation is able to penetrate several materials that are opaque at both visible and infrared wavelengths, such as varnish, paint, plaster, paper, wood and plastic. Accordingly, it is useful to detect hidden objects and characterize the inner structure of the artwork under test, even in the depth direction, while avoiding core drillings. As a result, THz systems allow us to discriminate different layers of materials present in artworks like paintings, to obtain images providing information on the construction technique, and to discover risk factors affecting the preservation state, such as non-visible cracks, hidden molds and air gaps between the paint layer and the underlying structure. Furthermore, because THz radiation is non-ionizing, THz systems offer the nontrivial benefit of negligible long term risks to the

  3. Region-based Statistical Analysis of 2D PAGE Images

    PubMed Central

    Li, Feng; Seillier-Moiseiwitsch, Françoise; Korostyshevskiy, Valeriy R.

    2011-01-01

    A new comprehensive procedure for statistical analysis of two-dimensional polyacrylamide gel electrophoresis (2D PAGE) images is proposed, including protein region quantification, normalization and statistical analysis. Protein regions are defined by the master watershed map that is obtained from the mean gel. By working with these protein regions, the approach bypasses the current bottleneck in the analysis of 2D PAGE images: it does not require spot matching. Background correction is implemented in each protein region by local segmentation. Two-dimensional locally weighted smoothing (LOESS) is proposed to remove any systematic bias after quantification of protein regions. Proteins are separated into mutually independent sets based on detected correlations, and a multivariate analysis is used on each set to detect the group effect. A strategy for multiple hypothesis testing based on this multivariate approach combined with the usual Benjamini-Hochberg FDR procedure is formulated and applied to the differential analysis of 2D PAGE images. Each step in the analytical protocol is illustrated using an actual dataset. The effectiveness of the proposed methodology is demonstrated using simulated gels in comparison with the commercial software packages PDQuest and Dymension. We also introduce a new procedure for simulating gel images. PMID:21850152
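The final multiple-testing step this record mentions, the Benjamini-Hochberg FDR procedure, is compact enough to sketch from the standard library alone: sort the p-values, find the largest rank k with p(k) <= k * alpha / m, and reject the k smallest. This is the generic step-up procedure, not the authors' specific combination with their multivariate test.

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return a reject (True) / accept (False) flag per p-value at FDR level
    alpha, using the Benjamini-Hochberg step-up procedure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):     # largest rank passing p <= rank*alpha/m
        if pvals[i] <= rank * alpha / m:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):     # reject the k_max smallest p-values
        reject[i] = rank <= k_max
    return reject
```

Note the step-up behavior: a p-value that fails its own threshold is still rejected if a larger p-value further down the sorted list passes.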

  4. Development of an image-analysis light-scattering technique

    NASA Astrophysics Data System (ADS)

    Algarni, Saad; Kashuri, Hektor; Iannacchione, Germano

    2013-03-01

    We describe progress in developing a versatile image-analysis approach for a light-scattering experiment. Recent advances in image analysis algorithms, computational power, and CCD image capture have allowed for the complete digital recording of the scattering of coherent laser light by a wide variety of samples. This digital record can then yield both static and dynamic information about the scattering events. Our approach is described using a very simple and inexpensive experimental arrangement for liquid samples. Calibration experiments were performed on aqueous suspensions of latex spheres having 0.5 and 1.0 micrometer diameter for three concentrations of 2 × 10^-6, 1 × 10^-6, and 5 × 10^-7 % w/w at room temperature. The resulting data span a wave-vector range of q = 10^2 to 10^5 cm^-1 and time averages over 0.05 to 1200 sec. The static analysis yields particle sizes in good agreement with expectations, and a simple dynamic analysis yields an estimate of the characteristic time scale of the particle dynamics. Further developments in image corrections (laser stability, vibration, curvature, etc.) as well as time auto-correlation analysis will also be discussed.
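The time auto-correlation analysis mentioned at the end is, in dynamic light scattering, conventionally the normalized intensity autocorrelation g2(tau) = <I(t) I(t+tau)> / <I>^2, whose decay time gives the characteristic time scale of the particle dynamics. A minimal standard-library sketch (the record does not give the authors' estimator, so this is the textbook time-averaged form):

```python
def g2(intensity, max_lag):
    """Normalized intensity autocorrelation g2(tau) for lags 0..max_lag,
    computed as a time average over a 1-D intensity trace."""
    n = len(intensity)
    mean_sq = (sum(intensity) / n) ** 2
    out = []
    for lag in range(max_lag + 1):
        prods = [intensity[t] * intensity[t + lag] for t in range(n - lag)]
        out.append(sum(prods) / len(prods) / mean_sq)
    return out
```

For a fluctuating signal g2(0) exceeds 1 and decays toward 1 at long lags; a perfectly constant trace gives g2 = 1 at every lag.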

  5. Photoacoustic Image Analysis for Cancer Detection and Building a Novel Ultrasound Imaging System

    NASA Astrophysics Data System (ADS)

    Sinha, Saugata

    Photoacoustic (PA) imaging is a rapidly emerging non-invasive soft tissue imaging modality which has the potential to detect tissue abnormality at an early stage. Photoacoustic images map the spatially varying optical absorption property of tissue. In multiwavelength photoacoustic imaging, the soft tissue is imaged with different wavelengths, tuned to the absorption peaks of the specific light-absorbing tissue constituents or chromophores, to obtain images with different contrasts of the same tissue sample. From those images, the spatially varying concentration of the chromophores can be recovered. As multiwavelength PA images can provide important physiological information related to function and molecular composition of the tissue, they can be used for diagnosis of cancer lesions and differentiation of malignant tumors from benign tumors. In this research, a number of parameters have been extracted from multiwavelength 3D PA images of freshly excised human prostate and thyroid specimens, imaged at five different wavelengths. Using marked histology slides as ground truth, regions of interest (ROI) corresponding to cancer, benign and normal regions have been identified in the PA images. The extracted parameters belong to different categories, namely chromophore concentration, frequency parameters and PA image pixels, and they represent different physiological and optical properties of the tissue specimens. Statistical analysis has been performed to test whether the extracted parameters are significantly different between cancer, benign and normal regions. A multidimensional (29-dimensional) feature set, built with the extracted parameters from the 3D PA images, has been divided randomly into training and testing sets. The training set has been used to train support vector machine (SVM) and neural network (NN) classifiers, while the performance of the classifiers in differentiating different tissue pathologies has been determined by the testing dataset. Using the NN

  6. Visualization and quantitative analysis of lung microstructure using micro CT images

    NASA Astrophysics Data System (ADS)

    Yamamoto, Tetsuo; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Matsui, Eisuke; Ohamatsu, Hironobu; Moriyama, Noriyuki

    2004-04-01

    Micro CT system is developed for lung function analysis at a high resolution of the micrometer order (up to 5 μm in spatial resolution). This system reveals the lung distal structures such as interlobular septa, terminal bronchiole, respiratory bronchiole, alveolar duct, and alveolus. In order to visualize lung 3-D microstructures using micro CT images and to analyze them, this research presents a computerized approach. In this approach, the following are performed: (1) extracting lung distal structures from micro CT images, (2) visualizing the extracted lung microstructure in three dimensions, and (3) visualizing the inside of the lung distal area in three dimensions with fly-through. This approach is applied to micro CT images of human lung tissue specimens that were obtained by surgical excision and were kept in the state of the inflated fixed lung. This research succeeded in visualizing lung microstructures using micro CT images, revealing the lung distal structures from bronchiole up to alveolus.

  7. Kepler mission exoplanet transit data analysis using fractal imaging

    NASA Astrophysics Data System (ADS)

    Dehipawala, S.; Tremberger, G.; Majid, Y.; Holden, T.; Lieberman, D.; Cheung, T.

    2012-10-01

    The Kepler mission is designed to survey a fist-sized patch of the sky within the Milky Way galaxy for the discovery of exoplanets, with emphasis on near Earth-size exoplanets in or near the habitable zone. The Kepler space telescope would detect the brightness fluctuation of a host star and extract periodic dimming in the lightcurve caused by exoplanets that cross in front of their host star. The photometric data of a host star could be interpreted as an image where fractal imaging would be applicable. Fractal analysis could elucidate the incomplete data limitation posed by the data integration window. The fractal dimension difference between the lower and upper halves of the image could be used to identify anomalies associated with transits and stellar activity as the buried signals are expected to be in the lower half of such an image. Using an image fractal dimension resolution of 0.04 and defining the whole image fractal dimension as the Chi-square expected value of the fractal dimension, a p-value can be computed and used to establish a numerical threshold for decision making that may be useful in further studies of lightcurves of stars with candidate exoplanets. Similar fractal dimension difference approaches would be applicable to the study of photometric time series data via the Higuchi method. The correlated randomness of the brightness data series could be used to support inferences based on image fractal dimension differences. Fractal compression techniques could be used to transform a lightcurve image, resulting in a new image with a new fractal dimension value, but this method has been found to be ineffective for images with high information capacity. The three studied criteria could be used together to further constrain the Kepler list of candidate lightcurves of stars with possible exoplanets that may be planned for ground-based telescope confirmation.
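The Higuchi method named in this record estimates a fractal dimension directly from a 1-D time series such as a photometric lightcurve: the curve length L(k) is measured at a range of coarse-graining scales k, and the dimension is the negative slope of log L(k) versus log k. A minimal standard-library sketch (kmax and the test series are arbitrary choices, not the authors' settings):

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D series: ~1 for smooth trends,
    approaching 2 for noise-like series."""
    n = len(x)
    log_k, log_L = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                      # k interleaved sub-series
            pts = x[m::k]
            if len(pts) < 2:
                continue
            curve = sum(abs(pts[i + 1] - pts[i]) for i in range(len(pts) - 1))
            # Higuchi normalization of the sub-series curve length
            lengths.append(curve * (n - 1) / ((len(pts) - 1) * k) / k)
        log_k.append(math.log(k))
        log_L.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of log L(k) vs log k; FD = -slope
    kbar = sum(log_k) / len(log_k)
    Lbar = sum(log_L) / len(log_L)
    slope = (sum((a - kbar) * (b - Lbar) for a, b in zip(log_k, log_L))
             / sum((a - kbar) ** 2 for a in log_k))
    return -slope
```

A linear ramp, the smoothest possible series, yields L(k) proportional to 1/k and hence a dimension of exactly 1, which makes a convenient sanity check.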

  8. LANDSAT-4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E. (Principal Investigator)

    1983-01-01

    Analysis during the quarter was carried out on geometric, radiometric, and information content aspects of both MSS and thematic mapper (TM) data. Test sites in Webster County, Iowa, in Chicago, IL, and near Joliet, IL were studied. Band-to-band registration was evaluated: TM Bands 5 and 7 were found to be approximately 0.5 pixel out of registration with Bands 1, 2, 3, and 4, and the thermal band was found to be misregistered by four 30 m pixels to the east and one pixel to the south. Certain MSS bands indicated nominally 0.25 pixel misregistration. Radiometrically, some striping was observed in TM bands, and significant oscillatory noise patterns exist in MSS data, possibly due to jitter. Information content was compared before and after cubic convolution resampling, and no differences were observed in statistics or separability of basic scene classes.

  9. Cascaded image analysis for dynamic crack detection in material testing

    NASA Astrophysics Data System (ADS)

    Hampel, U.; Maas, H.-G.

    Concrete specimens in civil engineering material testing often develop fissures or hairline cracks. These cracks develop dynamically. Starting at a width of a few microns, they usually cannot be detected visually or in an image of a camera imaging the whole specimen. Conventional image analysis techniques will detect fissures only if they show a width in the order of one pixel. To be able to detect and measure fissures with a width of a fraction of a pixel at an early stage of their development, a cascaded image analysis approach has been developed, implemented and tested. The basic idea of the approach is to detect discontinuities in dense surface deformation vector fields. These deformation vector fields between consecutive stereo image pairs, which are generated by cross correlation or least squares matching, show a precision in the order of 1/50 pixel. Hairline cracks can be detected and measured by applying edge detection techniques such as a Sobel operator to the results of the image matching process. Cracks will show up as linear discontinuities in the deformation vector field and can be vectorized by edge chaining. In practical tests of the method, cracks with a width of 1/20 pixel could be detected, and their width could be determined at a precision of 1/50 pixel.
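The detection step this record describes applies a Sobel operator not to the images themselves but to the deformation field, where a crack appears as a step discontinuity. A minimal standard-library sketch of the Sobel gradient magnitude, applied to a synthetic deformation component with an invented step (the synthetic field is for illustration only):

```python
def sobel_magnitude(f):
    """Gradient magnitude of a 2-D field (list of rows) via 3x3 Sobel kernels.
    Border pixels are left at 0."""
    h, w = len(f), len(f[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (f[y-1][x+1] + 2 * f[y][x+1] + f[y+1][x+1]
                  - f[y-1][x-1] - 2 * f[y][x-1] - f[y+1][x-1])
            gy = (f[y+1][x-1] + 2 * f[y+1][x] + f[y+1][x+1]
                  - f[y-1][x-1] - 2 * f[y-1][x] - f[y-1][x+1])
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Because the matching step delivers deformations at roughly 1/50 pixel precision, even a sub-pixel displacement jump across a crack produces a clear ridge in this gradient magnitude, which can then be chained into a crack polyline.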

  10. The analysis of image feature robustness using CometCloud

    PubMed Central

    Qi, Xin; Kim, Hyunjoo; Xing, Fuyong; Parashar, Manish; Foran, David J.; Yang, Lin

    2012-01-01

    The robustness of image features is a very important consideration in quantitative image analysis. The objective of this paper is to investigate the robustness of a range of image texture features using hematoxylin-stained breast tissue microarray slides, which are assessed while simulating different imaging challenges: out-of-focus imaging, changes in magnification, and variations in illumination, noise, compression, distortion, and rotation. We employed five texture analysis methods and tested them while introducing all of the challenges listed above. The texture features that were evaluated include co-occurrence matrix, center-symmetric auto-correlation, texture feature coding method, local binary pattern, and texton. Due to the independence of each transformation and texture descriptor, a network structured combination was proposed and deployed on the Rutgers private cloud. The experiments utilized 20 randomly selected tissue microarray cores. All the combinations of the image transformations and deformations were calculated, and the whole feature extraction procedure was completed in 70 minutes using a cloud equipped with 20 nodes. Center-symmetric auto-correlation outperforms all the other four texture descriptors but also requires the longest computational time; it is roughly 10 times slower than local binary pattern and texton. From a speed perspective, both the local binary pattern and texton features provided excellent performance for classification and content-based image retrieval. PMID:23248759
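Of the five descriptors evaluated, the local binary pattern is the simplest to sketch: each pixel is encoded by comparing its 8 neighbors against the center, and the image is summarized by the histogram of the resulting 8-bit codes. The basic 3x3 variant below is an illustration, not necessarily the exact LBP variant the paper benchmarks (rotation-invariant and multi-radius variants exist).

```python
def lbp_code(img, y, x):
    """3x3 local binary pattern: one bit per neighbor (clockwise from the
    top-left), set when the neighbor is strictly brighter than the center."""
    c = img[y][x]
    nbrs = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
            img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    code = 0
    for bit, v in enumerate(nbrs):
        if v > c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

The histogram, rather than the raw codes, is what serves as the texture feature vector; its cheap bit-level computation is why LBP lands near the fast end of the paper's speed comparison.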

  11. Unsupervised change detection in satellite images using fuzzy c-means clustering and principal component analysis

    NASA Astrophysics Data System (ADS)

    Kesikoğlu, M. H.; Atasever, Ü. H.; Özkan, C.

    2013-10-01

    Change detection is the process of identifying changes in the state of natural features or objects, or of quantifying temporal effects, from observations made at different times using multitemporal data sets. Many change detection techniques are reported in the literature; they can be grouped into two main categories, supervised and unsupervised. In this study, the aim is to identify the land cover changes occurring in a specific area of Kayseri with unsupervised change detection techniques, using Landsat satellite images of different years obtained by remote sensing. After image enhancement, the image differencing method is applied to the images, and the method of Principal Component Analysis is then applied to the resulting difference image. To separate the areas that have changed from those that have not, the image is partitioned into two groups by the Fuzzy C-Means clustering method. To achieve this, image-to-image registration is performed first, so that the images are co-registered. The gray-scale difference image is then partitioned into 3 × 3 non-overlapping blocks. With principal component analysis, an eigenvector space is obtained, from which the principal components are derived. Finally, the feature vector space formed by the principal components is partitioned into two clusters using Fuzzy C-Means clustering, completing the change detection process.
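A compact sketch of this pipeline (difference image, PCA over 3 × 3 blocks, then two-cluster fuzzy c-means) follows. The block size, the number of retained components, the fuzzy exponent and the deterministic cluster initialization are choices made here for illustration, not taken from the paper:

```python
import numpy as np

def change_map(img1, img2, h=3, m=2.0, iters=50):
    """Difference image -> PCA over h x h blocks -> 2-cluster fuzzy c-means."""
    diff = np.abs(img1.astype(float) - img2.astype(float))
    R, C = diff.shape
    H, W = (R // h) * h, (C // h) * h
    # Eigenvector space from the non-overlapping h x h blocks.
    blocks = diff[:H, :W].reshape(H // h, h, W // h, h).swapaxes(1, 2).reshape(-1, h * h)
    _, vecs = np.linalg.eigh(np.cov(blocks.T))
    # Feature vector per pixel: its h x h neighbourhood projected onto the
    # leading principal components (np.linalg.eigh returns them last).
    pad = np.pad(diff, h // 2, mode='edge')
    neigh = np.stack([pad[i:i + R, j:j + C] for i in range(h) for j in range(h)], axis=-1)
    X = (neigh @ vecs[:, -3:]).reshape(-1, 3)
    # Minimal fuzzy c-means with c = 2 (changed / unchanged); deterministic
    # init from the lowest- and highest-norm feature vectors.
    norms = np.linalg.norm(X, axis=1)
    centers = np.stack([X[norms.argmin()], X[norms.argmax()]])
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=-1) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)   # fuzzy memberships
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return u.argmax(axis=1).reshape(R, C)

# Toy example: a 10 x 10 patch of "land cover change".
img1 = np.full((30, 30), 10.0)
img2 = img1.copy()
img2[5:15, 5:15] += 50.0
cmap = change_map(img1, img2)
print(cmap[8, 8] != cmap[25, 25])   # True: changed vs. unchanged pixel
```

Real multitemporal Landsat scenes would of course need co-registration and radiometric correction before the difference image is formed, as the abstract notes.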

  12. Proteome Analysis of Ground State Pluripotency

    PubMed Central

    Taleahmad, Sara; Mirzaei, Mehdi; Parker, Lindsay M.; Hassani, Seyedeh-Nafiseh; Mollamohammadi, Sepideh; Sharifi-Zarchi, Ali; Haynes, Paul A.; Baharvand, Hossein; Salekdeh, Ghasem Hosseini

    2015-01-01

    The differentiation potential of pluripotent embryonic stem cells (ESCs) can be manipulated via serum and medium conditions for direct cellular development or to maintain a naïve ground state. The self-renewal state of ESCs can thus be induced by adding inhibitors of mitogen-activated protein kinase (MAPK) and glycogen synthase kinase-3 (Gsk3), known as the two-inhibitor (2i) treatment. We have used a shotgun proteomics approach to investigate differences in protein expression between 2i- and serum-grown mESCs. The results indicated that 164 proteins were significantly upregulated and 107 proteins downregulated in 2i-grown cells compared to serum. Protein pathways in 2i-grown cells with the highest enrichment were associated with glycolysis and gluconeogenesis. Protein pathways related to organ development were downregulated in 2i-grown cells. In serum-grown ESCs, protein pathways involved in integrin and focal adhesion, and signaling proteins involved in regulation of the actin cytoskeleton, were enriched. We observed a number of nuclear proteins, mostly involved in self-renewal maintenance, that were expressed at higher levels in 2i compared to serum: Dnmt1, Map2k1, Parp1, Xpo4, Eif3g, Smarca4/Brg1 and Smarcc1/Baf155. Collectively, the results provide insight into the key protein pathways used by ESCs in the ground state or metastable conditions through 2i or serum culture medium, respectively. PMID:26671762

  13. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    PubMed

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods for electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with a stepwise increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)); and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index; and (e) ventilation delay in mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction therefore do not influence the selected indices derived from EIT image analysis. Indices validated on images from one reconstruction algorithm are also valid for the other reconstruction algorithms. PMID:24845059
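The statistical comparison reported here can be sketched with `scipy.stats.kruskal`. The index values below are invented: the algorithm effect is made much smaller than the patient-to-patient spread, mirroring the reported finding of no significant difference:

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient values of one index (say, the global
# inhomogeneity index) under the four reconstruction algorithms.
rng = np.random.default_rng(1)
base = rng.uniform(0.4, 0.7, 8)                    # 8 patients
bp, dr, gr_c, gr_t = (base + rng.normal(0.0, 0.02, 8) for _ in range(4))

# Kruskal-Wallis H-test across the four algorithm groups.
h_stat, p = stats.kruskal(bp, dr, gr_c, gr_t)
print(round(p, 3))   # a large p-value indicates no detectable algorithm effect
```

With real data, one index would be compared across the four reconstructions in exactly this way, once per index.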

  14. Computer-aided photometric analysis of dynamic digital bioluminescent images

    NASA Astrophysics Data System (ADS)

    Gorski, Zbigniew; Bembnista, T.; Floryszak-Wieczorek, J.; Domanski, Marek; Slawinski, Janusz

    2003-04-01

    The paper deals with the photometric and morphologic analysis of bioluminescent images obtained by registering light radiated directly from plant objects. The subject of this work is the registration of images from ultra-weak light sources by the single photon counting (SPC) technique. The radiation is registered with a 16-bit charge-coupled device (CCD) camera ("Night Owl") together with WinLight EG&G Berthold software. Additional application-specific software has been developed in order to deal with objects that change during the exposure time. The advantages of the elaborated set of easily configurable tools, named FCT, for computer-aided photometric and morphologic analysis of numerous series of quantitatively imperfect chemiluminescent images are described. Instructions on how to use these tools are given and exemplified with several algorithms for the transformation of an image library. Using the proposed FCT set, automatic photometric and morphologic analysis reveals the information hidden within series of chemiluminescent images reflecting defensive processes in poinsettia (Euphorbia pulcherrima Willd) leaves affected by the pathogenic fungus Botrytis cinerea.

  15. Automated analysis of image mammogram for breast cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Nurhasanah; Sampurno, Joko; Faryuni, Irfana Diah; Ivansyah, Okto

    2016-03-01

    Medical imaging helps doctors diagnose and detect diseases inside the body without surgery. A mammogram is a medical image of the inner breast. Diagnosis of breast cancer needs to be done in detail and as soon as possible to determine the next medical treatment. The aim of this work is to increase the objectivity of clinical diagnosis by using fractal analysis. This study applies a fractal method based on 2D Fourier analysis to determine the density of normal and abnormal tissue, and applies a segmentation technique based on the K-Means clustering algorithm to abnormal images to determine organ boundaries and calculate the area of the segmented organs. The results show that the fractal method based on 2D Fourier analysis can distinguish between normal and abnormal breast tissue, and that segmentation with the K-Means clustering algorithm is able to generate the boundaries of normal and abnormal tissue organs, so the area of the abnormal tissue can be determined.
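The K-Means segmentation step can be sketched as follows on a synthetic mammogram-like image; the intensity levels, region sizes and three-cluster choice are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic image: background, "normal" tissue, and a brighter region
# standing in for abnormal tissue.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = 0.5                    # "normal" tissue
img[24:40, 24:40] = 0.9                    # brighter "abnormal" region
img += rng.normal(0.0, 0.02, img.shape)

# K-Means on pixel intensities; labels reshaped back to image form.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    img.reshape(-1, 1)).reshape(img.shape)

# Identify the brightest cluster and measure its area in pixels.
bright = np.argmax([img[labels == k].mean() for k in range(3)])
area = int((labels == bright).sum())
print(area)   # 256 (= 16 * 16) pixels
```

On a real mammogram, the cluster boundary would serve as the organ/tissue boundary whose enclosed area is reported.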

  16. Methods for spectral image analysis by exploiting spatial simplicity

    DOEpatents

    Keenan, Michael R.

    2010-05-25

    Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.

  17. Methods for spectral image analysis by exploiting spatial simplicity

    DOEpatents

    Keenan, Michael R.

    2010-11-23

    Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.
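The general factor-model setting of these two patents can be illustrated with a standard non-negative factorization; this is not the patented algorithm, just a sketch of factoring a spatially "simple" spectral image (every pixel contains exactly one of two species) into spectra and abundances. All spectra, the phase map, and the noise level are invented:

```python
import numpy as np
from sklearn.decomposition import NMF

# Two Gaussian-shaped component spectra over 32 channels.
rng = np.random.default_rng(0)
chan = np.arange(32)
s1 = np.exp(-0.5 * ((chan - 8) / 3.0) ** 2)     # species-1 spectrum
s2 = np.exp(-0.5 * ((chan - 22) / 3.0) ** 2)    # species-2 spectrum

# Spatially simple sample: a checkerboard of discrete chemical phases,
# so each pixel has a non-zero abundance for only one species.
phase = (np.indices((16, 16)).sum(0) % 2 == 0).ravel()
abund_true = np.stack([phase, ~phase], axis=1).astype(float)   # (256, 2)
data = abund_true @ np.stack([s1, s2]) + rng.uniform(0, 0.01, (256, 32))

# Factor the spectral image into abundances A and spectra S (data ~ A @ S).
model = NMF(n_components=2, init='nndsvda', random_state=0, max_iter=500)
A = model.fit_transform(data)    # abundance maps, one row per pixel
S = model.components_            # recovered component spectra
err = np.linalg.norm(data - A @ S) / np.linalg.norm(data)
print(A.shape, S.shape, err < 0.05)
```

The patents' contribution is to add spatial-simplicity constraints on top of such a factor model so that the recovered components stay physically interpretable.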

  18. Quantitative Medical Image Analysis for Clinical Development of Therapeutics

    NASA Astrophysics Data System (ADS)

    Analoui, Mostafa

    There has been significant progress in the development of therapeutics for the prevention and management of several disease areas in recent years, leading to increased average life expectancy, as well as quality of life, globally. However, due to the complexity of addressing a number of medical needs and the financial burden of developing new classes of therapeutics, there is a need for better tools for decision making and for validation of the efficacy and safety of new compounds. Numerous biological markers (biomarkers) have been proposed, either as adjuncts to current clinical endpoints or as surrogates. Imaging biomarkers are among the most rapidly growing of these, being examined to expedite effective and rational drug development. Clinical imaging often involves a complex set of multi-modality data sets that require rapid and objective analysis, independent of reviewer bias and training. In this chapter, an overview of imaging biomarkers for drug development is offered, along with the challenges that necessitate quantitative and objective image analysis. Examples of automated and semi-automated analysis approaches are provided, along with a technical review of such methods. These examples include the use of 3D MRI for osteoarthritis, ultrasound vascular imaging, and dynamic contrast-enhanced MRI for oncology. Additionally, a brief overview of regulatory requirements is discussed. In conclusion, this chapter highlights key challenges and future directions in this area.

  19. RGB calibration for color image analysis in machine vision.

    PubMed

    Chang, Y C; Reid, J F

    1996-01-01

    A color calibration method for correcting the variations in RGB color values caused by vision system components was developed and tested in this study. The calibration scheme concentrated on comprehensively estimating and removing the RGB errors without specifying individual error sources and their effects. The algorithm for color calibration was based on the use of a standardized color chart and was developed as a preprocessing tool for color image analysis. According to the theory of image formation, RGB errors in color images were categorized into multiplicative and additive errors. Multiplicative and additive errors contain various error sources: gray-level shift, variation in amplification and quantization in the camera electronics or frame grabber, the change of color temperature of the illumination with time, and related factors. The RGB errors of arbitrary colors in an image were estimated from the RGB errors of standard colors contained in the image. The color calibration method also contained an algorithm for correcting nonuniformity of illumination in the scene. The algorithm was tested under two conditions: uniform and nonuniform illumination of the scene. The RGB errors of arbitrary colors in test images were almost completely removed after color calibration. The maximum residual error was seven gray levels under uniform illumination and 12 gray levels under nonuniform illumination. Most residual RGB errors were caused by residual nonuniformity of illumination in the images. The test results showed that the developed method was effective in correcting the variations in RGB color values caused by vision system components. PMID:18290059
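The multiplicative/additive error model described above can be sketched with a per-channel linear fit against a standard chart. The chart values, gains and offsets below are invented, and the fit is a plain least-squares stand-in for the paper's calibration procedure:

```python
import numpy as np

# Per-channel error model: measured = gain * true + offset.
rng = np.random.default_rng(0)
true_chart = rng.uniform(30.0, 220.0, (24, 3))   # reference RGB, 24 patches
gain = np.array([1.10, 0.95, 1.05])              # multiplicative errors
offset = np.array([8.0, -5.0, 3.0])              # additive errors
measured = true_chart * gain + offset + rng.normal(0.0, 1.0, true_chart.shape)

# Least-squares fit of (gain, offset) per channel from the chart patches.
fits = np.array([np.polyfit(true_chart[:, c], measured[:, c], 1) for c in range(3)])
g_est, o_est = fits[:, 0], fits[:, 1]

def calibrate(rgb):
    """Invert the estimated multiplicative/additive error model."""
    return (rgb - o_est) / g_est

# An arbitrary color is corrected using the chart-derived estimates.
pixel = np.array([100.0, 150.0, 50.0])
print(np.round(calibrate(pixel * gain + offset)))   # approximately [100. 150. 50.]
```

A scene-illumination correction, as in the paper, would additionally make `g_est` and `o_est` vary spatially rather than being single per-channel constants.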

  20. Non-Harmonic Analysis Applied to Optical Coherence Tomography Imaging

    NASA Astrophysics Data System (ADS)

    Cao, Xu; Uchida, Tetsuya; Hirobayashi, Shigeki; Chong, Changho; Morosawa, Atsushi; Totsuka, Koki; Suzuki, Takuya

    2012-02-01

    A new processing technique called non-harmonic analysis (NHA) is proposed for optical coherence tomography (OCT) imaging. Conventional Fourier-domain OCT employs the discrete Fourier transform (DFT), which depends on the window function and length. The axial resolution of an OCT image calculated using the DFT is inversely proportional to the full width at half maximum (FWHM) of the wavelength range. The FWHM of the wavelength range is limited by the sweeping range of the source in swept-source OCT and by the number of CCD pixels in spectral-domain OCT. The NHA process, however, does not have such constraints; NHA can resolve high frequencies irrespective of the window function and the frame length of the sampled data. In this study, the NHA process is described, applied to OCT imaging, and compared with OCT images based on the DFT. To demonstrate the benefits of using NHA for OCT, we perform OCT imaging of an onion skin with NHA. The results reveal that NHA can achieve an image resolution equivalent to that of a 100-nm sweep range while using a significantly reduced wavelength range. They also reveal the potential of this technique to achieve high-resolution imaging without using a broadband source. However, the long calculation times required for NHA must be addressed if it is to be used in clinical applications.
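The core idea that frequencies need not snap to the DFT grid can be sketched with a least-squares frequency search. This is a simplified stand-in for NHA, with an invented signal; it also hints at the computational cost the abstract mentions, since every candidate frequency requires its own fit:

```python
import numpy as np

# Short record: a 64-sample DFT would only resolve frequencies on a
# 1/64 cycles-per-sample grid.
N = 64
t = np.arange(N)
f_true = 0.1037                      # cycles/sample, off the DFT grid
x = np.cos(2 * np.pi * f_true * t + 0.7)

def residual_power(f):
    """Residual after the best-fit sinusoid at frequency f (linear LSQ
    over amplitude and phase via a cos/sin basis)."""
    A = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return np.sum((x - A @ coef) ** 2)

# Search a fine continuous frequency grid, far denser than the DFT bins.
freqs = np.linspace(0.08, 0.12, 4001)
f_est = freqs[np.argmin([residual_power(f) for f in freqs])]
print(round(f_est, 4))   # 0.1037 -- much finer than the 1/64 DFT spacing
```

In OCT terms, resolving such off-grid frequencies is what lets NHA recover fine axial structure from a short (narrow-band) spectral record.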