Yaniv, Ziv; Lowekamp, Bradley C; Johnson, Hans J; Beare, Richard
2018-06-01
Modern scientific endeavors increasingly require team collaborations to construct and interpret complex computational workflows. This work describes an image-analysis environment that supports the use of computational tools that facilitate reproducible research and support scientists with varying levels of software development skills. The Jupyter notebook web application is the basis of an environment that enables flexible, well-documented, and reproducible workflows via literate programming. Image-analysis software development is made accessible to scientists with varying levels of programming experience via the use of the SimpleITK toolkit, a simplified interface to the Insight Segmentation and Registration Toolkit. Additional features of the development environment include user-friendly data sharing using online data repositories and a testing framework that facilitates code maintenance. SimpleITK provides a large number of examples illustrating educational and research-oriented image analysis workflows for free download from GitHub under an Apache 2.0 license: github.com/InsightSoftwareConsortium/SimpleITK-Notebooks .
On an image reconstruction method for ECT
NASA Astrophysics Data System (ADS)
Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro
2007-04-01
An image produced by eddy current testing (ECT) is a blurred version of the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. The method is based on the assumption that the relationship between the measured data and the source can be described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method based on deconvolution. In this method, the point spread function (PSF) and line spread function (LSF) play a key role in the deconvolution processing. This study proposes a simple data-processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. To verify its validity, ECT data for a SUS316 plate (200 x 200 x 10 mm) with an artificial machined hole and a notch flaw were acquired by differential coil type sensors (produced by ZETEC Inc.). These data were analyzed by the proposed method. The proposed method restored a sharp image of discrete multiple holes from data in which the signals of the multiple holes interfered, and the estimated width of the line flaw was much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws has been demonstrated by many results in which a much finer image than the original was reconstructed.
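The deconvolution idea can be sketched in one dimension. The following is an illustrative Wiener-style regularized deconvolution on synthetic data (a box-shaped "flaw" blurred by a Gaussian LSF), not the authors' implementation; the flaw width, LSF width, and regularization constant are all assumed for the example.

```python
import numpy as np

# Hypothetical 1D illustration: a line-flaw profile blurred by a Gaussian LSF,
# then restored by regularized (Wiener-style) deconvolution.
n = 256
x = np.arange(n)
flaw = np.zeros(n)
flaw[100:110] = 1.0               # true flaw: a 10-sample-wide notch
lsf = np.exp(-0.5 * ((x - n // 2) / 6.0) ** 2)
lsf /= lsf.sum()                  # normalized line spread function

# Measured data = convolution of flaw shape with the LSF (circular, via FFT)
measured = np.real(np.fft.ifft(np.fft.fft(flaw) * np.fft.fft(np.fft.ifftshift(lsf))))

# Deconvolution with a small regularization term to avoid dividing by ~0
H = np.fft.fft(np.fft.ifftshift(lsf))
eps = 1e-3
restored = np.real(np.fft.ifft(np.fft.fft(measured) * np.conj(H) / (np.abs(H) ** 2 + eps)))

print(measured.max(), restored.max())
```

The restored profile recovers a peak much closer to the true flaw amplitude than the blurred measurement does, which is the essence of the sharpening reported above.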
Simulations of multi-contrast x-ray imaging using near-field speckles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zdora, Marie-Christine; Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom and Department of Physics & Astronomy, University College London, London, WC1E 6BT; Thibault, Pierre
2016-01-28
X-ray dark-field and phase-contrast imaging using near-field speckles is a novel technique that overcomes limitations inherent in conventional absorption x-ray imaging, i.e. poor contrast for features with similar density. Speckle-based imaging yields a wealth of information with a simple setup tolerant of polychromatic and divergent beams, and with simple data acquisition and analysis procedures. Here, we present simulation software used to model image formation with the speckle-based technique, and we compare simulated results on a phantom sample with experimental synchrotron data. Thorough simulation of a speckle-based imaging experiment will help in better understanding and optimising the technique itself.
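For orientation, a fully developed speckle pattern of the general kind exploited by speckle-based imaging can be generated with a textbook construction: a random phase screen limited by a circular pupil, Fourier-propagated to the far field. This is a generic sketch, not the simulation software described above; the grid size and pupil radius are arbitrary.

```python
import numpy as np

# Generic far-field speckle simulation: random phases inside a circular
# aperture, propagated by a 2D FFT to yield a speckled intensity pattern.
rng = np.random.default_rng(5)
n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = (x ** 2 + y ** 2) < (n // 8) ** 2            # circular pupil
field = aperture * np.exp(1j * rng.uniform(0, 2 * np.pi, size=(n, n)))

speckle = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
speckle /= speckle.mean()

# Fully developed speckle has contrast (std/mean) close to 1
contrast = speckle.std() / speckle.mean()
print(round(contrast, 2))
```

The unit contrast is the statistical signature of fully developed speckle; a simulation framework like the one described would additionally model the diffuser, sample, and detector.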
SU-F-R-33: Can CT and CBCT Be Used Simultaneously for Radiomics Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, R; Wang, J; Zhong, H
2016-06-15
Purpose: To investigate whether CBCT and CT can be used in radiomics analysis simultaneously, and to establish a batch correction method for radiomics in two similar image modalities. Methods: Four sites, including rectum, bladder, femoral head, and lung, were considered as regions of interest (ROIs) in this study. For each site, 10 treatment planning CT images were collected, and 10 CBCT images from the same site of the same patient were acquired at the first radiotherapy fraction. 253 radiomics features, selected by our test-retest study on rectal cancer CT (ICC > 0.8), were calculated for both CBCT and CT images in MATLAB. Simple scaling (z-score) and nonlinear correction methods were applied to the CBCT radiomics features. The Pearson correlation coefficient was calculated to analyze the correlation between radiomics features of CT and CBCT images before and after correction. Cluster analysis of mixed data (for each site, 5 CT and 5 CBCT data sets randomly selected) was implemented to validate the feasibility of merging radiomics data from CBCT and CT. The consistency of the clustering result and site grouping was verified by a chi-square test for each dataset. Results: For simple scaling, 234 of the 253 features have correlation coefficient ρ > 0.8, among which 154 features have ρ > 0.9. For radiomics data after nonlinear correction, 240 of the 253 features have ρ > 0.8, among which 220 features have ρ > 0.9. Cluster analysis of mixed data shows that the data of the four sites were almost precisely separated for simple scaling (p = 1.29 × 10⁻⁷, χ² test) and nonlinear correction (p = 5.98 × 10⁻⁷, χ² test), similar to the cluster result for CT data alone (p = 4.52 × 10⁻⁸, χ² test). Conclusion: Radiomics data from CBCT can be merged with those from CT by simple scaling or nonlinear correction for radiomics analysis.
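The simple-scaling (z-score) correction and per-feature Pearson correlation described above can be sketched as follows. The feature matrices here are synthetic stand-ins (rows = patients, columns = radiomics features), with the CBCT values deliberately shifted and rescaled relative to CT to mimic a batch effect.

```python
import numpy as np

# Hypothetical radiomics feature matrices for the same patients in two modalities
rng = np.random.default_rng(0)
ct = rng.normal(size=(10, 5))                                 # CT features
cbct = 2.0 * ct + 0.5 + rng.normal(scale=0.1, size=ct.shape)  # shifted/scaled CBCT

def zscore(a):
    # per-feature standardization: zero mean, unit variance across patients
    return (a - a.mean(axis=0)) / a.std(axis=0)

ct_z, cbct_z = zscore(ct), zscore(cbct)

# Per-feature Pearson correlation between the two modalities after correction
r = [np.corrcoef(ct_z[:, j], cbct_z[:, j])[0, 1] for j in range(ct.shape[1])]
print(["%.3f" % v for v in r])
```

After z-scoring, the modality-specific offset and scale are gone and the correlation reflects only the shared patient-to-patient variation, which is why features with ρ > 0.8 can be pooled across CT and CBCT.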
Container-Based Clinical Solutions for Portable and Reproducible Image Analysis.
Matelsky, Jordan; Kiar, Gregory; Johnson, Erik; Rivera, Corban; Toma, Michael; Gray-Roncal, William
2018-05-08
Medical imaging analysis depends on the reproducibility of complex computation. Linux containers enable the abstraction, installation, and configuration of environments so that software can be both distributed in self-contained images and used repeatably by tool consumers. While several initiatives in neuroimaging have adopted approaches for creating and sharing more reliable scientific methods and findings, Linux containers are not yet mainstream in clinical settings. We explore related technologies and their efficacy in this setting, highlight important shortcomings, demonstrate a simple use-case, and endorse the use of Linux containers for medical image analysis.
NASA Astrophysics Data System (ADS)
Ali, Mohamed H.; Rakib, Fazle; Al-Saad, Khalid; Al-Saady, Rafif; Lyng, Fiona M.; Goormaghtigh, Erik
2018-07-01
Breast cancer is the second most common cancer after lung cancer. So far, in clinical practice, most cancer parameters originating from histopathology rely on the visualization by a pathologist of microscopic structures observed in stained tissue sections, including immunohistochemistry markers. Fourier transform infrared (FTIR) spectroscopy provides a biochemical fingerprint of a biopsy sample and, together with advanced data analysis techniques, can accurately classify cell types. Yet, one of the challenges when dealing with FTIR imaging is the slow recording of the data. A 1 cm² tissue section requires several hours of image recording. We show in the present paper that 2D covariance analysis singles out only a few wavenumbers where both variance and covariance are large. Simple models could be built using 4 wavenumbers to identify the 4 main cell types present in breast cancer tissue sections. Decision trees provide particularly simple models to reach discrimination between the 4 cell types. The robustness of these simple decision-tree models was challenged with FTIR spectral data obtained using different recording conditions. One test set was recorded by transflection on tissue sections in the presence of paraffin, while the training set was obtained on dewaxed tissue sections by transmission. Furthermore, the test set was collected with a different brand of FTIR microscope and a different pixel size. Despite the different recording conditions, separating extracellular matrix (ECM) from carcinoma spectra was 100% successful, underlining the robustness of this univariate model and the utility of covariance analysis for revealing efficient wavenumbers. We suggest that 2D covariance maps using the full spectral range could be most useful to select the interesting wavenumbers and achieve very fast data acquisition on quantum cascade laser infrared imaging microscopes.
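The univariate decision rule mentioned above reduces, in its simplest form, to a single absorbance threshold at one informative wavenumber. The following sketch uses synthetic absorbance values (the class means, spreads, and threshold are all assumptions for illustration, not the paper's data); a real decision tree would learn the split point from training spectra.

```python
import numpy as np

# Synthetic absorbances at one discriminating wavenumber for two cell types
rng = np.random.default_rng(1)
ecm = rng.normal(loc=0.30, scale=0.03, size=200)        # ECM spectra
carcinoma = rng.normal(loc=0.55, scale=0.04, size=200)  # carcinoma spectra

threshold = 0.425   # illustrative midpoint split; a tree would learn this
pred_ecm = ecm < threshold          # correctly classified ECM samples
pred_car = carcinoma >= threshold   # correctly classified carcinoma samples
accuracy = (pred_ecm.sum() + pred_car.sum()) / 400
print(accuracy)
```

When the two classes are well separated at the chosen wavenumber, as covariance analysis is designed to ensure, even this one-split rule approaches perfect discrimination.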
NASA Astrophysics Data System (ADS)
Ramadhan, Rifqi; Prabowo, Rian Gilang; Aprilliyani, Ria; Basari
2018-02-01
The number of victims of acute cancers and tumors grows each year, and cancer has become one of the leading causes of human death in the world. Cancer or tumor tissue cells grow abnormally, taking over and damaging the surrounding tissues. Cancers and tumors often show no definite symptoms in their early stages and can even attack tissues deep inside the body, where they are not identifiable by visual human observation. Therefore, an early detection system that is cheap, quick, simple, and portable is essentially required to anticipate the further development of a cancer or tumor. Among imaging modalities, microwave imaging is considered a cheap, simple, and portable method. There are at least two simple image reconstruction algorithms, i.e. Filtered Back Projection (FBP) and the Algebraic Reconstruction Technique (ART), which have been adopted in some common modalities. In this paper, both algorithms are compared by reconstructing the image of an artificial tissue model (i.e. phantom) that has two different dielectric distributions. We addressed two performance comparisons, namely qualitative and quantitative analysis. Qualitative analysis includes the smoothness of the image and the success in distinguishing dielectric differences by observing the image with human eyesight. Quantitative analysis includes histogram, Structural Similarity Index (SSIM), Mean Squared Error (MSE), and Peak Signal-to-Noise Ratio (PSNR) calculations. As a result, the quantitative parameters of FBP show better values than those of ART. However, ART is more capable of distinguishing two different dielectric values than FBP, owing to its higher contrast and wider grayscale distribution.
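Two of the quantitative metrics used in the comparison, MSE and PSNR, are straightforward to compute. The sketch below uses a synthetic 8-bit reference image and a noisy "reconstruction" purely for illustration; the noise level is an assumption.

```python
import numpy as np

# Synthetic 8-bit reference image and a degraded reconstruction of it
rng = np.random.default_rng(2)
reference = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
reconstruction = np.clip(reference + rng.normal(scale=5.0, size=reference.shape), 0, 255)

# Mean Squared Error and Peak Signal-to-Noise Ratio (peak = 255 for 8-bit)
mse = np.mean((reference - reconstruction) ** 2)
psnr = 10.0 * np.log10(255.0 ** 2 / mse)
print(round(mse, 1), round(psnr, 1))
```

Lower MSE and higher PSNR indicate a reconstruction closer to the reference, which is the sense in which FBP "shows better values" in the quantitative comparison above.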
Methods for spectral image analysis by exploiting spatial simplicity
Keenan, Michael R.
2010-05-25
Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.
Constellation Coverage Analysis
NASA Technical Reports Server (NTRS)
Lo, Martin W. (Compiler)
1997-01-01
The design of satellite constellations requires an understanding of the dynamic global coverage provided by the constellations. Even for a small constellation with a simple circular orbit propagator, the combinatorial nature of the analysis frequently renders the problem intractable. Particularly for the initial design phase where the orbital parameters are still fluid and undetermined, the coverage information is crucial to evaluate the performance of the constellation design. We have developed a fast and simple algorithm for determining the global constellation coverage dynamically using image processing techniques. This approach provides a fast, powerful and simple method for the analysis of global constellation coverage.
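The image-processing idea above can be illustrated in a toy form: rasterize each satellite's visibility footprint onto a latitude/longitude grid and read the covered fraction off the resulting binary image. The footprint model here is a crude spherical cap, and the satellite positions and cap size are invented for the example; a real analysis would propagate orbits and use proper elevation-angle masks.

```python
import numpy as np

# Lat/lon raster of the globe (1-degree grid)
lat = np.deg2rad(np.linspace(-90, 90, 181))
lon = np.deg2rad(np.linspace(-180, 179, 360))
LON, LAT = np.meshgrid(lon, lat)

def footprint(sub_lat, sub_lon, half_angle_rad):
    # Points within a spherical cap of the given angular radius around the
    # sub-satellite point (great-circle distance test via the cosine formula)
    cosd = (np.sin(LAT) * np.sin(sub_lat)
            + np.cos(LAT) * np.cos(sub_lat) * np.cos(LON - sub_lon))
    return cosd >= np.cos(half_angle_rad)

# Three hypothetical satellites, each with a 30-degree coverage cap
covered = np.zeros_like(LAT, dtype=bool)
for slat, slon in [(0.0, 0.0), (0.5, 2.0), (-0.5, -2.0)]:
    covered |= footprint(slat, slon, np.deg2rad(30))

# Area-weighted coverage fraction (each cell weighted by cos(latitude))
w = np.cos(LAT)
fraction = (covered * w).sum() / w.sum()
print(round(fraction, 3))
```

Union, intersection, and revisit statistics then become cheap boolean-image operations over time steps, which is what makes the image-processing formulation fast compared with combinatorial geometry.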
Slide Set: Reproducible image analysis and batch processing with ImageJ.
Nanes, Benjamin A
2015-11-01
Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.
A novel method of the image processing on irregular triangular meshes
NASA Astrophysics Data System (ADS)
Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta
2018-04-01
The paper describes a novel method of image processing based on irregular triangular meshes. The triangular mesh is adaptive to the image content, and a least-mean-square linear approximation is proposed for the basic interpolation within each triangle. It is proposed to use triangular numbers to simplify the use of local (barycentric) coordinates for further analysis: each triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g. "for" or "while" loops for access to the pixels. Moreover, the proposed representation allows the use of a discrete cosine transform of simple "rectangular" symmetric form without additional pixel reordering (as is used for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on a triangular mesh. The results of applying the method are presented. It is shown that the advantage of the proposed method is its combination of the flexibility of image-adaptive irregular meshes with simple pixel indexing in local triangular coordinates and the use of common forms of discrete transforms for triangular meshes. The method is proposed for image compression, pattern recognition, image quality improvement, and image search and indexing. It may also be used as part of video coding (intra-frame or inter-frame coding, motion detection).
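The simple loop-based pixel access in local coordinates can be sketched as follows: enumerate the integer pixel positions inside a triangle by testing barycentric coordinates in plain nested loops. This is a generic illustration of the indexing idea, not the paper's four-equilateral-triangle scheme.

```python
# Enumerate integer pixels inside a triangle via barycentric coordinates,
# using plain "for" loops over the bounding box.
def pixels_in_triangle(a, b, c):
    """Yield integer (x, y) points whose barycentric coords are non-negative."""
    xs = [p[0] for p in (a, b, c)]
    ys = [p[1] for p in (a, b, c)]
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    for y in range(min(ys), max(ys) + 1):
        for x in range(min(xs), max(xs) + 1):
            l1 = ((b[1] - c[1]) * (x - c[0]) + (c[0] - b[0]) * (y - c[1])) / det
            l2 = ((c[1] - a[1]) * (x - c[0]) + (a[0] - c[0]) * (y - c[1])) / det
            l3 = 1.0 - l1 - l2
            if l1 >= 0 and l2 >= 0 and l3 >= 0:
                yield x, y

# Right triangle with legs of length 4: contains 15 lattice points
pts = list(pixels_in_triangle((0, 0), (4, 0), (0, 4)))
print(len(pts))
```

The barycentric values (l1, l2, l3) double as interpolation weights, so the same loop serves both pixel access and the linear approximation within the triangle.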
Shanmugam, Akshaya; Usmani, Mohammad; Mayberry, Addison; Perkins, David L; Holcomb, Daniel E
2018-01-01
Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate versatility of the systems, real time analysis and post-processing results of the sample count and sample size are presented in both still images and videos of flowing samples.
Muir, Dylan R; Kampa, Björn M
2014-01-01
Two-photon calcium imaging of neuronal responses is an increasingly accessible technology for probing population responses in cortex at single cell resolution, and with reasonable and improving temporal resolution. However, analysis of two-photon data is usually performed using ad-hoc solutions. To date, no publicly available software exists for straightforward analysis of stimulus-triggered two-photon imaging experiments. In addition, the increasing data rates of two-photon acquisition systems imply increasing cost of computing hardware required for in-memory analysis. Here we present a Matlab toolbox, FocusStack, for simple and efficient analysis of two-photon calcium imaging stacks on consumer-level hardware, with minimal memory footprint. We also present a Matlab toolbox, StimServer, for generation and sequencing of visual stimuli, designed to be triggered over a network link from a two-photon acquisition system. FocusStack is compatible out of the box with several existing two-photon acquisition systems, and is simple to adapt to arbitrary binary file formats. Analysis tools such as stack alignment for movement correction, automated cell detection and peri-stimulus time histograms are already provided, and further tools can be easily incorporated. Both packages are available as publicly-accessible source-code repositories.
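A peri-stimulus time histogram of the kind such analysis tools provide can be computed in a few lines. The sketch below runs on synthetic event times rather than real calcium-imaging data, and the stimulus times, response latency, and bin width are all assumptions for illustration; it is not FocusStack's implementation.

```python
import numpy as np

# Synthetic event times (s): responses ~0.2 s after each stimulus, plus background
rng = np.random.default_rng(3)
stim_times = np.array([10.0, 20.0, 30.0])
events = np.concatenate(
    [t + 0.2 + rng.normal(scale=0.05, size=20) for t in stim_times]
    + [rng.uniform(0, 40, size=30)])

# Histogram event times relative to each stimulus, then pool across repetitions
window = (-0.5, 1.0)                       # seconds around each stimulus
bins = np.arange(window[0], window[1] + 0.1, 0.1)
counts = np.zeros(len(bins) - 1)
for t in stim_times:
    counts += np.histogram(events - t, bins=bins)[0]
rate = counts / (len(stim_times) * 0.1)    # events per second per bin

peak_bin = bins[np.argmax(counts)]         # left edge of the response peak
print(round(peak_bin, 1))
```

Pooling relative times across repetitions is what turns noisy single-trial responses into a clear stimulus-locked peak.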
Weekenstroo, Harm H A; Cornelissen, Bart M W; Bernelot Moens, Hein J
2015-06-01
Nailfold capillaroscopy is a non-invasive and safe technique for the analysis of microangiopathologies. Imaging quality of widely used simple videomicroscopes is poor. The use of green illumination instead of the commonly used white light may improve contrast. The aim of the study was to compare the effect of green illumination with white illumination, regarding capillary density, the number of microangiopathologies, and sensitivity and specificity for systemic sclerosis. Five rheumatologists have evaluated 80 images; 40 images acquired with green light, and 40 images acquired with white light. A larger number of microangiopathologies were found in images acquired with green light than in images acquired with white light. This results in slightly higher sensitivity with green light in comparison with white light, without reducing the specificity. These findings suggest that green instead of white illumination may facilitate evaluation of capillaroscopic images obtained with a low-cost digital videomicroscope.
InterFace: A software package for face image warping, averaging, and principal components analysis.
Kramer, Robin S S; Jenkins, Rob; Burton, A Mike
2017-12-01
We describe InterFace, a software package for research in face recognition. The package supports image warping, reshaping, averaging of multiple face images, and morphing between faces. It also supports principal components analysis (PCA) of face images, along with tools for exploring the "face space" produced by PCA. The package uses a simple graphical user interface, allowing users to perform these sophisticated image manipulations without any need for programming knowledge. The program is available for download in the form of an app, which requires that users also have access to the (freely available) MATLAB Runtime environment.
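A "face space" PCA of the kind InterFace performs can be sketched with flattened image vectors and a singular value decomposition. The data here are random stand-ins for registered face images (rows = images, columns = pixels); this is the generic eigenface construction, not InterFace's code.

```python
import numpy as np

# Hypothetical flattened face images: 20 images of 100 pixels each
rng = np.random.default_rng(4)
faces = rng.normal(size=(20, 100))

mean_face = faces.mean(axis=0)
X = faces - mean_face                      # center before PCA
U, S, Vt = np.linalg.svd(X, full_matrices=False)
components = Vt                            # principal axes ("eigenfaces")
scores = U * S                             # coordinates in face space

# Reconstruction from the top k components approximates the originals
k = 10
approx = scores[:, :k] @ components[:k] + mean_face
err = np.linalg.norm(faces - approx) / np.linalg.norm(faces)
print(round(err, 3))
```

Each face is then a point in the low-dimensional space spanned by the components, which is what makes operations like averaging and morphing along face-space directions possible.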
Fourier Analysis and Structure Determination. Part II: Pulse NMR and NMR Imaging.
ERIC Educational Resources Information Center
Chesick, John P.
1989-01-01
Uses simple pulse NMR experiments to discuss Fourier transforms. Studies the generation of spin echoes used in the imaging procedure. Shows that pulse NMR experiments give signals that are additions of sinusoids of differing amplitudes, frequencies, and phases. (MVL)
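The central point, that a signal which is a sum of sinusoids of differing amplitudes, frequencies, and phases is resolved into those frequencies by a Fourier transform, can be shown numerically. The component frequencies and amplitudes below are arbitrary choices for the demonstration.

```python
import numpy as np

# A signal composed of two sinusoids with different amplitudes and phases
fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
signal = (1.0 * np.sin(2 * np.pi * 50 * t + 0.3)
          + 0.5 * np.sin(2 * np.pi * 120 * t + 1.1))

# Fourier transform: peaks appear at the component frequencies
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

top2 = sorted(freqs[np.argsort(spectrum)[-2:]])
print(top2)
```

This is the same operation, applied to the free-induction decay, that turns a pulse NMR time signal into a spectrum, and, with field gradients, into an image.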
Mortensen, Kim I; Tassone, Chiara; Ehrlich, Nicky; Andresen, Thomas L; Flyvbjerg, Henrik
2018-05-09
Nanosize lipid vesicles are used extensively at the interface between nanotechnology and biology, e.g., as containers for chemical reactions at minute concentrations and vehicles for targeted delivery of pharmaceuticals. Typically, vesicle samples are heterogeneous as regards vesicle size and structural properties. Consequently, vesicles must be characterized individually to ensure correct interpretation of experimental results. Here we do that using dual-color fluorescence labeling of vesicles-of their lipid bilayers and lumens, separately. A vesicle then images as two spots, one in each color channel. A simple image analysis determines the total intensity and width of each spot. These four data all depend on the vesicle radius in a simple manner for vesicles that are spherical, unilamellar, and optimal encapsulators of molecular cargo. This permits identification of such ideal vesicles. They in turn enable calibration of the dual-color fluorescence microscopy images they appear in. Since this calibration is not a separate experiment but an analysis of images of vesicles to be characterized, it eliminates the potential source of error that a separate calibration experiment would have been. Nonideal vesicles in the same images were characterized by how their four data violate the calibrated relationship established for ideal vesicles. In this way, our method yields size, shape, lamellarity, and encapsulation efficiency of each imaged vesicle. Applying this procedure to extruded samples of vesicles, we found that, contrary to common assumptions, only a fraction of vesicles are ideal.
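The per-spot measurement described, total intensity and width of a fluorescent spot, can be estimated from image moments, as sketched below on a synthetic Gaussian spot. This is a generic moment-based estimator for illustration, not the authors' analysis pipeline; the spot amplitude and width are assumed.

```python
import numpy as np

# Synthetic fluorescent spot: an isotropic 2D Gaussian of known width
n = 41
y, x = np.mgrid[0:n, 0:n]
cx = cy = n // 2
sigma_true = 3.0
spot = 100.0 * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma_true ** 2))

total = spot.sum()                               # total intensity
mx = (spot * x).sum() / total                    # intensity-weighted centroid
my = (spot * y).sum() / total
var = (spot * ((x - mx) ** 2 + (y - my) ** 2)).sum() / total
width = np.sqrt(var / 2.0)                       # per-axis standard deviation

print(round(width, 2))
```

With two color channels, this yields the four per-vesicle data (two intensities, two widths) whose radius dependence identifies ideal spherical, unilamellar vesicles.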
Single-particle cryo-EM-Improved ab initio 3D reconstruction with SIMPLE/PRIME.
Reboul, Cyril F; Eager, Michael; Elmlund, Dominika; Elmlund, Hans
2018-01-01
Cryogenic electron microscopy (cryo-EM) and single-particle analysis now enable the determination of high-resolution structures of macromolecular assemblies that have resisted X-ray crystallography and other approaches. We developed the SIMPLE open-source image-processing suite for analysing cryo-EM images of single particles. A core component of SIMPLE is the probabilistic PRIME algorithm for identifying clusters of images in 2D and determining the relative orientations of single-particle projections in 3D. Here, we extend our previous work on PRIME and introduce new stochastic optimization algorithms that improve the robustness of the approach. Our refined method for identification of homogeneous subsets of images in accurate register substantially improves the resolution of the cluster centers and of the ab initio 3D reconstructions derived from them. We now obtain maps with a resolution better than 10 Å by exclusively processing cluster centers. Excellent parallel code performance on over-the-counter laptops and CPU workstations is demonstrated. © 2017 The Protein Society.
Image contrast of diffraction-limited telescopes for circular incoherent sources of uniform radiance
NASA Technical Reports Server (NTRS)
Shackleford, W. L.
1980-01-01
A simple approximate formula is derived for the background intensity beyond the edge of the image of uniform incoherent circular light source relative to the irradiance near the center of the image. The analysis applies to diffraction-limited telescopes with or without central beam obscuration due to a secondary mirror. Scattering off optical surfaces is neglected. The analysis is expected to be most applicable to spaceborne IR telescopes, for which diffraction can be the major source of off-axis response.
Brodin, N. Patrik; Guha, Chandan; Tomé, Wolfgang A.
2015-01-01
Modern pre-clinical radiation therapy (RT) research requires high precision and accurate dosimetry to facilitate the translation of research findings into clinical practice. Several systems are available that provide precise delivery and on-board imaging capabilities, highlighting the need for a quality management program (QMP) to ensure consistent and accurate radiation dose delivery. An ongoing, simple, and efficient QMP for image-guided robotic small animal irradiators used in pre-clinical RT research is described. Protocols were developed and implemented to assess the dose output constancy (based on the AAPM TG-61 protocol), cone-beam computed tomography (CBCT) image quality and object representation accuracy (using a custom-designed imaging phantom), CBCT-guided target localization accuracy and consistency of the CBCT-based dose calculation. To facilitate an efficient read-out and limit the user dependence of the QMP data analysis, a semi-automatic image analysis and data representation program was developed using the technical computing software MATLAB. The results of the first six months experience using the suggested QMP for a Small Animal Radiation Research Platform (SARRP) are presented, with data collected on a bi-monthly basis. The dosimetric output constancy was established to be within ±1 %, the consistency of the image resolution was within ±0.2 mm, the accuracy of CBCT-guided target localization was within ±0.5 mm, and dose calculation consistency was within ±2 s (± 3 %) per treatment beam. Based on these results, this simple quality assurance program allows for the detection of inconsistencies in dosimetric or imaging parameters that are beyond the acceptable variability for a reliable and accurate pre-clinical RT system, on a monthly or bi-monthly basis. PMID:26425981
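The core of such a QMP read-out is a tolerance check: compare each measured QA value against its baseline and flag anything outside the allowed band. The sketch below uses the tolerance bands quoted above, but the metric names, baselines, and measured values are illustrative, not the SARRP protocol or the authors' MATLAB program.

```python
# Minimal QA tolerance check: flag metrics whose deviation from baseline
# exceeds the allowed band. Values are illustrative.
tolerances = {
    "dose_output_pct": 1.0,        # ±1 % output constancy
    "resolution_mm": 0.2,          # ±0.2 mm image resolution
    "localization_mm": 0.5,        # ±0.5 mm CBCT-guided targeting
}
baseline = {"dose_output_pct": 0.0, "resolution_mm": 0.0, "localization_mm": 0.0}
measured = {"dose_output_pct": 0.4, "resolution_mm": 0.1, "localization_mm": 0.7}

failures = [k for k in tolerances
            if abs(measured[k] - baseline[k]) > tolerances[k]]
print(failures)
```

Running such a check on each bi-monthly data set is what allows inconsistencies beyond the acceptable variability to be caught automatically rather than by manual inspection.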
A simple 2D composite image analysis technique for the crystal growth study of L-ascorbic acid.
Kumar, Krishan; Kumar, Virender; Lal, Jatin; Kaur, Harmeet; Singh, Jasbir
2017-06-01
This work addressed the 2D crystal growth study of L-ascorbic acid using a composite image analysis technique. Growth experiments on L-ascorbic acid crystals were carried out by standard (optical) microscopy, laser diffraction analysis, and composite image analysis. For image analysis, the growth of L-ascorbic acid crystals was captured as digital 2D RGB images, which were then processed into composite images. After processing, the crystal boundaries emerged as white lines against the black (cancelled) background. The crystal boundaries were well differentiated by peaks in the intensity graphs generated for the composite images. The lengths of crystal boundaries measured from the intensity graphs of the composite images were in good agreement (correlation coefficient "r" = 0.99) with the lengths measured by standard microscopy. In contrast, the lengths measured by laser diffraction correlated poorly with both techniques. Therefore, composite image analysis can replace standard microscopy for crystal growth studies of L-ascorbic acid. © 2017 Wiley Periodicals, Inc.
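The intensity-graph measurement described in this abstract can be sketched in a few lines: boundaries appear as bright peaks in a row profile of the composite image, and the distance between the outermost peaks gives a crystal length. The threshold, peak rule, and pixel size below are illustrative assumptions, not the authors' actual procedure.

```python
import numpy as np

def boundary_positions(profile, threshold=128):
    """Return indices of local intensity maxima above `threshold`
    along a 1-D profile taken across a composite image row."""
    peaks = []
    for i in range(1, len(profile) - 1):
        if (profile[i] >= threshold
                and profile[i] >= profile[i - 1]
                and profile[i] > profile[i + 1]):
            peaks.append(i)
    return peaks

def crystal_length(profile, pixel_size_um=1.0):
    """Distance between the outermost boundary peaks, in micrometres."""
    peaks = boundary_positions(profile)
    if len(peaks) < 2:
        return 0.0
    return (peaks[-1] - peaks[0]) * pixel_size_um

# Synthetic composite row: black background with two white boundary lines
row = np.zeros(100)
row[[20, 75]] = 255
print(crystal_length(row, pixel_size_um=2.0))  # 110.0
```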
A simple program to measure and analyse tree rings using Excel, R and SigmaScan
Hietz, Peter
2011-01-01
I present new software that links a program for image analysis (SigmaScan), a spreadsheet (Excel), and a statistical package (R) for applications in tree-ring analysis. The first macro measures ring widths marked by the user on scanned images, stores raw and detrended data in Excel, and calculates the distance to the pith and inter-series correlations. A second macro measures darkness along a defined path to identify the latewood–earlywood transition in conifers, and a third shows the potential for automatic detection of ring boundaries. Written in Visual Basic for Applications, the code exploits the advantages of existing programs and is consequently very economical and relatively simple to adjust to the requirements of specific projects, or to expand using already available code. PMID:26109835
Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image
NASA Astrophysics Data System (ADS)
Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti
2016-06-01
An object in an image, when analyzed further, shows characteristics that distinguish it from other objects in the image. Characteristics used for object recognition in an image can include color, shape, pattern, texture, and spatial information that represent objects in the digital image. A method has recently been developed for image feature extraction that analyzes object characteristics using curve analysis (simple curves) and a chain-code search of the object. This study develops an algorithm for the analysis and recognition of curve type as the basis of object recognition in images, proposing the addition of complex-curve characteristics with a maximum of four branches to be used in the object recognition process. A complex curve is defined as a curve that has a point of intersection. Using several edge-detected images, the algorithm was able to analyze and recognize complex curve shapes well.
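Chain codes of the kind mentioned above are straightforward to compute. This sketch assumes 8-connectivity Freeman codes over an ordered list of boundary points, which the abstract does not specify; it is a generic illustration, not the authors' algorithm.

```python
# 8-connectivity Freeman chain code directions, with rows increasing downward:
# 0:E, 1:NE, 2:N, 3:NW, 4:W, 5:SW, 6:S, 7:SE
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(points):
    """Freeman chain code of an ordered list of (row, col) boundary points."""
    code = []
    for (r0, c0), (r1, c1) in zip(points, points[1:]):
        code.append(DIRS[(r1 - r0, c1 - c0)])
    return code

# Clockwise walk around a 2x2 square of pixels, back to the start
square = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(chain_code(square))  # [0, 6, 4, 2]
```

A curve-type analysis could then inspect such codes, e.g. counting direction changes or branch points of a skeletonized contour.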
Characterization of Transiting Exoplanets by Way of Differential Photometry
ERIC Educational Resources Information Center
Cowley, Michael; Hughes, Stephen
2014-01-01
This paper describes a simple activity for plotting and characterizing the light curve from an exoplanet transit event by way of differential photometry analysis. Using free digital imaging software, participants analyse a series of telescope images with the goal of calculating various exoplanet parameters, including size, orbital radius and…
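For the box-shaped transit used in such classroom activities, the planet-to-star size ratio follows directly from the transit depth: depth = (Rp/Rs)^2. The sketch below is a generic illustration of that relation, not the activity's actual worksheet; the stellar radius is the approximate solar value.

```python
import math

def planet_star_radius_ratio(transit_depth):
    """A box transit blocks a fraction (Rp/Rs)^2 of the stellar flux,
    so the size ratio is the square root of the fractional depth."""
    return math.sqrt(transit_depth)

def planet_radius_km(transit_depth, stellar_radius_km):
    """Planet radius from the depth and an assumed stellar radius."""
    return planet_star_radius_ratio(transit_depth) * stellar_radius_km

# A 1% flux drop across a Sun-sized star (~696,000 km)
print(planet_star_radius_ratio(0.01))      # 0.1
print(planet_radius_km(0.01, 696_000))     # 69600.0 km, roughly Jupiter-sized
```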
NASA Astrophysics Data System (ADS)
Mustak, S.
2013-09-01
The correction of atmospheric effects is essential because visible bands of shorter wavelength are strongly affected by atmospheric scattering, especially Rayleigh scattering. The objectives of the paper are to find the haze values present in all spectral bands and to correct those haze values for urban analysis. In this paper, the Improved Dark Object Subtraction method of P. Chavez (1988) is applied to correct atmospheric haze in a Resourcesat-1 LISS-4 multispectral satellite image. Dark Object Subtraction is a very simple image-based method of atmospheric haze correction which assumes that there are at least a few pixels within an image which should be black (0% reflectance); such black reflectance is termed a dark object, typically clear water bodies and shadows, whose DN values are zero (0) or close to zero in the image. Simple Dark Object Subtraction is a first-order atmospheric correction, whereas Improved Dark Object Subtraction corrects the haze in terms of atmospheric scattering and path radiance based on a power law for the relative scattering effect of the atmosphere. The haze values extracted using the Simple Dark Object Subtraction method for the green band (band 2), red band (band 3), and NIR band (band 4) are 40, 34, and 18, whereas the haze values extracted using the Improved Dark Object Subtraction method are 40, 18.02, and 11.80 for the same bands. It is concluded that the haze values extracted by the Improved Dark Object Subtraction method provide more realistic results than the Simple Dark Object Subtraction method.
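The simple (first-order) Dark Object Subtraction step can be sketched as follows; the pixel array is invented, the haze DN of 40 is the green-band value quoted in the abstract, and the improved method's power-law rescaling of haze across bands is not reproduced here.

```python
import numpy as np

def dark_object_subtraction(band, haze_dn=None):
    """Simple DOS: subtract the haze DN (default: the band minimum,
    i.e. the 'dark object') from every pixel, clipping at zero."""
    band = np.asarray(band, dtype=float)
    if haze_dn is None:
        haze_dn = band.min()
    return np.clip(band - haze_dn, 0, None)

# Toy 2x2 green-band patch, corrected with the abstract's haze value of 40
green = np.array([[40, 120], [90, 200]], dtype=float)
corrected = dark_object_subtraction(green, haze_dn=40)
print(corrected)
```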
Audible sonar images generated with proprioception for target analysis.
Kuc, Roman B
2017-05-01
Some blind humans have demonstrated the ability to detect and classify objects with echolocation using palatal clicks. An audible-sonar robot mimics human click emissions, binaural hearing, and head movements to extract interaural time and level differences from target echoes. Targets of various complexity are examined by transverse displacements of the sonar and by target pose rotations that model movements performed by the blind. Controlled sonar movements executed by the robot provide data that model proprioception information available to blind humans for examining targets from various aspects. The audible sonar uses this sonar location and orientation information to form two-dimensional target images that are similar to medical diagnostic ultrasound tomograms. Simple targets, such as single round and square posts, produce distinguishable and recognizable images. More complex targets configured with several simple objects generate diffraction effects and multiple reflections that produce image artifacts. The presentation illustrates the capabilities and limitations of target classification from audible sonar images.
Histology image analysis for carcinoma detection and grading
He, Lei; Long, L. Rodney; Antani, Sameer; Thoma, George R.
2012-01-01
This paper presents an overview of image analysis techniques in the domain of histopathology, specifically for the objective of automated carcinoma detection and classification. As in other biomedical imaging areas such as radiology, many computer-assisted diagnosis (CAD) systems have been implemented to aid histopathologists and clinicians in cancer diagnosis and research; these systems attempt to significantly reduce the labor and subjectivity of traditional manual intervention with histology images. The task of automated histology image analysis is usually not simple due to the unique characteristics of histology imaging, including the variability in image preparation techniques, clinical interpretation protocols, and the complex structures and very large size of the images themselves. In this paper we discuss those characteristics, provide relevant background information about slide preparation and interpretation, and review the application of digital image processing techniques to the field of histology image analysis. In particular, emphasis is given to state-of-the-art image segmentation methods for feature extraction and disease classification. Four major carcinomas, of the cervix, prostate, breast, and lung, are selected to illustrate the functions and capabilities of existing CAD systems. PMID:22436890
Pandey, Anil Kumar; Saroha, Kartik; Sharma, Param Dev; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
In this study, we developed a simple image processing application in MATLAB that uses suprathreshold stochastic resonance (SSR) and helps the user to visualize abdominopelvic tumors on exported prediuretic positron emission tomography/computed tomography (PET/CT) images. A brainstorming session was conducted for requirement analysis for the program. It was decided that the program should load the screen-captured PET/CT images and then produce output images in a window with a slider control that enables the user to view the image that best visualizes the tumor, if present. The program was implemented on a personal computer using Microsoft Windows and MATLAB R2013b. The program has an option for the user to select the input image. For the selected image, it displays output images generated using SSR in a separate window with a slider control. The slider control enables the user to view the images and select the one which seems to provide the best visualization of the area(s) of interest. The developed application enables the user to select, process, and view output images in the process of utilizing SSR to detect the presence of abdominopelvic tumor on a prediuretic PET/CT image.
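A minimal SSR sketch, assuming the common add-noise/threshold/average formulation of suprathreshold stochastic resonance; the threshold, noise level, and realization count are arbitrary choices for illustration, not those of the MATLAB application (which exposes the variation via a slider).

```python
import numpy as np

def ssr_enhance(image, threshold=128.0, noise_std=40.0,
                n_realizations=500, seed=0):
    """SSR sketch: add independent noise realizations, threshold each
    noisy copy, and average the binary outputs. The result lies in
    [0, 1] and recovers sub-threshold contrast."""
    rng = np.random.default_rng(seed)
    img = np.asarray(image, dtype=float)
    acc = np.zeros_like(img)
    for _ in range(n_realizations):
        noisy = img + rng.normal(0.0, noise_std, img.shape)
        acc += (noisy > threshold).astype(float)
    return acc / n_realizations

# A low-contrast ramp entirely below threshold: SSR output still
# rises monotonically with the underlying input intensity.
ramp = np.array([100.0, 120.0, 140.0, 160.0])
out = ssr_enhance(ramp)
print(out)
```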
Birdcage volume coils and magnetic resonance imaging: a simple experiment for students.
Vincent, Dwight E; Wang, Tianhao; Magyar, Thalia A K; Jacob, Peni I; Buist, Richard; Martin, Melanie
2017-01-01
This article explains some simple experiments that can be used in undergraduate or graduate physics or biomedical engineering laboratory classes to learn how birdcage volume radiofrequency (RF) coils and magnetic resonance imaging (MRI) work. For a clear picture, and to do any quantitative MRI analysis, acquiring images with a high signal-to-noise ratio (SNR) is required. With a given MRI system at a given field strength, the only means to change the SNR using hardware is to change the RF coil used to collect the image. RF coils can be designed in many different ways including birdcage volume RF coil designs. The choice of RF coil to give the best SNR for any MRI study is based on the sample being imaged. The data collected in the simple experiments show that the SNR varies as inverse diameter for the birdcage volume RF coils used in these experiments. The experiments were easily performed by a high school student, an undergraduate student, and a graduate student, in less than 3 h, the time typically allotted for a university laboratory course. The article describes experiments that students in undergraduate or graduate laboratories can perform to observe how birdcage volume RF coils influence MRI measurements. It is designed for students interested in pursuing careers in the imaging field.
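The reported inverse-diameter trend can be checked with a simple linear fit of SNR against 1/d, as students might do with their own measurements. The diameters and SNR values below are made-up numbers consistent with SNR ∝ 1/d, not the experimental data from the article.

```python
import numpy as np

# Hypothetical SNR measurements for birdcage coils of three diameters,
# constructed to follow the SNR ~ 1/d trend (values illustrative only).
diameters_cm = np.array([2.5, 3.8, 7.2])
snr = np.array([400.0, 263.0, 139.0])

# Fit SNR against 1/d; a near-perfect linear fit supports the model
slope, intercept = np.polyfit(1.0 / diameters_cm, snr, 1)
predicted = slope / diameters_cm + intercept
r = np.corrcoef(snr, predicted)[0, 1]
print(round(slope, 1), round(r, 3))
```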
The Zombie Plot: A Simple Graphic Method for Visualizing the Efficacy of a Diagnostic Test.
Richardson, Michael L
2016-08-09
One of the most important jobs of a radiologist is to pick the most appropriate imaging test for a particular clinical situation. Making a proper selection sometimes requires statistical analysis. The objective of this article is to introduce a simple graphic technique, an ROC plot that has been divided into zones of mostly bad imaging efficacy (ZOMBIE, hereafter referred to as the "zombie plot"), that transforms information about imaging efficacy from the numeric domain into the visual domain. The numeric rationale for the use of zombie plots is given, as are several examples of the clinical use of these plots. Two online calculators are described that simplify the process of producing a zombie plot.
Fantuzzo, J. A.; Mirabella, V. R.; Zahn, J. D.
2017-01-01
Synapse formation analyses can be performed by imaging and quantifying fluorescent signals of synaptic markers. Traditionally, these analyses are done using simple or multiple thresholding and segmentation approaches or by labor-intensive manual analysis by a human observer. Here, we describe Intellicount, a high-throughput, fully-automated synapse quantification program which applies a novel machine learning (ML)-based image processing algorithm to systematically improve region of interest (ROI) identification over simple thresholding techniques. Through processing large datasets from both human and mouse neurons, we demonstrate that this approach allows image processing to proceed independently of carefully set thresholds, thus reducing the need for human intervention. As a result, this method can efficiently and accurately process large image datasets with minimal interaction by the experimenter, making it less prone to bias and less liable to human error. Furthermore, Intellicount is integrated into an intuitive graphical user interface (GUI) that provides a set of valuable features, including automated and multifunctional figure generation, routine statistical analyses, and the ability to run full datasets through nested folders, greatly expediting the data analysis process. PMID:29218324
NASA Astrophysics Data System (ADS)
Huang, Wei; Ma, Chengfu; Chen, Yuhang
2014-12-01
A method for simple and reliable displacement measurement with nanoscale resolution is proposed. The measurement is realized by combining common optical microscopy imaging of a specially coded nonperiodic microstructure, namely a two-dimensional zero-reference mark (2-D ZRM), with subsequent correlation analysis of the obtained image sequence. The autocorrelation peak contrast of the ZRM code is maximized with well-developed artificial intelligence algorithms, which enables robust and accurate displacement determination. To improve the resolution, subpixel image correlation analysis is employed. Finally, we experimentally demonstrate the quasi-static and dynamic displacement characterization ability of a micro 2-D ZRM.
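Correlation-based displacement estimation of this kind can be sketched with an FFT cross-correlation. This version recovers integer-pixel shifts only (the subpixel refinement mentioned in the abstract would interpolate around the correlation peak), and a random texture stands in for the actual 2-D ZRM code.

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the (row, col) displacement of `moved` relative to `ref`
    from the peak of their FFT-based circular cross-correlation."""
    ref = np.asarray(ref, float)
    moved = np.asarray(moved, float)
    xcorr = np.fft.ifft2(np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))).real
    peak = list(np.unravel_index(np.argmax(xcorr), xcorr.shape))
    # Fold peaks past the midpoint back to negative shifts (circularity)
    for i in range(2):
        if peak[i] > xcorr.shape[i] // 2:
            peak[i] -= xcorr.shape[i]
    return (float(peak[0]), float(peak[1]))

rng = np.random.default_rng(1)
mark = rng.random((64, 64))            # stand-in for the 2-D ZRM image
moved = np.roll(mark, (3, -5), axis=(0, 1))
print(estimate_shift(mark, moved))     # (3.0, -5.0)
```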
Automatic morphological classification of galaxy images
Shamir, Lior
2009-01-01
We describe an image analysis supervised learning algorithm that can automatically classify galaxy images. The algorithm is first trained using manually classified images of elliptical, spiral, and edge-on galaxies. A large set of image features is extracted from each image, and the most informative features are selected using Fisher scores. Test images can then be classified using a simple Weighted Nearest Neighbor rule in which the Fisher scores are used as the feature weights. Experimental results show that galaxy images from Galaxy Zoo can be classified automatically into spiral, elliptical, and edge-on galaxies with an accuracy of ~90% compared to classifications carried out by the author. Full compilable source code of the algorithm is available for free download, and its general-purpose nature makes it suitable for other uses that involve automatic image analysis of celestial objects. PMID:20161594
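A minimal sketch of the Fisher-score-weighted nearest-neighbour rule described above, on a toy two-feature dataset. The Fisher-score formula used is the standard ratio of between-class to within-class variance and may differ in detail from the paper's implementation; the data are invented.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher score: between-class scatter of class means
    over the pooled within-class variance."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def wnn_classify(X_train, y_train, x, weights):
    """Weighted nearest neighbour: the Fisher scores weight each feature
    in the distance, so informative features dominate."""
    d = np.sqrt(((X_train - x) ** 2 * weights).sum(axis=1))
    return y_train[np.argmin(d)]

# Feature 0 separates the classes; feature 1 is pure noise
X = np.array([[0.0, 5.0], [0.1, -3.0], [1.0, 4.0], [0.9, -6.0]])
y = np.array([0, 0, 1, 1])
w = fisher_scores(X, y)
print(wnn_classify(X, y, np.array([0.95, 5.0]), w))  # 1
```

Despite the query's noisy second feature matching a class-0 sample, the weighting makes the discriminative first feature decide the label.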
Simple Colorimetric Sensor for Trinitrotoluene Testing
NASA Astrophysics Data System (ADS)
Samanman, S.; Masoh, N.; Salah, Y.; Srisawat, S.; Wattanayon, R.; Wangsirikul, P.; Phumivanichakit, K.
2017-02-01
A simple colorimetric sensor for trinitrotoluene (TNT) determination, using a commercial scanner as the image-capture device, was designed. The sensor is based on the chemical reaction between TNT and a sodium hydroxide reagent, which produces a color change within 96-well plates that is finally observed and recorded using the scanner. The intensity of the color change increased with TNT concentration, and the concentration of TNT could easily be quantified by digital image analysis using the free ImageJ software. Under optimum conditions, the sensor provided a linear dynamic range between 0.20 and 1.00 mg mL-1 (r = 0.9921) with a limit of detection of 0.10 ± 0.01 mg mL-1. The relative standard deviation of the sensitivity over eight experiments was 3.8%. When applied to the analysis of TNT in two soil extract samples, the concentrations were found to range from non-detectable to 0.26 ± 0.04 mg mL-1. The obtained recovery values (93-95%) were acceptable for the soil samples tested.
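The calibration step behind such digital-image colorimetry reduces to a linear fit of well intensity against standard concentration, inverted to quantify unknowns. The intensities below are invented for illustration, and a plain least-squares line is used here rather than ImageJ.

```python
import numpy as np

def calibrate(concentrations, intensities):
    """Least-squares calibration line: intensity = m * conc + b."""
    m, b = np.polyfit(concentrations, intensities, 1)
    return m, b

def quantify(intensity, m, b):
    """Invert the calibration line to estimate a concentration."""
    return (intensity - b) / m

# Hypothetical mean channel intensities for TNT standards (mg/mL)
conc = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
inten = np.array([30.0, 55.0, 80.0, 105.0, 130.0])  # perfectly linear here
m, b = calibrate(conc, inten)
print(round(quantify(92.5, m, b), 2))  # 0.7
```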
Tcheng, David K.; Nayak, Ashwin K.; Fowlkes, Charless C.; Punyasena, Surangi W.
2016-01-01
Discriminating between black and white spruce (Picea mariana and Picea glauca) is a difficult palynological classification problem that, if solved, would provide valuable data for paleoclimate reconstructions. We developed an open-source visual recognition software (ARLO, Automated Recognition with Layered Optimization) capable of differentiating between these two species at an accuracy on par with human experts. The system applies pattern recognition and machine learning to the analysis of pollen images and discovers general-purpose image features, defined by simple features of lines and grids of pixels taken at different dimensions, size, spacing, and resolution. It adapts to a given problem by searching for the most effective combination of both feature representation and learning strategy. This results in a powerful and flexible framework for image classification. We worked with images acquired using an automated slide scanner. We first applied a hash-based “pollen spotting” model to segment pollen grains from the slide background. We next tested ARLO’s ability to reconstruct black to white spruce pollen ratios using artificially constructed slides of known ratios. We then developed a more scalable hash-based method of image analysis that was able to distinguish between the pollen of black and white spruce with an estimated accuracy of 83.61%, comparable to human expert performance. Our results demonstrate the capability of machine learning systems to automate challenging taxonomic classifications in pollen analysis, and our success with simple image representations suggests that our approach is generalizable to many other object recognition problems. PMID:26867017
Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li
2013-01-21
A very simple approach to quantitative analysis is proposed based on the technology of digital image processing using three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As the region-based shape features of a grayscale image, Zernike moments with inherently invariance property were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained, respectively. The correlation coefficients (R(2)) for training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of established models. The analytical results suggest that the Zernike moment selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.
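Zernike moments with the rotation-invariance property mentioned above can be computed directly from their defining sums. This generic sketch (not the authors' code) maps a grayscale array to the unit disk and checks that moment magnitudes are unchanged by a 90° rotation.

```python
import math
import numpy as np

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_nm(rho), for n >= |m| and n-|m| even."""
    m = abs(m)
    out = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * math.factorial(n - s)
             / (math.factorial(s)
                * math.factorial((n + m) // 2 - s)
                * math.factorial((n - m) // 2 - s)))
        out += c * rho ** (n - 2 * s)
    return out

def zernike_moment(img, n, m):
    """Complex Zernike moment A_nm of an image mapped to the unit disk."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    xn = (2 * x - (w - 1)) / (w - 1)   # map pixel centres to [-1, 1]
    yn = (2 * y - (h - 1)) / (h - 1)
    rho = np.sqrt(xn ** 2 + yn ** 2)
    theta = np.arctan2(yn, xn)
    mask = rho <= 1.0                  # keep only the unit disk
    basis = radial_poly(n, m, rho) * np.exp(-1j * m * theta)
    return (n + 1) / math.pi * np.sum(img[mask] * basis[mask])

rng = np.random.default_rng(0)
img = rng.random((65, 65))
a = abs(zernike_moment(img, 2, 2))
b = abs(zernike_moment(np.rot90(img), 2, 2))
print(abs(a - b) < 1e-9)  # True: magnitudes are rotation invariant
```

In the paper's setting, such invariant magnitudes computed from 3D HPLC-DAD spectra (treated as grayscale images) feed a linear regression model.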
Imaging with hypertelescopes: a simple modal approach
NASA Astrophysics Data System (ADS)
Aime, C.
2008-05-01
Aims: We give a simple analysis of imaging with hypertelescopes, a technique proposed by Labeyrie to produce snapshot images using arrays of telescopes. The approach is modal: we describe the transformations induced by the densification onto a sinusoidal decomposition of the focal image instead of the usual point spread function approach. Methods: We first express the image formed at the focus of a diluted array of apertures as the product R_0(α) X_F(α) of the diffraction pattern of the elementary apertures R_0(α) by the object-dependent interference term X_F(α) between all apertures. The interference term, which can be written in the form of a Fourier series for an extremely diluted array, produces replications of the object, which makes observing the image difficult. We express the focal image after the densification using the approach of Tallon and Tallon-Bosc. Results: The result is very simple for an extremely diluted array. We show that the focal image in a periscopic densification of the array can be written as R_0(α) X_F(α/γ), where γ is the factor of densification. There is a dilatation of the interference term while the diffraction term is unchanged. After de-zooming, the image can be written as γ^2 X_F(α) R_0(γα), an expression which clearly indicates that the final image corresponds to the center of the Fizeau image intensified by γ^2. The imaging limitations of hypertelescopes are therefore those of the original configuration. The effect of the suppression of image replications is illustrated in a numerical simulation for a fully redundant configuration and a non-redundant one.
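The relations quoted in this abstract can be restated in display form (same notation as the abstract; no new results are introduced):

```latex
% Fizeau image of the diluted array: envelope times interference term
I_{\mathrm{Fizeau}}(\alpha) = R_0(\alpha)\, X_F(\alpha)

% After periscopic densification by a factor \gamma, the interference
% term is dilated while the diffraction envelope is unchanged:
I_{\mathrm{dens}}(\alpha) = R_0(\alpha)\, X_F\!\left(\frac{\alpha}{\gamma}\right)

% After de-zooming: the centre of the Fizeau image, intensified by \gamma^2
I_{\mathrm{dezoom}}(\alpha) = \gamma^2\, X_F(\alpha)\, R_0(\gamma\alpha)
```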
2013-01-01
Background: Scanning electron microscopy (SEM) has been used for high-resolution imaging of plant cell surfaces for many decades. Most SEM imaging employs the secondary electron detector under high vacuum to provide pseudo-3D images of plant organs and especially of surface structures such as trichomes and stomatal guard cells; these samples generally have to be metal-coated to avoid charging artefacts. Variable pressure-SEM allows examination of uncoated tissues, and provides a flexible range of options for imaging, either with a secondary electron detector or backscattered electron detector. In one application, we used the backscattered electron detector under low vacuum conditions to collect images of uncoated barley leaf tissue followed by simple quantification of cell areas.

Results: Here, we outline methods for backscattered electron imaging of a variety of plant tissues with particular focus on collecting images for quantification of cell size and shape. We demonstrate the advantages of this technique over other methods to obtain high contrast cell outlines, and define a set of parameters for imaging Arabidopsis thaliana leaf epidermal cells together with a simple image analysis protocol. We also show how to vary parameters such as accelerating voltage and chamber pressure to optimise imaging in a range of other plant tissues.

Conclusions: Backscattered electron imaging of uncoated plant tissue allows acquisition of images showing details of plant morphology together with images of high contrast cell outlines suitable for semi-automated image analysis. The method is easily adaptable to many types of tissue and suitable for any laboratory with standard SEM preparation equipment and a variable-pressure-SEM or tabletop SEM. PMID:24135233
Simulating and mapping spatial complexity using multi-scale techniques
De Cola, L.
1994-01-01
A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields.
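Multi-scale variance of the kind used here can be computed by block-averaging the field at successive scales; how quickly the variance decays with scale reflects the field's spatial autocorrelation. The scales and the white-noise test field below are illustrative choices, not the paper's data.

```python
import numpy as np

def multiscale_variance(field, scales=(1, 2, 4, 8)):
    """Variance of the field after block-averaging at each scale.
    For spatially uncorrelated noise the variance falls roughly as
    1/scale^2; autocorrelated fields decay more slowly."""
    field = np.asarray(field, float)
    out = []
    for s in scales:
        h, w = (field.shape[0] // s) * s, (field.shape[1] // s) * s
        blocks = field[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        out.append(blocks.var())
    return out

rng = np.random.default_rng(0)
noise = rng.normal(size=(256, 256))        # Gaussian noise, as in the paper
v = multiscale_variance(noise)
print(all(a > b for a, b in zip(v, v[1:])))  # True: variance shrinks with scale
```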
NASA Astrophysics Data System (ADS)
Vatanparast, Maryam; Vullum, Per Erik; Nord, Magnus; Zuo, Jian-Min; Reenaas, Turid W.; Holmestad, Randi
2017-09-01
Geometric phase analysis (GPA), a fast and simple Fourier-space method for strain analysis, can give useful information on accumulated strain and defect propagation in multiple layers of semiconductors, including quantum dot materials. In this work, GPA has been applied to high-resolution Z-contrast scanning transmission electron microscopy (STEM) images. Strain maps determined from different g vectors of these images are compared to each other in order to assess the accuracy of the GPA technique. The SmartAlign tool has been used to improve the STEM image quality and obtain more reliable results. Strain maps from template matching, a real-space approach, are compared with strain maps from GPA, and we argue that real-space analysis is a better approach than GPA for aberration-corrected STEM images.
[The application of stereology in radiology imaging and cell biology fields].
Hu, Na; Wang, Yan; Feng, Yuanming; Lin, Wang
2012-08-01
Stereology is an interdisciplinary method for 3D morphological study developed from mathematics and morphology. It is widely used in medical image analysis and cell biology studies. Because of its unbiased, simple, fast, reliable, and non-invasive characteristics, stereology has been widely used in biomedical areas for quantitative analysis and statistics, such as histology, pathology, and medical imaging. Because stereological parameters show distinct differences across pathologies, many scholars have used stereological methods for quantitative analysis in recent years, for example, in studies of the condition of cancer cells, tumor grade, disease development, and patient prognosis. This paper describes the stereological concept and estimation methods, illustrates the applications of stereology in the fields of CT images, MRI images, and cell biology, and finally discusses the universality, superiority, and reliability of stereology.
Why Do Photo Finish Images Look Weird?
ERIC Educational Resources Information Center
Gregorcic, Bor; Planinsic, Gorazd
2012-01-01
This paper deals with effects that appear on photographs of rotating objects when taken by a photo finish camera, a rolling shutter camera or a computer scanner. These effects are very similar to Roget's palisade illusion. A simple quantitative analysis of the images is also provided. The effects are explored using a computer scanner in a way that…
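The rolling-shutter effect behind these distortions is easy to simulate: each image row samples the scene at a slightly later time, so a moving vertical bar is rendered as a diagonal. The scene model below is a deliberately crude illustration, not the paper's photo-finish setup.

```python
import numpy as np

def rolling_shutter(frame_at, height=8, width=8):
    """Simulate a rolling-shutter image: row r is sampled from the
    scene as it appears at time r (rows exposed top to bottom)."""
    img = np.zeros((height, width), dtype=int)
    for r in range(height):
        img[r] = frame_at(r)[r]   # take row r of the scene at time r
    return img

def moving_bar(t, height=8, width=8):
    """Scene at time t: a one-pixel-wide vertical bar at column t."""
    scene = np.zeros((height, width), dtype=int)
    scene[:, t % width] = 1
    return scene

img = rolling_shutter(moving_bar)
# The vertical bar comes out as a diagonal: column index grows with row
print([int(np.argmax(row)) for row in img])  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The same row-by-row sampling explains why a rotating object photographed with a photo finish camera or flatbed scanner appears sheared or curved.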
In vivo neuronal calcium imaging in C. elegans.
Chung, Samuel H; Sun, Lin; Gabel, Christopher V
2013-04-10
The nematode worm C. elegans is an ideal model organism for relatively simple, low cost neuronal imaging in vivo. Its small transparent body and simple, well-characterized nervous system allows identification and fluorescence imaging of any neuron within the intact animal. Simple immobilization techniques with minimal impact on the animal's physiology allow extended time-lapse imaging. The development of genetically-encoded calcium sensitive fluorophores such as cameleon and GCaMP allow in vivo imaging of neuronal calcium relating both cell physiology and neuronal activity. Numerous transgenic strains expressing these fluorophores in specific neurons are readily available or can be constructed using well-established techniques. Here, we describe detailed procedures for measuring calcium dynamics within a single neuron in vivo using both GCaMP and cameleon. We discuss advantages and disadvantages of both as well as various methods of sample preparation (animal immobilization) and image analysis. Finally, we present results from two experiments: 1) Using GCaMP to measure the sensory response of a specific neuron to an external electrical field and 2) Using cameleon to measure the physiological calcium response of a neuron to traumatic laser damage. Calcium imaging techniques such as these are used extensively in C. elegans and have been extended to measurements in freely moving animals, multiple neurons simultaneously and comparison across genetic backgrounds. C. elegans presents a robust and flexible system for in vivo neuronal imaging with advantages over other model systems in technical simplicity and cost.
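Calcium-imaging traces from sensors such as GCaMP are commonly summarized as ΔF/F0. The baseline convention below (mean of the first frames) is one common choice for illustration, not necessarily the one used in this protocol, and the trace values are synthetic.

```python
import numpy as np

def delta_f_over_f(trace, baseline_frames=10):
    """Normalized fluorescence change dF/F0, with F0 taken as the mean
    of the first `baseline_frames` samples of the trace."""
    trace = np.asarray(trace, float)
    f0 = trace[:baseline_frames].mean()
    return (trace - f0) / f0

# GCaMP-like trace: baseline of 100 a.u., transient peaking at 150
trace = np.array([100.0] * 10 + [120.0, 150.0, 130.0, 110.0, 100.0])
dff = delta_f_over_f(trace)
print(dff.max())  # 0.5, i.e. a 50% fluorescence increase at the peak
```

For ratiometric sensors such as cameleon, the same normalization would be applied to the acceptor/donor ratio rather than a single channel.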
Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study
Bornschein, Jörg; Henniges, Marc; Lücke, Jörg
2013-01-01
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938
Jackman, Patrick; Sun, Da-Wen; Allen, Paul; Valous, Nektarios A; Mendoza, Fernando; Ward, Paddy
2010-04-01
A method to discriminate between various grades of pork and turkey ham was developed using colour and wavelet texture features. Image analysis methods originally developed for predicting the palatability of beef were applied to rapidly identify the ham grade. With high quality digital images of 50-94 slices per ham it was possible to identify the greyscale that best expressed the differences between the various ham grades. The best 10 discriminating image features were then found with a genetic algorithm. Using the best 10 image features, simple linear discriminant analysis models produced 100% correct classifications for both pork and turkey on both calibration and validation sets. 2009 Elsevier Ltd. All rights reserved.
de Sena, Rodrigo Caciano; Soares, Matheus; Pereira, Maria Luiza Oliveira; da Silva, Rogério Cruz Domingues; do Rosário, Francisca Ferreira; da Silva, Joao Francisco Cajaiba
2011-01-01
The development of a simple, rapid, and low-cost method based on video image analysis, aimed at the detection of low concentrations of precipitated barium sulfate, is described. The proposed system is basically composed of a webcam with a CCD sensor and a conventional dichroic lamp. For this purpose, software for processing and analyzing the digital images based on the RGB (red, green, and blue) color system was developed. The proposed method showed very good repeatability and linearity and also presented higher sensitivity than the standard turbidimetric method. The developed method is presented as a simple alternative for future applications in the study of precipitation of inorganic salts and also for detecting the crystallization of organic compounds. PMID:22346607
Automated quantitative cytological analysis using portable microfluidic microscopy.
Jagannadh, Veerendra Kalyan; Murthy, Rashmi Sreeramachandra; Srinivasan, Rajesh; Gorthi, Sai Siva
2016-06-01
In this article, a portable microfluidic microscopy based approach for automated cytological investigations is presented. Inexpensive optical and electronic components have been used to construct a simple microfluidic microscopy system. In contrast to the conventional slide-based methods, the presented method employs microfluidics to enable automated sample handling and image acquisition. The approach involves the use of simple in-suspension staining and automated image acquisition to enable quantitative cytological analysis of samples. The applicability of the presented approach to research in cellular biology is shown by performing an automated cell viability assessment on a given population of yeast cells. Further, the relevance of the presented approach to clinical diagnosis and prognosis has been demonstrated by performing detection and differential assessment of malaria infection in a given sample. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Muldoon, Timothy J.; Thekkek, Nadhi; Roblyer, Darren; Maru, Dipen; Harpaz, Noam; Potack, Jonathan; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2010-03-01
Early detection of neoplasia in patients with Barrett's esophagus is essential to improve outcomes. The aim of this ex vivo study was to evaluate the ability of high-resolution microendoscopic imaging and quantitative image analysis to identify neoplastic lesions in patients with Barrett's esophagus. Nine patients with pathologically confirmed Barrett's esophagus underwent endoscopic examination with biopsies or endoscopic mucosal resection. Resected fresh tissue was imaged with fiber bundle microendoscopy; images were analyzed by visual interpretation or by quantitative image analysis to predict whether the imaged sites were non-neoplastic or neoplastic. The best-performing pair of quantitative features was chosen based on its ability to correctly classify the data into the two groups. Predictions were compared to the gold standard of histopathology. Subjective analysis of the images by expert clinicians achieved average sensitivity and specificity of 87% and 61%, respectively. The best-performing quantitative classification algorithm relied on two image textural features and achieved a sensitivity and specificity of 87% and 85%, respectively. This ex vivo pilot trial demonstrates that quantitative analysis of images obtained with a simple microendoscope system can distinguish neoplasia in Barrett's esophagus with good sensitivity and specificity when compared to histopathology and to subjective image interpretation.
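Image textural features of the kind used here are commonly derived from a grey-level co-occurrence matrix (GLCM). As a hedged illustration (the paper does not specify which textural features were used), here is a small numpy implementation of a GLCM and its contrast statistic:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for one pixel offset (dx, dy).
    img must contain integer grey levels in [0, levels)."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def contrast(g):
    """GLCM contrast: sum over (i, j) of p(i, j) * (i - j)^2."""
    i, j = np.indices(g.shape)
    return float((g * (i - j) ** 2).sum())

smooth = np.zeros((8, 8), dtype=int)   # uniform region: zero contrast
stripes = np.tile([0, 3], (8, 4))      # alternating grey levels: high contrast
```

A uniform patch yields zero contrast while a striped patch yields a large value, which is exactly the kind of separation a two-feature classifier can exploit.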
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Sausen, T. M.
1980-01-01
Computer compatible tapes from LANDSAT were used to compartmentalize the Três Marias reservoir according to its grey-level spectral response. Interactive, automatic supervised classification was executed on the IMAGE-100 system. Simple correlation analysis and graphic representation show that grey tone levels are inversely proportional to Secchi depth values. It is further shown that the most favorable period to conduct an analysis of this type is during the rainy season.
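The reported inverse proportionality is the kind of relation a simple correlation coefficient captures. The sketch below uses hypothetical paired measurements (not the study's data) to show the computation with numpy:

```python
import numpy as np

# Hypothetical paired measurements: reservoir grey level (digital number)
# and Secchi depth (m). Turbid water scatters more light and appears
# brighter, so grey level falls as Secchi depth (water clarity) rises.
grey = np.array([52.0, 47.0, 41.0, 36.0, 30.0, 25.0])
secchi = np.array([0.4, 0.7, 1.1, 1.6, 2.2, 2.9])

# Pearson correlation coefficient; a value near -1 indicates a strong
# inverse relationship between brightness and water clarity.
r = np.corrcoef(grey, secchi)[0, 1]
```

A strongly negative `r` on such data supports using image brightness as a proxy for transparency.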
Brunner, J; Krummenauer, F; Lehr, H A
2000-04-01
Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that an inexpensive, commercially available software package (Adobe Photoshop), run on a Macintosh G3 computer with a built-in graphic capture board, provides versatile, easy-to-use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.
Methods and potentials for using satellite image classification in school lessons
NASA Astrophysics Data System (ADS)
Voss, Kerstin; Goetzke, Roland; Hodam, Henryk
2011-11-01
The FIS project - FIS stands for Fernerkundung in Schulen (Remote Sensing in Schools) - aims at a better integration of the topic "satellite remote sensing" in school lessons. According to this, the overarching objective is to teach pupils basic knowledge and fields of application of remote sensing. Despite the growing significance of digital geomedia, the topic "remote sensing" is not broadly supported in schools. Often, the topic is reduced to a short reflection on satellite images and used only for additional illustration of issues relevant for the curriculum. Without addressing the issue of image data, this can hardly contribute to the improvement of the pupils' methodical competences. Because remote sensing covers more than simple, visual interpretation of satellite images, it is necessary to integrate remote sensing methods like preprocessing, classification and change detection. Dealing with these topics often fails because of confusing background information and the lack of easy-to-use software. Based on these insights, the FIS project created different simple analysis tools for remote sensing in school lessons, which enable teachers as well as pupils to be introduced to the topic in a structured way. This functionality as well as the fields of application of these analysis tools will be presented in detail with the help of three different classification tools for satellite image classification.
Legal issues of computer imaging in plastic surgery: a primer.
Chávez, A E; Dagum, P; Koch, R J; Newman, J P
1997-11-01
Although plastic surgeons are increasingly incorporating computer imaging techniques into their practices, many fear the possibility of legally binding themselves to achieve surgical results identical to those reflected in computer images. Computer imaging allows surgeons to manipulate digital photographs of patients to project possible surgical outcomes. Some of the many benefits imaging techniques offer include improving doctor-patient communication, facilitating the education and training of residents, and reducing administrative and storage costs. Despite the many advantages computer imaging systems offer, however, surgeons understandably worry that imaging systems expose them to immense legal liability. The possible exploitation of computer imaging by novice surgeons as a marketing tool, coupled with the lack of consensus regarding the treatment of computer images, adds to the concern of surgeons. A careful analysis of the law, however, reveals that surgeons who use computer imaging carefully and conservatively, and adopt a few simple precautions, substantially reduce their vulnerability to legal claims. In particular, surgeons face possible claims of implied contract, failure to instruct, and malpractice from their use or failure to use computer imaging. Nevertheless, legal and practical obstacles frustrate each of those causes of action. Moreover, surgeons who incorporate a few simple safeguards into their practice may further reduce their legal susceptibility.
A scene-analysis approach to remote sensing. [San Francisco, California
NASA Technical Reports Server (NTRS)
Tenenbaum, J. M. (Principal Investigator); Fischler, M. A.; Wolf, H. C.
1978-01-01
The author has identified the following significant results. Geometric correspondence between a sensed image and a symbolic map is established in an initial stage of processing by adjusting parameters of a sensed model so that the image features predicted from the map optimally match corresponding features extracted from the sensed image. Information in the map is then used to constrain where to look in an image, what to look for, and how to interpret what is seen. For simple monitoring tasks involving multispectral classification, these constraints significantly reduce computation, simplify interpretation, and improve the utility of the resulting information. Previously intractable tasks requiring spatial and textural analysis may become straightforward in the context established by the map knowledge. The use of map-guided image analysis in monitoring the volume of water in a reservoir, the number of boxcars in a railyard, and the number of ships in a harbor is demonstrated.
Silvoniemi, Antti; Din, Mueez U; Suilamo, Sami; Shepherd, Tony; Minn, Heikki
2016-11-01
Delineation of gross tumour volume in 3D is a critical step in radiotherapy (RT) treatment planning for oropharyngeal cancer (OPC). Static [18F]-FDG PET/CT imaging has been suggested as a method to improve the reproducibility of tumour delineation, but it suffers from low specificity. We undertook this pilot study, in which dynamic features in time-activity curves (TACs) of [18F]-FDG PET/CT images were applied to help discriminate tumour from inflammation and adjacent normal tissue. Five patients with OPC underwent dynamic [18F]-FDG PET/CT imaging in the treatment position. Voxel-by-voxel analysis was performed to evaluate seven dynamic features developed with the knowledge of differences in glucose metabolism in different tissue types and visual inspection of TACs. The Gaussian mixture model and K-means algorithms were used to evaluate the performance of the dynamic features in discriminating tumour voxels compared to the performance of standardized uptake values obtained from static imaging. Some dynamic features showed a trend towards discrimination of different metabolic areas, but a lack of consistency means that clinical application is not recommended based on these results alone. The impact of inflammatory tissue remains a problem for volume delineation in RT of OPC, but a simple dynamic imaging protocol proved practicable and enabled simple data analysis techniques that show promise for complementing the information in static uptake values.
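The underlying idea, summarizing each voxel's TAC by a dynamic feature and clustering voxels on that feature, can be sketched with synthetic curves. This is a hedged illustration with invented kinetics (tumour accumulating, inflammation washing out), not the study's seven features or its data:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 60, 2.0)  # hypothetical frame mid-times (minutes)

# Synthetic time-activity curves: tumour voxels keep accumulating FDG,
# inflammatory voxels peak early and then wash out.
tumour = 1.0 + 0.05 * t + 0.05 * rng.standard_normal((50, t.size))
inflam = 2.0 - 0.02 * t + 0.05 * rng.standard_normal((50, t.size))
tacs = np.vstack([tumour, inflam])

# Dynamic feature: late-phase slope of each voxel's TAC (per-column fit)
slopes = np.polyfit(t, tacs.T, 1)[0]

def kmeans1d(x, k=2, n_iter=50):
    """Minimal 1-D k-means (Lloyd's algorithm); no empty-cluster guard."""
    centers = np.percentile(x, np.linspace(10, 90, k))
    for _ in range(n_iter):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return labels, centers

labels, centers = kmeans1d(slopes)
```

With well-separated kinetics the slope feature alone splits the two voxel populations cleanly; the study's point is that real tissue classes are far less consistent.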
Image Quality Ranking Method for Microscopy
Koho, Sami; Fazeli, Elnaz; Eriksson, John E.; Hänninen, Pekka E.
2016-01-01
Automated analysis of microscope images is necessitated by the increased need for high-resolution follow-up of events over time. Manually finding the right images to analyze, or to eliminate from data analysis, is a common day-to-day problem in microscopy research today, and the constantly growing size of image datasets does not help matters. We propose a simple method and a software tool for sorting images within a dataset according to their relative quality. We demonstrate the applicability of our method in finding good-quality images in a STED microscope sample preparation optimization image dataset. The results are validated by comparisons to subjective opinion scores, as well as to five state-of-the-art blind image-quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a high-content screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images by extensive simulations, and by comparing its performance against previously published, well-established microscopy autofocus metrics. PMID:27364703
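A classic autofocus-style metric of the kind such ranking methods are compared against is the variance of the image Laplacian: defocus suppresses high spatial frequencies, so sharper images score higher. The sketch below is a generic illustration of that idea, not the paper's ranking method:

```python
import numpy as np

def focus_score(img):
    """Variance of a discrete 5-point Laplacian; higher means sharper."""
    lap = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def blur(img):
    """3x3 box blur, a cheap stand-in for defocus."""
    out = np.zeros_like(img[1:-1, 1:-1])
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += img[1 + dy:img.shape[0] - 1 + dy,
                       1 + dx:img.shape[1] - 1 + dx]
    return out / 9.0

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))            # synthetic high-detail image
ranked = sorted([("sharp", sharp), ("blurred", blur(sharp))],
                key=lambda p: focus_score(p[1]), reverse=True)
```

Sorting a dataset by such a score puts in-focus frames first, which is the basic operation behind quality ranking and out-of-focus rejection.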
BioImageXD: an open, general-purpose and high-throughput image-processing platform.
Kankaanpää, Pasi; Paavolainen, Lassi; Tiitta, Silja; Karjalainen, Mikko; Päivärinne, Joacim; Nieminen, Jonna; Marjomäki, Varpu; Heino, Jyrki; White, Daniel J
2012-06-28
BioImageXD puts open-source computer science tools for three-dimensional visualization and analysis into the hands of all researchers, through a user-friendly graphical interface tuned to the needs of biologists. BioImageXD has no restrictive licenses or undisclosed algorithms and enables publication of precise, reproducible and modifiable workflows. It allows simple construction of processing pipelines and should enable biologists to perform challenging analyses of complex processes. We demonstrate its performance in a study of integrin clustering in response to selected inhibitors.
Concurrent Image Processing Executive (CIPE). Volume 3: User's guide
NASA Technical Reports Server (NTRS)
Lee, Meemong; Cooper, Gregory T.; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.; Kong, Mih-Seh
1990-01-01
CIPE (the Concurrent Image Processing Executive) is both an executive which organizes the parameter inputs for hypercube applications and an environment which provides temporary data workspace and simple real-time function definition facilities for image analysis. CIPE provides two types of user interface. The Command Line Interface (CLI) provides a simple command-driven environment allowing interactive function definition and evaluation of algebraic expressions. The menu interface employs a hierarchical screen-oriented menu system where the user is led through a menu tree to any specific application and then given a formatted panel screen for parameter entry. The guide describes how to initialize the system through the setup function, read data into CIPE symbols, manipulate and display data through the use of executive functions, and run an application in either user interface mode.
Photometric detection of high proper motions in dense stellar fields using difference image analysis
NASA Astrophysics Data System (ADS)
Eyer, L.; Woźniak, P. R.
2001-10-01
The difference image analysis (DIA) of the images obtained by the Optical Gravitational Lensing Experiment (OGLE-II) revealed a peculiar artefact in the sample of stars proposed as variable by Woźniak in one of the Galactic bulge fields: the occurrence of pairs of candidate variables showing anti-correlated light curves monotonic over a period of 3 yr. This effect can be understood, quantified and related to the stellar proper motions. DIA photometry supplemented with a simple model offers an effective and easy way to detect high proper motion stars in very dense stellar fields, where conventional astrometric searches are extremely inefficient.
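The artefact signature, pairs of monotonic, anti-correlated light curves, lends itself to a simple correlation search. Here is a hedged numpy sketch with synthetic curves (the star names, trend amplitude, and threshold are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 3.0, 120)  # observing baseline in years

# A high-proper-motion star moving through the reference image produces
# two spurious "variables" in DIA: flux rises at the star's new position
# and falls at the old one, giving monotonic, anti-correlated curves.
trend = 0.3 * t
curves = {
    "cand_A": trend + 0.02 * rng.standard_normal(t.size),
    "cand_B": -trend + 0.02 * rng.standard_normal(t.size),
    "cand_C": 0.02 * rng.standard_normal(t.size),  # ordinary constant star
}

def anticorrelated_pairs(curves, r_max=-0.9):
    """Return candidate pairs whose light curves are strongly anti-correlated."""
    names = sorted(curves)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if np.corrcoef(curves[a], curves[b])[0, 1] < r_max]

pairs = anticorrelated_pairs(curves)
```

In a real search the pair test would be combined with a proximity cut on the sky, since the two spurious variables sit within one PSF of each other.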
Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images
Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.
2012-01-01
Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC), that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773
An X-ray archive on your desk: the Einstein CD-ROMs
NASA Technical Reports Server (NTRS)
Prestwich, A.; Mcdowell, J.; Plummer, D.; Manning, K.; Garcia, M.
1992-01-01
Data from the Einstein Observatory imaging proportional counter (IPC) and high resolution imager (HRI) were released on several CD-ROM sets. The sets released so far include pointed IPC and HRI observations in both simple image and detailed photon event list format, as well as the IPC slew survey. With the data on these CD-ROMs the user can perform spatial analysis (e.g., surface brightness distributions), spectral analysis (with the IPC event lists), and timing analysis (with the IPC and HRI event lists). The next CD-ROM set will contain IPC unscreened data, allowing the user to perform custom screening to recover, for instance, data during times of lost aspect data or high particle background rates.
NASA Astrophysics Data System (ADS)
Kalchenko, Vyacheslav; Molodij, Guillaume; Kuznetsov, Yuri; Smolyakov, Yuri; Israeli, David; Meglinski, Igor; Harmelin, Alon
2016-03-01
The use of fluorescence imaging of vascular permeability has become a gold standard for assessing the inflammation process during experimental immune response in vivo, and optical fluorescence imaging provides a very useful and simple tool for this purpose. The motivation comes from the need for robust, simple quantification and data presentation of inflammation based on vascular permeability. Change of fluorescence intensity as a function of time is a widely accepted measure of vascular permeability during inflammation related to the immune response. In the present study we bring a new dimension by applying a more sophisticated, quantitative analysis of the vascular reaction using methods derived from astronomical observations, in particular space-time Fourier filtering followed by a polynomial orthogonal mode decomposition. We demonstrate that the temporal evolution of the fluorescence intensity observed at certain pixels correlates quantitatively with blood flow circulation under normal conditions. The approach allows determination of the regions of permeability and monitoring of both the fast kinetics related to the distribution of the contrast material in the circulatory system and the slow kinetics associated with its extravasation. Thus, we introduce a simple and convenient method for fast quantitative visualization of the leakage related to the inflammatory (immune) reaction in vivo.
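The separation of fast (circulatory) and slow (extravasation) kinetics can be illustrated with a temporal Fourier filter on a single pixel's intensity trace. This is a minimal numpy sketch under invented assumptions (frame rate, 2 Hz flow modulation, exponential leakage trend); it is not the authors' full space-time filtering or mode decomposition:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 10.0                      # hypothetical frame rate (frames/s)
t = np.arange(0, 30, 1 / fs)   # 30 s recording

# Pixel intensity: slow extravasation trend plus fast pulsatile flow signal
slow = 1.0 - np.exp(-t / 10.0)                  # contrast leakage kinetics
fast = 0.2 * np.sin(2 * np.pi * 2.0 * t)        # ~2 Hz flow modulation
signal = slow + fast + 0.01 * rng.standard_normal(t.size)

def lowpass(x, fs, cutoff_hz):
    """Ideal low-pass: zero out Fourier components above the cutoff."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    spec[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spec, n=x.size)

slow_est = lowpass(signal, fs, cutoff_hz=0.5)   # recovered leakage kinetics
```

The complementary high-pass residual (`signal - slow_est`) isolates the flow-related modulation, giving the two kinetic regimes the abstract describes.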
Theory and applications of structured light single pixel imaging
NASA Astrophysics Data System (ADS)
Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.
2018-02-01
Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, they share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically motivated reconstruction algorithms; however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, as well as provide a foundation for developing more advanced methods of image acquisition and reconstruction. The proposed frame-theoretic framework for single-pixel imaging results in improved noise robustness and a decrease in acquisition time, and can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single-element detector can be developed to realize the full potential associated with single-pixel imaging.
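The simplest frame-theoretic case is an orthogonal pattern set: Hadamard patterns form a tight frame, so reconstruction from the bucket-detector measurements is a single transpose multiply. The sketch below illustrates that special case only; the paper's framework covers far more general (non-orthogonal, redundant) frames.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix; n a power of two."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

n = 64                       # 8x8 scene, flattened to a vector
scene = np.linspace(0.0, 1.0, n)

H = hadamard(n)              # each row is one structured-light pattern
measurements = H @ scene     # one single-pixel detector value per pattern

# Rows of H are orthogonal (H H^T = n I), i.e. a tight frame with frame
# bound n, so the inverse is just a scaled transpose multiply.
recovered = H.T @ measurements / n
```

For a tight frame the reconstruction is exact; more general frames require the (pseudo-)inverse of the frame operator, which is where the noise-robustness analysis enters.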
NASA Technical Reports Server (NTRS)
Duong, T. A.
2004-01-01
In this paper, we present a new, simple, and optimized hardware-architecture sequential learning technique for adaptive Principal Component Analysis (PCA), which helps optimize hardware implementation in VLSI and overcomes the difficulties of traditional gradient descent in learning convergence and hardware implementation.
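Sequential (sample-by-sample) PCA learning of the kind suited to hardware is classically exemplified by Oja's rule, which extracts the principal eigenvector with a single multiply-accumulate update per sample. The sketch below shows Oja's rule as a generic illustration; it is not the specific architecture or learning rule of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
# Zero-mean data whose first principal axis carries most of the variance
X = rng.standard_normal((5000, 3)) * np.sqrt([5.0, 1.0, 0.2])

w = rng.standard_normal(3)
w /= np.linalg.norm(w)
eta = 0.005                       # small fixed learning rate
for epoch in range(5):
    for x in X:
        y = w @ x                 # linear neuron output
        w += eta * y * (x - y * w)  # Oja's normalized Hebbian update
w /= np.linalg.norm(w)

# Compare against the top eigenvector of the sample covariance
evals, evecs = np.linalg.eigh(np.cov(X.T))
top = evecs[:, np.argmax(evals)]
alignment = abs(w @ top)          # |cosine| between learned and true axis
```

The update needs only local quantities (`x`, `y`, `w`), which is what makes such rules attractive for VLSI implementation compared with batch gradient descent.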
Gallo-Oller, Gabriel; Ordoñez, Raquel; Dotor, Javier
2018-06-01
Since its first description, Western blot has been widely used in molecular biology labs. It constitutes a multistep method that allows the detection and/or quantification of proteins from simple to complex protein mixtures. Quantification constitutes a critical step in obtaining accurate and reproducible Western blot results. Because of the technical knowledge required for densitometry analysis, together with limited resource availability, standard office scanners are often used for image acquisition of developed Western blot films. Furthermore, the use of semi-quantitative software such as ImageJ (Java-based image-processing and analysis software) is clearly increasing in different scientific fields. In this work, we describe the use of an office scanner coupled with ImageJ, together with a new image background subtraction method, for accurate Western blot quantification. The proposed method represents an affordable, accurate and reproducible approach that could be used where resources are limited. Copyright © 2018 Elsevier B.V. All rights reserved.
Application of RNAMlet to surface defect identification of steels
NASA Astrophysics Data System (ADS)
Xu, Ke; Xu, Yang; Zhou, Peng; Wang, Lei
2018-06-01
As the three main production lines of steels, continuous casting slabs, hot rolled steel plates and cold rolled steel strips have different surface appearances and are produced at different line speeds. Therefore, the algorithms for surface defect identification of the three steel products have different requirements for real-time performance and interference resistance, and existing algorithms cannot be adaptively applied to all three. A new method of adaptive multi-scale geometric analysis named RNAMlet is proposed. The idea of RNAMlet comes from the non-symmetry anti-packing pattern representation model (NAM). The image is decomposed into a set of rectangular blocks asymmetrically according to grey-value changes of image pixels, and the two-dimensional Haar wavelet transform is then applied to all blocks. If the image background is complex, the number of blocks is large and more details of the image are utilized; if the image background is simple, the number of blocks is small and less computation time is needed. RNAMlet was tested with image samples of the three steel products and compared with three classical methods of multi-scale geometric analysis: Contourlet, Shearlet and Tetrolet. For the image samples with complicated backgrounds, such as continuous casting slabs and hot rolled steel plates, the defect identification rate obtained by RNAMlet was 1% higher than those of the other three methods. For the image samples with simple backgrounds, such as cold rolled steel strips, the computation time of RNAMlet was one-tenth that of the other three MGA methods, while the defect identification rates obtained by RNAMlet were higher than those of the other three methods.
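The per-block transform RNAMlet applies is the standard single-level 2-D Haar wavelet. As a hedged sketch (the block decomposition itself, the NAM part, is not shown), here is the Haar step on one rectangular block with its exact inverse:

```python
import numpy as np

def haar2d(block):
    """One level of the 2-D Haar transform on a block with even sides:
    returns approximation (LL) and detail (LH, HL, HH) subbands."""
    a = (block[0::2, :] + block[1::2, :]) / 2.0   # row averages
    d = (block[0::2, :] - block[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    block = np.empty((2 * a.shape[0], a.shape[1]))
    block[0::2, :], block[1::2, :] = a + d, a - d
    return block

rng = np.random.default_rng(5)
block = rng.random((8, 8))
ll, lh, hl, hh = haar2d(block)
restored = ihaar2d(ll, lh, hl, hh)
```

Surface defects show up as energy in the detail subbands, so subband statistics per block serve as the features fed to the classifier.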
Real-time restoration of white-light confocal microscope optical sections
Balasubramanian, Madhusudhanan; Iyengar, S. Sitharama; Beuerman, Roger W.; Reynaud, Juan; Wolenski, Peter
2009-01-01
Confocal microscopes (CM) are routinely used for building 3-D images of microscopic structures. Nonideal imaging conditions in a white-light CM introduce additive noise and blur, so the optical section images need to be restored prior to quantitative analysis. We present an adaptive noise filtering technique using the Karhunen–Loève expansion (KLE) by the method of snapshots, and a ringing metric to quantify the ringing artifacts introduced in the images restored at various iterations of the iterative Lucy–Richardson deconvolution algorithm. The KLE provides a set of basis functions that comprise the optimal linear basis for an ensemble of empirical observations. We show that most of the noise in the scene can be removed by reconstructing the images using the KLE basis vector with the largest eigenvalue. The prefiltering scheme presented is faster and does not require prior knowledge about image noise. Optical sections processed using the KLE prefilter can be restored using a simple inverse restoration algorithm; thus, the methodology is suitable for real-time image restoration applications. The KLE image prefilter outperforms the temporal-average prefilter in restoring CM optical sections. The ringing metric developed uses simple binary morphological operations to quantify the ringing artifacts and agrees with the visual observation of ringing artifacts in the restored images. PMID:20186290
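The method of snapshots makes the KLE tractable by eigen-decomposing the small snapshot Gram matrix instead of the huge pixel covariance. Below is a minimal numpy sketch of that computation and of denoising with the dominant mode, using a synthetic ensemble (the test pattern and noise level are invented, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(6)
# Ensemble of noisy "optical sections": one fixed structure plus noise
truth = np.outer(np.hanning(32), np.hanning(32)).ravel()
snapshots = np.array([truth + 0.1 * rng.standard_normal(truth.size)
                      for _ in range(12)])       # 12 images, 1024 pixels each

# Method of snapshots: eigen-decompose the 12x12 Gram matrix rather than
# the 1024x1024 pixel covariance matrix.
gram = snapshots @ snapshots.T
evals, evecs = np.linalg.eigh(gram)              # ascending eigenvalues
lead = evecs[:, -1]                              # largest-eigenvalue mode
mode = snapshots.T @ lead                        # lift back to pixel space
mode /= np.linalg.norm(mode)

# Denoised estimate: project each snapshot onto the dominant KLE mode
denoised = np.outer(snapshots @ mode, mode)
```

Because the shared structure dominates the ensemble, the leading mode captures it and the projection discards most of the per-frame noise, which is the prefiltering step applied before deconvolution.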
Composition of the cellular infiltrate in patients with simple and complex appendicitis.
Gorter, Ramon R; Wassenaar, Emma C E; de Boer, Onno J; Bakx, Roel; Roelofs, Joris J T H; Bunders, Madeleine J; van Heurn, L W Ernst; Heij, Hugo A
2017-06-15
It is now well established that there are two types of appendicitis: simple (nonperforating) and complex (perforating). This study evaluates differences in the composition of the immune cellular infiltrate in children with simple and complex appendicitis. A total of 47 consecutive children undergoing appendectomy for acute appendicitis between January 2011 and December 2012 were included. Intraoperative criteria were used to identify patients with either simple or complex appendicitis and were confirmed histopathologically. Immunohistochemical techniques were used to identify immune cell markers in the appendiceal specimens. Digital imaging analysis was performed using ImageJ. In the specimens of patients with complex appendicitis, significantly more myeloperoxidase-positive cells (neutrophils) (8.7% versus 1.2%, P < 0.001) were detected compared to patients with simple appendicitis. In contrast, fewer CD8+ T cells (0.4% versus 1.3%, P = 0.016), CD20+ cells (2.9% versus 9.0%, P = 0.027), and CD21+ cells (0.2% versus 0.6%, P = 0.028) were present in tissue from patients with complex compared to simple appendicitis. The increase in proinflammatory innate cells and decrease in adaptive cells in patients with complex appendicitis suggest potential aggravating processes in complex appendicitis. Further research into the underlying mechanisms may identify novel biomarkers to differentiate simple and complex appendicitis. Copyright © 2017 Elsevier Inc. All rights reserved.
Ciraj-Bjelac, Olivera; Faj, Dario; Stimac, Damir; Kosutic, Dusko; Arandjic, Danijela; Brkic, Hrvoje
2011-04-01
The purpose of this study is to investigate the need for and the possible achievements of a comprehensive QA programme, and to examine the effects of simple corrective actions on image quality in Croatia and Serbia. The paper focuses on activities related to the technical and radiological aspects of QA. The methodology consisted of two phases. The aim of the first phase was the initial assessment of mammography practice in terms of image quality, patient dose and equipment performance in a selected number of mammography units in Croatia and Serbia. Subsequently, corrective actions were suggested and implemented, and the same parameters were re-assessed. Most of the suggested corrective actions were simple, low-cost and possible to implement immediately, as these were related to working habits in mammography units, such as film processing and darkroom conditions. It has been demonstrated how simple quantitative assessment of image quality can be used for optimisation purposes. Analysis of image quality parameters such as optical density (OD), gradient and contrast demonstrated general similarities between mammography practices in Croatia and Serbia. The applied methodology should be expanded to a larger number of hospitals and applied on a regular basis. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
Simple Perfusion Apparatus (SPA) for Manipulation, Tracking and Study of Oocytes and Embryos
Angione, Stephanie L.; Oulhen, Nathalie; Brayboy, Lynae M.; Tripathi, Anubhav; Wessel, Gary M.
2016-01-01
Objective: To develop and implement a device and protocol for oocyte analysis at a single-cell level. The device must be capable of high-resolution imaging, temperature control, and perfusion of media, drugs, sperm, and immunolabeling reagents, all at defined flow rates. Each oocyte and resultant embryo must remain spatially separated and defined. Design: Experimental laboratory study. Setting: University and academic center for reproductive medicine. Patients/Animals: Women with eggs retrieved for ICSI cycles; adult female FVBN and B6C3F1 mouse strains; sea stars. Intervention: Real-time, longitudinal imaging of oocytes following fluorescent labeling, insemination, and viability tests. Main outcome measure(s): Cell and embryo viability, immunolabeling efficiency, live-cell endocytosis quantitation, precise metrics of fertilization and embryonic development. Results: Single oocytes were longitudinally imaged following significant changes in media, markers, endocytosis quantitation, and development, all under precise microfluidic control. Cells remained viable, enclosed, and separate for precision measurements, repeatability, and imaging. Conclusions: We engineered a simple device to load, visualize, experiment on, and effectively record individual oocytes and embryos without loss of cells. Prolonged incubation capabilities permit longitudinal studies without the need for transfer and potential loss of cells. This simple perfusion apparatus (SPA) provides careful, precise, and flexible handling of precious samples, facilitating clinical in vitro fertilization approaches. PMID:25450296
Low-rate image coding using vector quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makur, A.
1990-01-01
This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
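The baseline coder such work modifies can be sketched in a few lines: train a codebook on image blocks with k-means, then transmit only the codebook index of each block. This is a generic illustration of basic vector quantization, not the thesis's distributed-block or neural-network designs:

```python
import numpy as np

def train_codebook(vectors, k=8, n_iter=20, seed=0):
    """k-means (Lloyd) codebook for vector quantization."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(n_iter):
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # keep old centroid if cluster empties
                codebook[j] = vectors[labels == j].mean(axis=0)
    return codebook

def encode(vectors, codebook):
    """Nearest-codeword index per block: the transmitted bitstream."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(7)
img = rng.random((32, 32))
# 4x4 blocks flattened into 16-dimensional input vectors
blocks = img.reshape(8, 4, 8, 4).swapaxes(1, 2).reshape(-1, 16)
codebook = train_codebook(blocks, k=8)
indices = encode(blocks, codebook)           # 3 bits/block instead of 16 pixels
decoded = codebook[indices]                  # decoder lookup
```

The encoder's exhaustive nearest-neighbour search over the codebook is exactly the computation whose cost motivates the simplifications the thesis proposes.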
Analyzing microtomography data with Python and the scikit-image library.
Gouillart, Emmanuelle; Nunez-Iglesias, Juan; van der Walt, Stéfan
2017-01-01
The exploration and processing of images is a vital aspect of the scientific workflows of many X-ray imaging modalities. Users require tools that combine interactivity, versatility, and performance. scikit-image is an open-source image processing toolkit for the Python language that supports a large variety of file formats and is compatible with 2D and 3D images. The toolkit exposes a simple programming interface, with thematic modules grouping functions according to their purpose, such as image restoration, segmentation, and measurements. scikit-image users benefit from a rich scientific Python ecosystem that contains many powerful libraries for tasks such as visualization or machine learning. scikit-image combines a gentle learning curve, versatile image processing capabilities, and the scalable performance required for the high-throughput analysis of X-ray imaging data.
Deep Learning Nuclei Detection in Digitized Histology Images by Superpixels.
Sornapudi, Sudhir; Stanley, Ronald Joe; Stoecker, William V; Almubarak, Haidar; Long, Rodney; Antani, Sameer; Thoma, George; Zuna, Rosemary; Frazier, Shelliane R
2018-01-01
Advances in image analysis and computational techniques have facilitated automatic detection of critical features in histopathology images. Detection of nuclei is critical for squamous epithelium cervical intraepithelial neoplasia (CIN) classification into normal, CIN1, CIN2, and CIN3 grades. In this study, a deep learning (DL)-based nuclei segmentation approach is investigated based on gathering localized information through the generation of superpixels using a simple linear iterative clustering algorithm and training with a convolutional neural network. The proposed approach was evaluated on a dataset of 133 digitized histology images and achieved an overall nuclei detection (object-based) accuracy of 95.97%, with demonstrated improvement over imaging-based and clustering-based benchmark techniques. The proposed DL-based nuclei segmentation method with superpixel analysis has shown improved segmentation results in comparison to state-of-the-art methods.
A Simple Instrument Designed to Provide Consistent Digital Facial Images in Dermatology
Nirmal, Balakrishnan; Pai, Sathish B; Sripathi, Handattu
2013-01-01
Photography has proven to be a valuable tool in the field of dermatology. The major reason for poor photographs is the inability to produce comparable images in subsequent follow-ups. Combining digital photography with image processing software analysis brings consistency in tracking serial images. Digital photographs were taken with the aid of an instrument which we designed in our workshop to ensure that photographs were taken with identical patient positioning, camera angles and distance. It is of paramount importance in aesthetic dermatology to appreciate even subtle changes after each treatment session, which can be achieved by taking consistent digital images. PMID:23723469
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herchko, S; Ding, G
2016-06-15
Purpose: To develop an accurate, straightforward, and user-independent method for performing light versus radiation field coincidence quality assurance utilizing EPID images, a simple phantom made of readily-accessible materials, and a free software program. Methods: A simple phantom consisting of a blocking tray, graph paper, and high-density wire was constructed. The phantom was used to accurately set the size of a desired light field and imaged on the electronic portal imaging device (EPID). A macro written for use in ImageJ, a free image processing software, was then used to determine the radiation field size, utilizing the high-density wires on the phantom for a pixel-to-distance calibration. The macro also performs an analysis on the measured radiation field utilizing the tolerances recommended in the AAPM Task Group #142 report. To verify the accuracy of this method, radiochromic film was used to qualitatively demonstrate agreement between the film and EPID results, and an additional ImageJ macro was used to quantitatively compare the radiation field sizes measured with both the EPID and film images. Results: The results of this technique were benchmarked against film measurements, which have been the gold standard for testing light versus radiation field coincidence. The agreement between this method and film measurements was within 0.5 mm. Conclusion: Due to the operator dependency associated with tracing light fields and measuring radiation fields by hand when using film, this method allows for a more accurate comparison between the light and radiation fields with minimal operator dependency. Removing the need for radiographic or radiochromic film also eliminates a recurring cost and increases procedural efficiency.
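The pixel-to-distance calibration idea above is simple to sketch: wires at a known physical spacing give a millimetre-per-pixel scale, which converts detected field edges from pixels to millimetres. The function names, numbers, and the 2 mm tolerance below are illustrative assumptions, not the macro's actual code:

```python
# Hypothetical sketch of pixel-to-distance calibration and field-size checking.
def mm_per_pixel(wire_px_a, wire_px_b, wire_spacing_mm):
    """Scale factor from two wire positions a known physical distance apart."""
    return wire_spacing_mm / abs(wire_px_b - wire_px_a)

def field_size_mm(edge_left_px, edge_right_px, scale):
    """Radiation field width in mm from detected edge pixel positions."""
    return (edge_right_px - edge_left_px) * scale

def within_tolerance(measured_mm, nominal_mm, tol_mm=2.0):
    """Simple pass/fail check in the spirit of a TG-142-style tolerance."""
    return abs(measured_mm - nominal_mm) <= tol_mm

scale = mm_per_pixel(100, 612, 100.0)    # wires 100 mm apart span 512 px
width = field_size_mm(200, 712, scale)   # detected field edges 512 px apart
print(width, within_tolerance(width, 100.0))
```

The same scale factor can be applied independently to each field edge when the light/radiation offset, rather than just the width, is of interest.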
Humans make efficient use of natural image statistics when performing spatial interpolation.
D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S
2013-12-16
Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors, and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system efficiently exploits the statistical structure of natural images.
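The simplest heuristic the study compares against, the local mean, can be sketched directly. This is a toy illustration with invented pixel values, not the authors' code:

```python
# Estimate a missing centre pixel as the mean of its eight neighbours.
def local_mean_estimate(patch):
    """patch: 3x3 nested list; the centre value is the 'missing' pixel."""
    neighbours = [patch[r][c] for r in range(3) for c in range(3)
                  if not (r == 1 and c == 1)]
    return sum(neighbours) / len(neighbours)

patch = [[10, 12, 11],
         [13, None, 12],   # centre pixel unknown
         [11, 10, 12]]
print(local_mean_estimate(patch))
```

The optimal observers in the study replace this uniform average with weights learned from natural-image statistics.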
Some spectral and spatial characteristics of LANDSAT data
NASA Technical Reports Server (NTRS)
1982-01-01
Activities are provided for: (1) developing insight into the way in which the LANDSAT MSS produces multispectral data; (2) promoting understanding of what a "pixel" means in a LANDSAT image and the implications of the term "mixed pixel"; (3) explaining the concept of spectral signatures; (4) deriving a simple signature for a class or feature by analysis: of the four band images; (5) understanding the production of false color composites; (6) appreciating the use of color additive techniques; (7) preparing Diazo images; and (8) making quick visual identifications of major land cover types by their characteristic gray tones or colors in LANDSAT images.
Nema, Shubham; Hasan, Whidul; Bhargava, Anamika; Bhargava, Yogesh
2016-09-15
Behavioural neuroscience relies on software-driven methods for behavioural assessment, but the field lacks cost-effective, robust, open-source software for behavioural analysis. Here we propose a novel method which we call ZebraTrack. It includes a cost-effective imaging setup for distraction-free behavioural acquisition, automated tracking using the open-source ImageJ software, and a workflow for extraction of behavioural endpoints. Our ImageJ algorithm is capable of giving users control at key steps while maintaining automation in tracking, without the need to install external plugins. We validated this method by testing novelty-induced anxiety behaviour in adult zebrafish. Our results, in agreement with established findings, showed that during state anxiety, zebrafish showed reduced distance travelled, increased thigmotaxis and more freezing events. Furthermore, we propose a method to represent both the spatial and temporal distribution of choice-based behaviour, which is currently not possible with simple videograms. The ZebraTrack method is simple and economical, yet robust enough to give results comparable with those obtained from costly proprietary software such as EthoVision XT. We have developed and validated a novel cost-effective method for behavioural analysis of adult zebrafish using open-source ImageJ software. Copyright © 2016 Elsevier B.V. All rights reserved.
Smartphone-based colorimetric analysis for detection of saliva alcohol concentration.
Jung, Youngkee; Kim, Jinhee; Awofeso, Olumide; Kim, Huisung; Regnier, Fred; Bae, Euiwon
2015-11-01
A simple device and associated analytical methods are reported. We provide objective and accurate determination of saliva alcohol concentrations using smartphone-based colorimetric imaging. The device utilizes any smartphone with a miniature attachment that positions the sample and provides constant illumination for sample imaging. Analyses of histograms based on channel imaging of red-green-blue (RGB) and hue-saturation-value (HSV) color space provide unambiguous determination of blood alcohol concentration from color changes on sample pads. A smartphone-based sample analysis by colorimetry was developed and tested with blind samples that matched the training sets. This technology can be adapted to any smartphone and used to conduct color change assays.
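The RGB-to-HSV step of such a colorimetric analysis can be sketched with the standard library's colorsys module. The test-pad pixel values below are invented for illustration; a real assay would map the resulting hue/saturation to concentration via a calibration fitted to training data:

```python
# Average a pad's RGB pixels and convert the mean colour to HSV.
import colorsys

def mean_rgb(pixels):
    """Average a list of (r, g, b) tuples with channels in 0..255."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def rgb_to_hsv(rgb):
    r, g, b = (c / 255.0 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)

pad = [(200, 60, 60), (210, 50, 55), (205, 55, 58)]  # reddish test pad
h, s, v = rgb_to_hsv(mean_rgb(pad))
print(round(h, 3), round(s, 3), round(v, 3))
```

Working in HSV separates the colour change (hue, saturation) from illumination brightness (value), which is one motivation for analysing both colour spaces.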
Computer simulation of schlieren images of rotationally symmetric plasma systems: a simple method.
Noll, R; Haas, C R; Weikl, B; Herziger, G
1986-03-01
Schlieren techniques are commonly used methods for quantitative analysis of cylindrical or spherical index of refraction profiles. Many schlieren objects, however, are characterized by more complex geometries, so we have investigated the more general case of noncylindrical, rotationally symmetric distributions of index of refraction n(r,z). Assuming straight ray paths in the schlieren object we have calculated 2-D beam deviation profiles. It is shown that experimental schlieren images of the noncylindrical plasma generated by a plasma focus device can be simulated with these deviation profiles. The computer simulation allows a quantitative analysis of these schlieren images, which yields, for example, the plasma parameters, electron density, and electron density gradients.
Bieri, Michael; d'Auvergne, Edward J; Gooley, Paul R
2011-06-01
Investigation of protein dynamics on the ps-ns and μs-ms timeframes provides detailed insight into the mechanisms of enzymes and the binding properties of proteins. Nuclear magnetic resonance (NMR) is an excellent tool for studying protein dynamics at atomic resolution. Analysis of relaxation data using model-free analysis can be a tedious and time consuming process, which requires good knowledge of scripting procedures. The software relaxGUI was developed for fast and simple model-free analysis and is fully integrated into the software package relax. It is written in Python and uses wxPython to build the graphical user interface (GUI) for maximum performance and multi-platform use. This software allows the analysis of NMR relaxation data with ease and the generation of publication quality graphs as well as color coded images of molecular structures. The interface is designed for simple data analysis and management. The software was tested and validated against the command line version of relax.
Qualitative and quantitative interpretation of SEM image using digital image processing.
Saladra, Dawid; Kopernik, Magdalena
2016-10-01
The aim of this study is the improvement of qualitative and quantitative analysis of scanning electron microscope micrographs through the development of a computer program that enables automatic crack analysis of scanning electron microscopy (SEM) micrographs. Micromechanical tests of pneumatic ventricular assist devices result in a large number of micrographs; therefore, the analysis must be automatic. Tests of athrombogenic titanium nitride/gold coatings deposited on polymeric substrates (Bionate II) are performed. These tests include microshear, microtension and fatigue analysis. Anisotropic surface defects observed in the SEM micrographs require support for qualitative and quantitative interpretation. Improvement of qualitative analysis of scanning electron microscope images was achieved by a set of computational tools that includes binarization, simplified expanding, expanding, simple image statistic thresholding, the Laplacian 1 and Laplacian 2 filters, Otsu thresholding and reverse binarization. Several modifications of known image processing techniques and combinations of the selected techniques were applied. The introduced quantitative analysis of digital scanning electron microscope images enables computation of stereological parameters such as area, crack angle, crack length, and total crack length per unit area. This study also compares the functionality of the developed computer program with existing digital image processing applications. The described pre- and postprocessing may be helpful in scanning electron microscopy and transmission electron microscopy surface investigations. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
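Of the tools listed, Otsu thresholding is compact enough to sketch from scratch. This toy version (not the paper's program) picks the grey level that maximises between-class variance of the histogram, with invented bimodal pixel data:

```python
# Self-contained Otsu threshold: maximise between-class variance.
def otsu_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    sum_bg = 0.0
    w_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue                      # no background pixels yet
        w_fg = total - w_bg
        if w_fg == 0:
            break                         # everything is background
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "micrograph": dark substrate pixels and bright defect pixels.
pixels = [20] * 50 + [30] * 50 + [200] * 20 + [210] * 20
t = otsu_threshold(pixels)
print(t)
```

Binarizing with `value > t` then separates the dark substrate class from the bright defect class.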
Tracking and imaging humans on heterogeneous infrared sensor arrays for law enforcement applications
NASA Astrophysics Data System (ADS)
Feller, Steven D.; Zheng, Y.; Cull, Evan; Brady, David J.
2002-08-01
We present a plan for the integration of geometric constraints in the source, sensor and analysis levels of sensor networks. The goal of geometric analysis is to reduce the dimensionality and complexity of distributed sensor data analysis so as to achieve real-time recognition and response to significant events. Application scenarios include biometric tracking of individuals, counting and analysis of individuals in groups of humans and distributed sentient environments. We are particularly interested in using this approach to provide networks of low cost point detectors, such as infrared motion detectors, with complex imaging capabilities. By extending the capabilities of simple sensors, we expect to reduce the cost of perimeter and site security applications.
NASA Astrophysics Data System (ADS)
Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.
2018-05-01
A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging-system's calibration parameters. This is essential to validate the repeatability of the parameters' estimation, to detect any behavioural changes in the camera/imaging system, and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each has different methodological bases, advantages, and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE) for single-camera analysis, and 0.07 to 0.19 mm for dual-camera analysis. To the authors' best knowledge, this work is the first to address the topic of DF stability analysis.
Ganalyzer: A tool for automatic galaxy image analysis
NASA Astrophysics Data System (ADS)
Shamir, Lior
2011-05-01
Ganalyzer is a model-based tool that automatically analyzes and classifies galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ~10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large datasets of galaxy images collected by autonomous sky surveys such as SDSS, LSST or DES.
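The first step Ganalyzer performs, building a radial intensity plot around the galaxy centre, can be sketched as follows. This is an illustrative reimplementation on a toy image, not Ganalyzer's actual code:

```python
# Radial intensity profile: mean pixel value per integer-radius ring.
import math

def radial_profile(image, cx, cy, n_bins):
    """image: 2-D nested list; returns mean intensity per integer radius bin."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            r = int(math.hypot(x - cx, y - cy))
            if r < n_bins:
                sums[r] += value
                counts[r] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

# Toy "galaxy": brightness falls off with distance from the centre pixel.
img = [[max(0, 10 - int(math.hypot(x - 4, y - 4))) for x in range(9)]
       for y in range(9)]
profile = radial_profile(img, 4, 4, 5)
print(profile)
```

Ganalyzer then detects peaks in such a profile as a function of angle and measures their slopes to estimate spirality.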
CMOS image sensors as an efficient platform for glucose monitoring.
Devadhasan, Jasmine Pramila; Kim, Sanghyo; Choi, Cheol Soo
2013-10-07
Complementary metal oxide semiconductor (CMOS) image sensors have been used previously in the analysis of biological samples. In the present study, a CMOS image sensor was used to monitor the concentration of oxidized mouse plasma glucose (86-322 mg dL(-1)) based on photon count variation. Measurement of the concentration of oxidized glucose was dependent on changes in color intensity; color intensity increased with increasing glucose concentration. Higher color density increasingly prevented photons from passing through the polydimethylsiloxane (PDMS) chip, which suggests that the photon count was governed by color intensity. Photons were detected by a photodiode in the CMOS image sensor and converted to digital numbers by an analog-to-digital converter (ADC). Additionally, UV-spectral analysis and time-dependent photon analysis proved the efficiency of the detection system. This simple, effective, and consistent method for glucose measurement shows that CMOS image sensors are efficient devices for monitoring glucose in point-of-care applications.
Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis
Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.
2006-01-01
In recent years multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than a widely-used T-field, has been implemented in the correlation analysis for more accurate results. An example with in-vivo data is presented demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709
Image processing analysis of traditional Gestalt vision experiments
NASA Astrophysics Data System (ADS)
McCann, John J.
2002-06-01
In the late 19th century, Gestalt psychology rebelled against the popular new science of psychophysics. The Gestalt revolution used many fascinating visual examples to illustrate that the whole is greater than the sum of its parts. Color constancy was an important example. The physical interpretation of sensations and their quantification by JNDs and Weber fractions were met with innumerable examples in which two 'identical' physical stimuli did not look the same. The fact that large changes in the color of the illumination failed to change color appearance in real scenes demanded something more than quantifying the psychophysical response of a single pixel. The debate continues today between proponents of physical, pixel-based colorimetry and perceptual, image-based cognitive interpretations. Modern instrumentation has made colorimetric pixel measurement universal. As well, new examples of unconscious inference continue to be reported in the literature. Image processing provides a new way of analyzing familiar Gestalt displays. Since the pioneering experiments by Fergus Campbell and Land, we know that human vision has independent spatial channels and independent color channels. Color matching data from color constancy experiments agree with spatial comparison analysis. In this analysis, simple spatial processes can explain the different appearances of 'identical' stimuli by analyzing the multiresolution spatial properties of their surrounds. Benary's Cross, White's Effect, the Checkerboard Illusion and the Dungeon Illusion can all be understood by the analysis of their low-spatial-frequency components. Just as with color constancy, these Gestalt images are most simply described by the analysis of spatial components. Simple spatial mechanisms account for the appearance of 'identical' stimuli in complex scenes; complex cognitive processes are not required to calculate appearances in familiar Gestalt experiments.
Techniques for automatic large scale change analysis of temporal multispectral imagery
NASA Astrophysics Data System (ADS)
Mercovich, Ryan A.
Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery, and require analyst input to provide anything more than a simple pixel level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm based on a set of metrics that performs a large area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large area images and provide useful information to an analyst about small regions that have undergone specific types of change while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys among others. 
By utilizing a feature-based approach founded on applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring in large area, high resolution image sequences. The change detection and analysis algorithm developed could be adapted to many potential image change scenarios to perform automatic large scale analysis of change.
Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan
2018-01-01
A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing is very important to accurately determine the characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-imaging-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed using a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to analyze plant growth via large-scale plant image data easily.
Lundell, Henrik; Alexander, Daniel C; Dyrby, Tim B
2014-08-01
Stimulated echo acquisition mode (STEAM) diffusion MRI can be advantageous over pulsed-gradient spin-echo (PGSE) for diffusion times that are long compared with T2. It therefore has potential for biomedical diffusion imaging applications at 7T and above, where T2 is short. However, gradient pulses other than the diffusion gradients in the STEAM sequence contribute much greater diffusion weighting than in PGSE and lead to a disrupted experimental design. Here, we introduce a simple compensation to the STEAM acquisition that avoids the orientational bias and disrupted experiment design that these gradient pulses can otherwise produce. The compensation is simple to implement by adjusting the gradient vectors in the diffusion pulses of the STEAM sequence, so that the net effective gradient vector, including contributions from diffusion and other gradient pulses, is as the experiment intends. High angular resolution diffusion imaging (HARDI) data were acquired with and without the proposed compensation. The data were processed to derive standard diffusion tensor imaging (DTI) maps, which highlight the need for the compensation. Ignoring the other gradient pulses, a bias in DTI parameters from STEAM acquisition is found, due both to confounds in the analysis and the experiment design. Retrospectively correcting the analysis with a calculation of the full B matrix can partly correct for these confounds, but an acquisition that is compensated as proposed is needed to remove the effect entirely. © 2014 The Authors. NMR in Biomedicine published by John Wiley & Sons, Ltd.
PCANet: A Simple Deep Learning Baseline for Image Classification?
Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi
2015-12-01
In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
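The core of PCANet's first stage, learning filters as leading principal components of mean-removed patches, can be illustrated in miniature. This toy sketch (my own illustration, not the authors' code) computes the first principal component of 2-D points in closed form rather than learning a full filter bank:

```python
# First principal component of 2-D points via the 2x2 covariance matrix.
import math

def first_principal_component(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centred = [(x - mx, y - my) for x, y in points]
    # Covariance matrix entries [[sxx, sxy], [sxy, syy]]
    sxx = sum(x * x for x, _ in centred) / n
    syy = sum(y * y for _, y in centred) / n
    sxy = sum(x * y for x, y in centred) / n
    # Angle of the leading eigenvector (major axis) in closed form
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return (math.cos(theta), math.sin(theta))

# Noisy points along the line y = 2x: the component should point along (1, 2).
pts = [(i, 2 * i + 0.1 * ((-1) ** i)) for i in range(10)]
v = first_principal_component(pts)
print(v)
```

In PCANet proper, the "points" are vectorised image patches and several leading components are kept, each reshaped into a convolutional filter.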
Sereshti, Hassan; Poursorkh, Zahra; Aliakbarzadeh, Ghazaleh; Zarre, Shahin; Ataolahi, Sahar
2018-01-15
Quality of saffron, a valuable food additive, could considerably affect the consumers' health. In this work, a novel preprocessing strategy for image analysis of saffron thin layer chromatographic (TLC) patterns was introduced. This includes performing a series of image pre-processing techniques on TLC images such as compression, inversion, elimination of general baseline (using asymmetric least squares (AsLS)), removing spots shift and concavity (by correlation optimization warping (COW)), and finally conversion to RGB chromatograms. Subsequently, an unsupervised multivariate data analysis including principal component analysis (PCA) and k-means clustering was utilized to investigate the soil salinity effect, as a cultivation parameter, on saffron TLC patterns. This method was used as a rapid and simple technique to obtain the chemical fingerprints of saffron TLC images. Finally, the separated TLC spots were chemically identified using high-performance liquid chromatography-diode array detection (HPLC-DAD). Accordingly, the saffron quality from different areas of Iran was evaluated and classified. Copyright © 2017 Elsevier Ltd. All rights reserved.
A method for rapid quantitative assessment of biofilms with biomolecular staining and image analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larimer, Curtis J.; Winder, Eric M.; Jeters, Robert T.
Here, the accumulation of bacteria in surface attached biofilms, or biofouling, can be detrimental to human health, dental hygiene, and many industrial processes. A critical need in identifying and preventing the deleterious effects of biofilms is the ability to observe and quantify their development. Analytical methods capable of assessing early stage fouling are cumbersome or lab-confined, subjective, and qualitative. Herein, a novel photographic method is described that uses biomolecular staining and image analysis to enhance contrast of early stage biofouling. A robust algorithm was developed to objectively and quantitatively measure surface accumulation of Pseudomonas putida from photographs and results were compared to independent measurements of cell density. Results from image analysis quantified biofilm growth intensity accurately and with approximately the same precision of the more laborious cell counting method. This simple method for early stage biofilm detection enables quantifiable measurement of surface fouling and is flexible enough to be applied from the laboratory to the field. Broad spectrum staining highlights fouling biomass, photography quickly captures a large area of interest, and image analysis rapidly quantifies fouling in the image.
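The final quantification step in a method like this reduces to a threshold and a ratio: the fraction of pixels bright enough to count as stained biomass. The threshold and pixel data below are illustrative assumptions, not values from the paper:

```python
# Fraction of surface covered by stained biofilm, from per-pixel intensities.
def coverage_fraction(intensities, threshold):
    """Fraction of pixels whose stain intensity exceeds the threshold."""
    fouled = sum(1 for v in intensities if v > threshold)
    return fouled / len(intensities)

pixels = [5, 7, 120, 130, 6, 140, 8, 125]   # bright pixels = stained biomass
print(coverage_fraction(pixels, 50))
```

Tracking this fraction over time gives the growth-intensity curve that the paper validates against manual cell counts.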
A method for rapid quantitative assessment of biofilms with biomolecular staining and image analysis
Larimer, Curtis J.; Winder, Eric M.; Jeters, Robert T.; ...
2015-12-07
Here, the accumulation of bacteria in surface attached biofilms, or biofouling, can be detrimental to human health, dental hygiene, and many industrial processes. A critical need in identifying and preventing the deleterious effects of biofilms is the ability to observe and quantify their development. Analytical methods capable of assessing early stage fouling are cumbersome or lab-confined, subjective, and qualitative. Herein, a novel photographic method is described that uses biomolecular staining and image analysis to enhance contrast of early stage biofouling. A robust algorithm was developed to objectively and quantitatively measure surface accumulation of Pseudomonas putida from photographs and results weremore » compared to independent measurements of cell density. Results from image analysis quantified biofilm growth intensity accurately and with approximately the same precision of the more laborious cell counting method. This simple method for early stage biofilm detection enables quantifiable measurement of surface fouling and is flexible enough to be applied from the laboratory to the field. Broad spectrum staining highlights fouling biomass, photography quickly captures a large area of interest, and image analysis rapidly quantifies fouling in the image.« less
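The kind of objective coverage measurement described can be illustrated with a simple threshold-and-count sketch. Otsu's method stands in here for the paper's (unspecified) algorithm, and the function names are ours:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Global Otsu threshold: the grey level maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                        # background class weight
    w1 = 1.0 - w0                            # foreground class weight
    m = np.cumsum(p * centers)               # running first moment
    mu0 = m / np.clip(w0, 1e-12, None)
    mu1 = (m[-1] - m) / np.clip(w1, 1e-12, None)
    sigma_b = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
    return centers[np.argmax(sigma_b)]

def fouling_coverage(img):
    """Fraction of pixels above the Otsu threshold, as a stain-coverage proxy."""
    return float((img > otsu_threshold(img)).mean())
```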
Aksoy, S; Erdil, I; Hocaoglu, E; Inci, E; Adas, G T; Kemik, O; Turkay, R
2018-02-01
The present study indicates that simple and hydatid cysts of the liver are a common health problem in Turkey. The aim of the study was to differentiate the various types of hydatid cysts from simple cysts using diffusion-weighted images. In total, 37 hydatid cysts and 36 simple cysts in the liver were diagnosed. We retrospectively reviewed the medical records of patients who had undergone both ultrasonography and magnetic resonance imaging. We measured apparent diffusion coefficient (ADC) values of all the cysts and then compared the findings. There was no statistically significant difference between the ADC values of simple cysts and type I hydatid cysts. For the other types of hydatid cysts, however, it is possible to differentiate hydatid cysts from simple cysts using the ADC values. Although in our study we could not differentiate between type I hydatid cysts and simple cysts in the liver, diffusion-weighted images are very useful for differentiating the other types of hydatid cysts from simple cysts using the ADC values.
Image Formation in Lenses and Mirrors, a Complete Representation
ERIC Educational Resources Information Center
Bartlett, Albert A.
1976-01-01
Provides tables and graphs that give a complete and simple picture of the relationships of image distance, object distance, and magnification in all formations of images by simple lenses and mirrors. (CP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piron, O; Varfalvy, N; Archambault, L
2015-06-15
Purpose: One of the side effects of radiotherapy for head and neck (H&N) cancer is anatomical change in the patient, which can strongly affect the planned dose distribution. In this work, our goal is to demonstrate that relative analysis of EPID images is a fast and simple method to detect anatomical changes that can have a strong dosimetric impact on the treatment plan for H&N patients. Methods: EPID images were recorded at every beam and all fractions for 50 H&N patients. Of these, five patients that showed important anatomical changes were selected to evaluate the dosimetric impact of these changes and to correlate it with a 2D relative gamma analysis of EPID images. The planning CT and original contours were deformed onto CBCTs (one mid-treatment and one at the end of treatment). By using deformable image registration, it was possible to map accurate CT numbers from the planning CT to the anatomy of the day obtained with CBCTs. Clinical treatment plans were then copied onto the deformed dataset and the dose was re-computed. In parallel, EPID images were analysed using the gamma index (3%/3mm) relative to the first image. Results: It was possible to divide patients into two distinct, statistically different (p<0.001) categories using an average gamma index of 0.5 as a threshold. Below this threshold, no significant dosimetric degradation of the plan is observed. Above this threshold, two types of plan deterioration were seen: (1) the target dose increases, but coverage remains adequate, while the dose to at least one OAR increases beyond tolerances; (2) the OAR doses remain low, but the target dose is reduced and coverage becomes inadequate. Conclusion: Relative gamma analysis of EPID images could indeed be a fast and simple method to detect anatomical changes that can potentially deteriorate treatment plans for H&N patients. This work was supported in part by Varian Medical Systems.
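A brute-force global gamma evaluation of the sort used for the EPID comparison (3% dose difference, 3 mm distance-to-agreement) can be sketched as follows. This is a naive O(N²) illustration for small images under our own conventions, not a clinical implementation:

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing=1.0, dd=0.03, dta=3.0):
    """Global gamma pass rate: dose tolerance dd (fraction of max reference
    dose), distance-to-agreement dta (mm), pixel spacing in mm.  Brute force
    over all pixel pairs, so only suitable for small images."""
    ys, xs = np.indices(ref.shape)
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1) * spacing
    r, e = ref.ravel(), ev.ravel()
    norm = dd * ref.max()
    gammas = np.empty(r.size)
    for i in range(r.size):
        d2 = ((coords - coords[i]) ** 2).sum(axis=1)      # squared distances (mm^2)
        g2 = (e - r[i]) ** 2 / norm ** 2 + d2 / dta ** 2  # generalized Gamma function
        gammas[i] = np.sqrt(g2.min())
    return float((gammas <= 1.0).mean())
```

A per-image average of `gammas` (rather than the pass rate) would correspond to the 0.5 threshold statistic described in the abstract.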
NASA Astrophysics Data System (ADS)
Min, Junwei; Yao, Baoli; Ketelhut, Steffi; Kemper, Björn
2017-02-01
The modular combination of optical microscopes with digital holographic microscopy (DHM) has proven to be a powerful tool for quantitative live cell imaging. The introduction of a condenser and different microscope objectives (MOs) simplifies the usage of the technique and makes it easier to measure different kinds of specimens at different magnifications. However, the high flexibility of illumination and imaging also causes variable phase aberrations that need to be eliminated for high resolution quantitative phase imaging. Existing phase aberration compensation methods either require adding additional elements into the reference arm or need specimen-free reference areas or separate reference holograms to build suitable digital phase masks. These inherent requirements make them impractical for use with highly variable illumination and imaging systems and prevent on-line monitoring of living cells. In this paper, we present a simple numerical method for phase aberration compensation based on the analysis of holograms in the spatial frequency domain, with capabilities for on-line quantitative phase imaging. From a single-shot off-axis hologram, the whole phase aberration can be eliminated automatically without numerical fitting or pre-knowledge of the setup. The capabilities and robustness for quantitative phase imaging of living cancer cells are demonstrated.
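The core idea, isolating the off-axis sideband in the spatial-frequency domain and recentering it to remove the carrier (the tilt part of the aberration), can be sketched with FFTs. This is a minimal illustration under our own assumptions (square image, well-separated sideband), not the authors' full compensation method:

```python
import numpy as np

def demodulate_offaxis(hologram):
    """Isolate the strongest off-axis sideband in the spatial-frequency
    domain, recenter it (removing the carrier / tilt), low-pass filter,
    and return the demodulated phase."""
    n = hologram.shape[0]                    # assume a square hologram
    F = np.fft.fftshift(np.fft.fft2(hologram))
    c, r = n // 2, n // 8
    mask = np.ones(F.shape, bool)
    mask[c - r:c + r, c - r:c + r] = False   # hide the DC / zero-order region
    iy, ix = np.unravel_index(np.argmax(np.abs(F) * mask), F.shape)
    F1 = np.roll(F, (c - iy, c - ix), axis=(0, 1))
    yy, xx = np.indices(F.shape)
    F1[(yy - c) ** 2 + (xx - c) ** 2 > r ** 2] = 0.0  # keep the recentered order
    return np.angle(np.fft.ifft2(np.fft.ifftshift(F1)))
```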
Mohiyeddini, Changiz
2017-09-01
Repressive coping, as a means of preserving a positive self-image, has been widely explored in the context of dealing with self-evaluative cues. The current study extends this research by exploring whether repressive coping is associated with lower levels of body image concerns, drive for thinness, bulimic symptoms, and higher positive rational acceptance. A sample of 229 female college students was recruited in South London. Repressive coping was measured via the interaction between trait anxiety and defensiveness. The results of moderated regression analysis with simple slope analysis show that compared to non-repressors, repressors reported lower levels of body image concerns, drive for thinness, and bulimic symptoms while exhibiting a higher use of positive rational acceptance. These findings, in line with previous evidence, suggest that repressive coping may be adaptive particularly in the context of body image. Copyright © 2017 Elsevier Ltd. All rights reserved.
Inferring Biological Structures from Super-Resolution Single Molecule Images Using Generative Models
Maji, Suvrajit; Bruchez, Marcel P.
2012-01-01
Localization-based super-resolution imaging is presently limited by sampling requirements for dynamic measurements of biological structures. Generating an image requires serial acquisition of individual molecular positions at sufficient density to define a biological structure, increasing the acquisition time. Efficient analysis of biological structures from sparse localization data could substantially improve the dynamic imaging capabilities of these methods. Using a feature extraction technique called the Hough transform, simple biological structures are identified from both simulated and real localization data. We demonstrate that these generative models can efficiently infer biological structures in the data from far fewer localizations than are required for complete spatial sampling. Analysis at partial data densities revealed efficient recovery of clathrin vesicle size distributions and microtubule orientation angles with as little as 10% of the localization data. This approach significantly increases the temporal resolution for dynamic imaging and provides quantitatively useful biological information. PMID:22629348
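A toy version of circle-center detection by a Hough transform on sparse localizations (a deliberate simplification of the paper's generative-model analysis; all names and parameters here are ours) might look like:

```python
import numpy as np

def hough_circle_center(points, radius, grid=64, n_angles=360):
    """Vote for circle centers from sparse 2D localizations: each point votes
    along a circle of the given radius around itself; the accumulator peak is
    the most supported center (coordinates assumed to lie in [0, grid))."""
    acc = np.zeros((grid, grid))
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for x, y in points:
        cx = np.rint(x - radius * np.cos(angles)).astype(int)
        cy = np.rint(y - radius * np.sin(angles)).astype(int)
        ok = (cx >= 0) & (cx < grid) & (cy >= 0) & (cy < grid)
        np.add.at(acc, (cy[ok], cx[ok]), 1)      # accumulate votes
    iy, ix = np.unravel_index(np.argmax(acc), acc.shape)
    return ix, iy
```

Even a heavily subsampled set of points on the circle still produces a clear accumulator peak, mirroring the paper's point about sparse data.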
Deep Learning Nuclei Detection in Digitized Histology Images by Superpixels
Sornapudi, Sudhir; Stanley, Ronald Joe; Stoecker, William V.; Almubarak, Haidar; Long, Rodney; Antani, Sameer; Thoma, George; Zuna, Rosemary; Frazier, Shelliane R.
2018-01-01
Background: Advances in image analysis and computational techniques have facilitated automatic detection of critical features in histopathology images. Detection of nuclei is critical for squamous epithelium cervical intraepithelial neoplasia (CIN) classification into normal, CIN1, CIN2, and CIN3 grades. Methods: In this study, a deep learning (DL)-based nuclei segmentation approach is investigated, based on gathering localized information through the generation of superpixels using a simple linear iterative clustering algorithm and training with a convolutional neural network. Results: The proposed approach was evaluated on a dataset of 133 digitized histology images and achieved an overall nuclei detection (object-based) accuracy of 95.97%, with demonstrated improvement over imaging-based and clustering-based benchmark techniques. Conclusions: The proposed DL-based nuclei segmentation method with superpixel analysis has shown improved segmentation results in comparison to state-of-the-art methods. PMID:29619277
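A crude stand-in for SLIC superpixels, k-means on intensity plus compactness-weighted coordinates with regular grid seeding, illustrates the localized-clustering idea (real SLIC additionally restricts each center's search to a local window; this unrestricted sketch is our simplification):

```python
import numpy as np

def slic_like_superpixels(img, n_segments=8, compactness=0.1, n_iter=10):
    """SLIC-flavoured superpixels: k-means on (intensity, y, x) features with
    regular-grid seeding.  `compactness` weights the spatial coordinates
    against intensity, as in SLIC proper."""
    h, w = img.shape
    yy, xx = np.indices((h, w))
    feats = np.stack([img.ravel(),
                      compactness * yy.ravel() / h,
                      compactness * xx.ravel() / w], axis=1)
    # regular seeding over the flattened image, as SLIC seeds on a grid
    centers = feats[np.linspace(0, feats.shape[0] - 1, n_segments).astype(int)].copy()
    for _ in range(n_iter):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(n_segments):
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(0)
    return labels.reshape(h, w)
```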
Simple arithmetic: not so simple for highly math anxious individuals.
Chang, Hyesang; Sprute, Lisa; Maloney, Erin A; Beilock, Sian L; Berman, Marc G
2017-12-01
Fluency with simple arithmetic, typically achieved in early elementary school, is thought to be one of the building blocks of mathematical competence. Behavioral studies with adults indicate that math anxiety (feelings of tension or apprehension about math) is associated with poor performance on cognitively demanding math problems. However, it remains unclear whether there are fundamental differences in how high and low math anxious individuals approach overlearned simple arithmetic problems that are less reliant on cognitive control. The current study used functional magnetic resonance imaging to examine the neural correlates of simple arithmetic performance across high and low math anxious individuals. We implemented a partial least squares analysis, a data-driven, multivariate analysis method to measure distributed patterns of whole-brain activity associated with performance. Despite overall high simple arithmetic performance across high and low math anxious individuals, performance was differentially dependent on the fronto-parietal attentional network as a function of math anxiety. Specifically, low (compared to high) math anxious individuals perform better when they activate this network less, a potential indication of more automatic problem-solving. These findings suggest that low and high math anxious individuals approach even the most fundamental math problems differently. © The Author (2017). Published by Oxford University Press. PMID:29140499
X-ray peak profile analysis of zinc oxide nanoparticles formed by simple precipitation method
NASA Astrophysics Data System (ADS)
Pelicano, Christian Mark; Rapadas, Nick Joaquin; Magdaluyo, Eduardo
2017-12-01
Zinc oxide (ZnO) nanoparticles were successfully synthesized by a simple precipitation method using zinc acetate and tetramethylammonium hydroxide. The synthesized ZnO nanoparticles were characterized by X-ray diffraction (XRD) analysis and transmission electron microscopy (TEM). The XRD result revealed a hexagonal wurtzite structure for the ZnO nanoparticles. The TEM image showed spherical nanoparticles with an average crystallite size of 6.70 nm. For X-ray peak profile analysis, the Williamson-Hall (W-H) and size-strain plot (SSP) methods were applied to examine the effects of crystallite size and lattice strain on the peak broadening of the ZnO nanoparticles. The crystallite sizes and lattice strains estimated by the two methods are in good agreement with each other.
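The Williamson-Hall analysis itself reduces to a linear fit of beta·cos(theta) against 4·sin(theta); a sketch (Cu K-alpha wavelength in nm and Scherrer constant K = 0.9 assumed as defaults, which may differ from the paper's values):

```python
import numpy as np

def williamson_hall(two_theta_deg, beta_rad, wavelength=0.15406, K=0.9):
    """Williamson-Hall plot: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta).
    A linear fit gives crystallite size D (from the intercept) and lattice
    strain eps (the slope).  beta is the peak FWHM in radians."""
    theta = np.radians(np.asarray(two_theta_deg) / 2.0)
    x = 4.0 * np.sin(theta)
    y = np.asarray(beta_rad) * np.cos(theta)
    slope, intercept = np.polyfit(x, y, 1)
    return K * wavelength / intercept, slope   # (D, eps)
```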
Drew, Mark S.
2016-01-01
Cutaneous melanoma is the most life-threatening form of skin cancer. Although advanced melanoma is often considered as incurable, if detected and excised early, the prognosis is promising. Today, clinicians use computer vision in an increasing number of applications to aid early detection of melanoma through dermatological image analysis (dermoscopy images, in particular). Colour assessment is essential for the clinical diagnosis of skin cancers. Due to this diagnostic importance, many studies have either focused on or employed colour features as a constituent part of their skin lesion analysis systems. These studies range from using low-level colour features, such as simple statistical measures of colours occurring in the lesion, to availing themselves of high-level semantic features such as the presence of blue-white veil, globules, or colour variegation in the lesion. This paper provides a retrospective survey and critical analysis of contributions in this research direction. PMID:28096807
ExoSOFT: Exoplanet Simple Orbit Fitting Toolbox
NASA Astrophysics Data System (ADS)
Mede, Kyle; Brandt, Timothy D.
2017-08-01
ExoSOFT provides orbital analysis of exoplanets and binary star systems. It fits any combination of astrometric and radial velocity data, and offers four parameter space exploration techniques, including MCMC. It is packaged with an automated set of post-processing and plotting routines to summarize results, and is suitable for performing orbital analysis during surveys with new radial velocity and direct imaging instruments.
Simple and robust image-based autofocusing for digital microscopy.
Yazdanfar, Siavash; Kenny, Kevin B; Tasimi, Krenar; Corwin, Alex D; Dixon, Elizabeth L; Filkins, Robert J
2008-06-09
A simple image-based autofocusing scheme for digital microscopy is demonstrated that uses as few as two intermediate images to bring the sample into focus. The algorithm is adapted to a commercial inverted microscope and used to automate brightfield and fluorescence imaging of histopathology tissue sections.
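An image-based focus metric of the general kind used in such schemes (variance of a discrete Laplacian; the specific metric and search strategy in the paper may differ) can be sketched as:

```python
import numpy as np

def focus_measure(img):
    """Sharpness metric: variance of a discrete 4-neighbour Laplacian response."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return float(lap.var())

def best_focus(stack):
    """Index of the sharpest image in a through-focus stack."""
    return int(np.argmax([focus_measure(im) for im in stack]))
```

An autofocus routine would move the stage toward the position whose image maximizes this metric, which is why a couple of intermediate images can suffice.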
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jun Soo
The bubble departure diameter and bubble release frequency were obtained through analysis of TAMU subcooled flow boiling experimental data. Numerous images of bubbles at departure were analyzed for each experimental condition to obtain reliable statistics of the measured bubble parameters. The results are provided in this report with a brief discussion.
Matsumoto, Atsushi; Miyazaki, Naoyuki; Takagi, Junichi; Iwasaki, Kenji
2017-03-23
In this study, we develop an approach termed "2D hybrid analysis" for building atomic models by image matching from electron microscopy (EM) images of biological molecules. The key advantage is that it is applicable to flexible molecules, which are difficult to analyze by the 3DEM approach. In the proposed approach, first, many atomic models with different conformations are built by computer simulation. Then, simulated EM images are generated from each atomic model. Finally, they are compared with the experimental EM image. Two kinds of models are used as simulated EM images: the negative stain model and the simple projection model. Although the former is more realistic, the latter is adopted to perform faster computations. The use of the negative stain model enables decomposition of the averaged EM images into multiple projection images, each of which originates from a different conformation or orientation. We apply this approach to EM images of integrin to obtain the distribution of conformations, from which the pathway of the conformational change of the protein is deduced.
Mathematical analysis of the 1D model and reconstruction schemes for magnetic particle imaging
NASA Astrophysics Data System (ADS)
Erb, W.; Weinmann, A.; Ahlborg, M.; Brandt, C.; Bringout, G.; Buzug, T. M.; Frikel, J.; Kaethner, C.; Knopp, T.; März, T.; Möddel, M.; Storath, M.; Weber, A.
2018-05-01
Magnetic particle imaging (MPI) is a promising new in vivo medical imaging modality in which distributions of super-paramagnetic nanoparticles are tracked based on their response in an applied magnetic field. In this paper we provide a mathematical analysis of the modeled MPI operator in the univariate situation. We provide a Hilbert space setup, in which the MPI operator is decomposed into simple building blocks and in which these building blocks are analyzed with respect to their mathematical properties. In turn, we obtain an analysis of the MPI forward operator and, in particular, of its ill-posedness properties. We further get that the singular values of the MPI core operator decrease exponentially. We complement our analytic results by some numerical studies which, in particular, suggest a rapid decay of the singular values of the MPI operator.
Fast Image Texture Classification Using Decision Trees
NASA Technical Reports Server (NTRS)
Thompson, David R.
2011-01-01
Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation- hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
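The "integral image" transform at the heart of the fast features is simple to state: two cumulative sums, after which any box sum costs four lookups regardless of box size. A sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = img[:y, :x].sum(),
    so box sums below need no edge-case handling."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) from the integral image."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
```

Differences of such box sums (built from integer image data) give the filter-like responses a decision tree can branch on without floating-point convolutions.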
Quantitative analysis of tympanic membrane perforation: a simple and reliable method.
Ibekwe, T S; Adeosun, A A; Nwaorgu, O G
2009-01-01
Accurate assessment of the features of tympanic membrane perforation, especially size, site, duration and aetiology, is important, as it enables optimum management. To describe a simple, cheap and effective method of quantitatively analysing tympanic membrane perforations. The system described comprises a video-otoscope (capable of generating still and video images of the tympanic membrane), adapted via a universal serial bus box to a computer screen, with images analysed using the Image J geometrical analysis software package. The reproducibility of results and their correlation with conventional otoscopic methods of estimation were tested statistically with the paired t-test and correlational tests, using the Statistical Package for the Social Sciences version 11 software. The following equation was generated: P/T x 100 per cent = percentage perforation, where P is the area (in pixels²) of the tympanic membrane perforation and T is the total area (in pixels²) of the entire tympanic membrane (including the perforation). Illustrations are shown. Comparison of blinded data on tympanic membrane perforation area obtained independently from assessments by two trained otologists, of comparative years of experience, using the video-otoscopy system described, showed similar findings, with strong correlations devoid of inter-observer error (p = 0.000, r = 1). Comparison with conventional otoscopic assessment also indicated significant correlation, comparing results for two trained otologists, but some inter-observer variation was present (p = 0.000, r = 0.896). Correlation between the two methods for each of the otologists was also highly significant (p = 0.000). A computer-adapted video-otoscope, with images analysed by Image J software, represents a cheap, reliable, technology-driven, clinical method of quantitative analysis of tympanic membrane perforations and injuries.
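The quoted equation P/T x 100 is straightforward to apply once the two regions are segmented; a sketch on boolean pixel masks (the segmentation itself, done in ImageJ in the paper, is not reproduced):

```python
import numpy as np

def perforation_percentage(perforation_mask, membrane_mask):
    """P/T x 100 per cent: perforation area P over total membrane area T,
    both counted in pixels from boolean segmentation masks (the membrane
    mask includes the perforation, as in the text)."""
    P = np.count_nonzero(perforation_mask)
    T = np.count_nonzero(membrane_mask)
    return 100.0 * P / T
```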
Open source software in a practical approach for post processing of radiologic images.
Valeri, Gianluca; Mazza, Francesco Antonino; Maggi, Stefania; Aramini, Daniele; La Riccia, Luigi; Mazzoni, Giovanni; Giovagnoni, Andrea
2015-03-01
The purpose of this paper is to evaluate the use of open source software (OSS) to process DICOM images. We selected 23 programs for Windows and 20 programs for Mac from 150 candidate OSS programs, including DICOM viewers and various tools (converters, DICOM header editors, etc.). The selected programs all meet basic requirements such as free availability, stand-alone operation, presence of a graphical user interface, ease of installation, and advanced features beyond simple image display. The data import, data export, metadata, 2D viewer, 3D viewer, supported platform and usability capabilities of each selected program were evaluated on a scale ranging from 1 to 10 points. Twelve programs received a score of eight or higher. Among them, five obtained a score of 9: 3D Slicer, MedINRIA, MITK 3M3, VolView, VR Render; OsiriX received 10. OsiriX appears to be the only program able to perform all the operations taken into consideration, similar to a workstation equipped with proprietary software, allowing the analysis and interpretation of images in a simple and intuitive way. OsiriX is a DICOM PACS workstation for medical imaging and software for image processing for medical research, functional imaging, 3D imaging, confocal microscopy and molecular imaging. This application is also a good tool for teaching activities because it facilitates the attainment of learning objectives among students and other specialists.
Maire, E; Lelièvre, E; Brau, D; Lyons, A; Woodward, M; Fafeur, V; Vandenbunder, B
2000-04-10
We have developed an approach to study, in single living epithelial cells, both cell migration and transcriptional activation, the latter evidenced by the detection of luminescence emission from cells transfected with luciferase reporter vectors. The image acquisition chain consists of an epifluorescence inverted microscope connected to an ultralow-light-level photon-counting camera and an image-acquisition card associated with specialized image analysis software running on a PC. Using a simple method based on a thin calibrated light source, the image acquisition chain was optimized following comparisons of the performance of microscopy objectives and photon-counting cameras designed to observe luminescence. This setup allows us to measure, by image analysis, the luminescent light emitted by individual cells stably expressing a luciferase reporter vector. The sensitivity of the camera was adjusted to a high value, which required the use of a segmentation algorithm to eliminate the background noise. Following mathematical morphology treatments, kinetic changes of luminescent sources were analyzed and then correlated with the distance and speed of migration. Our results highlight the usefulness of our image acquisition chain and mathematical morphology software for quantifying the kinetics of luminescence changes in migrating cells.
Low-cost conversion of the Polaroid MD-4 land camera to a digital gel documentation system.
Porch, Timothy G; Erpelding, John E
2006-04-30
A simple, inexpensive design is presented for the rapid conversion of the popular MD-4 Polaroid land camera to a high quality digital gel documentation system. Images of ethidium bromide stained DNA gels captured using the digital system were compared to images captured on Polaroid instant film. Resolution and sensitivity were enhanced using the digital system. In addition to the low cost and superior image quality of the digital system, there is also the added convenience of real-time image viewing through the swivel LCD of the digital camera, wide flexibility of gel sizes, accurate automatic focusing, variable image resolution, and consistent ease of use and quality. Images can be directly imported to a computer by using the USB port on the digital camera, further enhancing the potential of the digital system for documentation, analysis, and archiving. The system is appropriate for use as a start-up gel documentation system and for routine gel analysis.
Exploratory analysis of TOF-SIMS data from biological surfaces
NASA Astrophysics Data System (ADS)
Vaidyanathan, Seetharaman; Fletcher, John S.; Henderson, Alex; Lockyer, Nicholas P.; Vickerman, John C.
2008-12-01
The application of multivariate analytical tools enables simplification of TOF-SIMS datasets so that useful information can be extracted from complex spectra and images, especially those that do not give readily interpretable results. There is however a challenge in understanding the outputs from such analyses. The problem is complicated when analysing images, given the additional dimensions in the dataset. Here we demonstrate how the application of simple pre-processing routines can enable the interpretation of TOF-SIMS spectra and images. For the spectral data, TOF-SIMS spectra used to discriminate bacterial isolates associated with urinary tract infection were studied. Using different criteria for picking peaks before carrying out PC-DFA enabled identification of the discriminatory information with greater certainty. For the image data, an air-dried salt stressed bacterial sample, discussed in another paper by us in this issue, was studied. Exploration of the image datasets with and without normalisation prior to multivariate analysis by PCA or MAF resulted in different regions of the image being highlighted by the techniques.
Anwar, A R; Muthalib, M; Perrey, S; Galka, A; Granert, O; Wolff, S; Deuschl, G; Raethjen, J; Heute, U; Muthuraman, M
2012-01-01
Directionality analysis of signals originating from different parts of the brain during motor tasks has attracted considerable interest. Since brain activity can be recorded over time, methods of time series analysis can be applied to medical time series as well. Granger causality is a method to find a causal relationship between time series. Such causality can be interpreted as a directional connection and is not necessarily bidirectional. The aim of this study is to differentiate between different motor tasks on the basis of activation maps and also to understand the nature of the connections present between different parts of the brain. In this paper, three different motor tasks (finger tapping, simple finger sequencing, and complex finger sequencing) are analyzed. Time series for each task were extracted from functional magnetic resonance imaging (fMRI) data, which have very good spatial resolution and can look into the sub-cortical regions of the brain. Activation maps based on fMRI images show that, in the case of complex finger sequencing, most parts of the brain are active, unlike finger tapping, during which only limited regions show activity. Directionality analysis on time series extracted from the contralateral motor cortex (CMC), supplementary motor area (SMA), and cerebellum (CER) shows bidirectional connections between these parts of the brain. In the case of simple finger sequencing and complex finger sequencing, the strongest connections originate from the SMA and CMC, while connections originating from the CER in either direction are the weakest in magnitude during all paradigms.
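The essence of a bivariate Granger test, whether adding the lagged history of x shrinks the residuals of an autoregressive model of y, can be sketched with least squares (a full test would convert this into an F statistic and p-value; the function name and lag choice here are ours):

```python
import numpy as np

def granger_improvement(x, y, lags=2):
    """Core of a Granger-causality check: fit y on its own past, then on its
    own past plus the past of x, and report the fractional reduction in
    residual variance contributed by x's history."""
    n = len(y)
    rows = [np.r_[1.0, y[t - lags:t][::-1], x[t - lags:t][::-1]]
            for t in range(lags, n)]
    A = np.array(rows)
    target = y[lags:]
    Ar = A[:, :1 + lags]                        # restricted: constant + lagged y
    res_r = target - Ar @ np.linalg.lstsq(Ar, target, rcond=None)[0]
    res_u = target - A @ np.linalg.lstsq(A, target, rcond=None)[0]
    return 1.0 - res_u.var() / res_r.var()
```

Running the check in both directions (x given y, y given x) gives the kind of directional, not necessarily bidirectional, connection the abstract describes.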
Compressive Hyperspectral Imaging and Anomaly Detection
2010-02-01
Results were obtained from a simple algorithm: the atoms in the trained image were very similar to the simple-cell receptive fields in early vision (cf. B. Olshausen and D. Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature 381(6583), pp. 607-609, 1996).
NASA Astrophysics Data System (ADS)
Civera Lorenzo, Tamara
2017-10-01
Brief presentation about the J-PLUS EDR data access web portal (http://archive.cefca.es/catalogues/jplus-edr), where the different services available to retrieve images and catalogue data are presented. The J-PLUS Early Data Release (EDR) archive includes two types of data: images, and dual and single catalogue data, which include parameters measured from the images. The J-PLUS web portal offers catalogue data and images through several online data access tools or services, each suited to a particular need: a coverage map, sky navigator, object visualization, image search, cone search, object list search, and Virtual Observatory services (Simple Cone Search, Simple Image Access Protocol, Simple Spectral Access Protocol, and Table Access Protocol).
Binary encoding of multiplexed images in mixed noise.
Lalush, David S
2008-09-01
Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
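Binary (0/1) multiplexing with a Hadamard S-matrix, the benchmark the searched matrices are compared against, can be demonstrated directly. The S-matrix below is derived from a Sylvester Hadamard matrix and decoded with the standard closed-form inverse; this illustrates the encoding/decoding algebra only, not the paper's noise model or genetic search:

```python
import numpy as np

def s_matrix(n_plus_1=8):
    """Hadamard S-matrix of order n = n_plus_1 - 1, built by deleting the
    first row and column of a Sylvester Hadamard matrix and mapping -1 -> 1,
    +1 -> 0 (so n_plus_1 must be a power of two here)."""
    H = np.array([[1]])
    while H.shape[0] < n_plus_1:
        H = np.block([[H, H], [H, -H]])
    return (1 - H[1:, 1:]) // 2

def s_decode(S, y):
    """Recover source intensities from multiplexed measurements y = S @ s,
    using the closed-form inverse S^-1 = 2/(n+1) * (2*S^T - J)."""
    n = S.shape[0]
    return (2.0 / (n + 1)) * (2 * S.T - np.ones((n, n))) @ y
```

For a seven-source system, each measurement turns on the four sources selected by one row of `S`, and `s_decode` recovers the individual source intensities exactly in the noise-free case.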
A non-iterative twin image elimination method with two in-line digital holograms
NASA Astrophysics Data System (ADS)
Kim, Jongwu; Lee, Heejung; Jeon, Philjun; Kim, Dug Young
2018-02-01
We propose a simple non-iterative in-line holographic measurement method which can effectively eliminate a twin image in digital holographic 3D imaging. It is shown that a twin image can be effectively eliminated with only two measured holograms by using a simple numerical propagation algorithm and arithmetic calculations.
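The abstract does not specify the "simple numerical propagation algorithm"; a common choice in digital holography is angular-spectrum propagation, sketched below under that assumption. The round trip (propagate by z, then by -z) returns the original field, which is the basic property that such two-hologram arithmetic relies on.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a 2-D complex field by distance z (metres) with the
    angular spectrum method. Evanescent components are suppressed."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2   # squared axial spatial frequency
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Round trip: propagating forward then backward recovers the field.
rng = np.random.default_rng(0)
u0 = np.exp(1j * rng.uniform(0, 0.5, (64, 64)))   # weak phase object
u1 = angular_spectrum_propagate(u0, 633e-9, 5e-6, 1e-3)
u2 = angular_spectrum_propagate(u1, 633e-9, 5e-6, -1e-3)
```

The wavelength, pixel pitch, and distances are illustrative values, not parameters from the paper.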
NASA Astrophysics Data System (ADS)
Nine, H. M. Zulker
The adversity of metallic corrosion is of growing concern to industrial engineers and scientists. Corrosion attacks metal surfaces and causes structural damage as well as direct and indirect economic losses. Multiple corrosion monitoring tools are available, although they are time-consuming and costly. Given the wide availability of image capturing devices, image-based corrosion monitoring is a promising innovation. In this research, a simple and cost-effective corrosion measurement tool was identified and investigated by exposing stainless steel SS 304 and low-carbon steel QD 1008 panels to distilled water, half-saturated sodium chloride, and saturated sodium chloride solutions, followed by RGB image analysis in Matlab. The open circuit potential and electrochemical impedance spectroscopy results were compared with the RGB analysis to validate the corrosion assessment. Additionally, to understand the importance of ambiguity in crisis communication, the communication process between Union Carbide and the Indian Government regarding the 1984 Bhopal incident was analyzed.
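A minimal sketch of the RGB-analysis idea, written in Python rather than Matlab: the channel means of a panel photograph are reduced to a single redness index as a rust proxy. Both the index and the synthetic panel colours are illustrative assumptions, not the study's actual metric or data.

```python
import numpy as np

def mean_rgb(image):
    """Mean intensity of each colour channel of an H x W x 3 image."""
    return image.reshape(-1, 3).mean(axis=0)

def redness_index(image):
    """Simple rust proxy: excess of red over the green/blue average.
    (Illustrative metric, not the one used in the study.)"""
    r, g, b = mean_rgb(image)
    return r - 0.5 * (g + b)

# Synthetic panels: a grey (uncorroded) patch and a reddish-brown (rusted) patch.
clean = np.full((10, 10, 3), [120.0, 120.0, 120.0])
rusty = np.full((10, 10, 3), [150.0, 80.0, 60.0])
```

Tracking such an index over exposure time gives a cheap, camera-only trend that can then be compared against electrochemical measurements, as the study does.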
Image analysis of pubic bone for age estimation in a computed tomography sample.
López-Alcaraz, Manuel; González, Pedro Manuel Garamendi; Aguilera, Inmaculada Alemán; López, Miguel Botella
2015-03-01
Radiology has demonstrated great utility for age estimation, but most studies are based on metrical and morphological methods designed to build an identification profile. A simple image analysis-based method is presented, aimed at correlating bony tissue ultrastructure with several variables obtained from the grey-level histogram (GLH) of computed tomography (CT) sagittal sections of the pubic symphysis surface and the pubic body, and relating them to age. The CT sample consisted of 169 hospital Digital Imaging and Communications in Medicine (DICOM) archives of known sex and age. The calculated multiple regression models showed a maximum R² of 0.533 for females and 0.726 for males, with high intra- and inter-observer agreement. The suggested method is considered useful not only for building an identification profile during virtopsy, but also for further studies seeking a quantitative correlate of tissue ultrastructure characteristics, without complex and expensive methods beyond image analysis.
Change of spatial information under rescaling: A case study using multi-resolution image series
NASA Astrophysics Data System (ADS)
Chen, Weirong; Henebry, Geoffrey M.
Spatial structure in imagery depends on a complicated interaction between the observational regime and the types and arrangements of entities within the scene that the image portrays. Although block averaging of pixels has commonly been used to simulate coarser resolution imagery, relatively little attention has been paid to the effects of simple rescaling on spatial structure, to explaining those effects, or to possible remedies. Yet, if there are significant differences in spatial variance between rescaled and observed images, it may affect the reliability of retrieved biogeophysical quantities. To investigate these issues, a nested series of high spatial resolution digital imagery was collected at a research site in eastern Nebraska in 2001. An airborne Kodak DCS420IR camera acquired imagery at three altitudes, yielding nominal spatial resolutions ranging from 0.187 m to 1 m. The red and near infrared (NIR) bands of the co-registered image series were normalized using pseudo-invariant features, and the normalized difference vegetation index (NDVI) was calculated. Plots of grain sorghum planted in orthogonal crop row orientations were extracted from the image series. The finest spatial resolution data were then rescaled by averaging blocks of pixels to produce a rescaled image series that closely matched the spatial resolution of the observed image series. Spatial structures of the observed and rescaled image series were characterized using semivariogram analysis. Results for NDVI and its component bands show, as expected, that decreasing spatial resolution leads to decreasing spatial variability and increasing spatial dependence. However, compared to the observed data, the rescaled images contain more persistent spatial structure that exhibits limited variation in both spatial dependence and spatial heterogeneity. Rescaling via simple block averaging fails to consider the effect of scene object shape and extent on spatial information.
As the features portrayed by pixels are equally weighted regardless of the shape and extent of the underlying scene objects, the rescaled image retains more of the original spatial information than would occur through direct observation at a coarser sensor spatial resolution. In contrast, for the observed images, due to the effect of the modulation transfer function (MTF) of the imaging system, high frequency features like edges are blurred or lost as the pixel size increases, resulting in greater variation in spatial structure. Successive applications of a low-pass spatial convolution filter are shown to mimic an MTF. Accordingly, it is recommended that such a procedure be applied prior to rescaling by simple block averaging, if insufficient image metadata exist to replicate the net MTF of the imaging system, as might be expected in land cover change analysis studies using historical imagery.
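The two operations discussed above, block averaging and a low-pass pre-filter that mimics an MTF, can be sketched as follows (a minimal numpy illustration, not the authors' processing chain):

```python
import numpy as np

def box_blur(img, passes=1):
    """Repeated 3x3 box filtering (edge-replicated borders) as a crude
    stand-in for the net modulation transfer function of a coarser sensor."""
    for _ in range(passes):
        p = np.pad(img, 1, mode="edge")
        img = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return img

def block_average(img, k):
    """Rescale by averaging non-overlapping k x k pixel blocks."""
    h, w = img.shape
    return img[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
coarse = block_average(img, 2)                 # plain block averaging
coarse_mtf = block_average(box_blur(img), 2)   # MTF-like pre-blur first
```

The pre-blurred version suppresses high-frequency content before aggregation, which is the behaviour the recommendation above is trying to reproduce.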
Analysis of lithology: Vegetation mixes in multispectral images
NASA Technical Reports Server (NTRS)
Adams, J. B.; Smith, M.; Adams, J. D.
1982-01-01
Discrimination and identification of lithologies from multispectral images is discussed. Rock/soil identification can be facilitated by removing the component of the signal in the images that is contributed by the vegetation. Mixing models were developed to predict the spectra of combinations of pure end members, and those models were refined using laboratory measurements of real mixtures. Models in use include a simple linear (checkerboard) mix, granular mixing, semi-transparent coatings, and combinations of the above. The use of interactive computer techniques that allow quick comparison of the spectrum of a pixel stack (in a multiband set) with laboratory spectra is discussed.
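The simple linear (checkerboard) mixing model can be illustrated with a least-squares unmixing step: the observed spectrum is modelled as a fraction-weighted sum of endmember spectra. The endmember values below are hypothetical four-band numbers, not laboratory data.

```python
import numpy as np

def unmix(spectrum, endmembers):
    """Least-squares fractions for the linear ('checkerboard') mixing
    model: spectrum ≈ endmembers @ fractions. Endmember spectra are columns."""
    fractions, *_ = np.linalg.lstsq(endmembers, spectrum, rcond=None)
    return fractions

# Hypothetical 4-band reflectances for rock, soil and vegetation (illustrative).
E = np.array([[0.30, 0.20, 0.05],
              [0.35, 0.25, 0.08],
              [0.40, 0.30, 0.40],   # vegetation red edge
              [0.42, 0.35, 0.55]])
true_f = np.array([0.5, 0.3, 0.2])
mixed = E @ true_f
f_hat = unmix(mixed, E)
```

Subtracting the recovered vegetation component from a pixel spectrum is one way to realize the "removing the vegetation signal" step described above.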
Medical image segmentation to estimate HER2 gene status in breast cancer
NASA Astrophysics Data System (ADS)
Palacios-Navarro, Guillermo; Acirón-Pomar, José Manuel; Vilchez-Sorribas, Enrique; Zambrano, Eddie Galarza
2016-02-01
This work deals with the estimation of HER2 gene status in breast tumour images treated with in situ hybridization (ISH) techniques. We propose a simple algorithm to obtain the amplification factor of the HER2 gene. The obtained results are very close to those obtained manually by specialists. The developed algorithm is based on colour image segmentation and has been included in a software application tool for breast tumour analysis. The tool focuses on estimating the seriousness of tumours, facilitating the work of pathologists and contributing to better diagnosis.
Invariant approach to the character classification
NASA Astrophysics Data System (ADS)
Šariri, Kristina; Demoli, Nazif
2008-04-01
Image moment analysis is a very useful tool that allows image description invariant to translation, rotation, scale change, and some types of image distortion. The aim of this work was the development of a simple method for fast and reliable classification of characters using Hu's and affine moment invariants. Euclidean distance was used as the discrimination feature, with its statistical parameters estimated. The method was tested on the classification of Times New Roman letters as well as sets of handwritten characters. It is shown that using all of Hu's invariants and three affine invariants as the discrimination set improves the recognition rate by 30%.
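As a sketch of the moment machinery, the code below computes the first two Hu invariants from normalised central moments and checks their rotation invariance on a toy glyph. This is illustrative only; the paper's discrimination set uses all seven Hu invariants plus three affine invariants.

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a grey-level image."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return ((x - xc) ** p * (y - yc) ** q * img).sum()

def hu_first_two(img):
    """First two Hu invariants from normalised central moments eta_pq."""
    m00 = img.sum()
    eta = lambda p, q: central_moment(img, p, q) / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

# A small asymmetric "L" glyph; a 90-degree rotation leaves phi1, phi2 unchanged.
glyph = np.zeros((16, 16))
glyph[3:12, 5:8] = 1.0
glyph[10:12, 5:13] = 1.0
p1, p2 = hu_first_two(glyph)
q1, q2 = hu_first_two(np.rot90(glyph))
```

Classification then reduces to nearest-neighbour matching of such invariant vectors under Euclidean distance, as described above.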
The lucky image-motion prediction for simple scene observation based soft-sensor technology
NASA Astrophysics Data System (ADS)
Li, Yan; Su, Yun; Hu, Bin
2015-08-01
High resolution is important for earth remote sensors, while vibration of the remote sensor platforms is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes utilizing soft-sensor technology for image-motion prediction and focuses on algorithm optimization in imaging image-motion prediction. Simulation results indicate that the improved lucky image-motion stabilization algorithm combining a Back Propagation neural network (BP NN) and a support vector machine (SVM) is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training and computing speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.
Photographic monitoring of soiling and decay of roadside walls in central Oxford, England
NASA Astrophysics Data System (ADS)
Thornbush, Mary J.; Viles, Heather A.
2008-12-01
As part of the Environmental Monitoring of Integrated Transport Strategies (EMITS) project, which examined the impact of the Oxford Transport Strategy (OTS) on the soiling and decay of buildings and structures in central Oxford, England, a simple photographic survey of a sample of roadside walls was carried out in 1997, with re-surveys in 1999 and 2003. Thirty photographs were taken each time, covering an area of stonework approximately 30 × 30 cm in dimensions at 1-1.3 m above pavement level. The resulting images have been used to investigate, both qualitatively as well as quantitatively, the progression of soiling and decay. Comparison of images by eye reveals a number of minor changes in soiling and decay patterns, but generally indicates stability except at one site where dramatic, superficial damage occurred over 2 years. Quantitative analysis of decay features (concavities resulting from surface blistering, flaking, and scaling), using simple techniques in Adobe Photoshop, shows variable pixel-based size proportions of concavities across 6 years of survey. Colour images (in Lab Color) generally have a reduced proportion of pixels, representing decay features in comparison to black and white (Grayscale) images. The study conveys that colour images provide more information both for general observations of soiling and decay patterns and for segmentation of decay-produced concavities. The study indicates that simple repeat photography can reveal useful information about changing patterns of both soiling and decay, although unavoidable variation in external lighting conditions between re-surveys is a factor limiting the accuracy of change detection.
NASA Astrophysics Data System (ADS)
Hu, Jicun; Tam, Kwok; Johnson, Roger H.
2004-01-01
We derive and analyse a simple algorithm first proposed by Kudo et al (2001 Proc. 2001 Meeting on Fully 3D Image Reconstruction in Radiology and Nuclear Medicine (Pacific Grove, CA) pp 7-10) for long object imaging from truncated helical cone beam data via a novel definition of region of interest (ROI). Our approach is based on the theory of short object imaging by Kudo et al (1998 Phys. Med. Biol. 43 2885-909). One of the key findings in their work is that filtering of the truncated projection can be divided into two parts: one, finite in the axial direction, results from ramp filtering the data within the Tam window. The other, infinite in the z direction, results from unbounded filtering of ray sums over PI lines only. We show that for an ROI defined by PI lines emanating from the initial and final source positions on a helical segment, the boundary data which would otherwise contaminate the reconstruction of the ROI can be completely excluded. This novel definition of the ROI leads to a simple algorithm for long object imaging. The overscan of the algorithm is analytically calculated and it is the same as that of the zero boundary method. The reconstructed ROI can be divided into two regions: one is minimally contaminated by the portion outside the ROI, while the other is reconstructed free of contamination. We validate the algorithm with a 3D Shepp-Logan phantom and a disc phantom.
Schlieren image velocimetry measurements in a rocket engine exhaust plume
NASA Astrophysics Data System (ADS)
Morales, Rudy; Peguero, Julio; Hargather, Michael
2017-11-01
Schlieren image velocimetry (SIV) measures velocity fields by tracking the motion of naturally-occurring turbulent flow features in a compressible flow. Here the technique is applied to measuring the exhaust velocity profile of a liquid rocket engine. The SIV measurements presented include discussion of visibility of structures, image pre-processing for structure visibility, and ability to process resulting images using commercial particle image velocimetry (PIV) codes. The small-scale liquid bipropellant rocket engine operates on nitrous oxide and ethanol as propellants. Predictions of the exhaust velocity are obtained through NASA CEA calculations and simple compressible flow relationships, which are compared against the measured SIV profiles. Analysis of shear layer turbulence along the exhaust plume edge is also presented.
Development of Cad System for Diffuse Disease Based on Ultrasound Elasticity Images
NASA Astrophysics Data System (ADS)
Yamazaki, M.; Shiina, T.; Yamakawa, M.; Takizawa, H.; Tonomura, A.; Mitake, T.
It is well known that as hepatic cirrhosis progresses, hepatocyte fibrosis spreads and nodules increase. However, it is not easy to diagnose its early stage from conventional B-mode images, because one has to read subtle changes in the speckle pattern, which is not sensitive to the stage of fibrosis. Ultrasonic tissue elasticity imaging can provide novel diagnostic information based on tissue hardness. We recently developed commercial-based equipment for tissue elasticity imaging. In this work, we investigated the development of a CAD system based on elasticity images for diagnosing diffuse-type diseases such as hepatic cirrhosis. The results of clinical data analysis indicate that the CAD system is promising as a means for the diagnosis of diffuse disease with a simple criterion.
An approach for automated analysis of particle holograms
NASA Technical Reports Server (NTRS)
Stanton, A. C.; Caulfield, H. J.; Stewart, G. W.
1984-01-01
A simple method for analyzing droplet holograms is proposed that is readily adaptable to automation using modern image digitizers and analyzers for determination of the number, location, and size distributions of spherical or nearly spherical droplets. The method determines these parameters by finding the spatial location of best focus of the droplet images. With this location known, the particle size may be determined by direct measurement of image area in the focal plane. Particle velocity and trajectory may be determined by comparison of image locations at different instants in time. The method is tested by analyzing digitized images from a reconstructed in-line hologram, and the results show that the method is more accurate than a time-consuming plane-by-plane search for sharpest focus.
PET kinetic analysis --pitfalls and a solution for the Logan plot.
Kimura, Yuichi; Naganawa, Mika; Shidahara, Miho; Ikoma, Yoko; Watabe, Hiroshi
2007-01-01
The Logan plot is a widely used algorithm for the quantitative analysis of neuroreceptors using PET because it is easy to use and simple to implement. The Logan plot is also suitable for receptor imaging because its algorithm is fast. However, use of the Logan plot, and interpretation of the formed receptor images should be regarded with caution, because noise in PET data causes bias in the Logan plot estimates. In this paper, we describe the basic concept of the Logan plot in detail and introduce three algorithms for the Logan plot. By comparing these algorithms, we demonstrate the pitfalls of the Logan plot and discuss the solution.
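The Logan plot itself is easy to state: after some time t*, the transformed variables become linear and the slope estimates the total distribution volume VT. Below is a minimal sketch on a simulated one-tissue compartment model with assumed rate constants; the data are noise-free, so none of the noise-induced bias discussed above appears.

```python
import numpy as np

# One-tissue compartment model, dCt/dt = K1*Cp - k2*Ct, for which the
# total distribution volume is VT = K1/k2 (here 2.5). For the idealised
# input Cp(t) = exp(-lam*t) the tissue curve has a closed form.
K1, k2, lam = 0.5, 0.2, 0.1              # assumed rate constants (1/min)
t = np.arange(0.0, 90.0, 0.01)           # minutes
Cp = np.exp(-lam * t)
Ct = K1 / (k2 - lam) * (np.exp(-lam * t) - np.exp(-k2 * t))

def logan_slope(Ct, Cp, t, t_star):
    """Logan graphical analysis: for t > t*, int(Ct)/Ct plotted against
    int(Cp)/Ct becomes linear, with slope equal to VT."""
    dt = np.diff(t)
    intCt = np.concatenate(([0.0], np.cumsum(0.5 * (Ct[1:] + Ct[:-1]) * dt)))
    intCp = np.concatenate(([0.0], np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * dt)))
    late = t > t_star
    x = intCp[late] / Ct[late]
    y = intCt[late] / Ct[late]
    slope, _intercept = np.polyfit(x, y, 1)
    return slope

VT_hat = logan_slope(Ct, Cp, t, t_star=30.0)
```

With noisy Ct the division by Ct correlates noise into both axes, which is exactly the source of the bias the paper analyses; the three algorithm variants it compares differ in how they handle that correlation.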
NASA Astrophysics Data System (ADS)
Zhou, Jiangying; Lopresti, Daniel P.; Tasdizen, Tolga
1998-04-01
In this paper, we consider the problem of locating and extracting text from WWW images. A previous algorithm based on color clustering and connected components analysis works well as long as the color of each character is relatively uniform and the typography is fairly simple; it breaks down quickly, however, when these assumptions are violated. Here, we describe more robust techniques for dealing with this challenging problem. We present an improved color clustering algorithm that measures similarity based on both RGB values and spatial proximity. Layout analysis is also incorporated to handle more complex typography. These changes significantly enhance the performance of our text detection procedure.
Sekuła, Justyna; Nizioł, Joanna; Rode, Wojciech; Ruman, Tomasz
2015-05-22
Preparation is described of a durable surface of cationic gold nanoparticles (AuNPs), covering commercial and custom-made MALDI targets, along with characterization of the nanoparticle surface properties and examples of the use in MS analyses and MS imaging (IMS) of low molecular weight (LMW) organic compounds. Tested compounds include nucleosides, saccharides, amino acids, glycosides, and nucleic bases for MS measurements, as well as over one hundred endogenous compounds in imaging experiment. The nanoparticles covering target plate were enriched in sodium in order to promote sodium-adduct formation. The new surface allows fast analysis, high sensitivity of detection and high mass determination accuracy. Example of application of new Au nanoparticle-enhanced target for fast and simple MS imaging of a fingerprint is also presented. Copyright © 2015 Elsevier B.V. All rights reserved.
Canuto, Holly C; McLachlan, Charles; Kettunen, Mikko I; Velic, Marko; Krishnan, Anant S; Neves, Andre' A; de Backer, Maaike; Hu, D-E; Hobson, Michael P; Brindle, Kevin M
2009-05-01
A targeted Gd(3+)-based contrast agent has been developed that detects tumor cell death by binding to the phosphatidylserine (PS) exposed on the plasma membrane of dying cells. Although this agent has been used to detect tumor cell death in vivo, the differences in signal intensity between treated and untreated tumors were relatively small. As cell death is often spatially heterogeneous within tumors, we investigated whether an image analysis technique that parameterizes heterogeneity could be used to increase the sensitivity of detection of this targeted contrast agent. Two-dimensional (2D) Minkowski functionals (MFs) provided an automated and reliable method for parameterization of image heterogeneity, which does not require prior assumptions about the number of regions or features in the image, and were shown to increase the sensitivity of detection of the contrast agent as compared to simple signal intensity analysis. (c) 2009 Wiley-Liss, Inc.
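For a binary (thresholded) 2-D image, the Minkowski functionals are simply area, boundary length, and Euler characteristic, all computable by pixel counting. A minimal sketch using 4-connectivity follows; it is not the authors' implementation.

```python
import numpy as np

def minkowski_2d(mask):
    """2-D Minkowski functionals of a binary image (4-connectivity):
    area, boundary length (exposed pixel-edge count) and Euler
    characteristic chi = V - E + F on the cubical complex of pixels."""
    m = mask.astype(bool)
    V = int(m.sum())                                         # pixels
    E = int((m[:, 1:] & m[:, :-1]).sum()                     # adjacent pairs
            + (m[1:, :] & m[:-1, :]).sum())
    F = int((m[1:, 1:] & m[1:, :-1]                          # filled 2x2 blocks
             & m[:-1, 1:] & m[:-1, :-1]).sum())
    area = V
    perimeter = 4 * V - 2 * E
    euler = V - E + F
    return area, perimeter, euler

solid = np.ones((4, 4), dtype=int)       # one simply-connected blob
ring = np.ones((5, 5), dtype=int)
ring[1:4, 1:4] = 0                       # a frame enclosing a hole
```

Scanning these three numbers as a function of the binarization threshold gives the heterogeneity signature used in this kind of analysis: the solid blob has Euler characteristic 1, while the ring's hole drives it to 0.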
Multispectral imaging approach for simplified non-invasive in-vivo evaluation of gingival erythema
NASA Astrophysics Data System (ADS)
Eckhard, Timo; Valero, Eva M.; Nieves, Juan L.; Gallegos-Rueda, José M.; Mesa, Francisco
2012-03-01
Erythema is a common visual sign of gingivitis. In this work, a new and simple low-cost image capture and analysis method for erythema assessment is proposed. The method is based on digital still images of gingivae and applied on a pixel-by-pixel basis. Multispectral images are acquired with a conventional digital camera and multiplexed LED illumination panels at 460nm and 630nm peak wavelength. An automatic work-flow segments teeth from gingiva regions in the images and creates a map of local blood oxygenation levels, which relates to the presence of erythema. The map is computed from the ratio of the two spectral images. An advantage of the proposed approach is that the whole process is easy to manage by dental health care professionals in clinical environment.
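The core of the method, a pixel-by-pixel ratio of the 630 nm and 460 nm images restricted to the gingiva, can be sketched in a few lines. The numbers below are illustrative, not clinical values, and the direction of the contrast is an assumption for the sake of the example.

```python
import numpy as np

def erythema_map(img_630, img_460, gum_mask, eps=1e-6):
    """Pixel-wise ratio of the 630 nm and 460 nm reflectance images over
    the gingiva mask, as a proxy for local blood-related contrast.
    (Sketch of the ratio-map idea only.)"""
    ratio = img_630 / (img_460 + eps)
    return np.where(gum_mask, ratio, 0.0)

# Synthetic 1x4 strip: two healthy pixels, two inflamed (redder) pixels.
r630 = np.array([[0.60, 0.62, 0.80, 0.82]])
r460 = np.array([[0.30, 0.31, 0.20, 0.21]])
mask = np.array([[True, True, True, True]])
emap = erythema_map(r630, r460, mask)
```

In the full workflow the mask comes from the automatic tooth/gingiva segmentation step described above.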
Kero, Tanja; Lindsjö, Lars; Sörensen, Jens; Lubberink, Mark
2016-08-01
(11)C-PIB PET is a promising non-invasive diagnostic tool for cardiac amyloidosis. Semiautomatic analysis of PET data is now available but it is not known how accurate these methods are for amyloid imaging. The aim of this study was to evaluate the feasibility of one semiautomatic software tool for analysis and visualization of (11)C-PIB left ventricular retention index (RI) in cardiac amyloidosis. Patients with systemic amyloidosis and cardiac involvement (n = 10) and healthy controls (n = 5) were investigated with dynamic (11)C-PIB PET. Two observers analyzed the PET studies with semiautomatic software to calculate the left ventricular RI of (11)C-PIB and to create parametric images. The mean RI at 15-25 min from the semiautomatic analysis was compared with RI based on manual analysis and showed comparable values (0.056 vs 0.054 min(-1) for amyloidosis patients and 0.024 vs 0.025 min(-1) in healthy controls; P = .78) and the correlation was excellent (r = 0.98). Inter-reader reproducibility also was excellent (intraclass correlation coefficient, ICC > 0.98). Parametric polarmaps and histograms made visual separation of amyloidosis patients and healthy controls fast and simple. Accurate semiautomatic analysis of cardiac (11)C-PIB RI in amyloidosis patients is feasible. Parametric polarmaps and histograms make visual interpretation fast and simple.
Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo
2015-12-01
Microelectrode Arrays (MEA) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activity in in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence channel images using a combination of thresholding-based segmentation, the watershed transform, and object classification. The positioning of microelectrodes is obtained from the transmitted light channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis produced information regarding the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEAs consider only simple qualitative aspects. The proposed framework also aims to standardize the image processing and to compute quantitatively useful measures for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, and so are the implemented neuron segmentation and classification methods (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant to assist the integrated signal-image study of recorded electrophysiological signals as well as the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments in the image processing parameter estimation.
BP fusion model for the detection of oil spills on the sea by remote sensing
NASA Astrophysics Data System (ADS)
Chen, Weiwei; An, Jubai; Zhang, Hande; Lin, Bin
2003-06-01
Oil spills are a very serious form of marine pollution in many countries. In order to detect and identify oil spilled on the sea with remote sensors, scientists must analyze remote sensing images. For the detection of oil spills on the sea, edge detection is an important technique in image processing, and many edge detection algorithms have been developed, each with its own advantages and disadvantages. Based on the primary requirements of edge detection for oil spill images, namely computation time and detection accuracy, we developed a fusion model. The model employs a BP neural network to fuse the detection results of simple operators. We selected a BP neural network as the fusion technique because the relation between the edge grey levels produced by simple operators and the image's true edge grey level is nonlinear, and BP neural networks are good at solving nonlinear identification problems. We therefore trained a BP neural network on a set of oil spill images, applied the BP fusion model to edge detection of other oil spill images, and obtained good results. The detection results of several gradient operators and the Laplacian operator are also compared with the result of the BP fusion model to analyze the fusion effect. Finally, the paper points out that the fusion model achieves higher accuracy and higher speed in the edge detection of oil spill images.
Scattering Removal for Finger-Vein Image Restoration
Yang, Jinfeng; Zhang, Ben; Shi, Yihua
2012-01-01
Finger-vein recognition has received increased attention recently. However, the finger-vein images are always captured in poor quality. This certainly makes finger-vein feature representation unreliable, and further impairs the accuracy of finger-vein recognition. In this paper, we first give an analysis of the intrinsic factors causing finger-vein image degradation, and then propose a simple but effective image restoration method based on scattering removal. To give a proper description of finger-vein image degradation, a biological optical model (BOM) specific to finger-vein imaging is proposed according to the principle of light propagation in biological tissues. Based on BOM, the light scattering component is sensibly estimated and properly removed for finger-vein image restoration. Finally, experimental results demonstrate that the proposed method is powerful in enhancing the finger-vein image contrast and in improving the finger-vein image matching accuracy. PMID:22737028
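The paper's biological optical model is specific to finger-vein tissue; as a generic illustration of the scattering-removal idea, the sketch below inverts a simple mixing model in which a known fraction of the signal is a diffuse scattered component. This is an illustrative stand-in, not the authors' BOM formulation.

```python
import numpy as np

def remove_scattering(observed, scatter_field, s):
    """Invert a simple degradation model in which a fraction s of the
    signal is replaced by a scattered (diffuse) component:
        observed = (1 - s) * clean + s * scatter_field
    Subtracting the estimated scattered light restores contrast."""
    return (observed - s * scatter_field) / (1.0 - s)

rng = np.random.default_rng(1)
clean = rng.uniform(0.2, 0.9, (32, 32))      # stand-in vein pattern
scatter = np.full_like(clean, 0.55)          # flat diffuse background
s = 0.4
degraded = (1 - s) * clean + s * scatter
restored = remove_scattering(degraded, scatter, s)
```

The degraded image has visibly compressed contrast (its standard deviation shrinks by the factor 1 - s); removing the estimated scattered component restores it, which mirrors the contrast-enhancement result reported above.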
Ganalyzer: A Tool for Automatic Galaxy Image Analysis
NASA Astrophysics Data System (ADS)
Shamir, Lior
2011-08-01
We describe Ganalyzer, a model-based tool that can automatically analyze and classify galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ~10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large data sets of galaxy images collected by autonomous sky surveys such as SDSS, LSST, or DES. The software is available for free download at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer, and the data used in the experiment are available at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer/GalaxyImages.zip.
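The radial intensity plot at the heart of this approach can be sketched directly: sample the image along rays from the galaxy centre and record intensity versus radius for each angle, then trace the peak positions. On a synthetic ring the peak radius is constant with angle, i.e. zero spirality. This is a minimal illustration, not the released tool.

```python
import numpy as np

def radial_intensity_plot(img, cx, cy, r_max, n_angles=90):
    """For each angle, intensity sampled along the radius (nearest pixel);
    returns an (n_angles, r_max) array whose peak positions trace arm
    spirality."""
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.arange(r_max)
    out = np.zeros((n_angles, r_max))
    for i, th in enumerate(thetas):
        xs = np.clip(np.round(cx + radii * np.cos(th)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.round(cy + radii * np.sin(th)).astype(int), 0, img.shape[0] - 1)
        out[i] = img[ys, xs]
    return out

# Synthetic face-on ring "galaxy": the peak radius is the same at every
# angle, so the peak trace has zero slope (no spirality).
y, x = np.mgrid[:64, :64]
r = np.hypot(x - 32, y - 32)
ring_img = np.exp(-((r - 12) ** 2) / 4.0)
plot = radial_intensity_plot(ring_img, 32, 32, 25)
peaks = plot.argmax(axis=1)
```

For a spiral galaxy the peak radius drifts with angle, and the slope of that drift is the spirality measure used for morphological classification.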
Simple method to measure the refractive index of liquid with graduated cylinder and beaker.
An, Yu-Kuan
2017-12-01
A simple method is introduced to measure the refractive index (RI) of a liquid with an experimental device composed of a graduated cylinder and a beaker which are coaxial. A magnified image of the graduated cylinder is formed as the liquid is poured into the beaker. Optical path analysis indicates that the RI of the liquid is equal to the product of the image's diameter magnification and the RI of air, irrelevant to the beaker. Theoretically, the RI measurement range is unlimited and the liquid dosage could be small as well. The device is used to carry out experiments by means of both the photographic method and telescope method to measure RIs of three kinds of liquids. The results show that the measured RIs all fit their published values well.
Simple spectroscope used with solid state image amplifier over wide spectral range
NASA Technical Reports Server (NTRS)
Brown, R. L., Sr.
1971-01-01
Prism plus image amplifier panel provides visual image of many infrared spectral lines from carbon arc impregnated with metal compound. Different metal compounds generate various desired spectra. Panel also aligns and focuses simple spectroscopes for detecting spectral lines inside and outside visible region.
Hatta, Tomoko; Fujinaga, Yasunari; Kadoya, Masumi; Ueda, Hitoshi; Murayama, Hiroaki; Kurozumi, Masahiro; Ueda, Kazuhiko; Komatsu, Michiharu; Nagaya, Tadanobu; Joshita, Satoru; Kodama, Ryo; Tanaka, Eiji; Uehara, Tsuyoshi; Sano, Kenji; Tanaka, Naoki
2010-12-01
To assess the degree of hepatic fat content, simple and noninvasive methods with high objectivity and reproducibility are required. Magnetic resonance imaging (MRI) is one such candidate, although its accuracy remains unclear. We aimed to validate an MRI method for quantifying hepatic fat content by calibrating MRI reading with a phantom and comparing MRI measurements in human subjects with estimates of liver fat content in liver biopsy specimens. The MRI method was performed by a combination of MRI calibration using a phantom and double-echo chemical shift gradient-echo sequence (double-echo fast low-angle shot sequence) that has been widely used on a 1.5-T scanner. Liver fat content in patients with nonalcoholic fatty liver disease (NAFLD, n = 26) was derived from a calibration curve generated by scanning the phantom. Liver fat was also estimated by optical image analysis. The correlation between the MRI measurements and liver histology findings was examined prospectively. Magnetic resonance imaging measurements showed a strong correlation with liver fat content estimated from the results of light microscopic examination (correlation coefficient 0.91, P < 0.001) regardless of the degree of hepatic steatosis. Moreover, the severity of lobular inflammation or fibrosis did not influence the MRI measurements. This MRI method is simple and noninvasive, has excellent ability to quantify hepatic fat content even in NAFLD patients with mild steatosis or advanced fibrosis, and can be performed easily without special devices.
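The double-echo chemical-shift idea reduces to the two-point Dixon relation, with a phantom-derived calibration curve on top. The sketch below uses the generic formula and made-up phantom numbers, not the study's calibration.

```python
import numpy as np

def dixon_fat_fraction(in_phase, out_phase):
    """Two-point Dixon-style estimate from a double-echo chemical-shift
    sequence: signal fat fraction = (IP - OP) / (2 * IP). Generic formula,
    not necessarily the exact quantity calibrated in the study."""
    return (in_phase - out_phase) / (2.0 * in_phase)

def calibrate(measured_ff, phantom_true_ff):
    """Linear calibration curve from phantom scans (least-squares fit)."""
    return np.polyfit(measured_ff, phantom_true_ff, 1)

# Hypothetical phantom readings with a small systematic scale and offset.
true_ff = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
measured = 0.9 * true_ff + 0.02
a, b = calibrate(measured, true_ff)
corrected = a * measured + b
```

The phantom step plays the same role as in the study: it maps scanner-specific signal fractions onto a fat-content scale before patient measurements are read off the curve.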
Diabetic peripheral neuropathy assessment through texture based analysis of corneal nerve images
NASA Astrophysics Data System (ADS)
Silva, Susana F.; Gouveia, Sofia; Gomes, Leonor; Negrão, Luís; João Quadrado, Maria; Domingues, José Paulo; Morgado, António Miguel
2015-05-01
Diabetic peripheral neuropathy (DPN) is a common complication of diabetes. Early diagnosis of DPN often fails due to the lack of a simple, reliable, non-invasive method. Several published studies show that corneal confocal microscopy (CCM) can identify small nerve fibre damage and quantify the severity of DPN, using nerve morphometric parameters. Here, we used image texture features, extracted from corneal sub-basal nerve plexus images obtained in vivo by CCM, to identify DPN patients, using classification techniques. An SVM classifier using image texture features was used to classify patients as DPN vs. no DPN. The accuracies were 80.6% when excluding diabetic patients without neuropathy, and 73.5% when including diabetic patients without neuropathy jointly with healthy controls. The results suggest that texture analysis might be used as a complementary technique for DPN diagnosis, without requiring nerve segmentation in CCM images. The results also suggest that this technique has enough sensitivity to detect early disorders in the corneal nerves of diabetic patients.
Morgan, Kaye S; Paganin, David M; Siu, Karen K W
2011-01-01
The ability to quantitatively retrieve transverse phase maps during imaging by using coherent x rays often requires a precise grating or analyzer-crystal-based setup. Imaging of live animals presents further challenges when these methods require multiple exposures for image reconstruction. We present a simple method of single-exposure, single-grating quantitative phase contrast for a regime in which the grating period is much greater than the effective pixel size. A grating is used to create a high-visibility reference pattern incident on the sample, which is distorted according to the complex refractive index and thickness of the sample. The resolution, along a line parallel to the grating, is not restricted by the grating spacing, and the detector resolution becomes the primary determinant of the spatial resolution. We present a method of analysis that maps the displacement of interrogation windows in order to retrieve a quantitative phase map. Application of this analysis to the imaging of known phantoms shows excellent correspondence.
Rueckl, Martin; Lenzi, Stephen C; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W
2017-01-01
The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca 2+ -imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale, (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca 2+ imaging datasets, particularly when these have been acquired at different spatial scales.
Wistermayer, Paul R; McIlwain, Wesley R; Ieronimakis, Nicholas; Rogers, Derek J
2018-04-01
Validate an accurate and reproducible method of measuring the cross-sectional area (CSA) of the upper airway. This is a prospective animal study done at a tertiary care medical treatment facility. Control images were obtained using endotracheal tubes of varying sizes. In vivo images were obtained at various timepoints of a concurrent study on subglottic stenosis. Using a 0° rod telescope, an instrument was placed at the level of interest, and a photo was obtained. Three independent and blinded raters then measured the CSA of the narrowest portion of the airway using open source image analysis software. Each blinded rater measured the CSA of 79 photos. The t tests assessing accuracy showed no difference between measured and known CSAs of the control images (P = .86), with an average error of 1.5% (SD = 5.5%). All intraclass correlation (ICC) values for intrarater agreement showed excellent agreement (ICC > .75). Interrater reliability among all raters in control (ICC = .975; 95% CI, .817-.995) and in vivo (ICC = .846; 95% CI, .780-.896) images showed excellent agreement. We validate a simple, accurate, and reproducible method of measuring the CSA of the airway that can be used in a clinical or research setting.
Faster than "g", Revisited with High-Speed Imaging
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2012-01-01
The introduction of modern high-speed cameras in physics teaching provides a tool not only for easy visualization, but also for quantitative analysis of many simple though fast occurring phenomena. As an example, we present a very well-known demonstration experiment--sometimes also discussed in the context of falling chimneys--which is commonly…
ERIC Educational Resources Information Center
Lopez-Arias, T.; Calza, G.; Gratton, L. M.; Oss, S.
2009-01-01
A simple experiment is presented to visualize inferior and superior mirages in the laboratory. A quantitative analysis is done using ray tracing with both photographic and computational techniques. The mirage's image, as seen by the eye or the camera lens, can be used to analyse the deflection and inversion of light rays. (Contains 6 footnotes, 1…
GIS Toolsets for Planetary Geomorphology and Landing-Site Analysis
NASA Astrophysics Data System (ADS)
Nass, Andrea; van Gasselt, Stephan
2015-04-01
Modern Geographic Information Systems (GIS) allow expert and lay users alike to load and position geographic data and perform simple to highly complex surface analyses. For many applications, dedicated and ready-to-use GIS tools are available in standard software systems, while other applications require the modular combination of available basic tools to answer more specific questions. This also applies to analyses in modern planetary geomorphology, where many such (basic) tools can be used to build complex analysis tools, e.g. in image- and terrain-model analysis. Apart from the simple application of sets of different tools, many complex tasks require a more sophisticated design for storing and accessing data using databases (e.g. ArcHydro for hydrological data analysis). In planetary sciences, complex database-driven models are often required to efficiently analyse potential landing sites or store rover data, but geologic mapping data can also be efficiently stored and accessed using database models rather than stand-alone shapefiles. For landing-site analyses, relief and surface roughness estimates are two common concepts of particular interest, and for both, a number of different definitions co-exist. We here present an advanced toolset for the analysis of image and terrain-model data with an emphasis on extraction of landing-site characteristics using established criteria. We provide working examples and particularly focus on the concept of terrain roughness as it is interpreted in geomorphology and engineering studies.
Huang, Yue; Zheng, Han; Liu, Chi; Ding, Xinghao; Rohde, Gustavo K
2017-11-01
Epithelium-stroma classification is a necessary preprocessing step in histopathological image analysis. Current deep learning based recognition methods for histology data require collection of large volumes of labeled data in order to train a new neural network when there are changes to the image acquisition procedure. However, it is extremely expensive for pathologists to manually label sufficient volumes of data for each pathology study in a professional manner, which results in limitations in real-world applications. This paper proposes a very simple but effective deep learning method that introduces the concept of unsupervised domain adaptation to a simple convolutional neural network (CNN). Inspired by transfer learning, our method assumes that the training data and testing data follow different distributions, and applies an adaptation operation to more accurately estimate the kernels of the CNN during feature extraction, enhancing performance by transferring knowledge from labeled data in the source domain to unlabeled data in the target domain. The model has been evaluated using three independent public epithelium-stroma datasets by cross-dataset validation. The experimental results demonstrate that for epithelium-stroma classification, the proposed framework outperforms the state-of-the-art deep neural network model, and it also achieves better performance than other existing deep domain adaptation methods. The proposed model can be considered a better option for real-world applications in histopathological image analysis, since there is no longer a requirement for large-scale labeled data in each specified domain.
CMOS Active-Pixel Image Sensor With Simple Floating Gates
NASA Technical Reports Server (NTRS)
Fossum, Eric R.; Nakamura, Junichi; Kemeny, Sabrina E.
1996-01-01
Experimental complementary metal-oxide/semiconductor (CMOS) active-pixel image sensor integrated circuit features simple floating-gate structure, with metal-oxide/semiconductor field-effect transistor (MOSFET) as active circuit element in each pixel. Provides flexibility of readout modes, no kTC noise, and relatively simple structure suitable for high-density arrays. Features desirable for "smart sensor" applications.
Visualization and Analysis of Microtubule Dynamics Using Dual Color-Coded Display of Plus-End Labels
Garrison, Amy K.; Xia, Caihong; Wang, Zheng; Ma, Le
2012-01-01
Investigating spatial and temporal control of microtubule dynamics in live cells is critical to understanding cell morphogenesis in development and disease. Tracking fluorescently labeled plus-end-tracking proteins over time has become a widely used method to study microtubule assembly. Here, we report a complementary approach that uses only two images of these labels to visualize and analyze microtubule dynamics at any given time. Using a simple color-coding scheme, labeled plus-ends from two sequential images are pseudocolored with different colors and then merged to display color-coded ends. Based on object recognition algorithms, these colored ends can be identified and segregated into dynamic groups corresponding to four events, including growth, rescue, catastrophe, and pause. Further analysis yields not only their spatial distribution throughout the cell but also provides measurements such as growth rate and direction for each labeled end. We have validated the method by comparing our results with ground-truth data derived from manual analysis as well as with data obtained using the tracking method. In addition, we have confirmed color-coded representation of different dynamic events by analyzing their history and fate. Finally, we have demonstrated the use of the method to investigate microtubule assembly in cells and provided guidance in selecting optimal image acquisition conditions. Thus, this simple computer vision method offers a unique and quantitative approach to study spatial regulation of microtubule dynamics in cells. PMID:23226282
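The two-image color-coding scheme can be sketched minimally as follows; the channel assignment and function name are illustrative, not the authors' implementation. Plus-end labels present only in the first frame appear red, labels only in the second appear green, and stationary (paused) ends overlap into yellow:

```python
import numpy as np

def color_code_ends(frame_t0, frame_t1):
    """Merge two sequential plus-end images into one RGB display.

    Ends present only at t0 appear red, ends only at t1 appear green,
    and stationary ends appear yellow in the merged overlay.
    """
    f0 = frame_t0.astype(float) / max(frame_t0.max(), 1)
    f1 = frame_t1.astype(float) / max(frame_t1.max(), 1)
    rgb = np.zeros(f0.shape + (3,))
    rgb[..., 0] = f0  # red channel: earlier time point
    rgb[..., 1] = f1  # green channel: later time point
    return rgb
```

Object-recognition on the merged image (e.g., connected-component analysis of single-color vs. two-color ends) would then segregate the dynamic events described in the abstract.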
Dao, Lam; Glancy, Brian; Lucotte, Bertrand; Chang, Lin-Ching; Balaban, Robert S; Hsu, Li-Yueh
2015-01-01
This paper investigates a post-processing approach to correct spatial distortion in two-photon fluorescence microscopy images for vascular network reconstruction. It is aimed at in vivo imaging of large field-of-view, deep-tissue studies of vascular structures. Based on simple geometric modeling of the object of interest, a distortion function is directly estimated from the image volume by deconvolution analysis. This distortion function is then applied to sub-volumes of the image stack to adaptively adjust for spatially varying distortion and reduce image blurring through blind deconvolution. The proposed technique was first evaluated in phantom imaging of fluorescent microspheres that are comparable in size to the underlying capillary vascular structures. The effectiveness of restoring the three-dimensional spherical geometry of the microspheres using the estimated distortion function was compared with an empirically measured point-spread function. Next, the proposed approach was applied to in vivo vascular imaging of mouse skeletal muscle to reduce the image distortion of the capillary structures. We show that the proposed method effectively improves image quality and reduces the spatially varying distortion that occurs in large field-of-view, deep-tissue vascular datasets. The proposed method will help in qualitative interpretation and quantitative analysis of vascular structures from fluorescence microscopy images. PMID:26224257
NASA Astrophysics Data System (ADS)
Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Roh, Seungkuk
2016-05-01
In this paper, we propose a new image reconstruction algorithm that considers the geometric information of the acoustic sources and the sensor detector, and we review the two-step reconstruction algorithm previously proposed based on the geometrical information of the ROI (region of interest), considering the finite size of the acoustic sensor element. In the new image reconstruction algorithm, not only is the mathematical analysis very simple, but its software implementation is also easy because the FFT is not needed. We verify the effectiveness of the proposed reconstruction algorithm through simulation results obtained using the MATLAB k-Wave toolbox.
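The abstract does not spell out the algorithm itself. As a generic illustration of FFT-free reconstruction from the geometry of sources and sensors, a minimal delay-and-sum backprojection sketch (all names, the sound speed, and the sampling rate are illustrative assumptions, not the paper's method) is:

```python
import numpy as np

def delay_and_sum(signals, sensor_pos, grid, c=1500.0, fs=20e6):
    """Generic delay-and-sum backprojection (no FFT required).

    signals   : (n_sensors, n_samples) recorded pressure traces
    sensor_pos: (n_sensors, 2) sensor coordinates in metres
    grid      : (n_pixels, 2) reconstruction-point coordinates in metres
    c         : assumed speed of sound (m/s); fs: sampling rate (Hz)
    """
    n_sensors, n_samples = signals.shape
    image = np.zeros(len(grid))
    for s in range(n_sensors):
        # time of flight from every grid point to this sensor
        d = np.linalg.norm(grid - sensor_pos[s], axis=1)
        idx = np.round(d / c * fs).astype(int)
        valid = idx < n_samples
        image[valid] += signals[s, idx[valid]]
    return image / n_sensors
```

A point source reconstructs to a peak at its true location because only there do the per-sensor delays align.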
On estimation of secret message length in LSB steganography in spatial domain
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav
2004-06-01
In this paper, we present a new method for estimating the secret message length of bit-streams embedded using the Least Significant Bit embedding (LSB) at random pixel positions. We introduce the concept of a weighted stego image and then formulate the problem of determining the unknown message length as a simple optimization problem. The methodology is further refined to obtain more stable and accurate results for a wide spectrum of natural images. One of the advantages of the new method is its modular structure and a clean mathematical derivation that enables elegant estimator accuracy analysis using statistical image models.
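The weighted-stego-image idea can be sketched minimally as follows, assuming uniform weights and a four-neighbour cover predictor; the paper's refinements for stability and accuracy are omitted, and the function name is ours:

```python
import numpy as np

def ws_estimate(stego):
    """Weighted-stego estimate of the relative LSB message length.

    For each pixel, (s - flip(s)) is +/-1 depending on LSB parity;
    multiplying by the prediction residual (s - pred) contributes ~1
    for pixels whose LSB was flipped by embedding and ~0 otherwise.
    """
    s = stego.astype(float)
    # simple cover predictor: average of the four neighbours
    pred = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
            np.roll(s, 1, 1) + np.roll(s, -1, 1)) / 4.0
    lsb_sign = 2 * (stego % 2) - 1          # s - flip(s)
    # random message bits change only half the pixels they visit,
    # hence the factor of 2 to recover the embedding rate
    return 2.0 * np.mean(lsb_sign * (s - pred))
```

On a smooth cover the estimate is close to the true embedding rate; textured covers need the weighting that the full method introduces.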
NASA Astrophysics Data System (ADS)
Treloar, W. J.; Taylor, G. E.; Flenley, J. R.
2004-12-01
This is the first of a series of papers on the theme of automated pollen analysis. The automation of pollen analysis could result in numerous advantages for the reconstruction of past environments, with larger data sets made practical, objectivity and fine resolution sampling. There are also applications in apiculture and medicine. Previous work on the classification of pollen using texture measures has been successful with small numbers of pollen taxa. However, as the number of pollen taxa to be identified increases, more features may be required to achieve a successful classification. This paper describes the use of simple geometric measures to augment the texture measures. The feasibility of this new approach is tested using scanning electron microscope (SEM) images of 12 taxa of fresh pollen taken from reference material collected on Henderson Island, Polynesia. Pollen images were captured directly from a SEM connected to a PC. A threshold grey-level was set and binary images were then generated. Pollen edges were then located and the boundaries were traced using a chain coding system. A number of simple geometric variables were calculated directly from the chain code of the pollen and a variable selection procedure was used to choose the optimal subset to be used for classification. The efficiency of these variables was tested using a leave-one-out classification procedure. The system successfully split the original 12 taxa sample into five sub-samples containing no more than six pollen taxa each. The further subdivision of echinate pollen types was then attempted with a subset of four pollen taxa. A set of difference codes was constructed for a range of displacements along the chain code. From these difference codes probability variables were calculated. A variable selection procedure was again used to choose the optimal subset of probabilities that may be used for classification. 
The efficiency of these variables was again tested using a leave-one-out classification procedure. The proportion of correctly classified pollen ranged from 81% to 100% depending on the subset of variables used. The best set of variables had an overall classification rate averaging about 95%. This is comparable with the classification rates from the earlier texture analysis work for other types of pollen.
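The difference-code and probability-variable steps can be sketched as follows (Freeman 8-direction chain codes are assumed, and the function names are illustrative):

```python
from collections import Counter

def difference_code(chain, displacement=1):
    """Differences between chain codes `displacement` steps apart, mod 8.

    A Freeman chain code (directions 0-7) around a closed boundary is
    assumed; the difference code is rotation-invariant, which suits
    pollen grains imaged at arbitrary orientations.
    """
    n = len(chain)
    return [(chain[(i + displacement) % n] - chain[i]) % 8
            for i in range(n)]

def probability_variables(chain, displacement=1):
    """Relative frequency of each difference value (0-7)."""
    diffs = difference_code(chain, displacement)
    counts = Counter(diffs)
    return [counts.get(k, 0) / len(diffs) for k in range(8)]
```

Computing these probabilities over a range of displacements yields the candidate feature set from which the variable-selection procedure picks its optimal subset.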
An Integrative Object-Based Image Analysis Workflow for Uav Images
NASA Astrophysics Data System (ADS)
Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong
2016-06-01
In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking, and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm after the geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing the post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of the proposed method.
Modeling loosely annotated images using both given and imagined annotations
NASA Astrophysics Data System (ADS)
Tang, Hong; Boujemaa, Nozha; Chen, Yunhao; Deng, Lei
2011-12-01
In this paper, we present an approach to learn latent semantic analysis models from loosely annotated images for automatic image annotation and indexing. The given annotation in training images is loose for two reasons: 1. ambiguous correspondences between visual features and annotated keywords; 2. incomplete lists of annotated keywords. The second reason motivates us to enrich the incomplete annotation in a simple way before learning a topic model. In particular, some "imagined" keywords are added to the incomplete annotation by measuring similarity between keywords in terms of their co-occurrence. Then, both given and imagined annotations are employed to learn probabilistic topic models for automatically annotating new images. We conduct experiments on two image databases (i.e., Corel and ESP) coupled with their loose annotations, and compare the proposed method with state-of-the-art discrete annotation methods. The proposed method improves word-driven probabilistic latent semantic analysis (PLSA-words) up to a performance comparable with the best discrete annotation method, while a merit of PLSA-words is still kept, i.e., a wider semantic range.
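The enrichment step can be sketched with a toy example, using raw co-occurrence counts as a stand-in for the paper's similarity measure (all names are illustrative):

```python
import numpy as np

def enrich_annotations(annotations, vocab, top_k=1):
    """Add 'imagined' keywords to each incomplete annotation.

    Candidate words are scored by how often they co-occur with the
    given keywords across the whole training corpus; the top-scoring
    absent words are appended as imagined keywords.
    """
    idx = {w: i for i, w in enumerate(vocab)}
    cooc = np.zeros((len(vocab), len(vocab)))
    for ann in annotations:
        for a in ann:
            for b in ann:
                if a != b:
                    cooc[idx[a], idx[b]] += 1
    enriched = []
    for ann in annotations:
        score = cooc[[idx[w] for w in ann]].sum(axis=0)
        score[[idx[w] for w in ann]] = -1      # never re-add given words
        order = np.argsort(score)[::-1]
        imagined = [vocab[j] for j in order[:top_k] if score[j] > 0]
        enriched.append(list(ann) + imagined)
    return enriched
```

The enriched annotations would then feed the topic-model training in place of the original incomplete lists.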
Current and future trends in marine image annotation software
NASA Astrophysics Data System (ADS)
Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.
2016-12-01
Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation - the process of transposing objects or events represented in a video or still image to the semantic level - may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) have enabled over 500 publications to date. We review their functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input are basically a graphical user interface with a video player or image browser that recognizes a specific time code or image code, allowing events to be logged in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software in their capability of integrating data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main modes of operation: annotating events in real time, annotating after acquisition, and interacting with a database. These range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow input and display of data from multiple sensors or multiple annotators via intranet or internet. Post hoc human-mediated annotation often includes tools for data display and image analysis, e.g. length, area, image segmentation, point count, and in a few cases the possibility of browsing and editing previous dive logs or analyzing the annotations. The interaction with a database allows the automatic integration of annotations from different surveys, repeated annotation and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post-processing, for stable platforms or still images. Integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics aiding the choice of specific software are outlined, the ideal software is discussed, and future trends are presented.
NASA Technical Reports Server (NTRS)
Abbey, Craig K.; Eckstein, Miguel P.
2002-01-01
We consider estimation and statistical hypothesis testing on classification images obtained from the two-alternative forced-choice experimental paradigm. We begin with a probabilistic model of task performance for simple forced-choice detection and discrimination tasks. Particular attention is paid to general linear filter models because these models lead to a direct interpretation of the classification image as an estimate of the filter weights. We then describe an estimation procedure for obtaining classification images from observer data. A number of statistical tests are presented for testing various hypotheses from classification images based on some more compact set of features derived from them. As an example of how the methods we describe can be used, we present a case study investigating detection of a Gaussian bump profile.
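A minimal simulation of this estimation procedure for a linear-template observer in a 2AFC detection task might look like the following; the template, signal profile, and trial count are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
d, trials = 64, 20000
template = np.zeros(d); template[28:36] = 1.0   # assumed internal filter
signal = np.zeros(d);  signal[30:34] = 0.5      # weak signal profile

ci = np.zeros(d)
for _ in range(trials):
    n1, n2 = rng.normal(0, 1, d), rng.normal(0, 1, d)
    # 2AFC: signal always in interval 1; observer compares filter outputs
    chose_1 = template @ (signal + n1) > template @ n2
    # classification image: noise difference, sign-flipped on errors
    ci += (n1 - n2) if chose_1 else (n2 - n1)
ci /= trials
```

For a linear observer, the resulting classification image `ci` is proportional to the observer's filter weights, which is the direct interpretation the abstract refers to; hypothesis tests then operate on features derived from `ci`.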
Light Field Imaging Based Accurate Image Specular Highlight Removal
Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo
2016-01-01
Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using a light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of our proposed algorithm. PMID:27253083
Investigating an Aerial Image First
ERIC Educational Resources Information Center
Wyrembeck, Edward P.; Elmer, Jeffrey S.
2006-01-01
Most introductory optics lab activities begin with students locating the real image formed by a converging lens. The method is simple and straightforward--students move a screen back and forth until the real image is in sharp focus on the screen. Students then draw a simple ray diagram to explain the observation using only two or three special…
Zhou, Ruixi; Huang, Wei; Yang, Yang; Chen, Xiao; Weller, Daniel S; Kramer, Christopher M; Kozerke, Sebastian; Salerno, Michael
2018-02-01
Cardiovascular magnetic resonance (CMR) stress perfusion imaging provides important diagnostic and prognostic information in coronary artery disease (CAD). Current clinical sequences have limited temporal and/or spatial resolution and incomplete heart coverage. Techniques such as k-t principal component analysis (PCA) or k-t sparsity and low-rank structure (SLR), which rely on the high degree of spatiotemporal correlation in first-pass perfusion data, can significantly accelerate image acquisition, mitigating these problems. However, in the presence of respiratory motion, these techniques can suffer from significant degradation of image quality. A number of techniques based on non-rigid registration have been developed. However, to first approximation, breathing motion predominantly results in rigid motion of the heart. To this end, a simple robust motion correction strategy is proposed for k-t accelerated and compressed sensing (CS) perfusion imaging. A simple respiratory motion compensation (MC) strategy for k-t accelerated and compressed-sensing CMR perfusion imaging to selectively correct respiratory motion of the heart was implemented based on linear k-space phase shifts derived from rigid motion registration of a region of interest (ROI) encompassing the heart. A variable-density Poisson disk acquisition strategy was used to minimize coherent aliasing in the presence of respiratory motion, and images were reconstructed using k-t PCA and k-t SLR with or without motion correction. The strategy was evaluated in a CMR-extended cardiac torso digital (XCAT) phantom and in prospectively acquired first-pass perfusion studies in 12 subjects undergoing clinically ordered CMR studies. Phantom studies were assessed using the Structural Similarity Index (SSIM) and Root Mean Square Error (RMSE). In patient studies, image quality was scored in a blinded fashion by two experienced cardiologists.
In the phantom experiments, images reconstructed with the MC strategy had higher SSIM (p < 0.01) and lower RMSE (p < 0.01) in the presence of respiratory motion. For patient studies, the MC strategy improved k-t PCA and k-t SLR reconstruction image quality (p < 0.01). k-t SLR without motion correction demonstrated improved image quality compared with k-t PCA in the setting of respiratory motion (p < 0.01), while with motion correction there was a trend toward better performance with k-t SLR compared with motion-corrected k-t PCA. Our simple and robust rigid motion compensation strategy greatly reduces motion artifacts and improves image quality for standard k-t PCA and k-t SLR techniques in the setting of respiratory motion due to imperfect breath-holding.
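The linear k-space phase shifts used for rigid motion correction follow from the Fourier shift theorem: translating an image is equivalent to multiplying its k-space data by a linear phase ramp. A minimal sketch (function name illustrative; the published method derives the shifts from ROI registration rather than knowing them a priori):

```python
import numpy as np

def kspace_translate(img, dy, dx):
    """Translate an image by (dy, dx) pixels via a linear phase in k-space.

    This is the Fourier shift theorem underlying rigid respiratory
    motion correction applied directly to acquired k-space data.
    Shifts are circular because the DFT is periodic.
    """
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return np.fft.ifft2(np.fft.fft2(img) * ramp)
```

Because the phase ramp accepts non-integer `dy`, `dx`, sub-pixel motion estimated from registration can be compensated without interpolating in image space.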
Geometric Theory of Moving Grid Wavefront Sensor
1977-06-30
Keywords: adaptive optics, wavefront sensor, geometric optics analysis, moving Ronchi grid. A geometric optics analysis is made for a wavefront sensor that uses a moving Ronchi grid. It is shown that by simple data... optical systems being considered or being developed for imaging an object through a turbulent atmosphere. Some of these use a wavefront sensor to
Cervical Vertebral Body's Volume as a New Parameter for Predicting the Skeletal Maturation Stages.
Choi, Youn-Kyung; Kim, Jinmi; Yamaguchi, Tetsutaro; Maki, Koutaro; Ko, Ching-Chang; Kim, Yong-Il
2016-01-01
This study aimed to determine the correlation between the volumetric parameters derived from images of the second, third, and fourth cervical vertebrae obtained using cone beam computed tomography and skeletal maturation stages, and to propose a new formula for predicting skeletal maturation by using regression analysis. We obtained estimates of skeletal maturation levels from hand-wrist radiographs and volume parameters derived from the second, third, and fourth cervical vertebral bodies from 102 Japanese patients (54 women and 48 men, 5-18 years of age). We performed Pearson's correlation coefficient analysis and simple regression analysis. All volume parameters derived from the second, third, and fourth cervical vertebrae exhibited statistically significant correlations (P < 0.05). The simple regression model with the greatest R-square indicated the fourth-cervical-vertebra volume as an independent variable with a variance inflation factor less than ten. The explanatory power was 81.76%. Volumetric parameters of cervical vertebrae using cone beam computed tomography are useful in regression models. The derived regression model has the potential for clinical application, as it enables a simple and quantitative analysis to evaluate skeletal maturation level.
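The simple-regression-with-R-square analysis can be sketched generically as follows; the data below are synthetic, not the study's measurements:

```python
import numpy as np

def simple_regression(x, y):
    """Least-squares fit y = a + b*x and its R-squared.

    np.polyfit returns coefficients highest degree first, so the
    slope comes before the intercept.
    """
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    r2 = 1 - resid.var() / y.var()
    return a, b, r2
```

R-squared here plays the role of the "explanation power" reported in the abstract: the fraction of variance in maturation stage accounted for by vertebral volume.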
Wide-Field Imaging of Single-Nanoparticle Extinction with Sub-nm2 Sensitivity
NASA Astrophysics Data System (ADS)
Payne, Lukas M.; Langbein, Wolfgang; Borri, Paola
2018-03-01
We report on a highly sensitive wide-field imaging technique for quantitative measurement of the optical extinction cross section σext of single nanoparticles. The technique is simple and high speed, and it enables the simultaneous acquisition of hundreds of nanoparticles for statistical analysis. Using rapid referencing, fast acquisition, and a deconvolution analysis, a shot-noise-limited sensitivity down to 0.4 nm2 is achieved. Measurements on a set of individual gold nanoparticles of 5 nm diameter using this method yield σext=(10.0 ±3.1 ) nm2, which is consistent with theoretical expectations and well above the background fluctuations of 0.9 nm2 .
Leonard, Annemarie K; Loughran, Elizabeth A; Klymenko, Yuliya; Liu, Yueying; Kim, Oleg; Asem, Marwa; McAbee, Kevin; Ravosa, Matthew J; Stack, M Sharon
2018-01-01
This chapter highlights methods for visualization and analysis of extracellular matrix (ECM) proteins, with particular emphasis on collagen type I, the most abundant protein in mammals. Protocols described range from advanced imaging of complex in vivo matrices to simple biochemical analysis of individual ECM proteins. The first section of this chapter describes common methods to image ECM components and includes protocols for second harmonic generation, scanning electron microscopy, and several histological methods of ECM localization and degradation analysis, including immunohistochemistry, Trichrome staining, and in situ zymography. The second section of this chapter details both a common transwell invasion assay and a novel live imaging method to investigate cellular behavior with respect to collagen and other ECM proteins of interest. The final section consists of common electrophoresis-based biochemical methods that are used in analysis of ECM proteins. Use of the methods described herein will enable researchers to gain a greater understanding of the role of ECM structure and degradation in development and matrix-related diseases such as cancer and connective tissue disorders. © 2018 Elsevier Inc. All rights reserved.
The Role of Nonlinear Gradients in Parallel Imaging: A k-Space Based Analysis.
Galiana, Gigi; Stockmann, Jason P; Tam, Leo; Peters, Dana; Tagare, Hemant; Constable, R Todd
2012-09-01
Sequences that encode the spatial information of an object using nonlinear gradient fields are a new frontier in MRI, with potential to provide lower peripheral nerve stimulation, windowed fields of view, tailored spatially-varying resolution, curved slices that mirror physiological geometry, and, most importantly, very fast parallel imaging with multichannel coils. The acceleration for multichannel images is generally explained by the fact that curvilinear gradient isocontours better complement the azimuthal spatial encoding provided by typical receiver arrays. However, the details of this complementarity have been more difficult to specify. We present a simple and intuitive framework for describing the mechanics of image formation with nonlinear gradients, and we use this framework to review some of the main classes of nonlinear encoding schemes.
Complex amplitude reconstruction by iterative amplitude-phase retrieval algorithm with reference
NASA Astrophysics Data System (ADS)
Shen, Cheng; Guo, Cheng; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun
2018-06-01
Multi-image iterative phase retrieval methods have been successfully applied in many research fields due to their simple but efficient implementation. However, there is a mismatch between the measurement of the first, long imaging distance and the subsequent intervals. In this paper, an amplitude-phase retrieval algorithm with reference is put forward without additional measurements or a priori knowledge. It gets rid of measuring the first imaging distance. With a designed update formula, it significantly improves the convergence speed and the reconstruction fidelity, especially in phase retrieval. Its superiority over the original amplitude-phase retrieval (APR) method is validated by numerical analysis and experiments. Furthermore, it provides a conceptual design of a compact holographic image sensor, which can achieve numerical refocusing easily.
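The family of methods the abstract builds on can be illustrated with a plain Gerchberg-Saxton-style amplitude-phase iteration between two measured amplitudes; this is a generic stand-in, not the paper's reference-aided update formula.

```python
import numpy as np

# Pure-phase test object: unit amplitude in plane 1, "measured" Fourier
# amplitude in plane 2. The update rule is plain Gerchberg-Saxton,
# not the paper's reference-aided APR formula.
rng = np.random.default_rng(0)
n = 32
true_field = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (n, n)))
source_amp = np.abs(true_field)                   # known plane-1 amplitude
target_amp = np.abs(np.fft.fft2(true_field))      # measured plane-2 amplitude

field = source_amp.astype(complex)                # start with flat phase
initial = np.linalg.norm(np.abs(np.fft.fft2(field)) - target_amp)

for _ in range(50):
    far = np.fft.fft2(field)
    far = target_amp * np.exp(1j * np.angle(far))      # enforce plane-2 amplitude
    field = np.fft.ifft2(far)
    field = source_amp * np.exp(1j * np.angle(field))  # enforce plane-1 amplitude

# The amplitude mismatch is non-increasing over iterations.
residual = np.linalg.norm(np.abs(np.fft.fft2(field)) - target_amp)
```

Each pass projects the field onto the two amplitude constraints in turn, which is the convergence mechanism the paper's designed update formula accelerates.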
Instant Grainification: Real-Time Grain-Size Analysis from Digital Images in the Field
NASA Astrophysics Data System (ADS)
Rubin, D. M.; Chezar, H.
2007-12-01
Over the past few years, digital cameras and underwater microscopes have been developed to collect in-situ images of sand-sized bed sediment, and software has been developed to measure grain size from those digital images (Chezar and Rubin, 2004; Rubin, 2004; Rubin et al., 2006). Until now, all image processing and grain-size analysis was done back in the office, where images were uploaded from cameras and processed on desktop computers. Computer hardware has become small and rugged enough to process images in the field, which for the first time allows real-time grain-size analysis of sand-sized bed sediment. We present such a system consisting of a weatherproof tablet computer, open-source image-processing software (autocorrelation code of Rubin, 2004, running under Octave and Cygwin), and a digital camera with a macro lens. Chezar, H., and Rubin, D., 2004, Underwater microscope system: U.S. Patent and Trademark Office, patent number 6,680,795, January 20, 2004. Rubin, D.M., 2004, A simple autocorrelation algorithm for determining grain size from digital images of sediment: Journal of Sedimentary Research, v. 74, p. 160-165. Rubin, D.M., Chezar, H., Harney, J.N., Topping, D.J., Melis, T.S., and Sherwood, C.R., 2006, Underwater microscope for measuring spatial and temporal changes in bed-sediment grain size: USGS Open-File Report 2006-1360.
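The autocorrelation idea behind Rubin (2004) can be shown on synthetic 1-D intensity profiles: coarser "grains" stay correlated over larger pixel offsets. This is a toy sketch of the principle, not the published algorithm or its calibration.

```python
import numpy as np

def correlation_at_offsets(signal, max_offset):
    """Normalized autocorrelation of a 1-D intensity profile at lags 1..max_offset."""
    s = signal - signal.mean()
    out = []
    for k in range(1, max_offset + 1):
        a, b = s[:-k], s[k:]
        out.append(float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b))))
    return np.array(out)

rng = np.random.default_rng(1)
fine = np.repeat(rng.uniform(0.0, 1.0, 300), 2)   # synthetic "grains" ~2 px wide
coarse = np.repeat(rng.uniform(0.0, 1.0, 75), 8)  # synthetic "grains" ~8 px wide

# Coarser grains remain correlated over larger pixel offsets.
r_fine = correlation_at_offsets(fine, 4)
r_coarse = correlation_at_offsets(coarse, 4)
```

Comparing the decay of the correlation curve against curves from calibration images of known grain size is what turns this into a grain-size estimate.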
Efficient Data Mining for Local Binary Pattern in Texture Image Analysis
Kwak, Jin Tae; Xu, Sheng; Wood, Bradford J.
2015-01-01
Local binary pattern (LBP) is a simple grayscale descriptor that characterizes the local distribution of gray levels in an image. Multi-resolution LBPs and/or combinations of LBPs have been shown to be effective in texture image analysis. However, it is unclear which resolutions or combinations to choose for texture analysis. Examining all the possible cases is impractical and intractable due to the exponential growth of the feature space. This limits the accuracy and the time- and space-efficiency of LBP. Here, we propose a data mining approach for LBP, which efficiently explores a high-dimensional feature space and finds a relatively small number of discriminative features. The features can be any combinations of LBPs. These may not be achievable with conventional approaches. Hence, our approach not only fully utilizes the capability of LBP but also maintains low computational complexity. We incorporated three different descriptors (LBP, local contrast measure, and local directional derivative measure) with three spatial resolutions and evaluated our approach using two comprehensive texture databases. The results demonstrated the effectiveness and robustness of our approach across different experimental designs and texture images. PMID:25767332
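A minimal single-resolution 8-neighbour LBP, the basic descriptor on which such multi-resolution combinations are built, might look like the following sketch:

```python
import numpy as np

def lbp8(img):
    """8-neighbour LBP code for each interior pixel of a 2-D grayscale array."""
    c = img[1:-1, 1:-1]
    # neighbours enumerated clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int64)  # codes fit in 0..255
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int64) << bit
    return code

img = np.array([[0.0, 0.0, 0.0],
                [0.0, 5.0, 9.0],
                [0.0, 0.0, 0.0]])
codes = lbp8(img)
# Only the right neighbour (9) is >= the centre (5): bit 3 set, code 8.
```

Texture features are then histograms of these codes over an image region; the paper's contribution is mining which resolutions and descriptor combinations of such codes to keep.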
NASA Astrophysics Data System (ADS)
Yusvana, Rama; Headon, Denis; Markx, Gerard H.
2009-08-01
The use of dielectrophoresis for the construction of artificial skin tissue with skin cells in follicle-like 3D cell aggregates in well-defined patterns is demonstrated. To analyse the patterns produced and to study their development after formation, a Virtual Instrument (VI) system was developed using the LabVIEW IMAQ Vision Development Module. A series of programming functions (algorithms) was used to isolate the features of interest on the image (in our case, the patterned aggregates) and separate them from all other unwanted regions. The image was subsequently converted into a binary version covering only the desired microarray regions, which could then be analysed by computer for automatic object measurements. The analysis utilized the simple and easy-to-use User-Specified Multi-Regions Masking (MRM) technique, which allows one to concentrate the analysis on the desired regions specified in the mask. This simplified the algorithms for the analysis of images of cell arrays having similar geometrical properties. By having a collection of scripts containing masks of different patterns, it was possible to quickly and efficiently develop sets of custom virtual instruments for the offline or online analysis of images of cell arrays in the database.
Swarm Intelligence for Optimizing Hybridized Smoothing Filter in Image Edge Enhancement
NASA Astrophysics Data System (ADS)
Rao, B. Tirumala; Dehuri, S.; Dileep, M.; Vindhya, A.
In this modern era, image transmission and processing play a major role. It would be impossible to retrieve information from satellite and medical images without the help of image processing techniques. Edge enhancement is an image processing step that enhances the edge contrast of an image or video in an attempt to improve its acutance. Edges are the representations of the discontinuities of image intensity functions. For processing these discontinuities in an image, a good edge enhancement technique is essential. The proposed work uses a new idea for edge enhancement using hybridized smoothing filters, and we introduce a promising technique for obtaining the best hybrid filter using swarm algorithms (Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), and Ant Colony Optimization (ACO)) to search for an optimal sequence of filters from among a set of rather simple, representative image processing filters. This paper deals with the analysis of the swarm intelligence techniques through the combination of hybrid filters generated by these algorithms for image edge enhancement.
Comparison of Two Simplification Methods for Shoreline Extraction from Digital Orthophoto Images
NASA Astrophysics Data System (ADS)
Bayram, B.; Sen, A.; Selbesoglu, M. O.; Vārna, I.; Petersons, P.; Aykut, N. O.; Seker, D. Z.
2017-11-01
Coastal ecosystems are very sensitive to external influences. Coastal resources such as sand dunes, coral reefs, and mangroves have vital importance in preventing coastal erosion. Human activities also threaten coastal areas. Therefore, changes in coastal areas should be monitored. Up-to-date, accurate shoreline information is indispensable for coastal managers and decision makers. Remote sensing and image processing techniques offer a great opportunity to obtain reliable shoreline information. In the presented study, NIR bands of seven 1:5000-scale digital orthophoto images of Riga Bay, Latvia have been used. The object-oriented Simple Linear Clustering method has been utilized to extract the shoreline of Riga Bay. The Bend and Douglas-Peucker methods have been used to simplify the extracted shoreline to test the effect of both methods. A photogrammetrically digitized shoreline has been taken as reference data to compare the obtained results. The accuracy assessment has been realised with the Digital Shoreline Analysis tool. As a result, the shoreline achieved by the Bend method has been found closer to the shoreline extracted with the Simple Linear Clustering method.
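The Douglas-Peucker simplification compared in the study can be sketched as a short recursion; the polyline below is arbitrary test data, not shoreline coordinates.

```python
import numpy as np

def douglas_peucker(points, tol):
    """Simplify a polyline within perpendicular-distance tolerance tol."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts.tolist()
    start, end = pts[0], pts[-1]
    seg = end - start
    seg_len = float(np.hypot(seg[0], seg[1]))
    if seg_len == 0.0:
        dists = np.hypot(pts[1:-1, 0] - start[0], pts[1:-1, 1] - start[1])
    else:
        # perpendicular distance of interior points to the start-end chord
        dists = np.abs(seg[0] * (pts[1:-1, 1] - start[1])
                       - seg[1] * (pts[1:-1, 0] - start[0])) / seg_len
    i = int(np.argmax(dists)) + 1
    if dists[i - 1] > tol:
        left = douglas_peucker(pts[:i + 1], tol)
        right = douglas_peucker(pts[i:], tol)
        return left[:-1] + right       # drop shared split point once
    return [pts[0].tolist(), pts[-1].tolist()]

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
simplified = douglas_peucker(line, tol=0.5)
```

Vertices deviating less than `tol` from the simplified chords are dropped, which is why tolerance choice directly trades positional accuracy against vertex count in shoreline comparison studies like this one.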
On the bandwidth of the plenoptic function.
Do, Minh N; Marchand-Maillet, Davy; Vetterli, Martin
2012-02-01
The plenoptic function (POF) provides a powerful conceptual tool for describing a number of problems in image/video processing, vision, and graphics. For example, image-based rendering is shown as sampling and interpolation of the POF. In such applications, it is important to characterize the bandwidth of the POF. We study a simple but representative model of the scene where band-limited signals (e.g., texture images) are "painted" on smooth surfaces (e.g., of objects or walls). We show that, in general, the POF is not band-limited unless the surfaces are flat. We then derive simple rules to estimate the essential bandwidth of the POF for this model. Our analysis reveals that, in addition to the maximum and minimum depths and the maximum frequency of painted signals, the bandwidth of the POF also depends on the maximum surface slope. With a unifying formalism based on multidimensional signal processing, we can verify several key results in POF processing, such as induced filtering in space and depth-corrected interpolation, and quantify the necessary sampling rates. © 2011 IEEE
Identification, definition and mapping of terrestrial ecosystems in interior Alaska
NASA Technical Reports Server (NTRS)
Anderson, J. H. (Principal Investigator)
1972-01-01
The author has identified the following significant results. A reconstituted color infrared image covering the western Seward Peninsula was used for identifying vegetation types by simple visual examination. The image was taken by ERTS-1 at approximately 1120 hours on August 1, 1972. Seven major colors were identified. Four of these were matched with four units on existing vegetation maps: bright red - shrub thicket; light gray-red - upland tundra; medium gray-red - coastal wet tundra; gray - alpine barrens. In the bright red color, two phases, violet and orange, were recognized and tentatively ascribed to differences in species composition in the shrub thicket type. The three colors which had no map unit equivalents were interpreted as follows: pink - grassland tundra; dark gray-red - burn scars; light orange-red - senescent vegetation. It was concluded that the image provides a considerable amount of information regarding the distribution of vegetation types, even at so simple a level of analysis. It was also concluded that sequential imagery of this type could provide useful information on vegetation fires and phenologic events.
NASA Astrophysics Data System (ADS)
Kulkarni, Ashutosh; Stork, David G.
2009-02-01
A recent theory claims that some Renaissance artists, as early as 1425, secretly traced optically projected images during the execution of some passages in some of their works, nearly a quarter millennium before historians of art and of optics have secure evidence anyone recorded an image this way. Key evidence adduced by the theory's proponents includes the trelliswork in the right panel of Robert Campin's Merode altarpiece triptych (c. 1425-28). If their claim were verified for this work, such a discovery would be extremely important to the history of art and of image making more generally: the Altarpiece would be the earliest surviving image believed to record the projected image of an illuminated object, the first step towards photography, over 400 years later. The projection theory's proponents point to tiny "kinks" in the depicted slats of one orientation in the Altarpiece as evidence that Campin refocussed a projector twice and traced images of physically straight slats in his studio. However, the proponents rotated the digital images of each slat individually, rather than the full trelliswork as a whole, and thereby disrupted the relative alignment between the images of the kinks and thus confounded their analysis. We found that when properly rotated, the kinks line up nearly perfectly and are consistent with Campin using a subtly kinked straightedge repeatedly, once for each of the slats. Moreover, the proponents did not report any analysis of the other set of slats, the ones nearly perpendicular to the first set. These perpendicular slats are straight across the break line of the first set, an unlikely scenario under the optical explanation. Finally, whereas it would have been difficult for Campin to draw the middle portions of the slats perfectly straight by tracing a projected image, it would have been trivially simple had he used a straightedge. Our results and the lack of any contemporaneous documentary evidence for the projection technique imply that Campin used a simple mechanical aid, such as a minutely kinked straightedge or a mahl stick commonly used in the early Renaissance, rather than a very complex optical projector and procedure, undocumented from that time.
Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.
Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos
2016-05-05
Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple. However, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve automatic phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully but may require different ML algorithms for segmentation.
Iodine-131 radiolabeling of poly ethylene glycol-coated gold nanorods for in vivo imaging.
Eskandari, Najmeh; Yavari, Kamal; Outokesh, Mohammad; Sadjadi, Sodeh; Ahmadi, Seyed Javad
2013-01-01
Gold nanorods (GNRs) can be used in various biomedical applications; however, very little is known about their in vivo tissue distribution. Here, we have developed a rapid and simple method, with high yield and without disturbing their optical properties, for radiolabeling gold nanorods with iodine-131 in order to track in vivo tissue uptake of GNRs after systemic administration by biodistribution analysis and γ-imaging. Following intravenous injection into rats, PEGylated GNRs have much longer blood circulation times. Copyright © 2012 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Dobner, Sven; Fallnich, Carsten
2014-02-01
We present the hyperspectral imaging capabilities of in-line interferometric femtosecond stimulated Raman scattering. The beneficial features of this method, namely the improved signal-to-background ratio compared to other applicable broadband stimulated Raman scattering methods and the simple experimental implementation, allow for rather fast acquisition of three-dimensional raster-scanned hyperspectral datasets, which is shown for PMMA beads and a lipid droplet in water as a demonstration. A subsequent application of a principal component analysis demonstrates the chemical selectivity of the method.
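The principal component analysis of a hyperspectral stack reduces to an SVD of the mean-centred pixel-by-band matrix; the two-component synthetic spectra here are invented for illustration, not measured Raman data.

```python
import numpy as np

# Two synthetic component spectra (Gaussian bands standing in for, e.g.,
# "lipid" and "PMMA" resonances) mixed with random per-pixel weights.
rng = np.random.default_rng(2)
n_pixels, n_bands = 200, 40
bands = np.arange(n_bands)
spec_a = np.exp(-0.5 * ((bands - 10) / 3.0) ** 2)
spec_b = np.exp(-0.5 * ((bands - 28) / 3.0) ** 2)
weights = rng.uniform(0.0, 1.0, (n_pixels, 2))
data = weights @ np.vstack([spec_a, spec_b])
data += 0.01 * rng.normal(size=(n_pixels, n_bands))   # measurement noise

# PCA via SVD of the mean-centred pixel-by-band matrix.
centered = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
# For a two-chemical mixture, two components carry nearly all variance.
```

Inspecting the leading right singular vectors (rows of `vt`) then reveals the distinct spectral signatures, which is the sense in which PCA displays chemical selectivity.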
Webcam classification using simple features
NASA Astrophysics Data System (ADS)
Pramoun, Thitiporn; Choe, Jeehyun; Li, He; Chen, Qingshuang; Amornraksa, Thumrongrat; Lu, Yung-Hsiang; Delp, Edward J.
2015-03-01
Thousands of sensors are connected to the Internet, and many of these sensors are cameras. The "Internet of Things" will contain many "things" that are image sensors. This vast network of distributed cameras (i.e., webcams) will continue to grow exponentially. In this paper we examine simple methods to classify an image from a webcam as "indoor/outdoor" and having "people/no people" based on simple features. We use four types of image features to classify an image as indoor/outdoor: color, edge, line, and text. To classify an image as having people/no people we use HOG and texture features. The features are weighted based on their significance and combined. A support vector machine is used for classification. Our system with feature weighting and feature combination yields 95.5% accuracy.
USDA-ARS?s Scientific Manuscript database
Spectral scattering is useful for assessing the firmness and soluble solids content (SSC) of apples. In previous research, mean reflectance extracted from the hyperspectral scattering profiles was used for this purpose since the method is simple and fast and also gives relatively good predictions. T...
USDA-ARS?s Scientific Manuscript database
Rice germplasm with an inherent ability to suppress weeds can potentially improve the economics and sustainability of weed control in rice. We devised a simple, rapid, and inexpensive digital imaging system to quantify several shoot and root growth characteristics in field-grown rice plants that ha...
NASA Astrophysics Data System (ADS)
Ambekar Ramachandra Rao, Raghu; Mehta, Monal R.; Toussaint, Kimani C., Jr.
2010-02-01
We demonstrate the use of Fourier transform-second-harmonic generation (FT-SHG) imaging of collagen fibers as a means of performing quantitative analysis of images of selected spatial regions in porcine trachea, ear, and cornea. Two quantitative markers, preferred orientation and maximum spatial frequency, are proposed for differentiating structural information between various spatial regions of interest in the specimens. The ear shows consistent maximum spatial frequency and orientation, as also observed in its real-space image. However, there are observable changes in the orientation and minimum feature size of fibers in the trachea, indicating a more random organization. Finally, the analysis is applied to a 3D image stack of the cornea. It is shown that the standard deviation of the orientation is sensitive to randomness in fiber orientation. Regions with variations in the maximum spatial frequency, but with relatively constant orientation, suggest that maximum spatial frequency is useful as an independent quantitative marker. We emphasize that FT-SHG is a simple, yet powerful, tool for extracting information from images that is not obvious in real space. This technique can be used as a quantitative biomarker to assess the structure of collagen fibers that may change due to damage from disease or physical injury.
Hyperspectral and differential CARS microscopy for quantitative chemical imaging in human adipocytes
Di Napoli, Claudia; Pope, Iestyn; Masia, Francesco; Watson, Peter; Langbein, Wolfgang; Borri, Paola
2014-01-01
In this work, we demonstrate the applicability of coherent anti-Stokes Raman scattering (CARS) micro-spectroscopy for quantitative chemical imaging of saturated and unsaturated lipids in human stem-cell derived adipocytes. We compare dual-frequency/differential CARS (D-CARS), which enables rapid imaging and simple data analysis, with broadband hyperspectral CARS microscopy analyzed using an unsupervised phase-retrieval and factorization method recently developed by us for quantitative chemical image analysis. Measurements were taken in the vibrational fingerprint region (1200–2000/cm) and in the CH stretch region (2600–3300/cm) using a home-built CARS set-up which enables hyperspectral imaging with 10/cm resolution via spectral focussing from a single broadband 5 fs Ti:Sa laser source. Through a ratiometric analysis, both D-CARS and phase-retrieved hyperspectral CARS determine the concentration of unsaturated lipids with comparable accuracy in the fingerprint region, while in the CH stretch region D-CARS provides only a qualitative contrast owing to its non-linear behavior. When analyzing hyperspectral CARS images using the blind factorization into susceptibilities and concentrations of chemical components recently demonstrated by us, we are able to determine vol:vol concentrations of different lipid components and spatially resolve inhomogeneities in lipid composition with superior accuracy compared to state-of-the art ratiometric methods. PMID:24877002
Knee X-ray image analysis method for automated detection of Osteoarthritis
Shamir, Lior; Ling, Shari M.; Scott, William W.; Bos, Angelo; Orlov, Nikita; Macura, Tomasz; Eckley, D. Mark; Ferrucci, Luigi; Goldberg, Ilya G.
2008-01-01
We describe a method for automated detection of radiographic Osteoarthritis (OA) in knee X-ray images. The detection is based on the Kellgren-Lawrence classification grades, which correspond to the different stages of OA severity. The classifier was built using manually classified X-rays, representing the first four KL grades (normal, doubtful, minimal and moderate). Image analysis is performed by first identifying a set of image content descriptors and image transforms that are informative for the detection of OA in the X-rays, and assigning weights to these image features using Fisher scores. Then, a simple weighted nearest neighbor rule is used in order to predict the KL grade to which a given test X-ray sample belongs. The dataset used in the experiment contained 350 X-ray images classified manually by their KL grades. Experimental results show that moderate OA (KL grade 3) and minimal OA (KL grade 2) can be differentiated from normal cases with accuracy of 91.5% and 80.4%, respectively. Doubtful OA (KL grade 1) was detected automatically with a much lower accuracy of 57%. The source code developed and used in this study is available for free download at www.openmicroscopy.org. PMID:19342330
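The Fisher-weighted nearest-neighbour rule described above can be sketched on toy two-class data; the feature values here are hypothetical, not the radiographic image descriptors of the study.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature ratio of between-class to within-class variance."""
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def predict(X_train, y_train, weights, x):
    """Label of the training sample nearest in Fisher-weighted distance."""
    d = ((X_train - x) ** 2 * weights).sum(axis=1)
    return y_train[int(np.argmin(d))]

# Toy data: feature 0 separates the classes, feature 1 is pure noise.
rng = np.random.default_rng(3)
X = np.vstack([np.c_[rng.normal(0.0, 0.3, 50), rng.normal(0.0, 5.0, 50)],
               np.c_[rng.normal(3.0, 0.3, 50), rng.normal(0.0, 5.0, 50)]])
y = np.array([0] * 50 + [1] * 50)
w = fisher_scores(X, y)
label = predict(X, y, w, np.array([2.8, 0.0]))
```

The Fisher weights automatically suppress uninformative features, so the nearest-neighbour distance is dominated by the descriptors that actually discriminate the classes.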
A marker-based watershed method for X-ray image segmentation.
Zhang, Xiaodong; Jia, Fucang; Luo, Suhuai; Liu, Guiying; Hu, Qingmao
2014-03-01
Digital X-ray images are the most frequent modality for both screening and diagnosis in hospitals. To facilitate subsequent analysis such as quantification and computer-aided diagnosis (CAD), it is desirable to exclude the image background. A marker-based watershed segmentation method was proposed to segment the background of X-ray images. The method consisted of six modules: image preprocessing, gradient computation, marker extraction, watershed segmentation from markers, region merging, and background extraction. One hundred clinical direct-radiograph X-ray images were used to validate the method. Manual thresholding and a multiscale gradient-based watershed method were implemented for comparison. The proposed method yielded a Dice coefficient of 0.964±0.069, which was better than that of manual thresholding (0.937±0.119) and that of the multiscale gradient-based watershed method (0.942±0.098). Special means were adopted to decrease the computational cost, including discarding the few pixels with the highest grayscale values via a percentile, calculating the gradient magnitude through simple operations, decreasing the number of markers by appropriate thresholding, and merging regions based on simple grayscale statistics. As a result, the processing time was at most 6 s even for a 3072×3072 image on a Pentium 4 PC with a 2.4 GHz CPU (4 cores) and 2 GB RAM, more than twice as fast as the multiscale gradient-based watershed method. The proposed method could be a potential tool for diagnosis and quantification of X-ray images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
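The core stages of such a pipeline (gradient computation, marker extraction, watershed from markers) can be sketched with SciPy; `watershed_ift` and the synthetic image below stand in for the authors' implementation and clinical data.

```python
import numpy as np
from scipy import ndimage

# Synthetic "radiograph": a bright square object on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 200.0
img += np.random.default_rng(4).normal(0.0, 2.0, img.shape)

# 1) gradient magnitude via Sobel filters, scaled to uint8
grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
grad = (255.0 * grad / grad.max()).astype(np.uint8)

# 2) markers: certain background (image border) and certain foreground (centre)
markers = np.zeros(img.shape, dtype=np.int16)
markers[0, :] = markers[-1, :] = markers[:, 0] = markers[:, -1] = 1
markers[28:36, 28:36] = 2

# 3) watershed from the markers on the gradient image
labels = ndimage.watershed_ift(grad, markers)
foreground = labels == 2
```

The regions flood outward from each marker and meet at the gradient ridge, i.e. the object boundary; the paper's region-merging and thresholding tricks mainly reduce how many markers and regions this step has to handle.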
Novel methods of imaging and analysis for the thermoregulatory sweat test.
Carroll, Michael Sean; Reed, David W; Kuntz, Nancy L; Weese-Mayer, Debra Ellyn
2018-06-07
The thermoregulatory sweat test (TST) can be central to the identification and management of disorders affecting sudomotor function and small sensory and autonomic nerve fibers, but the cumbersome nature of the standard testing protocol has prevented its widespread adoption. A high resolution, quantitative, clean and simple assay of sweating could significantly improve identification and management of these disorders. Images from 89 clinical TSTs were analyzed retrospectively using two novel techniques. First, using the standard indicator powder, skin surface sweat distributions were determined algorithmically for each patient. Second, a fundamentally novel method using thermal imaging of forced evaporative cooling was evaluated through comparison with the standard technique. Correlation and receiver operating characteristic analyses were used to determine the degree of match between these methods, and the potential limits of thermal imaging were examined through cumulative analysis of all studied patients. Algorithmic encoding of sweating and non-sweating regions produces a more objective analysis for clinical decision making. Additionally, results from the forced cooling method correspond well with those from indicator powder imaging, with a correlation across spatial regions of -0.78 (CI: -0.84 to -0.71). The method works similarly across body regions, and frame-by-frame analysis suggests the ability to identify sweating regions within about 1 second of imaging. While algorithmic encoding can enhance the standard sweat testing protocol, thermal imaging with forced evaporative cooling can dramatically improve the TST by making it less time-consuming and more patient-friendly than the current approach.
NASA Astrophysics Data System (ADS)
Nishidate, Izumi; Wiswadarma, Aditya; Hase, Yota; Tanaka, Noriyuki; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa
2011-08-01
In order to visualize melanin and blood concentrations and oxygen saturation in human skin tissue, a simple imaging technique based on multispectral diffuse reflectance images acquired at six wavelengths (500, 520, 540, 560, 580 and 600 nm) was developed. The technique utilizes multiple regression analysis aided by Monte Carlo simulation of diffuse reflectance spectra. Using the absorbance spectrum as a response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors that are deduced numerically in advance, while oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments on skin of the human hand during upper-limb occlusion and of the inner forearm exposed to UV irradiation demonstrated the ability of the method to evaluate physiological reactions of human skin tissue.
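The regression step, fitting an absorbance spectrum as a weighted sum of chromophore extinction spectra, reduces to linear least squares; the extinction coefficients below are made-up illustrative numbers, not tabulated chromophore values.

```python
import numpy as np

# Made-up extinction coefficients at the six wavelengths; illustrative
# numbers only, not real chromophore spectra.
ext_melanin = np.array([11.0, 9.5, 8.2, 7.1, 6.2, 5.4])
ext_hbo2 = np.array([5.1, 6.5, 14.5, 8.7, 13.0, 0.8])
ext_hb = np.array([5.3, 8.0, 12.5, 13.4, 10.9, 2.0])
E = np.column_stack([ext_melanin, ext_hbo2, ext_hb])

# Synthetic "measured" absorbance from known concentrations
# (melanin, oxygenated haemoglobin, deoxygenated haemoglobin).
c_true = np.array([0.8, 0.4, 0.2])
absorbance = E @ c_true

# Multiple regression: absorbance as a weighted sum of extinction spectra.
coeffs, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
oxygen_saturation = coeffs[1] / (coeffs[1] + coeffs[2])
```

In the noise-free toy case the regression recovers the concentrations exactly; the paper's Monte Carlo-derived conversion vectors are what correct such raw coefficients for light scattering in real tissue.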
Dorr, Ricardo; Ozu, Marcelo; Parisi, Mario
2007-04-15
Members of the water channel (aquaporin) family have been identified in central nervous system cells. A classic method to measure membrane water permeability and its regulation is to capture and analyse images of Xenopus laevis oocytes expressing them. Laboratories dedicated to the analysis of motion images usually have powerful equipment valued at thousands of dollars. However, some scientists consider that new approaches are needed to reduce costs in scientific labs, especially in developing countries. The objective of this work is to share a very low-cost hardware and software setup based on a well-selected webcam, a hand-made adapter to a microscope, and the use of free software to measure membrane water permeability in Xenopus oocytes. One of the main purposes of this setup is to maintain a high level of image quality at brief capture intervals (shorter than 70 ms). The presented setup helps to economize without sacrificing image analysis requirements.
Imaging Intratumor Heterogeneity: Role in Therapy Response, Resistance, and Clinical Outcome
O’Connor, James P.B.; Rose, Chris J.; Waterton, John C.; Carano, Richard A.D.; Parker, Geoff J.M.; Jackson, Alan
2014-01-01
Tumors exhibit genomic and phenotypic heterogeneity, which has prognostic significance and may influence response to therapy. Imaging can quantify the spatial variation in architecture and function of individual tumors through quantifying basic biophysical parameters such as density or MRI signal relaxation rate; through measurements of blood flow, hypoxia, metabolism, cell death, and other phenotypic features; and through mapping the spatial distribution of biochemical pathways and cell signaling networks. These methods can establish whether one tumor is more or less heterogeneous than another and can identify sub-regions with differing biology. In this article we review the image analysis methods currently used to quantify spatial heterogeneity within tumors. We discuss how analysis of intratumor heterogeneity can provide benefit over simpler biomarkers such as tumor size and average function. We consider how imaging methods can be integrated with genomic and pathology data, rather than be developed in isolation. Finally, we identify the challenges that must be overcome before measurements of intratumoral heterogeneity can be used routinely to guide patient care. PMID:25421725
Automated tracking of lava lake level using thermal images at Kīlauea Volcano, Hawai’i
Patrick, Matthew R.; Swanson, Don; Orr, Tim R.
2016-01-01
Tracking the level of the lava lake in Halema‘uma‘u Crater, at the summit of Kīlauea Volcano, Hawai’i, is an essential part of monitoring the ongoing eruption and forecasting potentially hazardous changes in activity. We describe a simple automated image processing routine that analyzes continuously-acquired thermal images of the lava lake and measures lava level. The method uses three image segmentation approaches, based on edge detection, short-term change analysis, and composite temperature thresholding, to identify and track the lake margin in the images. These relative measurements from the images are periodically calibrated with laser rangefinder measurements to produce real-time estimates of lake elevation. Continuous, automated tracking of the lava level has been an important tool used by the U.S. Geological Survey’s Hawaiian Volcano Observatory since 2012 in real-time operational monitoring of the volcano and its hazard potential.
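Of the three segmentation approaches mentioned, temperature thresholding is the simplest to illustrate. A toy sketch on a synthetic thermal frame (the threshold value and geometry are assumptions, not the observatory's actual parameters):

```python
import numpy as np

def segment_lake(thermal, t_hot):
    """Temperature thresholding: pixels hotter than t_hot are taken as lake."""
    return thermal > t_hot

def lake_level_row(mask):
    """Row index of the topmost lake pixel - a relative level proxy that a
    laser-rangefinder calibration would convert to absolute elevation."""
    rows = np.where(mask.any(axis=1))[0]
    return int(rows.min()) if rows.size else None

# Synthetic 100x100 thermal frame: cool crater walls, hot lake surface below row 60
frame = np.full((100, 100), 20.0)
frame[60:, 30:70] = 300.0
mask = segment_lake(frame, 100.0)
level = lake_level_row(mask)
```

In the real routine this relative pixel measurement is periodically tied to rangefinder data to yield lake elevation in meters.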
NASA Technical Reports Server (NTRS)
Brickey, David W.; Crowley, James K.; Rowan, Lawrence C.
1987-01-01
Airborne Imaging Spectrometer-1 (AIS-1) data were obtained for an area of amphibolite-grade metamorphic rocks that has moderate rangeland vegetation cover. Although rock exposures are sparse and patchy at this site, soils are visible through the vegetation and typically comprise 20 to 30 percent of the surface area. Channel-averaged AIS-1 data were used to produce band-depth images for diagnostic soil and rock absorption bands. Sets of three such images were combined to produce color-composite band-depth images. This relatively simple approach did not require extensive calibration efforts and was effective for discerning a number of spectrally distinctive rocks and soils, including soils having high talc concentrations. The results show that the high spectral and spatial resolution of AIS-1 and future sensors holds considerable promise for mapping mineral variations in soil, even in moderately vegetated areas.
Classification Comparisons Between Compact Polarimetric and Quad-Pol SAR Imagery
NASA Astrophysics Data System (ADS)
Souissi, Boularbah; Doulgeris, Anthony P.; Eltoft, Torbjørn
2015-04-01
Recent interest in dual-pol SAR systems has led to a novel approach, the so-called compact polarimetric (CP) imaging mode, which attempts to reconstruct fully polarimetric information based on a few simple assumptions. In this work, the CP image is simulated from the full quad-pol (QP) image. We present here an initial comparison of polarimetric information content between the QP and CP imaging modes. The analysis of multi-look polarimetric covariance matrix data uses an automated statistical clustering method based upon the expectation maximization (EM) algorithm for finite mixture modeling, using the complex Wishart probability density function. Our results show that the QP and CP modes differ in some characteristics. The classification is demonstrated using E-SAR and Radarsat-2 polarimetric SAR images acquired over DLR Oberpfaffenhofen in Germany and Algiers in Algeria, respectively.
Digital image classification with the help of artificial neural network by simple histogram.
Dey, Pranab; Banerjee, Nirmalya; Kaur, Rajwant
2016-01-01
Visual image classification is a great challenge to the cytopathologist in routine day-to-day work. Artificial neural network (ANN) may be helpful in this matter. In this study, we have tried to classify digital images of malignant and benign cells in effusion cytology smears with the help of simple histogram data and an ANN. A total of 404 digital images consisting of 168 benign cells and 236 malignant cells were selected for this study. The simple histogram data were extracted from these digital images and an ANN was constructed with the help of Neurointelligence software [Alyuda Neurointelligence 2.2 (577), Cupertino, California, USA]. The network architecture was 6-3-1. The images were divided into a training set (281), validation set (63), and test set (60). The online backpropagation training algorithm was used for this study. A total of 10,000 iterations were done to train the ANN system at a speed of 609.81 iterations per second. After adequate training of this ANN model, the system was able to identify all 34 malignant cell images and 24 out of 26 benign cells. The ANN model can be used for the identification of individual malignant cells with the help of simple histogram data. This study will be helpful in the future to identify malignant cells in unknown situations.
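A 6-3-1 network trained with online backpropagation, as described, can be reproduced in a few lines of NumPy. The 6-bin "histogram" features below are synthetic stand-ins, not cytology data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 6-bin histogram features: two well-separated classes
benign = rng.normal([5, 4, 3, 2, 1, 0.5], 0.3, size=(60, 6))
malignant = rng.normal([1, 2, 3, 4, 5, 5.5], 0.3, size=(60, 6))
X = np.vstack([benign, malignant])
X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize features
y = np.array([0.0] * 60 + [1.0] * 60).reshape(-1, 1)

# 6-3-1 architecture, sigmoid units, online (per-sample) backpropagation
W1 = rng.normal(scale=0.5, size=(6, 3)); b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(3, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(200):                               # training epochs
    for i in rng.permutation(len(X)):
        h = sig(X[i] @ W1 + b1)                    # hidden layer (3 units)
        out = sig(h @ W2 + b2)                     # output unit
        d2 = (out - y[i]) * out * (1 - out)        # output delta
        d1 = (d2 @ W2.T) * h * (1 - h)             # hidden delta
        W2 -= lr * np.outer(h, d2); b2 -= lr * d2
        W1 -= lr * np.outer(X[i], d1); b1 -= lr * d1

pred = (sig(sig(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
accuracy = float((pred == y).mean())
```

On cleanly separable toy clusters like these, the small network reaches near-perfect training accuracy; real histogram features from smears are, of course, far noisier.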
A programmable light engine for quantitative single molecule TIRF and HILO imaging.
van 't Hoff, Marcel; de Sars, Vincent; Oheim, Martin
2008-10-27
We report on a simple yet powerful implementation of objective-type total internal reflection fluorescence (TIRF) and highly inclined and laminated optical sheet (HILO, a type of dark-field) illumination. Instead of focusing the illuminating laser beam to a single spot close to the edge of the microscope objective, we scan the focused spot in a circular orbit during the acquisition of a fluorescence image, thereby illuminating the sample from various directions. We measure parameters relevant for quantitative image analysis during fluorescence image acquisition by capturing an image of the excitation light distribution in an equivalent objective back focal plane (BFP). Operating at scan rates above 1 MHz, our programmable light engine allows directional averaging by circularly spinning the spot even for sub-millisecond exposure times. We show that restoring the symmetry of TIRF/HILO illumination reduces scattering and produces an evenly lit field of view that affords on-line analysis of evanescent-field-excited fluorescence without pre-processing. Utilizing crossed acousto-optical deflectors, our device generates arbitrary intensity profiles in the BFP, permitting variable-angle, multi-color illumination, or rapid exchange of objective lenses.
Imaging intratumor heterogeneity: role in therapy response, resistance, and clinical outcome.
O'Connor, James P B; Rose, Chris J; Waterton, John C; Carano, Richard A D; Parker, Geoff J M; Jackson, Alan
2015-01-15
Tumors exhibit genomic and phenotypic heterogeneity, which has prognostic significance and may influence response to therapy. Imaging can quantify the spatial variation in architecture and function of individual tumors through quantifying basic biophysical parameters such as CT density or MRI signal relaxation rate; through measurements of blood flow, hypoxia, metabolism, cell death, and other phenotypic features; and through mapping the spatial distribution of biochemical pathways and cell signaling networks using PET, MRI, and other emerging molecular imaging techniques. These methods can establish whether one tumor is more or less heterogeneous than another and can identify subregions with differing biology. In this article, we review the image analysis methods currently used to quantify spatial heterogeneity within tumors. We discuss how analysis of intratumor heterogeneity can provide benefit over simpler biomarkers such as tumor size and average function. We consider how imaging methods can be integrated with genomic and pathology data, instead of being developed in isolation. Finally, we identify the challenges that must be overcome before measurements of intratumoral heterogeneity can be used routinely to guide patient care. ©2014 American Association for Cancer Research.
Kim, Yong-Il; Im, Hyung-Jun; Paeng, Jin Chul; Lee, Jae Sung; Eo, Jae Seon; Kim, Dong Hyun; Kim, Euishin E; Kang, Keon Wook; Chung, June-Key; Lee, Dong Soo
2012-12-01
(18)F-FP-CIT positron emission tomography (PET) is an effective imaging method for dopamine transporters. In usual clinical practice, (18)F-FP-CIT PET is analyzed visually or quantified using manual delineation of a volume of interest (VOI) for the striatum. In this study, we proposed and validated two simple quantitative methods based on automatic VOI delineation using statistical probabilistic anatomical mapping (SPAM) and isocontour margin setting. Seventy-five (18)F-FP-CIT PET images acquired in routine clinical practice were used for this study. A study-specific image template was made and the subject images were normalized to the template. Afterwards, uptakes in the striatal regions and cerebellum were quantified using probabilistic VOIs based on SPAM. A quantitative parameter, QSPAM, was calculated to simulate binding potential. Additionally, the functional volume of each striatal region and its uptake were measured in a VOI delineated automatically by isocontour margin setting. The uptake-volume product (QUVP) was calculated for each striatal region. QSPAM and QUVP were compared with visual grading, and the influence of cerebral atrophy on the measurements was tested. Image analyses were successful in all cases. Both QSPAM and QUVP were significantly different according to visual grading (P < 0.001). The agreements of QUVP or QSPAM with visual grading were slight to fair for the caudate nucleus (κ = 0.421 and 0.291, respectively) and good to perfect for the putamen (κ = 0.663 and 0.607, respectively). Also, QSPAM and QUVP had a significant correlation with each other (P < 0.001). Cerebral atrophy made a significant difference in QSPAM and QUVP of caudate nucleus regions with decreased (18)F-FP-CIT uptake. Simple quantitative measurements of QSPAM and QUVP showed acceptable agreement with visual grading. 
Although QSPAM in some groups may be influenced by cerebral atrophy, these simple methods are expected to be effective in the quantitative analysis of (18)F-FP-CIT PET in usual clinical practice.
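The isocontour-based uptake-volume product can be sketched as follows. The 50% margin and the toy image are assumptions for illustration; the study's exact margin definition may differ:

```python
import numpy as np

def quvp(image, margin=0.5, voxel_vol=1.0):
    """Delineate a VOI at `margin` x max uptake (isocontour margin setting),
    then return (functional volume, mean uptake, uptake-volume product)."""
    voi = image >= margin * image.max()
    volume = float(voi.sum()) * voxel_vol
    mean_uptake = float(image[voi].mean())
    return volume, mean_uptake, volume * mean_uptake

img = np.zeros((10, 10))
img[4:6, 4:6] = 10.0   # "striatal" hot region of 4 voxels
img[0, 0] = 2.0        # background activity below the 50% isocontour
vol, uptake, q = quvp(img)
```

Here the background voxel is excluded by the isocontour, so the VOI captures only the hot region.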
Software manual for operating particle displacement tracking data acquisition and reduction system
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1991-01-01
The software manual is presented. The necessary steps required to record, analyze, and reduce Particle Image Velocimetry (PIV) data using the Particle Displacement Tracking (PDT) technique are described. The new PDT system is an all-electronic technique employing a CCD video camera and a large-memory-buffer frame-grabber board to record low-velocity (less than or equal to 20 cm/s) flows. Using a simple encoding scheme, a time sequence of single-exposure images is time-coded into a single image and then processed to track particle displacements and determine 2-D velocity vectors. All the PDT data acquisition, analysis, and data reduction software is written to run on an 80386 PC.
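The displacement-to-velocity step of particle tracking reduces to centroid matching between exposures. A minimal sketch with a single synthetic particle (the pixel size and frame timing are made-up values):

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (row, col) of a particle image."""
    ys, xs = np.indices(img.shape)
    m = img.sum()
    return np.array([(ys * img).sum() / m, (xs * img).sum() / m])

def velocity(frame_a, frame_b, dt, pixel_size):
    """2-D velocity from the particle displacement between two exposures."""
    d = centroid(frame_b) - centroid(frame_a)   # displacement in pixels
    return d * pixel_size / dt                  # physical units, e.g. cm/s

a = np.zeros((32, 32)); a[10, 10] = 1.0
b = np.zeros((32, 32)); b[10, 14] = 1.0        # particle moved 4 px in x
v = velocity(a, b, dt=0.01, pixel_size=0.005)  # 10 ms apart, 0.005 cm/px
```

The real PDT system decodes many such particle pairs from a single time-coded image to build a full 2-D velocity field.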
NASA Astrophysics Data System (ADS)
Semenishchev, E. A.; Marchuk, V. I.; Fedosov, V. P.; Stradanchenko, S. G.; Ruslyakov, D. V.
2015-05-01
This work studies a computationally simple method of saliency map calculation. Research in this field has received increasing interest owing to the use of complex techniques in portable devices. A saliency map allows increasing the speed of many subsequent algorithms and reducing their computational complexity. The proposed saliency map detection method is based on both image-space and frequency-space analysis. Several test images from the Kodak dataset with different levels of detail are considered in this paper and demonstrate the effectiveness of the proposed approach. We present experiments showing that the proposed method provides better results than the Saliency Toolbox framework in terms of accuracy and speed.
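One well-known method that combines image- and frequency-space analysis is the spectral-residual approach of Hou and Zhang (2007); it is shown here as an illustration of the idea, not as this paper's exact algorithm:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spectral_residual_saliency(img):
    """Spectral-residual saliency: keep the FFT phase, subtract a smoothed
    copy of the log-amplitude spectrum, and invert back to image space."""
    F = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(F))
    phase = np.angle(F)
    residual = log_amp - uniform_filter(log_amp, size=3)  # "unexpected" spectrum
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

# A lone bright pixel on a dark field should dominate the saliency map
img = np.zeros((64, 64))
img[20, 20] = 1.0
sal = spectral_residual_saliency(img)
peak = np.unravel_index(sal.argmax(), sal.shape)
```

The appeal of this family of methods is exactly what the abstract emphasizes: a few FFTs and a smoothing filter, cheap enough for portable devices.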
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan
2017-03-01
In this paper, we describe an enhanced DICOM Secondary Capture (SC) that integrates Image Quantification (IQ) results, Regions of Interest (ROIs), and Time Activity Curves (TACs) with screen shots by embedding extra medical imaging information into a standard DICOM header. A software toolkit, DICOM IQSC, has been developed to implement this SC-centered integration of quantitative analysis information for routine nuclear medicine practice. Preliminary experiments show that the DICOM IQSC method is simple and easy to implement, seamlessly integrating post-processing workstations with PACS for archiving and retrieving IQ information. Additional DICOM IQSC applications in routine nuclear medicine and clinical research are also discussed.
Quantitative analysis of the chromatin of lymphocytes: an assay on comparative structuralism.
Meyer, F
1980-01-01
With 26 letters we can form all the words we use, and with a few words it is possible to form an infinite number of different meaningful sentences. In our case, the letters will be a few simple neighborhood image transformations and area measurements. The paper shows how, by iterating these transformations, it is possible to obtain a good quantitative description of the nuclear structure of Feulgen-stained lymphocytes (CLL and normal). The fact that we restricted ourselves to a small number of image transformations made it possible to construct an image analysis system (TAS) able to do these transformations very quickly. We will see, successively, how to segment the nucleus itself, the chromatin, and the interchromatinic channels, how openings and closings lead to size and spatial distribution curves, and how skeletons may be used for measuring the lengths of interchromatinic channels.
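The size-distribution idea mentioned above, openings with structuring elements of growing radius (a granulometry, or pattern spectrum), can be sketched as follows; the synthetic clumps stand in for chromatin blobs:

```python
import numpy as np
from scipy import ndimage

def disk(r):
    """Digital disk structuring element of radius r."""
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return (x * x + y * y) <= r * r

def granulometry(mask, max_radius=8):
    """Pattern spectrum by successive openings: entry i is the area removed
    when the opening radius grows from i to i + 1."""
    areas = [int(mask.sum())]
    for r in range(1, max_radius + 1):
        areas.append(int(ndimage.binary_opening(mask, structure=disk(r)).sum()))
    return -np.diff(areas)

# Two "chromatin clumps": a small disk (radius 2) and a large one (radius 5)
yy, xx = np.indices((64, 64))
clumps = ((yy - 16) ** 2 + (xx - 16) ** 2 <= 2 ** 2) | \
         ((yy - 40) ** 2 + (xx - 40) ** 2 <= 5 ** 2)
spectrum = granulometry(clumps)
```

Each clump contributes a bump in the spectrum at the scale where the opening finally erases it, which is exactly how openings yield size-distribution curves.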
Digital repeat analysis; setup and operation.
Nol, J; Isouard, G; Mirecki, J
2006-06-01
Since the emergence of digital imaging, there have been questions about the necessity of continuing reject analysis programs in imaging departments to evaluate performance and quality. As a marketing strategy, most suppliers of digital technology focus on the supremacy of the technology and its ability to reduce the number of repeats, resulting in lower radiation doses to patients and increased productivity in the department. On the other hand, quality assurance radiographers and radiologists believe that repeats are mainly related to positioning skills, and that repeat analysis is the main tool for planning the training needed to up-skill radiographers. A comparative study between conventional and digital imaging was undertaken to compare outcomes and evaluate the need for reject analysis. However, with digital technology still in its early stages of development, setting up a credible reject analysis program became the major task of the study. It took the department, with the help of the suppliers of the computed radiography reader and the picture archiving and communication system, over 2 years of software enhancement to build a reliable digital repeat analysis system. The results were supportive of both philosophies: the number of repeats resulting from exposure factors was reduced dramatically; however, the percentage of repeats resulting from positioning skills increased slightly, for the simple reason that some rejects in the conventional system qualifying for both exposure and positioning errors were classified as exposure errors. The ability to digitally adjust dark or light images reclassified some of those images as positioning errors.
Analysis of line structure in handwritten documents using the Hough transform
NASA Astrophysics Data System (ADS)
Ball, Gregory R.; Kasiviswanathan, Harish; Srihari, Sargur N.; Narayanan, Aswin
2010-01-01
In the analysis of handwriting in documents a central task is that of determining line structure of the text, e.g., number of text lines, location of their starting and end-points, line-width, etc. While simple methods can handle ideal images, real world documents have complexities such as overlapping line structure, variable line spacing, line skew, document skew, noisy or degraded images etc. This paper explores the application of the Hough transform method to handwritten documents with the goal of automatically determining global document line structure in a top-down manner which can then be used in conjunction with a bottom-up method such as connected component analysis. The performance is significantly better than other top-down methods, such as the projection profile method. In addition, we evaluate the performance of skew analysis by the Hough transform on handwritten documents.
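A minimal Hough accumulator makes the approach concrete: every ink pixel votes for all (theta, rho) lines through it, and text lines appear as accumulator peaks. A toy sketch on a synthetic horizontal line (real documents would first be binarized and the peaks post-processed):

```python
import numpy as np

def hough_lines(binary, n_theta=180):
    """Minimal Hough transform: each foreground pixel votes for every
    (theta, rho) line through it; returns accumulator and axes."""
    h, w = binary.shape
    thetas = np.deg2rad(np.arange(n_theta) - 90)       # -90 deg .. 89 deg
    diag = int(np.ceil(np.hypot(h, w)))
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    ys, xs = np.nonzero(binary)
    for j, th in enumerate(thetas):
        r = np.round(xs * np.cos(th) + ys * np.sin(th)).astype(int) + diag
        np.add.at(acc, (r, j), 1)                      # accumulate votes
    return acc, thetas, rhos

# Synthetic "text line": a horizontal run of 55 ink pixels on row 30
img = np.zeros((64, 64), dtype=bool)
img[30, 5:60] = True
acc, thetas, rhos = hough_lines(img)
r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
line_theta = np.rad2deg(thetas[t_idx])   # -90 deg in this convention: horizontal
line_rho = rhos[r_idx]                   # -30: the line's (negated) row index
```

Peak angles away from the horizontal axis directly measure line skew, which is how the same transform supports the skew analysis evaluated in the paper.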
Helioviewer.org: Simple Solar and Heliospheric Data Visualization
NASA Astrophysics Data System (ADS)
Hughitt, V. K.; Ireland, J.; Mueller, D.
2011-12-01
Helioviewer.org is a free and open-source web application for exploring solar physics data in a simple and intuitive manner. Over the past several years, Helioviewer.org has enabled thousands of users from across the globe to explore the inner heliosphere, providing access to over ten million images from the SOHO, SDO, and STEREO missions. While Helioviewer.org has seen a surge in use by the public in recent months, it is still ultimately a science tool. The newest version of Helioviewer.org provides access to science-quality data for all available images through the Virtual Solar Observatory (VSO). In addition to providing a powerful platform for browsing heterogeneous sets of solar data, Helioviewer.org also seeks to be as flexible and extensible as possible, providing access to much of its functionality via a simple Application Programming Interface (API). Recently, the Helioviewer.org API was used for two such applications: a Wordpress plugin, and a Python library for solar physics data analysis (SunPy). These applications are discussed and examples of API usage are provided. Finally, Helioviewer.org is undergoing continual development, with new features being added on a regular basis. Recent updates to Helioviewer.org are discussed, along with a preview of things to come.
Single underwater image enhancement based on color cast removal and visibility restoration
NASA Astrophysics Data System (ADS)
Li, Chongyi; Guo, Jichang; Wang, Bo; Cong, Runmin; Zhang, Yan; Wang, Jian
2016-05-01
Images taken under underwater conditions usually have color cast and serious loss of contrast and visibility. Degraded underwater images are inconvenient for observation and analysis. In order to address these problems, an underwater image-enhancement method is proposed. A simple yet effective underwater image color cast removal algorithm is first presented based on optimization theory. Then, based on the minimum information loss principle and the inherent relationship of the medium transmission maps of the three color channels in an underwater image, an effective visibility restoration algorithm is proposed to recover the visibility, contrast, and natural appearance of degraded underwater images. To evaluate the performance of the proposed method, qualitative comparison, quantitative comparison, and color accuracy tests are conducted. Experimental results demonstrate that the proposed method can effectively remove color cast, improve contrast and visibility, and recover the natural appearance of degraded underwater images. Additionally, the proposed method is comparable to and even better than several state-of-the-art methods.
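As a stand-in for the optimization-based color-cast removal, the classical gray-world balance conveys the flavor of the step (this is an illustrative substitute, not the authors' algorithm):

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: rescale each channel so all three
    channel means match the global mean, removing a global color cast."""
    img = img.astype(float)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means
    return np.clip(img * gain, 0, 255)

# Simulate an underwater cast: red and blue attenuated relative to green
rng = np.random.default_rng(1)
scene = rng.uniform(60, 190, size=(32, 32, 3))
cast = scene * np.array([0.6, 1.0, 0.8])
balanced = gray_world(cast)
means = balanced.reshape(-1, 3).mean(axis=0)
```

After balancing, the three channel means coincide; the paper's optimization-based algorithm additionally constrains information loss, which a plain gain rescale does not.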
Liu, Chaocheng; Desai, Shashwat; Krebs, Lynette D; Kirkland, Scott W; Keto-Lambert, Diana; Rowe, Brian H
2018-01-08
Low back pain (LBP) is an extremely frequent reason for patients to present to an emergency department (ED). Despite evidence against the utility of imaging, simple and advanced imaging (i.e., computed tomography [CT], magnetic resonance imaging) for patients with LBP has become increasingly frequent in the ED. The objective of this review was to identify and examine the effectiveness of interventions aimed at reducing image ordering in the ED for LBP patients. A protocol was developed a priori, following the PRISMA guidelines, and registered with PROSPERO. Six bibliographic databases (including MEDLINE, EMBASE, EBM Reviews, SCOPUS, CINAHL, and Dissertation Abstracts) and the gray literature were searched. Comparative studies assessing interventions that targeted image ordering in the ED for adult patients with LBP were eligible for inclusion. Two reviewers independently screened study eligibility and completed data extraction. Study quality was completed independently by two reviewers using the before-after quality assessment checklist, with a third-party mediator resolving any differences. Due to a limited number of studies and significant heterogeneity, only a descriptive analysis was performed. The search yielded 603 unique citations of which a total of five before-after studies were included. Quality assessment identified potential biases relating to comparability between the pre- and postintervention groups, reliable assessment of outcomes, and an overall lack of information on the intervention (i.e., time point, description, intervention data collection). The type of interventions utilized included clinical decision support tools, clinical practice guidelines, a knowledge translation initiative, and multidisciplinary protocols. 
Overall, four studies reported a decrease in the relative percentage change in imaging in a specific image modality (22.7%-47.4%) following implementation of the interventions; however, one study reported a 35% increase in patient referrals to radiography, while another study reported a subsequent 15.4% increase in referrals to CT and myelography after implementing an intervention which reduced referrals for simple radiography. While imaging of LBP has been identified as a key area of imaging overuse (e.g., Choosing Wisely recommendation), evidence on interventions to reduce image ordering for ED patients with LBP is sparse. There is some evidence to suggest that interventions can reduce the use of simple imaging in LBP in the ED; however, a shift in imaging modality has also been demonstrated. Additional studies employing higher-quality methods and measuring intervention fidelity are strongly recommended to further explore the potential of ED-based interventions to reduce image ordering for this patient population. © 2018 by the Society for Academic Emergency Medicine.
ERIC Educational Resources Information Center
Olson, Joel A.; Nordell, Karen J.; Chesnik, Marla A.; Landis, Clark R.; Ellis, Arthur B.; Rzchowski, M. S.; Condren, S. Michael; Lisensky, George C.
2000-01-01
Describes a set of simple, inexpensive, classical demonstrations of nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI) principles that illustrate the resonance condition associated with magnetic dipoles and the dependence of the resonance frequency on environment. (WRM)
Häggström, Ida; Beattie, Bradley J; Schmidtlein, C Ross
2016-06-01
To develop and evaluate a fast and simple tool called dpetstep (Dynamic PET Simulator of Tracers via Emission Projection) for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and for evaluating the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. The tool was developed in MATLAB using both new and previously reported modules of petstep (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user-specified method, settings, and corrections. Reconstructed images were compared to MC data and to simple Gaussian-noised time activity curves (GAUSS). dpetstep was 8000 times faster than MC. Dynamic images from dpetstep had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dpetstep and MC images was the same, while GAUSS differed by 3 percentage points. Noise profiles in dpetstep images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dpetstep images and noise properties agreed better with MC. The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. They believe dpetstep to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models it may not be suitable for studies investigating these phenomena. 
dpetstep can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
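The "GAUSS" comparison curves, analytic time activity curves with added Gaussian noise, can be sketched as follows; the one-tissue-compartment model and the noise scaling are illustrative assumptions, not dpetstep's internals:

```python
import numpy as np

rng = np.random.default_rng(42)

def tissue_tac(t, K1=0.6, k2=0.15, Cp=1.0):
    """Toy one-tissue-compartment response to a constant plasma input:
    C(t) = (K1*Cp/k2) * (1 - exp(-k2*t)), rising toward K1*Cp/k2."""
    return (K1 * Cp / k2) * (1.0 - np.exp(-k2 * t))

def gauss_noised(tac_vals, frame_len, scale=0.05):
    """'GAUSS'-style curve: Gaussian noise with std ~ sqrt(activity / frame
    duration), mimicking count-limited statistics."""
    sd = scale * np.sqrt(np.maximum(tac_vals, 0.0) / frame_len)
    return tac_vals + rng.normal(0.0, sd)

t = np.arange(1, 61, dtype=float)          # 60 one-minute frames
clean = tissue_tac(t)
noisy = gauss_noised(clean, frame_len=1.0)
```

Such curves skip the projection, scatter, and reconstruction steps entirely, which is why their noise properties track MC less faithfully than dpetstep's projection-based simulation.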
Brown, Ryan M; Meah, Christopher J; Heath, Victoria L; Styles, Iain B; Bicknell, Roy
2016-01-01
Angiogenesis involves the generation of new blood vessels from the existing vasculature and is dependent on many growth factors and signaling events. In vivo angiogenesis is dynamic and complex, meaning assays are commonly utilized to explore specific targets for research into this area. Tube-forming assays offer an excellent overview of the molecular processes in angiogenesis. The Matrigel tube forming assay is a simple-to-implement but powerful tool for identifying biomolecules involved in angiogenesis. A detailed experimental protocol on the implementation of the assay is described in conjunction with an in-depth review of methods that can be applied to the analysis of the tube formation. In addition, an ImageJ plug-in is presented which allows automatic quantification of tube images reducing analysis times while removing user bias and subjectivity.
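One metric tube-formation analysis typically reports is the mesh (loop) count, which reduces to counting enclosed background regions in the segmented tube mask. A sketch of that counting step (not the ImageJ plug-in's actual code):

```python
import numpy as np
from scipy import ndimage

def count_meshes(mask):
    """Meshes = background regions fully enclosed by tubes: fill the holes,
    subtract the original mask, and count connected components."""
    filled = ndimage.binary_fill_holes(mask)
    holes = filled & ~mask
    return ndimage.label(holes)[1]

# Two synthetic "tube loops": annuli with inner radius 8, outer radius 12
yy, xx = np.indices((80, 80))
ring1 = (np.hypot(yy - 25, xx - 25) <= 12) & (np.hypot(yy - 25, xx - 25) >= 8)
ring2 = (np.hypot(yy - 55, xx - 55) <= 12) & (np.hypot(yy - 55, xx - 55) >= 8)
tubes = ring1 | ring2
n_meshes = count_meshes(tubes)
```

Automating counts like this is what removes the user bias and subjectivity that manual scoring of tube images introduces.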
Maui Space Surveillance System Satellite Categorization Laboratory
NASA Astrophysics Data System (ADS)
Deiotte, R.; Guyote, M.; Kelecy, T.; Hall, D.; Africano, J.; Kervin, P.
The MSSS satellite categorization laboratory is a fusion of robotics and digital imaging processes that aims to decompose satellite photometric characteristics and behavior in a controlled setting. By combining a robot, light source, and camera to acquire non-resolved images of a model satellite, detailed photometric analyses can be performed to extract relevant information about shape features, elemental makeup, and ultimately attitude and function. In the laboratory setting, a detailed analysis can be done on any type of material or design, and the results cataloged in a database that will facilitate object identification by "curve-fitting" individual elements in the basis set to observational data that might otherwise be unidentifiable. Currently, an ST-Robotics five-degree-of-freedom robotic arm, a collimated light source, and a non-focused Apogee camera have been integrated into a MATLAB-based software package that facilitates automatic data acquisition and analysis. Efforts to date have been aimed at construction of the lab as well as validation and verification with simple geometric objects. Simple tests on spheres, cubes, and simple satellites show promising results that could lead to a much better understanding of non-resolvable space object characteristics. This paper presents a description of the laboratory configuration and validation test results, with emphasis on the non-resolved photometric characteristics of a variety of object shapes, spin dynamics, and orientations. The future vision, utility, and benefits of the laboratory to the SSA community as a whole are also discussed.
ERIC Educational Resources Information Center
Shosted, Ryan; Hualde, Jose Ignacio; Scarpace, Daniel
2012-01-01
Are palatal consonants articulated by multiple tongue gestures (coronal and dorsal) or by a single gesture that brings the tongue into contact with the palate at several places of articulation? The lenition of palatal consonants (resulting in approximants) has been presented as evidence that palatals are simple, not complex: When reduced, they do…
Hoshino, Hiromitsu; Higuchi, Tetsuya; Achmad, Arifudin; Taketomi-Takahashi, Ayako; Fujimaki, Hiroya; Tsushima, Yoshito
2016-01-01
We developed a new quantitative interpretation technique of radioisotope cisternography (RIC) for the diagnosis of spontaneous cerebrospinal fluid hypovolemia (SCH). RIC studies performed for suspected SCH were evaluated. (111)In-DTPA RIC images were taken at 0, 1, 3, 6, and 24 h after radioisotope injection following the current protocol. Regions of interest (ROIs) were selected on the 3-h images to include the brain, spine, bladder, or the whole body. The accumulative radioactivity counts were calculated for quantitative analysis. Final diagnoses of SCH were established based on the diagnostic criteria recently proposed by Schievink and colleagues. Thirty-five patients were included. Twenty-one (60.0%) patients were diagnosed as having SCH according to the Schievink criteria. On the 3-h images, the direct cerebrospinal fluid leakage sign was detected in nine of 21 SCH patients (42.9%), as well as in three patients with suspected iatrogenic leakage. Compared to non-SCH patients, SCH patients showed higher bladder accumulation on the 3-h images (P = 0.0002), and higher brain clearance between the 6- and 24-h images (P < 0.0001). In particular, the 24-h brain clearance was more conclusive for the diagnosis than the 24-h whole-cistern clearance. The combination of the direct sign and 24-h brain accumulation resulted in 100% accuracy in the 32 patients in whom iatrogenic leakage was not observed. The 1- and 6-h images did not provide any additional information in any patients. A new simple ROI setting method, in which only the 3-h whole-body and 24-h brain images are necessary, was sufficient to diagnose SCH.
Björn, Lars Olof; Li, Shaoshan
2013-10-01
Solar energy reaching plants is either reflected or absorbed; absorbed energy drives photosynthesis or is dissipated as fluorescence or heat. Measurements of fluorescence changes have been used for monitoring processes associated with photosynthesis. A simple method to follow changes in leaf fluorescence and leaf reflectance associated with nonphotochemical quenching and light acclimation of leaves is described. The main equipment needed consists of a green-light-emitting laser pointer, a digital camera, and a personal computer equipped with the camera acquisition software and the programs ImageJ and Excel. Otherwise, only commonly available, inexpensive materials are required.
Rozen, Warren Matthew; Spychal, Robert T.; Hunter-Smith, David J.
2016-01-01
Background Accurate volumetric analysis is an essential component of preoperative planning in both reconstructive and aesthetic breast procedures toward achieving symmetry and satisfactory patient outcomes. Numerous comparative studies and reviews of individual techniques have been reported. However, a unifying review of all techniques comparing their accuracy, reliability, and practicality has been lacking. Methods A review of the published English literature dating from 1950 to 2015 using databases, such as PubMed, Medline, Web of Science, and EMBASE, was undertaken. Results Since Bouman's first description of the water displacement method, a range of volumetric assessment techniques have been described: thermoplastic casting, direct anthropomorphic measurement, two-dimensional (2D) imaging, and computed tomography (CT)/magnetic resonance imaging (MRI) scans. However, most have been unreliable, difficult to execute, and of limited practicability. The introduction of 3D surface imaging has revolutionized the field due to its ease of use, speed, accuracy, and reliability. However, its widespread use has been limited by its high cost and a lack of high-level evidence. Recent developments have unveiled the first web-based 3D surface imaging program, 4D imaging, and 3D printing. Conclusions Despite its importance, an accurate, reliable, and simple breast volumetric analysis tool had been elusive until the introduction of 3D surface imaging technology. However, its high cost has limited its wide usage. Novel adjunct technologies, such as web-based 3D surface imaging programs, 4D imaging, and 3D printing, appear promising. PMID:27047788
Ontology-aided feature correlation for multi-modal urban sensing
NASA Astrophysics Data System (ADS)
Misra, Archan; Lantra, Zaman; Jayarajah, Kasthuri
2016-05-01
The paper explores the use of correlation across features extracted from different sensing channels to help in urban situational understanding. We use real-world datasets to show how such correlation can improve the accuracy of detection of city-wide events by combining metadata analysis with image analysis of Instagram content. We demonstrate this through a case study on the Singapore Haze. We show that simple ontological relationships and reasoning can significantly help in automating such correlation-based understanding of transient urban events.
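The cross-channel correlation step described above can be illustrated with a minimal sketch; the hourly series below (haze-tagged post counts vs. a pollution index) are invented for illustration and are not the paper's Singapore datasets:

```python
import numpy as np

def pearson_corr(x, y):
    """Pearson correlation between two equal-length feature series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical hourly series: haze-tagged Instagram post counts vs. a
# pollutant standards index reading for the same hours.
posts = [3, 5, 9, 14, 12, 7, 4]
psi = [40, 55, 80, 110, 100, 70, 50]
r = pearson_corr(posts, psi)  # strong co-movement across channels
```

A high correlation between independently sensed channels is the kind of evidence the ontology-aided reasoning can then attribute to a single transient urban event.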
NASA Astrophysics Data System (ADS)
Sebesta, Mikael; Egelberg, Peter J.; Langberg, Anders; Lindskov, Jens-Henrik; Alm, Kersti; Janicke, Birgit
2016-03-01
Live-cell imaging enables studying dynamic cellular processes that cannot be visualized in fixed-cell assays. An increasing number of scientists in academia and the pharmaceutical industry are choosing live-cell analysis over, or in addition to, traditional fixed-cell assays. We have developed a time-lapse, label-free imaging cytometer, HoloMonitor M4. HoloMonitor M4 helps researchers overcome inherent disadvantages of fluorescent analysis, specifically the effects of chemical labels or genetic modifications, which can alter cellular behavior. Additionally, label-free analysis is simple and eliminates the costs associated with staining procedures. The underlying technology is based on digital off-axis holography. While multiple alternatives exist for this type of analysis, we prioritized our developments to achieve the following: a) an all-inclusive system - hardware and sophisticated cytometric analysis software; b) ease of use, enabling utilization of the instrument by expert- and entry-level researchers alike; c) validated quantitative assay end-points tracked over time, such as optical path length shift, optical volume, and multiple derived imaging parameters; d) reliable digital autofocus; e) robust long-term operation in the incubator environment; f) high throughput and walk-away capability; and finally g) data management suitable for single- and multi-user networks. We provide examples of HoloMonitor applications, including label-free cell viability measurement and monitoring of cell cycle phase distribution.
NASA Astrophysics Data System (ADS)
Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika
2015-09-01
In the age of a modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative approach using visible-spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics to images obtained in the visible spectrum using a smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of iris color and pigmentation. Are the images obtained from a smartphone's camera of sufficient quality even for dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible-light images. To the best of our knowledge, this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using a smartphone's flashlight, together with the application of commercial off-the-shelf (COTS) iris recognition methods.
Anderson, Beth M.; Stevens, Michael C.; Glahn, David C.; Assaf, Michal; Pearlson, Godfrey D.
2013-01-01
We present a modular, high performance, open-source database system that incorporates popular neuroimaging database features with novel peer-to-peer sharing, and a simple installation. An increasing number of imaging centers have created a massive amount of neuroimaging data since fMRI became popular more than 20 years ago, with much of that data unshared. The Neuroinformatics Database (NiDB) provides a stable platform to store and manipulate neuroimaging data and addresses several of the impediments to data sharing presented by the INCF Task Force on Neuroimaging Datasharing, including 1) motivation to share data, 2) technical issues, and 3) standards development. NiDB solves these problems by 1) minimizing PHI use, providing a cost-effective, simple, locally stored platform, 2) storing and associating all data (including genome) with a subject and creating a peer-to-peer sharing model, and 3) defining a simple, normalized data storage structure that is used in NiDB. NiDB not only simplifies the local storage and analysis of neuroimaging data, but also enables simple sharing of raw data and analysis methods, which may encourage further sharing. PMID:23912507
The objective assessment of experts' and novices' suturing skills using an image analysis program.
Frischknecht, Adam C; Kasten, Steven J; Hamstra, Stanley J; Perkins, Noel C; Gillespie, R Brent; Armstrong, Thomas J; Minter, Rebecca M
2013-02-01
To objectively assess suturing performance using an image analysis program and to provide validity evidence for this assessment method by comparing experts' and novices' performance. In 2009, the authors used an image analysis program to extract objective variables from digital images of suturing end products obtained during a previous study involving third-year medical students (novices) and surgical faculty and residents (experts). Variables included number of stitches, stitch length, total bite size, travel, stitch orientation, total bite-size-to-travel ratio, and symmetry across the incision ratio. The authors compared all variables between groups to detect significant differences and two variables (total bite-size-to-travel ratio and symmetry across the incision ratio) to ideal values. Five experts and 15 novices participated. Experts' and novices' performances differed significantly (P < .05) with large effect sizes attributable to experience (Cohen d > 0.8) for total bite size (P = .009, d = 1.5), travel (P = .045, d = 1.1), total bite-size-to-travel ratio (P < .0001, d = 2.6), stitch orientation (P = .014, d = 1.4), and symmetry across the incision ratio (P = .022, d = 1.3). The authors found that a simple computer algorithm can extract variables from digital images of a running suture and rapidly provide quantitative summative assessment feedback. The significant differences found between groups confirm that this system can discriminate between skill levels. This image analysis program represents a viable training tool for objectively assessing trainees' suturing, a foundational skill for many medical specialties.
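The effect sizes above are reported as Cohen's d. A minimal sketch of the pooled-standard-deviation form, using made-up ratio measurements rather than the study's data:

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled
    sample standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma, mb = statistics.mean(group_a), statistics.mean(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (ma - mb) / pooled_sd

# Hypothetical bite-size-to-travel ratios (not the study's measurements).
experts = [1.9, 2.1, 2.0, 2.2, 1.8]
novices = [1.2, 1.5, 1.4, 1.3, 1.6, 1.1, 1.4, 1.3, 1.5, 1.2]
d = cohens_d(experts, novices)  # d > 0.8 indicates a large effect
```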
Smits, Loek P; Coolen, Bram F; Panno, Maria D; Runge, Jurgen H; Nijhof, Wouter H; Verheij, Joanne; Nieuwdorp, Max; Stoker, Jaap; Beuers, Ulrich H; Nederveen, Aart J; Stroes, Erik S
2016-03-01
To (a) study the optimal timing and dosing for ultrasmall superparamagnetic iron oxide particle (USPIO)-enhanced magnetic resonance (MR) imaging of the liver in nonalcoholic fatty liver disease, (b) evaluate whether hepatic USPIO uptake is decreased in nonalcoholic steatohepatitis (NASH), and (c) study the diagnostic accuracy of USPIO-enhanced MR imaging to distinguish between NASH and simple steatosis. This prospective study was approved by the local institutional review board, and informed consent was obtained from all patients. Quantitative R2* MR imaging of the liver was performed at baseline and 72 hours after USPIO administration in patients with biopsy-proven NASH (n = 13), hepatic steatosis without NASH (n = 11), and healthy control subjects (n = 9). The hepatic USPIO uptake in the liver was quantified by the difference in R2* (ΔR2*) between the contrast material-enhanced images and baseline images. Between-group differences in mean ΔR2* were tested with the Student t test, and diagnostic accuracy was tested by calculating the area under the receiver operating characteristic curve. Patients with NASH had a significantly lower ΔR2* 72 hours after USPIO administration when compared with patients who had simple steatosis and healthy control subjects (mean ± standard deviation for patients with NASH, 37.0 sec(-1) ± 16.1; patients with simple steatosis, 61.0 sec(-1) ± 17.3; and healthy control subjects, 72.2 sec(-1) ± 22.0; P = .006 for NASH vs simple steatosis; P < .001 for NASH vs healthy control subjects). The area under the receiver operating characteristic curve to distinguish NASH from simple steatosis was 0.87 (95% confidence interval: 0.72, 1.00). This proof-of-concept study provides clues that hepatic USPIO uptake in patients with NASH is decreased and that USPIO MR imaging can be used to differentiate NASH from simple steatosis.
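The uptake metric ΔR2* and the discrimination step can be sketched as follows. The R2* values are invented, not the study's measurements; the empirical ROC AUC is computed here as the normalized Mann-Whitney statistic, which equals the area under the empirical ROC curve:

```python
import numpy as np

# Hypothetical R2* values (s^-1) per subject at baseline and 72 h after USPIO.
nash_pre = np.array([30.0, 32.0, 28.0, 31.0])
nash_post = np.array([64.0, 70.0, 65.0, 66.0])
steato_pre = np.array([31.0, 29.0, 30.0, 32.0])
steato_post = np.array([90.0, 92.0, 88.0, 95.0])

# Hepatic USPIO uptake quantified as the R2* increase after contrast.
dr2_nash = nash_post - nash_pre        # lower uptake expected in NASH
dr2_steato = steato_post - steato_pre

def auc_mann_whitney(neg, pos):
    """ROC AUC as the probability that a positive-class value exceeds a
    negative-class value (Mann-Whitney U divided by n_neg * n_pos)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Treat simple steatosis as the high-uptake (positive) class.
auc = auc_mann_whitney(dr2_nash, dr2_steato)
```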
Kikuchi, K; Masuda, Y; Yamashita, T; Sato, K; Katagiri, C; Hirao, T; Mizokami, Y; Yaguchi, H
2016-08-01
Facial skin pigmentation is one of the most prominent visible features of skin aging and often affects perception of health and beauty. To date, facial pigmentation has been evaluated using various image analysis methods developed for the cosmetic and esthetic fields. However, existing methods cannot provide precise information on pigmented spots, such as variations in size, color shade, and distribution pattern. The purpose of this study is the development of image evaluation methods to analyze individual pigmented spots and acquire detailed information on their age-related changes. To characterize the individual pigmented spots within a cheek image, we established a simple object-counting algorithm. First, we captured cheek images using an original imaging system equipped with an illumination unit and a high-resolution digital camera. The acquired images were converted into melanin concentration images using compensation formulae. Next, the melanin images were converted into binary images. The binary images were then subjected to noise reduction. Finally, we calculated parameters such as the melanin concentration, quantity, and size of individual pigmented spots using a connected-components labeling algorithm, which assigns a unique label to each separate group of connected pixels. The cheek image analysis was evaluated on 643 female Japanese subjects. We confirmed that the proposed method was sufficiently sensitive to measure the melanin concentration, and the numbers and sizes of individual pigmented spots through manual evaluation of the cheek images. The image analysis results for the 643 Japanese women indicated clear relationships between age and the changes in the pigmented spots. We developed a new quantitative evaluation method for individual pigmented spots in facial skin. 
This method facilitates the analysis of the characteristics of various pigmented facial spots and is directly applicable to the fields of dermatology, pharmacology, and esthetic cosmetology. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
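A connected-components labeling pass of the kind described, assigning a unique label to each group of connected pixels and measuring spot sizes, can be sketched in a few lines. The toy melanin map and threshold below are illustrative only, not the paper's compensation formulae or imaging system:

```python
import numpy as np

def label_spots(binary):
    """4-connected component labeling of a binary image via flood fill;
    returns the label image and the pixel count (size) of each spot."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    sizes = []
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                lab = len(sizes) + 1
                labels[y, x] = lab
                stack, count = [(y, x)], 0
                while stack:
                    cy, cx = stack.pop()
                    count += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = lab
                            stack.append((ny, nx))
                sizes.append(count)
    return labels, sizes

# Toy melanin-concentration map thresholded into a binary spot image.
melanin = np.array([[0.9, 0.8, 0.0, 0.0],
                    [0.7, 0.0, 0.0, 0.6],
                    [0.0, 0.0, 0.0, 0.7],
                    [0.0, 0.0, 0.0, 0.0]])
labels, sizes = label_spots(melanin > 0.5)  # two spots of sizes 3 and 2
```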
D. M., Jayaseema; Lai, Jiann-Shiun; Hueng, Dueng-Yuan; Chang, Chen
2013-01-01
Cellular magnetic resonance imaging (MRI) has been well-established for tracking neural progenitor cells (NPC). Superparamagnetic iron oxide nanoparticles (SPIONs) approved for clinical application are the most common agents used for labeling. Conventionally, transfection agents (TAs) were added with SPIONs to facilitate cell labeling because SPIONs in the native unmodified form were deemed inefficient for intracellular labeling. However, compelling evidence also shows that simple SPION incubation is not invariably ineffective. The labeling efficiency can be improved by prolonged incubation and elevated iron doses. The goal of the present study was to establish simple SPION incubation as an efficient intracellular labeling method. To this end, NPCs derived from the neonatal subventricular zone were incubated with SPIONs (Feridex®) and then evaluated in vitro with regard to the labeling efficiency and biological functions. The results showed that, following 48 hours of incubation at 75 µg/ml, nearly all NPCs exhibited visible SPION intake. Evidence from light microscopy, electron microscopy, chemical analysis, and magnetic resonance imaging confirmed the effectiveness of the labeling. Additionally, biological assays showed that the labeled NPCs exhibited unaffected viability, oxidative stress, apoptosis and differentiation. In the demonstrated in vivo cellular MRI experiment, the hypointensities representing the SPION labeled NPCs remained observable throughout the entire tracking period. The findings indicate that simple SPION incubation without the addition of TAs is an efficient intracellular magnetic labeling method. This simple approach may be considered as an alternative approach to the mainstream labeling method that involves the use of TAs. PMID:23468856
The neural basis of precise visual short-term memory for complex recognisable objects.
Veldsman, Michele; Mitchell, Daniel J; Cusack, Rhodri
2017-10-01
Recent evidence suggests that visual short-term memory (VSTM) capacity estimated using simple objects, such as colours and oriented bars, may not generalise well to more naturalistic stimuli. More visual detail can be stored in VSTM when complex, recognisable objects are maintained compared to simple objects. It is not yet known if it is recognisability that enhances memory precision, nor whether maintenance of recognisable objects is achieved with the same network of brain regions supporting maintenance of simple objects. We used a novel stimulus generation method to parametrically warp photographic images along a continuum, allowing separate estimation of the precision of memory representations and the number of items retained. The stimulus generation method was also designed to create unrecognisable, though perceptually matched, stimuli, to investigate the impact of recognisability on VSTM. We adapted the widely-used change detection and continuous report paradigms for use with complex, photographic images. Across three functional magnetic resonance imaging (fMRI) experiments, we demonstrated greater precision for recognisable objects in VSTM compared to unrecognisable objects. This clear behavioural advantage was not the result of recruitment of additional brain regions, or of stronger mean activity within the core network. Representational similarity analysis revealed greater variability across item repetitions in the representations of recognisable, compared to unrecognisable complex objects. We therefore propose that a richer range of neural representations support VSTM for complex recognisable objects. Copyright © 2017 Elsevier Inc. All rights reserved.
Digital image classification with the help of artificial neural network by simple histogram
Dey, Pranab; Banerjee, Nirmalya; Kaur, Rajwant
2016-01-01
Background: Visual image classification is a great challenge to the cytopathologist in routine day-to-day work. Artificial neural network (ANN) may be helpful in this matter. Aims and Objectives: In this study, we have tried to classify digital images of malignant and benign cells in effusion cytology smear with the help of simple histogram data and ANN. Materials and Methods: A total of 404 digital images consisting of 168 benign cells and 236 malignant cells were selected for this study. The simple histogram data was extracted from these digital images and an ANN was constructed with the help of Neurointelligence software [Alyuda Neurointelligence 2.2 (577), Cupertino, California, USA]. The network architecture was 6-3-1. The images were classified as training set (281), validation set (63), and test set (60). The on-line backpropagation training algorithm was used for this study. Result: A total of 10,000 iterations were done to train the ANN system with the speed of 609.81/s. After the adequate training of this ANN model, the system was able to identify all 34 malignant cell images and 24 out of 26 benign cells. Conclusion: The ANN model can be used for the identification of the individual malignant cells with the help of simple histogram data. This study will be helpful in the future to identify malignant cells in unknown situations. PMID:27279679
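A 6-3-1 network trained with online backpropagation, as described, can be sketched in plain numpy. The initialization, learning rate, and toy feature vector are assumptions for illustration, not the Neurointelligence configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 6-3-1 architecture: 6 histogram features in, 3 hidden units, 1 output.
W1, b1 = rng.normal(scale=0.5, size=(3, 6)), np.zeros(3)
W2, b2 = rng.normal(scale=0.5, size=(1, 3)), np.zeros(1)

def forward(x):
    h = sigmoid(W1 @ x + b1)
    return h, sigmoid(W2 @ h + b2)

def train_step(x, t, lr=0.5):
    """One online backpropagation step on a single (features, label) example."""
    global W1, b1, W2, b2
    h, y = forward(x)
    d_out = (y - t) * y * (1 - y)        # sigmoid + squared-error delta
    d_hid = (W2.T @ d_out) * h * (1 - h)
    W2 -= lr * np.outer(d_out, h); b2 -= lr * d_out
    W1 -= lr * np.outer(d_hid, x); b1 -= lr * d_hid

x = np.array([0.2, 0.4, 0.1, 0.7, 0.3, 0.5])  # toy normalized histogram features
t = 1.0                                        # 1 = malignant, 0 = benign
loss_before = (forward(x)[1][0] - t) ** 2
for _ in range(50):
    train_step(x, t)
loss_after = (forward(x)[1][0] - t) ** 2       # decreases with training
```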
A simple prescription for simulating and characterizing gravitational arcs
NASA Astrophysics Data System (ADS)
Furlanetto, C.; Santiago, B. X.; Makler, M.; de Bom, C.; Brandt, C. H.; Neto, A. F.; Ferreira, P. C.; da Costa, L. N.; Maia, M. A. G.
2013-01-01
Simple models of gravitational arcs are crucial for simulating large samples of these objects with full control of the input parameters. These models also provide approximate and automated estimates of the shape and structure of the arcs, which are necessary for detecting and characterizing these objects on massive wide-area imaging surveys. Here we present and explore the ArcEllipse, a simple prescription for creating objects with a shape similar to gravitational arcs. We also present PaintArcs, a code that couples this geometrical form with a brightness distribution and adds the resulting object to images. Finally, we introduce ArcFitting, a tool that fits ArcEllipses to images of real gravitational arcs. We validate this fitting technique using simulated arcs and apply it to CFHTLS and HST images of tangential arcs around clusters of galaxies. Our simple ArcEllipse model for the arc, associated with a Sérsic profile for the source, recovers the total signal in real images typically within 10%-30%. The ArcEllipse+Sérsic models also automatically recover visual estimates of length-to-width ratios of real arcs. Residual maps between data and model images reveal the incidence of arc substructure. They may thus be used as a diagnostic for arcs formed by the merging of multiple images. The incidence of these substructures is the main factor that prevents ArcEllipse models from accurately describing real lensed systems.
A comparison of quantitative methods for clinical imaging with hyperpolarized (13)C-pyruvate.
Daniels, Charlie J; McLean, Mary A; Schulte, Rolf F; Robb, Fraser J; Gill, Andrew B; McGlashan, Nicholas; Graves, Martin J; Schwaiger, Markus; Lomas, David J; Brindle, Kevin M; Gallagher, Ferdia A
2016-04-01
Dissolution dynamic nuclear polarization (DNP) enables the metabolism of hyperpolarized (13)C-labelled molecules, such as the conversion of [1-(13)C]pyruvate to [1-(13)C]lactate, to be dynamically and non-invasively imaged in tissue. Imaging of this exchange reaction in animal models has been shown to detect early treatment response and correlate with tumour grade. The first human DNP study has recently been completed, and, for widespread clinical translation, simple and reliable methods are necessary to accurately probe the reaction in patients. However, there is currently no consensus on the most appropriate method to quantify this exchange reaction. In this study, an in vitro system was used to compare several kinetic models, as well as simple model-free methods. Experiments were performed using a clinical hyperpolarizer, a human 3 T MR system, and spectroscopic imaging sequences. The quantitative methods were compared in vivo by using subcutaneous breast tumours in rats to examine the effect of pyruvate inflow. The two-way kinetic model was the most accurate method for characterizing the exchange reaction in vitro, and the incorporation of a Heaviside step inflow profile was best able to describe the in vivo data. The lactate time-to-peak and the lactate-to-pyruvate area under the curve ratio were simple model-free approaches that accurately represented the full reaction, with the time-to-peak method performing indistinguishably from the best kinetic model. Finally, extracting data from a single pixel was a robust and reliable surrogate of the whole region of interest. This work has identified appropriate quantitative methods for future work in the analysis of human hyperpolarized (13)C data. © 2016 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd.
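The two model-free metrics named above, the lactate-to-pyruvate area-under-the-curve ratio and the lactate time-to-peak, can be sketched directly. The dynamic curves below are synthetic toy shapes, not hyperpolarized data:

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal area under y(x)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

# Synthetic dynamic curves (arbitrary units), sampled every 2 s: toy shapes
# mimicking pyruvate inflow/decay and delayed lactate build-up.
t = np.arange(0.0, 60.0, 2.0)
pyruvate = 100.0 * np.exp(-t / 15.0)
lactate = 40.0 * (np.exp(-t / 30.0) - np.exp(-t / 8.0))

# Model-free metric 1: lactate-to-pyruvate AUC ratio.
auc_ratio = trapz(lactate, t) / trapz(pyruvate, t)

# Model-free metric 2: lactate time-to-peak.
time_to_peak = t[np.argmax(lactate)]
```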
Qian, Jianjun; Yang, Jian; Xu, Yong
2013-09-01
This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
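The core IDLS step, representing the central macro-pixel as a ridge-regression combination of its neighbors, can be sketched as follows; the 2x2 macro-pixels and the regularization weight are toy assumptions:

```python
import numpy as np

def ridge_coefficients(center, neighbors, lam=0.1):
    """Linear representation coefficients of the central macro-pixel in
    terms of its neighbors: w = (N^T N + lam*I)^-1 N^T c,
    with the neighbor patches stacked as columns of N."""
    N = np.column_stack(neighbors)
    k = N.shape[1]
    return np.linalg.solve(N.T @ N + lam * np.eye(k), N.T @ center)

# Toy 2x2 macro-pixels flattened to 4-vectors; values are illustrative.
center = np.array([3.0, 1.0, 3.0, 1.0])
neighbors = [np.array([1.0, 1.0, 1.0, 1.0]),
             np.array([1.0, 0.0, 1.0, 0.0]),
             np.array([0.0, 1.0, 0.0, -1.0])]
w = ridge_coefficients(center, neighbors)
reconstruction = np.column_stack(neighbors) @ w  # close to center
```

The coefficient vector w is the local structure feature; in IDLS these coefficients, collected over the image, form the structure images that are later concatenated and reduced with Fisher linear discriminant analysis.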
fMRI activation patterns in an analytic reasoning task: consistency with EEG source localization
NASA Astrophysics Data System (ADS)
Li, Bian; Vasanta, Kalyana C.; O'Boyle, Michael; Baker, Mary C.; Nutter, Brian; Mitra, Sunanda
2010-03-01
Functional magnetic resonance imaging (fMRI) is used to model brain activation patterns associated with various perceptual and cognitive processes as reflected by the hemodynamic (BOLD) response. While many sensory and motor tasks are associated with relatively simple activation patterns in localized regions, higher-order cognitive tasks may produce activity in many different brain areas involving complex neural circuitry. We applied a recently proposed probabilistic independent component analysis technique (PICA) to determine the true dimensionality of the fMRI data and used EEG localization to identify the common activated patterns (mapped as Brodmann areas) associated with a complex cognitive task like analytic reasoning. Our preliminary study suggests that a hybrid GLM/PICA analysis may reveal additional regions of activation (beyond simple GLM) that are consistent with electroencephalography (EEG) source localization patterns.
Synthesis and Structural Characterization of CdFe2O4 Nanostructures
NASA Astrophysics Data System (ADS)
Kalpanadevi, K.; Sinduja, C. R.; Manimekalai, R.
The synthesis of CdFe2O4 nanoparticles has been achieved by a simple thermal decomposition method from the inorganic precursor, [CdFe2(cin)3(N2H4)3], which was obtained by a simple precipitation method from the corresponding metal salts, cinnamic acid and hydrazine hydrate. The precursor was characterized by hydrazine and metal analyses, infrared spectral analysis and thermogravimetric analysis. On appropriate annealing, [CdFe2(cin)3(N2H4)3] yielded CdFe2O4 nanoparticles. The XRD studies showed that the crystallite size of the particles was 13 nm. The results of HRTEM studies also agreed well with those of XRD. The SAED pattern of the sample established the polycrystalline nature of the nanoparticles. SEM images displayed a random distribution of grains in the sample.
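Crystallite size from XRD peak broadening is conventionally estimated with the Scherrer equation, D = Kλ/(β cos θ). A sketch with illustrative Cu K-alpha values chosen to land in the reported size range (the peak position and FWHM are assumptions, not the paper's measured data):

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Scherrer crystallite size D = K * lambda / (beta * cos(theta)),
    with beta the peak FWHM in radians and theta half the diffraction angle."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative values: Cu K-alpha (0.15406 nm), a peak near 2-theta = 35.4 deg.
d_nm = scherrer_size(0.15406, fwhm_deg=0.65, two_theta_deg=35.4)  # ~13 nm
```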
Recognition of Simple 3D Geometrical Objects under Partial Occlusion
NASA Astrophysics Data System (ADS)
Barchunova, Alexandra; Sommer, Gerald
In this paper we present a novel procedure for contour-based recognition of partially occluded three-dimensional objects. In our approach we use images of real and rendered objects whose contours have been deformed by a restricted change of the viewpoint. The preparatory part consists of contour extraction, preprocessing, local structure analysis and feature extraction. The main part deals with an extended construction and functionality of the classifier ensemble Adaptive Occlusion Classifier (AOC). It relies on a hierarchical fragmenting algorithm to perform a local structure analysis which is essential when dealing with occlusions. In the experimental part of this paper we present classification results for five classes of simple geometrical figures: prism, cylinder, half-cylinder, cube, and bridge. We compare classification results for three classical feature extractors: Fourier descriptors, pseudo-Zernike moments, and Zernike moments.
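One of the compared feature extractors, Fourier descriptors of a closed contour, can be sketched via the FFT of the boundary points written as complex numbers. This is a generic textbook formulation, not the paper's exact preprocessing pipeline:

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=8):
    """Translation-, scale-, and rotation-invariant Fourier descriptors of a
    closed contour given as complex points x + iy."""
    z = np.fft.fft(contour)
    z[0] = 0.0                   # drop the DC term -> translation invariance
    mags = np.abs(z)             # magnitudes only -> rotation invariance
    mags /= mags[1]              # normalize by first harmonic -> scale invariance
    return mags[1:1 + n_coeffs]

# Toy contour: 64 points on a unit circle; all energy sits in harmonic 1.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.exp(1j * t)
fd = fourier_descriptors(circle)
```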
NASA Astrophysics Data System (ADS)
He, Honghui; Dong, Yang; Zhou, Jialing; Ma, Hui
2017-03-01
As one of the salient features of light, polarization contains abundant structural and optical information about media. Recently, as a comprehensive description of polarization properties, Mueller matrix polarimetry has been applied to various biomedical studies such as cancerous tissue detection. In previous works, it has been found that the structural information encoded in the 2D Mueller matrix images can be presented by other transformed parameters with more explicit relationships to certain microstructural features. In this paper, we present a statistical analysis method that transforms the 2D Mueller matrix images into frequency distribution histograms (FDHs) and their central moments to reveal the dominant structural features of samples quantitatively. The experimental results for porcine heart, intestine, stomach, and liver tissues demonstrate that the transformation parameters and central moments based on the statistical analysis of Mueller matrix elements have simple relationships to the dominant microstructural properties of biomedical samples, including the density and orientation of fibrous structures, the depolarization power, and diattenuation and absorption abilities. It is shown in this paper that the statistical analysis of 2D images of Mueller matrix elements may provide quantitative or semi-quantitative criteria for biomedical diagnosis.
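The FDH-plus-central-moments transformation can be sketched as a histogram over an element's value range followed by moment computation; the tiny "element image" below is a toy example, not tissue data:

```python
import numpy as np

def central_moments(image):
    """Mean, variance, skewness, and kurtosis of the frequency
    distribution of an image's pixel values."""
    v = np.asarray(image, dtype=float).ravel()
    mu = v.mean()
    var = ((v - mu) ** 2).mean()
    std = var ** 0.5
    skew = (((v - mu) / std) ** 3).mean()
    kurt = (((v - mu) / std) ** 4).mean()
    return mu, var, skew, kurt

# Toy Mueller matrix element image; real elements lie in [-1, 1].
m22 = np.array([[0.90, 0.80, 0.85],
                [0.70, 0.75, 0.80],
                [0.95, 0.90, 0.85]])

# Frequency distribution histogram (FDH) over the element's value range.
fdh, edges = np.histogram(m22, bins=10, range=(-1.0, 1.0))
mu, var, skew, kurt = central_moments(m22)
```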
Age estimation using exfoliative cytology and radiovisiography: A comparative study
Nallamala, Shilpa; Guttikonda, Venkateswara Rao; Manchikatla, Praveen Kumar; Taneeru, Sravya
2017-01-01
Introduction: Age estimation is one of the essential factors in establishing the identity of an individual. Among various methods, exfoliative cytology (EC) is a unique, noninvasive technique, involving simple, and pain-free collection of intact cells from the oral cavity for microscopic examination. Objective: The study was undertaken with an aim to estimate the age of an individual from the average cell size of their buccal smears calculated using image analysis morphometric software and the pulp–tooth area ratio in mandibular canine of the same individual using radiovisiography (RVG). Materials and Methods: Buccal smears were collected from 100 apparently healthy individuals. After fixation in 95% alcohol, the smears were stained using Papanicolaou stain. The average cell size was measured using image analysis software (Image-Pro Insight 8.0). The RVG images of mandibular canines were obtained, pulp and tooth areas were traced using AutoCAD 2010 software, and area ratio was calculated. The estimated age was then calculated using regression analysis. Results: The paired t-test between chronological age and estimated age by cell size and pulp–tooth area ratio was statistically nonsignificant (P > 0.05). Conclusion: In the present study, age estimated by pulp–tooth area ratio and EC yielded good results. PMID:29657491
A New Dusts Sensor for Cultural Heritage Applications Based on Image Processing
Proietti, Andrea; Leccese, Fabio; Caciotta, Maurizio; Morresi, Fabio; Santamaria, Ulderico; Malomo, Carmela
2014-01-01
In this paper, we propose a new sensor for the detection and analysis of dusts (seen as powders and fibers) in indoor environments, especially designed for applications in the field of Cultural Heritage or in other contexts where the presence of dust requires special care (surgery, clean rooms, etc.). The presented system relies on image processing techniques (enhancement, noise reduction, segmentation, metrics analysis) and it allows obtaining both qualitative and quantitative information on the accumulation of dust. This information aims to identify the geometric and topological features of the elements of the deposit. The curators can use this information in order to design suitable prevention and maintenance actions for objects and environments. The sensor consists of simple and relatively cheap tools, based on a high-resolution image acquisition system, a preprocessing software to improve the captured image and an analysis algorithm for the feature extraction and the classification of the elements of the dust deposit. We carried out some tests in order to validate the system operation. These tests were performed within the Sistine Chapel in the Vatican Museums, showing the good performance of the proposed sensor in terms of execution time and classification accuracy. PMID:24901977
Noise correction on LANDSAT images using a spline-like algorithm
NASA Technical Reports Server (NTRS)
Vijaykumar, N. L. (Principal Investigator); Dias, L. A. V.
1985-01-01
Many applications using LANDSAT images face a dilemma: the user needs a certain scene (for example, a flooded region), but that particular image may present interference or noise in the form of horizontal stripes. During automatic analysis, this interference or noise may cause false readings of the region of interest. To minimize this interference or noise, many solutions are used, for instance, taking the average (simple or weighted) of the neighboring vertical points. In the case of high interference (more than one adjacent line lost), the method of averages may not suit the desired purpose. The solution proposed is to use a spline-like algorithm (weighted splines). This type of interpolation is simple to implement on a computer, fast, uses only four points in each interval, and eliminates the need to solve a linear equation system. In the normal mode of operation, the first and second derivatives of the solution function are continuous and determined by the data points, as in cubic splines. It is possible, however, to impose the values of the first derivatives, in order to account for sharp boundaries, without increasing the computational effort. Some examples using the proposed method are also shown.
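The column-wise repair of dropped scanlines can be sketched with linear interpolation standing in for the weighted splines (a deliberate simplification of the proposed method, using a toy image rather than LANDSAT data):

```python
import numpy as np

def repair_lines(image, bad_rows):
    """Replace dropped scanlines by interpolating each column vertically
    from the surrounding good rows (linear here; the paper uses
    weighted splines)."""
    fixed = image.astype(float).copy()
    rows = np.arange(image.shape[0])
    good = np.setdiff1d(rows, bad_rows)
    for col in range(image.shape[1]):
        fixed[bad_rows, col] = np.interp(bad_rows, good, image[good, col])
    return fixed

# Toy band whose rows ramp 0, 10, ..., 50, with two adjacent lines lost
# (set to 0 by the sensor dropout) - the hard case for simple averaging.
img = np.tile(np.arange(6, dtype=float)[:, None] * 10, (1, 4))
img[2:4] = 0.0
restored = repair_lines(img, np.array([2, 3]))  # rows 2 and 3 become 20 and 30
```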
MO-FG-202-06: Improving the Performance of Gamma Analysis QA with Radiomics- Based Image Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wootton, L; Nyflot, M; Ford, E
2016-06-15
Purpose: The use of gamma analysis for IMRT quality assurance has well-known limitations. Traditionally, a simple thresholding technique is used to evaluate passing criteria. However, like any image, the gamma distribution is rich in information, which thresholding mostly discards. We therefore propose a novel method of analyzing gamma images that uses quantitative image features borrowed from radiomics, with the goal of improving error detection. Methods: 368 gamma images were generated from 184 clinical IMRT beams. For each beam the dose to a phantom was measured with EPID dosimetry and compared to the TPS dose calculated with and without normally distributed (2 mm sigma) errors in MLC positions. The magnitudes of 17 intensity-histogram and size-zone radiomic features were derived from each image. The features that differed most significantly between image sets were determined with ROC analysis. A linear machine-learning model was trained on these features, using 180 gamma images, to classify images as with or without errors. The model was then applied to an independent validation set of 188 additional gamma distributions, half with and half without errors. Results: The most significant features for detecting errors were histogram kurtosis (p=0.007) and three size-zone metrics (p<1e-6 for each). The size-zone metrics detected clusters of high gamma-value pixels under mispositioned MLCs. The model applied to the validation set had an AUC of 0.8, compared to 0.56 for traditional gamma analysis with the decision threshold restricted to 98% or less. Conclusion: A radiomics-based image analysis method was developed that is more effective in detecting errors than traditional gamma analysis. Though the pilot study here considers only MLC position errors, radiomics-based methods for other error types are being developed, which may provide better error detection and useful information on the source of detected errors.
This work was partially supported by a grant from the Agency for Healthcare Research and Quality, grant number R18 HS022244-01.
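Of the 17 radiomic features, only histogram kurtosis can be sketched compactly; the synthetic gamma-value distributions below are hypothetical stand-ins for the clean and error-containing maps described above:

```python
import numpy as np

def intensity_kurtosis(gamma_img):
    """Excess kurtosis of the gamma-value histogram; a heavy tail of
    high gamma values is one signature of a delivery error."""
    x = np.asarray(gamma_img, dtype=float).ravel()
    mu, sigma = x.mean(), x.std()
    return ((x - mu) ** 4).mean() / sigma ** 4 - 3.0

rng = np.random.default_rng(0)
clean = rng.normal(0.3, 0.1, 10000)                           # well-behaved map
errors = np.concatenate([clean, rng.normal(1.5, 0.2, 300)])   # failing cluster
k_clean, k_err = intensity_kurtosis(clean), intensity_kurtosis(errors)
print(k_clean, k_err)   # the error map has a far heavier tail
```

In the study such features feed a trained linear classifier; here the feature alone already separates the two cases.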
Lilliu, S; Maragliano, C; Hampton, M; Elliott, M; Stefancich, M; Chiesa, M; Dahlem, M S; Macdonald, J E
2013-11-27
We report a simple technique for mapping Electrostatic Force Microscopy (EFM) bias sweep data into 2D images. The method allows simultaneous probing, in the same scanning area, of the contact potential difference and the second derivative of the capacitance between tip and sample, along with the height information. The only required equipment consists of a microscope with lift-mode EFM capable of phase shift detection. We designate this approach as Scanning Probe Potential Electrostatic Force Microscopy (SPP-EFM). An open-source MATLAB Graphical User Interface (GUI) for images acquisition, processing and analysis has been developed. The technique is tested with Indium Tin Oxide (ITO) and with poly(3-hexylthiophene) (P3HT) nanowires for organic transistor applications.
Ship dynamics for maritime ISAR imaging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin Walter
2008-02-01
Demand is increasing for imaging ships at sea. Conventional SAR fails because the ships are usually in motion, both with a forward velocity and with other linear and angular motions that accompany sea travel. Because the target itself is moving, this becomes an Inverse-SAR, or ISAR, problem. Developing useful ISAR techniques and algorithms is considerably aided by first understanding the nature and characteristics of ship motion. Consequently, a brief study of some principles of naval architecture sheds useful light on this problem. We attempt to do so here. Ship motions are analyzed for their impact on range-Doppler imaging using Inverse Synthetic Aperture Radar (ISAR). A framework for analysis is developed, and limitations of simple ISAR systems are discussed.
Improved deconvolution of very weak confocal signals.
Day, Kasey J; La Rivière, Patrick J; Chandler, Talon; Bindokas, Vytas P; Ferrier, Nicola J; Glick, Benjamin S
2017-01-01
Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.
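As a minimal sketch of the prefiltering idea (not the Huygens pipeline itself), a separable Gaussian blur smooths counting noise while approximately preserving total signal; the toy "optical section" below is invented:

```python
import numpy as np

def gaussian_kernel1d(sigma):
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_blur2d(img, sigma):
    """Separable Gaussian prefilter for one optical section."""
    k = gaussian_kernel1d(sigma)
    pad = len(k) // 2
    conv = lambda v: np.convolve(np.pad(v, pad, mode="edge"), k, "valid")
    tmp = np.apply_along_axis(conv, 1, img)   # blur rows
    return np.apply_along_axis(conv, 0, tmp)  # then columns

# Dim punctate signal buried in Poisson noise
rng = np.random.default_rng(1)
section = rng.poisson(0.5, (32, 32)).astype(float)
section[16, 16] += 20.0
smoothed = gaussian_blur2d(section, sigma=1.0)
print(section.sum(), smoothed.sum())   # total signal approximately preserved
```

The prefiltered sections would then be passed to the deconvolution software in place of the raw data.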
Techniques for using diazo materials in remote sensor data analysis
NASA Technical Reports Server (NTRS)
Whitebay, L. E.; Mount, S.
1978-01-01
The use of data derived from LANDSAT is facilitated when special products or computer-enhanced images can be analyzed. However, the facilities required to produce and analyze such products prevent many users from taking full advantage of the LANDSAT data. A simple, low-cost method is presented by which users can make their own specially enhanced composite images from the four-band black-and-white LANDSAT images by using the diazo process. The diazo process is described, and a detailed procedure for making various color composites, such as color infrared, false natural color, and false color, is provided. The advantages and limitations of the diazo process are discussed. A brief discussion of the interpretation of diazo composites for land use mapping, with some typical examples, is included.
Imaging Fluorescent Combustion Species in Gas Turbine Flame Tubes: On Complexities in Real Systems
NASA Technical Reports Server (NTRS)
Hicks, Y. R.; Locke, R. J.; Anderson, R. C.; Zaller, M.; Schock, H. J.
1997-01-01
Planar laser-induced fluorescence (PLIF) is used to visualize the flame structure via OH, NO, and fuel imaging in kerosene-burning gas turbine combustor flame tubes. When compared to simple gaseous hydrocarbon flames and hydrogen flames, flame tube testing complexities include spectral interferences from large fuel fragments, unknown turbulence interactions, high pressure operation, and the concomitant need for windows and remote operation. Complications of these and other factors as they apply to image analysis are considered. Because both OH and gas turbine engine fuels (commercial and military) can be excited and detected using OH transition lines, a narrowband and a broadband detection scheme are compared and the benefits and drawbacks of each method are examined.
Omucheni, Dickson L; Kaduki, Kenneth A; Bulimo, Wallace D; Angeyo, Hudson K
2014-12-11
Multispectral imaging microscopy is a novel microscopic technique that integrates spectroscopy with optical imaging to record both spectral and spatial information of a specimen. This enables acquisition of a larger and more informative dataset than is achievable in conventional optical microscopy. However, such data are characterized by high signal correlation and are difficult to interpret using univariate data analysis techniques. In this work, we report the development and application of a novel method that uses principal component analysis (PCA) in the processing of spectral images obtained from a simple multispectral-multimodal imaging microscope to detect Plasmodium parasites in unstained thin blood smears for malaria diagnostics. The optical microscope used in this work has been modified by replacing the broadband light source (tungsten halogen lamp) with a set of light emitting diodes (LEDs) emitting thirteen different wavelengths of monochromatic light in the UV-vis-NIR range. The LEDs are activated sequentially to illuminate the same spot on the unstained thin blood smears on glass slides, and grey level images are recorded at each wavelength. PCA was used to perform data dimensionality reduction and to enhance score images for visualization, as well as for feature extraction through clusters in score space. Using this approach, haemozoin was uniquely distinguished from haemoglobin in unstained thin blood smears on glass slides, and the 590-700 nm spectral range was identified as an important band for optical imaging of haemozoin as a biomarker for malaria diagnosis. This work is of great significance in reducing the time spent on staining malaria specimens, thus drastically reducing diagnosis time. The approach has the potential of replacing a trained human eye with a trained computerized vision system for malaria parasite blood screening.
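A minimal sketch of the PCA step, assuming the 13-band stack is reshaped into a pixels-by-bands matrix before decomposition (the paper's exact preprocessing is not specified); the three-band toy stack is illustrative:

```python
import numpy as np

def pca_score_images(stack):
    """stack: (n_bands, H, W) grey-level images, one per LED wavelength.
    Returns score images and the explained-variance ratio of each PC."""
    n, h, w = stack.shape
    X = stack.reshape(n, -1).T           # pixels x bands
    X = X - X.mean(axis=0)               # centre each band
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = (U * S).T.reshape(n, h, w)  # pixel projections onto each PC
    explained = S ** 2 / np.sum(S ** 2)
    return scores, explained

# Toy stack: two bands share a correlated "parasite" blob, one is pure noise
rng = np.random.default_rng(2)
blob = np.zeros((8, 8)); blob[2:5, 2:5] = 5.0
stack = np.stack([blob + rng.normal(0, 0.1, (8, 8)),
                  2 * blob + rng.normal(0, 0.1, (8, 8)),
                  rng.normal(0, 0.1, (8, 8))])
scores, explained = pca_score_images(stack)
print(explained)   # the first component captures nearly all the variance
```

The score images (here `scores[0]`) are what would be contrast-stretched and clustered for visual inspection.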
Kumar, B Santhosh; Sandhyamani, S; Nazeer, Shaiju S; Jayasree, R S
2015-02-01
Autofluorescence exhibited by tissues often interferes with immunofluorescence. Using imaging and spectral analysis, we observed remarkable reduction of autofluorescence of formalin fixed paraffin embedded tissues irradiated with light prior to incubation with immunofluorescent dyes. The technique of photobleaching offers significant improvement in the quality and specificity of immunofluorescence. This has the potential for better techniques for disease diagnosis.
Moralidis, Efstratios; Spyridonidis, Tryfon; Arsos, Georgios; Skeberis, Vassilios; Anagnostopoulos, Constantinos; Gavrielidis, Stavros
2010-01-01
This study aimed to determine systolic dysfunction and estimate resting left ventricular ejection fraction (LVEF) from information collected during routine evaluation of patients with suspected or known coronary heart disease. This approach was then compared to gated single photon emission tomography (SPET). Patients having undergone stress (201)Tl myocardial perfusion imaging followed by equilibrium radionuclide angiography (ERNA) were separated into derivation (n=954) and validation (n=309) groups. Logistic regression analysis was used to develop scoring systems, containing clinical, electrocardiographic (ECG) and scintigraphic data, for the discrimination of an ERNA-LVEF<0.50. Linear regression analysis provided equations predicting ERNA-LVEF from those scores. In 373 patients LVEF was also assessed with (201)Tl gated SPET. Our results showed that an ECG-Scintigraphic scoring system was the best simple predictor of an ERNA-LVEF<0.50 in comparison to other models including ECG, clinical and scintigraphic variables in both the derivation and validation subpopulations. A simple linear equation was also derived for the assessment of resting LVEF from the ECG-Scintigraphic model. Equilibrium radionuclide angiography-LVEF had a good correlation with the ECG-Scintigraphic model LVEF (r=0.716, P=0.000), (201)Tl gated SPET LVEF (r=0.711, P=0.000) and the average LVEF from those assessments (r=0.796, P=0.000). The Bland-Altman statistic (mean+/-2SD) provided values of 0.001+/-0.176, 0.071+/-0.196 and 0.040+/-0.152, respectively. The average LVEF was a better discriminator of systolic dysfunction than gated SPET-LVEF in receiver operating characteristic (ROC) analysis and identified more patients (89%) with a ≤10% difference from ERNA-LVEF than gated SPET (65%, P=0.000). In conclusion, resting left ventricular systolic dysfunction can be determined effectively from simple resting ECG and stress myocardial perfusion imaging variables. 
This model provides reliable LVEF estimations, comparable to those from (201)Tl gated SPET, and can enhance the clinical performance of the latter.
Szarka, Mate; Guttman, Andras
2017-10-17
We present the application of a smartphone anatomy based technology in the field of liquid phase bioseparations, particularly in capillary electrophoresis. A simple capillary electrophoresis system was built with LED induced fluorescence detection and a credit card sized minicomputer to prove the concept of real time fluorescent imaging (zone adjustable time-lapse fluorescence image processor) and separation controller. The system was evaluated by analyzing under- and overloaded aminopyrenetrisulfonate (APTS)-labeled oligosaccharide samples. The open source software based image processing tool allowed undistorted signal modulation (reprocessing) if the signal was inappropriate for the actual detection system settings (too low or too high). The novel smart detection tool for fluorescently labeled biomolecules greatly expands dynamic range and enables retrospective correction for injections with unsuitable signal levels without the necessity to repeat the analysis.
Imaging, microscopic analysis, and modeling of a CdTe module degraded by heat and light
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, Steve; Albin, David; Hacke, Peter
2018-01-12
Photoluminescence (PL), electroluminescence (EL), and dark lock-in thermography are collected during stressing of a CdTe module under one-Sun light at an elevated temperature of 100 degrees C. The PL imaging system is simple and economical. The PL images show differing degrees of degradation across the module and are less sensitive to effects of shunting and resistance that appear on the EL images. Regions of varying degradation are chosen based on avoiding pre-existing shunt defects. These regions are evaluated using time-of-flight secondary ion-mass spectrometry and Kelvin probe force microscopy. Reduced PL intensity correlates to increased Cu concentration at the front interface. Numerical modeling and measurements agree that the increased Cu concentration at the junction also correlates to a reduced space charge region.
Measuring bacterial cell size with AFM
Osiro, Denise; Filho, Rubens Bernardes; Assis, Odilio Benedito Garrido; Jorge, Lúcio André de Castro; Colnago, Luiz Alberto
2012-01-01
Atomic Force Microscopy (AFM) can be used to obtain high-resolution topographical images of bacteria, revealing surface details and cell integrity. During scanning, however, the interactions between the AFM probe and the membrane result in distortion of the images. Such distortions or artifacts are the result of geometrical effects related to bacterial cell height, specimen curvature, and the AFM probe geometry. The most common artifact in imaging is surface broadening, which can lead to errors in bacterial sizing. Several methods of correction have been proposed to compensate for these artifacts, and in this study we describe a simple geometric model for the interaction between the tip (a pyramidal-shaped AFM probe) and the bacterium (Escherichia coli JM-109 strain) to minimize the enlarging effect. Approaches to bacteria immobilization and examples of AFM image analysis are also described. PMID:24031837
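The paper's geometric model is not reproduced in the abstract; the sketch below uses a generic pyramidal-tip broadening formula (each sidewall adds roughly height × tan(half-angle) to the apparent width), with made-up but plausible E. coli numbers:

```python
import math

def corrected_width(measured_width, cell_height, tip_half_angle_deg):
    """Generic pyramidal-tip correction: each sidewall adds roughly
    height * tan(half-angle) to the apparent width of a tall feature."""
    broadening = 2.0 * cell_height * math.tan(math.radians(tip_half_angle_deg))
    return measured_width - broadening

# Hypothetical numbers: 1000 nm apparent width, 400 nm cell height,
# 17.5 degree tip half-angle
w = corrected_width(1000.0, 400.0, 17.5)
print(w)   # narrower estimate, closer to the true cell width
```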
Kim, Won Hwa; Chung, Moo K; Singh, Vikas
2013-01-01
The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.
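A minimal sketch of the underlying construction (spectral graph wavelets, which apply a bandpass kernel to the graph Laplacian's eigenvalues); the kernel g(x) = x·exp(-x) and the 4-cycle graph are illustrative choices, not the paper's:

```python
import numpy as np

def graph_wavelet_operator(adj, scale):
    """One spectral graph wavelet band: apply a bandpass kernel
    g(s * lambda) to the eigenvalues of the graph Laplacian."""
    lap = np.diag(adj.sum(axis=1)) - adj
    lam, U = np.linalg.eigh(lap)
    g = scale * lam * np.exp(-scale * lam)   # g(x) = x e^{-x}, g(0) = 0
    return U @ np.diag(g) @ U.T

# 4-cycle graph; wavelet coefficients of a delta signal at vertex 0
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
delta = np.array([1.0, 0.0, 0.0, 0.0])
coeffs = graph_wavelet_operator(adj, scale=1.0) @ delta
print(coeffs)   # localized around vertex 0, symmetric on the cycle
```

On a shape mesh, the same operator evaluated at several scales gives each vertex the multi-resolution descriptor used for keypoints and segmentation.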
Image registration of low signal-to-noise cryo-STEM data.
Savitzky, Benjamin H; El Baggari, Ismail; Clement, Colin B; Waite, Emily; Goodge, Berit H; Baek, David J; Sheckelton, John P; Pasco, Christopher; Nair, Hari; Schreiber, Nathaniel J; Hoffman, Jason; Admasu, Alemayehu S; Kim, Jaewook; Cheong, Sang-Wook; Bhattacharya, Anand; Schlom, Darrell G; McQueen, Tyrel M; Hovden, Robert; Kourkoutis, Lena F
2018-08-01
Combining multiple fast image acquisitions to mitigate scan noise and drift artifacts has proven essential for picometer precision, quantitative analysis of atomic resolution scanning transmission electron microscopy (STEM) data. For very low signal-to-noise ratio (SNR) image stacks - frequently required for undistorted imaging at liquid nitrogen temperatures - image registration is particularly delicate, and standard approaches may either fail, or produce subtly specious reconstructed lattice images. We present an approach which effectively registers and averages image stacks which are challenging due to their low-SNR and propensity for unit cell misalignments. Registering all possible image pairs in a multi-image stack leads to significant information surplus. In combination with a simple physical picture of stage drift, this enables identification of incorrect image registrations, and determination of the optimal image shifts from the complete set of relative shifts. We demonstrate the effectiveness of our approach on experimental, cryogenic STEM datasets, highlighting subtle artifacts endemic to low-SNR lattice images and how they can be avoided. High-SNR average images with information transfer out to 0.72 Å are achieved at 300 kV and with the sample cooled to near liquid nitrogen temperature.
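A sketch of the consensus step under a simple drift model: given (possibly noisy) relative shifts between image pairs, the absolute shifts follow from least squares. The outlier-rejection stage described above is omitted, and the shift values are invented:

```python
import numpy as np

def shifts_from_pairs(pairwise, n):
    """Recover per-image shifts s_1..s_{n-1} (image 0 is the reference)
    from measured relative shifts d[(i, j)] ~ s_j - s_i, in least squares."""
    rows, rhs = [], []
    for (i, j), d in pairwise.items():
        r = np.zeros(n - 1)
        if j > 0:
            r[j - 1] += 1.0
        if i > 0:
            r[i - 1] -= 1.0
        rows.append(r)
        rhs.append(d)
    s, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.concatenate([[0.0], s])

# Three images; the (1, 2) measurement is noisy and inconsistent with the rest
pairs = {(0, 1): 1.1, (1, 2): 1.6, (0, 2): 2.5}
s = shifts_from_pairs(pairs, 3)
print(s)   # least-squares consensus over all measured pairs
```

In practice each axis is solved independently and the redundancy (n(n-1)/2 measurements for n-1 unknowns) is what makes bad registrations detectable.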
Cluet, David; Spichty, Martin; Delattre, Marie
2014-01-01
The mitotic spindle is a microtubule-based structure that elongates to accurately segregate chromosomes during anaphase. Its position within the cell also dictates the future cell cleavage plane, thereby determining daughter cell orientation within a tissue or cell fate adoption for polarized cells. Therefore, the mitotic spindle ensures at the same time proper cell division and developmental precision. Consequently, spindle dynamics is the subject of intensive research. Among the different cellular models that have been explored, the one-cell stage C. elegans embryo has been an essential and powerful system to dissect the molecular and biophysical basis of spindle elongation and positioning. Indeed, in this large and transparent cell, spindle poles (or centrosomes) can be easily detected from simple DIC microscopy by human eyes. To perform quantitative and high-throughput analysis of spindle motion, we developed a computer program, ACT, for Automated-Centrosome-Tracking from DIC movies of C. elegans embryos. We therefore offer an alternative to the image acquisition and processing of transgenic lines expressing fluorescent spindle markers. Consequently, experiments on large sets of cells can be performed with a simple setup using inexpensive microscopes. Moreover, analysis of any mutant or wild-type background is accessible because laborious rounds of crosses with transgenic lines become unnecessary. Last, our program allows spindle detection in other nematode species, offering the same quality of DIC images but for which techniques of transgenesis are not accessible. Thus, our program also opens the way towards a quantitative evolutionary approach of spindle dynamics. Overall, our computer program is a unique macro for the image- and movie-processing platform ImageJ. It is user-friendly and freely available under an open-source licence. ACT allows batch-wise analysis of large sets of mitosis events. 
Within 2 minutes, a single movie is processed and the accuracy of the automated tracking matches the precision of the human eye. PMID:24763198
Digital PIV (DPIV) Software Analysis System
NASA Technical Reports Server (NTRS)
Blackshire, James L.
1997-01-01
A software package was developed to provide a Digital PIV (DPIV) capability for NASA LaRC. The system provides an automated image capture, test correlation, and autocorrelation analysis capability for the Kodak Megaplus 1.4 digital camera system for PIV measurements. The package includes three separate programs that, when used together with the PIV data validation algorithm, constitute a complete DPIV analysis capability. The programs are run on an IBM PC/AT host computer running either Microsoft Windows 3.1 or Windows 95 using a 'quickwin' format that allows simple user interface and output capabilities in the Windows environment.
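The LaRC package's internals are not described here; as an illustrative sketch of the correlation step at the heart of DPIV analysis, an FFT-based cross-correlation between two interrogation windows recovers the particle displacement (the package itself performs autocorrelation analysis; the window contents below are synthetic):

```python
import numpy as np

def displacement(window_a, window_b):
    """Peak of the circular FFT cross-correlation between two
    interrogation windows gives the bulk particle displacement."""
    fa, fb = np.fft.fft2(window_a), np.fft.fft2(window_b)
    corr = np.real(np.fft.ifft2(np.conj(fa) * fb))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts beyond half the window size around to negative values
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

a = np.zeros((32, 32))
a[10, 12] = 1.0
a[20, 5] = 1.0                                   # two "particles"
b = np.roll(np.roll(a, 3, axis=0), -2, axis=1)   # moved by (+3, -2) pixels
print(displacement(a, b))
```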
Haker, Steven; Wells, William M; Warfield, Simon K; Talos, Ion-Florin; Bhagwat, Jui G; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H
2005-01-01
In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging. PMID:16685884
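A sketch of the stated strategy: under conditional independence, the likelihood ratio of a pair of classifier results factorizes into a product, and sweeping a threshold on that product traces the combined ROC curve. The sensitivities and false-positive rates below are hypothetical:

```python
import numpy as np

def combined_roc(p1, q1, p2, q2):
    """Combine two conditionally independent binary classifiers.
    p_i = sensitivity (TPR), q_i = false-positive rate of classifier i."""
    outcomes = []
    for r1 in (0, 1):
        for r2 in (0, 1):
            pd = (p1 if r1 else 1 - p1) * (p2 if r2 else 1 - p2)  # P(pair | diseased)
            pn = (q1 if r1 else 1 - q1) * (q2 if r2 else 1 - q2)  # P(pair | healthy)
            outcomes.append((pd / pn, pd, pn))
    outcomes.sort(reverse=True)           # decreasing likelihood ratio
    tpr = np.cumsum([o[1] for o in outcomes])
    fpr = np.cumsum([o[2] for o in outcomes])
    return list(zip(fpr, tpr))            # one operating point per threshold

# Two moderately good, independent readers
roc = combined_roc(0.8, 0.2, 0.7, 0.3)
print(roc)
```

For these numbers the combined curve has a higher area under it than either reader alone, illustrating why the combination is worthwhile.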
DOE Office of Scientific and Technical Information (OSTI.GOV)
Häggström, Ida, E-mail: haeggsti@mskcc.org; Beattie, Bradley J.; Schmidtlein, C. Ross
2016-06-15
Purpose: To develop and evaluate a fast and simple tool called dPETSTEP (Dynamic PET Simulator of Tracers via Emission Projection), for dynamic PET simulations as an alternative to Monte Carlo (MC), useful for educational purposes and evaluation of the effects of the clinical environment, postprocessing choices, etc., on dynamic and parametric images. Methods: The tool was developed in MATLAB using both new and previously reported modules of PETSTEP (PET Simulator of Tracers via Emission Projection). Time activity curves are generated for each voxel of the input parametric image, whereby effects of imaging system blurring, counting noise, scatters, randoms, and attenuation are simulated for each frame. Each frame is then reconstructed into images according to the user specified method, settings, and corrections. Reconstructed images were compared to MC data, and simple Gaussian noised time activity curves (GAUSS). Results: dPETSTEP was 8000 times faster than MC. Dynamic images from dPETSTEP had a root mean square error that was within 4% on average of that of MC images, whereas the GAUSS images were within 11%. The average bias in dPETSTEP and MC images was the same, while GAUSS differed by 3% points. Noise profiles in dPETSTEP images conformed well to MC images, confirmed visually by scatter plot histograms, and statistically by tumor region of interest histogram comparisons that showed no significant differences (p < 0.01). Compared to GAUSS, dPETSTEP images and noise properties agreed better with MC. Conclusions: The authors have developed a fast and easy one-stop solution for simulations of dynamic PET and parametric images, and demonstrated that it generates both images and subsequent parametric images with very similar noise properties to those of MC images, in a fraction of the time. 
They believe dPETSTEP to be very useful for generating fast, simple, and realistic results; however, since it uses simple scatter and random models, it may not be suitable for studies investigating these phenomena. dPETSTEP can be downloaded free of cost from https://github.com/CRossSchmidtlein/dPETSTEP.
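Of the simulated effects, only the counting-noise step can be sketched compactly; the scaling of expected counts with activity and frame duration below is an assumed simple model for illustration, not dPETSTEP's actual implementation:

```python
import numpy as np

def noisy_tac(true_activity, frame_durations, sensitivity=200.0, seed=0):
    """Add counting noise to a noise-free time-activity curve: expected
    counts scale with activity x frame duration, then Poisson sampling."""
    rng = np.random.default_rng(seed)
    act = np.asarray(true_activity, dtype=float)
    dur = np.asarray(frame_durations, dtype=float)
    counts = rng.poisson(act * dur * sensitivity)
    return counts / (dur * sensitivity)   # convert back to activity units

tac = [0.0, 5.0, 8.0, 6.5, 5.0, 4.0]   # one voxel's kinetics (arbitrary units)
frames = [10, 10, 30, 60, 120, 300]    # frame durations in seconds
out = noisy_tac(tac, frames)
print(out)   # longer frames collect more counts, hence less relative noise
```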
Gao, Xiao; Deng, Xiao; Wen, Xin; She, Ying; Vinke, Petra Corianne; Chen, Hong
2016-01-01
Body image distress or body dissatisfaction is one of the most common consequences of obesity and overweight. We investigated the neural bases of body image processing in overweight and average weight young women to understand whether brain regions that were previously found to be involved in processing self-reflective, perspective and affective components of body image would show different activation between two groups. Thirteen overweight (O-W group, age = 20.31±1.70 years) and thirteen average weight (A-W group, age = 20.15±1.62 years) young women underwent functional magnetic resonance imaging while performing a body image self-reflection task. Among both groups, whole-brain analysis revealed activations of a brain network related to perceptive and affective components of body image processing. ROI analysis showed a main effect of group in ACC as well as a group by condition interaction within bilateral EBA, bilateral FBA, right IPL, bilateral DLPFC, left amygdala and left MPFC. For the A-W group, simple effect analysis revealed stronger activations in Thin-Control compared to Fat-Control condition within regions related to perceptive (including bilateral EBA, bilateral FBA, right IPL) and affective components of body image processing (including bilateral DLPFC, left amygdala), as well as self-reference (left MPFC). The O-W group only showed stronger activations in Fat-Control than in Thin-Control condition within regions related to the perceptive component of body image processing (including left EBA and left FBA). Path analysis showed that in the Fat-Thin contrast, body dissatisfaction completely mediated the group difference in brain response in left amygdala across the whole sample. Our data are the first to demonstrate differences in brain response to body pictures between average weight and overweight young females involved in a body image self-reflection task. 
These results provide insights for understanding the vulnerability to body image distress among overweight or obese young females. PMID:27764116
High-Throughput Density Measurement Using Magnetic Levitation.
Ge, Shencheng; Wang, Yunzhe; Deshler, Nicolas J; Preston, Daniel J; Whitesides, George M
2018-06-20
This work describes the development of an integrated analytical system that enables high-throughput density measurements of diamagnetic particles (including cells) using magnetic levitation (MagLev), 96-well plates, and a flatbed scanner. MagLev is a simple and useful technique with which to carry out density-based analysis and separation of a broad range of diamagnetic materials with different physical forms (e.g., liquids, solids, gels, pastes, gums, etc.); one major limitation, however, is the capacity to perform high-throughput density measurements. This work addresses this limitation by (i) re-engineering the shape of the magnetic fields so that the MagLev system is compatible with 96-well plates, and (ii) integrating a flatbed scanner (and simple optical components) to carry out imaging of the samples that levitate in the system. The resulting system is compatible with both biological samples (human erythrocytes) and nonbiological samples (simple liquids and solids, such as 3-chlorotoluene, cholesterol crystals, glass beads, copper powder, and polymer beads). The high-throughput capacity of this integrated MagLev system will enable new applications in chemistry (e.g., analysis and separation of materials) and biochemistry (e.g., cellular responses under environmental stresses) in a simple and label-free format on the basis of a universal property of all matter, i.e., density.
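The standard first-order MagLev result is that the equilibrium levitation height of a diamagnetic particle varies linearly with its density, so two standards of known density calibrate the whole measurable range; the bead densities and heights below are hypothetical:

```python
def density_from_height(h, h_low, h_high, rho_low, rho_high):
    """Linear MagLev calibration: levitation height varies (to first
    order) linearly with density between two standards of known density."""
    frac = (h - h_low) / (h_high - h_low)
    return rho_low + frac * (rho_high - rho_low)

# Hypothetical calibration beads: 1.05 g/mL levitates at 5 mm, 1.10 at 25 mm
rho = density_from_height(15.0, 5.0, 25.0, 1.05, 1.10)
print(rho)   # a particle levitating midway has the midpoint density
```

In the high-throughput system described above, the levitation heights would be read per well from the flatbed-scanner image before applying such a calibration.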
Principal component analysis of dynamic fluorescence images for diagnosis of diabetic vasculopathy
NASA Astrophysics Data System (ADS)
Seo, Jihye; An, Yuri; Lee, Jungsul; Ku, Taeyun; Kang, Yujung; Ahn, Chulwoo; Choi, Chulhee
2016-04-01
Indocyanine green (ICG) fluorescence imaging has been clinically used for noninvasive visualizations of vascular structures. We have previously developed a diagnostic system based on dynamic ICG fluorescence imaging for sensitive detection of vascular disorders. However, because high-dimensional raw data were used, the analysis of the ICG dynamics proved difficult. We used principal component analysis (PCA) in this study to extract important elements without significant loss of information. We examined ICG spatiotemporal profiles and identified critical features related to vascular disorders. PCA time courses of the first three components showed a distinct pattern in diabetic patients. Among the major components, the second principal component (PC2) represented arterial-like features. The explained variance of PC2 in diabetic patients was significantly lower than in normal controls. To visualize the spatial pattern of PCs, pixels were mapped with red, green, and blue channels. The PC2 score showed an inverse pattern between normal controls and diabetic patients. We propose that PC2 can be used as a representative bioimaging marker for the screening of vascular diseases. It may also be useful in simple extractions of arterial-like features.
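The PCA decomposition of a dynamic image sequence described above can be sketched in NumPy (a minimal illustration, not the authors' implementation; the frame shape, per-pixel centering, and use of three components are assumptions):

```python
import numpy as np

def pca_time_courses(frames, n_components=3):
    """Decompose a dynamic image sequence (time x height x width) into
    principal components via SVD on the mean-centred pixel time series."""
    t, h, w = frames.shape
    X = frames.reshape(t, h * w).astype(float)
    X -= X.mean(axis=0, keepdims=True)          # centre each pixel's time series
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    time_courses = U[:, :n_components] * s[:n_components]        # temporal patterns
    score_maps = Vt[:n_components].reshape(n_components, h, w)   # spatial score maps
    explained = s ** 2 / np.sum(s ** 2)          # explained-variance ratios
    return time_courses, score_maps, explained[:n_components]

# Synthetic example: 100 frames of a 16x16 image sharing one dominant dynamic
rng = np.random.default_rng(0)
signal = 5.0 * np.sin(np.linspace(0, 6, 100))    # shared temporal dynamic
frames = rng.normal(size=(100, 16, 16)) + signal[:, None, None]
tc, maps, ev = pca_time_courses(frames)
```

The per-component explained-variance ratios correspond to the quantity the abstract compares between diabetic patients and controls, and the score maps to the red/green/blue channel visualization.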
Efficient Workflows for Curation of Heterogeneous Data Supporting Modeling of U-Nb Alloy Aging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Logan Timothy; Hackenberg, Robert Errol
These are slides from a presentation summarizing a graduate research associate's summer project. The following topics are covered in these slides: data challenges in materials, aging in U-Nb Alloys, Building an Aging Model, Different Phase Trans. in U-Nb, the Challenge, Storing Materials Data, Example Data Source, Organizing Data: What is a Schema?, What does a "XML Schema" look like?, Our Data Schema: Nice and Simple, Storing Data: Materials Data Curation System (MDCS), Problem with MDCS: Slow Data Entry, Getting Literature into MDCS, Staging Data in Excel Document, Final Result: MDCS Records, Analyzing Image Data, Process for Making TTT Diagram, Bottleneck Number 1: Image Analysis, Fitting a TTP Boundary, Fitting a TTP Curve: Comparable Results, How Does it Compare to Our Data?, Image Analysis Workflow, Curating Hardness Records, Hardness Data: Two Key Decisions, Before Peak Age? - Automation, Interactive Viz, Which Transformation?, Microstructure-Informed Model, Tracking the Entire Process, General Problem with Property Models, Pinyon: Toolkit for Managing Model Creation, Tracking Individual Decisions, Jupyter: Docs and Code in One File, Hardness Analysis Workflow, Workflow for Aging Models, and conclusions.
Berardi, Alberto; Bisharat, Lorina; Blaibleh, Anaheed; Pavoni, Lucia; Cespi, Marco
2018-06-20
Tablet disintegration is often the result of a size expansion of the tablet. In this study, we quantified the extent and direction of size expansion of tablets during disintegration using readily available techniques, i.e. a digital camera and public-domain image analysis software. After validating the method, the influence of disintegrant concentration and diluent type on the kinetics and mechanisms of disintegration was studied. Tablets containing diluent, disintegrant (sodium starch glycolate-SSG, crospovidone-PVPP or croscarmellose sodium-CCS) and lubricant were prepared by direct compression. The projected area and aspect ratio of the tablets were monitored using image analysis techniques. The developed method could describe the kinetics and mechanisms of disintegration qualitatively and quantitatively. SSG and PVPP acted purely by swelling and shape recovery mechanisms. CCS, in contrast, worked by a combination of both mechanisms, the extent of which changed depending on its concentration and the diluent type. We anticipate that the method described here could provide a framework for the routine screening of tablet disintegration using readily available equipment. Copyright © 2018. Published by Elsevier Inc.
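The projected-area and aspect-ratio measurements described above can be sketched as follows (a hypothetical illustration with a fixed intensity threshold; the study itself used a digital camera and public-domain software, and this is not its code):

```python
import numpy as np

def tablet_metrics(gray, threshold=128):
    """Projected area (pixel count) and aspect ratio (width/height of the
    bounding box) of the dark tablet silhouette in a grayscale frame."""
    mask = gray < threshold                    # tablet darker than background
    ys, xs = np.nonzero(mask)
    area = int(mask.sum())
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return area, width / height

# Synthetic frame: a 20 x 40 dark rectangle standing in for a tablet seen side-on
frame = np.full((100, 100), 255, dtype=np.uint8)
frame[40:60, 30:70] = 0
area, ar = tablet_metrics(frame)
```

Tracking these two numbers frame-by-frame over a video of the disintegrating tablet yields the expansion kinetics (area vs. time) and direction (aspect ratio drifting above or below its initial value).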
Wright, Alexander I.; Magee, Derek R.; Quirke, Philip; Treanor, Darren E.
2015-01-01
Background: Obtaining ground truth for pathological images is essential for various experiments, especially for training and testing image analysis algorithms. However, obtaining pathologist input is often difficult, time consuming and expensive. This leads to algorithms being over-fitted to small datasets, and inappropriate validation, which causes poor performance on real world data. There is a great need to gather data from pathologists in a simple and efficient manner, in order to maximise the amount of data obtained. Methods: We present a lightweight, web-based HTML5 system for administering and participating in data collection experiments. The system is designed for rapid input with minimal effort, and can be accessed from anywhere in the world with a reliable internet connection. Results: We present two case studies that use the system to assess how limitations on fields of view affect pathologist agreement, and to what extent poorly stained slides affect judgement. In both cases, the system collects pathologist scores at a rate of less than two seconds per image. Conclusions: The system has multiple potential applications in pathology and other domains. PMID:26110089
Robustifying blind image deblurring methods by simple filters
NASA Astrophysics Data System (ADS)
Liu, Yan; Zeng, Xiangrong; Huangpeng, Qizi; Fan, Jun; Zhou, Jinglun; Feng, Jing
2016-07-01
The state-of-the-art blind image deblurring (BID) methods are sensitive to noise, and most of them can deal with only small levels of Gaussian noise. In this paper, we use simple filters to build a robust BID framework that can robustify existing BID methods against high-level Gaussian noise and/or non-Gaussian noise. Experiments on images corrupted by Gaussian noise, impulse noise (salt-and-pepper noise and random-valued noise) and mixed Gaussian-impulse noise, as well as on a real-world blurry and noisy image, show that the proposed method estimates sharper kernels and better images, and does so faster, than other methods.
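The abstract does not name the specific filters used; one common robustification step for impulse noise of the kind tested above is a median pre-filter applied before kernel estimation. A minimal NumPy sketch of a 3x3 median filter (an illustrative assumption, not the paper's method):

```python
import numpy as np

def median3x3(img):
    """3x3 median filter (edge-replicated padding): a simple pre-filter that
    suppresses salt-and-pepper noise before blur-kernel estimation."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    # Stack the nine shifted views and take the per-pixel median
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

# A flat image corrupted by a single salt pixel is fully restored
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0                      # impulse (salt) noise
clean = median3x3(img)
```

A BID method would then estimate its blur kernel from the pre-filtered image while performing the final non-blind deconvolution on the original noisy observation.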
Content-based image retrieval for interstitial lung diseases using classification confidence
NASA Astrophysics Data System (ADS)
Dash, Jatindra Kumar; Mukhopadhyay, Sudipta; Prabhakar, Nidhi; Garg, Mandeep; Khandelwal, Niranjan
2013-02-01
A Content-Based Image Retrieval (CBIR) system could exploit the wealth of High-Resolution Computed Tomography (HRCT) data stored in the archive by finding similar images, assisting radiologists in self-learning and in the differential diagnosis of Interstitial Lung Diseases (ILDs). HRCT findings of ILDs are classified into several categories (e.g. consolidation, emphysema, ground glass, nodular) based on their texture-like appearance. Therefore, the analysis of ILDs is considered a texture analysis problem. Many approaches have been proposed for CBIR of lung images using texture as the primitive visual content. This paper presents a new approach to CBIR for ILDs. The proposed approach makes use of a trained neural network (NN) to find the output class label of the query image. The degree of confidence of the NN classifier is analyzed using a naive Bayes classifier that dynamically decides the size of the search space to be used for retrieval. The proposed approach is compared with three simple distance-based and one classifier-based texture retrieval approaches. Experimental results show that the proposed technique achieved the highest average precision, 92.60%, with the lowest standard deviation, 20.82%.
Super-resolution method for face recognition using nonlinear mappings on coherent features.
Huang, Hua; He, Huiting
2011-01-01
The low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition rates with nearest neighbor (NN) classifiers for a single LR face image. Canonical correlation analysis is applied to establish the coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. Then, a nonlinear mapping between HR/LR features can be built by radial basis functions (RBFs) with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can compute super-resolved coherent features corresponding to an input LR image efficiently and accurately from the trained RBF model. Face identity can then be obtained by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the Facial Recognition Technology, University of Manchester Institute of Science and Technology, and Olivetti Research Laboratory databases show that the proposed method outperforms the state-of-the-art face recognition algorithms for a single LR image in terms of both recognition rate and robustness to facial variations of pose and expression.
NEFI: Network Extraction From Images
Dirnberger, M.; Kehl, T.; Neumann, A.
2015-01-01
Networks are amongst the central building blocks of many systems. Given a graph of a network, methods from graph theory enable a precise investigation of its properties. Software for the analysis of graphs is widely available and has been applied to study various types of networks. In some applications, graph acquisition is relatively simple. However, for many networks data collection relies on images where graph extraction requires domain-specific solutions. Here we introduce NEFI, a tool that extracts graphs from images of networks originating in various domains. Regarding previous work on graph extraction, theoretical results are fully accessible only to an expert audience and ready-to-use implementations for non-experts are rarely available or insufficiently documented. NEFI provides a novel platform allowing practitioners to easily extract graphs from images by combining basic tools from image processing, computer vision and graph theory. Thus, NEFI constitutes an alternative to tedious manual graph extraction and special purpose tools. We anticipate NEFI to enable time-efficient collection of large datasets. The analysis of these novel datasets may open up the possibility to gain new insights into the structure and function of various networks. NEFI is open source and available at http://nefi.mpi-inf.mpg.de. PMID:26521675
Spatial Statistics for Tumor Cell Counting and Classification
NASA Astrophysics Data System (ADS)
Wirjadi, Oliver; Kim, Yoo-Jin; Breuel, Thomas
Counting and classifying cells in histological sections is a standard task in histology. One example is the grading of meningiomas, benign tumors of the meninges, which requires assessing the fraction of proliferating cells in an image. As this process is very time consuming when performed manually, automation is required. To address such problems, we propose a novel application of Markov point process methods in computer vision, leading to algorithms for computing the locations of circular objects in images. In contrast to previous algorithms using such spatial statistics methods in image analysis, the present one is fully trainable. This is achieved by combining point process methods with statistical classifiers. Using simulated data, the method proposed in this paper is shown to be more accurate and more robust to noise than standard image processing methods. On the publicly available SIMCEP benchmark for cell image analysis algorithms, the cell-count performance of the proposed method is significantly more accurate than results published elsewhere, especially when cells form dense clusters. Furthermore, the proposed system performs as well as a state-of-the-art algorithm for the computer-aided histological grading of meningiomas when combined with a simple k-nearest neighbor classifier for identifying proliferating cells.
Kuswandi, Bambang; Irmawati, Titi; Hidayat, Moch Amrun; Jayus; Ahmad, Musa
2014-01-27
A simple visual ethanol biosensor based on alcohol oxidase (AOX) immobilised onto polyaniline (PANI) film for halal verification of fermented beverage samples is described. This biosensor responds to ethanol via a colour change from green to blue, due to the enzymatic oxidation of ethanol, which produces acetaldehyde and hydrogen peroxide; the latter oxidizes the PANI film. The procedure to obtain this biosensor consists of the immobilization of AOX onto PANI film by adsorption. For the immobilisation, an AOX solution is deposited on the PANI film and left at room temperature until dried (30 min). The biosensor was constructed as a dip stick for visual and simple use. The colour changes of the films were scanned and analysed using image analysis software (i.e., ImageJ) to study the characteristics of the biosensor's response toward ethanol. The biosensor has a linear response in an ethanol concentration range of 0.01%-0.8%, with a correlation coefficient (r) of 0.996. The limit of detection of the biosensor was 0.001%, with a reproducibility (RSD) of 1.6% and a lifetime of up to seven weeks when stored at 4 °C. The biosensor provides accurate results for ethanol determination in fermented drinks, in good agreement with the standard method (gas chromatography). Thus, the biosensor could be used as a simple visual method for ethanol determination in fermented beverage samples, which can be useful to the Muslim community for halal verification.
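The colour-change readout and linear calibration described above might be sketched as follows (illustrative only: the blue/green channel ratio, the calibration values, and the function names are assumptions; the study itself used ImageJ):

```python
import numpy as np

def mean_bg_ratio(rgb_patch):
    """Mean blue/green channel ratio of a scanned sensor patch
    (H x W x 3 array); the ratio rises as the PANI film turns
    from green to blue."""
    means = rgb_patch.reshape(-1, 3).mean(axis=0)
    return means[2] / means[1]

def fit_calibration(concentrations, responses):
    """Least-squares calibration line: response = a * concentration + b."""
    a, b = np.polyfit(concentrations, responses, 1)
    return a, b

# Colour metric on a synthetic green-dominant patch
patch = np.full((4, 4, 3), 120.0)
patch[..., 2] = 60.0                  # blue channel half of green
r = mean_bg_ratio(patch)

# Hypothetical calibration points in the sensor's linear range (0.01-0.8 % ethanol)
conc = np.array([0.01, 0.2, 0.4, 0.6, 0.8])
resp = 0.5 + 0.95 * conc              # illustrative, exactly linear responses
a, b = fit_calibration(conc, resp)
```

An unknown sample's ethanol concentration would then be read off the fitted line as `(mean_bg_ratio(sample) - b) / a`.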
A simple and low-cost biofilm quantification method using LED and CMOS image sensor.
Kwak, Yeon Hwa; Lee, Junhee; Lee, Junghoon; Kwak, Soo Hwan; Oh, Sangwoo; Paek, Se-Hwan; Ha, Un-Hwan; Seo, Sungkyu
2014-12-01
A novel biofilm detection platform, consisting of a cost-effective red, green, and blue light-emitting diode (RGB LED) as a light source and a lens-free CMOS image sensor as a detector, is designed. This system can measure the diffraction patterns of cells from their shadow images and gather light absorbance information according to the concentration of biofilms through a simple image processing procedure. Compared to a bulky and expensive commercial spectrophotometer, this platform provides accurate and reproducible biofilm concentration detection and is simple, compact, and inexpensive. Biofilms originating from various bacterial strains, including Pseudomonas aeruginosa (P. aeruginosa), were tested to demonstrate the efficacy of this new biofilm detection approach, and the results were compared with those obtained from a commercial spectrophotometer. To utilize a cost-effective light source (i.e., an LED) for biofilm detection, the illumination conditions were optimized. For accurate and reproducible biofilm detection, a simple, custom-coded image processing algorithm was developed and applied to a five-megapixel CMOS image sensor, which is a cost-effective detector. The concentration of biofilms formed by P. aeruginosa was detected and quantified by varying the indole concentration. The correlation between the results from the two systems was 0.981 (N = 9, P < 0.01), and the coefficients of variation (CVs) were approximately threefold lower for the CMOS image-sensor platform. Copyright © 2014 Elsevier B.V. All rights reserved.
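The light-absorbance information mentioned above is conventionally derived from transmitted intensity via the Beer-Lambert relation A = -log10(I/I0); a minimal sketch (the use of a plain mean over the well image is an assumption, not the paper's custom algorithm):

```python
import numpy as np

def absorbance(sample_img, blank_img):
    """Absorbance estimate A = -log10(I / I0) from the mean pixel
    intensity of a sample well (I) and a blank well (I0)."""
    i = float(np.mean(sample_img))
    i0 = float(np.mean(blank_img))
    return -np.log10(i / i0)

# A well transmitting 10% of the blank's light has absorbance 1.0
blank = np.full((8, 8), 200.0)
sample = np.full((8, 8), 20.0)
A = absorbance(sample, blank)
```

Comparing such image-derived absorbances against a spectrophotometer's OD readings is what yields the correlation figure the abstract reports.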
Passive Markers for Tracking Surgical Instruments in Real-Time 3-D Ultrasound Imaging
Stoll, Jeffrey; Ren, Hongliang; Dupont, Pierre E.
2013-01-01
A family of passive echogenic markers is presented by which the position and orientation of a surgical instrument can be determined in a 3-D ultrasound volume, using simple image processing. Markers are attached near the distal end of the instrument so that they appear in the ultrasound volume along with the instrument tip. They are detected and measured within the ultrasound image, thus requiring no external tracking device. This approach facilitates imaging instruments and tissue simultaneously in ultrasound-guided interventions. Marker-based estimates of instrument pose can be used in augmented reality displays or for image-based servoing. Design principles for marker shapes are presented that ensure imaging system and measurement uniqueness constraints are met. An error analysis is included that can be used to guide marker design and which also establishes a lower bound on measurement uncertainty. Finally, examples of marker measurement and tracking algorithms are presented along with experimental validation of the concepts. PMID:22042148
Large deformation image classification using generalized locality-constrained linear coding.
Zhang, Pei; Wee, Chong-Yaw; Niethammer, Marc; Shen, Dinggang; Yap, Pew-Thian
2013-01-01
Magnetic resonance (MR) imaging has been demonstrated to be very useful for the clinical diagnosis of Alzheimer's disease (AD). A common approach to using MR images for AD detection is to spatially normalize the images by non-rigid image registration and then perform statistical analysis on the resulting deformation fields. Due to the high nonlinearity of the deformation field, recent studies suggest using the initial momentum instead, as it lies in a linear space and fully encodes the deformation field. In this paper we explore the use of the initial momentum for image classification by focusing on the problem of AD detection. Experiments on the public ADNI dataset show that the initial momentum, together with a simple sparse coding technique, locality-constrained linear coding (LLC), can achieve a classification accuracy that is comparable to or even better than the state of the art. We also show that the performance of LLC can be greatly improved by introducing proper weights to the codebook.
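A compact sketch of LLC encoding in the approximate analytical form commonly described in the literature (solving a small regularized linear system over the k nearest codebook atoms); the codebook size, feature dimension, and regularizer here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def llc_encode(x, codebook, k=5, lam=1e-4):
    """Locality-constrained linear coding (approximate form): encode a
    descriptor x as a sparse set of weights over its k nearest codebook
    atoms, with weights constrained to sum to one."""
    d2 = np.sum((codebook - x) ** 2, axis=1)       # squared distances to atoms
    idx = np.argsort(d2)[:k]                       # k nearest neighbours
    z = codebook[idx] - x                          # shifted local bases
    C = z @ z.T                                    # local covariance
    C += lam * np.trace(C) * np.eye(k)             # regularize for stability
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                                   # enforce sum-to-one constraint
    code = np.zeros(len(codebook))
    code[idx] = w
    return code

rng = np.random.default_rng(1)
codebook = rng.normal(size=(64, 16))               # 64 atoms, 16-dim features
x = rng.normal(size=16)
code = llc_encode(x, codebook)
```

The resulting sparse codes (here over momentum-derived features) can be pooled and fed to a linear classifier; the locality constraint is what keeps the coding both sparse and smooth across similar inputs.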
Semiautomated spleen volumetry with diffusion-weighted MR imaging.
Lee, Jeongjin; Kim, Kyoung Won; Lee, Ho; Lee, So Jung; Choi, Sanghyun; Jeong, Woo Kyoung; Kye, Heewon; Song, Gi-Won; Hwang, Shin; Lee, Sung-Gyu
2012-07-01
In this article, we determined the relative accuracy of semiautomated spleen volumetry with diffusion-weighted (DW) MR images compared to standard manual volumetry with DW-MR or CT images. Semiautomated spleen volumetry using simple thresholding followed by 3D and 2D connected component analysis was performed with DW-MR images. Manual spleen volumetry was performed on DW-MR and CT images. In this study, 35 potential live liver donor candidates were included. Semiautomated volumetry results were highly correlated with manual volumetry results using DW-MR (r = 0.99; P < 0.0001; mean percentage absolute difference, 1.43 ± 0.94) and CT (r = 0.99; P < 0.0001; 1.76 ± 1.07). Mean total processing time for semiautomated volumetry was significantly shorter compared to that of manual volumetry with DW-MR (P < 0.0001) and CT (P < 0.0001). In conclusion, semiautomated spleen volumetry with DW-MR images can be performed rapidly and accurately when compared with standard manual volumetry. Copyright © 2011 Wiley Periodicals, Inc.
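The "simple thresholding followed by connected component analysis" step above can be sketched as a toy 3-D flood fill (an illustrative reimplementation with an assumed threshold, not the authors' pipeline):

```python
import numpy as np
from collections import deque

def largest_bright_region(volume, threshold):
    """Simple thresholding followed by 3-D connected-component analysis
    (6-connectivity, BFS flood fill); returns the mask and voxel count of
    the largest bright component. On DW-MR images the spleen is markedly
    hyperintense, so the largest bright blob is a spleen candidate."""
    mask = volume > threshold
    visited = np.zeros(mask.shape, dtype=bool)
    best = []
    for seed in zip(*np.nonzero(mask)):
        if visited[seed]:
            continue
        comp, queue = [], deque([seed])
        visited[seed] = True
        while queue:
            z, y, x = queue.popleft()
            comp.append((z, y, x))
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < mask.shape[i] for i in range(3)) \
                        and mask[n] and not visited[n]:
                    visited[n] = True
                    queue.append(n)
        if len(comp) > len(best):
            best = comp
    region = np.zeros(mask.shape, dtype=bool)
    for v in best:
        region[v] = True
    return region, len(best)

# Toy volume: one large bright blob (the "spleen") plus a small distractor
vol = np.zeros((20, 20, 20))
vol[2:10, 2:10, 2:10] = 100      # 512-voxel blob
vol[15:17, 15:17, 15:17] = 100   # 8-voxel distractor
mask, voxels = largest_bright_region(vol, 50)
```

In practice a library routine such as `scipy.ndimage.label` would replace the hand-rolled BFS, and the voxel count would be multiplied by the voxel volume (from the DICOM header) to obtain millilitres.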
Influence of long-range Coulomb interaction in velocity map imaging.
Barillot, T; Brédy, R; Celep, G; Cohen, S; Compagnon, I; Concina, B; Constant, E; Danakas, S; Kalaitzis, P; Karras, G; Lépine, F; Loriot, V; Marciniak, A; Predelus-Renois, G; Schindler, B; Bordas, C
2017-07-07
The standard velocity-map imaging (VMI) analysis relies on the simple approximation that the residual Coulomb field experienced by the photoelectron ejected from a neutral or ion system may be neglected. Under this almost universal approximation, the photoelectrons follow ballistic (parabolic) trajectories in the externally applied electric field, and the recorded image may be considered as a 2D projection of the initial photoelectron velocity distribution. There are, however, several circumstances where this approximation is not justified and the influence of long-range forces must absolutely be taken into account for the interpretation and analysis of the recorded images. The aim of this paper is to illustrate this influence by discussing two different situations involving isolated atoms or molecules where the analysis of experimental images cannot be performed without considering long-range Coulomb interactions. The first situation occurs when slow (meV) photoelectrons are photoionized from a neutral system and strongly interact with the attractive Coulomb potential of the residual ion. The result of this interaction is the formation of a more complex structure in the image, as well as the appearance of an intense glory at the center of the image. The second situation, observed also at low energy, occurs in the photodetachment from a multiply charged anion and it is characterized by the presence of a long-range repulsive potential. Then, while the standard VMI approximation is still valid, the very specific features exhibited by the recorded images can be explained only by taking into consideration tunnel detachment through the repulsive Coulomb barrier.
0.8 V nanogenerator for mechanical energy harvesting using bismuth titanate-PDMS nanocomposite
NASA Astrophysics Data System (ADS)
Abinnas, N.; Baskaran, P.; Harish, S.; Ganesh, R. Sankar; Navaneethan, M.; Nisha, K. D.; Ponnusamy, S.; Muthamizhchelvan, C.; Ikeda, H.; Hayakawa, Y.
2017-10-01
We present a novel, low-cost approach to fabricating piezoelectric nanogenerators using a bismuth titanate (BiT)/polydimethylsiloxane (PDMS) nanocomposite. The nanogenerator has the advantage of a simple fabrication process and is eco-friendly. This simple device was fabricated to harvest the energy released by finger tapping, and it generated an output of 0.8 V. The BiT samples were synthesized by a wet chemical method. The structural, dielectric and ferroelectric properties of the samples were analyzed. Phase analysis using X-ray diffraction indicated that the phase structure was orthorhombic. The FESEM images of the sample calcined at 700 °C exhibited sheet-like morphology. Further characterization was carried out using XPS, Raman spectroscopy, and TEM.
Suh, Sungho; Itoh, Shinya; Aoyama, Satoshi; Kawahito, Shoji
2010-01-01
For low-noise complementary metal-oxide-semiconductor (CMOS) image sensors, the reduction of pixel source-follower noise is becoming very important. Column-parallel high-gain readout circuits are useful for low-noise CMOS image sensors. This paper presents column-parallel high-gain signal readout circuits, correlated multiple sampling (CMS) circuits, and their noise reduction effects. In CMS, the gain of the noise cancelling is controlled by the number of samplings. It has an effect similar to that of an amplified CDS for thermal noise but is a little more effective for 1/f and RTS noises. Two types of CMS, with simple integration and folding integration, are proposed. In the folding integration, the output signal swing is suppressed by negative feedback using a comparator and a one-bit D-to-A converter. The CMS circuit using the folding integration technique makes it possible to realize a very low noise level while maintaining a wide dynamic range. The noise reduction effects of these circuits have been investigated with a noise analysis and an implementation of a 1-Mpixel pinned-photodiode CMOS image sensor. Using 16 samplings, a dynamic range of 59.4 dB and a noise level of 1.9 e(-) are obtained for the simple integration CMS, and 75 dB and 2.2 e(-) for the folding integration CMS.
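The noise-reduction effect of multiple sampling can be illustrated with an idealized simulation: averaging M reads of the reset and signal levels suppresses thermal noise roughly by sqrt(M). This toy model ignores 1/f and RTS noise, for which the paper reports CMS is slightly more effective than an amplified CDS:

```python
import numpy as np

def cms_readout(signal, noise_sigma, m, rng):
    """Idealized correlated multiple sampling: average m noisy reads of
    the reset level and m of the signal level, then take the difference."""
    reset = rng.normal(0.0, noise_sigma, size=m).mean()
    sig = rng.normal(signal, noise_sigma, size=m).mean()
    return sig - reset

rng = np.random.default_rng(42)
sigma, trials = 1.0, 20000
single = np.std([cms_readout(0.0, sigma, 1, rng) for _ in range(trials)])
cms16 = np.std([cms_readout(0.0, sigma, 16, rng) for _ in range(trials)])
# Thermal-noise std should drop by roughly sqrt(16) = 4 vs. single sampling
ratio = single / cms16
```

The sqrt(M) scaling only holds for white (thermal) noise; correlated noise sources such as 1/f flicker average down more slowly, which is why the measured 16-sample noise figures in the paper do not improve by a full factor of four.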
Verifying Air Force Weather Passive Satellite Derived Cloud Analysis Products
NASA Astrophysics Data System (ADS)
Nobis, T. E.
2017-12-01
Air Force Weather (AFW) has developed an hourly World-Wide Merged Cloud Analysis (WWMCA) using imager data from 16 geostationary and polar-orbiting satellites. The analysis product contains information on cloud fraction, height, type and various optical properties including optical depth and integrated water path. All of these products are derived using a suite of algorithms which rely exclusively on passively sensed data from short-, mid- and long-wave imagers. The system integrates satellites with a wide range of capabilities, from the relatively simple two-channel OLS imager to the 16-channel ABI/AHI, to create a seamless global analysis in real time. Over the last couple of years, AFW has started utilizing independent verification data from actively sensed cloud measurements to better understand the performance limitations of the WWMCA. Sources utilized include space-based lidars (CALIPSO, CATS) and radar (CloudSat) as well as ground-based lidars from the Department of Energy ARM sites and several European cloud radars. This work will present findings from our efforts to compare actively and passively sensed cloud information, including comparison techniques and limitations as well as the performance of the passively derived cloud information against the active measurements.
On-orbit point spread function estimation for THEOS imaging system
NASA Astrophysics Data System (ADS)
Khetkeeree, Suphongsa; Liangrocapart, Sompong
2018-03-01
In this paper, we present two approaches for net point spread function (net-PSF) estimation of the Thailand Earth Observation System (THEOS) imaging system. In the first approach, we estimate the net-PSF from the specification information of the satellite. The analytic model of the net-PSF is based on a simple model of a push-broom imaging system consisting of a scanner, optical system, detector and electronics system. The mathematical PSF model of each component is given in the spatial domain. In the second approach, images of specific targets from the THEOS imaging system are analyzed to determine the net-PSF. For the panchromatic imaging system, images of the checkerboard target at Salon-de-Provence airport are analyzed with the slant-edge method. For the multispectral imaging system, new man-made targets are proposed: pier bridges in Lamchabang, Chonburi, Thailand, a site with many bridges of various widths and orientations. The pulse method is used to analyze the images of these bridges to estimate the net-PSF. Finally, the full widths at half maximum (FWHMs) of the net-PSFs from the two approaches are compared. The results show that the two approaches agree, and all modulation transfer functions (MTFs) at the Nyquist frequency are better than the requirement. However, the FWHM of the multispectral system deviates more than that of the panchromatic system, because the targets were not specially constructed for estimating the characteristics of the satellite imaging system.
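The FWHM comparison above presumes extracting widths from sampled PSF or line-spread profiles; a minimal sketch of FWHM estimation by interpolating the half-maximum crossings (the Gaussian test profile is an assumption for illustration):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled, single-peaked profile,
    using linear interpolation at the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.nonzero(y >= half)[0]
    l, r = above[0], above[-1]
    # Interpolate the left crossing between l-1 and l, right between r and r+1
    xl = np.interp(half, [y[l - 1], y[l]], [x[l - 1], x[l]])
    xr = np.interp(half, [y[r + 1], y[r]], [x[r + 1], x[r]])
    return xr - xl

# Gaussian line-spread function: analytic FWHM = 2*sqrt(2*ln 2)*sigma
x = np.linspace(-10, 10, 2001)
sigma = 1.5
y = np.exp(-x ** 2 / (2 * sigma ** 2))
w = fwhm(x, y)
```

For a Gaussian PSF model, this FWHM also fixes the MTF at the Nyquist frequency, which is how a width measurement translates into the requirement check the abstract describes.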
A new multi-spectral feature level image fusion method for human interpretation
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-03-01
Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods, averaging and principal component analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
Uncooled thermal imaging and image analysis
NASA Astrophysics Data System (ADS)
Wang, Shiyun; Chang, Benkang; Yu, Chunyu; Zhang, Junju; Sun, Lianjun
2006-09-01
A thermal imager converts temperature differences into differences in electrical signal level, so it can be applied to medical tasks such as estimation of blood flow speed and vessel location [1], pain assessment [2] and so on. As un-cooled focal plane array (UFPA) technology matures, some simple medical functions can be performed with an un-cooled thermal imager, for example, rapid fever screening during SARS. This requires stable imaging performance and sufficiently high spatial and temperature resolution. Among all performance parameters, noise equivalent temperature difference (NETD) is often used as the criterion of overall performance. The 320 x 240 α-Si micro-bolometer UFPA is now widely applied for its stable performance and sensitive response. In this paper, the NETD of a UFPA and the relation between NETD and temperature are studied; several vital parameters that affect NETD are listed and a universal formula is presented. Finally, images from this kind of thermal imager are analyzed for the purpose of detecting persons with fever. An applied thermal image intensification method is introduced.
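NETD is conventionally estimated as the temporal noise divided by the responsivity, i.e. the signal change per kelvin of scene temperature; a minimal sketch with hypothetical figures (the two-temperature blackbody measurement and the numbers are illustrative assumptions, not the paper's formula):

```python
def netd(signal_t1, signal_t2, t1, t2, noise_std):
    """NETD estimate: rms temporal noise (in signal units) divided by the
    responsivity dS/dT measured between two blackbody temperatures."""
    responsivity = (signal_t2 - signal_t1) / (t2 - t1)   # signal units per K
    return noise_std / responsivity

# Hypothetical figures: 50 counts/K responsivity, 5 counts rms noise -> 0.1 K
value = netd(1000.0, 1500.0, 20.0, 30.0, 5.0)
```

A fever-screening application needs the NETD to be well below the roughly 1-2 K elevation it is trying to detect, which is why NETD serves as the headline figure of merit here.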
Lee, Dong-Hoon; Lee, Do-Wan; Henry, David; Park, Hae-Jin; Han, Bong-Soo; Woo, Dong-Cheol
2018-04-12
To evaluate the effects of signal intensity differences between the b0 image and diffusion tensor imaging (DTI) in the image registration process. To correct signal intensity differences between the b0 image and DTI data, a simple image intensity compensation (SIMIC) method, which is a b0 image re-calculation process from DTI data, was applied before the image registration. The re-calculated b0 image (b0_ext) from each diffusion direction was registered to the b0 image acquired through the MR scanning (b0_nd) with two types of cost functions, and their transformation matrices were acquired. These transformation matrices were then used to register the DTI data. For quantification, the dice similarity coefficient (DSC) values, diffusion scalar matrix, and quantified fibre numbers and lengths were calculated. The combined SIMIC method with two cost functions showed the highest DSC value (0.802 ± 0.007). Regarding diffusion scalar values and the numbers and lengths of fibres from the corpus callosum, superior longitudinal fasciculus, and cortico-spinal tract, only using normalised cross correlation (NCC) showed a specific tendency toward lower values in the brain regions. Image-based distortion correction with SIMIC for DTI data would help in image analysis by accounting for signal intensity differences as one additional option for DTI analysis. • We evaluated the effects of signal intensity differences at DTI registration. • The non-diffusion-weighted image re-calculation process from DTI data was applied. • SIMIC can minimise the signal intensity differences at DTI registration.
Superpixel Based Factor Analysis and Target Transformation Method for Martian Minerals Detection
NASA Astrophysics Data System (ADS)
Wu, X.; Zhang, X.; Lin, H.
2018-04-01
Factor analysis and target transformation (FATT) is an effective method to test for the presence of a particular mineral on the Martian surface. It has been used with both thermal-infrared (Thermal Emission Spectrometer, TES) and near-infrared (Compact Reconnaissance Imaging Spectrometer for Mars, CRISM) hyperspectral data. FATT derives a set of orthogonal eigenvectors from a mixed system and typically selects the first 10 eigenvectors for a least-squares fit to library mineral spectra. However, minerals present in only a few pixels may be missed because their spectral features are weak compared with the full image signature. Here, we propose a superpixel-based FATT method to detect mineral distributions on Mars. The simple linear iterative clustering (SLIC) algorithm is used to partition the CRISM image into multiple connected, spectrally homogeneous regions, enhancing weak signatures by increasing their proportion in the mixed system. A least-squares fit is used in the target transformation and applied to each region iteratively. Finally, the distribution of the specific mineral in the image is obtained, where a fitting residual below a threshold indicates presence and a larger residual indicates absence. We validate our method by identifying carbonates in a well-analysed CRISM image of Nili Fossae on Mars. Our experimental results indicate that the proposed method works well on both simulated and real data sets.
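The target-transformation step described above (a least-squares fit of a library spectrum in a truncated eigenvector basis, with a residual threshold deciding presence) can be sketched on synthetic spectra. The endmembers, noise level, and region size below are invented for illustration and are not CRISM values:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "image region": 50 mixed spectra over 30 bands, built from
# two Gaussian endmembers plus a little noise (all values hypothetical)
bands = np.linspace(0, 1, 30)
end_a = np.exp(-(bands - 0.3) ** 2 / 0.01)
end_b = np.exp(-(bands - 0.7) ** 2 / 0.02)
abund = rng.uniform(0, 1, (50, 2))
region = abund @ np.vstack([end_a, end_b]) + 0.001 * rng.normal(size=(50, 30))

# factor analysis: orthogonal eigenvectors of the mixed system,
# keeping the first k as the fitting basis
_, _, vt = np.linalg.svd(region - region.mean(0), full_matrices=False)
k = 10
basis = vt[:k].T  # (bands x k)

def fit_residual(target):
    """Least-squares fit of a library spectrum in the eigenvector basis;
    a small residual suggests the spectrum is present in the mixture."""
    centered = target - region.mean(0)
    coeffs, *_ = np.linalg.lstsq(basis, centered, rcond=None)
    return float(np.linalg.norm(basis @ coeffs - centered))

res_present = fit_residual(end_a)  # endmember actually in the mixture
res_absent = fit_residual(np.exp(-(bands - 0.5) ** 2 / 0.001))  # not present
```

Running FATT per SLIC superpixel, as proposed above, amounts to repeating this fit with `region` replaced by each spectrally homogeneous segment.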
Framework for SEM contour analysis
NASA Astrophysics Data System (ADS)
Schneider, L.; Farys, V.; Serret, E.; Fenouillet-Beranger, C.
2017-03-01
SEM images provide valuable information about patterning capability. Geometrical properties such as Critical Dimension (CD) can be extracted from them and are used to calibrate OPC models, thus making OPC more robust and reliable. However, there is currently a shortage of appropriate metrology tools to inspect complex two-dimensional patterns in the same way as one would work with simple one-dimensional patterns. In this article we present a full framework for the analysis of SEM images. It has been proven to be fast, reliable and robust for every type of structure, and particularly for two-dimensional structures. To achieve this result, several innovative solutions have been developed and will be presented in the following pages. Firstly, we will present a new noise filter which is used to reduce noise on SEM images, followed by an efficient topography identifier, and finally we will describe the use of a topological skeleton as a measurement tool that can extend CD measurements on all kinds of patterns.
Tolivia, Jorge; Navarro, Ana; del Valle, Eva; Perez, Cristina; Ordoñez, Cristina; Martínez, Eva
2006-02-01
To describe a simple method to achieve the differential selection and subsequent quantification of the signal strength using only one section. Several methods for performing quantitative histochemistry, immunocytochemistry, or hybridocytochemistry without specific commercial image analysis systems rely on pixel-counting algorithms, which do not provide information on the amount of chromogen present in the section. Other techniques use complex algorithms to calculate the cumulative signal strength using two consecutive sections. To separate the chromogen signal we used the "Color range" option of the Adobe Photoshop program, which provides a specific file for a particular chromogen selection that can be applied to similar sections. The measurement of the chromogen signal strength of the specific staining is achieved with the Scion Image software program. The method described in this paper can also be applied to the simultaneous detection of different signals on the same section, or to different parameters (area of particles, number of particles, etc.) when the "Analyze particles" tool of the Scion program is used.
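The "Color range"-style selection followed by a signal-strength measurement can be approximated outside Photoshop and Scion Image with a per-pixel colour-distance mask. The reference colour, tolerance, and optical-density definition below are assumptions for illustration, not the paper's calibration:

```python
import numpy as np

def chromogen_signal(image, ref_rgb, tol=40):
    """Select pixels within `tol` (Euclidean RGB distance) of a reference
    chromogen colour -- a stand-in for Photoshop's "Color range" -- and
    return the selected area and an integrated signal strength, taken
    here as the summed optical density of the selected pixels."""
    img = image.astype(float)
    dist = np.linalg.norm(img - np.asarray(ref_rgb, float), axis=-1)
    mask = dist <= tol
    # per-pixel optical density: -log10(mean channel intensity / 255)
    od = -np.log10(np.clip(img.mean(axis=-1), 1, 255) / 255.0)
    return mask.sum(), od[mask].sum()

# toy section: a brown DAB-like stain (hypothetical colour values)
# on a pale background
section = np.full((20, 20, 3), 230, dtype=np.uint8)
section[5:10, 5:10] = (120, 70, 40)  # stained region, 25 pixels
area, signal = chromogen_signal(section, ref_rgb=(120, 70, 40))
```

Unlike a plain pixel count (`area`), the optical-density sum (`signal`) carries information about how much chromogen each selected pixel holds, which is the distinction the abstract draws.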
Ibrahim, Reham S; Fathy, Hoda
2018-03-30
Tracking the impact of commonly applied post-harvesting and industrial processing practices on the compositional integrity of ginger rhizome was implemented in this work. Untargeted metabolite profiling was performed using a digitally-enhanced HPTLC method in which the chromatographic fingerprints were extracted using ImageJ software and then analysed with multivariate Principal Component Analysis (PCA) for pattern recognition. A targeted approach was applied using a new, validated, simple, and fast HPTLC image analysis method for simultaneous quantification of the officially recognized markers 6-, 8-, and 10-gingerol and 6-shogaol, in conjunction with chemometric Hierarchical Clustering Analysis (HCA). The results of both targeted and untargeted metabolite profiling revealed that the peeling, drying, and storage employed during processing have a great influence on the ginger chemo-profile; hence, the different forms of processed ginger should not be used interchangeably. Moreover, it is deemed necessary to consider the holistic metabolic profile for comprehensive evaluation of ginger during processing.
Siegel, Nisan; Storrie, Brian; Bruce, Marc; Brooker, Gary
2015-02-07
FINCH holographic fluorescence microscopy creates high resolution super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called "CINCH". An important feature of the combined system allows for the simultaneous real-time capture of widefield and holographic images, or of confocal and confocal holographic images, for ready comparison of each method on the exact same field of view. Additional GPU-based complex deconvolution processing of the images further enhances resolution.
Employee resourcing strategies and universities' corporate image: A survey dataset.
Falola, Hezekiah Olubusayo; Oludayo, Olumuyiwa Akinrole; Olokundun, Maxwell Ayodele; Salau, Odunayo Paul; Ibidunni, Ayodotun Stephen; Igbinoba, Ebe
2018-06-01
The data examined the effect of employee resourcing strategies on corporate image. The data were generated from a total of 500 copies of a questionnaire administered to the academic staff of six (6) selected private universities in Southwest Nigeria, of which four hundred and forty-three (443) were retrieved. Stratified and simple random sampling techniques were used to select the respondents for this study. Descriptive statistics and linear regression were used for the presentation of the data, with the mean score as the statistical tool of analysis. The data presented in this article are made available to facilitate further and more comprehensive investigation of the subject matter.
Analysis of the multigroup model for muon tomography based threat detection
NASA Astrophysics Data System (ADS)
Perry, J. O.; Bacon, J. D.; Borozdin, K. N.; Fabritius, J. M.; Morris, C. L.
2014-02-01
We compare different algorithms for detecting a 5 cm tungsten cube using cosmic ray muon technology. In each case, a simple tomographic technique was used for position reconstruction, but the scattering angles were used differently to obtain a density signal. Receiver operating characteristic curves were used to compare images made using average angle squared, median angle squared, average of the squared angle, and a multi-energy group fit of the angular distributions for scenes with and without a 5 cm tungsten cube. The receiver operating characteristic curves show that the multi-energy group treatment of the scattering angle distributions is the superior method for image reconstruction.
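A receiver operating characteristic curve of the kind used above compares the score distributions for scenes with and without the target. A minimal empirical-ROC sketch, with made-up Gaussian "density signals" standing in for the muon scattering measurements:

```python
import numpy as np

def roc_curve(scores_signal, scores_background):
    """Empirical ROC: sweep a detection threshold over every observed
    score and record (false-positive rate, true-positive rate) pairs."""
    thresholds = np.concatenate(
        ([np.inf],
         np.sort(np.concatenate([scores_signal, scores_background]))[::-1]))
    tpr = np.array([(scores_signal >= t).mean() for t in thresholds])
    fpr = np.array([(scores_background >= t).mean() for t in thresholds])
    return fpr, tpr

def auc(fpr, tpr):
    """Area under the ROC curve (trapezoidal rule; fpr is nondecreasing)."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))

rng = np.random.default_rng(1)
# hypothetical density signals for scenes with and without the cube
with_cube = rng.normal(3.0, 1.0, 200)     # higher mean scattering signal
without_cube = rng.normal(1.0, 1.0, 200)
fpr, tpr = roc_curve(with_cube, without_cube)
area = auc(fpr, tpr)  # well above the 0.5 chance level for separated scores
```

Comparing such curves (or their areas) across the four angle-based statistics is exactly how the abstract ranks the multi-energy-group treatment above the simpler ones.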
Brainstorm: A User-Friendly Application for MEG/EEG Analysis
Tadel, François; Baillet, Sylvain; Mosher, John C.; Pantazis, Dimitrios; Leahy, Richard M.
2011-01-01
Brainstorm is a collaborative open-source application dedicated to magnetoencephalography (MEG) and electroencephalography (EEG) data visualization and processing, with an emphasis on cortical source estimation techniques and their integration with anatomical magnetic resonance imaging (MRI) data. The primary objective of the software is to connect MEG/EEG neuroscience investigators with both the best-established and cutting-edge methods through a simple and intuitive graphical user interface (GUI). PMID:21584256
Direct Estimation of Local pH Change at Infection Sites of Fungi in Potato Tubers.
Tardi-Ovadia, R; Linker, R; Tsror Lahkim, L
2017-01-01
Fungi can modify the pH in or around the infected site via alkalization or acidification, and pH monitoring may provide valuable information on host-fungus interactions. The objective of the present study was to examine the ability of two fungi, Colletotrichum coccodes and Helminthosporium solani, to modify the pH of potato tubers during artificial inoculation in situ. Both fungi cause blemishes on potato tubers, which downgrade tuber quality and yield. Direct visualization and estimation of pH changes near the inoculation area were achieved using pH indicators and image analysis. The results showed that the pH of the area infected by either fungus increased from the potato's native pH of approximately 6.0 to 7.4-8.0. By performing simple analysis of the images, it was also possible to derive the growth curve of each fungus and estimate the lag phase of the radial growth: 10 days for C. coccodes and 17 days for H. solani. In addition, a distinctive halo (an edge area with increased pH) was observed only during the lag phase of H. solani infection. pH modulation is a major factor in pathogen-host interaction, and the proposed method offers a simple and rapid way to monitor these changes.
Fitzgibbon, Jessica; Beck, Martina; Zhou, Ji; Faulkner, Christine; Robatzek, Silke; Oparka, Karl
2013-01-01
Plasmodesmata (PD) form tubular connections that function as intercellular communication channels. They are essential for transporting nutrients and for coordinating development. During cytokinesis, simple PDs are inserted into the developing cell plate, while during wall extension, more complex (branched) forms of PD are laid down. We show that complex PDs are derived from existing simple PDs in a pattern that is accelerated when leaves undergo the sink–source transition. Complex PDs are inserted initially at the three-way junctions between epidermal cells but develop most rapidly in the anisocytic complexes around stomata. For a quantitative analysis of complex PD formation, we established a high-throughput imaging platform and constructed PDQUANT, a custom algorithm that detected cell boundaries and PD numbers in different wall faces. For anticlinal walls, the number of complex PDs increased with increasing cell size, while for periclinal walls, the number of PDs decreased. Complex PD insertion was accelerated by up to threefold in response to salicylic acid treatment and challenges with mannitol. In a single 30-min run, we could derive data for up to 11k PDs from 3k epidermal cells. This facile approach opens the door to a large-scale analysis of the endogenous and exogenous factors that influence PD formation. PMID:23371949
Lack of sex effect on brain activity during a visuomotor response task: functional MR imaging study.
Mikhelashvili-Browner, Nina; Yousem, David M; Wu, Colin; Kraut, Michael A; Vaughan, Christina L; Oguz, Kader Karli; Calhoun, Vince D
2003-03-01
As more individuals are enrolled in clinical functional MR imaging (fMRI) studies, an understanding of how sex may influence fMRI-measured brain activation is critical. We used fixed- and random-effects models to study the influence of sex on fMRI patterns of brain activation during a simple visuomotor reaction-time task in a group of 26 age-matched men and women. We evaluated the right visual, left visual, left primary motor, left supplementary motor, and left anterior cingulate areas. Volumes of activation did not significantly differ between the groups in any defined regions. Analysis of variance failed to show any significant correlations between sex and volumes of brain activation in any location studied. Mean percentage signal-intensity changes for all locations were similar between men and women. A two-way t test of brain activation in men and women, performed as a part of random-effects modeling, showed no significant difference at any site. Our results suggest that sex has little influence on fMRI brain activation in performance of this simple reaction-time task. The need to control for sex effects is not critical in the analysis of this task with fMRI.
Near-Field Terahertz Transmission Imaging at 0.210 Terahertz Using a Simple Aperture Technique
2015-10-01
This report discusses a simple aperture useful for terahertz near-field imaging at 0.210 terahertz (lambda = 1.43 millimeters). The aperture requires...achieve a spatial resolution of lambda/7. The aperture can be scaled, with the assistance of machinery found in conventional machine shops, to achieve similar results using shorter terahertz wavelengths.
Scalable Track Detection in SAR CCD Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chow, James G; Quach, Tu-Thach
Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are often too simple to capture natural track features such as continuity and parallelism. We present a simple convolutional network architecture consisting of a series of 3-by-3 convolutions to detect tracks. The network is trained end-to-end to learn natural track features entirely from data. The network is computationally efficient and improves the F-score on a standard dataset to 0.988, up from 0.907 obtained by the current state-of-the-art method.
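The architecture described, a series of 3-by-3 convolutions applied end to end, can be sketched in plain NumPy. The weights below are random stand-ins, not the trained network, and the layer count is an arbitrary choice:

```python
import numpy as np

def conv3x3(x, kernel):
    """'Same' 2-D convolution with a 3x3 kernel and zero padding."""
    h, w = x.shape
    padded = np.pad(x, 1)
    out = np.zeros((h, w), dtype=float)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * padded[i:i + h, j:j + w]
    return out

def tiny_track_net(image, kernels):
    """A series of 3x3 convolutions with ReLU between layers, ending in a
    per-pixel track score -- the shape of network described above, but
    with made-up weights rather than trained ones."""
    x = image.astype(float)
    for k in kernels[:-1]:
        x = np.maximum(conv3x3(x, k), 0.0)  # ReLU nonlinearity
    return conv3x3(x, kernels[-1])          # linear scoring layer

rng = np.random.default_rng(0)
kernels = [rng.normal(size=(3, 3)) for _ in range(4)]
scores = tiny_track_net(rng.normal(size=(32, 32)), kernels)
```

Each 3x3 layer only sees a small neighbourhood, but stacking four of them gives every output pixel a 9x9 receptive field, which is how such a network can learn extended features like continuity and parallelism from data.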
A Simple BODIPY-Based Viscosity Probe for Imaging of Cellular Viscosity in Live Cells
Su, Dongdong; Teoh, Chai Lean; Gao, Nengyue; Xu, Qing-Hua; Chang, Young-Tae
2016-01-01
Intracellular viscosity is a fundamental physical parameter that indicates the functioning of cells. In this work, we developed a simple boron-dipyrromethene (BODIPY)-based probe, BTV, for cellular mitochondria viscosity imaging by coupling a simple BODIPY rotor with a mitochondria-targeting unit. The BTV exhibited a significant fluorescence intensity enhancement of more than 100-fold as the solvent viscosity increased. Also, the probe showed a direct linear relationship between the fluorescence lifetime and the media viscosity, which makes it possible to trace the change of the medium viscosity. Furthermore, it was demonstrated that BTV could achieve practical applicability in the monitoring of mitochondrial viscosity changes in live cells through fluorescence lifetime imaging microscopy (FLIM). PMID:27589762
Kage, S; Kudo, K; Kaizoji, A; Ryumoto, J; Ikeda, H; Ikeda, N
2001-07-01
We devised a simple and rapid method for the detection of gunshot residue (GSR) particles using scanning electron microscopy/wavelength-dispersive X-ray (SEM/WDX) analysis. Experiments were done on samples containing GSR particles obtained from hands, hair, face, and clothing, using double-sided adhesive-coated aluminum stubs (tape-lift method). SEM/WDX analyses for GSR were carried out in three steps: the first step was map analysis for barium (Ba), to search for GSR particles from lead styphnate-primed ammunition, or tin (Sn), to search for GSR particles from mercury fulminate-primed ammunition. The second step was determination of the location of GSR particles by X-ray imaging of Ba or Sn at a magnification of x1000-2000 in the SEM, using the map analysis data, and the third step was identification of GSR particles using the WDX spectrometers. Analysis of the samples from each primer on a stub took about 3 h. Practical applications demonstrated the utility of this method.
The comet moment as a measure of DNA damage in the comet assay.
Kent, C R; Eady, J J; Ross, G M; Steel, G G
1995-06-01
The development of rapid assays of radiation-induced DNA damage requires the definition of reliable parameters for the evaluation of dose-response relationships to compare with cellular endpoints. We have used the single-cell gel electrophoresis (SCGE) or 'comet' assay to measure DNA damage in individual cells after irradiation. Both the alkaline and neutral protocols were used. In both cases, DNA was stained with ethidium bromide and viewed using a fluorescence microscope at 516-560 nm. Images of comets were stored as 512 x 512 pixel images using OPTIMAS, an image analysis software package. Using this software we tested various parameters for measuring DNA damage. We have developed a method of analysis that rigorously conforms to the mathematical definition of the moment of inertia of a plane figure. This parameter does not require the identification of separate head and tail regions, but rather calculates a moment of the whole comet image. We have termed this parameter 'comet moment'. This method is simple to calculate and can be performed using most image analysis software packages that support macro facilities. In experiments on CHO-K1 cells, tail length was found to increase linearly with dose, but plateaued at higher doses. Comet moment also increased linearly with dose, but over a larger dose range than tail length and had no tendency to plateau.
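The comet moment as defined here, a moment of inertia of the whole comet image about its intensity centroid, reduces to an intensity-weighted second moment along the electrophoresis axis, with no head/tail segmentation. A sketch on a synthetic comet whose head/tail geometry is invented for illustration:

```python
import numpy as np

def comet_moment(image):
    """Intensity-weighted second moment of the whole comet about its
    centre of mass along the electrophoresis (x) axis -- no separate
    head and tail regions are identified."""
    profile = image.astype(float).sum(axis=0)  # collapse rows -> 1-D profile
    x = np.arange(profile.size)
    total = profile.sum()
    centroid = (x * profile).sum() / total
    return float(((x - centroid) ** 2 * profile).sum() / total)

# synthetic comets (illustrative only): a bright head, with and
# without a fainter DNA tail trailing in +x
damaged = np.zeros((64, 128))
damaged[28:36, 20:30] = 10.0   # head
damaged[30:34, 30:90] = 2.0    # tail
control = np.zeros((64, 128))
control[28:36, 20:30] = 10.0   # head only

m_damaged = comet_moment(damaged)
m_control = comet_moment(control)
```

Because intensity far from the centroid is weighted quadratically, migrated DNA raises the moment even when the tail is faint, which is why this parameter tracks dose over a wider range than tail length.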
Cystogram follow-up in the management of traumatic bladder disruption.
Inaba, Kenji; McKenney, Mark; Munera, Felipe; de Moya, Marc; Lopez, Peter P; Schulman, Carl I; Habib, Fahim A
2006-01-01
The utility of obtaining a routine cystogram after the repair of intraperitoneal bladder disruption before urethral catheter removal is unknown. This study was designed to examine whether follow-up cystogram evaluation after traumatic bladder disruption affected the clinical management of these injuries. We hypothesized that routine cystograms, after operative repair of intraperitoneal bladder disruptions, provide no clinically useful information and may be eliminated in the management of these injuries. Our prospectively collected trauma database was retrospectively reviewed for all ICD-9 867.0 and 867.1 coded bladder injuries over a 6-year period ending in June 2004. Demographics, clinical injury data, detailed operative records, and imaging studies were reviewed for each patient. Bladder injuries were categorized as intraperitoneal (IP) or extraperitoneal (EP) bladder disruptions based on imaging results and operative exploration. Patients with IP injuries were further subdivided into those with "simple" dome disruptions or through-and-through penetrating injuries and "complex" injuries involving the trigone or ureter reimplantation. All patients sustaining isolated ureteric or urethral injury were excluded from further analysis. In all, 20,647 trauma patients were screened for bladder injury. Out of this group, there were 50 IP (47 simple, 3 complex) and 37 EP injuries available for analysis. All IP injuries underwent operative repair. Eight of the IP injuries (all simple) had no postoperative cystogram and all were doing well at 1- to 4-week follow-up. The remaining 42 patients underwent a postoperative cystogram at 15.3 +/- 7.3 days (range 7 to 36 days). All simple IP injuries had a negative postoperative cystogram. The only positive study was in one of the three complex IP injuries. In the EP group, 21.6% had positive cystograms requiring further follow-up and intervention. 
Patients sustaining extraperitoneal and complex intraperitoneal bladder disruptions require routine cystogram follow-up. In those patients undergoing repair of a simple intraperitoneal bladder disruption, however, routine follow-up cystograms did not affect clinical management. Further prospective evaluation to determine the optimal timing of catheter removal in this patient population is warranted.
Image-based mobile service: automatic text extraction and translation
NASA Astrophysics Data System (ADS)
Berclaz, Jérôme; Bhatti, Nina; Simske, Steven J.; Schettino, John C.
2010-01-01
We present a new mobile service for the translation of text from images taken by consumer-grade cell-phone cameras. Such capability represents a new paradigm for users where a simple image provides the basis for a service. The ubiquity and ease of use of cell-phone cameras enables acquisition and transmission of images anywhere and at any time a user wishes, delivering rapid and accurate translation over the phone's MMS and SMS facilities. Target text is extracted completely automatically, requiring no bounding box delineation or related user intervention. The service uses localization, binarization, text deskewing, and optical character recognition (OCR) in its analysis. Once the text is translated, an SMS message is sent to the user with the result. Further novelties include that no software installation is required on the handset, any service provider or camera phone can be used, and the entire service is implemented on the server side.
Imaging of surface spin textures on bulk crystals by scanning electron microscopy
NASA Astrophysics Data System (ADS)
Akamine, Hiroshi; Okumura, So; Farjami, Sahar; Murakami, Yasukazu; Nishida, Minoru
2016-11-01
Direct observation of magnetic microstructures is vital for advancing spintronics and other technologies. Here we report a method for imaging surface domain structures on bulk samples by scanning electron microscopy (SEM). Complex magnetic domains, referred to as the maze state in CoPt/FePt alloys, were observed at a spatial resolution of less than 100 nm by using an in-lens annular detector. The method allows for imaging almost all the domain walls in the mazy structure, whereas the visualisation of the domain walls with the classical SEM method was limited. Our method provides a simple way to analyse surface domain structures in the bulk state that can be used in combination with SEM functions such as orientation or composition analysis. Thus, the method extends applications of SEM-based magnetic imaging, and is promising for resolving various problems at the forefront of fields including physics, magnetics, materials science, engineering, and chemistry.
Transformations based on continuous piecewise-affine velocity fields
Freifeld, Oren; Hauberg, Soren; Batmanghelich, Kayhan; ...
2017-01-11
Here, we propose novel finite-dimensional spaces of well-behaved Rn → Rn transformations. The latter are obtained by (fast and highly-accurate) integration of continuous piecewise-affine velocity fields. The proposed method is simple yet highly expressive, effortlessly handles optional constraints (e.g., volume preservation and/or boundary conditions), and supports convenient modeling choices such as smoothing priors and coarse-to-fine analysis. Importantly, the proposed approach, partly due to its rapid likelihood evaluations and partly due to its other properties, facilitates tractable inference over rich transformation spaces, including using Markov-Chain Monte-Carlo methods. Its applications include, but are not limited to: monotonic regression (more generally, optimization over monotonic functions); modeling cumulative distribution functions or histograms; time-warping; image warping; image registration; real-time diffeomorphic image editing; data augmentation for image classifiers. Our GPU-based code is publicly available.
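The core operation, integrating a continuous piecewise-affine velocity field into a transformation, can be illustrated in one dimension with plain Euler steps. The cells and affine pieces below are toy values, and the authors' specialized integrator is far faster and more accurate than this sketch:

```python
import numpy as np

def integrate_cpa(x0, cells, affine, t=1.0, steps=200):
    """Numerically integrate a continuous piecewise-affine velocity field
    on [0, 1]: cell boundaries in `cells`, per-cell (slope, offset) pairs
    in `affine`. Plain Euler stepping -- only a sketch of the idea."""
    x = float(x0)
    dt = t / steps
    for _ in range(steps):
        c = min(np.searchsorted(cells, x, side='right') - 1, len(affine) - 1)
        a, b = affine[max(c, 0)]
        x += dt * (a * x + b)   # dx/dt = a*x + b inside cell c
    return x

# two cells on [0, 1]; the pieces agree at the boundary x = 0.5
# (continuity), so the resulting map is a smooth monotone warp
cells = np.array([0.0, 0.5, 1.0])
affine = [(1.0, 0.0), (-1.0, 1.0)]  # v(x)=x on [0,.5], v(x)=1-x on [.5,1]
warped = [integrate_cpa(x, cells, affine) for x in (0.1, 0.4, 0.8)]
```

Continuity of the velocity at cell boundaries is what makes the integrated map well-behaved (monotone and invertible here), which is the property the transformation spaces above are built on.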
A new blood vessel extraction technique using edge enhancement and object classification.
Badsha, Shahriar; Reza, Ahmed Wasif; Tan, Kim Geok; Dimyati, Kaharudin
2013-12-01
Diabetic retinopathy (DR) is increasing progressively, pushing the demand for automatic extraction and classification of disease severity. Blood vessel extraction from the fundus image is a vital and challenging task. Therefore, this paper presents a new, computationally simple, and automatic method to extract the retinal blood vessels. The proposed method comprises several basic image processing techniques, namely edge enhancement by a standard template, noise removal, thresholding, morphological operations, and object classification. The proposed method has been tested on a set of retinal images collected from the DRIVE database, and we have employed robust performance analysis to evaluate the accuracy. The results obtained from this study reveal that the proposed method offers an average accuracy of about 97%, sensitivity of 99%, specificity of 86%, and predictive value of 98%, which is superior to various well-known techniques.
Adaptive segmentation of nuclei in H&E stained tendon microscopy
NASA Astrophysics Data System (ADS)
Chuang, Bo-I.; Wu, Po-Ting; Hsu, Jian-Han; Jou, I.-Ming; Su, Fong-Chin; Sun, Yung-Nien
2015-12-01
Tendinopathy has become a common clinical issue in recent years. In most cases, such as trigger finger or tennis elbow, the pathological changes can be observed under H and E stained tendon microscopy. However, qualitative analysis is subjective, and the results therefore depend heavily on the observers. We developed an automatic segmentation procedure that segments and counts the nuclei in H and E stained tendon microscopy quickly and precisely. This procedure first determines the complexity of the image and then segments the nuclei. For complex images, the proposed method adopts sampling-based thresholding to segment the nuclei, while for simple images, Laplacian-based thresholding is employed to re-segment the nuclei more accurately. In the experiments, the proposed method is compared with results outlined by experts. The nucleus counts from the proposed method are close to the experts' counts, and its processing time is much shorter than the experts'.
Transformations Based on Continuous Piecewise-Affine Velocity Fields
Freifeld, Oren; Hauberg, Søren; Batmanghelich, Kayhan; Fisher, John W.
2018-01-01
We propose novel finite-dimensional spaces of well-behaved ℝn → ℝn transformations. The latter are obtained by (fast and highly-accurate) integration of continuous piecewise-affine velocity fields. The proposed method is simple yet highly expressive, effortlessly handles optional constraints (e.g., volume preservation and/or boundary conditions), and supports convenient modeling choices such as smoothing priors and coarse-to-fine analysis. Importantly, the proposed approach, partly due to its rapid likelihood evaluations and partly due to its other properties, facilitates tractable inference over rich transformation spaces, including using Markov-Chain Monte-Carlo methods. Its applications include, but are not limited to: monotonic regression (more generally, optimization over monotonic functions); modeling cumulative distribution functions or histograms; time-warping; image warping; image registration; real-time diffeomorphic image editing; data augmentation for image classifiers. Our GPU-based code is publicly available. PMID:28092517
Test images for the maximum entropy image restoration method
NASA Technical Reports Server (NTRS)
Mackey, James E.
1990-01-01
One of the major activities of any experimentalist is data analysis and reduction. In solar physics, remote observations are made of the sun in a variety of wavelengths and circumstances. In no case is the data collected free from the influence of the design and operation of the data gathering instrument as well as the ever present problem of noise. The presence of significant noise invalidates the simple inversion procedure regardless of the range of known correlation functions. The Maximum Entropy Method (MEM) attempts to perform this inversion by making minimal assumptions about the data. To provide a means of testing the MEM and characterizing its sensitivity to noise, choice of point spread function, type of data, etc., one would like to have test images of known characteristics that can represent the type of data being analyzed. A means of reconstructing these images is presented.
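A test image of known characteristics, as called for above, can be produced by convolving a known object with a chosen point spread function and adding noise. The point source, box PSF, and noise level below are arbitrary choices for illustration, not the solar data described:

```python
import numpy as np

def make_test_image(obj, psf, noise_sigma, rng):
    """Blur a known object with a point-spread function and add Gaussian
    noise -- a test image of known characteristics for exercising a
    restoration method such as MEM."""
    oh, ow = obj.shape
    kh, kw = psf.shape
    padded = np.pad(obj.astype(float), ((kh // 2,) * 2, (kw // 2,) * 2))
    blurred = np.zeros((oh, ow), dtype=float)
    for i in range(kh):
        for j in range(kw):
            blurred += psf[i, j] * padded[i:i + oh, j:j + ow]
    return blurred + rng.normal(0, noise_sigma, obj.shape)

rng = np.random.default_rng(0)
obj = np.zeros((32, 32)); obj[16, 16] = 100.0   # known point source
psf = np.ones((5, 5)) / 25.0                     # normalized box blur
data = make_test_image(obj, psf, noise_sigma=0.1, rng=rng)
```

Because the object, PSF, and noise statistics are all known, a restoration run on `data` can be scored against ground truth, which is precisely the sensitivity characterization the abstract describes.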
Narrow-field imaging of the lunar sodium exosphere
NASA Technical Reports Server (NTRS)
Stern, S. Alan; Flynn, Brian C.
1995-01-01
We present the first results of a new technique for imaging the lunar Na atmosphere. The technique employs high resolution, a narrow bandpass, and specific observing geometry to suppress scattered light and image lunar atmospheric Na I emission down to approximately 50 km altitude. Analysis of four latitudinally dispersed images shows that the lunar Na atmosphere exhibits interesting latitudinal and radial dependencies. Application of a simple Maxwellian collisionless exosphere model indicates that: (1) at least two thermal populations are required to adequately fit the sodium's radial intensity behavior, and (2) the fractional abundances and temperatures of the two components vary systematically with latitude. We conclude that both cold (barometric) and hot (suprathermal) Na may coexist in the lunar atmosphere, either as distinct components or as elements of a continuum of populations ranging in temperature from the local surface temperature up to or exceeding escape energies.
A gravitational lens candidate discovered with the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Maoz, Dan; Bahcall, John N.; Schneider, Donald P.; Doxsey, Rodger; Bahcall, Neta A.; Filippenko, Alexei V.; Goss, W. M.; Lahav, Ofer; Yanny, Brian
1992-01-01
Evidence is reported for gravitational lensing of the high-redshift (z = 3.8) quasar 1208 + 101, observed as part of the Snapshot survey with the HST Planetary Camera. An HST V image taken on gyroscopes resolves the quasar into three point-source components, with the two fainter images having separations of 0.1 and 0.5 arcsec from the central bright component. A radio observation of the quasar with the VLA at 2 cm shows that, like most quasars at this redshift, 1208 + 101 is radio quiet. Based on positional information alone, the probability that the observed optical components are chance superpositions of Galactic stars is small, but not negligible. Analysis of a combined ground-based spectrum of all three components, using the relative brightnesses of the HST image, supports the lensing hypothesis. If all the components are lensed images of the quasar, the observed configuration cannot be reproduced by simple lens models.
Choi, Kyongsik; Chon, James W; Gu, Min; Lee, Byoungho
2007-08-20
In this paper, a simple confocal laser scanning microscopic (CLSM) image mapping technique based on the finite-difference time domain (FDTD) calculation has been proposed and evaluated for characterization of a subwavelength-scale three-dimensional (3D) void structure fabricated inside polymer matrix. The FDTD simulation method adopts a focused Gaussian beam incident wave, Berenger's perfectly matched layer absorbing boundary condition, and the angular spectrum analysis method. Through the well matched simulation and experimental results of the xz-scanned 3D void structure, we first characterize the exact position and the topological shape factor of the subwavelength-scale void structure, which was fabricated by a tightly focused ultrashort pulse laser. The proposed CLSM image mapping technique based on the FDTD can be widely applied from the 3D near-field microscopic imaging, optical trapping, and evanescent wave phenomenon to the state-of-the-art bio- and nanophotonics.
Determination of chromophore distribution in skin by spectral imaging
NASA Astrophysics Data System (ADS)
Saknite, Inga; Lange, Marta; Jakovels, Dainis; Spigulis, Janis
2012-10-01
Possibilities to determine chromophore distribution in skin by spectral imaging were explored. Simple RGB sensor devices were used for image acquisition. In total, 200 images of 40 different bruises on 20 people were obtained in order to map the chromophores bilirubin and haemoglobin. Possibilities to detect water in vitro and in vivo were estimated by using silicon photodetectors and narrow-band LEDs. The results show that it is possible to obtain bilirubin and haemoglobin distribution maps and to observe changes of chromophore parameter values over time by using a simple RGB imaging device. Water in vitro was detected by using differences in absorption at 450 nm and 950 nm, and at 650 nm and 950 nm.
NASA Astrophysics Data System (ADS)
Hidalgo-Aguirre, Maribel; Gitelman, Julian; Lesk, Mark Richard; Costantino, Santiago
2015-11-01
Optical coherence tomography (OCT) imaging has become a standard diagnostic tool in ophthalmology, providing essential information associated with various eye diseases. In order to investigate the dynamics of the ocular fundus, we present a simple and accurate automated algorithm to segment the inner limiting membrane in video-rate optic nerve head spectral domain (SD) OCT images. The method is based on morphological operations including a two-step contrast enhancement technique, proving to be very robust when dealing with low signal-to-noise ratio images and pathological eyes. An analysis algorithm was also developed to measure neuroretinal tissue deformation from the segmented retinal profiles. The performance of the algorithm is demonstrated, and deformation results are presented for healthy and glaucomatous eyes.
Improved deconvolution of very weak confocal signals
Day, Kasey J.; La Rivière, Patrick J.; Chandler, Talon; Bindokas, Vytas P.; Ferrier, Nicola J.; Glick, Benjamin S.
2017-01-01
Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage. PMID:28868135
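The prefilter-then-deconvolve idea is easy to sketch numerically. Below, a toy Richardson-Lucy deconvolution stands in for the proprietary Huygens algorithm; the PSF, noise model, blur width, and iteration count are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def gaussian_psf(size=9, sigma=1.5):
    """Normalized 2D Gaussian point-spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(image, psf, n_iter=25):
    """Basic Richardson-Lucy deconvolution (stand-in for Huygens)."""
    estimate = np.full_like(image, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = convolve(estimate, psf, mode="reflect")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate = estimate * convolve(ratio, psf_flip, mode="reflect")
    return estimate

# Simulate a very weak confocal section: two dim puncta + Poisson noise
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[20, 20] = truth[40, 44] = 30.0
psf = gaussian_psf()
raw = rng.poisson(convolve(truth, psf, mode="reflect") + 0.5).astype(float)

# Mild Gaussian prefilter before deconvolving, as the paper proposes
prefiltered = gaussian_filter(raw, sigma=1.0)
restored = richardson_lucy(prefiltered, psf)
```

After the prefilter, the deconvolved estimate concentrates energy back into the puncta rather than amplifying shot noise; the same blur could be applied as a preprocessing pass before any deconvolution package.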
Combining hyperspectral imaging and Raman spectroscopy for remote chemical sensing
NASA Astrophysics Data System (ADS)
Ingram, John M.; Lo, Edsanter
2008-04-01
The Photonics Research Center at the United States Military Academy is conducting research to demonstrate the feasibility of combining hyperspectral imaging and Raman spectroscopy for remote chemical detection over a broad area of interest. One limitation of future trace-detection systems is their limited ability to analyze large fields of view. Hyperspectral imaging provides a balance between fast spectral analysis and scanning area. Integration of a hyperspectral system capable of remote chemical detection will greatly enhance our soldiers' ability to see the battlefield and make threat-related decisions. It can also cue the trace-detection systems onto the correct interrogation area, saving time and reconnaissance/surveillance resources. This research develops both the sensor design and the detection/discrimination algorithms. The one-meter remote detection without background radiation is a simple proof of concept.
Improved deconvolution of very weak confocal signals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Day, Kasey J.; La Riviere, Patrick J.; Chandler, Talon
Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.
Improved deconvolution of very weak confocal signals
Day, Kasey J.; La Riviere, Patrick J.; Chandler, Talon; ...
2017-06-06
Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.
Automatic analysis of the micronucleus test in primary human lymphocytes using image analysis.
Frieauff, W; Martus, H J; Suter, W; Elhajouji, A
2013-01-01
The in vitro micronucleus test (MNT) is a well-established test for early screening of new chemical entities in industrial toxicology. For assessing the clastogenic or aneugenic potential of a test compound, micronucleus induction in cells has been shown repeatedly to be a sensitive and specific parameter. Various automated systems to replace the tedious and time-consuming visual slide analysis procedure, as well as flow cytometric approaches, have been discussed. The ROBIAS (Robotic Image Analysis System) for both automatic cytotoxicity assessment and micronucleus detection in human lymphocytes was developed at Novartis, where the assay has been used to validate positive results obtained in the MNT in TK6 cells, which serves as the primary screening system for genotoxicity profiling in early drug development. In addition, the in vitro MNT has become an accepted alternative to support clinical studies and will be used for regulatory purposes as well. The comparison of visual with automatic analysis results showed a high degree of concordance for 25 independent experiments conducted for the profiling of 12 compounds. For concentration series of cyclophosphamide and carbendazim, a very good correlation between automatic and visual analysis by two examiners could be established, both for the relative division index used as the cytotoxicity parameter and for micronuclei scoring in mono- and binucleated cells. Generally, false-positive micronucleus decisions could be controlled by fast and simple relocation of the automatically detected patterns. The ability to analyse 24 slides automatically within 65 h, e.g. over a weekend, and the high reproducibility of the results make automated image processing a powerful tool for micronucleus analysis in primary human lymphocytes. The automated slide analysis for the MNT in human lymphocytes complements the portfolio of image analysis applications on ROBIAS, which supports various assays at Novartis.
Directional analysis and filtering for dust storm detection in NOAA-AVHRR imagery
NASA Astrophysics Data System (ADS)
Janugani, S.; Jayaram, V.; Cabrera, S. D.; Rosiles, J. G.; Gill, T. E.; Rivera Rivera, N.
2009-05-01
In this paper, we propose spatio-spectral processing techniques for the detection of dust storms and the automatic determination of their transport direction in 5-band NOAA-AVHRR imagery. Previous methods that use simple band-math analysis have produced promising results but have drawbacks in producing consistent results when low signal-to-noise ratio (SNR) images are used. Moreover, in seeking to automate dust storm detection, the presence of clouds in the vicinity of the dust storm creates a challenge in distinguishing these two types of image texture. This paper not only addresses the detection of the dust storm in the imagery; it also attempts to find the transport direction and the locations of the dust storm's sources. We propose a spatio-spectral processing approach with two components: visualization and automation. Both are based on digital image processing techniques including directional analysis and filtering. The visualization technique is intended to enhance the image in order to locate the dust sources. The automation technique is proposed to detect the transport direction of the dust storm. These techniques can be used in a system to provide timely warnings of dust storms or hazard assessments for transportation, aviation, environmental safety, and public health.
Particle sizing of pharmaceutical aerosols via direct imaging of particle settling velocities.
Fishler, Rami; Verhoeven, Frank; de Kruijf, Wilbur; Sznitman, Josué
2018-02-15
We present a novel method for characterizing in near real-time the aerodynamic particle size distributions from pharmaceutical inhalers. The proposed method is based on direct imaging of airborne particles followed by a particle-by-particle measurement of settling velocities using image analysis and particle tracking algorithms. Due to the simplicity of the principle of operation, this method has the potential of circumventing potential biases of current real-time particle analyzers (e.g. Time of Flight analysis), while offering a cost effective solution. The simple device can also be constructed in laboratory settings from off-the-shelf materials for research purposes. To demonstrate the feasibility and robustness of the measurement technique, we have conducted benchmark experiments whereby aerodynamic particle size distributions are obtained from several commercially-available dry powder inhalers (DPIs). Our measurements yield size distributions (i.e. MMAD and GSD) that are closely in line with those obtained from Time of Flight analysis and cascade impactors suggesting that our imaging-based method may embody an attractive methodology for rapid inhaler testing and characterization. In a final step, we discuss some of the ongoing limitations of the current prototype and conceivable routes for improving the technique. Copyright © 2017 Elsevier B.V. All rights reserved.
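The core inversion is just Stokes' law: a settling sphere of unit density reaches terminal velocity v = ρ₀d²g/(18μ), so the aerodynamic diameter follows directly from the tracked velocity. A minimal sketch, in which the viscosity value and example velocity are assumptions and the Cunningham slip correction is neglected:

```python
import math

AIR_VISCOSITY = 1.81e-5  # Pa*s, air at ~20 degC (assumed)
UNIT_DENSITY = 1000.0    # kg/m^3, by definition of aerodynamic diameter
G = 9.81                 # m/s^2

def aerodynamic_diameter(settling_velocity_m_s):
    """Invert Stokes settling v = rho0 * d^2 * g / (18 * mu) for d."""
    return math.sqrt(18 * AIR_VISCOSITY * settling_velocity_m_s
                     / (UNIT_DENSITY * G))

# A particle tracked falling at 0.3 mm/s maps to a respirable size
d = aerodynamic_diameter(3.0e-4)
print(f"{d * 1e6:.2f} um")  # prints 3.16 um
```

Per-particle diameters obtained this way can be binned directly into an aerodynamic size distribution (MMAD, GSD), which is what the benchmark comparisons above report.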
Resin embedded multicycle imaging (REMI): a tool to evaluate protein domains.
Busse, B L; Bezrukov, L; Blank, P S; Zimmerberg, J
2016-08-08
Protein complexes associated with cellular processes comprise a significant fraction of all biology, but our understanding of their heterogeneous organization remains inadequate, particularly for physiological densities of multiple protein species. Towards resolving this limitation, we here present a new technique based on resin-embedded multicycle imaging (REMI) of proteins in-situ. By stabilizing protein structure and antigenicity in acrylic resins, affinity labels were repeatedly applied, imaged, removed, and replaced. In principle, an arbitrarily large number of proteins of interest may be imaged on the same specimen with subsequent digital overlay. A series of novel preparative methods were developed to address the problem of imaging multiple protein species in areas of the plasma membrane or volumes of cytoplasm of individual cells. For multiplexed examination of antibody staining we used straightforward computational techniques to align sequential images, and super-resolution microscopy was used to further define membrane protein colocalization. We give one example of a fibroblast membrane with eight multiplexed proteins. A simple statistical analysis of this limited membrane proteomic dataset is sufficient to demonstrate the analytical power contributed by additional imaged proteins when studying membrane protein domains.
[Sub-field imaging spectrometer design based on Offner structure].
Wu, Cong-Jun; Yan, Chang-Xiang; Liu, Wei; Dai, Hu
2013-08-01
To satisfy the miniaturization, light weight, and large-field requirements of imaging spectrometers in space applications, the current optical design of imaging spectrometers with the Offner structure was analyzed, and a simple method for designing an imaging spectrometer with a concave grating, based on current approaches, was given. Using the method offered, a sub-field imaging spectrometer was designed for a 400 km altitude, a 0.4-1.0 microm wavelength range, an F-number of 5, a 720 mm focal length, and a 4.3 degree total field. Optical fiber was used to transfer the image at the telescope's focal plane to three slits arranged in the same plane so as to achieve the sub-field design. A CCD detector with 1024 x 1024 pixels of 18 microm x 18 microm was used to receive the image of the three slits after dispersion. With ZEMAX optimization and tolerance analysis, the system satisfies 5 nm spectral resolution and 5 m field resolution, and the MTF is over 0.62 at 28 lp x mm(-1). The field of the system is almost 3 times that of similar instruments used in space probes.
Ghost imaging based on Pearson correlation coefficients
NASA Astrophysics Data System (ADS)
Yu, Wen-Kai; Yao, Xu-Ri; Liu, Xue-Feng; Li, Long-Zhen; Zhai, Guang-Jie
2015-05-01
Correspondence imaging is a new modality of ghost imaging, which can retrieve a positive/negative image by simple conditional averaging of the reference frames that correspond to relatively large/small values of the total intensity measured at the bucket detector. Here we propose and experimentally demonstrate a more rigorous and general approach in which a ghost image is retrieved by calculating a Pearson correlation coefficient between the bucket detector intensity and the brightness at a given pixel of the reference frames, and at the next pixel, and so on. Furthermore, we theoretically provide a statistical interpretation of these two imaging phenomena, and explain how the error depends on the sample size and what kind of distribution the error obeys. According to our analysis, the image signal-to-noise ratio can be greatly improved and the sampling number reduced by means of our new method. Project supported by the National Key Scientific Instrument and Equipment Development Project of China (Grant No. 2013YQ030595) and the National High Technology Research and Development Program of China (Grant No. 2013AA122902).
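The pixel-wise Pearson-coefficient retrieval can be reproduced with synthetic speckle in a few lines of NumPy; the object, frame count, and pattern statistics below are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth transmission object and random reference patterns
obj = np.zeros((16, 16))
obj[4:12, 6:10] = 1.0

n_frames = 5000
patterns = rng.random((n_frames, 16, 16))      # reference frames
bucket = (patterns * obj).sum(axis=(1, 2))     # single-pixel "bucket" signal

# Pearson correlation between the bucket signal and each pixel's brightness
p = patterns.reshape(n_frames, -1)
p_centered = p - p.mean(axis=0)
b_centered = bucket - bucket.mean()
corr = (p_centered * b_centered[:, None]).sum(axis=0)
corr /= np.linalg.norm(p_centered, axis=0) * np.linalg.norm(b_centered)
image = corr.reshape(16, 16)
```

Pixels inside the object correlate positively with the bucket signal (here roughly 1/√32, since 32 transmitting pixels feed the bucket) while background pixels hover near zero, which illustrates the contrast mechanism described above.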
Resin embedded multicycle imaging (REMI): a tool to evaluate protein domains
Busse, B. L.; Bezrukov, L.; Blank, P. S.; Zimmerberg, J.
2016-01-01
Protein complexes associated with cellular processes comprise a significant fraction of all biology, but our understanding of their heterogeneous organization remains inadequate, particularly for physiological densities of multiple protein species. Towards resolving this limitation, we here present a new technique based on resin-embedded multicycle imaging (REMI) of proteins in-situ. By stabilizing protein structure and antigenicity in acrylic resins, affinity labels were repeatedly applied, imaged, removed, and replaced. In principle, an arbitrarily large number of proteins of interest may be imaged on the same specimen with subsequent digital overlay. A series of novel preparative methods were developed to address the problem of imaging multiple protein species in areas of the plasma membrane or volumes of cytoplasm of individual cells. For multiplexed examination of antibody staining we used straightforward computational techniques to align sequential images, and super-resolution microscopy was used to further define membrane protein colocalization. We give one example of a fibroblast membrane with eight multiplexed proteins. A simple statistical analysis of this limited membrane proteomic dataset is sufficient to demonstrate the analytical power contributed by additional imaged proteins when studying membrane protein domains. PMID:27499335
NASA Astrophysics Data System (ADS)
Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.
2017-09-01
Digital rock physics carries the dogmatic concept of having to segment volume images for quantitative analysis but segmentation rejects huge amounts of signal information. Information that is essential for the analysis of difficult and marginally resolved samples, such as materials with very small features, is lost during segmentation. In X-ray nanotomography reconstructions of Hod chalk we observed partial volume voxels with an abundance that limits segmentation based analysis. Therefore, we investigated the suitability of greyscale analysis for establishing statistical representative elementary volumes (sREV) for the important petrophysical parameters of this type of chalk, namely porosity, specific surface area and diffusive tortuosity, by using volume images without segmenting the datasets. Instead, grey level intensities were transformed to a voxel level porosity estimate using a Gaussian mixture model. A simple model assumption was made that allowed formulating a two point correlation function for surface area estimates using Bayes' theory. The same assumption enables random walk simulations in the presence of severe partial volume effects. The established sREVs illustrate that in compacted chalk, these simulations cannot be performed in binary representations without increasing the resolution of the imaging system to a point where the spatial restrictions of the represented sample volume render the precision of the measurement unacceptable. We illustrate this by analyzing the origins of variance in the quantitative analysis of volume images, i.e. resolution dependence and intersample and intrasample variance. Although we cannot make any claims on the accuracy of the approach, eliminating the segmentation step from the analysis enables comparative studies with higher precision and repeatability.
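One plausible reading of the grey-level-to-porosity mapping can be sketched with scikit-learn: fit a two-component Gaussian mixture to identify the pore and solid intensity modes, then interpolate linearly between the component means for partial-volume voxels. The synthetic grey values and the linear mapping are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Synthetic greyscale volume: dark pore voxels, bright solid voxels,
# plus partial-volume voxels in between (made-up intensity values)
pore = rng.normal(50, 8, 6000)
solid = rng.normal(200, 10, 12000)
partial = rng.uniform(60, 190, 4000)
grey = np.concatenate([pore, solid, partial])

# Two-component mixture pulls out the pore and solid modes
gmm = GaussianMixture(n_components=2, random_state=0).fit(grey.reshape(-1, 1))
means = gmm.means_.ravel()
mu_pore, mu_solid = means.min(), means.max()

# Linear partial-volume mapping: grey level -> voxel porosity in [0, 1]
phi = np.clip((mu_solid - grey) / (mu_solid - mu_pore), 0.0, 1.0)
porosity = phi.mean()
```

Because every voxel contributes a fractional porosity rather than a binary label, quantities like total porosity can be computed without the segmentation step the abstract argues against.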
NASA Astrophysics Data System (ADS)
Augustine, Kurt E.; Holmes, David R., III; Hanson, Dennis P.; Robb, Richard A.
2006-03-01
One of the greatest challenges for a software engineer is to create a complex application that is comprehensive enough to be useful to a diverse set of users, yet focused enough for individual tasks to be carried out efficiently with minimal training. This "powerful yet simple" paradox is particularly prevalent in advanced medical imaging applications. Recent research in the Biomedical Imaging Resource (BIR) at Mayo Clinic has been directed toward development of an imaging application framework that provides powerful image visualization/analysis tools in an intuitive, easy-to-use interface. It is based on two concepts very familiar to physicians - Cases and Workflows. Each case is associated with a unique patient and a specific set of routine clinical tasks, or a workflow. Each workflow is comprised of an ordered set of general-purpose modules which can be re-used for each unique workflow. Clinicians help describe and design the workflows, and then are provided with an intuitive interface to both patient data and analysis tools. Since most of the individual steps are common to many different workflows, the use of general-purpose modules reduces development time and results in applications that are consistent, stable, and robust. While the development of individual modules may reflect years of research by imaging scientists, new customized workflows based on the new modules can be developed extremely fast. If a powerful, comprehensive application is difficult to learn and complicated to use, it will be unacceptable to most clinicians. Clinical image analysis tools must be intuitive and effective or they simply will not be used.
Remote sensing of frozen lakes on the North Slope of Alaska
French, N.; Savage, S.; Shuchman, R.; Edson, R.; Payne, J.; Josberger, E.
2004-01-01
We used synthetic aperture radar (SAR) images from the ERS-2 remote sensing satellite to map the freeze condition of lakes on Alaska's North Slope, the geographic region to the north of the Brooks Range. An image from March 1997, coinciding with the period of maximum freeze depth, was used for the frozen lake mapping. Emphasis was placed on distinguishing between lakes frozen to the lakebed and lakes with some portion unfrozen to the bed (a binary classification). The result of the analysis is a map identifying lakes as frozen to the lakebed and lakes not frozen to the lakebed. This analysis of one SAR image has shown the feasibility of a simple technique for mapping frozen lake condition, for supporting decision making and understanding impacts of climate change on the North Slope.
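The binary freeze classification amounts to a per-lake backscatter threshold: ice frozen to the bed returns little energy to the radar, while ice floating on liquid water produces strong returns from the ice-water interface. A minimal sketch, with an assumed threshold and made-up σ⁰ values rather than calibrated ERS-2 data:

```python
import numpy as np

# Assumed decision threshold in dB; real studies calibrate this per scene
THRESHOLD_DB = -12.0

def classify_lake(sigma0_db):
    """Binary call: True if the lake appears frozen to the lakebed."""
    return float(np.mean(sigma0_db)) < THRESHOLD_DB

# Hypothetical calibrated backscatter samples for two lakes
grounded = classify_lake(np.array([-16.2, -15.8, -17.1]))  # weak returns
floating = classify_lake(np.array([-6.5, -7.2, -5.9]))     # strong returns
```

Averaging σ⁰ over each lake polygon before thresholding suppresses speckle, which is the usual reason a per-lake (rather than per-pixel) decision is made.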
Dynamic photoelasticity by TDI imaging
NASA Astrophysics Data System (ADS)
Asundi, Anand K.; Sajan, M. R.
2001-06-01
High-speed photographic systems such as the image-rotation camera, the Cranz-Schardin camera, and the drum camera are typically used for recording and visualizing dynamic events in stress analysis, fluid mechanics, etc. All these systems are fairly expensive and generally not simple to use. Furthermore, they are all based on photographic film, requiring time-consuming and tedious wet processing. Digital cameras are replacing conventional cameras, to a certain extent, in static experiments. Recently there has been considerable interest in developing and modifying CCD architectures and recording arrangements for dynamic scene analysis. Herein we report the use of a CCD camera operating in Time Delay and Integration mode for digitally recording dynamic photoelastic stress patterns. Applications to strobe and streak photoelastic pattern recording, and system limitations, are explained in the paper.
Extremely simple holographic projection of color images
NASA Astrophysics Data System (ADS)
Makowski, Michal; Ducin, Izabela; Kakarenko, Karol; Suszek, Jaroslaw; Kolodziejczyk, Andrzej; Sypek, Maciej
2012-03-01
A very simple scheme of holographic projection is presented, with experimental results showing good-quality image projection without any imaging lens. This technique can be regarded as an alternative to classic projection methods. It is based on the reconstruction of real images from three phase-only iterated Fourier holograms. The illumination is performed with three laser beams of primary colors. A divergent-wavefront geometry is used to achieve an increased throw angle of the projection compared to plane-wave illumination. Light fibers are used for light guidance in order to keep the setup as simple as possible and to provide point-like sources of high-quality divergent wavefronts at an optimized position against the light modulator. Absorbing spectral filters are implemented to multiplex three holograms on a single phase-only spatial light modulator. Hence color mixing occurs without any time-division methods, which cause rainbow effects and color flicker. The zero diffractive order with divergent illumination is practically invisible, and the speckle field is effectively suppressed with phase optimization and time-averaging techniques. The main advantages of the proposed concept are: a very simple and highly miniaturizable configuration; lack of a lens; a single LCoS (Liquid Crystal on Silicon) modulator; strong resistance to imperfections and obstructions of the spatial light modulator such as dead pixels, dust, mud, fingerprints, etc.; and simple calculations based on the Fast Fourier Transform (FFT), easily processed in real time on a GPU (Graphics Processing Unit).
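Iterated Fourier holograms of the kind mentioned above are typically computed with a Gerchberg-Saxton-style loop; here is a minimal single-color sketch, in which the target pattern, resolution, and iteration count are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Desired far-field intensity for one color channel (illustrative)
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
amp = np.sqrt(target)                    # target field amplitude
phase = rng.uniform(0, 2 * np.pi, target.shape)

# Gerchberg-Saxton iteration between hologram and image planes
for _ in range(50):
    holo = np.fft.ifft2(amp * np.exp(1j * phase))
    holo_phase = np.angle(holo)          # phase-only SLM constraint
    far_field = np.fft.fft2(np.exp(1j * holo_phase))
    phase = np.angle(far_field)          # keep phase, re-impose amplitude

recon = np.abs(far_field) ** 2           # reconstructed intensity
efficiency = recon[target > 0].sum() / recon.sum()
```

The resulting `holo_phase` is what would be written to the phase-only modulator; running the loop per color channel and multiplexing the three holograms mirrors the scheme described in the abstract.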
Haddock, Luis J; Kim, David Y; Mukai, Shizuo
2013-01-01
Purpose. We describe in detail a relatively simple technique of fundus photography in human and rabbit eyes using a smartphone, an inexpensive app for the smartphone, and instruments that are readily available in an ophthalmic practice. Methods. Fundus images were captured with a smartphone and a 20D lens with or without a Koeppe lens. By using the coaxial light source of the phone, this system works as an indirect ophthalmoscope that creates a digital image of the fundus. We used an app whose software allows independent control of focus, exposure, and light intensity during video filming. With this app, we recorded high-definition videos of the fundus and subsequently extracted high-quality still images from the video clip. Results. The described technique of smartphone fundus photography was able to capture excellent high-quality fundus images both in children under anesthesia and in awake adults. Excellent images were acquired with the 20D lens alone in the clinic, and the addition of the Koeppe lens in the operating room resulted in the best-quality images. Successful photodocumentation of the rabbit fundus was achieved in control and experimental eyes. Conclusion. The currently described system was able to take consistently high-quality fundus photographs in patients and in animals using readily available instruments that are portable, with simple power sources. It is relatively simple to master, is relatively inexpensive, and can take advantage of the expanding mobile-telephone networks for telemedicine.
Two Magnets and a Ball Bearing: A Simple Demonstration of the Methods of Images.
ERIC Educational Resources Information Center
Poon, W. C. K.
2003-01-01
Investigates the behavior of a bar magnet with a steel ball bearing on one pole as it approaches another bar magnet. Maps the problem onto electrostatics and explains observations based on the behavior of point charges near an isolated, uncharged sphere. Offers a simple demonstration of the method of images in electrostatics. (Author/NB)
Soponar, Florin; Moţ, Augustin C; Sârbu, Costel
2010-01-01
A novel HPTLC method was developed for fast and simple quantitative determination of rutin in pharmaceutical preparations. The proposed method combines the advantages of HPTLC with the comfort and the convenience of digital processing of images. For the separation of rutin, silica gel with attached amino groups was used as the stationary phase and ethyl acetate-formic acid-methanol-water (10 + 0.9 + 1.1 + 1.7, v/v/v/v) as the mobile phase. The chromatographic plate was sprayed with 2-aminoethyldiphenyl borate solution for visual detection of the spots. For the construction of a three-dimensional chromatogram, Melanie 7.0 software was used together with an HP flatbed scanner that allowed capture of the images on chromatographic plates. The calibration curve was linear within the range of 0.95-4.78 microg/spot with an r value of 0.9984. The RSD for six replicates at three concentration levels was less than 3%, while the recovery was between 97.28 and 103.27%. The proposed method was found to be simple, precise, sensitive, and accurate and has been applied for the determination of rutin in two commercial drugs. The results were compared with the results of other techniques that generate bidimensional chromatograms and validated by UV-Vis spectrophotometry.
Portable, stand-off spectral imaging camera for detection of effluents and residues
NASA Astrophysics Data System (ADS)
Goldstein, Neil; St. Peter, Benjamin; Grot, Jonathan; Kogan, Michael; Fox, Marsha; Vujkovic-Cvijin, Pajo; Penny, Ryan; Cline, Jason
2015-06-01
A new, compact and portable spectral imaging camera, employing a MEMS-based encoded imaging approach, has been built and demonstrated for detection of hazardous contaminants including gaseous effluents and solid-liquid residues on surfaces. The camera is called the Thermal infrared Reconfigurable Analysis Camera for Effluents and Residues (TRACER). TRACER operates in the long-wave infrared and has the potential to detect a wide variety of materials with characteristic spectral signatures in that region. The 30 lb camera is tripod mounted and battery powered. A touch-screen control panel provides a simple user interface for most operations. The MEMS spatial light modulator is a Texas Instruments Digital Micromirror Device with custom electronics and firmware control. Simultaneous 1D-spatial and 1D-spectral dimensions are collected, with the second spatial dimension obtained by scanning the internal spectrometer slit. The sensor can be configured to collect data in several modes including full hyperspectral imagery using Hadamard multiplexing, panchromatic thermal imagery, and chemical-specific contrast imagery, switched with simple user commands. Matched filters and other analog filters can be generated internally on-the-fly and applied in hardware, substantially reducing detection time and improving SNR over HSI software processing, while reducing storage requirements. Results of preliminary instrument evaluation and measurements of flame exhaust are presented.
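The Hadamard-multiplexing mode can be sketched in a few lines: each measurement sums roughly half of the spectral channels, and the orthogonality of the Hadamard matrix lets the spectrum be decoded exactly. (Real micromirror devices implement 0/1 "S-matrix" patterns rather than the ±1 matrix used here, and the scene spectrum below is made up.)

```python
import numpy as np
from scipy.linalg import hadamard

n = 16
H = hadamard(n)                       # +/-1 orthogonal Hadamard matrix
spectrum = np.linspace(1.0, 2.0, n)   # hypothetical scene spectrum

measurements = H @ spectrum           # multiplexed detector readings
decoded = (H.T @ measurements) / n    # H^T H = n * I, so this inverts exactly
```

Because every reading collects light from many channels at once, the multiplexed scheme improves SNR over scanning one channel at a time when detector noise dominates, which is the advantage the abstract claims for hardware-applied filters.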
CMOS image sensor-based immunodetection by refractive-index change.
Devadhasan, Jasmine P; Kim, Sanghyo
2012-01-01
A complementary metal oxide semiconductor (CMOS) image sensor is an intriguing technology for the development of a novel biosensor. Indeed, the CMOS image sensor mechanism concerning the detection of the antigen-antibody (Ag-Ab) interaction at the nanoscale has been ambiguous so far. To understand the mechanism, more extensive research has been necessary to achieve point-of-care diagnostic devices. This research has demonstrated a CMOS image sensor-based analysis of cardiovascular disease markers, such as C-reactive protein (CRP) and troponin I, Ag-Ab interactions on indium nanoparticle (InNP) substrates by simple photon count variation. The developed sensor is feasible to detect proteins even at a fg/mL concentration under ordinary room light. Possible mechanisms, such as dielectric constant and refractive-index changes, have been studied and proposed. A dramatic change in the refractive index after protein adsorption on an InNP substrate was observed to be a predominant factor involved in CMOS image sensor-based immunoassay.
Inviscid Limit for Damped and Driven Incompressible Navier-Stokes Equations in mathbb R^2
NASA Astrophysics Data System (ADS)
Ramanah, D.; Raghunath, S.; Mee, D. J.; Rösgen, T.; Jacobs, P. A.
2007-08-01
Experiments to demonstrate the use of the background-oriented schlieren (BOS) technique in hypersonic impulse facilities are reported. BOS uses a simple optical set-up consisting of a structured background pattern, an electronic camera with a high shutter speed and a high intensity light source. The visualization technique is demonstrated in a small reflected shock tunnel with a Mach 4 conical nozzle, nozzle supply pressure of 2.2 MPa and nozzle supply enthalpy of 1.8 MJ/kg. A 20° sharp circular cone and a model of the MUSES-C re-entry body were tested. Images captured were processed using PIV-style image analysis to visualize variations in the density field. The shock angle on the cone measured from the BOS images agreed with theoretical calculations to within 0.5°. Shock standoff distances could be measured from the BOS image for the re-entry body. Preliminary experiments are also reported in higher enthalpy facilities where flow luminosity can interfere with imaging of the background pattern.
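The PIV-style analysis of BOS image pairs boils down to locating the cross-correlation peak between interrogation windows of the reference and flow-on images; a circular-correlation sketch via the FFT, with a synthetic pattern and an imposed shift standing in for real density-gradient deflections:

```python
import numpy as np

rng = np.random.default_rng(5)

# Reference background pattern and a "flow-on" image in which density
# gradients have apparently shifted the pattern (here a synthetic roll)
ref = rng.random((32, 32))
flow_on = np.roll(ref, (2, 3), axis=(0, 1))

# Circular cross-correlation via the FFT; the peak location is the shift
c = np.fft.ifft2(np.fft.fft2(flow_on) * np.conj(np.fft.fft2(ref))).real
dy, dx = np.unravel_index(np.argmax(c), c.shape)
```

Repeating this over a grid of small windows yields the apparent-displacement field, whose magnitude maps the density gradients visualized in the experiments above.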
Tripathi, Ashish; Emmons, Erik D; Wilcox, Phillip G; Guicheteau, Jason A; Emge, Darren K; Christesen, Steven D; Fountain, Augustus W
2011-06-01
We have previously demonstrated the use of wide-field Raman chemical imaging (RCI) to detect and identify the presence of trace explosives in contaminated fingerprints. In this current work we demonstrate the detection of trace explosives in contaminated fingerprints on strongly Raman scattering surfaces such as plastics and painted metals using an automated background subtraction routine. We demonstrate the use of partial least squares subtraction to minimize the interfering surface spectral signatures, allowing the detection and identification of explosive materials in the corrected Raman images. The resulting analyses are then visually superimposed on the corresponding bright field images to physically locate traces of explosives. Additionally, we attempt to address the question of whether a complete RCI of a fingerprint is required for trace explosive detection or whether a simple non-imaging Raman spectrum is sufficient. This investigation further demonstrates the ability to nondestructively identify explosives on fingerprints present on commonly found surfaces such that the fingerprint remains intact for further biometric analysis.
NASA Astrophysics Data System (ADS)
Sumriddetchkajorn, Sarun; Chaitavon, Kosom
2009-07-01
This paper introduces a parallel measurement approach for fast infrared-based human temperature screening suitable for use in a large public area. Our key idea is based on the combination of simple image processing algorithms, infrared technology, and human flow management. With this multidisciplinary concept, we arrange as many people as possible in a two-dimensional space in front of a thermal imaging camera and then highlight all human facial areas through simple image filtering, image morphology, and particle analysis processes. In this way, each individual's face in the live thermal image can be located and the maximum facial skin temperature monitored and displayed. Our experiment shows a measured 1 ms processing time for highlighting all human face areas. With a thermal imaging camera having a 24° × 18° FOV lens and 320 × 240 active pixels, the maximum facial skin temperatures of three people's faces located 1.3 m from the camera can be simultaneously monitored and displayed at a measured rate of 31 fps, limited by the looping process that determines the coordinates of all faces. In our 3-day test under ambient temperatures of 24-30 °C, 57-72% relative humidity, and weak wind from outside the hospital building, hyperthermic patients could be identified with 100% sensitivity and 36.4% specificity when the temperature threshold level and the offset temperature value were appropriately chosen. Locating our system away from building doors, air conditioners, and electric fans, so as to eliminate wind blowing toward the camera lens, can significantly improve system specificity.
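The face-highlighting step described above (thresholding the thermal frame, then blob analysis to find each face's maximum temperature) can be sketched roughly as follows. The threshold, minimum blob size, and synthetic frame are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def find_hot_regions(thermal, threshold_c=34.0, min_pixels=4):
    """Locate candidate facial areas in a thermal frame (degrees C) by
    simple thresholding followed by 4-connected flood fill, reporting
    each area's maximum skin temperature."""
    mask = thermal > threshold_c
    seen = np.zeros_like(mask, dtype=bool)
    regions = []
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        stack, pixels = [(sy, sx)], []
        seen[sy, sx] = True
        while stack:
            y, x = stack.pop()
            pixels.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] \
                        and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        if len(pixels) >= min_pixels:
            # maximum temperature within this face-like blob
            regions.append(float(max(thermal[y, x] for y, x in pixels)))
    return regions

# Synthetic frame: two warm blobs on a 28 C background
frame = np.full((40, 60), 28.0)
frame[5:12, 10:17] = 36.5
frame[20:28, 40:48] = 37.2
print(find_hot_regions(frame))  # -> [36.5, 37.2]
```

In a real system each blob's maximum would then be compared against a fever threshold plus an offset, as the abstract describes.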
Comparison of air space measurement imaged by CT, small-animal CT, and hyperpolarized Xe MRI
NASA Astrophysics Data System (ADS)
Madani, Aniseh; White, Steven; Santyr, Giles; Cunningham, Ian
2005-04-01
Lung disease is the third leading cause of death in the western world. Lung air volume measurements are thought to be early indicators of lung disease and markers in pharmaceutical research. The purpose of this work is to develop a lung phantom for assessing and comparing the quantitative accuracy of hyperpolarized xenon-129 magnetic resonance imaging (HP 129Xe MRI), conventional high-resolution computed tomography (HRCT), and high-resolution small-animal CT (μCT) in measuring lung gas volumes. We developed a lung phantom consisting of solid cellulose acetate spheres (1, 2, 3, 4 and 5 mm diameter) uniformly packed in circulated air or HP 129Xe gas. Air volume is estimated using a simple thresholding algorithm. Truth is calculated from the sphere diameters and validated using μCT. While this phantom is not anthropomorphic, it enables us to directly measure air space volume and compare these imaging methods as a function of sphere diameter for the first time. HP 129Xe MRI requires partial volume analysis to distinguish regions with and without 129Xe gas, and results are within 5% of truth, but settling of the heavy 129Xe gas complicates this analysis. Conventional CT demonstrated partial-volume artifacts for the 1 mm spheres. μCT gives the most accurate air-volume results. Conventional CT and HP 129Xe MRI give similar results, although non-uniform densities of 129Xe require more sophisticated algorithms than simple thresholding. The threshold required to give the true air volume in both HRCT and μCT varies with sphere diameter, calling into question the validity of the thresholding method.
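The simple thresholding used to estimate air volume might look like the sketch below. The intensity values, air fraction, and threshold are hypothetical stand-ins for real CT numbers, chosen only to illustrate the counting step:

```python
import numpy as np

def air_volume_fraction(image, threshold):
    """Estimate the air-space fraction of a phantom image by simple
    thresholding: voxels below `threshold` are counted as air."""
    return float((image < threshold).mean())

# Synthetic illustration: solid spheres (high intensity, ~100 HU-like)
# mixed with air gaps (~-1000), roughly 36% air as in random sphere packing
rng = np.random.default_rng(0)
solid = rng.normal(loc=100.0, scale=5.0, size=(64, 64))
air = rng.normal(loc=-1000.0, scale=5.0, size=(64, 64))
phantom = np.where(rng.random((64, 64)) < 0.36, air, solid)
print(air_volume_fraction(phantom, threshold=-450.0))  # close to 0.36
```

As the abstract notes, with partial-volume effects the threshold that recovers the true fraction shifts with sphere size, which is exactly why a single fixed cutoff is suspect.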
Siegel, Nisan; Storrie, Brian; Bruce, Marc
2016-01-01
FINCH holographic fluorescence microscopy creates high resolution super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called “CINCH”. An important feature of the combined system allows for the simultaneous real-time image capture of widefield and holographic images or confocal and confocal holographic images for ready comparison of each method on the exact same field of view. Additional GPU based complex deconvolution processing of the images further enhances resolution. PMID:26839443
NASA Astrophysics Data System (ADS)
Habte, Frezghi; Natarajan, Arutselvan; Paik, David S.; Gambhir, Sanjiv S.
2014-03-01
Cerenkov luminescence imaging (CLI) is an emerging cost-effective modality that uses conventional small animal optical imaging systems and clinically available radionuclide probes for light emission. CLI has shown good correlation with PET for organs of high uptake such as kidney, spleen, thymus and subcutaneous tumors in mouse models. However, CLI has limitations for deep tissue quantitative imaging, since the blue-weighted spectrum of Cerenkov radiation is strongly attenuated by mammalian tissue. Large organs such as the liver have also shown higher signal due to the contribution of light emitted from a greater thickness of tissue. In this study, we developed a simple model that estimates the effective tissue attenuation coefficient in order to correct the CLI signal intensity using a priori estimated depth and thickness of specific organs. We used several thin slices of ham to build a phantom with realistic attenuation. We placed radionuclide sources inside the phantom at different tissue depths and imaged it using an IVIS Spectrum (Perkin-Elmer, Waltham, MA, USA) and an Inveon microPET (Preclinical Solutions Siemens, Knoxville, TN). We also performed CLI and PET of mouse models and applied the proposed attenuation model to correct CLI measurements. Using calibration factors obtained from the phantom study that convert the corrected CLI measurements to %ID/g, we obtained an average difference of less than 10% for spleen and less than 35% for liver compared to conventional PET measurements. Hence, the proposed model is capable of correcting the CLI signal to provide measurements comparable with PET data.
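A minimal sketch of the kind of depth correction described, assuming simple Beer-Lambert decay governed by an effective attenuation coefficient. The coefficient and depth used here are invented for illustration and are not the study's fitted values:

```python
import math

def correct_cli_signal(measured, depth_cm, mu_eff_per_cm):
    """Depth-correct a Cerenkov luminescence signal assuming simple
    Beer-Lambert attenuation through overlying tissue:
    I_source ~= I_measured * exp(mu_eff * depth)."""
    return measured * math.exp(mu_eff_per_cm * depth_cm)

# Hypothetical numbers: mu_eff and organ depth are assumptions only.
corrected = correct_cli_signal(measured=1.0e5, depth_cm=0.5, mu_eff_per_cm=2.3)
print(corrected)  # the deeper the organ, the larger the correction
```

A calibration factor (from a phantom, as in the study) would then map this corrected intensity to %ID/g for comparison with PET.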
A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.
Saccà, Alessandro
2016-01-01
Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resources demand, the new method can conveniently be used in substitution of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.
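The geometric core of the method, the sphere-inscribed-in-a-cylinder relation (the sphere's volume is two thirds of the cylinder's), can be illustrated as below. The exact definition of the 'unellipticity' coefficient is given in the paper, so here it is treated simply as a multiplicative factor defaulting to 1:

```python
import math

def biovolume(area_um2, width_um, unellipticity=1.0):
    """Estimate cell volume from a 2D projection using the
    sphere-in-cylinder relation (V_sphere = 2/3 * V_cylinder):
    V ~= (2/3) * cross-sectional area * width, scaled by an
    'unellipticity' coefficient (1.0 for a sphere)."""
    return (2.0 / 3.0) * area_um2 * width_um * unellipticity

# Sanity check on a sphere of diameter 10 um:
d = 10.0
area = math.pi * d ** 2 / 4.0          # projected disc area
print(biovolume(area, d))              # matches pi * d^3 / 6
print(math.pi * d ** 3 / 6.0)
```

Both inputs (projected area and a linear width) are exactly the quantities any image analysis system measuring distances and cross-sectional areas can supply, which is the method's practical appeal.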
I'll take that to go: Big data bags and minimal identifiers for exchange of large, complex datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chard, Kyle; D'Arcy, Mike; Heavner, Benjamin D.
Big data workflows often require the assembly and exchange of complex, multi-element datasets. For example, in biomedical applications, the input to an analytic pipeline can be a dataset consisting of thousands of images and genome sequences assembled from diverse repositories, requiring a description of the contents of the dataset in a concise and unambiguous form. Typical approaches to creating datasets for big data workflows assume that all data reside in a single location, requiring costly data marshaling and permitting errors of omission and commission because dataset members are not explicitly specified. We address these issues by proposing simple methods and tools for assembling, sharing, and analyzing large and complex datasets that scientists can easily integrate into their daily workflows. These tools combine a simple and robust method for describing data collections (BDBags), data descriptions (Research Objects), and simple persistent identifiers (Minids) to create a powerful ecosystem of tools and services for big data analysis and sharing. We present these tools and use biomedical case studies to illustrate their use for the rapid assembly, sharing, and analysis of large datasets.
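The BDBag approach builds on the BagIt packaging convention; a minimal sketch of the underlying idea (a checksummed payload plus a manifest that specifies every member explicitly) is shown below. This is not the bdbag tool itself, just an illustration of the packaging pattern:

```python
import hashlib
import tempfile
from pathlib import Path

def make_minimal_bag(bag_dir: Path, payload: dict):
    """Create a minimal BagIt-style bag: payload files under data/,
    a bagit.txt declaration, and a manifest-sha256.txt of checksums.
    A sketch of the idea behind BDBags, not the bdbag library."""
    data = bag_dir / "data"
    data.mkdir(parents=True, exist_ok=True)
    lines = []
    for name, content in payload.items():
        (data / name).write_bytes(content)
        digest = hashlib.sha256(content).hexdigest()
        lines.append(f"{digest}  data/{name}")
    (bag_dir / "bagit.txt").write_text(
        "BagIt-Version: 1.0\nTag-File-Character-Encoding: UTF-8\n")
    (bag_dir / "manifest-sha256.txt").write_text("\n".join(lines) + "\n")

with tempfile.TemporaryDirectory() as tmp:
    bag = Path(tmp) / "mybag"
    make_minimal_bag(bag, {"genome.txt": b"ACGT"})
    print((bag / "manifest-sha256.txt").read_text())
```

Because every member is listed with its checksum, a receiver can verify completeness and integrity without the data ever having been marshaled to a single location; a Minid would then name the bag itself.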
Adaptations and Analysis of the AFIT Noise Radar Network for Indoor Navigation
2013-03-01
capable of producing bistatic/multistatic radar images. NTR is unique because it utilizes amplified random thermal noise as its transmission waveform...structure and operation of NTR is described. A minutia of the EM theory describing the various phenomena found when operating RF devices in indoor...construction of NTR is simple in comparison to other CW radars. The system begins with a commercial thermal noise source, which produces a uniform
Experiments on automatic classification of tissue malignancy in the field of digital pathology
NASA Astrophysics Data System (ADS)
Pereira, J.; Barata, R.; Furtado, Pedro
2017-06-01
Automated analysis of histological images helps diagnose and further classify breast cancer. Fully automated approaches can be used to pinpoint images for further analysis by the medical doctor. But tissue images are especially challenging for either manual or automated approaches, due to mixed patterns and textures, where malignant regions are sometimes difficult to detect unless they are in very advanced stages. Some of the major challenges relate to irregular and very diffuse patterns, as well as the difficulty of defining winning features and classifier models. Although the diffuse nature of the tissue also makes correct segmentation into regions hard, it is still crucial to take low-level features over individual regions instead of the whole image, and to select those features with the best outcomes. In this paper we report on our experiments building a region classifier with a simple subspace division and a feature selection model that improves results over image-wide and/or limited feature sets. Experimental results show modest accuracy for a set of classifiers applied over the whole image, while the conjunction of image division, per-region extraction of low-level features and feature selection, together with a neural network classifier, achieved the best accuracy for the dataset and settings we used in the experiments. Future work involves deep learning techniques, adding structural semantics and embedding the approach as a tumor-finding helper in a practical medical imaging application.
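The subspace-division idea, extracting low-level features per region rather than one vector for the whole image, can be sketched as follows. The grid size and the mean/standard-deviation features are simplifying assumptions, not the authors' full feature set:

```python
import numpy as np

def region_features(image, grid=(4, 4)):
    """Split an image into a simple grid of regions and extract
    low-level features (mean and standard deviation) per region,
    rather than one feature vector for the whole image."""
    h, w = image.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            tile = image[i * h // gh:(i + 1) * h // gh,
                         j * w // gw:(j + 1) * w // gw]
            feats.append([tile.mean(), tile.std()])
    return np.array(feats)  # one row of features per region

rng = np.random.default_rng(1)
img = rng.random((64, 64))
img[0:16, 0:16] += 2.0          # a "suspicious" high-intensity region
f = region_features(img)
print(f.shape)                   # (16, 2): 16 regions, 2 features each
print(int(np.argmax(f[:, 0])))   # region 0 stands out by mean intensity
```

Each region's feature row would then be fed to a classifier (a neural network in the paper), so a localized malignant pattern is not diluted by the rest of the image.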
A review of novel optical imaging strategies of the stroke pathology and stem cell therapy in stroke
Aswendt, Markus; Adamczak, Joanna; Tennstaedt, Annette
2014-01-01
Transplanted stem cells can induce and enhance functional recovery in experimental stroke. Invasive analysis has been extensively used to provide detailed cellular and molecular characterization of the stroke pathology and engrafted stem cells. However, post mortem analysis cannot reveal the time course of the dynamic interplay between the cell graft, the ischemic lesion and the endogenous repair mechanisms. This review describes non-invasive imaging techniques which have been developed to provide complementary in vivo information. Recent advances were made in simultaneously analyzing different aspects of the cell graft (e.g., number of cells, viability state, and cell fate), the ischemic lesion (e.g., blood–brain-barrier integrity, hypoxic, and necrotic areas) and the neuronal and vascular network. We focus on optical methods, which permit simple animal preparation and repeatable experimental conditions, require relatively moderate-cost instrumentation, and are performed under mild anesthesia, thus nearly under physiological conditions. A selection of recent examples of optical intrinsic imaging, fluorescence imaging and bioluminescence imaging to characterize the stroke pathology and engrafted stem cells is discussed. Special attention is paid to novel optimal reporter genes/probes for genetic labeling and tracking of stem cells and to appropriate transgenic animal models. Requirements, advantages and limitations of these imaging platforms are critically discussed and placed into the context of other non-invasive techniques, e.g., magnetic resonance imaging and positron emission tomography, which can be joined with optical imaging in multimodal approaches. PMID:25177269
Kuswandi, Bambang; Irmawati, Titi; Hidayat, Moch Amrun; Jayus; Ahmad, Musa
2014-01-01
A simple visual ethanol biosensor based on alcohol oxidase (AOX) immobilised onto a polyaniline (PANI) film for halal verification of fermented beverage samples is described. This biosensor responds to ethanol via a colour change from green to blue, due to the enzymatic reaction of ethanol that produces acetaldehyde and hydrogen peroxide; the latter oxidizes the PANI film. The procedure to obtain this biosensor consists of the immobilisation of AOX onto the PANI film by adsorption. For the immobilisation, an AOX solution is deposited on the PANI film and left at room temperature until dried (30 min). The biosensor was constructed as a dip stick for simple visual use. The colour changes of the films were scanned and analysed using image analysis software (i.e., ImageJ) to study the characteristics of the biosensor's response toward ethanol. The biosensor has a linear response in an ethanol concentration range of 0.01%–0.8%, with a correlation coefficient (r) of 0.996. The limit of detection of the biosensor was 0.001%, with a reproducibility (RSD) of 1.6% and a lifetime of up to seven weeks when stored at 4 °C. The biosensor provides accurate results for ethanol determination in fermented drinks, in good agreement with the standard method (gas chromatography). Thus, the biosensor could be used as a simple visual method for ethanol determination in fermented beverage samples, which can be useful to the Muslim community for halal verification. PMID:24473284
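A calibration of the kind described (a linear colour response versus ethanol concentration, then inversion for unknown samples) might be sketched like this. The response numbers are fabricated for illustration and are not the paper's data:

```python
import numpy as np

# Hypothetical calibration data: green-to-blue colour shift scores
# (from image analysis) versus % ethanol; illustrative values only.
conc = np.array([0.01, 0.1, 0.2, 0.4, 0.8])          # % ethanol
response = np.array([3.0, 21.0, 42.0, 80.0, 161.0])  # arbitrary colour units

# Fit the linear calibration curve (response = slope * conc + intercept)
slope, intercept = np.polyfit(conc, response, 1)

def estimate_ethanol(sample_response):
    """Invert the linear calibration to estimate % ethanol."""
    return (sample_response - intercept) / slope

print(round(estimate_ethanol(100.0), 3))  # roughly 0.5% for these numbers
```

In practice the colour score would come from ImageJ measurements of the scanned dip stick, and the fit quality (r about 0.996 in the paper) bounds how far the inversion can be trusted.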
Macyszyn, Luke; Akbari, Hamed; Pisapia, Jared M.; Da, Xiao; Attiah, Mark; Pigrish, Vadim; Bi, Yingtao; Pal, Sharmistha; Davuluri, Ramana V.; Roccograndi, Laura; Dahmane, Nadia; Martinez-Lage, Maria; Biros, George; Wolf, Ronald L.; Bilello, Michel; O'Rourke, Donald M.; Davatzikos, Christos
2016-01-01
Background MRI characteristics of brain gliomas have been used to predict clinical outcome and molecular tumor characteristics. However, previously reported imaging biomarkers have not been sufficiently accurate or reproducible to enter routine clinical practice and often rely on relatively simple MRI measures. The current study leverages advanced image analysis and machine learning algorithms to identify complex and reproducible imaging patterns predictive of overall survival and molecular subtype in glioblastoma (GB). Methods One hundred five patients with GB were first used to extract approximately 60 diverse features from preoperative multiparametric MRIs. These imaging features were used by a machine learning algorithm to derive imaging predictors of patient survival and molecular subtype. Cross-validation ensured generalizability of these predictors to new patients. Subsequently, the predictors were evaluated in a prospective cohort of 29 new patients. Results Survival curves yielded a hazard ratio of 10.64 for predicted long versus short survivors. The overall, 3-way (long/medium/short survival) accuracy in the prospective cohort approached 80%. Classification of patients into the 4 molecular subtypes of GB achieved 76% accuracy. Conclusions By employing machine learning techniques, we were able to demonstrate that imaging patterns are highly predictive of patient survival. Additionally, we found that GB subtypes have distinctive imaging phenotypes. These results reveal that when imaging markers related to infiltration, cell density, microvascularity, and blood–brain barrier compromise are integrated via advanced pattern analysis methods, they form very accurate predictive biomarkers. These predictive markers used solely preoperative images, hence they can significantly augment diagnosis and treatment of GB patients. PMID:26188015
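The cross-validation strategy used to keep the imaging predictors generalizable can be illustrated in miniature. Here a deliberately simple nearest-centroid classifier stands in for the paper's machine learning algorithm, and the toy "imaging features" are synthetic:

```python
import numpy as np

def nearest_centroid_cv(X, y, k=5, seed=0):
    """K-fold cross-validation of a nearest-centroid classifier:
    for each fold, fit class centroids on the remaining folds and
    score the held-out samples, mimicking (in miniature) validation
    of imaging predictors on held-out patients."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    correct = 0
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y[train])}
        for i in fold:
            pred = min(centroids,
                       key=lambda c: np.linalg.norm(X[i] - centroids[c]))
            correct += pred == y[i]
    return correct / len(X)

# Toy features for two well-separated "survival" groups
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(4, 1, (30, 4))])
y = np.array([0] * 30 + [1] * 30)
print(nearest_centroid_cv(X, y))  # near-perfect on separable toy data
```

The essential point mirrored here is that every accuracy figure is computed only on samples the model never saw during fitting, which is what supports the prospective-cohort claims in the abstract.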
Quantitative image analysis of immunohistochemical stains using a CMYK color model
Pham, Nhu-An; Morrison, Andrew; Schwock, Joerg; Aviel-Ronen, Sarit; Iakovlev, Vladimir; Tsao, Ming-Sound; Ho, James; Hedley, David W
2007-01-01
Background Computer image analysis techniques have decreased effects of observer biases, and increased the sensitivity and the throughput of immunohistochemistry (IHC) as a tissue-based procedure for the evaluation of diseases. Methods We adapted a Cyan/Magenta/Yellow/Key (CMYK) model for automated computer image analysis to quantify IHC stains in hematoxylin counterstained histological sections. Results The spectral characteristics of the chromogens AEC, DAB and NovaRed as well as the counterstain hematoxylin were first determined using CMYK, Red/Green/Blue (RGB), normalized RGB and Hue/Saturation/Lightness (HSL) color models. The contrast of chromogen intensities on a 0–255 scale (24-bit image file) as well as compared to the hematoxylin counterstain was greatest using the Yellow channel of a CMYK color model, suggesting an improved sensitivity for IHC evaluation compared to other color models. An increase in activated STAT3 levels due to growth factor stimulation, quantified using the Yellow channel image analysis was associated with an increase detected by Western blotting. Two clinical image data sets were used to compare the Yellow channel automated method with observer-dependent methods. First, a quantification of DAB-labeled carbonic anhydrase IX hypoxia marker in 414 sections obtained from 138 biopsies of cervical carcinoma showed strong association between Yellow channel and positive color selection results. Second, a linear relationship was also demonstrated between Yellow intensity and visual scoring for NovaRed-labeled epidermal growth factor receptor in 256 non-small cell lung cancer biopsies. Conclusion The Yellow channel image analysis method based on a CMYK color model is independent of observer biases for threshold and positive color selection, applicable to different chromogens, tolerant of hematoxylin, sensitive to small changes in IHC intensity and is applicable to simple automation procedures. 
These characteristics are advantageous for both basic as well as clinical research in an unbiased, reproducible and high throughput evaluation of IHC intensity. PMID:17326824
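The Yellow-channel computation above rests on the standard RGB-to-CMYK conversion; a sketch is given below, with made-up pixel values standing in for a DAB-like chromogen and a hematoxylin-like counterstain:

```python
import numpy as np

def yellow_channel(rgb):
    """Convert an RGB image (uint8, 0-255) to the Yellow channel of a
    CMYK model. Standard conversion with channels scaled to [0, 1]:
    K = 1 - max(R, G, B);  Y = (1 - B - K) / (1 - K)."""
    rgb = rgb.astype(float) / 255.0
    k = 1.0 - rgb.max(axis=-1)
    with np.errstate(invalid="ignore", divide="ignore"):
        y = (1.0 - rgb[..., 2] - k) / (1.0 - k)
    return np.nan_to_num(y)  # pure black maps to Y = 0

# Brownish DAB-like pixel vs. bluish hematoxylin-like pixel
pixels = np.array([[[139, 69, 19],     # brown: strong Yellow component
                    [70, 70, 160]]],   # blue: Yellow near zero
                  dtype=np.uint8)
print(yellow_channel(pixels).round(3))
```

The example shows why the abstract reports the Yellow channel as tolerant of hematoxylin: a bluish counterstain pixel contributes almost nothing to Y, while a brown chromogen pixel scores high, giving the contrast needed for unbiased thresholding.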
Detection and recognition of simple spatial forms
NASA Technical Reports Server (NTRS)
Watson, A. B.
1983-01-01
A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.
NASA Astrophysics Data System (ADS)
Devadhasan, Jasmine P.; Kim, Sanghyo
2015-07-01
Complementary metal oxide semiconductor (CMOS) image sensors have received great attention for their high efficiency in biological applications. The present work describes a CMOS image sensor-based whole blood glucose monitoring system using a point-of-care (POC) approach. A simple poly-ethylene terephthalate (PET) film chip was developed to carry out the enzyme kinetic reaction at various concentrations of blood glucose. In this technique, assay reagent was adsorbed onto amine-functionalized silica (AFSiO2) nanoparticles in order to achieve glucose oxidation on the PET film chip. The AFSiO2 nanoparticles immobilize the assay reagent through electrostatic attraction and facilitated the development of an opaque platform technically suited to analysis by the camera module. The oxidized glucose then produces a green color according to the glucose concentration and is analyzed by the camera module using a photon detection technique; the photon number decreases with increasing glucose concentration. This simple sensing approach, combining an enzyme-immobilized AFSiO2 nanoparticle chip with a photon-based assay detection method, was developed for quantitative glucose measurement.
Brooks, Tessa L Durham; Miller, Nathan D; Spalding, Edgar P
2010-01-01
Plant development is genetically determined but it is also plastic, a fundamental duality that can be investigated provided a large number of measurements can be made in various conditions. Plasticity of gravitropism in wild-type Arabidopsis (Arabidopsis thaliana) seedling roots was investigated using automated image acquisition and analysis. A bank of computer-controlled charge-coupled device cameras acquired images with high spatiotemporal resolution. Custom image analysis algorithms extracted time course measurements of tip angle and growth rate. Twenty-two discrete conditions defined by seedling age (2, 3, or 4 d), seed size (extra small, small, medium, or large), and growth medium composition (simple or rich) formed the condition space sampled with 1,216 trials. Computational analyses including dimension reduction by principal components analysis, classification by k-means clustering, and differentiation by wavelet convolution showed distinct response patterns within the condition space, i.e. response plasticity. For example, 2-d-old roots (regardless of seed size) displayed a response time course similar to those of roots from large seeds (regardless of age). Enriching the growth medium with nutrients suppressed response plasticity along the seed size and age axes, possibly by ameliorating a mineral deficiency, although analysis of seeds did not identify any elements with low levels on a per weight basis. Characterizing relationships between growth rate and tip swing rate as a function of condition cast gravitropism in a multidimensional response space that provides new mechanistic insights as well as conceptually setting the stage for mutational analysis of plasticity in general and root gravitropism in particular.
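The dimension-reduction step (principal components analysis applied to tip-angle time courses) can be sketched as follows; the synthetic "fast" and "slow" reorientation curves are invented for illustration:

```python
import numpy as np

def pca(X, n_components=2):
    """Dimension reduction by principal components analysis via SVD,
    as one might apply to summarize root tip-angle time courses."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T            # per-sample PC scores
    explained = (S ** 2) / (S ** 2).sum()        # variance ratios
    return scores, explained[:n_components]

# Toy time courses: tip angle (degrees) relaxing fast vs. slow
t = np.linspace(0, 1, 50)
rng = np.random.default_rng(0)
fast = 90 * np.exp(-5 * t) + rng.normal(0, 2, (20, 50))
slow = 90 * np.exp(-1 * t) + rng.normal(0, 2, (20, 50))
scores, var = pca(np.vstack([fast, slow]))
print(scores.shape)        # (40, 2): one 2D summary per trial
print(var[0].round(3))     # first PC dominates: the fast/slow contrast
```

With 1,216 trials, PC scores like these are what a k-means step can then cluster into distinct response patterns across the condition space.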
Barczuk-Falęcka, M; Małek, Ł A; Roik, D; Werys, K; Werner, B; Brzewski, M
2018-06-01
To assess the accuracy of simple cardiovascular magnetic resonance imaging (CMR) parameters for first-line analysis of right ventricle (RV) dysfunction in children, to identify those who require in-depth analysis and those in whom simple assessment is sufficient. Sixty paediatric CMR studies were analysed. The following CMR parameters were measured: RV end-diastolic and end-systolic area (4CH EDA and 4CH ESA), fractional area change (FAC), RV diameter in end-diastole (RVD1), tricuspid annular plane systolic excursion (TAPSE), and RV outflow tract diameter in end-diastole (RVOT prox). They were correlated with RV end-diastolic volume (RVEDVI) and RV ejection fraction (RVEF). RVEDVI correlated best with 4CH ESA (r=0.85, p<0.001) and EDA (r=0.82, p<0.001). For RVEF, only a moderate inverse correlation was found for 4CH ESA (r=-0.56, p<0.001) and 4CH EDA (r=-0.49, p=0.001), and a positive correlation for FAC (r=0.49, p<0.001). There was no correlation between TAPSE and RVEF and only a weak correlation between RVD1 and RVEDVI. A 4CH ESA cut-off value of 8.5 cm²/m² had a very high diagnostic accuracy for predicting an enlarged RV (AUC=0.912, p<0.001, sensitivity 92.3%, specificity 79%), and a cut-off value of 10.5 cm²/m² was also a good predictor of depressed RV systolic function (AUC=0.873, p<0.001, sensitivity 83%, specificity 89%). For routine screening in clinical practice, 4CH ESA appears to be a reliable and easy method to identify patients with RV dysfunction. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
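Evaluating a cut-off such as the 4CH ESA threshold reduces to counting true and false positives and negatives; a sketch with hypothetical measurements (not the study's data) is shown below:

```python
def sens_spec(values, labels, cutoff):
    """Sensitivity and specificity of a simple cutoff rule:
    predict 'abnormal' when the measured value exceeds the cutoff,
    mirroring how a 4CH ESA threshold flags an enlarged RV."""
    tp = sum(v > cutoff and l for v, l in zip(values, labels))
    fn = sum(v <= cutoff and l for v, l in zip(values, labels))
    tn = sum(v <= cutoff and not l for v, l in zip(values, labels))
    fp = sum(v > cutoff and not l for v, l in zip(values, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 4CH ESA values (cm^2/m^2); True = enlarged RV by RVEDVI
esa = [6.0, 7.2, 8.0, 9.0, 8.2, 10.2, 11.5, 12.0]
enlarged = [False, False, False, False, True, True, True, True]
print(sens_spec(esa, enlarged, cutoff=8.5))  # -> (0.75, 0.75)
```

Sweeping the cutoff and plotting sensitivity against (1 - specificity) traces the ROC curve whose area (AUC=0.912 in the study) summarizes the threshold's overall discriminative power.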
NASA Astrophysics Data System (ADS)
Selsam, Peter; Schwartze, Christian
2016-10-01
Providing software solutions via the internet has been known for quite some time and is now an increasing trend marketed as "software as a service". Many business units embrace the new methods and streamlined IT strategies by offering web-based infrastructures for external software usage, but geospatial applications featuring very specialized services or functionalities on demand are still rare. Originally applied in desktop environments, the ILMSimage tool for remote sensing image analysis and classification was modified in its communication structures and enabled to run on a high-power server, benefiting from Taverna software. On top, a GIS-like, web-based user interface guides the user through the different steps in ILMSimage. ILMSimage combines object-oriented image segmentation with pattern recognition features. Basic image elements form a construction set for modeling large image objects with diverse and complex appearance. There is no need for the user to set up detailed object definitions. Training is done by delineating one or more typical examples (templates) of the desired object using a simple vector polygon. The template can be large and does not need to be homogeneous. The template is completely independent of the segmentation. The object definition is done entirely by the software.
Levels of detail analysis of microwave scattering from human head models for brain stroke detection
2017-01-01
In this paper, we have presented a microwave scattering analysis based on multiple human head models. This study incorporates different levels of detail in the human head models and their effect on the microwave scattering phenomenon. Two levels of detail are taken into account: (i) a simplified ellipse-shaped head model and (ii) an anatomically realistic head model, implemented using 2-D geometry. In addition, the heterogeneous and frequency-dispersive behavior of the brain tissues has also been incorporated into our head models. It was identified during this study that the microwave scattering phenomenon changes significantly once the complexity of the head model is increased by incorporating more details using a magnetic resonance imaging database. It was also found that the microwave scattering results match in both types of head model (i.e., geometrically simple and anatomically realistic) once the measurements are made in the structurally simplified regions. However, the results diverge considerably in the complex areas of the brain due to the arbitrarily shaped interfaces of tissue layers in the anatomically realistic head model. After incorporating the various levels of detail, the solution of the subject microwave scattering problem and the measurement of transmitted and backscattered signals were obtained using the finite element method. A mesh convergence analysis was also performed to achieve error-free results with a minimum number of mesh elements and fewer degrees of freedom, in a fast computational time. The results were promising and the E-field values converged for both the simple and the complex geometrical models. However, the E-field difference between the two types of head model at the same reference point differed considerably in magnitude: at a complex location, a high difference value of 0.04236 V/m was measured, compared to 0.00197 V/m at a simple location.
This study also provides a comparison of direct and iterative solvers for the subject microwave scattering problem with respect to computation time and memory requirements. The study shows that microwave imaging may be effectively utilized for the detection, localization, and differentiation of different types of brain stroke. The simulation results verified that microwave imaging can efficiently exploit the significant contrast between the electric field values of normal and abnormal brain tissues to investigate brain anomalies. Finally, a specific absorption rate analysis was carried out to assess the absorption of microwave signals in the different types of head model against a safety factor for brain tissues. After a careful study of the inversion methods in practice for microwave head imaging, we also suggest that the contrast source inversion method may be the most suitable and computationally efficient for such problems. PMID:29177115
NASA Astrophysics Data System (ADS)
Volkov, Boris; Mathews, Marlon S.; Abookasis, David
2015-03-01
Multispectral imaging has received significant attention over the last decade because it integrates spectroscopy, imaging, and tomographic analysis concurrently to acquire both spatial and spectral information from biological tissue. In the present study, a multispectral setup based on the projection of structured illumination at several near-infrared wavelengths and at different spatial frequencies is applied to quantitatively assess brain function before, during, and after the onset of traumatic brain injury in the intact mouse brain (n=5). Head injury was produced by the weight-drop method, in which a cylindrical metallic rod falling along a metal tube strikes the mouse's head. Structured light was projected onto the scalp surface, and diffusely reflected light was recorded by a CCD camera positioned perpendicular to the mouse head. Following data analysis, we were able to concurrently show a series of hemodynamic and morphologic changes over time, including increased deoxyhemoglobin, reduced oxygen saturation, and cell swelling, in comparison with baseline measurements. Overall, the results demonstrate the capability of multispectral imaging based on structured illumination to detect and map brain tissue optical and physiological properties following brain injury in a simple, noninvasive, and noncontact manner.
NASA Technical Reports Server (NTRS)
Tilton, James C.; Cook, Diane J.
2008-01-01
Under a project recently selected for funding by NASA's Science Mission Directorate under the Applied Information Systems Research (AISR) program, Tilton and Cook will design and implement the integration of the Subdue graph based knowledge discovery system, developed at the University of Texas Arlington and Washington State University, with image segmentation hierarchies produced by the RHSEG software, developed at NASA GSFC, and perform pilot demonstration studies of data analysis, mining and knowledge discovery on NASA data. Subdue represents a method for discovering substructures in structural databases. Subdue is devised for general-purpose automated discovery, concept learning, and hierarchical clustering, with or without domain knowledge. Subdue was developed by Cook and her colleague, Lawrence B. Holder. For Subdue to be effective in finding patterns in imagery data, the data must be abstracted up from the pixel domain. An appropriate abstraction of imagery data is a segmentation hierarchy: a set of several segmentations of the same image at different levels of detail in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. The RHSEG program, a recursive approximation to a Hierarchical Segmentation approach (HSEG), can produce segmentation hierarchies quickly and effectively for a wide variety of images. RHSEG and HSEG were developed at NASA GSFC by Tilton. In this presentation we provide background on the RHSEG and Subdue technologies and present a preliminary analysis on how RHSEG and Subdue may be combined to enhance image data analysis, mining and knowledge discovery.
Kwan, Alex C; Dietz, Shelby B; Zhong, Guisheng; Harris-Warrick, Ronald M; Webb, Watt W
2010-12-01
In rhythmic neural circuits, a neuron often fires action potentials with a constant phase to the rhythm, a timing relationship that can be functionally significant. To characterize these phase preferences in a large-scale, cell type-specific manner, we adapted multitaper coherence analysis for two-photon calcium imaging. Analysis of simulated data showed that coherence is a simple and robust measure of rhythmicity for calcium imaging data. When applied to the neonatal mouse hindlimb spinal locomotor network, the phase relationships between peak activity of >1,000 ventral spinal interneurons and motor output were characterized. Most interneurons showed rhythmic activity that was coherent and in phase with the ipsilateral motor output during fictive locomotion. The phase distributions of two genetically identified classes of interneurons were distinct from the ensemble population and from each other. There was no obvious spatial clustering of interneurons with similar phase preferences. Together, these results suggest that cell type, not neighboring neuron activity, is a better indicator of an interneuron's response during fictive locomotion. The ability to measure the phase preferences of many neurons with cell type and spatial information should be widely applicable for studying other rhythmic neural circuits.
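The coherence-and-phase measurement described above can be sketched with a segment-averaged (Welch-style) cross-spectral estimate. The study uses a multitaper estimator, so this pure-NumPy version is a simplified stand-in; the function name and parameters are illustrative, not the authors' API.

```python
import numpy as np

def coherence_phase(x, y, fs, nseg=8):
    """Welch-averaged magnitude-squared coherence and cross-spectral phase
    between a calcium trace x and a rhythm reference y (simplified stand-in
    for the multitaper estimator used in the study)."""
    n = len(x) // nseg
    win = np.hanning(n)
    pxx = pyy = 0.0
    pxy = 0.0 + 0.0j
    for k in range(nseg):
        xs = (x[k*n:(k+1)*n] - np.mean(x[k*n:(k+1)*n])) * win
        ys = (y[k*n:(k+1)*n] - np.mean(y[k*n:(k+1)*n])) * win
        X, Y = np.fft.rfft(xs), np.fft.rfft(ys)
        pxx = pxx + np.abs(X)**2
        pyy = pyy + np.abs(Y)**2
        pxy = pxy + X * np.conj(Y)
    coh = np.abs(pxy)**2 / (pxx * pyy + 1e-12)   # averaging makes coh < 1 for noise
    return np.fft.rfftfreq(n, 1.0/fs), coh, np.angle(pxy)
```

The phase of the cross-spectrum at the locomotor frequency gives the neuron's phase preference relative to the motor output; high coherence at that frequency marks the cell as rhythmic.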
NASA Astrophysics Data System (ADS)
Lee, Minsuk; Won, Youngjae; Park, Byungjun; Lee, Seungrag
2017-02-01
Not only the static but also the dynamic characteristics of red blood cells (RBCs) contain useful information for blood diagnosis. Quantitative phase imaging (QPI) can capture sample images with sub-nanometer depth resolution and millisecond temporal resolution. Various studies have used QPI for RBC diagnosis, and several have recently applied parallel computing algorithms to reduce the time needed to extract RBC information from QPI data; however, previous studies have focused on static parameters such as cell morphology or simple dynamic parameters such as the root mean square (RMS) of the membrane fluctuations. Previously, we presented a practical blood test method using time series correlation analysis of RBC membrane flickering with QPI. However, its long computation time limits its clinical application. In this study, we present an accelerated time series correlation analysis of RBC membrane flickering using a parallel computing algorithm. The method yielded fractal scaling exponents for the surrounding medium and for normal RBCs that are consistent with our previous research.
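The core of such an acceleration is computing the correlation of every pixel's fluctuation trace in one vectorized pass instead of a per-pixel loop. A minimal FFT-based sketch (an illustrative stand-in for the authors' parallel algorithm, not their implementation):

```python
import numpy as np

def autocorr_batch(h):
    """Normalized autocorrelation of each pixel's membrane-height trace.
    h: (n_pixels, n_frames) time series. FFT-based with zero-padding, so all
    pixels are processed in a single vectorized pass."""
    h = h - h.mean(axis=1, keepdims=True)
    n = h.shape[1]
    F = np.fft.rfft(h, n=2*n, axis=1)              # pad to avoid wrap-around
    ac = np.fft.irfft(F * np.conj(F), axis=1)[:, :n]
    return ac / ac[:, :1]                           # normalize so lag 0 == 1
```

The slow decay (or scaling) of these correlation curves is what the fractal scaling exponent is then fitted to.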
Larimer, Curtis; Winder, Eric; Jeters, Robert; Prowant, Matthew; Nettleship, Ian; Addleman, Raymond Shane; Bonheyo, George T
2016-01-01
The accumulation of bacteria in surface-attached biofilms can be detrimental to human health, dental hygiene, and many industrial processes. Natural biofilms are soft and often transparent, and they have heterogeneous biological composition and structure over micro- and macroscales. As a result, it is challenging to quantify the spatial distribution and overall intensity of biofilms. In this work, a new method was developed to enhance the visibility and quantification of bacterial biofilms. First, broad-spectrum biomolecular staining was used to enhance the visibility of the cells, nucleic acids, and proteins that make up biofilms. Then, an image analysis algorithm was developed to objectively and quantitatively measure biofilm accumulation from digital photographs and results were compared to independent measurements of cell density. This new method was used to quantify the growth intensity of Pseudomonas putida biofilms as they grew over time. This method is simple and fast, and can quantify biofilm growth over a large area with approximately the same precision as the more laborious cell counting method. Stained and processed images facilitate assessment of spatial heterogeneity of a biofilm across a surface. This new approach to biofilm analysis could be applied in studies of natural, industrial, and environmental biofilms.
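The quantification step can be sketched as a threshold-and-count over the stained photograph. This is a toy version under stated assumptions: the stain darkens covered pixels, and the 0.2 threshold is a hypothetical example value, not the published algorithm's parameter.

```python
import numpy as np

def biofilm_coverage(img, thresh=0.2):
    """Fraction of surface covered by stained biofilm, plus mean stain
    intensity. img: float RGB array (H, W, 3) with values in [0, 1].
    Stained biomass is darker than the substrate, so 1 - brightness serves
    as a simple stain-intensity proxy."""
    intensity = 1.0 - img.mean(axis=2)
    mask = intensity > thresh
    coverage = mask.mean()                                   # covered area fraction
    mean_intensity = float(intensity[mask].mean()) if mask.any() else 0.0
    return coverage, mean_intensity
```

Mapping `mask` back onto the image is what reveals the spatial heterogeneity of the biofilm across the surface.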
Innovative Solution to Video Enhancement
NASA Technical Reports Server (NTRS)
2001-01-01
Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.
Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier
2009-01-01
The increasing capability of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during the flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is toward the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and the Fuzzy Clustering. DSA is an optimization approach that minimizes an energy function; its main contribution is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination and several combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989
Detection of fastener loosening in simple lap joint based on ultrasonic wavefield imaging
NASA Astrophysics Data System (ADS)
Gooda Sahib, M. I.; Leong, S. J.; Chia, C. C.; Mustapha, F.
2017-12-01
Joints in aero-mechanical structures are critical elements that ensure structural integrity, but they are prone to damage. Inspecting joints for which no baseline data exist is challenging, but it can be done using an Ultrasonic Propagation Imager (UPI). This study investigates the feasibility of applying UPI to the detection of loosened fasteners. A simple lap joint specimen was made by connecting two 2.5 mm thick SAE304 stainless steel plates with five M6 screws and nuts. All fasteners were tightened to 10 Nm, but one was completely loosened to simulate the damage. The wavefield data were processed into an ultrasonic wavefield propagation video and a series of spectral amplitude images. The spectral images showed a noticeable amplitude difference at the loosened fastener, confirming the feasibility of using UPI for structural joint inspection. A simple contrast maximization method is also introduced to improve the result.
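A spectral amplitude image of the kind described is obtained by taking an FFT along the time axis at every scan point and mapping the magnitude at one frequency. The sketch below (function names and the linear contrast stretch are illustrative assumptions, not the paper's exact processing) shows the idea:

```python
import numpy as np

def spectral_amplitude_image(wavefield, fs, f0):
    """Map of |FFT| at the frequency bin nearest f0.
    wavefield: (ny, nx, nt) time signal at each scan point; fs: sampling rate."""
    nt = wavefield.shape[-1]
    spec = np.fft.rfft(wavefield, axis=-1)
    freqs = np.fft.rfftfreq(nt, 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f0)))
    return np.abs(spec[..., k])

def stretch_contrast(img):
    """A simple contrast-maximization step: linear stretch to [0, 1]."""
    return (img - img.min()) / (img.max() - img.min() + 1e-12)
```

A loosened fastener changes the local response amplitude, so it stands out in the stretched spectral image.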
[Digital processing and evaluation of ultrasound images].
Borchers, J; Klews, P M
1993-10-01
With the help of workstations and PCs, on-site image processing has become possible. If the images are not available in digital form, the video signal has to be A/D converted. In the case of colour images the colour channels R (red), G (green) and B (blue) have to be digitized separately. "True-colour" imaging calls for 8-bit resolution per channel, leading to 24 bits per pixel. Out of the pool of 2^24 possible values, only the relevant 128 gray values and the 64 shades each of red and blue needed for a colour-coded ultrasound image have to be isolated. Digital images can be changed and evaluated with the help of readily available image evaluation programmes. It is mandatory that during image manipulation the gray-scale and colour pixels, and their LUTs (look-up tables), are processed separately. Using relatively simple LUT manipulations, astonishing image improvements are possible, and applying simple mathematical operations can lead to completely new clinical results. For example, by subtracting two consecutive colour-flow images in time and applying special LUT operations, local acceleration of blood flow can be visualized (Colour Acceleration Imaging).
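The two key operations, separate LUTs for gray-scale and colour pixels and frame subtraction for acceleration imaging, can be sketched as follows. The specific LUTs and the per-pixel colour mask are illustrative assumptions; a real scanner encodes the gray/colour split in its own format.

```python
import numpy as np

def enhance_frame(frame, is_color, gray_lut, color_lut):
    """Apply separate 256-entry look-up tables to the gray-scale and
    colour-coded pixels of a digitized 8-bit ultrasound frame, as the text
    requires."""
    out = np.empty_like(frame)
    out[~is_color] = gray_lut[frame[~is_color]]
    out[is_color] = color_lut[frame[is_color]]
    return out

def acceleration_image(flow_t0, flow_t1):
    """'Colour Acceleration Imaging' in its simplest form: the difference of
    two consecutive colour-flow (velocity) frames approximates local flow
    acceleration (signed, so use a wider dtype than uint8)."""
    return flow_t1.astype(np.int16) - flow_t0.astype(np.int16)
```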
NASA Astrophysics Data System (ADS)
Liu, Bin; Harman, Michelle; Giattina, Susanne; Stamper, Debra L.; Demakis, Charles; Chilek, Mark; Raby, Stephanie; Brezinski, Mark E.
2006-06-01
Assessing tissue birefringence with the imaging modality polarization-sensitive optical coherence tomography (PS-OCT) could improve the characterization of in vivo tissue pathology. Among birefringent components, collagen may provide invaluable clinical information because it is altered in disorders ranging from myocardial infarction to arthritis. The features required of a clinical imaging modality in these areas, however, usually include the ability to assess the parameter of interest rapidly and without extensive data analysis, characteristics that single-detector PS-OCT demonstrates. Beyond detecting organized collagen, which has been previously demonstrated and confirmed with the appropriate histological techniques, additional information can potentially be gained with PS-OCT, including collagen type, form versus intrinsic birefringence, the collagen angle, and the presence of multiple birefringent materials. In part I, we apply the simple but powerful fast Fourier transform (FFT) to both PS-OCT mathematical modeling and in vitro bovine meniscus for improved PS-OCT data analysis. The FFT analysis yields, in a rapid, straightforward, and easily interpreted manner, information on the presence of multiple birefringent materials, distinguishing the true anatomical structure from image patterns resulting from alterations in the polarization state, and identifying the tissue/phantom optical axes. The FFT analysis of PS-OCT data therefore provides information on tissue composition beyond the presence of organized collagen, in real time and directly from the image, without extensive mathematical manipulation or data analysis. In part II, Helistat phantoms (collagen type I) are analyzed with the ultimate goal of improved tissue characterization. This study, along with the data in part I, advances the insights gained from PS-OCT images beyond simply determining the presence or absence of birefringence.
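The FFT step rests on a simple fact: each birefringent material imposes a periodic intensity banding with depth, so distinct materials produce distinct peaks in the spectrum of the depth profile. A toy illustration (the function and its parameters are illustrative, not the authors' processing chain):

```python
import numpy as np

def banding_frequencies(profile, dz, n_peaks=2):
    """Dominant spatial frequencies of the intensity banding in a PS-OCT
    depth profile. Two (or more) well-separated peaks indicate multiple
    birefringent materials; a single peak indicates one.
    profile: depth-resolved intensity; dz: depth sampling interval."""
    sig = profile - np.mean(profile)           # suppress the DC term
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), dz)
    order = np.argsort(spec)[::-1][:n_peaks]   # strongest bins first
    return np.sort(freqs[order])
```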
Ship Detection in Optical Satellite Image Based on RX Method and PCAnet
NASA Astrophysics Data System (ADS)
Shao, Xiu; Li, Huali; Lin, Hui; Kang, Xudong; Lu, Ting
2017-12-01
In this paper, we present a novel method for ship detection in optical satellite images based on the Reed-Xiaoli (RX) method and the principal component analysis network (PCAnet). The proposed method consists of three steps. First, the spatially adjacent pixels in the optical image are arranged into a vector, transforming the optical image into a 3D cube image. Through this process, the contextual information of the spatially adjacent pixels is integrated to magnify the discrimination between ship and background. Second, the RX anomaly detection method is adopted to preliminarily extract ship candidates from the 3D cube image. Finally, real ships are confirmed among the candidates by applying the PCAnet and a support vector machine (SVM): the PCAnet is a simple deep-learning network used for feature extraction, and the SVM performs feature pooling and decision making. Experimental results demonstrate that our approach is effective in discriminating between ships and false alarms and achieves good ship detection performance.
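The global RX detector used in step two scores each pixel vector by its Mahalanobis distance from the background statistics. A minimal NumPy sketch (the band-stacking itself is assumed done; names are illustrative):

```python
import numpy as np

def rx_detector(cube):
    """Global Reed-Xiaoli (RX) anomaly detector.
    cube: (H, W, B) image whose B 'bands' are the stacked neighbouring pixels
    described in the text. Returns each pixel's Mahalanobis distance from the
    background mean; high scores are anomaly (ship) candidates."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))  # pinv guards singularity
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(H, W)
```

Thresholding the score map yields the candidate list that the PCAnet+SVM stage then confirms or rejects.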
FluoroSim: A Visual Problem-Solving Environment for Fluorescence Microscopy
Quammen, Cory W.; Richardson, Alvin C.; Haase, Julian; Harrison, Benjamin D.; Taylor, Russell M.; Bloom, Kerry S.
2010-01-01
Fluorescence microscopy provides a powerful method for localization of structures in biological specimens. However, aspects of the image formation process such as noise and blur from the microscope's point-spread function combine to produce an unintuitive image transformation on the true structure of the fluorescing molecules in the specimen, hindering qualitative and quantitative analysis of even simple structures in unprocessed images. We introduce FluoroSim, an interactive fluorescence microscope simulator that can be used to train scientists who use fluorescence microscopy to understand the artifacts that arise from the image formation process, to determine the appropriateness of fluorescence microscopy as an imaging modality in an experiment, and to test and refine hypotheses of model specimens by comparing the output of the simulator to experimental data. FluoroSim renders synthetic fluorescence images from arbitrary geometric models represented as triangle meshes. We describe three rendering algorithms on graphics processing units for computing the convolution of the specimen model with a microscope's point-spread function and report on their performance. We also discuss several cases where the microscope simulator has been used to solve real problems in biology. PMID:20431698
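The simulator's core image-formation step, convolving the true fluorophore distribution with the point-spread function and adding photon noise, can be sketched in a few lines. This is a toy 2-D CPU version with a Gaussian PSF (FluoroSim itself renders triangle meshes on the GPU and uses measured PSFs, so every name and parameter here is illustrative):

```python
import numpy as np

def simulate_fluorescence(truth, sigma_psf, photons=1000, rng=None):
    """Blur the true fluorophore map with a Gaussian PSF via FFT convolution,
    then apply Poisson shot noise scaled to a photon budget."""
    H, W = truth.shape
    y, x = np.indices((H, W))
    cy, cx = H // 2, W // 2
    psf = np.exp(-((y - cy)**2 + (x - cx)**2) / (2.0 * sigma_psf**2))
    psf /= psf.sum()
    # circular convolution; ifftshift centres the kernel at the origin
    blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) *
                                   np.fft.fft2(np.fft.ifftshift(psf))))
    blurred = np.clip(blurred, 0.0, None)          # FFT round-off can go negative
    rate = photons * blurred / max(blurred.sum(), 1e-12)
    rng = np.random.default_rng() if rng is None else rng
    return rng.poisson(rate)                        # shot-noise-limited image
```

Even a point source becomes a noisy blob, which is exactly the unintuitive transformation the abstract says the tool teaches.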
Barlow, Anders J; Portoles, Jose F; Sano, Naoko; Cumpson, Peter J
2016-10-01
The development of the helium ion microscope (HIM) enables the imaging of both hard, inorganic materials and soft, organic or biological materials. Advantages include outstanding topographical contrast, superior resolution down to <0.5 nm at high magnification, high depth of field, and no need for conductive coatings. The instrument relies on helium atom adsorption and ionization at a cryogenically cooled tip that is atomically sharp. Under ideal conditions this arrangement provides a beam of ions that is stable for days to weeks, with beam currents in the order of picoamperes. Over time, however, this stability is lost as gaseous contamination builds up in the source region, leading to adsorbed atoms of species other than helium, which ultimately results in beam current fluctuations. This manifests itself as horizontal stripe artifacts in HIM images. We investigate post-processing methods to remove these artifacts from HIM images, such as median filtering, Gaussian blurring, fast Fourier transforms, and principal component analysis. We arrive at a simple method for completely removing beam current fluctuation effects from HIM images while maintaining the full integrity of the information within the image.
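The simplest of the destriping methods compared here corrects each row by the deviation of its median from the global median, exploiting the fact that a beam-current fluctuation shifts an entire scan line. A minimal sketch (illustrative, not the paper's final algorithm):

```python
import numpy as np

def remove_stripes(img):
    """Remove horizontal stripe artifacts caused by beam-current fluctuation:
    each row's offset is estimated as the deviation of its median from the
    global median and subtracted. Medians are robust to the image content."""
    row_med = np.median(img, axis=1, keepdims=True)
    return img - (row_med - np.median(img))
```

FFT notch filtering and principal component analysis, also investigated in the paper, target the same row-wise structure in the frequency and component domains respectively.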
Spectral-domain optical coherence tomography for endoscopic imaging
NASA Astrophysics Data System (ADS)
Chen, Xiaodong; Li, Qiao; Li, Wanhui; Wang, Yi; Yu, Daoyin
2007-02-01
Optical coherence tomography (OCT) is an emerging cross-sectional imaging technology. It uses broadband light sources to achieve axial image resolutions on the few-micron scale. OCT is widely applied to medical imaging; it can acquire cross-sectional images of biological tissue (transparent and turbid) non-invasively and without contact. In this paper, the principle of OCT is presented and the crucial system parameters are discussed theoretically. Based on an analysis of the different methods and the features of medical endoscopic systems, a design combining spectral-domain OCT (SDOCT) with endoscopy is put forward. SDOCT provides direct access to the spectrum of the optical signal and offers higher imaging speed than time-domain OCT. In addition, a novel OCT probe that uses an advanced micromotor to drive a reflecting prism is designed to suit alimentary tract endoscopy. A simple optical coherence tomography system has been developed based on a fiber-based Michelson interferometer and a spectrometer. An experiment using the motor to rotate the prism for rotational imaging was performed, and images obtained with this spectral interferometer are presented. The results verify the feasibility of an endoscopic optical coherence tomography system with a rotating scan.
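The "direct access to the spectrum" that SDOCT provides means the depth profile falls out of a single inverse FFT: a reflector at depth z modulates the interference spectrum with a fringe period inversely proportional to z. A minimal sketch, assuming the spectrum is already resampled evenly in wavenumber k (real systems must also compensate dispersion):

```python
import numpy as np

def depth_profile(spectrum_k):
    """Basic SDOCT reconstruction: the depth-resolved reflectivity profile
    (A-scan) is the magnitude of the inverse FFT of the interference spectrum
    sampled evenly in wavenumber. Subtracting the mean suppresses the strong
    DC term."""
    s = spectrum_k - np.mean(spectrum_k)
    return np.abs(np.fft.ifft(s))
```

Rotating the micromotor-driven prism sweeps this A-scan around the probe axis to build the circumferential endoscopic image.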
NecroQuant: quantitative assessment of radiological necrosis
NASA Astrophysics Data System (ADS)
Hwang, Darryl H.; Mohamed, Passant; Varghese, Bino A.; Cen, Steven Y.; Duddalwar, Vinay
2017-11-01
Clinicians can now objectively quantify tumor necrosis using Hounsfield units and enhancement characteristics from multiphase contrast-enhanced CT imaging. NecroQuant has been designed to work as part of a radiomics pipeline. The software is a departure from the conventional qualitative assessment of tumor necrosis: it provides the user (radiologists and researchers) a simple interface to precisely and interactively define and measure necrosis in contrast-enhanced CT images. Although the software is tested here on renal masses, it can be re-configured to assess tumor necrosis in a variety of tumors from different body sites, providing a generalized, open, portable, and extensible quantitative analysis platform that is widely applicable across cancer types.
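The measurement the abstract describes reduces, in its simplest form, to counting tumor voxels whose contrast enhancement between phases stays low. The sketch below is a toy illustration only; the function, the 20 HU cut-off, and the inputs are assumptions, not NecroQuant's actual interface or criteria.

```python
import numpy as np

def necrosis_fraction(pre_hu, post_hu, tumor_mask, enh_thresh=20.0):
    """Fraction of the tumor that is necrotic, defined here (illustratively)
    as voxels whose enhancement (post - pre, in Hounsfield units) stays below
    a threshold across contrast phases."""
    enhancement = post_hu - pre_hu
    necrotic = (enhancement < enh_thresh) & tumor_mask
    return necrotic.sum() / tumor_mask.sum()
```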
Q-controlled amplitude modulation atomic force microscopy in liquids: An analysis
NASA Astrophysics Data System (ADS)
Hölscher, H.; Schwarz, U. D.
2006-08-01
An analysis of amplitude modulation atomic force microscopy in liquids is presented with respect to the application of the Q-Control technique. The equation of motion is solved by numerical and analytic methods with and without Q-Control in the presence of a simple model interaction force adequate for many liquid environments. In addition, the authors give an explicit analytical formula for the tip-sample indentation showing that higher Q factors reduce the tip-sample force. It is found that Q-Control suppresses unwanted deformations of the sample surface, leading to the enhanced image quality reported in several experimental studies.
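In the simple-harmonic-oscillator picture underlying this analysis, Q-Control changes only the effective quality factor, and the steady-state response amplitude follows in closed form. A small sketch of that textbook formula (illustrative; the paper's full model includes the tip-sample interaction force):

```python
import numpy as np

def response_amplitude(omega, omega0, q, f0_over_m=1.0):
    """Steady-state amplitude of a driven, damped harmonic oscillator with
    resonance omega0 and quality factor q. Q-Control enters by replacing the
    natural q with an effective value set by the feedback gain."""
    return f0_over_m / np.sqrt((omega0**2 - omega**2)**2 + (omega * omega0 / q)**2)
```

At resonance the amplitude is proportional to q, so a higher effective Q reaches a given oscillation amplitude with a smaller drive, which is consistent with the reduced tip-sample force reported above.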
Discrimination of Oil Slicks and Lookalikes in Polarimetric SAR Images Using CNN.
Guo, Hao; Wu, Danni; An, Jubai
2017-08-09
Oil slicks and lookalikes (e.g., plant oil and oil emulsion) all appear as dark areas in polarimetric Synthetic Aperture Radar (SAR) images and are highly heterogeneous, so no single feature allows dark objects in polarimetric SAR images to be classified as oil slicks or lookalikes. We therefore established a multi-feature fusion approach to support the discrimination of oil slicks and lookalikes. In this paper, simple discrimination analysis is used to select a preferred feature subset; the features analyzed are entropy, alpha, and the Single-bounce Eigenvalue Relative Difference (SERD) in the C-band polarimetric mode. We also propose a novel SAR image discrimination method for oil slicks and lookalikes based on a Convolutional Neural Network (CNN). Regions of interest on the three kinds of polarimetric feature images are selected as training and testing samples for the CNN. The proposed method is applied to a training data set of 5400 samples, including 1800 crude oil, 1800 plant oil, and 1800 oil emulsion samples. The effectiveness of the method is demonstrated through the analysis of experimental results: the classification accuracy obtained on 900 test samples is 91.33%. The proposed method not only accurately identifies the dark spots on SAR images but also demonstrates the algorithm's ability to classify unstructured features.
Discrimination of Oil Slicks and Lookalikes in Polarimetric SAR Images Using CNN
An, Jubai
2017-01-01
PMID:28792477
Sotanaphun, Uthai; Phattanawasin, Panadda; Sriphong, Lawan
2009-01-01
Curcumin, desmethoxycurcumin and bisdesmethoxycurcumin are bioactive constituents of turmeric (Curcuma longa). Owing to their different potency, quality control of turmeric based on the content of each curcuminoid is more reliable than that based on total curcuminoids. However, to perform such an assay, high-cost instrument is needed. To develop a simple and low-cost method for the simultaneous quantification of three curcuminoids in turmeric using TLC and the public-domain software Scion Image. The image of a TLC chromatogram of turmeric extract was recorded using a digital scanner. The density of the TLC spot of each curcuminoid was analysed by the Scion Image software. The density value was transformed to concentration by comparison with the calibration curve of standard curcuminoids developed on the same TLC plate. The polynomial regression data for all curcuminoids showed good linear relationship with R(2) > 0.99 in the concentration range of 0.375-6 microg/spot. The limits of detection and quantitation were 43-73 and 143-242 ng/spot, respectively. The method gave adequate precision, accuracy and recovery. The contents of each curcuminoid determined using this method were not significantly different from those determined using the TLC densitometric method. TLC image analysis using Scion Image is shown to be a reliable method for the simultaneous analysis of the content of each curcuminoid in turmeric.
Dosimetry of heavy ions by use of CCD detectors
NASA Technical Reports Server (NTRS)
Schott, J. U.
1994-01-01
The design and atomic composition of charge-coupled devices (CCDs) make them unique for investigations of single energetic particle events. As a detector system for ionizing particles, they detect single particles with local resolution and near-real-time particle tracking. Combined with their properties as optical sensors, the traversals of single particles can be correlated with any object attached to the light-sensitive surface of the sensor by simply imaging the object's shadow and subsequently analyzing both the optical image and the particle effects observed in the affected pixels. With biological objects it is possible for the first time to investigate the effects of single heavy ions in the tissue or organs of metabolizing (i.e., moving) systems with a local resolution better than 15 microns. Calibration data for particle detection in CCDs are presented for low-energy protons and heavy ions.
Patterson, Joseph P.; Sanchez, Ana M.; Petzetakis, Nikos; Smart, Thomas P.; Epps, Thomas H.; Portman, Ian
2013-01-01
Block copolymers are well known to self-assemble into a range of three-dimensional morphologies. However, because of their nanoscale dimensions, resolving their exact structure can be a challenge. Transmission electron microscopy (TEM) is a powerful technique for achieving this, but for polymeric assemblies chemical fixing/staining techniques are usually required to increase image contrast and protect specimens from electron beam damage. Graphene oxide (GO) is a robust, water-dispersible, and nearly electron-transparent membrane: an ideal support for TEM. We show that when using GO supports no stains are required to acquire high-contrast TEM images and that the specimens remain stable under the electron beam for long periods, allowing sample analysis by a range of electron microscopy techniques. GO supports are also used for further characterization of assemblies by atomic force microscopy. The simplicity of sample preparation and analysis, as well as the potential for significantly increased contrast, make GO supports an attractive alternative for the analysis of block copolymer assemblies. PMID:24049544
Sato, Asako; Vogel, Viola; Tanaka, Yo
2017-01-01
The geometrical confinement of small cell colonies gives differential cues to cells sitting at the periphery versus the core. To utilize this effect, for example to create spatially graded differentiation patterns of human mesenchymal stem cells (hMSCs) in vitro or to investigate underpinning mechanisms, the confinement needs to be robust for extended time periods. To create highly repeatable micro-fabricated structures for cellular patterning and high-throughput data mining, we employed here a simple casting method to fabricate more than 800 adhesive patches confined by agarose micro-walls. In addition, a machine learning based image processing software was developed (open code) to detect the differentiation patterns of the population of hMSCs automatically. Utilizing the agarose walls, the circular patterns of hMSCs were successfully maintained throughout 15 days of cell culture. After staining lipid droplets and alkaline phosphatase as the markers of adipogenic and osteogenic differentiation, respectively, the mega-pixels of RGB color images of hMSCs were processed by the software on a laptop PC within several minutes. The image analysis successfully showed that hMSCs sitting on the more central versus peripheral sections of the adhesive circles showed adipogenic versus osteogenic differentiation as reported previously, indicating the compatibility of patterned agarose walls to conventional microcontact printing. In addition, we found a considerable fraction of undifferentiated cells which are preferentially located at the peripheral part of the adhesive circles, even in differentiation-inducing culture media. In this study, we thus successfully demonstrated a simple framework for analyzing the patterned differentiation of hMSCs in confined microenvironments, which has a range of applications in biology, including stem cell biology. PMID:28380036
Automated CT Scan Scores of Bronchiectasis and Air Trapping in Cystic Fibrosis
Swiercz, Waldemar; Heltshe, Sonya L.; Anthony, Margaret M.; Szefler, Paul; Klein, Rebecca; Strain, John; Brody, Alan S.; Sagel, Scott D.
2014-01-01
Background: Computer analysis of high-resolution CT (HRCT) scans may improve the assessment of structural lung injury in children with cystic fibrosis (CF). The goal of this cross-sectional pilot study was to validate automated, observer-independent image analysis software to establish objective, simple criteria for bronchiectasis and air trapping. Methods: HRCT scans of the chest were performed in 35 children with CF and compared with scans from 12 disease control subjects. Automated image analysis software was developed to count visible airways on inspiratory images and to measure a low attenuation density (LAD) index on expiratory images. Among the children with CF, relationships among automated measures, Brody HRCT scanning scores, lung function, and sputum markers of inflammation were assessed. Results: The number of total, central, and peripheral airways on inspiratory images and LAD (%) on expiratory images were significantly higher in children with CF compared with control subjects. Among subjects with CF, peripheral airway counts correlated strongly with Brody bronchiectasis scores by two raters (r = 0.86, P < .0001; r = 0.91, P < .0001), correlated negatively with lung function, and were positively associated with sputum free neutrophil elastase activity. LAD (%) correlated with Brody air trapping scores (r = 0.83, P < .0001; r = 0.69, P < .0001) but did not correlate with lung function or sputum inflammatory markers. Conclusions: Quantitative airway counts and LAD (%) on HRCT scans appear to be useful surrogates for bronchiectasis and air trapping in children with CF. Our automated methodology provides objective quantitative measures of bronchiectasis and air trapping that may serve as end points in CF clinical trials. PMID:24114359
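The low attenuation density (LAD) index described in this record reduces to a simple voxel count below a Hounsfield-unit cutoff. A minimal sketch, assuming a pre-computed lung mask and the commonly used -856 HU air-trapping threshold (the study's exact cutoff is not stated):

```python
import numpy as np

def low_attenuation_density(hu_slice, lung_mask, threshold=-856.0):
    """Percent of lung-mask voxels below an HU cutoff (air-trapping surrogate)."""
    lung = hu_slice[lung_mask]
    if lung.size == 0:
        return 0.0
    return 100.0 * np.count_nonzero(lung < threshold) / lung.size

# toy expiratory slice: the top half of the "lung" falls below the cutoff
hu = np.full((4, 4), -900.0)
hu[2:, :] = -700.0
mask = np.ones((4, 4), dtype=bool)
lad = low_attenuation_density(hu, mask)   # 8 of 16 voxels -> 50.0
```

In practice the mask would come from an automated lung segmentation, and LAD (%) would be reported per expiratory scan.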
Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W; Gautier, Virginie W
2015-01-01
We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-Tiff files produced by a photo-conversion time lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from Multi-Tiff files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest, over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking, and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools including a migration map, which provides an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user friendly, compatible with Windows, Mac, and Linux, and a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip.
Bokhart, Mark T; Nazari, Milad; Garrard, Kenneth P; Muddiman, David C
2018-01-01
A major update to the mass spectrometry imaging (MSI) software MSiReader is presented, offering a multitude of newly added features critical to MSI analyses. MSiReader is a free, open-source, and vendor-neutral software written on the MATLAB platform and is capable of analyzing most common MSI data formats. A standalone version of the software, which does not require a MATLAB license, is also distributed. The newly incorporated data analysis features expand the utility of MSiReader beyond simple visualization of molecular distributions. The MSiQuantification tool allows researchers to calculate absolute concentrations from quantification MSI experiments exclusively through MSiReader software, significantly reducing data analysis time. An image overlay feature allows complementary imaging modalities to be displayed with the MSI data. A polarity filter has also been incorporated into the data loading step, allowing facile analysis of polarity-switching experiments without the need for data parsing before loading the data file into MSiReader. A quality assurance feature that generates a mass measurement accuracy (MMA) heatmap for an analyte of interest has also been added, allowing MMA to be investigated across the imaging experiment. Most importantly, performance has not degraded as new features have been added; in fact, it has been dramatically improved. These new tools and performance improvements in MSiReader v1.0 enable the MSI community to evaluate their data in greater depth and in less time.
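The quantity behind the MMA heatmap is the per-pixel mass error in parts per million. A minimal sketch with an invented analyte m/z and a toy 2x2 "image" of measured values:

```python
import numpy as np

def mma_ppm(measured_mz, theoretical_mz):
    """Mass measurement accuracy in parts per million, per pixel."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

theoretical = 760.5851                       # hypothetical analyte m/z
measured = np.array([[760.5851, 760.5866],   # toy 2x2 image of measured m/z
                     [760.5836, 760.5851]])
heatmap = mma_ppm(measured, theoretical)     # ppm error at each pixel
```

A real QA map would be computed over every pixel of the imaging run and rendered as a heatmap to spot drift in calibration.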
New Tools for Comparing Microscopy Images: Quantitative Analysis of Cell Types in Bacillus subtilis
van Gestel, Jordi; Vlamakis, Hera
2014-01-01
Fluorescence microscopy is a method commonly used to examine individual differences between bacterial cells, yet many studies still lack a quantitative analysis of fluorescence microscopy data. Here we introduce some simple tools that microbiologists can use to analyze and compare their microscopy images. We show how image data can be converted to distribution data. These data can be subjected to a cluster analysis that makes it possible to objectively compare microscopy images. The distribution data can further be analyzed using distribution fitting. We illustrate our methods by scrutinizing two independently acquired data sets, each containing microscopy images of a doubly labeled Bacillus subtilis strain. For the first data set, we examined the expression of srfA and tapA, two genes which are expressed in surfactin-producing and matrix-producing cells, respectively. For the second data set, we examined the expression of eps and tapA; these genes are expressed in matrix-producing cells. We show that srfA is expressed by all cells in the population, a finding which contrasts with a previously reported bimodal distribution of srfA expression. In addition, we show that eps and tapA do not always have the same expression profiles, despite being expressed in the same cell type: both operons are expressed in cell chains, while single cells mainly express eps. These findings exemplify that the quantification and comparison of microscopy data can yield insights that otherwise would go unnoticed. PMID:25448819
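The image-to-distribution-to-cluster pipeline described above can be sketched with synthetic per-cell fluorescence values; the histogram binning, the linkage method, and the populations below are illustrative choices, not the paper's:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# per-cell fluorescence values from four hypothetical images:
# two "bright" populations and two "dim" populations
images = [rng.normal(100, 5, 200), rng.normal(100, 5, 200),
          rng.normal(20, 5, 200), rng.normal(20, 5, 200)]

# step 1: convert image data to distribution data (normalized histograms)
bins = np.linspace(0, 150, 31)
dists = np.array([np.histogram(v, bins=bins, density=True)[0] for v in images])

# step 2: cluster the images by the distance between their distributions,
# giving an objective comparison of microscopy images
labels = fcluster(linkage(pdist(dists), method="average"),
                  t=2, criterion="maxclust")
```

Images drawn from the same population end up in the same cluster; distribution fitting (e.g. testing unimodal versus bimodal models) would follow the same per-image distributions.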
Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza
2015-01-01
This paper presents a new procedure for automatic extraction of the blood vessels and optic disk (OD) in fundus fluorescein angiograms (FFA). To extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images resulting from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and different scales. For this purpose, each directional image is processed using information from the first-order derivative and the eigenvalues of the Hessian matrix. The final vessel segmentation is obtained by iteratively applying a simple region growing algorithm, which merges centerline images with the contents of images resulting from a modified top-hat transform followed by bit-plane slicing. After extracting blood vessels from the FFA image, candidate regions for the OD are enhanced by removing blood vessels from the FFA image using multi-structure-element morphology and modification of the FDCT coefficients. Then, a Canny edge detector and the Hough transform are applied to the reconstructed image to extract the boundaries of candidate regions. In the next step, information about the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance-regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, comprising 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction.
Psoriasis skin biopsy image segmentation using Deep Convolutional Neural Network.
Pal, Anabik; Garain, Utpal; Chandra, Aditi; Chatterjee, Raghunath; Senapati, Swapan
2018-06-01
Development of machine-assisted tools for automatic analysis of psoriasis skin biopsy images plays an important role in clinical assistance. Development of an automatic approach for accurate segmentation of psoriasis skin biopsy images is the initial prerequisite for developing such a system. However, the complex cellular structure, the presence of imaging artifacts, and uneven staining variation make the task challenging. This paper presents a pioneering attempt at automatic segmentation of psoriasis skin biopsy images. Several deep neural architectures are tried for segmenting psoriasis skin biopsy images. Deep models are used to classify the super-pixels generated by Simple Linear Iterative Clustering (SLIC), and the segmentation performance of these architectures is compared with traditional hand-crafted-feature-based classifiers built on popular classifiers such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Random Forest (RF). A U-shaped Fully Convolutional Neural Network (FCN) is also used in an end-to-end learning fashion, where the input is the original color image and the output is the segmentation class map for the skin layers. An annotated real psoriasis skin biopsy image data set of ninety (90) images was developed and used for this research. Segmentation performance is evaluated with two metrics, namely Jaccard's Coefficient (JC) and the Ratio of Correct Pixel Classification (RCPC) accuracy. The experimental results show that the CNN-based approaches outperform the traditional hand-crafted-feature-based classification approaches. The present research shows that a practical system can be developed for machine-assisted analysis of psoriasis disease. Copyright © 2018 Elsevier B.V. All rights reserved.
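A toy illustration of the superpixel-wise classification step, with fixed 4x4 image blocks standing in for SLIC superpixels and a from-scratch nearest-neighbour rule standing in for the paper's trained KNN/SVM/RF classifiers (a real pipeline would use skimage.segmentation.slic and learned features):

```python
import numpy as np

def block_superpixels(img, size=4):
    """Partition an image into square blocks as stand-in 'superpixels'."""
    h, w, _ = img.shape
    labels = np.zeros((h, w), dtype=int)
    for i, r in enumerate(range(0, h, size)):
        for j, c in enumerate(range(0, w, size)):
            labels[r:r + size, c:c + size] = i * (w // size) + j
    return labels

def mean_color(img, labels):
    """One mean-colour feature vector per superpixel."""
    return np.array([img[labels == k].mean(axis=0) for k in np.unique(labels)])

def nn_predict(train_x, train_y, x):
    """1-nearest-neighbour class for a single feature vector."""
    return train_y[np.argmin(np.linalg.norm(train_x - x, axis=1))]

# toy image: left half pink "tissue", right half white background
img = np.zeros((8, 8, 3))
img[:, :4] = [0.9, 0.5, 0.6]
img[:, 4:] = [1.0, 1.0, 1.0]

feats = mean_color(img, block_superpixels(img))
train_x = np.array([[0.9, 0.5, 0.6], [1.0, 1.0, 1.0]])
train_y = np.array([0, 1])                  # 0 = tissue layer, 1 = background
pred = np.array([nn_predict(train_x, train_y, f) for f in feats])
```

Mapping each superpixel's predicted class back onto its pixels yields the segmentation map that the JC and RCPC metrics would then score.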
Comparative study of standard space and real space analysis of quantitative MR brain data.
Aribisala, Benjamin S; He, Jiabao; Blamire, Andrew M
2011-06-01
To compare the robustness of region of interest (ROI) analysis of magnetic resonance imaging (MRI) brain data in real space with analysis in standard space and to test the hypothesis that standard space image analysis introduces more partial volume effect errors compared to analysis of the same dataset in real space. Twenty healthy adults with no history or evidence of neurological diseases were recruited; high-resolution T(1)-weighted, quantitative T(1), and B(0) field-map measurements were collected. Algorithms were implemented to perform analysis in real and standard space and used to apply a simple standard ROI template to quantitative T(1) datasets. Regional relaxation values and histograms for both gray and white matter tissues classes were then extracted and compared. Regional mean T(1) values for both gray and white matter were significantly lower using real space compared to standard space analysis. Additionally, regional T(1) histograms were more compact in real space, with smaller right-sided tails indicating lower partial volume errors compared to standard space analysis. Standard space analysis of quantitative MRI brain data introduces more partial volume effect errors biasing the analysis of quantitative data compared to analysis of the same dataset in real space. Copyright © 2011 Wiley-Liss, Inc.
Factors Affecting the Changes of Ice Crystal Form in Ice Cream
NASA Astrophysics Data System (ADS)
Wang, Xin; Watanabe, Manabu; Suzuki, Toru
In this study, the shape of ice crystals in ice cream was quantitatively evaluated by introducing fractal analysis. A small droplet of commercial ice cream mix was quickly cooled to about -30°C on the cold stage of a microscope. Subsequently, it was heated to -5°C or -10°C and held for various holding times. Based on the images captured at each holding time, the cross-sectional area and circumference length of each ice crystal were measured to calculate the fractal dimension using image analysis software. The results showed that the ice crystals fell into two groups, simple-shaped and complicated-shaped, according to their fractal dimensions. The fractal dimension of the ice crystals became lower with increasing holding time and holding temperature. It was also indicated that the growth rate of complicated-shaped ice crystals was relatively higher because of aggregation.
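One common way to estimate a fractal dimension from area and perimeter measurements (the record does not specify the study's exact estimator) uses the scaling P ~ A^(D/2) across a family of crystals, so the slope of log P against log A gives D/2:

```python
import numpy as np

def area_perimeter_dimension(areas, perimeters):
    """Estimate D from the slope of log P vs log A, assuming P ~ A**(D/2)."""
    slope, _ = np.polyfit(np.log(areas), np.log(perimeters), 1)
    return 2.0 * slope

# smooth, circle-like crystals: P = 2*pi*r and A = pi*r**2, so D should be 1;
# jagged, complicated-shaped crystals would give D > 1
radii = np.array([2.0, 4.0, 8.0, 16.0])
d_simple = area_perimeter_dimension(np.pi * radii**2, 2 * np.pi * radii)
```

With measured crystal outlines, the areas and perimeters would come from the image analysis software's region measurements rather than analytic circles.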
Baldissin, Maurício Martins; Souza, Edna Marina de
2013-12-01
Refractory epilepsies are syndromes for which therapies employing two or more antiepileptic drugs, separately or in association, do not result in seizure control. Patients may present focal cortical dysplasia or diffuse dysplasia and/or hippocampal atrophic alterations that may not be detectable by simple visual analysis of magnetic resonance images. The aim of this study was to evaluate MRI texture in regions of interest located in the hippocampi, limbic association cortex, and prefrontal cortex of 20 patients with refractory epilepsy and to compare them with the same areas in 20 healthy individuals, in order to determine whether the texture parameters could be related to the presence of the disease. Of the 11 texture parameters calculated, three indicated statistically significant differences between the studied groups. These findings suggest that this technique could contribute to studies of refractory epilepsies.
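Texture parameters of this kind are often derived from a gray-level co-occurrence matrix (GLCM); the study's 11 parameters are not named in the record, so this sketch quantizes an ROI, builds a from-scratch GLCM for the horizontal neighbour offset, and computes just one feature (contrast):

```python
import numpy as np

def glcm(roi, levels=4):
    """Normalized gray-level co-occurrence matrix for offset (0, 1)."""
    edges = np.linspace(roi.min(), roi.max() + 1e-9, levels + 1)[1:-1]
    q = np.digitize(roi, edges)            # quantize to `levels` gray levels
    m = np.zeros((levels, levels))
    for r in range(q.shape[0]):
        for c in range(q.shape[1] - 1):    # co-occurrence with right neighbour
            m[q[r, c], q[r, c + 1]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum of (i - j)^2 weighted by co-occurrence probability."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

smooth = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))          # gradual ROI
rough = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)  # checkerboard ROI
c_smooth = contrast(glcm(smooth))
c_rough = contrast(glcm(rough))
```

Comparing such features between patient and control ROIs, with an appropriate statistical test, is the kind of analysis the abstract describes.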
Thompson, Logan C.; Ciesielski, Peter N.; Jarvis, Mark W.; ...
2017-07-12
Biomass particles can experience variable thermal conditions during fast pyrolysis due to differences in their size and morphology, and from local temperature variations within a reactor. These differences lead to increased heterogeneity of the chemical products obtained in the pyrolysis vapors and bio-oil. Here we present a simple, high-throughput method to investigate the thermal history experienced by large ensembles of particles during fast pyrolysis by imaging and quantitative image analysis. We present a correlation between the surface luminance (darkness) of a biochar particle and the highest temperature that it experienced during pyrolysis. Next, we apply this correlation to large, heterogeneous ensembles of char particles produced in a laminar entrained flow reactor (LEFR). The results are used to interpret the actual temperature distributions delivered by the reactor over a range of operating conditions.
NASA Astrophysics Data System (ADS)
Jakovels, Dainis; Saknite, Inga; Spigulis, Janis
2014-05-01
Laser speckle contrast analysis (LASCA) offers non-contact, full-field, real-time mapping of capillary blood flow and can be considered an alternative to laser Doppler perfusion imaging. The LASCA technique has been implemented in several commercial instruments; however, these systems are still too expensive and bulky to be widely available. Several optical techniques have found new implementations as connection kits for mobile phones, offering low-cost screening devices. In this work we demonstrate a simple implementation of the LASCA imaging technique as a connection kit for a mobile phone, for primary low-cost assessment of skin blood flow. Stabilized 650 nm and 532 nm laser diode modules were used for LASCA illumination. Dual-wavelength illumination could provide additional information about skin hemoglobin and oxygenation levels. The proposed approach was tested with an arterial occlusion test and a heat test. In addition, blood flow maps of injured and provoked skin were demonstrated.
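The core LASCA computation is the local speckle contrast K = sigma/mu over a sliding window: moving scatterers blur the speckle pattern and lower K. A minimal sketch on synthetic speckle; the window size and the way "moving" speckle is simulated are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, window=7):
    """Local speckle contrast K = sigma / mu over a sliding window."""
    raw = raw.astype(float)
    mean = uniform_filter(raw, window)
    mean_sq = uniform_filter(raw * raw, window)
    var = np.clip(mean_sq - mean * mean, 0.0, None)   # local variance
    return np.sqrt(var) / np.maximum(mean, 1e-12)

rng = np.random.default_rng(1)
static = rng.exponential(100.0, (64, 64))   # fully developed speckle: K near 1
moving = uniform_filter(static, 5)          # motion-averaged speckle: K falls
k_static = speckle_contrast(static)[16:48, 16:48].mean()
k_moving = speckle_contrast(moving)[16:48, 16:48].mean()
```

On a phone-camera frame, the same map rendered as an image gives the blood-flow map: low-K regions correspond to higher perfusion.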
A manual for inexpensive methods of analyzing and utilizing remote sensor data
NASA Technical Reports Server (NTRS)
Elifrits, C. D.; Barr, D. J.
1978-01-01
Instructions are provided for inexpensive methods of using remote sensor data to assist in observing the earth's surface. Where possible, relative costs are included. Equipment needed for analysis of remote sensor data is described, and methods of using these equipment items are included, as well as the advantages and disadvantages of individual items. Interpretation and analysis of stereo photos and the interpretation of typical patterns such as tone and texture, land cover, drainage, and erosional form are described. Similar treatment is given to monoscopic image interpretation, including LANDSAT MSS data. Enhancement techniques are detailed with respect to their application, along with simple techniques for creating an enhanced data item. Techniques described include additive and subtractive (Diazo process) color techniques and enlargement of photos or images. Applications of these processes, including mapping of land resources, engineering soils, geology, water resources, environmental conditions, and crops and/or vegetation, are outlined.
NASA Astrophysics Data System (ADS)
Ebbeni, Jean
Included in this volume are papers on real-time image enhancement by simple video systems, automatic identification and data collection via barcode laser scanning, the optimization of the cutting up of a strip of float glass, optical sensors for factory automation, and the use of a digital theodolite with infrared radiation. Attention is also given to ISIS (integrated shape imaging system), a new system for follow-up of scoliosis; optical diffraction extensometers; a cross-spectrum technique for high-sensitivity remote vibration analysis by optical interferometry; the compensation and measurement of any motion of three-dimensional objects in holographic interferometry; and stereoscreen. Additional papers are on holographic double pulse YAG lasers, miniature optic connectors, stress-field analysis in an adhesively bonded joint with laser photoelasticimetry, and the locking of the light pulse delay in externally triggered gas lasers.
NASA Astrophysics Data System (ADS)
Ruan, Shaobo; Qian, Jun; Shen, Shun; Zhu, Jianhua; Jiang, Xinguo; He, Qin; Gao, Huile
2014-08-01
Fluorescent carbon dots (CD) possess impressive potential in bioimaging because of their low photobleaching, absence of optical blinking and good biocompatibility. However, their relatively short excitation/emission wavelengths restrict their application in in vivo imaging. In the present study, a kind of CD was prepared by a simple heat treatment method using glycine as the only precursor. The diameter of CD was lower than 5 nm, and the highest emission wavelength was 500 nm. However, at 600 nm, there was still a relatively strong fluorescent emission, suggesting CD could be used for in vivo imaging. Additionally, several experiments demonstrated that CD possessed good serum stability and low cytotoxicity. In vitro, CD could be taken up into C6 glioma cells in a time- and concentration-dependent manner, with both endosomes and mitochondria involved. In vivo, CD could be used for non-invasive glioma imaging because of its high accumulation in the glioma site of the brain, which was demonstrated by both in vivo imaging and ex vivo tissue imaging. Furthermore, the fluorescent distribution in tissue slices also showed CD distributed in glioma with high intensity, while with a low intensity in normal brain tissue. In conclusion, CD were prepared using a simple method with relatively long excitation and emission wavelengths and could be used for non-invasive glioma imaging. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr02657h
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, Hyekyun
Purpose: Cone-beam CT (CBCT) is a widely used imaging modality for image-guided radiotherapy. Most vendors provide CBCT systems that are mounted on a linac gantry. Thus, CBCT can be used to estimate the actual 3-dimensional (3D) position of moving respiratory targets in the thoracic/abdominal region using 2D projection images. The authors have developed a method for estimating the 3D trajectory of respiratory-induced target motion from CBCT projection images using interdimensional correlation modeling. Methods: Because the superior–inferior (SI) motion of a target can be easily analyzed on projection images of a gantry-mounted CBCT system, the authors investigated the interdimensional correlation of the SI motion with left–right and anterior–posterior (AP) movements while the gantry is rotating. A simple linear model and a state-augmented model were implemented and applied to the interdimensional correlation analysis, and their performance was compared. The parameters of the interdimensional correlation models were determined by least-squares estimation of the 2D error between the actual and estimated projected target position. The method was validated using 160 3D tumor trajectories from 46 thoracic/abdominal cancer patients obtained during CyberKnife treatment. The authors’ simulations assumed two application scenarios: (1) retrospective estimation for the purpose of moving tumor setup used just after volumetric matching with CBCT; and (2) on-the-fly estimation for the purpose of real-time target position estimation during gating or tracking delivery, either for full-rotation volumetric-modulated arc therapy (VMAT) in 60 s or a stationary six-field intensity-modulated radiation therapy (IMRT) with a beam delivery time of 20 s. Results: For the retrospective CBCT simulations, the mean 3D root-mean-square error (RMSE) for all 4893 trajectory segments was 0.41 mm (simple linear model) and 0.35 mm (state-augmented model).
In the on-the-fly simulations, prior projections over more than 60° appear to be necessary for reliable estimation. The mean 3D RMSE during beam delivery, after the simple linear model had been established with 90° of prior projection data, was 0.42 mm for VMAT and 0.45 mm for IMRT. Conclusions: The proposed method does not require any internal/external correlation or statistical modeling to estimate the target trajectory and can be used both for retrospective image-guided radiotherapy with CBCT projection images and for real-time target position monitoring for respiratory gating or tracking.
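A hedged sketch of the "simple linear model" idea: fit anterior-posterior (AP) motion as an affine function of the easily observed superior-inferior (SI) motion by least squares. The breathing traces, coefficients, and sampling rate below are invented, and the real method fits against 2D projected positions rather than a known AP trace:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 60.0, 600)                    # 60 s of breathing at 10 Hz
si = 8.0 * np.sin(2 * np.pi * t / 4.0)             # observed SI motion, mm
ap = 0.3 * si + 1.0 + rng.normal(0, 0.05, t.size)  # correlated AP motion, mm

# fit AP = b * SI + a by least squares, a stand-in for the interdimensional
# correlation model (its exact parameterization is not given in the abstract)
A = np.column_stack([si, np.ones_like(si)])
(b, a), *_ = np.linalg.lstsq(A, ap, rcond=None)

ap_est = b * si + a
rmse = float(np.sqrt(np.mean((ap_est - ap) ** 2)))
```

Once fitted from prior projections, the model lets the SI position seen on each new projection predict the unseen AP component in real time.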
Subcutaneous Tissue Thickness is an Independent Predictor of Image Noise in Cardiac CT
Staniak, Henrique Lane; Sharovsky, Rodolfo; Pereira, Alexandre Costa; de Castro, Cláudio Campi; Benseñor, Isabela M.; Lotufo, Paulo A.; Bittencourt, Márcio Sommer
2014-01-01
Background Few data on the definition of simple robust parameters to predict image noise in cardiac computed tomography (CT) exist. Objectives To evaluate the value of a simple measure of subcutaneous tissue as a predictor of image noise in cardiac CT. Methods 86 patients underwent prospective ECG-gated coronary computed tomographic angiography (CTA) and coronary calcium scoring (CAC) with 120 kV and 150 mA. The image quality was objectively measured by the image noise in the aorta in the cardiac CTA, and low noise was defined as noise < 30HU. The chest anteroposterior diameter and lateral width, the image noise in the aorta and the skin-sternum (SS) thickness were measured as predictors of cardiac CTA noise. The association of the predictors and image noise was performed by using Pearson correlation. Results The mean radiation dose was 3.5 ± 1.5 mSv. The mean image noise in CT was 36.3 ± 8.5 HU, and the mean image noise in non-contrast scan was 17.7 ± 4.4 HU. All predictors were independently associated with cardiac CTA noise. The best predictors were SS thickness, with a correlation of 0.70 (p < 0.001), and noise in the non-contrast images, with a correlation of 0.73 (p < 0.001). When evaluating the ability to predict low image noise, the areas under the ROC curve for the non-contrast noise and for the SS thickness were 0.837 and 0.864, respectively. Conclusion Both SS thickness and CAC noise are simple accurate predictors of cardiac CTA image noise. Those parameters can be incorporated in standard CT protocols to adequately adjust radiation exposure. PMID:24173136
Legleiter, C.J.; Kinzel, P.J.; Overstreet, B.T.
2011-01-01
This study examined the possibility of mapping depth from optical image data in turbid, sediment-laden channels. Analysis of hyperspectral images from the Platte River indicated that depth retrieval in these environments is feasible, but might not be highly accurate. Four methods of calibrating image-derived depth estimates were evaluated. The first involved extracting image spectra at survey point locations throughout the reach. These paired observations of depth and reflectance were subjected to optimal band ratio analysis (OBRA) to relate (R2 = 0.596) a spectrally based quantity to flow depth. Two other methods were based on OBRA of data from individual cross sections. A fourth strategy used ground-based reflectance measurements to derive an OBRA relation (R2 = 0.944) that was then applied to the image. Depth retrieval accuracy was assessed by visually inspecting cross sections and calculating various error metrics. Calibration via field spectroscopy resulted in a shallow bias but provided relative accuracies similar to image-based methods. Reach-aggregated OBRA was marginally superior to calibrations based on individual cross sections, and depth retrieval accuracy varied considerably along each reach. Errors were lower and observed versus predicted regression R2 values higher for a relatively simple, deeper site than a shallower, braided reach; errors were 1/3 and 1/2 the mean depth for the two reaches. Bathymetric maps were coherent and hydraulically reasonable, however, and might be more reliable than implied by numerical metrics. As an example application, linear discriminant analysis was used to produce a series of depth threshold maps for characterizing shallow-water habitat for roosting cranes. © 2011 by the American Geophysical Union.
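Optimal band ratio analysis can be sketched as an exhaustive search over band pairs for the log-ratio ln(R_i/R_j) most strongly correlated with measured depth. The synthetic reflectance below is illustrative; a real run would use field spectra or image pixels paired with survey depths:

```python
import numpy as np

def obra(depths, refl):
    """Return the band pair (i, j) whose ratio ln(R_i/R_j) best predicts depth,
    along with the R^2 of a simple linear regression against that ratio."""
    n_bands = refl.shape[1]
    best_pair, best_r2 = None, -1.0
    for i in range(n_bands):
        for j in range(n_bands):
            if i == j:
                continue
            x = np.log(refl[:, i] / refl[:, j])
            r = np.corrcoef(x, depths)[0, 1]
            if r * r > best_r2:
                best_pair, best_r2 = (i, j), r * r
    return best_pair, best_r2

rng = np.random.default_rng(3)
depths = rng.uniform(0.2, 1.5, 100)                 # flow depths, m
refl = rng.uniform(0.2, 0.6, (100, 4))              # 4 synthetic bands
refl[:, 1] = refl[:, 2] * np.exp(0.8 * depths)      # pair (1, 2) encodes depth

pair, r2 = obra(depths, refl)
```

The regression on the winning ratio becomes the calibration used to convert every image pixel to an estimated depth.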
Characterizing pigments with hyperspectral imaging variable false-color composites
NASA Astrophysics Data System (ADS)
Hayem-Ghez, Anita; Ravaud, Elisabeth; Boust, Clotilde; Bastian, Gilles; Menu, Michel; Brodie-Linder, Nancy
2015-11-01
Hyperspectral imaging has been used for pigment characterization on paintings for the last 10 years. It is a noninvasive technique, which combines the power of spectrophotometry and that of imaging technologies. We have access to a visible and near-infrared hyperspectral camera, ranging from 400 to 1000 nm in 80-160 spectral bands. In order to treat the large amount of data that this imaging technique generates, one can use statistical tools such as principal component analysis (PCA). To characterize pigments, researchers mostly use PCA, convex geometry algorithms and the comparison of resulting clusters to database spectra with a specific tolerance (like the Spectral Angle Mapper tool in the dedicated software ENVI). Our approach originates from false-color photography and aims at providing a simple tool to identify pigments through imaging spectroscopy. It can be considered as a quick first analysis to see the principal pigments of a painting, before using a more complete multivariate statistical tool. We study pigment spectra, for each kind of hue (blue, green, red and yellow), to identify the wavelength maximizing spectral differences. The case of red pigments is the most interesting because our methodology can discriminate the red pigments very well, even red lakes, which are always difficult to identify. For the yellow and blue categories, it represents good progress over infrared false-color (IRFC) photography for pigment discrimination. We apply our methodology to study the pigments on a painting by Eustache Le Sueur, a French painter of the seventeenth century. We compare the results to other noninvasive analyses such as X-ray fluorescence and optical microscopy. Finally, we draw conclusions about the advantages and limits of the variable false-color image method using hyperspectral imaging.
Simple Köhler homogenizers for image-forming solar concentrators
NASA Astrophysics Data System (ADS)
Zhang, Weiya; Winston, Roland
2010-08-01
By adding simple Köhler homogenizers in the form of aspheric lenses generated with an optimization approach, we solve the problems of non-uniform irradiance distribution and non-square irradiance pattern existing in some image-forming solar concentrators. The homogenizers do not require optical bonding to the solar cells or total internal reflection surface. Two examples are shown including a Fresnel lens based concentrator and a two-mirror aplanatic system.
A simple method for multiday imaging of slice cultures.
Seidl, Armin H; Rubel, Edwin W
2010-01-01
The organotypic slice culture (Stoppini et al. A simple method for organotypic cultures of nervous tissue. 1991;37:173-182) has become the method of choice to answer a variety of questions in neuroscience. For many experiments, however, it would be beneficial to image or manipulate a slice culture repeatedly, for example, over the course of many days. We prepared organotypic slice cultures of the auditory brainstem of P3 and P4 mice and kept them in vitro for up to 4 weeks. Single cells in the auditory brainstem were transfected with plasmids expressing fluorescent proteins by way of electroporation (Haas et al. Single-cell electroporation for gene transfer in vivo. 2001;29:583-591). The culture was then placed in a chamber perfused with oxygenated ACSF and the labeled cell imaged with an inverted wide-field microscope repeatedly for multiple days, recording several time-points per day, before returning the slice to the incubator. We describe a simple method to image a slice culture preparation during the course of multiple days and over many continuous hours, without noticeable damage to the tissue or photobleaching. Our method uses a simple, inexpensive custom-built insulator constructed around the microscope to maintain controlled temperature and uses a perfusion chamber as used for in vitro slice recordings. (c) 2009 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Gabai, Haniel; Baranes-Zeevi, Maya; Zilberman, Meital; Shaked, Natan T.
2013-04-01
We propose an off-axis interferometric imaging system as a simple and unique modality for continuous, non-contact and non-invasive wide-field imaging and characterization of drug release from polymeric devices used in biomedicine. In contrast to the current gold-standard methods in this field, usually based on chromatographic and spectroscopic techniques, our method requires no user intervention during the experiment, and only one test-tube is prepared. We experimentally demonstrate imaging and characterization of drug release from a soy-based protein matrix, used as a skin equivalent for wound dressing with controlled release of the anesthetic drug bupivacaine. Our preliminary results demonstrate the high potential of our method as a simple and low-cost modality for wide-field imaging and characterization of drug release from drug delivery devices.
Sieracki, M E; Reichenbach, S E; Webb, K L
1989-01-01
The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. 
A simple model of the optical properties of fluorescing objects and the video acquisition system is described which explains how the second derivative best approximates the position of the edge. PMID:2516431
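The second-derivative criterion that performed best in this study is easy to state concretely: take the threshold to be the intensity at the most negative point of the second derivative of an edge profile. A minimal Python sketch on a synthetic sigmoid edge (the function name and test profile are illustrative, not the authors' implementation):

```python
import numpy as np

def second_derivative_threshold(profile):
    """Sketch of the second-derivative criterion: the segmentation threshold
    is the profile intensity at the minimum (most negative point) of the
    second derivative of the edge profile."""
    d2 = np.gradient(np.gradient(profile))   # discrete second derivative
    return profile[np.argmin(d2)]

# Synthetic sigmoid edge: dark background (0) rising to a bright plateau (100),
# roughly mimicking a profile across a fluorescent object's boundary.
x = np.arange(101, dtype=float)
edge = 100.0 / (1.0 + np.exp(-(x - 50.0) / 3.0))
threshold = second_derivative_threshold(edge)
```

For a rising edge the minimum of the second derivative falls on the bright side of the inflection point, so the resulting threshold sits above the midpoint intensity.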
Automating PACS quality control with the Vanderbilt image processing enterprise resource
NASA Astrophysics Data System (ADS)
Esparza, Michael L.; Welch, E. Brian; Landman, Bennett A.
2012-02-01
Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption; for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet, substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads the scans or an automated algorithm fails. Given scores of slices in typical three-dimensional scans and a wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies; these systems are not readily compatible with research systems due to security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines across an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months.
Is Fourier analysis performed by the visual system or by the visual investigator?
Ochs, A L
1979-01-01
A numerical Fourier transform was made of the pincushion grid illusion and the spectral components orthogonal to the illusory lines were isolated. Their inverse transform creates a picture of the illusion. The spatial-frequency response of cortical, simple receptive field neurons similarly filters the grid. A complete set of these neurons thus approximates a two-dimensional Fourier analyzer. One cannot conclude, however, that the brain actually uses frequency-domain information to interpret visual images.
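The procedure described in this abstract (transform the image, isolate the spectral components at one orientation, invert the transform) can be sketched with NumPy's FFT. The orientation mask and grating test below are illustrative assumptions, not the paper's actual stimuli:

```python
import numpy as np

def isolate_orientation(img, angle_deg, half_width_deg=10.0):
    """Keep only spectral components oriented within half_width_deg of
    angle_deg (orientations are 180-degree periodic), then invert the
    transform. A sketch of orientation-selective Fourier filtering."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    FX, FY = np.meshgrid(np.arange(w) - w // 2, np.arange(h) - h // 2)
    theta = np.degrees(np.arctan2(FY, FX)) % 180.0
    diff = np.abs(theta - angle_deg % 180.0)
    diff = np.minimum(diff, 180.0 - diff)    # wrap-around angular distance
    mask = diff <= half_width_deg
    mask[h // 2, w // 2] = True              # always keep the DC term
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# A vertical grating has all its energy along the horizontal frequency axis
# (orientation 0 degrees): filtering at 0 keeps it, filtering at 90 removes it.
grating = np.tile(np.sin(2 * np.pi * 8 * np.arange(64) / 64), (64, 1))
kept = isolate_orientation(grating, 0.0)
removed = isolate_orientation(grating, 90.0)
```

This is the same kind of orientation-selective filtering the abstract attributes to populations of simple receptive field neurons.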
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
CDS (Change Detection Systems) is a mechanism for rapid visual analysis using complex image alignment algorithms. CDS is controlled with a simple interface that has been designed for use by anyone who can operate a digital camera. A challenge of complex industrial systems like nuclear power plants is to accurately identify changes in systems, structures and components that may critically impact the operation of the facility. CDS can provide a means of early intervention before the issues evolve into safety and production challenges.
Picture Pile: A citizen-powered tool for rapid post-disaster damage assessments
NASA Astrophysics Data System (ADS)
Danylo, Olha; Sturn, Tobias; Giovando, Cristiano; Moorthy, Inian; Fritz, Steffen; See, Linda; Kapur, Ravi; Girardot, Blake; Ajmar, Andrea; Giulio Tonolo, Fabio; Reinicke, Tobias; Mathieu, Pierre Philippe; Duerauer, Martina
2017-04-01
According to the World Bank's global risk analysis, around 34% of the world's total population lives in areas of high mortality risk from two or more natural hazards. Therefore, timely and innovative methods to rapidly assess damage to subsequently aid relief and recovery efforts are critical. In this field of post-disaster damage assessment, several crowdsourcing-based technological tools that engage citizens in carrying out various tasks, including data collection, satellite image analysis and online interactive mapping, have recently been developed. One such tool is Picture Pile, a cross-platform application that is designed as a generic and flexible tool for ingesting satellite imagery for rapid classification. As part of the ESA's Crowd4Sat initiative led by Imperative Space, this study develops a workflow for employing Picture Pile for rapid post-disaster damage assessment. We outline how satellite image interpretation tasks within Picture Pile can be crowdsourced using the example of Hurricane Matthew, which affected large regions of Haiti in September 2016. The application provides simple microtasks, where the user is presented with satellite images and is asked a simple yes/no question. A "before" disaster satellite image is displayed next to an "after" disaster image and the user is asked to assess whether there is any visible, detectable damage. The question is formulated precisely to focus the user's attention on a particular aspect of the damage. The user interface of Picture Pile is also built for users to rapidly classify the images by swiping to indicate their answer, thereby efficiently completing the microtask. The proposed approach will not only help to increase citizen awareness of natural disasters, but also provide them with a unique opportunity to contribute directly to relief efforts.
Furthermore, to gain confidence in the crowdsourced results, quality assurance methods were integrated during the testing phase of the application using image classifications from experts. The application has a built-in real-time quality assurance system to provide volunteers with feedback when their answer does not agree with that of an expert. Picture Pile is intended to supplement existing approaches for post-disaster damage assessment and can be used by different networks of volunteers (e.g., the Humanitarian OpenStreetMap Team) to assess damage and create up-to-date maps of response to disaster events.
Optimal interpolation analysis of leaf area index using MODIS data
Gu, Yingxin; Belair, Stephane; Mahfouf, Jean-Francois; Deblonde, Godelieve
2006-01-01
A simple data analysis technique for vegetation leaf area index (LAI) using Moderate Resolution Imaging Spectroradiometer (MODIS) data is presented. The objective is to generate LAI data that is appropriate for numerical weather prediction. A series of techniques and procedures which includes data quality control, time-series data smoothing, and simple data analysis is applied. The LAI analysis is an optimal combination of the MODIS observations and derived climatology, depending on their associated errors σo and σc. The “best estimate” LAI is derived from a simple three-point smoothing technique combined with a selection of maximum LAI (after data quality control) values to ensure a higher quality. The LAI climatology is a time smoothed mean value of the “best estimate” LAI during the years of 2002–2004. The observation error is obtained by comparing the MODIS observed LAI with the “best estimate” of the LAI, and the climatological error is obtained by comparing the “best estimate” of LAI with the climatological LAI value. The LAI analysis is the result of a weighting between these two errors. Demonstration of the method described in this paper is presented for the 15-km grid of Meteorological Service of Canada (MSC)'s regional version of the numerical weather prediction model. The final LAI analyses have a relatively smooth temporal evolution, which makes them more appropriate for environmental prediction than the original MODIS LAI observation data. They are also more realistic than the LAI data currently used operationally at the MSC which is based on land-cover databases.
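The weighting between the observation error σo and the climatological error σc described above is, at a single point, the textbook optimal-interpolation update. A minimal sketch (variable names are illustrative, and the operational scheme may differ in detail):

```python
def oi_lai(lai_obs, lai_clim, sigma_o, sigma_c):
    """One-point optimal-interpolation analysis (sketch): combine a MODIS
    LAI observation with the climatological LAI, weighting each by the
    error variance of the other."""
    gain = sigma_c**2 / (sigma_o**2 + sigma_c**2)   # trust the observation
    return lai_clim + gain * (lai_obs - lai_clim)   # more when sigma_c is large
```

With equal errors the analysis falls halfway between observation and climatology; as the observation error grows, the analysis relaxes toward the climatology, which is what produces the smooth temporal evolution the abstract reports.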
Image quality evaluation of full reference algorithm
NASA Astrophysics Data System (ADS)
He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan
2018-03-01
Image quality evaluation is a classic research topic; the goal is to design algorithms whose evaluation values are consistent with subjective human judgments. This paper mainly introduces several typical full-reference objective evaluation methods: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Metric (SSIM) and Feature Similarity (FSIM). The different evaluation methods are tested in Matlab, and their advantages and disadvantages are obtained by analyzing and comparing them. MSE and PSNR are simple, but they do not incorporate human visual system (HVS) characteristics into image quality evaluation, so their results are not ideal. SSIM correlates well with subjective judgments and is simple to compute because it takes human visual effects into account; however, the SSIM method rests on a hypothesis, so its results are limited. The FSIM method can be used to test both grayscale and color images, with better results. Experimental results show that the image quality evaluation algorithm based on FSIM is the most accurate.
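The two simplest metrics discussed above, MSE and PSNR, can be implemented in a few lines; this sketch uses Python rather than the paper's Matlab:

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between a reference and a test image."""
    diff = ref.astype(np.float64) - test.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; `peak` is the maximum pixel value
    (255 for 8-bit images). Identical images give infinite PSNR."""
    m = mse(ref, test)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

# Toy 8-bit example: a uniform error of 16 grey levels gives MSE = 256
# and PSNR = 10*log10(255^2 / 256), about 24 dB.
ref = np.zeros((8, 8), dtype=np.uint8)
noisy = np.full((8, 8), 16, dtype=np.uint8)
```

Because MSE and PSNR weight every pixel error equally, they ignore the HVS characteristics the abstract mentions, which is why SSIM and FSIM correlate better with subjective scores.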
Amr, Mostafa; Kaliyadan, Feroze; Shams, Tarek
2014-01-01
Skin disorders such as acne, which have significant cosmetic implications, can affect the self-perception of cutaneous body image. There are many scales which measure self-perception of cutaneous body image. We evaluated the use of a simple Cutaneous Body Image (CBI) scale to assess self-perception of body image in a sample of young Arab patients affected with acne. A total of 70 patients with acne answered the CBI questionnaire. The CBI score was correlated with the severity of acne and acne scarring, gender, and history of retinoids use. There was no statistically significant correlation between CBI and the other parameters - gender, acne/acne scarring severity, and use of retinoids. Our study suggests that cutaneous body image perception in Arab patients with acne was not dependent on variables like gender and severity of acne or acne scarring. A simple CBI scale alone is not a sufficiently reliable tool to assess self-perception of body image in patients with acne vulgaris.
Villette, Vincent; Levesque, Mathieu; Miled, Amine; Gosselin, Benoit; Topolnik, Lisa
2017-01-01
Chronic electrophysiological recordings of neuronal activity combined with two-photon Ca2+ imaging give access to high resolution and cellular specificity. In addition, awake drug-free experimentation is required for investigating the physiological mechanisms that operate in the brain. Here, we developed a simple head fixation platform, which allows simultaneous chronic imaging and electrophysiological recordings to be obtained from the hippocampus of awake mice. We performed quantitative analyses of spontaneous animal behaviour, the associated network states and the cellular activities in the dorsal hippocampus as well as estimated the brain stability limits to image dendritic processes and individual axonal boutons. Ca2+ imaging recordings revealed a relatively stereotyped hippocampal activity despite a high inter-animal and inter-day variability in the mouse behavior. In addition to quiet state and locomotion behavioural patterns, the platform allowed the reliable detection of walking steps and fine speed variations. The brain motion during locomotion was limited to ~1.8 μm, thus allowing for imaging of small sub-cellular structures to be performed in parallel with recordings of network and behavioural states. This simple device extends the drug-free experimentation in vivo, enabling high-stability optophysiological experiments with single-bouton resolution in the mouse awake brain. PMID:28240275
CellProfiler Tracer: exploring and validating high-throughput, time-lapse microscopy image data.
Bray, Mark-Anthony; Carpenter, Anne E
2015-11-04
Time-lapse analysis of cellular images is an important and growing need in biology. Algorithms for cell tracking are widely available; what researchers have been missing is a single open-source software package to visualize standard tracking output (from software like CellProfiler) in a way that allows convenient assessment of track quality, especially for researchers tuning tracking parameters for high-content time-lapse experiments. This makes quality assessment and algorithm adjustment a substantial challenge, particularly when dealing with hundreds of time-lapse movies collected in a high-throughput manner. We present CellProfiler Tracer, a free and open-source tool that complements the object tracking functionality of the CellProfiler biological image analysis package. Tracer allows multi-parametric morphological data to be visualized on object tracks, providing visualizations that have already been validated within the scientific community for time-lapse experiments, and combining them with simple graph-based measures for highlighting possible tracking artifacts. CellProfiler Tracer is a useful, free tool for inspection and quality control of object tracking data, available from http://www.cellprofiler.org/tracer/.
Chien, Miao-Ping; Carlini, Andrea S; Hu, Dehong; Barback, Christopher V; Rush, Anthony M; Hall, David J; Orr, Galya; Gianneschi, Nathan C
2013-12-18
Matrix metalloproteinase enzymes, overexpressed in HT-1080 human fibrocarcinoma tumors, were used to guide the accumulation and retention of an enzyme-responsive nanoparticle in a xenograft mouse model. The nanoparticles were prepared as micelles from amphiphilic block copolymers bearing a simple hydrophobic block and a hydrophilic peptide brush. The polymers were end-labeled with Alexa Fluor 647 dyes leading to the formation of labeled micelles upon dialysis of the polymers from DMSO/DMF to aqueous buffer. This dye-labeling strategy allowed the presence of the retained material to be visualized via whole animal imaging in vivo and in ex vivo organ analysis following intratumoral injection into HT-1080 xenograft tumors. We propose that the material is retained by virtue of an enzyme-induced accumulation process whereby particles change morphology from 20 nm spherical micelles to micrometer-scale aggregates, kinetically trapping them within the tumor. This hypothesis is tested here via an unprecedented super-resolution fluorescence analysis of ex vivo tissue slices confirming a particle size increase occurs concomitantly with extended retention of responsive particles compared to unresponsive controls.
NASA Technical Reports Server (NTRS)
Maughan, P. M. (Principal Investigator)
1972-01-01
The author has identified the following significant results. Preliminary analyses indicate that several important relationships have been observed utilizing ERTS-1 imagery. Of most significance is that in the Mississippi Sound, as elsewhere, considerable detail exists as to turbidity patterns in the water column. Simple analysis is complicated by the apparent interaction between actual turbidity, turbidity induced by shoal water, and actual imaging of the bottom in extreme shoal water. A statistical approach is being explored which shows promise of at least partially separating these effects so that partitioning of true turbid plumes can be accomplished. This partitioning is of great importance to this program in that supportive data seem to indicate that menhaden occur more frequently in turbid areas. In this connection four individual captures have been associated with a major turbid feature imaged on 6 August. If a significant relationship between imaged turbid features and catch distribution can be established, for example by graphic and/or numeric analysis, it will represent a major advancement for short term prediction of commercially accessible menhaden.
Unobtrusive integration of data management with fMRI analysis.
Poliakov, Andrew V; Hertzenberg, Xenia; Moore, Eider B; Corina, David P; Ojemann, George A; Brinkley, James F
2007-01-01
This note describes a software utility, called X-batch, which addresses two pressing issues typically faced by functional magnetic resonance imaging (fMRI) neuroimaging laboratories: (1) analysis automation and (2) data management. The first issue is addressed by providing a simple batch mode processing tool for the popular SPM software package (http://www.fil.ion.ucl.ac.uk/spm/; Wellcome Department of Imaging Neuroscience, London, UK). The second is addressed by transparently recording metadata describing all aspects of the batch job (e.g., subject demographics, analysis parameters, locations and names of created files, date and time of analysis, and so on). These metadata are recorded as instances of an extended version of the Protégé-based Experiment Lab Book ontology created by the Dartmouth fMRI Data Center. The resulting instantiated ontology provides a detailed record of all fMRI analyses performed, and as such can be part of larger systems for neuroimaging data management, sharing, and visualization. The X-batch system is in use in our own fMRI research, and is available for download at http://X-batch.sourceforge.net/.
3D spherical-cap fitting procedure for (truncated) sessile nano- and micro-droplets & -bubbles.
Tan, Huanshu; Peng, Shuhua; Sun, Chao; Zhang, Xuehua; Lohse, Detlef
2016-11-01
In the study of nanobubbles, nanodroplets or nanolenses immobilised on a substrate, a cross-section of a spherical cap is widely applied to extract geometrical information from atomic force microscopy (AFM) topographic images. In this paper, we have developed a comprehensive 3D spherical-cap fitting procedure (3D-SCFP) to extract morphologic characteristics of complete or truncated spherical caps from AFM images. Our procedure integrates several advanced digital image analysis techniques to construct a 3D spherical-cap model, from which the geometrical parameters of the nanostructures are extracted automatically by a simple algorithm. The procedure takes into account all valid data points in the construction of the 3D spherical-cap model to achieve high fidelity in morphology analysis. We compare our 3D fitting procedure with the commonly used 2D cross-sectional profile fitting method to determine the contact angle of a complete spherical cap and a truncated spherical cap. The results from 3D-SCFP are consistent and accurate, while 2D fitting is unavoidably arbitrary in the selection of the cross-section and has a much lower number of data points on which the fitting can be based, which in addition is biased to the top of the spherical cap. We expect that the developed 3D spherical-cap fitting procedure will find many applications in imaging analysis.
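The kind of computation 3D-SCFP performs can be sketched as a least-squares sphere fit to all valid AFM data points, followed by a contact-angle readout from the cap geometry. The algebraic fit below is a common textbook approach and only a sketch under that assumption, not the authors' code:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: solve the linear system
    x^2 + y^2 + z^2 = 2a x + 2b y + 2c z + d for the center (a, b, c)
    and radius R = sqrt(d + a^2 + b^2 + c^2)."""
    A = np.column_stack([2 * points, np.ones(len(points))])
    f = np.sum(points ** 2, axis=1)
    (a, b, c, d), *_ = np.linalg.lstsq(A, f, rcond=None)
    return np.array([a, b, c]), np.sqrt(d + a * a + b * b + c * c)

def contact_angle(center, radius):
    """Contact angle (degrees) of the cap cut by the substrate plane z = 0:
    cos(theta) = -z_center / R."""
    return np.degrees(np.arccos(-center[2] / radius))

# Synthetic cap: points on a unit sphere centered at (0, 0, -0.5), keeping
# only the part above the substrate, so the true contact angle is 60 degrees.
rng = np.random.default_rng(1)
t = rng.uniform(0, np.pi / 2, 500)
p = rng.uniform(0, 2 * np.pi, 500)
pts = np.column_stack([np.sin(t) * np.cos(p),
                       np.sin(t) * np.sin(p),
                       np.cos(t) - 0.5])
pts = pts[pts[:, 2] > 0]              # the truncated cap above z = 0
center, R = fit_sphere(pts)
theta = contact_angle(center, R)
```

Because the fit uses all cap points in 3D, it avoids the arbitrariness of choosing a single 2D cross-section, which is the advantage the abstract reports for 3D-SCFP.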
A simple X-ray source of two orthogonal beams for small samples imaging
NASA Astrophysics Data System (ADS)
Hrdý, J.
2018-04-01
A simple method for simultaneous imaging of small samples by two orthogonal beams is proposed. The method is based on one channel-cut crystal which is oriented such that the beam is diffracted on two crystallographic planes simultaneously. These planes are symmetrically inclined to the crystal surface. The beams are three times diffracted. After the first diffraction the beam is split. After the second diffraction the split beams become parallel. Finally, after the third diffraction the beams become convergent and may be used for imaging. The corresponding angular relations to obtain orthogonal beams are derived.
Compact divided-pupil line-scanning confocal microscope for investigation of human tissues
NASA Astrophysics Data System (ADS)
Glazowski, Christopher; Peterson, Gary; Rajadhyaksha, Milind
2013-03-01
Divided-pupil line-scanning confocal microscopy (DPLSCM) can provide a simple and low-cost approach for imaging of human tissues with pathology-like nuclear and cellular detail. Using results from a multidimensional numerical model of DPLSCM, we found optimal pupil configurations for improved axial sectioning, as well as control of speckle noise in the case of reflectance imaging. The modeling results guided the design and construction of a simple (10 component) microscope, packaged within the footprint of an iPhone, and capable of cellular resolution. We present the optical design with experimental video-images of in-vivo human tissues.
On-Line GIS Analysis and Image Processing for Geoportal Kielce/poland Development
NASA Astrophysics Data System (ADS)
Hejmanowska, B.; Głowienka, E.; Florek-Paszkowski, R.
2016-06-01
GIS databases are widely available on the Internet, but mainly for visualization with limited functionality; only very simple queries are possible, i.e. attribute queries, coordinate readout, line and area measurements, or pathfinding. Slightly more complex analyses (e.g. buffering or intersection) are rarely offered. This paper aims at a concept for developing Geoportal functionality in the field of GIS analysis. Multi-Criteria Evaluation (MCE) is planned to be implemented in the web application. OGC services are used for data acquisition from the server and for visualization of results. Advanced GIS analysis is planned in PostGIS and Python programming. In the paper, an example of MCE analysis based on Geoportal Kielce is presented. Another field where the Geoportal can be developed is the processing of newly available satellite images free of charge (Sentinel-2, Landsat 8, ASTER, WV-2). We are now witnessing a revolution in free access to satellite imagery. This should result in an increase of interest in the use of these data in various fields by a larger number of users, not necessarily specialists in remote sensing. Therefore, it seems reasonable to expand the functionality of Internet tools for data processing by non-specialists, by automating data collection and offering predefined analyses.
Simple Technique for Dark-Field Photography of Immunodiffusion Bands
Jensh, Ronald P.; Brent, Robert L.
1969-01-01
A simple dark-field photographic technique was developed which enables laboratory personnel with minimal photographic training to easily record antigen-antibody patterns on immunodiffusion plates. PMID:4979944
NASA Astrophysics Data System (ADS)
Vidal, Borja; Lafuente, Juan A.
2016-03-01
A simple technique to avoid color limitations in image capture systems based on chroma key video composition using retroreflective screens and light-emitting diode (LED) rings is proposed and demonstrated. The combination of an asynchronous temporal modulation of the background illumination and simple image processing removes the usual restrictions on foreground colors in the scene. The technique removes technical constraints in stage composition, allowing its design to be purely based on artistic grounds. Since it only requires adding a very simple electronic circuit to widely used chroma keying hardware based on retroreflective screens, the technique is easily applicable to TV and filming studios.
Digital imaging mass spectrometry.
Bamberger, Casimir; Renz, Uwe; Bamberger, Andreas
2011-06-01
Methods to visualize the two-dimensional (2D) distribution of molecules by mass spectrometric imaging evolve rapidly and yield novel applications in biology, medicine, and material surface sciences. Most mass spectrometric imagers acquire high mass resolution spectra spot-by-spot and thereby scan the object's surface. Thus, imaging is slow and image reconstruction remains cumbersome. Here we describe an imaging mass spectrometer that exploits true imaging capabilities by ion optical means for the time-of-flight mass separation. The mass spectrometer is equipped with the ASIC Timepix chip as an array detector to acquire the position, mass, and intensity of ions that are imaged by matrix-assisted laser desorption/ionization (MALDI) directly from the target sample onto the detector. This imaging mass spectrometer has a spatial resolving power at the specimen of (84 ± 35) μm with a mass resolution of 45 and locates atoms or organic compounds on a surface area up to ~2 cm². Extended laser spots of ~5 mm² on structured specimens allow parallel imaging of selected masses. The digital imaging mass spectrometer provides high hit-multiplicity, straightforward image reconstruction, and potential for high-speed readout at 4 kHz or more. This device demonstrates a simple way of true image acquisition like a digital photographic camera. The technology may enable a fast analysis of biomolecular samples in the near future.
Ishii, M; Jones, M; Shiota, T; Yamada, I; Sinclair, B; Heinrich, R S; Yoganathan, A P; Sahn, D J
1998-11-01
The purpose of our study was to determine the temporal variability of regurgitant color Doppler jet areas and the width of the color Doppler imaged vena contracta for evaluating the severity of aortic regurgitation. Twenty-nine hemodynamically different states were obtained pharmacologically in 8 sheep 20 weeks after surgery to produce aortic regurgitation. Aortic regurgitation was quantified by peak and mean regurgitant flow rates, regurgitant stroke volumes, and regurgitant fractions determined using pulmonary and aortic electromagnetic flow probes and meters balanced against each other. The regurgitant jet areas and the widths of color Doppler imaged vena contracta were measured at 4 different times during diastole to determine the temporal variability of this parameter. When measured at 4 different temporal points in diastole, a significant change was observed in the size of the color Doppler imaged regurgitant jet (percent of difference: from 31.1% to 904%; 233% +/- 245%). Simple linear regression analysis between each color jet area at 4 different periods in diastole and flow meter-based severity of the aortic regurgitation showed only weak correlation (0.23 < r < 0.49). In contrast, for most conditions only a slight change was observed in the width of the color Doppler imaged vena contracta during the diastolic regurgitant period (percent of difference, vena contracta: from 2.4% to 12.9%, 5.8% +/- 3.2%). In addition, for each period the width of the color Doppler imaged vena contracta at the 4 different time periods in diastole correlated quite strongly with volumetric measures of the severity of aortic regurgitation (0.81 < r < 0.90) and with the instantaneous flow rate for the corresponding period (0.85 < r < 0.87). Color Doppler imaged vena contracta may provide a simple, practical, and accurate method for quantifying aortic regurgitation, even when using a single frame color Doppler flow mapping image.
Comparison of image deconvolution algorithms on simulated and laboratory infrared images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, D.
1994-11-15
We compare Maximum Likelihood, Maximum Entropy, Accelerated Lucy-Richardson, Weighted Goodness of Fit, and Pixon reconstructions of simple scenes as a function of signal-to-noise ratio for simulated images with randomly generated noise. Reconstruction results of infrared images taken with the TAISIR (Temperature and Imaging System InfraRed) are also discussed.
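One of the algorithms compared above, Lucy-Richardson deconvolution, can be sketched in a few lines of NumPy. This is the plain, unaccelerated multiplicative update (not the accelerated variant from the study, and not the authors' code); the FFT-based circular convolution and the synthetic two-point-source scene are assumptions for the demo:

```python
import numpy as np

def pad_psf(psf, shape):
    """Embed a small PSF in a full-size array, centred on pixel (0, 0)."""
    p = np.zeros(shape)
    p[:psf.shape[0], :psf.shape[1]] = psf
    return np.roll(p, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

def richardson_lucy(blurred, psf, iters=50):
    """Plain (unaccelerated) Richardson-Lucy deconvolution via FFT convolutions."""
    psf = psf / psf.sum()
    otf = np.fft.rfft2(pad_psf(psf, blurred.shape))
    est = np.full(blurred.shape, blurred.mean())
    for _ in range(iters):
        conv_est = np.fft.irfft2(np.fft.rfft2(est) * otf, s=blurred.shape)
        ratio = blurred / np.maximum(conv_est, 1e-12)
        # conjugate OTF = correlation with the PSF, i.e. convolution with its mirror
        est *= np.fft.irfft2(np.fft.rfft2(ratio) * np.conj(otf), s=blurred.shape)
    return est

# synthetic scene: two point sources blurred by a Gaussian PSF
truth = np.zeros((64, 64))
truth[20, 20] = truth[40, 44] = 1.0
yy, xx = np.mgrid[-4:5, -4:5]
psf = np.exp(-(xx**2 + yy**2) / (2 * 1.5**2))
otf0 = np.fft.rfft2(pad_psf(psf / psf.sum(), truth.shape))
blurred = np.fft.irfft2(np.fft.rfft2(truth) * otf0, s=truth.shape)
restored = richardson_lucy(blurred, psf, iters=100)
```

On noiseless data the iteration re-concentrates the blurred point sources, so the restored image is closer to the truth than the blurred input.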
Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A
2008-10-01
Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
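The core of the "pseudo multiple replica" idea can be illustrated with a toy linear reconstruction. The operator R and receiver noise covariance Psi below are random stand-ins, not an actual SENSE or GRAPPA reconstruction: synthetic noise with the measured covariance is pushed through the reconstruction many times, and the per-pixel standard deviation gives the noise map, which for a linear operator can be checked against the analytic value:

```python
import numpy as np

def pseudo_replica_noise(R, Psi, replicas=4000, seed=3):
    """Monte Carlo per-pixel noise map for a linear reconstruction x = R @ y."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Psi)                  # colour white noise to covariance Psi
    noise = rng.normal(size=(replicas, R.shape[1])) @ L.T
    images = noise @ R.T                         # reconstruct every noise-only replica
    return images.std(axis=0)

# toy "scanner": random linear reconstruction and receiver noise covariance
rng = np.random.default_rng(2)
R = rng.normal(size=(16, 32))
Psi = np.diag(rng.uniform(0.5, 2.0, 32))
mc_map = pseudo_replica_noise(R, Psi)
analytic_map = np.sqrt(np.diag(R @ Psi @ R.T))   # closed form for a linear operator
```

Dividing the reconstruction of the actual signal by such a noise map yields a pixel-by-pixel SNR map, which is the quantity the method makes accessible for accelerated acquisitions.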
NASA Astrophysics Data System (ADS)
Stork, David G.
2004-04-01
It has recently been claimed that some painters in the early Renaissance employed optical devices, specifically concave mirrors, to project images onto their canvas or other support (paper, oak panel, etc.), which they then traced or painted over. In this way, according to the theory, artists achieved their newfound heightened realism. We apply geometric image analysis to the parts of two paintings specifically adduced as evidence supporting this bold theory: the splendid, meticulous chandelier in Jan van Eyck's "Portrait of Arnolfini and his wife," and the trellis in the right panel of Robert Campin's "Merode Altarpiece." It has further been claimed that this trellis is the earliest surviving image captured using the projection of any optical device - a claim that, if correct, would have profound import for the histories of art, science, and optical technology. Our analyses show that the Arnolfini chandelier fails standard tests of perspective coherence that would indicate an optical projection. More specifically, for the physical Arnolfini chandelier to be consistent with an optical projection, that chandelier would have to be implausibly irregular, as judged in comparison to surviving chandeliers and candelabras from the same 15th-century European schools. We find that had Campin painted the trellis using projections, he would have performed an extraordinarily precise and complex procedure using the most sophisticated optical system of his day (for which there is no documentary evidence), a conclusion supported by an attempted "re-enactment." We provide a far simpler, more parsimonious, and more plausible explanation, which we demonstrate by a simple experiment. Our analyses lead us to reject the optical projection theory for these paintings, a conclusion that comports with the vast scholarly consensus on Renaissance working methods and the lack of documentary evidence for optical projections onto a screen.
Vehicle Surveillance with a Generic, Adaptive, 3D Vehicle Model.
Leotta, Matthew J; Mundy, Joseph L
2011-07-01
In automated surveillance, one is often interested in tracking road vehicles, measuring their shape in 3D world space, and determining vehicle classification. To address these tasks simultaneously, an effective approach is the constrained alignment of a prior model of 3D vehicle shape to images. Previous 3D vehicle models are either generic but overly simple or rigid and overly complex. Rigid models represent exactly one vehicle design, so a large collection is needed. A single generic model can deform to a wide variety of shapes, but those shapes have been far too primitive. This paper uses a generic 3D vehicle model that deforms to match a wide variety of passenger vehicles. It is adjustable in complexity between the two extremes. The model is aligned to images by predicting and matching image intensity edges. Novel algorithms are presented for fitting models to multiple still images and simultaneous tracking while estimating shape in video. Experiments compare the proposed model to simple generic models in accuracy and reliability of 3D shape recovery from images and tracking in video. Standard techniques for classification are also used to compare the models. The proposed model outperforms the existing simple models at each task.
Kaakinen, M; Huttunen, S; Paavolainen, L; Marjomäki, V; Heikkilä, J; Eklund, L
2014-01-01
Phase-contrast illumination is a simple and commonly used microscopic method for observing nonstained living cells. Automatic cell segmentation and motion analysis provide tools to analyze single-cell motility in large cell populations. The challenge, however, is to find a method that is sufficiently accurate to generate reliable results, robust enough to function under the wide range of illumination conditions encountered in phase-contrast microscopy, and computationally light enough for efficient analysis of large numbers of cells and image frames. To develop better automatic tools for the analysis of low-magnification phase-contrast images in time-lapse cell migration movies, we investigated the performance of a cell segmentation method based on the intrinsic properties of maximally stable extremal regions (MSER). MSER was found to be reliable and effective in a wide range of experimental conditions. Compared with commonly used segmentation approaches, MSER required negligible preoptimization, dramatically reducing computation time. To analyze cell migration characteristics in time-lapse movies, the MSER-based automatic cell detection was accompanied by a Kalman filter multiobject tracker that efficiently tracked individual cells even in confluent cell populations. This allowed quantitative cell motion analysis resulting in accurate measurements of the migration magnitude and direction of individual cells, as well as characteristics of collective migration of cell groups. Our results demonstrate that MSER accompanied by temporal data association is a powerful tool for accurate and reliable analysis of the dynamic behaviour of cells in phase-contrast image sequences. These techniques tolerate varying and nonoptimal imaging conditions, and their relatively light computational requirements should help to resolve problems in computationally demanding and often time-consuming large-scale dynamic analysis of cultured cells.
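The temporal data association step described above can be sketched with a single-object constant-velocity Kalman filter over 2-D centroid measurements. This is a minimal stand-in for the paper's multiobject tracker; the process noise q, measurement noise r, and the synthetic straight-line cell track are illustrative assumptions:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.5):
    """Constant-velocity Kalman filter over a sequence of 2-D centroids."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt          # state: [x, y, vx, vy]
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # we observe position only
    Q, R = q * np.eye(4), r * np.eye(2)
    x = np.array([*measurements[0], 0.0, 0.0])
    P = np.eye(4)
    out = []
    for z in measurements:
        x = F @ x; P = F @ P @ F.T + Q             # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)        # update with the measurement
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)

# synthetic cell track: constant-velocity motion with noisy centroid detections
t = np.arange(50.0)
true_path = np.stack([0.5 * t, 0.3 * t], axis=1)
noisy = true_path + np.random.default_rng(7).normal(scale=0.7, size=true_path.shape)
smoothed = kalman_track(noisy)
```

After the filter settles, the smoothed track lies closer to the true path than the raw detections, which is what makes per-cell migration magnitude and direction measurable in noisy segmentations.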
© 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
Clustering approaches to feature change detection
NASA Astrophysics Data System (ADS)
G-Michael, Tesfaye; Gunzburger, Max; Peterson, Janet
2018-05-01
The automated detection of changes occurring between multi-temporal images is of significant importance in a wide range of medical, environmental, safety, as well as many other settings. The usage of k-means clustering is explored as a means for detecting objects added to a scene. The silhouette score for the clustering is used to define the optimal number of clusters that should be used. For simple images having a limited number of colors, new objects can be detected by examining the change between the optimal number of clusters for the original and modified images. For more complex images, new objects may need to be identified by examining the relative areas covered by corresponding clusters in the original and modified images. Which method is preferable depends on the composition and range of colors present in the images. In addition to describing the clustering and change detection methodology of our proposed approach, we provide some simple illustrations of its application.
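A minimal sketch of the procedure for simple images follows, using a small pure-NumPy k-means and silhouette score. The sample "images" are just arrays of RGB pixels (real use would flatten actual image arrays), and the colours and noise level are assumptions for the demo; the point is that the silhouette-optimal cluster count rises when an object of a new colour enters the scene:

```python
import numpy as np

def kmeans(X, k, iters=50, restarts=5):
    """Lloyd's algorithm with a few random restarts; returns the best labelling."""
    best = (np.inf, None)
    for seed in range(restarts):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)].astype(float)
        for _ in range(iters):
            labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(0)
        inertia = ((X - centers[labels]) ** 2).sum()
        best = min(best, (inertia, labels), key=lambda t: t[0])
    return best[1]

def silhouette(X, labels):
    """Mean silhouette coefficient (O(n^2); fine for small pixel samples)."""
    D = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        a = D[i, same].sum() / max(same.sum() - 1, 1)   # D[i, i] = 0 excludes self
        b = min(D[i, labels == c].mean() for c in np.unique(labels) if c != labels[i])
        scores.append((b - a) / max(a, b, 1e-12))
    return float(np.mean(scores))

def optimal_clusters(pixels, ks=(2, 3, 4, 5)):
    """Number of clusters maximizing the silhouette score."""
    pixels = np.asarray(pixels, float)
    scores = {}
    for k in ks:
        labels = kmeans(pixels, k)
        if len(np.unique(labels)) > 1:
            scores[k] = silhouette(pixels, labels)
    return max(scores, key=scores.get)

# a two-colour scene vs the same scene with a new, third-colour object added
rng = np.random.default_rng(1)
def blob(color, n=40):
    return np.asarray(color, float) + rng.normal(scale=2.0, size=(n, 3))
original = np.vstack([blob([0, 0, 0]), blob([200, 0, 0])])
modified = np.vstack([original, blob([0, 0, 200])])
```

Comparing `optimal_clusters(original)` with `optimal_clusters(modified)` then flags the added object, which is the first of the two detection strategies described above.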
Kim, Taehoon; Visbal-Onufrak, Michelle A.; Konger, Raymond L.; Kim, Young L.
2017-01-01
Sensitive and accurate assessment of dermatologic inflammatory hyperemia in otherwise grossly normal-appearing skin is beneficial to laypeople for monitoring their own skin health on a regular basis, to patients seeking timely clinical examination, and to primary care physicians or dermatologists for delivering effective treatments. We propose that mathematical hyperspectral reconstruction from RGB images in a simple imaging setup can provide reliable visualization of hemoglobin content over a large skin area. Without relying on a complicated, expensive, and slow hyperspectral imaging system, we demonstrate the feasibility of determining heterogeneous or multifocal areas of inflammatory hyperemia associated with experimental photocarcinogenesis in mice. We envision that RGB-based reconstructed hyperspectral imaging of subclinical inflammatory hyperemic foci could potentially be integrated with the built-in camera (RGB sensor) of a smartphone to develop a simple imaging device offering affordable monitoring of dermatologic health. PMID:29188120
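The core of such RGB-to-spectrum mapping can be sketched as a linear least-squares regression learned from training pairs. The Gaussian camera sensitivities and random training spectra below are illustrative assumptions, not the authors' calibration data; a useful sanity check is that re-rendering the reconstructed spectra through the camera model reproduces the original RGB values exactly, since the regression projects onto the camera's 3-D response subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_bands = 50, 31

# hypothetical camera: three Gaussian RGB sensitivities across 31 spectral bands
band = np.arange(n_bands)
sens = np.stack([np.exp(-0.5 * ((band - c) / 4.0) ** 2) for c in (22, 15, 8)])

spectra = rng.random((n_train, n_bands))   # training reflectance spectra
rgb = spectra @ sens.T                     # simulated camera responses

# least-squares reconstruction matrix W, chosen so that spectra ~= rgb @ W.T
W = (np.linalg.pinv(rgb) @ spectra).T

def reconstruct(rgb_pixels):
    """Estimate a full spectrum for each RGB pixel via the learned linear map."""
    return np.atleast_2d(rgb_pixels) @ W.T

spectra_hat = reconstruct(rgb)
```

Hemoglobin content would then be estimated from the reconstructed bands around its absorption features; that downstream step is omitted here.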
Fundamental techniques for resolution enhancement of average subsampled images
NASA Astrophysics Data System (ADS)
Shen, Day-Fann; Chiu, Chui-Wen
2012-07-01
Although single-image resolution enhancement, otherwise known as super-resolution, is widely regarded as an ill-posed inverse problem, we re-examine the fundamental relationship between a high-resolution (HR) image acquisition module and its low-resolution (LR) counterpart. Analysis shows that partial HR information is attenuated, but still exists, in the LR version produced by the fundamental averaging-and-subsampling process. As a result, we propose a modified Laplacian filter (MLF) and an intensity correction process (ICP) as the pre- and post-process, respectively, around an interpolation algorithm, to partially restore the attenuated information in a super-resolution (SR) enhanced image. Experiments show that the proposed MLF and ICP provide significant and consistent quality improvements on all 10 test images with three well-known interpolation methods: bilinear, bicubic, and the SR graphical user interface program provided by Ecole Polytechnique Federale de Lausanne. The proposed MLF and ICP are simple to implement and generally applicable to all average-subsampled LR images. MLF and ICP, separately or together, can be integrated into most interpolation methods that attempt to restore the original HR contents. Finally, the idea of MLF and ICP can also be applied to average-subsampled one-dimensional signals.
Time-frequency analysis of functional optical mammographic images
NASA Astrophysics Data System (ADS)
Barbour, Randall L.; Graber, Harry L.; Schmitz, Christoph H.; Tarantini, Frank; Khoury, Georges; Naar, David J.; Panetta, Thomas F.; Lewis, Theophilus; Pei, Yaling
2003-07-01
We have introduced working technology that provides for time-series imaging of the hemoglobin signal in large tissue structures. In this study we have explored our ability to detect aberrant time-frequency responses of breast vasculature for subjects with Stage II breast cancer at rest and in response to simple provocations. The hypothesis being explored is that time-series imaging will be sensitive to the known structural and functional malformations of the tumor vasculature. Mammographic studies were conducted using an adjustable hemispheric measuring head containing 21 source and 21 detector locations (441 source-detector pairs). Simultaneous dual-wavelength studies were performed at 760 and 830 nm at a framing rate of ~2.7 Hz. Optical measures were performed on women lying prone with the breast hanging in a pendant position. Two classes of measures were performed: (1) a 20-minute baseline measure with the subject at rest; and (2) provocation studies in which the subject was asked to perform simple breathing maneuvers. Collected data were analyzed to identify the time-frequency structure and central tendencies of the detector responses and those of the image time series. Imaging data were generated using the Normalized Difference Method (Pei et al., Appl. Opt. 40, 5755-5769, 2001). Results obtained clearly document three classes of anomalies when compared with the normal contralateral breast: (1) breast tumors exhibit an altered oxygen supply/demand balance in response to an oxidative challenge (breath hold); (2) the vasomotor response of the tumor vasculature is mainly depressed and exhibits altered modulation; (3) the affected area of the breast in which the altered vasomotor signature is seen extends well beyond the limits of the tumor itself.
Photonic crystal enhanced fluorescence immunoassay on diatom biosilica.
Squire, Kenneth; Kong, Xianming; LeDuff, Paul; Rorrer, Gregory L; Wang, Alan X
2018-05-16
Fluorescence biosensing is one of the most established biosensing methods, particularly fluorescence spectroscopy and microscopy. These are two highly sensitive techniques but require high-grade electronics and optics to achieve the desired sensitivity. Efforts have been made to implement these methods using consumer-grade electronics and simple optical setups for applications such as point-of-care diagnostics, but the sensitivity inherently suffers. Sensing substrates capable of enhancing fluorescence are thus needed to achieve high sensitivity for such applications. In this paper, we demonstrate a photonic crystal-enhanced fluorescence immunoassay biosensor using diatom biosilica, which consists of silica frustules with sub-100 nm periodic pores. Utilizing the enhanced local optical field, the Purcell effect, and the increased surface area of the diatom photonic crystals, we create ultrasensitive immunoassay biosensors that can significantly enhance fluorescence spectroscopy as well as fluorescence imaging. Using a standard antibody-antigen-labeled-antibody immunoassay protocol, we experimentally achieved 100× and 10× better detection limits with fluorescence spectroscopy and fluorescence imaging, respectively. The limit of detection for mouse IgG goes down to 10(-16) M (14 fg/mL) and 10(-15) M (140 fg/mL) for the two respective detection modalities, virtually sensing a single mouse IgG molecule on each diatom frustule. The effectively enhanced fluorescence imaging, in conjunction with the simple hot-spot counting analysis used in this paper, proves the great potential of diatom fluorescence immunoassays for point-of-care biosensing. Graphical abstract: scanning electron microscope image of a biosilica diatom frustule that enables significant enhancement of fluorescence spectroscopy and fluorescence imaging. This article is protected by copyright. All rights reserved.
Lambron, Julien; Rakotonjanahary, Josué; Loisel, Didier; Frampas, Eric; De Carli, Emilie; Delion, Matthieu; Rialland, Xavier; Toulgoat, Frédérique
2016-02-01
Magnetic resonance (MR) images from children with optic pathway glioma (OPG) are complex. We initiated this study to evaluate the accuracy of MR imaging (MRI) interpretation and to propose a simple and reproducible imaging classification for MRI. We randomly selected 140 MRIs from among 510 MRIs performed on 104 children diagnosed with OPG in France from 1990 to 2004. These images were reviewed independently by three radiologists (F.T., 15 years of experience in neuroradiology; D.L., 25 years of experience in pediatric radiology; and J.L., 3 years of experience in radiology) using a classification derived from the Dodge and modified Dodge classifications. Intra- and interobserver reliabilities were assessed using the Bland-Altman method and the kappa coefficient. These reviews allowed the definition of reliable criteria for MRI interpretation. The reviews showed intraobserver variability and large discrepancies among the three radiologists (kappa coefficient varying from 0.11 to 1). These variabilities were too large for the interpretation to be considered reproducible over time or among observers. A consensual analysis, taking into account all observed variabilities, allowed the development of a definitive interpretation protocol. Using this revised protocol, we observed consistent intra- and interobserver results (kappa coefficient varying from 0.56 to 1). The mean interobserver difference for the solid portion of the tumor with contrast enhancement was 0.8 cm(3) (limits of agreement = -16 to 17). We propose simple and precise rules for improving the accuracy and reliability of MRI interpretation for children with OPG. Further studies will be necessary to investigate the possible prognostic value of this approach.
Macyszyn, Luke; Akbari, Hamed; Pisapia, Jared M; Da, Xiao; Attiah, Mark; Pigrish, Vadim; Bi, Yingtao; Pal, Sharmistha; Davuluri, Ramana V; Roccograndi, Laura; Dahmane, Nadia; Martinez-Lage, Maria; Biros, George; Wolf, Ronald L; Bilello, Michel; O'Rourke, Donald M; Davatzikos, Christos
2016-03-01
MRI characteristics of brain gliomas have been used to predict clinical outcome and molecular tumor characteristics. However, previously reported imaging biomarkers have not been sufficiently accurate or reproducible to enter routine clinical practice and often rely on relatively simple MRI measures. The current study leverages advanced image analysis and machine learning algorithms to identify complex and reproducible imaging patterns predictive of overall survival and molecular subtype in glioblastoma (GB). One hundred five patients with GB were first used to extract approximately 60 diverse features from preoperative multiparametric MRIs. These imaging features were used by a machine learning algorithm to derive imaging predictors of patient survival and molecular subtype. Cross-validation ensured generalizability of these predictors to new patients. Subsequently, the predictors were evaluated in a prospective cohort of 29 new patients. Survival curves yielded a hazard ratio of 10.64 for predicted long versus short survivors. The overall, 3-way (long/medium/short survival) accuracy in the prospective cohort approached 80%. Classification of patients into the 4 molecular subtypes of GB achieved 76% accuracy. By employing machine learning techniques, we were able to demonstrate that imaging patterns are highly predictive of patient survival. Additionally, we found that GB subtypes have distinctive imaging phenotypes. These results reveal that when imaging markers related to infiltration, cell density, microvascularity, and blood-brain barrier compromise are integrated via advanced pattern analysis methods, they form very accurate predictive biomarkers. These predictive markers used solely preoperative images, hence they can significantly augment diagnosis and treatment of GB patients. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Neuro-Oncology. All rights reserved. 
Wagner, Lucas; Schmal, Christoph; Staiger, Dorothee; Danisman, Selahattin
2017-01-01
The analysis of circadian leaf movement rhythms is a simple yet effective method to study the effects of treatments or gene mutations on the circadian clock of plants. Currently, leaf movements are analysed using time-lapse photography and subsequent bioinformatic analysis. Programs used for this purpose either perform only one function (i.e. leaf tip detection or rhythm analysis) or are limited to specific computational environments. We developed a leaf movement analysis tool, PALMA, that works on the command line and combines image extraction with rhythm analysis using fast Fourier transformation and non-linear least-squares fitting. We validated PALMA on both simulated time series and experiments using the known short-period mutant sensitivity to red light reduced 1 (srr1-1). We compared PALMA with two established leaf movement analysis tools and found it to perform equally well. Finally, we tested the effect of reduced-iron conditions on the leaf movement rhythms of wild-type plants. Here, PALMA successfully detected period lengthening under reduced-iron conditions. PALMA correctly estimated the period of both simulated and real-life leaf movement experiments. As a platform-independent console program uniting both functions needed for the analysis of circadian leaf movements, it is a valid alternative to existing leaf movement analysis tools.
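The rhythm-analysis half of such a pipeline can be sketched as an FFT peak pick (PALMA additionally refines the estimate with non-linear least-squares fitting, which is omitted here; the 24-hour synthetic leaf trace is an assumption for the demo):

```python
import numpy as np

def estimate_period(signal, dt):
    """Dominant period (in the units of dt) from the FFT amplitude spectrum."""
    x = np.asarray(signal, float)
    x = x - x.mean()                       # remove DC so the peak is the rhythm
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=dt)
    peak = 1 + np.argmax(spec[1:])         # skip the zero-frequency bin
    return 1.0 / freqs[peak]

# synthetic leaf-tip trace: 24 h rhythm, 30-min sampling over 5 days, mild noise
t = np.arange(0.0, 120.0, 0.5)
trace = np.sin(2 * np.pi * t / 24.0) \
        + 0.1 * np.random.default_rng(0).normal(size=t.size)
```

A period-lengthening phenotype, such as the iron-deficiency effect reported above, would show up directly as a shift of this estimate.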
New simple and low-cost methods for periodic checks of Cyclone® Plus Storage Phosphor System.
Edalucci, Elisabetta; Maffione, Anna Margherita; Fornasier, Maria Rosa; De Denaro, Mario; Scian, Giovanni; Dore, Franca; Rubello, Domenico
2017-01-01
The recent widespread use of the Cyclone® Plus Storage Phosphor System, especially in European countries, as an imaging system for quantifying the radiochemical purity of radiopharmaceuticals has raised the problem of establishing the periodic controls required by European legislation. We describe simple, low-cost methods for Cyclone® Plus quality controls, which can be useful for evaluating the performance of this imaging system.
2016-07-01
Porous silicon nanoparticles are ideally suited as biomedical imaging agents due to their biocompatibility, biodegradability, and simple surface chemistry that is amenable to drug loading and targeting (cf. "Biodegradable luminescent porous silicon nanoparticles for in vivo applications," Nature Materials 8, 331–336, 2009).
NASA Technical Reports Server (NTRS)
Sabol, Donald E., Jr.; Roberts, Dar A.; Adams, John B.; Smith, Milton O.
1993-01-01
An important application of remote sensing is to map and monitor changes over large areas of the land surface. This is particularly significant given the current interest in monitoring vegetation communities. Most traditional methods for mapping different types of plant communities are based upon statistical classification techniques (i.e., parallelepiped, nearest-neighbor, etc.) applied to uncalibrated multispectral data. Classes from these techniques are typically difficult to interpret (particularly for a field ecologist/botanist). Also, classes derived for one image can be very different from those derived from another image of the same area, making interpretation of observed temporal changes nearly impossible. More recently, neural networks have been applied to classification. Neural network classification, based upon spectral matching, is weak in dealing with spectral mixtures (a condition prevalent in images of natural surfaces). Another approach to mapping vegetation communities is based on spectral mixture analysis, which can provide a consistent framework for image interpretation. Roberts et al. (1990) mapped vegetation using the band residuals from a simple mixing model (the same spectral endmembers applied to all image pixels). Sabol et al. (1992b) and Roberts et al. (1992) used different methods to apply the most appropriate spectral endmembers to each image pixel, thereby allowing mapping of vegetation based upon the different endmember spectra. In this paper, we describe a new approach to the classification of vegetation communities based upon the spectral fractions derived from spectral mixture analysis. This approach was applied to three 1992 AVIRIS images of Jasper Ridge, California, to observe seasonal changes in surface composition.
Gabrani-Juma, Hanif; Clarkin, Owen J; Pourmoghaddas, Amir; Driscoll, Brandon; Wells, R Glenn; deKemp, Robert A; Klein, Ran
2017-01-01
Simple and robust techniques are lacking to assess performance of flow quantification using dynamic imaging. We therefore developed a method to qualify flow quantification technologies using a physical compartment exchange phantom and image analysis tool. We validate and demonstrate utility of this method using dynamic PET and SPECT. Dynamic image sequences were acquired on two PET/CT and a cardiac dedicated SPECT (with and without attenuation and scatter corrections) systems. A two-compartment exchange model was fit to image derived time-activity curves to quantify flow rates. Flowmeter measured flow rates (20-300 mL/min) were set prior to imaging and were used as reference truth to which image derived flow rates were compared. Both PET cameras had excellent agreement with truth ( [Formula: see text]). High-end PET had no significant bias (p > 0.05) while lower-end PET had minimal slope bias (wash-in and wash-out slopes were 1.02 and 1.01) but no significant reduction in precision relative to high-end PET (<15% vs. <14% limits of agreement, p > 0.3). SPECT (without scatter and attenuation corrections) slope biases were noted (0.85 and 1.32) and attributed to camera saturation in early time frames. Analysis of wash-out rates from non-saturated, late time frames resulted in excellent agreement with truth ( [Formula: see text], slope = 0.97). Attenuation and scatter corrections did not significantly impact SPECT performance. The proposed phantom, software and quality assurance paradigm can be used to qualify imaging instrumentation and protocols for quantification of kinetic rate parameters using dynamic imaging.
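The rate-estimation step can be sketched with a one-tissue (two-compartment exchange) model, exploiting the fact that the wash-in rate enters linearly once the wash-out rate is fixed. The bolus-shaped input function and the grid-search fit below are illustrative assumptions for the demo, not the phantom analysis software described above:

```python
import numpy as np

def tissue_curve(c_in, k1, k2, dt):
    """One-tissue compartment model: C_t = K1 * conv(C_in, exp(-k2 * t))."""
    t = np.arange(len(c_in)) * dt
    return k1 * np.convolve(c_in, np.exp(-k2 * t))[: len(c_in)] * dt

def fit_rates(c_in, c_tis, dt, k2_grid=np.linspace(0.01, 2.0, 400)):
    """Grid search over k2; for each k2 the best K1 is a closed-form projection."""
    best = (np.inf, 0.0, 0.0)
    for k2 in k2_grid:
        b = tissue_curve(c_in, 1.0, k2, dt)
        k1 = float(b @ c_tis / (b @ b))          # K1 enters the model linearly
        err = float(((k1 * b - c_tis) ** 2).sum())
        if err < best[0]:
            best = (err, k1, k2)
    return best[1], best[2]

# simulated bolus input function and a tissue curve with known rates
dt = 0.1
t = np.arange(200) * dt
c_in = t * np.exp(-t)                            # bolus-shaped input
c_tis = tissue_curve(c_in, 0.5, 0.3, dt)         # "truth": K1 = 0.5, k2 = 0.3
k1_hat, k2_hat = fit_rates(c_in, c_tis, dt)
```

Comparing fitted rates against flowmeter-set truth over a range of flows, as done with the phantom, then quantifies bias and precision of a given camera and protocol.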
Holographic Optical Coherence Imaging of Rat Osteogenic Sarcoma Tumor Spheroids
NASA Astrophysics Data System (ADS)
Yu, Ping; Mustata, Mirela; Peng, Leilei; Turek, John J.; Melloch, Michael R.; French, Paul M. W.; Nolte, David D.
2004-09-01
Holographic optical coherence imaging is a full-frame variant of coherence-domain imaging. An optoelectronic semiconductor holographic film functions as a coherence filter placed before a conventional digital video camera that passes coherent (structure-bearing) light to the camera during holographic readout while preferentially rejecting scattered light. The data are acquired as a succession of en face images at increasing depth inside the sample in a fly-through acquisition. The samples of living tissue were rat osteogenic sarcoma multicellular tumor spheroids that were grown from a single osteoblast cell line in a bioreactor. Tumor spheroids are nearly spherical and have radial symmetry, presenting a simple geometry for analysis. The tumors investigated ranged in diameter from several hundred micrometers to over 1 mm. Holographic features from the tumors were observed in reflection to depths of 500-600 µm with a total tissue path length of approximately 14 mean free paths. The volumetric data from the tumor spheroids reveal heterogeneous structure, presumably caused by necrosis and microcalcifications characteristic of some human avascular tumors.
Gaussian vs. Bessel light-sheets: performance analysis in live large sample imaging
NASA Astrophysics Data System (ADS)
Reidt, Sascha L.; Correia, Ricardo B. C.; Donnachie, Mark; Weijer, Cornelis J.; MacDonald, Michael P.
2017-08-01
Lightsheet fluorescence microscopy (LSFM) has rapidly progressed in the past decade from an emerging technology into an established methodology. This progress has largely been driven by its suitability to developmental biology, where it is able to give excellent spatio-temporal resolution over relatively large fields of view with good contrast and low phototoxicity. In many respects it is superseding confocal microscopy. However, it is no magic bullet and still struggles to image deeply in more highly scattering samples. Many solutions to this challenge have been presented, including Airy and Bessel illumination, 2-photon operation, and deconvolution techniques. In this work, we show a comparison between a simple but effective Gaussian beam illumination and Bessel illumination for imaging in chicken embryos. Whilst Bessel illumination is shown to be of benefit when a greater depth of field is required, it is not possible to see any benefits for imaging into the highly scattering tissue of the chick embryo.
Forensic hash for multimedia information
NASA Astrophysics Data System (ADS)
Lu, Wenjun; Varna, Avinash L.; Wu, Min
2010-01-01
Digital multimedia such as images and videos are prevalent on today's internet and cause significant social impact, which can be evidenced by the proliferation of social networking sites with user generated contents. Due to the ease of generating and modifying images and videos, it is critical to establish trustworthiness for online multimedia information. In this paper, we propose novel approaches to perform multimedia forensics using compact side information to reconstruct the processing history of a document. We refer to this as FASHION, standing for Forensic hASH for informatION assurance. Based on the Radon transform and scale space theory, the proposed forensic hash is compact and can effectively estimate the parameters of geometric transforms and detect local tampering that an image may have undergone. Forensic hash is designed to answer a broader range of questions regarding the processing history of multimedia data than the simple binary decision from traditional robust image hashing, and also offers more efficient and accurate forensic analysis than multimedia forensic techniques that do not use any side information.
A simple autocorrelation algorithm for determining grain size from digital images of sediment
Rubin, D.M.
2004-01-01
Autocorrelation between pixels in digital images of sediment can be used to measure average grain size of sediment on the bed, grain-size distribution of bed sediment, and vertical profiles in grain size in a cross-sectional image through a bed. The technique is less sensitive than traditional laboratory analyses to tails of a grain-size distribution, but it offers substantial other advantages: it is 100 times as fast; it is ideal for sampling surficial sediment (the part that interacts with a flow); it can determine vertical profiles in grain size on a scale finer than can be sampled physically; and it can be used in the field to provide almost real-time grain-size analysis. The technique can be applied to digital images obtained using any source with sufficient resolution, including digital cameras, digital video, or underwater digital microscopes (for real-time grain-size mapping of the bed). © 2004, SEPM (Society for Sedimentary Geology).
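The core idea, that coarser sediment decorrelates over longer pixel lags, can be sketched in a few lines of NumPy. This is an illustrative toy, not Rubin's calibrated algorithm: the 0.5 threshold, the lag range, and the synthetic "grains" are all invented for the demo.

```python
import numpy as np

def autocorr_x(img, max_lag=20):
    """Normalized autocorrelation along x via FFT (Wiener-Khinchin theorem)."""
    img = img.astype(float)
    img -= img.mean()
    f = np.fft.fft2(img)
    acf = np.fft.ifft2(f * np.conj(f)).real
    acf /= acf[0, 0]                 # zero-lag correlation normalized to 1
    return acf[0, :max_lag + 1]      # sample lags 0..max_lag along the x axis

def grain_size_estimate(img, threshold=0.5, max_lag=20):
    """Lag (in pixels) at which the autocorrelation first drops below the
    threshold: a crude proxy for mean grain diameter."""
    r = autocorr_x(img, max_lag)
    below = np.where(r < threshold)[0]
    return int(below[0]) if below.size else max_lag

# synthetic "sediment": coarse blocky texture decorrelates more slowly than noise
rng = np.random.default_rng(0)
fine = rng.random((64, 64))
coarse = np.kron(rng.random((16, 16)), np.ones((4, 4)))  # 4x4-pixel "grains"
print(grain_size_estimate(fine), grain_size_estimate(coarse))
```

The coarse texture stays correlated across neighboring pixels inside each 4x4 block, so its estimate is larger than that of the pixel-scale noise.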
Ensemble methods with simple features for document zone classification
NASA Astrophysics Data System (ADS)
Obafemi-Ajayi, Tayo; Agam, Gady; Xie, Bingqing
2012-01-01
Document layout analysis is of fundamental importance for document image understanding and information retrieval. It requires the identification of blocks extracted from a document image via features extraction and block classification. In this paper, we focus on the classification of the extracted blocks into five classes: text (machine printed), handwriting, graphics, images, and noise. We propose a new set of features for efficient classifications of these blocks. We present a comparative evaluation of three ensemble based classification algorithms (boosting, bagging, and combined model trees) in addition to other known learning algorithms. Experimental results are demonstrated for a set of 36503 zones extracted from 416 document images which were randomly selected from the tobacco legacy document collection. The results obtained verify the robustness and effectiveness of the proposed set of features in comparison to the commonly used Ocropus recognition features. When used in conjunction with the Ocropus feature set, we further improve the performance of the block classification system to obtain a classification accuracy of 99.21%.
[Application of Fourier transform profilometry in 3D-surface reconstruction].
Shi, Bi'er; Lu, Kuan; Wang, Yingting; Li, Zhen'an; Bai, Jing
2011-08-01
With the improvement of system frameworks and reconstruction methods in fluorescence molecular tomography (FMT), the FMT technique has been widely used as an important experimental tool in biomedical research. It is necessary to obtain the 3D-surface profile of the experimental object as a boundary constraint for FMT reconstruction algorithms. We propose a new 3D-surface reconstruction method based on Fourier transform profilometry (FTP) under blue-purple light illumination. The slice images were reconstructed using appropriate image processing methods, frequency spectrum analysis and filtering. Experimental results showed that the method properly reconstructs the 3D surface of objects with mm-level accuracy. Compared to other methods, this one is simple and fast. Besides reconstructing surfaces well, the proposed method can help monitor the behavior of the object during the experiment to ensure correspondence with the imaging process. Furthermore, the method uses blue-purple light as its source to avoid interference with fluorescence imaging.
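The FTP principle the abstract relies on, isolating the carrier lobe of a fringe image in the Fourier domain and reading the surface-induced phase from it, can be sketched as follows. This is a minimal illustration on a synthetic, periodic fringe pattern; the rectangular filter, carrier frequency and phase amplitude are chosen for the demo and are not from the paper.

```python
import numpy as np

def ftp_phase(fringe, f0):
    """Recover the phase map from a single fringe image by Fourier transform
    profilometry: FFT along x, band-pass the +f0 carrier lobe, inverse FFT,
    then remove the carrier phase."""
    _, nx = fringe.shape
    F = np.fft.fft(fringe, axis=1)
    freqs = np.fft.fftfreq(nx)
    mask = (freqs > f0 / 2) & (freqs < 3 * f0 / 2)   # crude window around +f0
    analytic = np.fft.ifft(F * mask, axis=1)
    carrier = np.exp(2j * np.pi * f0 * np.arange(nx))
    return np.angle(analytic * np.conj(carrier))

# synthetic fringes deformed by a smooth "surface" phase (periodic for a clean FFT)
nx, ny, f0 = 256, 64, 0.125
x = np.arange(nx)
phi_true = 0.8 * np.sin(2 * np.pi * x / nx)          # surface-induced phase
fringe = np.tile(1.0 + 0.5 * np.cos(2 * np.pi * f0 * x + phi_true), (ny, 1))
phi_est = ftp_phase(fringe, f0)
print(np.max(np.abs(phi_est - phi_true)))            # reconstruction error is tiny
```

Because the synthetic pattern is exactly periodic, the carrier lobe separates cleanly and the recovered phase matches the true one; real fringe images need windowing and careful filter design.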
NASA Astrophysics Data System (ADS)
Jiao, Quanjun; Zhang, Xiao; Sun, Qi
2018-03-01
The availability of dense time series of Landsat images provides a great opportunity to reconstruct forest disturbance and change history with high temporal resolution, medium spatial resolution and long coverage. This work applies a forest change detection method in Hainan Jianfengling Forest Park using yearly Landsat time-series images. A simple detection method based on the dense time series of Landsat NDVI images is used to reconstruct forest change history (afforestation and deforestation). The mapping result showed a large decrease in the extent of closed forest from the 1980s to the 1990s. From the beginning of the 21st century, we found an increase in forest area with the implementation of forestry measures such as the prohibition of cutting and hillside closure in our study area. Our findings provide an effective approach for quickly detecting forest changes in tropical primary forest, especially afforestation and deforestation, and a comprehensive analysis tool for forest resource protection.
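The kind of yearly NDVI thresholding the abstract describes can be illustrated with a toy per-pixel classifier. The NDVI series and the 0.6 closed-forest threshold below are invented for the sketch, not taken from the study.

```python
import numpy as np

# Hypothetical yearly mean NDVI series for three pixels (values are made up):
# stable forest, deforestation mid-series, afforestation mid-series.
years = np.arange(1988, 2000)
ndvi = np.array([
    [0.82, 0.80, 0.81, 0.83, 0.80, 0.82, 0.81, 0.80, 0.82, 0.81, 0.83, 0.82],
    [0.80, 0.81, 0.79, 0.80, 0.35, 0.30, 0.28, 0.31, 0.29, 0.30, 0.32, 0.30],
    [0.25, 0.28, 0.30, 0.33, 0.45, 0.55, 0.62, 0.70, 0.74, 0.76, 0.78, 0.79],
])

FOREST_NDVI = 0.6   # assumed closed-forest threshold (illustrative)

def classify_change(series, years, thresh=FOREST_NDVI):
    """Label a pixel from a yearly NDVI series: compare the forest/non-forest
    state at the start and end of the record and report the transition year."""
    state = series >= thresh
    if state[0] and not state[-1]:
        return ("deforestation", int(years[np.argmax(~state)]))
    if not state[0] and state[-1]:
        return ("afforestation", int(years[np.argmax(state)]))
    return ("stable", None)

for s in ndvi:
    print(classify_change(s, years))
```

Real change-detection methods additionally smooth the series and reject single-year noise spikes; this sketch only shows the start/end-state comparison.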
NASA Astrophysics Data System (ADS)
Soltanian-Zadeh, Hamid; Windham, Joe P.
1992-04-01
Maximizing the minimum absolute contrast-to-noise ratios (CNRs) between a desired feature and multiple interfering processes, by linear combination of images in a magnetic resonance imaging (MRI) scene sequence, is attractive for MRI analysis and interpretation. A general formulation of the problem is presented, along with a novel solution utilizing the simple and numerically stable method of Gram-Schmidt orthogonalization. We derive explicit solutions first for the case of two interfering features, then for three interfering features, and finally, illustrated with a typical example, for an arbitrary number of interfering features. For the case of two interfering features, we also provide simplified analytical expressions for the signal-to-noise ratios (SNRs) and CNRs of the filtered images. The technique is demonstrated through its application to simulated and acquired MRI scene sequences of a human brain with a cerebral infarction. For these applications, a 50 to 100% improvement in the smallest absolute CNR is obtained.
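The Gram-Schmidt step at the heart of this approach can be sketched in NumPy: orthonormalize the interfering-feature signatures, then remove their components from the desired-feature signature to obtain weights that null the interferers in the combined image. This is a generic illustration of interferer suppression, not the paper's full CNR-maximizing filter, and the tissue signatures are made up.

```python
import numpy as np

def suppression_filter(desired, interferers):
    """Weight vector for a linear combination of N images that nulls the given
    interfering-feature signatures, via Gram-Schmidt orthogonalization."""
    w = desired.astype(float).copy()
    basis = []
    for s in interferers:
        v = s.astype(float).copy()
        for b in basis:              # orthonormalize the interferer signatures
            v -= (v @ b) * b
        v /= np.linalg.norm(v)
        basis.append(v)
    for b in basis:                  # remove interferer components from desired
        w -= (w @ b) * b
    return w

# signatures: intensity of each tissue across a 3-image MRI sequence (made up)
desired     = np.array([1.0, 0.2, 0.6])   # e.g. lesion
interferer1 = np.array([0.9, 0.8, 0.1])   # e.g. white matter
interferer2 = np.array([0.3, 0.9, 0.7])   # e.g. CSF
w = suppression_filter(desired, [interferer1, interferer2])
print(w @ interferer1, w @ interferer2, w @ desired)  # interferers nulled
```

In the combined image `sum_k w[k] * image_k`, both interfering tissues then have zero mean signal while the desired feature retains a positive response.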
Contour fractal analysis of grains
NASA Astrophysics Data System (ADS)
Guida, Giulia; Casini, Francesca; Viggiani, Giulia MB
2017-06-01
Fractal analysis has been shown to be useful in image processing to characterise the shape and the grey-scale complexity in different applications spanning from electronic to medical engineering (e.g. [1]). Fractal analysis consists of several methods to assign a dimension and other fractal characteristics to a dataset describing geometric objects. Limited studies have been conducted on the application of fractal analysis to the classification of the shape characteristics of soil grains. The main objective of the work described in this paper is to obtain, from the results of systematic fractal analysis of artificial simple shapes, the characterization of the particle morphology at different scales. The long term objective of the research is to link the microscopic features of granular media with the mechanical behaviour observed in the laboratory and in situ.
Murata, Takahiro; Horiuchi, Tetsuyoshi; Rahmah, Nunung Nur; Sakai, Keiichi; Hongo, Kazuhiro
2011-01-01
Direct surgery remains important for the treatment of superficial cerebral arteriovenous malformation (AVM). Surgical planning on the basis of careful analysis from various neuroimaging modalities can aid in resection of superficial AVM with favorable outcome. Three-dimensional (3D) magnetic resonance (MR) imaging reconstructed from time-of-flight (TOF) MR angiography was developed as an adjunctive tool for surgical planning of superficial AVM. 3-T TOF MR imaging without contrast medium was performed preoperatively in patients with superficial AVM. The images were imported into OsiriX imaging software and the 3D reconstructed MR image was produced using the volume rendering method. This 3D MR image could clearly visualize the surface angioarchitecture of the AVM with the surrounding brain on a single image, and clarified feeding arteries including draining veins and the relationship with sulci or fissures surrounding the nidus. 3D MR image of the whole AVM angioarchitecture was also displayed by skeletonization of the surrounding brain. Preoperative 3D MR image corresponded to the intraoperative view. Feeders on the brain surface were easily confirmed and obliterated during surgery, with the aid of the 3D MR images. 3D MR imaging for surgical planning of superficial AVM is simple and noninvasive to perform, enhances intraoperative orientation, and is helpful for successful resection.
Demonstration of a single-wavelength spectral-imaging-based Thai jasmine rice identification
NASA Astrophysics Data System (ADS)
Suwansukho, Kajpanya; Sumriddetchkajorn, Sarun; Buranasiri, Prathan
2011-07-01
A single-wavelength spectral-imaging-based Thai jasmine rice breed identification is demonstrated. Our nondestructive identification approach relies on a combination of fluorescence imaging and simple image processing techniques. In particular, we apply simple image thresholding, blob filtering, and image subtraction to either a 545 nm or a 575 nm image in order to distinguish our desired Thai jasmine rice breed from others. Other key advantages include no waste product and fast identification time. In our demonstration, UVC light is used for excitation, a liquid crystal tunable optical filter serves as the wavelength selector, and a digital camera with 640 × 480 active pixels captures the desired spectral image. Eight Thai rice breeds of similar size and shape are tested. Our experimental proof of concept shows that by suitably applying image thresholding, blob filtering, and image subtraction to the selected fluorescence image, the Thai jasmine rice breed can be identified with measured false acceptance rates of <22.9% and <25.7% for spectral images at 545 and 575 nm wavelengths, respectively. The measured identification time of 25 ms shows high potential for real-time applications.
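The thresholding and blob-filtering stages of such a pipeline can be sketched in NumPy: binarize the fluorescence image, label connected components by flood fill, and discard blobs below a minimum area. This is a generic illustration, not the authors' processing chain; the image and parameters are synthetic.

```python
import numpy as np

def blobs(binary):
    """4-connected components of a binary image via simple flood fill."""
    labels = np.zeros(binary.shape, int)
    current = 0
    for seed in zip(*np.nonzero(binary)):
        if labels[seed]:
            continue
        current += 1
        stack = [seed]
        while stack:
            y, x = stack.pop()
            if not (0 <= y < binary.shape[0] and 0 <= x < binary.shape[1]):
                continue
            if binary[y, x] and not labels[y, x]:
                labels[y, x] = current
                stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    return labels, current

def keep_large_blobs(img, thresh, min_area):
    """Threshold a fluorescence image, then drop blobs smaller than min_area."""
    binary = img > thresh
    labels, n = blobs(binary)
    out = np.zeros_like(binary)
    for k in range(1, n + 1):
        if (labels == k).sum() >= min_area:
            out |= labels == k
    return out

img = np.zeros((20, 20))
img[2:8, 2:8] = 1.0        # a grain-sized blob
img[15, 15] = 1.0          # single-pixel noise
mask = keep_large_blobs(img, 0.5, min_area=5)
print(mask.sum())           # 36: the 6x6 blob survives, the speck is removed
```

Subtracting two such masks (one per wavelength) would then implement the image-subtraction step the abstract mentions.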
Fuzzy-based propagation of prior knowledge to improve large-scale image analysis pipelines
Mikut, Ralf
2017-01-01
Many automatically analyzable scientific questions are well-posed and a variety of information about expected outcomes is available a priori. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept that increases the result quality awareness of image analysis operators by estimating and distributing the degree of uncertainty involved in their output based on prior knowledge. This allows the use of simple processing operators that are suitable for analyzing large-scale spatiotemporal (3D+t) microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. These concepts are illustrated on a typical bioimage analysis pipeline comprised of seed point detection, segmentation, multiview fusion and tracking. The functionality of the proposed approach is further validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploit prior knowledge to improve the result quality of image analysis pipelines. The generality of the concept makes it applicable to practically any field with processing strategies that are arranged as linear pipelines. The automated analysis of terabyte-scale microscopy data will especially benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. 
PMID:29095927
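A common way to encode such prior knowledge is a fuzzy membership function per criterion, combined with a t-norm into a single confidence value per detection. The sketch below is a generic illustration of that idea, not the authors' pipeline; the nucleus radius and intensity ranges, and the detections themselves, are invented.

```python
import numpy as np

def trapezoid_membership(x, a, b, c, d):
    """Fuzzy membership: 0 below a, ramps up to 1 on [b, c], back to 0 above d."""
    x = np.asarray(x, float)
    up = np.clip((x - a) / max(b - a, 1e-12), 0, 1)
    down = np.clip((d - x) / max(d - c, 1e-12), 0, 1)
    return np.minimum(up, down)

# hypothetical prior knowledge: valid nuclei have radius ~3-6 um, intensity 80-200
detections = [
    {"radius": 4.5, "intensity": 150},   # plausible
    {"radius": 1.0, "intensity": 150},   # too small
    {"radius": 5.0, "intensity": 40},    # too dim
]
for det in detections:
    mu_r = trapezoid_membership(det["radius"], 2, 3, 6, 8)
    mu_i = trapezoid_membership(det["intensity"], 60, 80, 200, 255)
    det["confidence"] = float(np.minimum(mu_r, mu_i))   # fuzzy AND (min t-norm)
print([d["confidence"] for d in detections])
```

Downstream operators can then weight or reject detections by this confidence instead of a hard accept/reject decision, which is the uncertainty propagation the abstract describes.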
Yield of computed tomography of the cervical spine in cases of simple assault.
Uriell, Matthew L; Allen, Jason W; Lovasik, Brendan P; Benayoun, Marc D; Spandorfer, Robert M; Holder, Chad A
2017-01-01
Computed tomography (CT) of the cervical spine (C-spine) is routinely ordered for low-impact, non-penetrating or "simple" assault at our institution and others. Common clinical decision tools for C-spine imaging in the setting of trauma include the National Emergency X-Radiography Utilization Study (NEXUS) and the Canadian Cervical Spine Rule for Radiography (CCR). While NEXUS and CCR have served to decrease the amount of unnecessary imaging of the C-spine, overutilization of CT is still of concern. A retrospective, cross-sectional study was performed of the electronic medical record (EMR) database at an urban, Level I Trauma Center over a 6-month period for patients receiving a C-spine CT. The primary outcome of interest was prevalence of cervical spine fracture. Secondary outcomes of interest included appropriateness of C-spine imaging after retrospective application of NEXUS and CCR. The hypothesis was that fracture rates within this patient population would be extremely low. No C-spine fractures were identified in the 460 patients who met inclusion criteria. Approximately 29% of patients did not warrant imaging by CCR, and 25% by NEXUS. Of note, approximately 44% of patients were indeterminate for whether imaging was warranted by CCR, with the most common reason being lack of assessment for active neck rotation. Cervical spine CT is overutilized in the setting of simple assault, despite established clinical decision rules. With no fractures identified regardless of other factors, the likelihood that a CT of the cervical spine will identify clinically significant findings in the setting of "simple" assault is extremely low, approaching zero. At minimum, adherence to CCR and NEXUS within this patient population would serve to reduce both imaging costs and population radiation dose exposure. Copyright © 2016 Elsevier Ltd. All rights reserved.
libprofit: Image creation from luminosity profiles
NASA Astrophysics Data System (ADS)
Robotham, A. S. G.; Taranu, D.; Tobar, R.
2016-12-01
libprofit is a C++ library for image creation based on different luminosity profiles. It offers fast and accurate two-dimensional integration for a useful number of profiles, including Sersic, Core-Sersic, broken-exponential, Ferrer, Moffat, empirical King, point-source and sky, with a simple mechanism for adding new profiles. libprofit provides a utility to read the model and profile parameters from the command-line and generate the corresponding image. It can output the resulting image as text values, a binary stream, or as a simple FITS file. It also provides a shared library exposing an API that can be used by any third-party application. R and Python interfaces are available: ProFit (ascl:1612.004) and PyProfit (ascl:1612.005).
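A crude NumPy sketch of the kind of image libprofit renders, a circular Sersic profile sampled at pixel centers, can look like this. Unlike libprofit it does no careful sub-pixel integration, handles only the circular case, and the parameter values are arbitrary.

```python
import numpy as np

def sersic_image(shape, xc, yc, total_flux, re, n):
    """Render a circular Sersic profile I(r) = Ie * exp(-b_n ((r/re)^(1/n) - 1))
    on a pixel grid, normalized to the requested total flux. b_n uses the
    common Ciotti & Bertin asymptotic approximation."""
    bn = 2 * n - 1 / 3 + 4 / (405 * n)
    y, x = np.indices(shape)
    r = np.hypot(x - xc, y - yc)
    img = np.exp(-bn * ((r / re) ** (1 / n) - 1))
    return total_flux * img / img.sum()

img = sersic_image((101, 101), 50, 50, total_flux=1000.0, re=8.0, n=4.0)
print(img[50, 50], img.sum())   # bright central pixel; flux sums to 1000
```

Pixel-center sampling badly underestimates the steep core of high-n profiles, which is exactly why libprofit performs accurate two-dimensional integration per pixel.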
Blackledge, Matthew D; Collins, David J; Koh, Dow-Mu; Leach, Martin O
2016-02-01
We present pyOsiriX, a plugin built for the already popular DICOM viewer OsiriX that provides users the ability to extend the functionality of OsiriX through simple Python scripts. This approach allows users to integrate the many cutting-edge scientific/image-processing libraries created for Python into a powerful DICOM visualisation package that is intuitive to use and already familiar to many clinical researchers. Using pyOsiriX we hope to bridge the apparent gap between basic imaging scientists and clinical practice in a research setting and thus accelerate the development of advanced clinical image processing. We provide arguments for the use of Python as a robust scripting language for incorporation into larger software solutions, outline the structure of pyOsiriX and how it may be used to extend the functionality of OsiriX, and we provide three case studies that exemplify its utility. For our first case study we use pyOsiriX to provide a tool for smooth histogram display of voxel values within a user-defined region of interest (ROI) in OsiriX. We used a kernel density estimation (KDE) method available in Python using the scikit-learn library, where the total number of lines of Python code required to generate this tool was 22. Our second example presents a scheme for segmentation of the skeleton from CT datasets. We have demonstrated that good segmentation can be achieved for two example CT studies by using a combination of Python libraries including scikit-learn, scikit-image, SimpleITK and matplotlib. Furthermore, this segmentation method was incorporated into an automatic analysis of quantitative PET-CT in a patient with bone metastases from primary prostate cancer. This enabled repeatable statistical evaluation of PET uptake values for each lesion, before and after treatment, providing estimates of the maximum and median standardised uptake values (SUVmax and SUVmed, respectively).
Following treatment we observed a reduction in lesion volume, SUVmax and SUVmed for all lesions, in agreement with a reduction in concurrent measures of serum prostate-specific antigen (PSA). Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
International survey on the management of skin stigmata and suspected tethered cord.
Ponger, Penina; Ben-Sira, Liat; Beni-Adani, Liana; Steinbok, Paul; Constantini, Shlomi
2010-12-01
We designed a survey to investigate current international management trends of neonates with lumbar midline skin stigmata suspicious of tethered cord, among pediatric neurosurgeons, focusing on the lower risk stigmata: simple dimples, deviated gluteal folds, and discolorations. Our findings will enable physicians to assess their current diagnostic routine and aid in clarifying management controversies. A questionnaire on the proposed diagnostic evaluation of seven case reports, each accompanied by relevant imaging, was distributed by e-mail to members of the International Society for Pediatric Neurosurgery, the European Society for Pediatric Neurosurgery, and via the PEDS server list between March and August 2008. Sixty-two questionnaires, completed by experienced professionals with a rather uniform distribution of experience levels, were analyzed. Forty-eight percent do not recommend any imaging of simple dimples, 30% recommend US screening and 22% recommend MR. Seventy-nine percent recommend imaging of deviated gluteal fold with 30% recommending MR. Ninety-two percent recommend imaging infants with hemangiomas with 74% recommending MR. MR for sinus tracts is recommended by 90% if sacral and by 98% if lumbar. Eighty-four percent recommend MR for filar cyst. Our survey demonstrates that management of low-risk skin stigmata, simple dimple, deviated gluteal fold, and discolorations lacks consensus. In addition, a significant sector of the professional community proposes a work-up of simple dimples, sacral tracts, and filar cysts that contradicts established recommendations. A simple classification system is needed to attain a better approach, enabling correct diagnosis of tethered cord without exposing neonates to unnecessary examinations.
A simple and non-contact optical imaging probe for evaluation of corneal diseases
NASA Astrophysics Data System (ADS)
Hong, Xun Jie Jeesmond; Shinoj, V. K.; Murukeshan, V. M.; Baskaran, M.; Aung, T.
2015-09-01
Non-contact imaging techniques are preferred in ophthalmology. Corneal disease is one of the leading causes of blindness worldwide, and a possible way of detection is by analyzing the shape and optical quality of the cornea. Here, a simple, cost-effective, non-contact optical probe system is proposed and illustrated. The probe possesses high spatial resolution and does not depend on a coupling medium, features that make the investigation clinician- and patient-friendly. These parameters are crucial when considering an imaging system for the objective diagnosis and management of corneal diseases. Imaging of the cornea is performed on ex vivo porcine samples and subsequently on small laboratory animals in vivo. The clinical significance of the proposed study is validated by imaging New Zealand white rabbit corneas infected with Pseudomonas.
Wavelet denoising of multiframe optical coherence tomography data
Mayer, Markus A.; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.
2012-01-01
We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising on the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain at about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise. PMID:22435103
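The scheme can be illustrated with a single-level 2-D Haar transform in NumPy: decompose each frame, down-weight detail coefficients that are inconsistent across frames (speckle), then average and reconstruct. This is a simplified stand-in for the paper's method, which uses deeper wavelet decompositions and a different local noise/structure estimator; the weighting rule here is an assumption for the demo.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform: approximation plus 3 detail subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return (a+b+c+d)/4, (a+b-c-d)/4, (a-b+c-d)/4, (a-b-c+d)/4

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    out = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL + LH - HL - HH
    out[1::2, 0::2] = LL - LH + HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

def multiframe_denoise(frames, eps=1e-12):
    """Decompose each frame, down-weight detail coefficients with low
    cross-frame consistency, then average and reconstruct."""
    bands = [np.stack(s) for s in zip(*(haar2(f) for f in frames))]
    LL = bands[0].mean(axis=0)
    details = []
    for band in bands[1:]:
        mu, sd = band.mean(axis=0), band.std(axis=0)
        weight = np.abs(mu) / (np.abs(mu) + sd + eps)   # structure vs noise
        details.append(weight * mu)
    return ihaar2(LL, *details)

# synthetic multiframe test: a step edge under heavy additive noise
rng = np.random.default_rng(1)
truth = np.zeros((16, 16)); truth[:, 8:] = 1.0
frames = [truth + 0.3 * rng.normal(size=truth.shape) for _ in range(8)]
den = multiframe_denoise(frames)
print(np.abs(frames[0] - truth).mean(), np.abs(den - truth).mean())
```

The denoised frame is closer to the ground truth than any single input frame, while the consistent edge detail survives the weighting.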
EduGATE - basic examples for educative purpose using the GATE simulation platform.
Pietrzyk, Uwe; Zakhnini, Abdelhamid; Axer, Markus; Sauerzapf, Sophie; Benoit, Didier; Gaens, Michaela
2013-02-01
EduGATE is a collection of basic examples to introduce students to the fundamental physical aspects of medical imaging devices. It is based on the GATE platform, which has received a wide acceptance in the field of simulating medical imaging devices including SPECT, PET, CT and also applications in radiation therapy. GATE can be configured by commands, which are, for the sake of simplicity, listed in a collection of one or more macro files to set up phantoms, multiple types of sources, detection device, and acquisition parameters. The aim of the EduGATE is to use all these helpful features of GATE to provide insights into the physics of medical imaging by means of a collection of very basic and simple GATE macros in connection with analysis programs based on ROOT, a framework for data processing. A graphical user interface to define a configuration is also included. Copyright © 2012. Published by Elsevier GmbH.
THz-wave parametric sources and imaging applications
NASA Astrophysics Data System (ADS)
Kawase, Kodo
2004-12-01
We have studied the generation of terahertz (THz) waves by optical parametric processes based on laser light scattering from the polariton mode of nonlinear crystals. Using parametric oscillation of a MgO-doped LiNbO3 crystal pumped by a nanosecond Q-switched Nd:YAG laser, we have realized widely tunable coherent THz-wave sources with a simple configuration. We have also developed a novel basic technology for THz imaging, which allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Furthermore, we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
NASA Astrophysics Data System (ADS)
Sheshkus, Alexander; Limonova, Elena; Nikolaev, Dmitry; Krivtsov, Valeriy
2017-03-01
In this paper, we propose an expansion of convolutional neural network (CNN) input features based on the Hough transform. We perform morphological contrasting of the source image followed by the Hough transform, and then use the result as input for some of the convolutional filters. Thus, the CNN's computational complexity and the number of units are not affected; morphological contrasting and the Hough transform are the only additional computational expenses of the proposed input feature expansion. The approach is demonstrated on a CNN with a very simple structure. We considered two image recognition problems: object classification on CIFAR-10 and printed character recognition on a private dataset of symbols taken from Russian passports. Our approach achieved a noticeable accuracy improvement at little computational cost, which can be extremely important in industrial recognition systems or in difficult problems utilising CNNs, such as pressure ridge analysis and classification.
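The kind of feature map fed to the extra filters can be sketched with a dense line Hough transform in NumPy: each foreground pixel votes for every (theta, rho) line passing through it. This is a generic illustration (the morphological contrasting step is omitted), not the authors' exact implementation; resolutions are arbitrary.

```python
import numpy as np

def hough_lines(binary, n_theta=64, n_rho=64):
    """Line Hough transform of a binary edge image. Lines are parameterized as
    rho = x*cos(theta) + y*sin(theta); returns the vote accumulator."""
    h, w = binary.shape
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    diag = np.hypot(h, w)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_theta, n_rho))
    ys, xs = np.nonzero(binary)
    for t, theta in enumerate(thetas):
        rho = xs * np.cos(theta) + ys * np.sin(theta)
        idx = np.digitize(rho, rhos) - 1
        np.add.at(acc, (t, idx), 1)       # accumulate votes per (theta, rho) bin
    return acc, thetas, rhos

# a vertical line maps to a single strong peak in the accumulator
img = np.zeros((32, 32), bool)
img[:, 16] = True
acc, thetas, rhos = hough_lines(img)
t, r = np.unravel_index(acc.argmax(), acc.shape)
print(thetas[t], rhos[r])   # theta near 0 (normal along x), rho near 16
```

Stacking such accumulators as extra input channels gives the network direct access to global line structure that small convolutional kernels cannot see.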
Nakamura, Takeshi; Aoki, Kazuhiro; Matsuda, Michiyuki
2008-08-01
Genetically encoded probes based on Förster resonance energy transfer (FRET) enable us to decipher spatiotemporal information encoded in complex tissues such as the brain. Firstly, this review focuses on FRET probes wherein both the donor and acceptor are fluorescence proteins and are incorporated into a single molecule, i.e. unimolecular probes. Advantages of these probes lie in their easy loading into cells, the simple acquisition of FRET images, and the clear evaluation of data. Next, we introduce our recent study which encompasses FRET imaging and in silico simulation. In nerve growth factor-induced neurite outgrowth in PC12 cells, we found positive and negative signaling feedback loops. We propose that these feedback loops determine neurite-budding sites. We would like to emphasize that it is now time to accelerate crossover research in neuroscience, optics, and computational biology.
Blonski, Wojciech C; Campbell, Mical S; Faust, Thomas; Metz, David C
2006-01-01
Simple liver cysts are congenital with a prevalence of 2.5%-4.25%. Imaging, whether by US, CT or MRI, is accurate in distinguishing simple cysts from other etiologies, including parasitic, neoplastic, duct-related, and traumatic cysts. Symptomatic simple liver cysts are rare, and the true frequency of symptoms is not known. Symptomatic simple liver cysts are predominantly large (> 4 cm), right-sided, and more common in women and older patients. The vast majority of simple hepatic cysts require no treatment or follow-up, though large cysts (> 4 cm) may be followed initially with serial imaging to ensure stability. Attribution of symptoms to a large simple cyst should be undertaken with caution, after alternative diagnoses have been excluded. Aspiration may be performed to test whether symptoms are due to the cyst; however, cyst recurrence should be expected. Limited experience with both laparoscopic deroofing and aspiration, followed by instillation of a sclerosing agent has demonstrated promising results for the treatment of symptomatic cysts. Here, we describe a patient with a large, symptomatic, simple liver cyst who experienced complete resolution of symptoms following cyst drainage and alcohol ablation, and we present a comprehensive review of the literature. PMID:16718826
A single scan skeletonization algorithm: application to medical imaging of trabecular bone
NASA Astrophysics Data System (ADS)
Arlicot, Aurore; Amouriq, Yves; Evenou, Pierre; Normand, Nicolas; Guédon, Jean-Pierre
2010-03-01
Shape description is an important step in image analysis. The skeleton is used as a simple, compact representation of a shape. A skeleton represents the line centered in the shape and must be homotopic and one point wide. Current skeletonization algorithms compute the skeleton over several image scans, using either thinning algorithms or distance transforms. The principle of thinning is to delete points progressively while preserving the topology of the shape. On the other hand, the maxima of the local distance transform identify the skeleton and are an equivalent way to calculate the medial axis. However, with this method the skeleton obtained is disconnected, so all the points of the medial axis must be connected to produce the skeleton. In this study we introduce a translated distance transform and adapt an existing distance-driven homotopic algorithm to perform skeletonization in a single scan, thus allowing the processing of unbounded images. This method is applied, in our study, to micro-scanner images of trabecular bone. We wish to characterize the bone microarchitecture in order to quantify bone integrity.
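The distance-transform route the abstract contrasts against can be sketched in NumPy: compute the distance to the background, then take its local maxima as the (possibly disconnected) medial axis. The distance transform here is brute force, suitable only for tiny demo images; the paper's contribution is precisely a single-scan algorithm that avoids this cost.

```python
import numpy as np

def distance_transform(mask):
    """Brute-force Euclidean distance from each foreground pixel to the nearest
    background pixel (demo only; real codes use two-pass or single-scan methods)."""
    bg = np.argwhere(~mask)
    dist = np.zeros(mask.shape)
    for y, x in np.argwhere(mask):
        dist[y, x] = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1)).min()
    return dist

def medial_axis_points(mask):
    """Local maxima of the distance transform: the disconnected medial axis
    that a full skeletonization must then reconnect."""
    d = distance_transform(mask)
    pts = []
    for y, x in np.argwhere(mask):
        nb = d[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        if d[y, x] == nb.max():
            pts.append((y, x))
    return pts

mask = np.zeros((9, 15), bool)
mask[3:6, 2:13] = True          # a 3-pixel-wide horizontal bar
pts = medial_axis_points(mask)
print(pts)                       # points lie on the bar's center row
```

For the bar, all medial-axis points fall on the center row, illustrating both the centeredness and the need for a subsequent connection step near the ends.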
Harrison, Thomas C; Sigler, Albrecht; Murphy, Timothy H
2009-09-15
We describe a simple and low-cost system for intrinsic optical signal (IOS) imaging using stable LED light sources, basic microscopes, and commonly available CCD cameras. IOS imaging measures activity-dependent changes in the light reflectance of brain tissue, and can be performed with a minimum of specialized equipment. Our system uses LED ring lights that can be mounted on standard microscope objectives or video lenses to provide a homogeneous and stable light source, with less than 0.003% fluctuation across images averaged from 40 trials. We describe the equipment and surgical techniques necessary for both acute and chronic mouse preparations, and provide software that can create maps of sensory representations from images captured by inexpensive 8-bit cameras or by 12-bit cameras. The IOS imaging system can be adapted to commercial upright microscopes or custom macroscopes, eliminating the need for dedicated equipment or complex optical paths. This method can be combined with parallel high resolution imaging techniques such as two-photon microscopy.
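The core computation of such a system, a trial-averaged fractional reflectance change (dR/R) map between baseline and stimulus windows, can be sketched in NumPy. The session below is synthetic; the frame counts, trial count and the 0.1% response magnitude are illustrative, not the authors' acquisition parameters.

```python
import numpy as np

def ios_response_map(trials, baseline_frames, response_frames):
    """Intrinsic optical signal map: fractional reflectance change dR/R,
    averaged over trials, between a stimulus window and a pre-stimulus baseline.
    trials has shape (n_trials, n_frames, height, width)."""
    avg = trials.mean(axis=0)                     # trial-averaged movie
    R0 = avg[baseline_frames].mean(axis=0)        # baseline reflectance
    R1 = avg[response_frames].mean(axis=0)        # stimulus-window reflectance
    return (R1 - R0) / R0

# synthetic session: 40 trials, 10 frames; activity darkens a patch by 0.1%
rng = np.random.default_rng(0)
trials = 1000.0 + rng.normal(0, 1.0, size=(40, 10, 32, 32))
trials[:, 5:, 10:20, 10:20] -= 1.0                # -0.1% reflectance change
dmap = ios_response_map(trials, slice(0, 5), slice(5, 10))
print(dmap[15, 15], dmap[0, 0])                   # active pixel near -1e-3
```

Averaging 40 trials pulls the per-pixel noise well below the 0.1% signal, which is why IOS maps are typically built from tens of stimulus repetitions.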
A hybrid color mapping approach to fusing MODIS and Landsat images for forward prediction
USDA-ARS?s Scientific Manuscript database
We present a new, simple, and efficient approach to fusing MODIS and Landsat images. It is well known that MODIS images have high temporal resolution and low spatial resolution whereas Landsat images are just the opposite. Similar to earlier approaches, our goal is to fuse MODIS and Landsat images t...
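On the (assumed) reading that the color mapping is a learned linear band transform from MODIS to Landsat reflectances, the idea can be sketched with a least-squares fit on a co-registered training date, then applied to a later MODIS image. All names, band counts and data below are illustrative, not from the manuscript.

```python
import numpy as np

def fit_color_map(modis, landsat):
    """Least-squares affine matrix M such that landsat ≈ [modis, 1] @ M,
    fitted over all co-registered pixels."""
    X = modis.reshape(-1, modis.shape[-1])
    X = np.hstack([X, np.ones((X.shape[0], 1))])      # affine offset term
    Y = landsat.reshape(-1, landsat.shape[-1])
    M, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return M

def apply_color_map(modis, M, out_bands):
    """Predict a Landsat-like image from a MODIS image using the fitted map."""
    X = modis.reshape(-1, modis.shape[-1])
    X = np.hstack([X, np.ones((X.shape[0], 1))])
    return (X @ M).reshape(modis.shape[:-1] + (out_bands,))

# synthetic check: recover a hidden affine relation exactly
rng = np.random.default_rng(0)
modis_t0 = rng.random((16, 16, 3))
true_M = rng.random((4, 4))                           # hidden affine relation
pad = np.concatenate([modis_t0, np.ones((16, 16, 1))], axis=-1)
landsat_t0 = pad @ true_M
M = fit_color_map(modis_t0, landsat_t0)
pred = apply_color_map(modis_t0, M, 4)
print(np.abs(pred - landsat_t0).max())                # near machine precision
```

Real fusion must additionally handle the resolution gap and surface change between dates; this sketch shows only the band-mapping step.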
Application of Remote Sensing for the Analysis of Environmental Changes in Albania
NASA Astrophysics Data System (ADS)
Frasheri, N.; Beqiraj, G.; Bushati, S.; Frasheri, A.
2016-08-01
This paper reviews remote sensing studies of environmental changes in Albania. Using often simple methodologies, general-purpose image processing software, and free Internet archives of satellite imagery, significant results were obtained for hot spots of environmental change. Such areas include sea coasts experiencing marine transgression, temporal variations of vegetation and aerosols, lakes, landslides, and regional tectonics. The Internet archives of the European Space Agency (ESA) and the US Geological Survey (USGS) were used.
Horizon Brightness Revisited: Measurements and a Model of Clear-Sky Radiances
1994-07-20
Clear daytime skies persistently display a subtle local maximum of radiance near the astronomical horizon. Spectroradiometry and digital image analysis confirm this maximum’s reality, and they show that its angular width and elevation vary with solar elevation, azimuth relative to the Sun, and aerosol optical depth. Many existing models of atmospheric scattering do not generate this near-horizon radiance maximum, but a simple second-order scattering model does, and it reproduces many of the maximum’s details.
Flow-induced Flutter of Heart Valves: Experiments with Canonical Models
NASA Astrophysics Data System (ADS)
Dou, Zhongwang; Seo, Jung-Hee; Mittal, Rajat
2017-11-01
For the better understanding of hemodynamics associated with valvular function in health and disease, the flow-induced flutter of heart valve leaflets is studied using benchtop experiments with canonical valve models. A simple experimental model with flexible leaflets is constructed, and a pulsatile pump drives the flow through the leaflets. We quantify the leaflet dynamics using digital image analysis and characterize the dynamics of the flow around the leaflets using particle image velocimetry. Experiments are conducted over a wide range of flow and leaflet parameters, and the data are curated for use as a benchmark for validation of computational fluid-structure interaction models. The authors acknowledge support from NSF Grants IIS-1344772 and CBET-1511200 and NSF XSEDE Grant TG-CTS100002.
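The displacement estimate at the heart of particle image velocimetry is a cross-correlation peak search between successive interrogation windows. A pure-numpy sketch under the assumption of an integer-pixel shift (function name and synthetic frames are ours):

```python
import numpy as np

def displacement(frame_a, frame_b):
    """Estimate the integer-pixel shift between two interrogation
    windows via FFT-based cross-correlation (the core of PIV)."""
    fa = frame_a - frame_a.mean()
    fb = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(fa).conj() * np.fft.fft2(fb)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the window to negative values.
    shift = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return tuple(shift)

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, (3, -2), axis=(0, 1))   # particles moved 3 px down, 2 px left
```

Real PIV adds sub-pixel peak interpolation and repeats this over a grid of interrogation windows to build a velocity field.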
Zeller, Perrine; Ploux, Olivier; Méjean, Annick
2016-03-01
Cyanobacteria contain pigments, which generate auto-fluorescence that interferes with fluorescence in situ hybridization (FISH) imaging of cyanobacteria. We describe simple chemical treatments using CuSO4 or H2O2 that significantly reduce the auto-fluorescence of Microcystis strains. These protocols were successfully applied in FISH experiments using 16S rRNA specific probes and filamentous cyanobacteria. Copyright © 2016 Elsevier B.V. All rights reserved.
Improving cerebellar segmentation with statistical fusion
NASA Astrophysics Data System (ADS)
Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.
2016-03-01
The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved in motor coordination and with increasingly recognized roles in cognition and planning. Recent work in multiatlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severely abnormal cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. Under the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.
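As background for the fusion techniques compared here, the simplest statistical-fusion baseline is a voxel-wise majority vote across registered atlas segmentations; SIMPLE and Non-Local SIMPLE refine this with performance weighting. A numpy sketch of the baseline only, not the authors' algorithm:

```python
import numpy as np

def majority_vote(atlas_labels):
    """Fuse registered atlas segmentations voxel-wise by majority vote,
    the simplest statistical-fusion baseline (cf. SIMPLE, STAPLE)."""
    stack = np.stack(atlas_labels)             # (n_atlases, *image_shape)
    n_labels = stack.max() + 1
    votes = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)

# Three tiny 2x2 "segmentations" that mostly agree:
a1 = np.array([[0, 1], [1, 2]])
a2 = np.array([[0, 1], [2, 2]])
a3 = np.array([[0, 2], [1, 2]])
fused = majority_vote([a1, a2, a3])
```

Performance-weighted methods replace the equal votes with per-atlas (or, in Non-Local SIMPLE, per-patch) reliability estimates.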
Spectral imaging perspective on cytomics.
Levenson, Richard M
2006-07-01
Cytomics involves the analysis of cellular morphology and molecular phenotypes, with reference to tissue architecture and to additional metadata. To this end, a variety of imaging and nonimaging technologies need to be integrated. Spectral imaging is proposed as a tool that can simplify and enrich the extraction of morphological and molecular information. Simple-to-use instrumentation is available that mounts on standard microscopes and can generate spectral image datasets with excellent spatial and spectral resolution; these can be exploited by sophisticated analysis tools. This report focuses on brightfield microscopy-based approaches. Cytological and histological samples were stained using nonspecific standard stains (Giemsa; hematoxylin and eosin (H&E)) or immunohistochemical (IHC) techniques employing three chromogens plus a hematoxylin counterstain. The samples were imaged using the Nuance system, a commercially available, liquid-crystal tunable-filter-based multispectral imaging platform. The resulting data sets were analyzed using spectral unmixing algorithms and/or learn-by-example classification tools. Spectral unmixing of Giemsa-stained guinea-pig blood films readily classified the major blood elements. Machine-learning classifiers were also successful at the same task, as well as in distinguishing normal from malignant regions in a colon-cancer example, and in delineating regions of inflammation in an H&E-stained kidney sample. In an example of a multiplexed IHC sample, brown, red, and blue chromogens were isolated into separate images without crosstalk or interference from the (also blue) hematoxylin counterstain. Cytomics requires both accurate architectural segmentation and multiplexed molecular imaging to associate molecular phenotypes with relevant cellular and tissue compartments. Multispectral imaging can assist in both these tasks, and conveys new utility to brightfield-based microscopy approaches.
Copyright 2006 International Society for Analytical Cytology.
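Spectral unmixing of the kind described above is commonly modeled as a linear mixture: each pixel spectrum is a combination of reference chromogen spectra (for brightfield data this is applied in optical-density space via the Beer-Lambert law). A least-squares sketch with made-up spectra, not the Nuance system's actual algorithm:

```python
import numpy as np

# Reference spectra (columns): one per chromogen, sampled at 4 wavelengths.
# The values are invented for illustration.
S = np.array([[0.9, 0.1],
              [0.7, 0.2],
              [0.2, 0.8],
              [0.1, 0.9]])

def unmix(pixel_spectrum, endmembers):
    """Least-squares abundance estimate for one pixel: solve S @ a = p."""
    a, *_ = np.linalg.lstsq(endmembers, pixel_spectrum, rcond=None)
    return a

# A noise-free pixel that is 30% stain 1 and 70% stain 2.
p = S @ np.array([0.3, 0.7])
abund = unmix(p, S)
```

Applying `unmix` at every pixel yields one abundance image per chromogen, which is how the brown, red, and blue channels are separated in the multiplexed IHC example.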
The Extraction of Terrace in the Loess Plateau Based on radial method
NASA Astrophysics Data System (ADS)
Liu, W.; Li, F.
2016-12-01
The terraces of the Loess Plateau are a typical artificial landform and an important soil and water conservation measure; locating and automatically extracting them would simplify land use investigation. Existing extraction methods fall into visual interpretation and automatic extraction. The manual method is used in land use investigation, but it is time-consuming and laborious. Several automatic methods have been proposed: the Fourier transform method can recognize terraces and find their accurate position in the frequency-domain image, but it is strongly affected by linear objects oriented in the same direction as the terraces; texture analysis is simple and widely applicable in image processing, but it cannot recognize terrace edges; object-oriented classification is a newer approach, but when applied to terrace extraction it produces fragmented polygons whose geological meaning is difficult to interpret. To locate terraces, we use high-resolution remote sensing imagery to extract and analyze the gray values of the pixels that radial lines pass through. The recognition process is as follows: first, roughly locate the peak points using DEM analysis or manual selection; second, cast radial lines in all directions from each peak point; finally, extract the gray values of the pixels along each radial and analyze their variation to decide whether a terrace is present. To obtain accurate terrace positions, terrace discontinuity, extension direction, ridge width, the image processing algorithm, illumination of the remote sensing image, and other influencing factors were fully considered when designing the algorithms.
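The radial sampling step described above can be sketched as nearest-pixel sampling along rays cast from a peak point; the function below is our illustration, not the authors' implementation:

```python
import numpy as np

def radial_profile(image, center, angle_deg, n_samples=50):
    """Sample gray values along a ray from `center` at `angle_deg`,
    using nearest-pixel lookup; stops at the image border."""
    h, w = image.shape
    theta = np.deg2rad(angle_deg)
    vals = []
    for r in range(n_samples):
        y = int(round(center[0] + r * np.sin(theta)))
        x = int(round(center[1] + r * np.cos(theta)))
        if not (0 <= y < h and 0 <= x < w):
            break
        vals.append(image[y, x])
    return np.array(vals)

# Synthetic "terraces": alternating bright/dark rings around a peak.
yy, xx = np.mgrid[0:21, 0:21]
rings = ((np.hypot(yy - 10, xx - 10) // 3) % 2).astype(int) * 255
prof = radial_profile(rings, (10, 10), 0.0)
```

A periodic bright/dark alternation in `prof`, repeated across many ray directions, is the signature the method looks for when deciding whether a terrace surrounds the peak point.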
A knowledge-based machine vision system for space station automation
NASA Technical Reports Server (NTRS)
Chipman, Laure J.; Ranganath, H. S.
1989-01-01
A simple knowledge-based approach to the recognition of objects in man-made scenes is being developed. Specifically, the system under development is a proposed enhancement to a robot arm for use in the space station laboratory module. The system will take a request from a user to find a specific object, and locate that object by using its camera input and information from a knowledge base describing the scene layout and attributes of the object types included in the scene. In order to use realistic test images in developing the system, researchers are using photographs of actual NASA simulator panels, which provide similar types of scenes to those expected in the space station environment. Figure 1 shows one of these photographs. In traditional approaches to image analysis, the image is transformed step by step into a symbolic representation of the scene. Often the first steps of the transformation are done without any reference to knowledge of the scene or objects. Segmentation of an image into regions generally produces a counterintuitive result in which regions do not correspond to objects in the image. After segmentation, a merging procedure attempts to group regions into meaningful units that will more nearly correspond to objects. Here, researchers avoid segmenting the image as a whole, and instead use a knowledge-directed approach to locate objects in the scene. The knowledge-based approach to scene analysis is described and the categories of knowledge used in the system are discussed.
A simple water-immersion condenser for imaging living brain slices on an inverted microscope.
Prusky, G T
1997-09-05
Due to some physical limitations of conventional condensers, inverted compound microscopes are not optimally suited for imaging living brain slices with transmitted light. Herein is described a simple device that converts an inverted microscope into an effective tool for this application by utilizing an objective as a condenser. The device is mounted on a microscope in place of the condenser, is threaded to accept a water immersion objective, and has a slot for a differential interference contrast (DIC) slider. When combined with infrared video techniques, this device allows an inverted microscope to effectively image living cells within thick brain slices in an open perfusion chamber.
Using Spatial Correlations of SPDC Sources for Increasing the Signal to Noise Ratio in Images
NASA Astrophysics Data System (ADS)
Ruíz, A. I.; Caudillo, R.; Velázquez, V. M.; Barrios, E.
2017-05-01
We experimentally show that, by using spatial correlations of photon pairs produced by spontaneous parametric down-conversion, it is possible to increase the signal-to-noise ratio in images of objects illuminated with those photons; in comparison, objects illuminated with laser light exhibit a lower ratio. Our simple experimental setup produced an average signal-to-noise improvement of 11 dB for parametric down-converted light over laser light. This simple method can be easily implemented to obtain high-contrast images of faint objects and to transmit information with low noise.
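For reference, the reported 11 dB figure follows the standard decibel definition of signal-to-noise ratio; the powers below are illustrative numbers, not the experiment's measurements:

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio expressed in decibels."""
    return 10.0 * math.log10(signal_power / noise_power)

def improvement_db(snr_a_db, snr_b_db):
    """Improvement of method A over method B, in dB."""
    return snr_a_db - snr_b_db

# Example: correlated-photon image vs. laser image of the same object.
correlated = snr_db(1000.0, 2.0)    # about 27 dB
laser = snr_db(1000.0, 25.0)        # about 16 dB
gain = improvement_db(correlated, laser)
```

Because dB is logarithmic, an 11 dB gain corresponds to roughly a twelvefold reduction in relative noise power.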
A step-by-step solution for embedding user-controlled cines into educational Web pages.
Cornfeld, Daniel
2008-03-01
The objective of this article is to introduce a simple method for embedding user-controlled cines into a Web page using a simple JavaScript. Step-by-step instructions are included and the source code is made available. This technique allows the creation of portable Web pages that allow the user to scroll through cases as if seated at a PACS workstation. A simple JavaScript allows scrollable image stacks to be included on Web pages. With this technique, you can quickly and easily incorporate entire stacks of CT or MR images into online teaching files. This technique has the potential for use in case presentations, online didactics, teaching archives, and resident testing.
Computational Ghost Imaging for Remote Sensing
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.
2012-01-01
This work relates to the generic problem of remote active imaging; that is, a source illuminates a target of interest and a receiver collects the scattered light off the target to obtain an image. Conventional imaging systems consist of an imaging lens and a high-resolution detector array [e.g., a CCD (charge coupled device) array] to register the image. However, conventional imaging systems for remote sensing require high-quality optics and need to support large detector arrays and associated electronics. This results in suboptimal size, weight, and power consumption. Computational ghost imaging (CGI) is a computational alternative to this traditional imaging concept that has a very simple receiver structure. In CGI, the transmitter illuminates the target with a modulated light source. A single-pixel (bucket) detector collects the scattered light. Then, via computation (i.e., postprocessing), the receiver can reconstruct the image using the knowledge of the modulation that was projected onto the target by the transmitter. This way, one can construct a very simple receiver that, in principle, requires no lens to image a target. Ghost imaging is a transverse imaging modality that has been receiving much attention owing to a rich interconnection of novel physical characteristics and novel signal processing algorithms suitable for active computational imaging. The original ghost imaging experiments consisted of two correlated optical beams traversing distinct paths and impinging on two spatially separated photodetectors: one beam interacts with the target and then illuminates a single-pixel (bucket) detector that provides no spatial resolution, whereas the other beam traverses an independent path and impinges on a high-resolution camera without any interaction with the target. The term ghost imaging was coined soon after the initial experiments were reported, to emphasize the fact that by cross-correlating two photocurrents, one generates an image of the target.
In CGI, the measurement obtained from the reference arm (with the high-resolution detector) is replaced by a computational derivation of the measurement-plane intensity profile of the reference-arm beam. The algorithms applied to computational ghost imaging have diversified beyond simple correlation measurements, and now include modern reconstruction algorithms based on compressive sensing.
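The correlation reconstruction described above can be sketched in a few lines: simulate known random patterns, record bucket-detector totals, and correlate the mean-subtracted bucket signal with each pattern. The object, pattern statistics, and pattern count here are our choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, shape = 4000, (8, 8)

# Hidden object: a transmissive "T" on an opaque background.
obj = np.zeros(shape)
obj[1, 2:6] = 1.0
obj[2:6, 3] = 1.0

# Known random illumination patterns and bucket-detector readings.
patterns = rng.random((n, *shape))
bucket = np.array([(p * obj).sum() for p in patterns])

# Correlation reconstruction: <(B - <B>) * I(x, y)> over all patterns.
recon = np.tensordot(bucket - bucket.mean(), patterns, axes=1) / n
```

Pixels inside the object correlate with the bucket signal and stand out in `recon`; compressive-sensing reconstructions replace this correlation with a sparsity-regularized inverse problem and need far fewer patterns.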
NASA Astrophysics Data System (ADS)
Win, Khin Yadanar; Choomchuay, Somsak; Hamamoto, Kazuhiko
2017-06-01
The automated segmentation of cell nuclei is an essential stage in the quantitative image analysis of cell nuclei extracted from smear cytology images of pleural fluid. Cell nuclei can indicate cancer, as their characteristics are associated with cell proliferation and malignancy in terms of size, shape, and stain color. Nevertheless, automatic nuclei segmentation has remained challenging due to artifacts caused by slide preparation and by nuclei heterogeneity, such as poor contrast, inconsistent stain color, cell variation, and cell overlap. In this paper, we propose a watershed-based method capable of segmenting the nuclei of a variety of cells from cytology pleural fluid smear images. First, the original image is preprocessed by converting it to grayscale and enhancing it through intensity adjustment and histogram equalization. Next, the cell nuclei are segmented into a binary image using Otsu thresholding. Undesirable artifacts are then eliminated using morphological operations. Finally, a distance-transform-based watershed method is applied to separate touching and overlapping cell nuclei. The proposed method is tested on 25 Papanicolaou (Pap) stained pleural fluid images and achieves an accuracy of 92%. The method is relatively simple, and the results are very promising.
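The Otsu thresholding step of the pipeline picks the gray level that maximizes the between-class variance of the histogram. A pure-numpy sketch of that one step (our implementation, not the authors' code):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold that maximizes between-class
    variance of the gray-level histogram (here, nuclei vs. background)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2               # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Two well-separated populations: dark background, bright nuclei.
img = np.concatenate([np.full(500, 40), np.full(100, 200)])
t = otsu_threshold(img)
```

In the full pipeline this threshold produces the binary nuclei mask that the morphological cleanup and distance-transform watershed then refine.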
Noise-gating to Clean Astrophysical Image Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeForest, C. E.
2017-04-01
I present a family of algorithms to reduce noise in astrophysical images and image sequences, preserving more information from the original data than is retained by conventional techniques. The family uses locally adaptive filters (“noise gates”) in the Fourier domain to separate coherent image structure from background noise based on the statistics of local neighborhoods in the image. Processing of solar data limited by simple shot noise or by additive noise reveals image structure not easily visible in the originals, preserves photometry of observable features, and reduces shot noise by a factor of 10 or more with little to no apparent loss of resolution. This reveals faint features that were either not directly discernible or not sufficiently strongly detected for quantitative analysis. The method works best on image sequences containing related subjects, for example movies of solar evolution, but is also applicable to single images provided that there are enough pixels. The adaptive filter uses the statistical properties of noise and of local neighborhoods in the data to discriminate between coherent features and incoherent noise without reference to the specific shape or evolution of those features. The technique can potentially be modified in a straightforward way to exploit additional a priori knowledge about the functional form of the noise.
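The gating idea can be sketched as a Fourier-magnitude threshold. DeForest's filters are locally adaptive and noise-model aware; the version below gates the whole frame at once with a single factor k, both simplifications of ours:

```python
import numpy as np

def noise_gate(image, noise_sigma, k=3.0):
    """Simplified global noise gate: zero Fourier coefficients whose
    magnitude is below k times the expected white-noise floor.
    The published method applies this per local neighborhood."""
    F = np.fft.fft2(image)
    n = image.size
    # Typical magnitude of a Fourier coefficient of pure white noise.
    floor = noise_sigma * np.sqrt(n)
    gated = np.where(np.abs(F) > k * floor, F, 0.0)
    return np.fft.ifft2(gated).real

rng = np.random.default_rng(2)
clean = 10 * np.outer(np.sin(2 * np.pi * np.arange(64) / 64), np.ones(64))
noisy = clean + rng.normal(0, 1.0, clean.shape)
denoised = noise_gate(noisy, noise_sigma=1.0)
```

Coherent structure concentrates its power in a few large Fourier coefficients that survive the gate, while incoherent noise spreads thinly across all of them and is mostly removed.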
NASA Astrophysics Data System (ADS)
Zhu, Bangshang; Yuan, Falei; Yuan, Xiaoya; Bo, Yang; Wang, Yongting; Yang, Guo-Yuan; Drummen, Gregor P. C.; Zhu, Xinyuan
2014-02-01
Micro-computed tomography (micro-CT) is a powerful tool for visualizing the vascular systems of tissues, organs, or entire small animals. Vascular contrast agents play a vital role in micro-CT imaging in order to obtain clear and high-quality images. In this study, a new kind of nanostructured barium phosphate was fabricated and used as a contrast agent for ex vivo micro-CT imaging of blood vessels in the mouse brain. Nanostructured barium phosphate was synthesized through a simple wet precipitation method using Ba(NO3)2 and (NH4)2HPO4 as starting materials. The physiochemical properties of barium phosphate were characterized by scanning electron microscopy, transmission electron microscopy, X-ray diffraction, Fourier transform infrared spectroscopy, and thermal analysis. Furthermore, the impact of the produced nanostructures on cell viability was evaluated via the MTT assay, which generally showed low to moderate cytotoxicity. Finally, the animal test images demonstrated that the use of nanostructured barium phosphate as a contrast agent in micro-CT imaging produced sharp images with excellent contrast. Both major vessels and the microvasculature were clearly observable in the imaged mouse brain. Overall, the results indicate that nanostructured barium phosphate is a promising and useful vascular contrast agent for micro-CT imaging.
Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie
2015-01-01
Reliable assessment of islet viability, mass, and purity must be met prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of the individual methods showed good correlations between mean values of IEQ number (r(2) = 0.91) and total islet number (r(2) = 0.88), which improved to r(2) = 0.93 when islet surface area was estimated together with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method than with the ADIA method (p < 0.001). The ADIA method also detected small islets between 10 and 50 µm in size. Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation.
Implementation of this technology to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.
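For context, the islet equivalent (IEQ) normalizes each islet's volume to that of a standard 150 µm diameter islet; clinical counting typically bins diameters into size classes per Ricordi, but the underlying arithmetic is the cubic ratio sketched here (our illustration):

```python
def ieq(diameters_um):
    """Islet equivalents: each islet's volume normalized to that of a
    standard 150-um-diameter islet (IEQ_i = (d_i / 150) ** 3)."""
    return sum((d / 150.0) ** 3 for d in diameters_um)

# Three islets: one standard, one half, and one double the standard diameter.
total_ieq = ieq([150, 75, 300])
```

The cubic dependence is why a single 300 µm islet contributes 8 IEQ while a 75 µm islet contributes only 0.125, and why automated sizing of each islet matters for dosing.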
NASA Astrophysics Data System (ADS)
Bokhart, Mark T.; Nazari, Milad; Garrard, Kenneth P.; Muddiman, David C.
2018-01-01
A major update to the mass spectrometry imaging (MSI) software MSiReader is presented, offering a multitude of newly added features critical to MSI analyses. MSiReader is a free, open-source, and vendor-neutral software written in the MATLAB platform and is capable of analyzing most common MSI data formats. A standalone version of the software, which does not require a MATLAB license, is also distributed. The newly incorporated data analysis features expand the utility of MSiReader beyond simple visualization of molecular distributions. The MSiQuantification tool allows researchers to calculate absolute concentrations from quantification MSI experiments exclusively through MSiReader software, significantly reducing data analysis time. An image overlay feature allows complementary imaging modalities to be displayed with the MSI data. A polarity filter has also been incorporated into the data loading step, allowing the facile analysis of polarity-switching experiments without the need for data parsing prior to loading the data file into MSiReader. A quality assurance feature to generate a mass measurement accuracy (MMA) heatmap for an analyte of interest has also been added to allow investigation of MMA across the imaging experiment. Most importantly, as new features have been added, performance has not degraded; in fact, it has been dramatically improved. These new tools and the performance improvements in MSiReader v1.0 enable the MSI community to evaluate their data in greater depth and in less time.
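The MMA heatmap described above assigns each pixel the standard parts-per-million mass error for the analyte of interest. The formula is standard; the function name and m/z values below are ours, not MSiReader's API:

```python
def mma_ppm(measured_mz, theoretical_mz):
    """Mass measurement accuracy in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# One MMA value per pixel gives the heatmap; here a 1-D strip of pixels.
theoretical = 500.26000
measured = [500.26050, 500.25980, 500.26000]
heat = [mma_ppm(m, theoretical) for m in measured]
```

Plotting these values over the image grid reveals spatial drift in mass calibration across the acquisition.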
Sunspots and Their Simple Harmonic Motion
ERIC Educational Resources Information Center
Ribeiro, C. I.
2013-01-01
In this paper an example of a simple harmonic motion, the apparent motion of sunspots due to the Sun's rotation, is described, which can be used to teach this subject to high-school students. Using real images of the Sun, students can calculate the star's rotation period with the simple harmonic motion mathematical expression.
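The projection described above, x(t) = R sin(2*pi*t/T), lets students recover the rotation period from measured spot positions. A sketch using two measurements near disk centre (function name and numbers are ours):

```python
import math

R = 1.0                      # apparent solar radius (normalized)

def period_from_two_days(x1, x2, dt_days=1.0):
    """Estimate the rotation period from a sunspot's apparent x-position
    (x = R * sin(2*pi*t/T)) measured on two consecutive days."""
    lam1 = math.asin(x1 / R)         # longitude on day 1
    lam2 = math.asin(x2 / R)         # longitude on day 2
    return 2 * math.pi * dt_days / (lam2 - lam1)

# Synthetic spot on a Sun rotating once every 25 days:
T_true = 25.0
x_day0 = R * math.sin(2 * math.pi * 0 / T_true)
x_day1 = R * math.sin(2 * math.pi * 1 / T_true)
T_est = period_from_two_days(x_day0, x_day1)
```

With real solar images, students would measure the spot's x-position in pixels, normalize by the disk radius, and average over several day pairs to reduce measurement error.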