Sample records for image analysis allowed

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Amanda M.; Daly, Don S.; Willse, Alan R.

    The Automated Microarray Image Analysis (AMIA) Toolbox for MATLAB is a flexible, open-source microarray image analysis tool that allows the user to customize analysis of sets of microarray images. This tool provides several methods of identifying and quantifying spot statistics, as well as extensive diagnostic statistics and images for identifying poor data quality or processing. The open nature of this software allows researchers to understand the algorithms used to produce intensity estimates and to modify them easily if desired.

  2. Dual wavelength imaging allows analysis of membrane fusion of influenza virus inside cells.

    PubMed

    Sakai, Tatsuya; Ohuchi, Masanobu; Imai, Masaki; Mizuno, Takafumi; Kawasaki, Kazunori; Kuroda, Kazumichi; Yamashina, Shohei

    2006-02-01

    Influenza virus hemagglutinin (HA) is a determinant of virus infectivity. It is therefore important to determine whether the HA of a new influenza virus, which can potentially cause pandemics, is functional against human cells. The novel imaging technique reported here allows rapid analysis of HA function by visualizing viral fusion inside cells. The technique detects fusion as a change in the spectrum of the fluorescence-labeled virus. Using it, we detected fusion between a virus and a very small endosome that could not be detected previously, indicating that the imaging allows highly sensitive detection of viral fusion.

  3. Techniques in helical scanning, dynamic imaging and image segmentation for improved quantitative analysis with X-ray micro-CT

    NASA Astrophysics Data System (ADS)

    Sheppard, Adrian; Latham, Shane; Middleton, Jill; Kingston, Andrew; Myers, Glenn; Varslot, Trond; Fogden, Andrew; Sawkins, Tim; Cruikshank, Ron; Saadatfar, Mohammad; Francois, Nicolas; Arns, Christoph; Senden, Tim

    2014-04-01

    This paper reports on recent advances at the micro-computed tomography facility at the Australian National University. Since 2000 this facility has been a significant centre for developments in imaging hardware and associated software for image reconstruction, image analysis and image-based modelling. In 2010 a new instrument was constructed that utilises theoretically-exact image reconstruction based on helical scanning trajectories, allowing higher cone angles and thus better utilisation of the available X-ray flux. We discuss the technical hurdles that needed to be overcome to allow imaging with cone angles in excess of 60°. We also present dynamic tomography algorithms that enable the changes between one moment and the next to be reconstructed from a sparse set of projections, allowing higher speed imaging of time-varying samples. Researchers at the facility have also created a sizeable distributed-memory image analysis toolkit with capabilities ranging from tomographic image reconstruction to 3D shape characterisation. We show results from image registration and present some of the new imaging and experimental techniques that it enables. Finally, we discuss the crucial question of image segmentation and evaluate some recently proposed techniques for automated segmentation.

  4. The Spectral Image Processing System (SIPS): Software for integrated analysis of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.

    1992-01-01

    The Spectral Image Processing System (SIPS) is a software package developed by the Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, in response to a perceived need to provide integrated tools for analysis of imaging spectrometer data both spectrally and spatially. SIPS was specifically designed to deal with data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the High Resolution Imaging Spectrometer (HIRIS), but was tested with other datasets including the Geophysical and Environmental Research Imaging Spectrometer (GERIS), GEOSCAN images, and Landsat TM. SIPS was developed using the 'Interactive Data Language' (IDL). It takes advantage of high speed disk access and fast processors running under the UNIX operating system to provide rapid analysis of entire imaging spectrometer datasets. SIPS allows analysis of single or multiple imaging spectrometer data segments at full spatial and spectral resolution. It also allows visualization and interactive analysis of image cubes derived from quantitative analysis procedures such as absorption band characterization and spectral unmixing. SIPS consists of three modules: SIPS Utilities, SIPS_View, and SIPS Analysis. SIPS version 1.1 is described below.

  5. Blood pulsation measurement using cameras operating in visible light: limitations.

    PubMed

    Koprowski, Robert

    2016-10-03

    The paper presents an automatic method for the analysis and processing of images from a camera operating in visible light. The analysis applies to images containing the human facial area (body) and enables measurement of the blood pulse rate. Special attention was paid to the limitations of this measurement method, taking into account the possibility of using consumer cameras in real conditions (different types of lighting, different camera resolutions, camera movement). The proposed new method of image analysis and processing comprises three stages: (1) image pre-processing, providing image filtration and stabilization (object location tracking); (2) main image processing, providing segmentation of human skin areas and acquisition of brightness changes; (3) signal analysis: filtration, FFT (Fast Fourier Transform) analysis, and pulse calculation. The presented algorithm and method for measuring the pulse rate have the following advantages: (1) the measurement is non-contact and non-invasive; (2) it can be carried out using almost any camera, including webcams; (3) it can track the object in the scene, which allows the heart rate to be measured while the patient is moving; (4) for a minimum of 40,000 pixels, it provides a measurement error of less than ±2 beats per minute for p < 0.01 and sunlight, or a slightly larger error (±3 beats per minute) for artificial lighting; (5) analysis of a single image takes about 40 ms in Matlab Version 7.11.0.584 (R2010b) with Image Processing Toolbox Version 7.1 (R2010b).
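    Stage (3) of this pipeline can be sketched as follows, assuming the mean brightness of the segmented skin region has already been extracted per frame (stages 1 and 2); the heart-rate band limits and the synthetic test signal are illustrative choices, not the paper's parameters:

```python
import numpy as np

def estimate_pulse_bpm(brightness, fps):
    """Estimate heart rate (beats/min) from a mean-brightness time series:
    detrend, take the FFT, and pick the dominant frequency within a
    plausible heart-rate band (0.7-4 Hz, i.e. 42-240 bpm)."""
    x = np.asarray(brightness, dtype=float)
    x = x - x.mean()                          # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)    # restrict to heart-rate band
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0

# Synthetic test signal: a 72 bpm (1.2 Hz) pulse plus noise, 30 fps, 10 s
fps = 30
t = np.arange(0, 10, 1 / fps)
signal = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * np.random.randn(t.size)
bpm = estimate_pulse_bpm(signal, fps)
```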

  6. Microscopy image segmentation tool: Robust image data analysis

    NASA Astrophysics Data System (ADS)

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images that contain large collections of small regions of interest (ROIs). Originally developed for the analysis of porous anodic alumina scanning electron images, MIST's capabilities have been expanded to allow use in a large variety of problems, including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and mesoscopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible, allowing incorporation of specialized user-developed analyses. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  7. Computer image analysis in obtaining characteristics of images: greenhouse tomatoes in the process of generating learning sets of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Przybył, J.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.

    2014-04-01

    The aim of the project was to develop software which, on the basis of an image of a greenhouse tomato, allows extraction of its characteristics. Data gathered during image analysis and processing were used to build learning sets for artificial neural networks. The program can process pictures in JPEG format, acquire statistical information from a picture, and export it to an external file. The software is intended to batch-analyze the collected research material, with the obtained information saved as a CSV file. It computes 33 independent parameters to describe the tested image. The application is dedicated to the processing and image analysis of greenhouse tomatoes, but it can also be used to analyze other fruits and vegetables of a spherical shape.
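    The batch feature-extraction-to-CSV step might look like the sketch below; the statistics and their names are hypothetical stand-ins for a few of the 33 parameters, not the actual software's feature set:

```python
import csv
import io
import numpy as np

def tomato_features(rgb):
    """A small illustrative set of per-image statistics (hypothetical
    stand-ins for the software's 33 parameters)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return {
        "mean_r": r.mean(), "mean_g": g.mean(), "mean_b": b.mean(),
        "std_r": r.std(),
        "red_green_ratio": r.mean() / (g.mean() + 1e-9),  # crude ripeness cue
    }

def batch_to_csv(images):
    """Batch-analyse a list of images; return the rows and the CSV text."""
    rows = [tomato_features(img) for img in images]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return rows, buf.getvalue()

# Two synthetic 8x8 "images": one reddish, one greenish
red = np.zeros((8, 8, 3), dtype=np.uint8)
red[..., 0], red[..., 1] = 200, 40
green = np.zeros((8, 8, 3), dtype=np.uint8)
green[..., 1] = 180
rows, csv_text = batch_to_csv([red, green])
```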

  8. Chemometric analysis of multisensor hyperspectral images of precipitated atmospheric particulate matter.

    PubMed

    Ofner, Johannes; Kamilli, Katharina A; Eitenberger, Elisabeth; Friedbacher, Gernot; Lendl, Bernhard; Held, Andreas; Lohninger, Hans

    2015-09-15

    The chemometric analysis of multisensor hyperspectral data allows a comprehensive image-based analysis of precipitated atmospheric particles. Atmospheric particulate matter was precipitated on aluminum foils and analyzed by Raman microspectroscopy and subsequently by electron microscopy and energy dispersive X-ray spectroscopy. All obtained images were of the same spot of an area of 100 × 100 μm². The two hyperspectral data sets and the high-resolution scanning electron microscope images were fused into a combined multisensor hyperspectral data set. This multisensor data cube was analyzed using principal component analysis, hierarchical cluster analysis, k-means clustering, and vertex component analysis. The detailed chemometric analysis of the multisensor data allowed an extensive chemical interpretation of the precipitated particles, and their structure and composition led to a comprehensive understanding of atmospheric particulate matter.
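    Two of the chemometric steps named above, PCA followed by k-means clustering, can be sketched on a synthetic stand-in for the fused data cube; the spectra and class structure below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the fused data cube: 400 pixels, 50 spectral
# channels, two particle classes with made-up endmember spectra.
n_px, n_ch = 400, 50
labels_true = rng.integers(0, 2, n_px)
endmembers = rng.normal(size=(2, n_ch))
X = endmembers[labels_true] + 0.1 * rng.normal(size=(n_px, n_ch))

# Principal component analysis via SVD on the mean-centred spectra
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T              # project onto the first two components

# Minimal k-means (k=2) on the PCA scores, seeded with two far-apart points
far = np.argmax(np.linalg.norm(scores - scores[0], axis=1))
centres = np.stack([scores[0], scores[far]])
for _ in range(20):
    d = np.linalg.norm(scores[:, None] - centres[None], axis=2)
    assign = d.argmin(axis=1)
    centres = np.array([scores[assign == k].mean(axis=0) for k in range(2)])
```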

  9. Image Processing for Bioluminescence Resonance Energy Transfer Measurement-BRET-Analyzer.

    PubMed

    Chastagnier, Yan; Moutin, Enora; Hemonnot, Anne-Laure; Perroy, Julie

    2017-01-01

    A growing number of tools now allow live recording of various signaling pathways and protein-protein interaction dynamics in time and space by ratiometric measurements, such as Bioluminescence Resonance Energy Transfer (BRET) imaging. Accurate and reproducible analysis of ratiometric measurements has thus become mandatory for interpreting quantitative imaging. To meet this need, we have developed an open-source toolset for Fiji, BRET-Analyzer, allowing a systematic analysis from image processing to ratio quantification. We share this open-source solution and a step-by-step tutorial at https://github.com/ychastagnier/BRET-Analyzer. The toolset provides (1) image background subtraction, (2) image alignment over time, (3) a composite thresholding method for the image used as the denominator of the ratio, to refine the precise limits of the sample, (4) pixel-by-pixel division of the images and efficient distribution of the ratio intensity on a pseudocolor scale, and (5) quantification of the mean ratio intensity and its standard deviation among pixels in chosen areas. In addition to systematizing the analysis process, we show that BRET-Analyzer allows proper reconstitution and quantification of the ratiometric image in time and space, even from heterogeneous subcellular volumes. Analyzing the same images twice, we demonstrate that, compared with standard analysis, BRET-Analyzer precisely defines the limits of the luminescent specimen, resolving signals from both small and large ensembles over time. For example, we followed and quantified, in live cells, scaffold protein interaction dynamics in neuronal subcellular compartments, including dendritic spines, for half an hour. In conclusion, BRET-Analyzer provides a complete, versatile and efficient toolset for automated, reproducible and meaningful image ratio analysis.
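    Steps (1), (3), (4) and (5) can be sketched as follows; this is a simplified single-frame version with a fixed background level and a plain intensity threshold, not BRET-Analyzer's composite thresholding:

```python
import numpy as np

def ratio_image(num, den, background, threshold):
    """Pixel-by-pixel ratio image following the steps described above
    (single frame; fixed background and threshold for simplicity)."""
    num = num.astype(float) - background        # (1) background subtraction
    den = den.astype(float) - background
    mask = den > threshold                      # (3) limit to the specimen
    ratio = np.full(den.shape, np.nan)
    ratio[mask] = num[mask] / den[mask]         # (4) pixelwise division
    return ratio, np.nanmean(ratio), np.nanstd(ratio)  # (5) statistics

den = np.full((4, 4), 100.0)
den[0, 0] = 5.0                                 # one pixel outside the specimen
num = np.full((4, 4), 60.0)
ratio, mean_r, std_r = ratio_image(num, den, background=10.0, threshold=20.0)
```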

  10. Thermal image analysis using the serpentine method

    NASA Astrophysics Data System (ADS)

    Koprowski, Robert; Wilczyński, Sławomir

    2018-03-01

    Thermal imaging is an increasingly widespread alternative to other imaging methods. As a supplementary method in diagnostics, it can be used both statically and with dynamic temperature changes. The paper proposes a new image analysis method that allows for the acquisition of new diagnostic information as well as object segmentation. The proposed serpentine analysis uses known methods of image analysis and processing together with new ones proposed by the authors. Affine transformations of an image and subsequent Fourier analysis provide a new diagnostic quality. The method is fully repeatable, automatic and independent of inter-individual variability in patients. The segmentation results are 10% better than those obtained from the watershed method and the hybrid segmentation method based on the Canny detector. The first and second harmonics of the serpentine analysis enable determination of the type of temperature changes in the region of interest (gradient, number of heat sources, etc.). The presented serpentine method provides new quantitative information from thermal imaging. Since it allows for image segmentation and the designation of contact points of two or more heat sources (local minima), it can be used to support medical diagnostics in many areas of medicine.
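    One plausible reading of the serpentine scan, not necessarily the authors' exact algorithm, is to unroll the region of interest along a boustrophedon path before Fourier analysis; unlike plain raster order, this keeps the 1-D temperature profile continuous at row boundaries:

```python
import numpy as np

def serpentine_profile(img):
    """Unroll a 2-D ROI into a 1-D profile, scanning rows alternately
    left-to-right and right-to-left (boustrophedon order)."""
    rows = [row if i % 2 == 0 else row[::-1] for i, row in enumerate(img)]
    return np.concatenate(rows)

def first_harmonics(profile, n=2):
    """Magnitudes of the first n FFT harmonics of the detrended profile."""
    x = profile - profile.mean()
    return np.abs(np.fft.rfft(x))[1:n + 1]

# Thermal ROI with a single horizontal gradient (a single "heat source" edge)
grad = np.tile(np.linspace(30.0, 34.0, 16), (16, 1))
p = serpentine_profile(grad)
h1, h2 = first_harmonics(p)
```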

  11. Analysis and segmentation of images in case of solving problems of detecting and tracing objects on real-time video

    NASA Astrophysics Data System (ADS)

    Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton

    2016-04-01

    The article deals with methods of image segmentation based on color-space conversion, which allow efficient detection of a single color against a complex background and under varying lighting, as well as detection of objects on a homogeneous background. The results of an analysis of segmentation algorithms of this type and the possibility of implementing them in software are presented. The implemented algorithm is computationally expensive, which limits its application to video analysis; however, it solves the problem of analyzing objects in an image when no image dictionary or knowledge base is available, as well as the problem of choosing optimal frame-quantization parameters for video analysis.

  12. Roles of universal three-dimensional image analysis devices that assist surgical operations.

    PubMed

    Sakamoto, Tsuyoshi

    2014-04-01

    The circumstances surrounding medical image analysis have undergone rapid evolution. The "imaging" obtained through medical imaging modalities and the "analysis" we apply to it have become amalgamated: for the imaging analysis of any organ system, the distance between the two has narrowed to the point of integration. The history of medical image analysis started with the appearance of the computer. The invention of multi-planar reconstruction (MPR), used with the helical scan, had a significant impact and became the basis for recent image analysis. Subsequently, curved MPR (CPR) and other methods were developed, and 3D diagnostic imaging and image analysis of the human body started on a full scale. Volume rendering: the development of a new rendering algorithm and the significant improvement of memory and CPUs contributed to the development of "volume rendering," which allows 3D views with retained internal information. This development created new value: computed tomography (CT) images that had previously served only "diagnosis" became "applicable to treatment." Before volume rendering, a clinician had to mentally reconstruct an image reconfigured for diagnosis into a 3D image; these developments allowed the depiction of a 3D image on a monitor. Current technology: currently, in Japan, estimation of the liver volume and of the perfusion areas of the portal vein and hepatic vein is vigorously being adopted during preoperative planning for hepatectomy. This has been brought about by the substantial improvement of the basic techniques described above and by upgraded user interfaces that allow doctors to perform the manipulations easily themselves.
Future of post-processing technology: in terms of the role of image analysis, it is expected, for better or worse, that computer-aided diagnosis (CAD) will develop to a highly advanced level in every diagnostic field. In the treatment field, techniques coordinating various devices will be strongly required for surgical navigation. Surgery using image navigation is already being widely studied, and coordination with hardware, including robots, will also be developed. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.

  13. [Development of an original computer program FISHMet: use for molecular cytogenetic diagnosis and genome mapping by fluorescent in situ hybridization (FISH)].

    PubMed

    Iurov, Iu B; Khazatskiĭ, I A; Akindinov, V A; Dovgilov, L V; Kobrinskiĭ, B A; Vorsanova, S G

    2000-08-01

    The original software FISHMet has been developed and tested to improve the efficiency of diagnosis of hereditary diseases caused by chromosome aberrations and of chromosome mapping by the fluorescence in situ hybridization (FISH) method. The program allows creation and analysis of pseudocolor chromosome images and hybridization signals under Windows 95; it supports computer analysis and editing of the results of pseudocolor in situ hybridization, including successive overlay of the initial black-and-white images acquired through fluorescent filters (blue, green, and red), and editing of each image individually or of the summary pseudocolor image in BMP, TIFF, and JPEG formats. Components of the computer image analysis system (LOMO, Leitz Ortoplan, and Axioplan fluorescence microscopes; COHU 4910 and Sanyo VCB-3512P CCD cameras; Miro-Video, Scion LG-3 and VG-5 image capture boards; and Pentium 100 and Pentium 200 computers) and specialized software for image capture and visualization (Scion Image PC and Video-Cup) were used with good results in the study.

  14. Distortion correction and cross-talk compensation algorithm for use with an imaging spectrometer based spatially resolved diffuse reflectance system

    NASA Astrophysics Data System (ADS)

    Cappon, Derek J.; Farrell, Thomas J.; Fang, Qiyin; Hayward, Joseph E.

    2016-12-01

    Optical spectroscopy of human tissue has been widely applied within the field of biomedical optics to allow rapid, in vivo characterization and analysis of the tissue. When designing an instrument of this type, an imaging spectrometer is often employed to allow for simultaneous analysis of distinct signals. This is especially important when performing spatially resolved diffuse reflectance spectroscopy. In this article, an algorithm is presented that allows for the automated processing of 2-dimensional images acquired from an imaging spectrometer. The algorithm automatically defines distinct spectrometer tracks and adaptively compensates for distortion introduced by optical components in the imaging chain. Crosstalk resulting from the overlap of adjacent spectrometer tracks in the image is detected and subtracted from each signal. The algorithm's performance is demonstrated in the processing of spatially resolved diffuse reflectance spectra recovered from an Intralipid and ink liquid phantom and is shown to increase the range of wavelengths over which usable data can be recovered.
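    The crosstalk-subtraction idea can be sketched with a toy linear model in which each measured track picks up a fixed fraction of its neighbours; the paper's algorithm estimates the overlap from the image itself, whereas here the mixing fraction alpha is assumed known:

```python
import numpy as np

# Toy model: measured track i = true track i + alpha * (its neighbours).
# The spectra are random stand-ins, not real diffuse reflectance data.
alpha, n_tracks, n_wl = 0.05, 4, 100
rng = np.random.default_rng(1)
true = rng.uniform(1.0, 2.0, size=(n_tracks, n_wl))      # true track spectra

# Tridiagonal mixing matrix: identity plus alpha on the off-diagonals
M = np.eye(n_tracks) + alpha * (np.eye(n_tracks, k=1) + np.eye(n_tracks, k=-1))
measured = M @ true                                       # contaminated tracks
recovered = np.linalg.solve(M, measured)                  # compensation step
```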

  15. Parallel processing considerations for image recognition tasks

    NASA Astrophysics Data System (ADS)

    Simske, Steven J.

    2011-01-01

    Many image recognition tasks are well-suited to parallel processing. The most obvious example is that many imaging tasks require the analysis of multiple images. From this standpoint, parallel processing need be no more complicated than assigning individual images to individual processors. However, there are three less trivial categories of parallel processing that will be considered in this paper: parallel processing (1) by task; (2) by image region; and (3) by meta-algorithm. Parallel processing by task allows the assignment of multiple workflows, as diverse as optical character recognition (OCR), document classification and barcode reading, to parallel pipelines. This can substantially decrease the time to completion for the document tasks. In this approach, each parallel pipeline generally performs a different task. Parallel processing by image region allows a larger imaging task to be subdivided into a set of parallel pipelines, each performing the same task but on a different data set. This type of image analysis is readily addressed by a map-reduce approach. Examples include document skew detection and multiple face detection and tracking. Finally, parallel processing by meta-algorithm allows different algorithms to be deployed on the same image simultaneously. This approach may result in improved accuracy.
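    Parallel processing by image region can be sketched as a small map-reduce over strips of one image; threads stand in here for whatever worker pool or map-reduce framework a real deployment would use:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def count_bright(tile, thresh=128):
    """Per-region task: count pixels above a brightness threshold."""
    return int((tile > thresh).sum())

def parallel_bright_count(img, n_strips=4):
    """Map-reduce over horizontal strips of one large image: each strip
    is processed independently (the 'map'), results are summed (the
    'reduce')."""
    strips = np.array_split(img, n_strips, axis=0)
    with ThreadPoolExecutor(max_workers=n_strips) as pool:
        partial = list(pool.map(count_bright, strips))
    return sum(partial)

img = np.zeros((400, 400), dtype=np.uint8)
img[:100, :] = 255                       # bright band in the top strip
total = parallel_bright_count(img)
```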

  16. Biomedical image analysis and processing in clouds

    NASA Astrophysics Data System (ADS)

    Bednarz, Tomasz; Szul, Piotr; Arzhaeva, Yulia; Wang, Dadong; Burdett, Neil; Khassapov, Alex; Chen, Shiping; Vallotton, Pascal; Lagerstrom, Ryan; Gureyev, Tim; Taylor, John

    2013-10-01

    The Cloud-Based Image Analysis and Processing Toolbox project runs on the Australian National eResearch Collaboration Tools and Resources (NeCTAR) cloud infrastructure and gives researchers access to biomedical image processing and analysis services via remotely accessible user interfaces. By providing user-friendly access to cloud computing resources and new workflow-based interfaces, our solution enables researchers to carry out various challenging image analysis and reconstruction tasks. Several case studies will be presented during the conference.

  17. Quantification of root gravitropic response using a constant stimulus feedback system.

    PubMed

    Wolverton, Chris

    2015-01-01

    Numerous software packages now exist for quantifying root growth responses, most of which analyze a time-resolved sequence of images ex post facto. However, few allow real-time analysis of growth responses. The system in routine use in our lab allows real-time growth analysis and couples this to positional feedback to control the stimulus experienced by the responding root. This combination allows us to overcome one of the confounding variables in studies of the root gravity response. Seedlings are grown on standard petri plates attached to a vertical rotating stage and imaged using infrared illumination. The angle of a particular region of the root is determined by image analysis and compared to the prescribed angle, and any corrections in positioning are made by controlling a stepper motor. The system allows for the long-term stimulation of a root at a constant angle and yields insights into the gravity perception and transduction machinery not possible with other approaches.
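    The feedback loop can be sketched as a toy proportional controller; the bending rate, gain, and angles below are invented for illustration, and the real system drives a stepper motor from image-derived angles rather than a simulated plant:

```python
def stage_correction(measured_angle, setpoint, gain=0.5):
    """Proportional correction applied by the rotating stage (toy model;
    the real system converts this into stepper-motor commands)."""
    return gain * (setpoint - measured_angle)

# Simulate: the root bends a little each step; the stage keeps correcting
angle, setpoint = 90.0, 45.0          # root starts far from the set angle
for _ in range(50):
    angle += 0.2                       # toy gravitropic bending per step
    angle += stage_correction(angle, setpoint)
# The loop settles at a small steady-state offset above the setpoint,
# as any purely proportional controller does against a constant drift.
```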

  18. Cloud-based processing of multi-spectral imaging data

    NASA Astrophysics Data System (ADS)

    Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David

    2017-03-01

    Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multispectral imaging on a handheld mobile device would bring this technology, and with it expert knowledge, to low-resource settings to provide state-of-the-art classification of tissue health. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before it can be used. The data then need to be analyzed and logged without consuming too much of the endpoint device's system resources, computation time, or battery. Cloud environments were designed for exactly this kind of offloading, allowing endpoint devices (smartphones) to hand off computationally hard tasks. To this end, we present a method in which a handheld device built around a smartphone captures a multispectral dataset in a movie file format (mp4), and we compare this format to other image formats in size, noise, and correctness. We present the cloud configuration used for segmenting the video into frames, which can later be used for further analysis.

  19. Comparing methods for analysis of biomedical hyperspectral image data

    NASA Astrophysics Data System (ADS)

    Leavesley, Silas J.; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter F.; Annamdevula, Naga S.; Rich, Thomas C.

    2017-02-01

    Over the past 2 decades, hyperspectral imaging technologies have been adapted to address the need for molecule-specific identification in the biomedical imaging field. Applications have ranged from single-cell microscopy to whole-animal in vivo imaging and from basic research to clinical systems. Enabling this growth has been the availability of faster, more effective hyperspectral filtering technologies and more sensitive detectors. Hence, the potential for growth of biomedical hyperspectral imaging is high, and many hyperspectral imaging options are already commercially available. However, despite the growth in hyperspectral technologies for biomedical imaging, little work has been done to aid users of hyperspectral imaging instruments in selecting appropriate analysis algorithms. Here, we present an approach for comparing the effectiveness of spectral analysis algorithms by combining experimental image data with a theoretical "what if" scenario. This approach allows us to quantify several key outcomes that characterize a hyperspectral imaging study: linearity of sensitivity, positive detection cut-off slope, dynamic range, and false positive events. We present results of using this approach for comparing the effectiveness of several common spectral analysis algorithms for detecting weak fluorescent protein emission in the midst of strong tissue autofluorescence. Results indicate that this approach should be applicable to a very wide range of applications, allowing a quantitative assessment of the effectiveness of the combined biology, hardware, and computational analysis for detecting a specific molecular signature.
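    One of the common spectral analysis algorithms such a comparison would include is linear unmixing, sketched here with made-up endmember shapes: a weak, narrow fluorophore emission on top of broad tissue autofluorescence, recovered by least squares:

```python
import numpy as np

# Made-up endmember spectra over a 450-700 nm range (illustrative only)
wl = np.linspace(450, 700, 60)
auto = np.exp(-((wl - 520) / 80.0) ** 2)     # broad tissue autofluorescence
gfp = np.exp(-((wl - 510) / 15.0) ** 2)      # narrow fluorophore-like peak
E = np.column_stack([auto, gfp])             # endmember matrix

# Simulate one pixel: strong background, weak fluorophore, small noise
true_ab = np.array([1.0, 0.05])
rng = np.random.default_rng(2)
pixel = E @ true_ab + 0.002 * rng.normal(size=wl.size)

# Linear unmixing: least-squares estimate of the endmember abundances
est_ab, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```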

  20. The New Possibilities from "Big Data" to Overlooked Associations Between Diabetes, Biochemical Parameters, Glucose Control, and Osteoporosis.

    PubMed

    Kruse, Christian

    2018-06-01

    To review current practices and technologies within the scope of "Big Data" that can further our understanding of diabetes mellitus and osteoporosis from large volumes of data. "Big Data" techniques involving supervised machine learning, unsupervised machine learning, and deep learning image analysis are presented with examples of current literature. Supervised machine learning can allow us to better predict diabetes-induced osteoporosis and understand relative predictor importance of diabetes-affected bone tissue. Unsupervised machine learning can allow us to understand patterns in data between diabetic pathophysiology and altered bone metabolism. Image analysis using deep learning can allow us to be less dependent on surrogate predictors and use large volumes of images to classify diabetes-induced osteoporosis and predict future outcomes directly from images. "Big Data" techniques herald new possibilities to understand diabetes-induced osteoporosis and ascertain our current ability to classify, understand, and predict this condition.

  1. In utero eyeball development study by magnetic resonance imaging.

    PubMed

    Brémond-Gignac, D S; Benali, K; Deplus, S; Cussenot, O; Ferkdadji, L; Elmaleh, M; Lassau, J P

    1997-01-01

    The aim of this study was to measure fetal ocular development and to determine a growth curve by means of measurements in utero. Fetal ocular development was recorded by analysis of the results of magnetic resonance imaging (MRI). An anatomic study allowed definition of the best contrasted MRI sequences for calculation of the ocular surface. Biometric analysis of the values of the ocular surface in the neuro-ocular plane in 35 fetuses allowed establishment of a linear model of ocular growth curve in utero. Evaluation of ocular development may allow the detection and confirmation of malformational ocular anomalies such as microphthalmia.
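    Fitting a linear growth model to such measurements can be sketched as follows; the gestational ages and ocular-surface values below are synthetic illustrations, not the study's data:

```python
import numpy as np

# Synthetic (gestational age, ocular surface) pairs; made-up values
age_weeks = np.array([20.0, 24.0, 28.0, 32.0, 36.0])
surface_mm2 = np.array([45.0, 62.0, 80.0, 96.0, 113.0])

# Linear growth curve: surface = slope * age + intercept
slope, intercept = np.polyfit(age_weeks, surface_mm2, 1)
predicted = slope * age_weeks + intercept
r = np.corrcoef(age_weeks, surface_mm2)[0, 1]   # goodness of linear fit
```

A measured eye falling far below the fitted line for its gestational age would then flag a possible anomaly such as microphthalmia.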

  2. [Research applications in digital radiology. Big data and co].

    PubMed

    Müller, H; Hanbury, A

    2016-02-01

    Medical imaging produces increasingly complex images (e.g. thinner slices and higher resolution) with more protocols, so that image reading has also become much more complex. More information needs to be processed and usually the number of radiologists available for these tasks has not increased to the same extent. The objective of this article is to present current research results from projects on the use of image data for clinical decision support. An infrastructure that can allow large volumes of data to be accessed is presented. In this way the best performing tools can be identified without the medical data having to leave secure servers. The text presents the results of the VISCERAL and Khresmoi EU-funded projects, which allow the analysis of previous cases from institutional archives to support decision-making and for process automation. The results also represent a secure evaluation environment for medical image analysis. This allows the use of data extracted from past cases to solve information needs occurring when diagnosing new cases. The presented research prototypes allow direct extraction of knowledge from the visual data of the images and to use this for decision support or process automation. Real clinical use has not been tested but several subjective user tests showed the effectiveness and efficiency of the process. The future in radiology will clearly depend on better use of the important knowledge in clinical image archives to automate processes and aid decision-making via big data analysis. This can help concentrate the work of radiologists towards the most important parts of diagnostics.

  3. Computer assisted analysis of auroral images obtained from high altitude polar satellites

    NASA Technical Reports Server (NTRS)

    Samadani, Ramin; Flynn, Michael

    1993-01-01

    Automatic techniques that allow the extraction of physically significant parameters from auroral images were developed. This allows the processing of a much larger number of images than is currently possible with manual techniques. Our techniques were applied to diverse auroral image datasets. These results were made available to geophysicists at NASA and at universities in the form of a software system that performs the analysis. After some feedback from users, an upgraded system was transferred to NASA and to two universities. The feasibility of user-trained search and retrieval of large amounts of data using our automatically derived parameter indices was demonstrated. Techniques based on classification and regression trees (CART) were developed and applied to broaden the types of images to which the automated search and retrieval may be applied. Our techniques were tested with DE-1 auroral images.

  4. Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach.

    PubMed

    Arganda-Carreras, Ignacio; Andrey, Philippe

    2017-01-01

    With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, by assembling a series of image processing operations. Many commercial and free bioimage analysis software packages are now available, and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology to rationally address the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backwards procedure from the target objectives of analysis. The proposed goal-oriented strategy should help biologists to better apprehend image analysis in the context of their research and should allow them to interact efficiently with image processing specialists.

  5. Design and calibration of a vacuum compatible scanning tunneling microscope

    NASA Technical Reports Server (NTRS)

    Abel, Phillip B.

    1990-01-01

    A vacuum compatible scanning tunneling microscope was designed and built, capable of imaging solid surfaces with atomic resolution. The single piezoelectric tube design is compact and makes use of sample mounting stubs standard to a commercially available surface analysis system. Image collection and display are computer controlled, allowing storage of images for further analysis. Calibration results from atomic scale images are presented.

  6. TANGO: a generic tool for high-throughput 3D image analysis for studying nuclear organization.

    PubMed

    Ollion, Jean; Cochennec, Julien; Loll, François; Escudé, Christophe; Boudier, Thomas

    2013-07-15

    The cell nucleus is a highly organized cellular organelle that contains the genetic material. The study of nuclear architecture has become an important field of cellular biology. Extracting quantitative data from 3D fluorescence imaging helps understand the functions of different nuclear compartments. However, such approaches are limited by the requirement for processing and analyzing large sets of images. Here, we describe Tools for Analysis of Nuclear Genome Organization (TANGO), an image analysis tool dedicated to the study of nuclear architecture. TANGO is a coherent framework allowing biologists to perform the complete analysis process of 3D fluorescence images by combining two environments: ImageJ (http://imagej.nih.gov/ij/) for image processing and quantitative analysis and R (http://cran.r-project.org) for statistical processing of measurement results. Its intuitive user interface lets users precisely build a segmentation procedure and set up analyses without programming skills. TANGO is a versatile tool able to process large sets of images, allowing quantitative study of nuclear organization. It is composed of two programs: (i) an ImageJ plug-in and (ii) a package (rtango) for R. Both are free and open source, available (http://biophysique.mnhn.fr/tango) for Linux, Microsoft Windows and Macintosh OSX, and distributed under the GPL v.2 licence. Contact: thomas.boudier@snv.jussieu.fr. Supplementary data are available at Bioinformatics online.

  7. A database system to support image algorithm evaluation

    NASA Technical Reports Server (NTRS)

    Lien, Y. E.

    1977-01-01

    The design is given of an interactive image database system IMDB, which allows the user to create, retrieve, store, display, and manipulate images through the facility of a high-level, interactive image query (IQ) language. The query language IQ permits the user to define false color functions, pixel value transformations, overlay functions, zoom functions, and windows. The user manipulates the images through generic functions. The user can direct images to display devices for visual and qualitative analysis. Image histograms and pixel value distributions can also be computed to obtain a quantitative analysis of images.

  8. The NOAO NVO Portal

    NASA Astrophysics Data System (ADS)

    Miller, C. J.; Gasson, D.; Fuentes, E.

    2007-10-01

    The NOAO NVO Portal is a web application for one-stop discovery, analysis, and access to VO-compliant imaging data and services. The current release allows for GUI-based discovery of nearly a half million images from archives such as the NOAO Science Archive, the Hubble Space Telescope WFPC2 and ACS instruments, XMM-Newton, Chandra, and ESO's INT Wide-Field Survey, among others. The NOAO Portal allows users to view image metadata, footprint wire-frames, FITS image previews, and provides one-click access to science quality imaging data throughout the entire sky via the Firefox web browser (i.e., no applet or code to download). Users can stage images from multiple archives at the NOAO NVO Portal for quick and easy bulk downloads. The NOAO NVO Portal also provides simplified and direct access to VO analysis services, such as the WESIX catalog generation service. We highlight the features of the NOAO NVO Portal (http://nvo.noao.edu).

  9. 3D Filament Network Segmentation with Multiple Active Contours

    NASA Astrophysics Data System (ADS)

    Xu, Ting; Vavylonis, Dimitrios; Huang, Xiaolei

    2014-03-01

    Fluorescence microscopy is frequently used to study two- and three-dimensional network structures formed by cytoskeletal polymer fibers such as actin filaments and microtubules. While these cytoskeletal structures are often dilute enough to allow imaging of individual filaments or bundles of them, quantitative analysis of these images is challenging. To facilitate quantitative, reproducible and objective analysis of the image data, we developed a semi-automated method to extract actin networks and retrieve their topology in 3D. Our method uses multiple Stretching Open Active Contours (SOACs) that are automatically initialized at image intensity ridges and then evolve along the centerlines of filaments in the network. SOACs can merge, stop at junctions, and reconfigure with others to allow smooth crossing at junctions of filaments. The proposed approach is generally applicable to images of curvilinear networks with low SNR. We demonstrate its potential by extracting the centerlines of synthetic meshwork images, actin networks in 2D TIRF microscopy images, and 3D actin cable meshworks of live fission yeast cells imaged by spinning disk confocal microscopy.
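
    The full SOAC model is beyond a short example, but the ridge-based initialization step can be illustrated: candidate seed points are pixels that are local intensity maxima transverse to a filament and above a noise threshold. A simplified sketch on synthetic data (the transverse direction is fixed to the image columns here, unlike the direction-adaptive real method):

```python
import numpy as np

def ridge_points(img, threshold):
    """Pixels that are local maxima along the vertical (row) direction
    and brighter than `threshold` -- a crude stand-in for the ridge
    detection used to seed stretching open active contours."""
    up = np.roll(img, 1, axis=0)
    down = np.roll(img, -1, axis=0)
    mask = (img > up) & (img > down) & (img > threshold)
    mask[0, :] = mask[-1, :] = False            # ignore wrap-around rows
    return np.argwhere(mask)

# Synthetic horizontal filament: a Gaussian intensity ridge centered on row 20.
rows = np.arange(64)[:, None]
img = np.exp(-((rows - 20) ** 2) / (2 * 2.0 ** 2)) * np.ones((64, 64))
pts = ridge_points(img, threshold=0.5)
rows_found = {int(r) for r in pts[:, 0]}
print(rows_found)  # every detected seed lies on the ridge row -> {20}
```

In the actual method these seeds initialize open snakes that then stretch along the filament centerlines.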

  10. Advanced IR System For Supersonic Boundary Layer Transition Flight Experiment

    NASA Technical Reports Server (NTRS)

    Banks, Daniel W.

    2008-01-01

    Infrared thermography is a preferred method for investigating transition in flight: it is global and non-intrusive, and it can also be used to visualize and characterize other fluid-mechanic phenomena such as shock impingement and separation. The F-15-based system was updated with a new camera and digital video recorder to support high-Reynolds-number transition tests. Digital recording improves image quality and analysis capability, allows accurate quantitative (temperature) measurements, and, through greater enhancement by image processing, permits analysis of smaller-scale phenomena.

  11. Detection and Characterization of Boundary-Layer Transition in Flight at Supersonic Conditions Using Infrared Thermography

    NASA Technical Reports Server (NTRS)

    Banks, Daniel W.

    2008-01-01

    Infrared thermography is a powerful tool for investigating fluid mechanics on flight vehicles; it can be used to visualize and characterize transition, shock impingement, separation, and related phenomena. An updated onboard F-15-based system was used to visualize a supersonic boundary-layer transition test article with Tollmien-Schlichting- and cross-flow-dominant flow fields. Digital recording improves image quality and analysis capability, allowing accurate quantitative (temperature) measurements, and greater enhancement through image processing allows analysis of smaller-scale phenomena.

  12. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assess image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB based image analysis tool kit to analyze CT generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that generates PCA components with sparse loadings, used in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.
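
    As a rough illustration of the statistical step, Hotelling's T2 can be computed per sample from principal-component scores as T2 = sum(score_i^2 / lambda_i). Plain PCA stands in below for the paper's modified sparse PCA, and the metric data are synthetic:

```python
import numpy as np

def hotelling_t2(X, n_components=2):
    """Hotelling T^2 statistic per sample from the leading principal
    components of X (rows = samples, columns = image-quality metrics).
    Plain PCA stands in here for the paper's modified sparse PCA."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    scores = Xc @ eigvecs[:, order]             # project onto top components
    return np.sum(scores ** 2 / eigvals[order], axis=1)

rng = np.random.default_rng(1)
metrics = rng.normal(size=(50, 8))              # 50 scans x 8 quality metrics
metrics[-1] += 10.0                             # one faulty scan, far off-center
t2 = hotelling_t2(metrics, n_components=2)
print(int(np.argmax(t2)))                       # the outlier scan -> 49
```

A large T2 flags a scanner whose metric vector is far from the population in the reduced component space, which is the fault-detection use described above.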

  13. Web based tools for visualizing imaging data and development of XNATView, a zero footprint image viewer

    PubMed Central

    Gutman, David A.; Dunn, William D.; Cobb, Jake; Stoner, Richard M.; Kalpathy-Cramer, Jayashree; Erickson, Bradley

    2014-01-01

    Advances in web technologies now allow direct visualization of imaging data sets without necessitating the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a lightweight framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework that wraps the REST application programming interface (API) to query the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser, navigate through projects, experiments, and subjects, and view DICOM images with accompanying metadata all within a single viewing instance. PMID:24904399

  14. Diderot: a Domain-Specific Language for Portable Parallel Scientific Visualization and Image Analysis.

    PubMed

    Kindlmann, Gordon; Chiw, Charisee; Seltzer, Nicholas; Samuels, Lamont; Reppy, John

    2016-01-01

    Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms, while the parallelism allows efficient execution on real-world datasets.

  15. Ruby-Helix: an implementation of helical image processing based on object-oriented scripting language.

    PubMed

    Metlagel, Zoltan; Kikkawa, Yayoi S; Kikkawa, Masahide

    2007-01-01

    Helical image analysis in combination with electron microscopy has been used to study three-dimensional structures of various biological filaments or tubes, such as microtubules, actin filaments, and bacterial flagella. A number of packages have been developed to carry out helical image analysis. Some biological specimens, however, have a symmetry break (seam) in their three-dimensional structure, even though their subunits are mostly arranged in a helical manner. We refer to these objects as "asymmetric helices". All the existing packages are designed for helically symmetric specimens, and do not allow analysis of asymmetric helical objects, such as microtubules with seams. Here, we describe Ruby-Helix, a new set of programs for the analysis of "helical" objects with or without a seam. Ruby-Helix is built on top of the Ruby programming language and is the first implementation of asymmetric helical reconstruction for practical image analysis. It also allows easier and semi-automated analysis, performing iterative unbending and accurate determination of the repeat length. As a result, Ruby-Helix enables us to analyze motor-microtubule complexes with higher throughput to higher resolution.

  16. Method for stitching microbial images using a neural network

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.; Tolstova, I. V.

    2017-05-01

    Analog microscopes are currently in wide use in medicine, animal husbandry, monitoring of technological objects, oceanography, agriculture, and other fields. An automatic acquisition method is preferred because it greatly reduces the manual work involved: stepper motors move the microscope slide and adjust the focus in semi-automatic or automatic mode, transferring images of microbiological objects from the microscope eyepiece to a computer screen. Scene analysis then locates regions with pronounced abnormalities in order to focus the specialist's attention. This paper presents a method for stitching microbial images obtained with such a semi-automatic microscope. The method preserves the boundaries of objects located in the capture area of the optical system. Object search is based on analysis of the data within the camera's field of view, and we propose a neural network for finding object boundaries; the stitching boundary is then determined from analysis of these object borders. For autofocus, we use the criterion of minimum thickness of the object boundary lines, analyzing the object located on the focal axis of the camera. For objects shifted relative to the focal axis, we recover their borders using a projective transform. Several examples considered in this paper show the effectiveness of the proposed approach on several test images.
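
    The paper's neural-network boundary matching cannot be reproduced from the abstract; as a generic illustration of the registration step that any stitching method needs, the displacement between overlapping fields can be estimated by phase correlation. A minimal sketch on synthetic data:

```python
import numpy as np

def phase_correlation_offset(a, b):
    """Estimate the cyclic (row, col) shift s such that a = roll(b, s).
    Phase correlation is a common first step for registering overlapping
    microscope fields before stitching; the paper's neural-network
    boundary matching is not reproduced here."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                      # normalized cross-power spectrum
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return int(dy), int(dx)

rng = np.random.default_rng(2)
tile = rng.random((128, 128))
shifted = np.roll(tile, shift=(5, 17), axis=(0, 1))  # known displacement
print(phase_correlation_offset(shifted, tile))       # -> (5, 17)
```

Once the pairwise offsets are known, tiles can be placed on a common canvas and blended along the chosen stitching boundary.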

  17. Imaging hydrogen flames by two-photon, laser-induced fluorescence

    NASA Technical Reports Server (NTRS)

    Miles, R.; Lempert, W.; Kumar, V.; Diskin, G.

    1991-01-01

    A nonintrusive multicomponent imaging system is developed which can image hydrogen, hot oxygen, and air simultaneously. An ArF excimer laser is injection-locked to cover the Q1 two-photon transition in molecular hydrogen, which allows the observation of both hot oxygen and cold hydrogen. Rayleigh scattering from the water molecules occurs at the same frequency as the illuminating laser, allowing analysis of the air density. Images of ignited and nonignited hydrogen jets are recorded with a high-sensitivity gated video camera. The images permit analysis of the turbulent hydrogen-core jet, the combustion zone, and the surrounding air, and two-dimensional spatial correlations can be made to study the turbulent structure and couplings between different regions of the flow field. The method is of interest to the study of practical combustion systems which employ hydrogen-air diffusion flames.

  18. A methodology for the semi-automatic digital image analysis of fragmental impactites

    NASA Astrophysics Data System (ADS)

    Chanou, A.; Osinski, G. R.; Grieve, R. A. F.

    2014-04-01

    A semi-automated digital image analysis method is developed for the comparative textural study of impact melt-bearing breccias. This method uses the free software ImageJ developed by the National Institutes of Health (NIH). Digital image analysis is performed on scans of hand samples (10-15 cm across), based on macroscopic interpretations of the rock components. All image processing and segmentation are done semi-automatically, with the least possible manual intervention. The areal fraction of components is estimated and modal abundances can be deduced, where the physical optical properties (e.g., contrast, color) of the samples allow it. Other measurable parameters include, for example, clast size, clast-preferred orientations, average box-counting dimension or fragment shape complexity, and nearest neighbor distances (NnD). This semi-automated method allows the analysis of a larger number of samples in a relatively short time. Textures, granulometry, and shape descriptors are of considerable importance in rock characterization. The methodology is used to determine variations in the physical characteristics of several examples of fragmental impactites.
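
    The basic measurements described (areal fraction, modal abundance, clast size) are straightforward once a binary mask of a component class exists. A minimal Python sketch of the equivalent computations on a synthetic mask (not the authors' ImageJ workflow):

```python
import numpy as np
from scipy import ndimage as ndi

# After segmentation, each component class of a breccia scan is a binary
# mask; its modal abundance is the areal fraction, and clast size can be
# summarized by an equivalent-circle diameter. The mask below is synthetic.

mask = np.zeros((100, 100), dtype=bool)
mask[5:15, 5:15] = True                          # clast 1: 100 px
mask[50:70, 40:60] = True                        # clast 2: 400 px

areal_fraction = mask.mean()                     # fraction of image area
labels, n = ndi.label(mask)
areas = ndi.sum(mask, labels, index=range(1, n + 1))
equiv_diam = 2 * np.sqrt(areas / np.pi)          # equivalent-circle diameters

print(areal_fraction)                            # -> 0.05 (500 / 10000 px)
print(n, [round(float(d), 1) for d in equiv_diam])
```

Shape complexity and nearest-neighbor distances can be derived from the same labeled image using perimeter measurements and centroid coordinates.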

  19. Monitoring of activated sludge settling ability through image analysis: validation on full-scale wastewater treatment plants.

    PubMed

    Mesquita, D P; Dias, O; Amaral, A L; Ferreira, E C

    2009-04-01

    In recent years, a great deal of attention has been focused on research into activated sludge processes, where the solid-liquid separation phase is frequently considered of critical importance due to the different problems that severely affect the compaction and settling of the sludge. With that in mind, image analysis routines were developed in the Matlab environment, allowing the identification and characterization of microbial aggregates and protruding filaments in eight different wastewater treatment plants over a combined period of 2 years. Monitoring of the activated sludge contents allowed the detection of bulking events and subsequent estimation of the sludge volume index. The obtained results proved that the developed image analysis methodology is feasible for continuous examination of the morphological changes in microbial aggregates in activated sludge systems and for identification of disturbances.

  20. Image analysis technique as a tool to identify morphological changes in Trametes versicolor pellets according to exopolysaccharide or laccase production.

    PubMed

    Tavares, Ana P M; Silva, Rui P; Amaral, António L; Ferreira, Eugénio C; Xavier, Ana M R B

    2014-02-01

    An image analysis technique was applied to identify morphological changes of pellets of the white-rot fungus Trametes versicolor in agitated submerged cultures during the production of exopolysaccharide (EPS) or ligninolytic enzymes. Batch tests with four different experimental conditions were carried out. Two different culture media were used, namely yeast medium and Trametes defined medium, and the addition of ligninolytic inducers such as xylidine or pulp and paper industrial effluent was evaluated. Laccase activity, EPS production, and final biomass contents were determined for the batch assays, and pellet morphology was assessed by image analysis techniques. The obtained data allowed the choice of metabolic pathway to be related to the experimental conditions: laccase production in the Trametes defined medium, or EPS production in the rich yeast medium experiments. Furthermore, the image processing and analysis methodology allowed a better comprehension of the physiological phenomena with respect to the corresponding pellet morphological stages.

  1. MSiReader v1.0: Evolving Open-Source Mass Spectrometry Imaging Software for Targeted and Untargeted Analyses.

    PubMed

    Bokhart, Mark T; Nazari, Milad; Garrard, Kenneth P; Muddiman, David C

    2018-01-01

    A major update to the mass spectrometry imaging (MSI) software MSiReader is presented, offering a multitude of newly added features critical to MSI analyses. MSiReader is a free, open-source, and vendor-neutral software written in the MATLAB platform and is capable of analyzing most common MSI data formats. A standalone version of the software, which does not require a MATLAB license, is also distributed. The newly incorporated data analysis features expand the utility of MSiReader beyond simple visualization of molecular distributions. The MSiQuantification tool allows researchers to calculate absolute concentrations from quantification MSI experiments exclusively through MSiReader software, significantly reducing data analysis time. An image overlay feature allows complementary imaging modalities to be displayed with the MSI data. A polarity filter has also been incorporated into the data loading step, allowing facile analysis of polarity-switching experiments without the need for data parsing prior to loading the data file into MSiReader. A quality assurance feature to generate a mass measurement accuracy (MMA) heatmap for an analyte of interest has also been added, allowing investigation of MMA across the imaging experiment. Most importantly, as new features have been added, performance has not degraded; in fact, it has been dramatically improved. These new tools and performance improvements in MSiReader v1.0 enable the MSI community to evaluate their data in greater depth and in less time.

  2. Imaging samples in silica aerogel using an experimental point spread function.

    PubMed

    White, Amanda J; Ebel, Denton S

    2015-02-01

    Light microscopy is a powerful tool that allows for many types of samples to be examined in a rapid, easy, and nondestructive manner. Subsequent image analysis, however, is compromised by distortion of signal by instrument optics. Deconvolution of images prior to analysis allows for the recovery of lost information by procedures that utilize either a theoretically or experimentally calculated point spread function (PSF). Using a laser scanning confocal microscope (LSCM), we have imaged whole impact tracks of comet particles captured in silica aerogel, a low density, porous SiO2 solid, by the NASA Stardust mission. In order to understand the dynamical interactions between the particles and the aerogel, precise grain location and track volume measurement are required. We report a method for measuring an experimental PSF suitable for three-dimensional deconvolution of imaged particles in aerogel. Using fluorescent beads manufactured into Stardust flight-grade aerogel, we have applied a deconvolution technique standard in the biological sciences to confocal images of whole Stardust tracks. The incorporation of an experimentally measured PSF allows for better quantitative measurements of the size and location of single grains in aerogel and more accurate measurements of track morphology.
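
    The abstract does not name the deconvolution algorithm; Richardson-Lucy is a standard choice for this kind of PSF-based restoration in the biological sciences and serves here only as an assumed stand-in. A minimal sketch on a synthetic point source blurred by a Gaussian "experimental" PSF:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30):
    """Minimal Richardson-Lucy deconvolution with a measured PSF.
    A generic textbook sketch, not the package used in the paper."""
    est = np.full(blurred.shape, blurred.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        conv = fftconvolve(est, psf, mode="same")
        ratio = blurred / (conv + 1e-12)
        est *= fftconvolve(ratio, psf_mirror, mode="same")
    return est

# Synthetic test: a point emitter blurred by a Gaussian PSF.
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 1.5**2))
psf /= psf.sum()

truth = np.zeros((41, 41))
truth[20, 20] = 1.0
blurred = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf, iterations=30)
print(restored.max() > blurred.max())  # flux re-concentrates at the point -> True
```

With an experimentally measured PSF in place of the Gaussian, the same iteration recovers grain positions and track boundaries with less optical blur, which is the gain described above.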

  3. New methods for image collection and analysis in scanning Auger microscopy

    NASA Technical Reports Server (NTRS)

    Browning, R.

    1985-01-01

    While scanning Auger micrographs are used extensively for illustrating the stoichiometry of complex surfaces and for indicating areas of interest for fine point Auger spectroscopy, there are many problems in the quantification and analysis of Auger images. These problems include multiple contrast mechanisms and the lack of meaningful relationships with other Auger data. Collection of multielemental Auger images allows some new approaches to image analysis and presentation. Information about the distribution and quantity of elemental combinations at a surface is retrievable, and particular combinations of elements, such as alloy phases, can be imaged. Results from the precipitate-hardened alloy Al-2124 illustrate multispectral Auger imaging.

  4. Medical Image Analysis by Cognitive Information Systems - a Review.

    PubMed

    Ogiela, Lidia; Takizawa, Makoto

    2016-10-01

    This publication presents a review of medical image analysis systems. The paradigms of cognitive information systems are presented through examples of medical image analysis systems, and semantic processes are shown as they apply to different types of medical images. Cognitive information systems are defined on the basis of methods for the semantic analysis and interpretation of information, applied here to the cognitive meaning of the medical images contained in the analyzed data sets. Semantic analysis is proposed to analyze the meaning of the data; that meaning is carried by the information itself, for example by medical images. Medical image analysis is presented and discussed as applied to various types of medical images showing selected human organs with different pathologies, analyzed using different classes of cognitive information systems. Cognitive information systems dedicated to medical image analysis are also defined for decision-support tasks. This is important, for example, in diagnostic and therapeutic processes, where the selection of semantic aspects/features from the analyzed data sets allows a new way of analysis to be created.

  5. Tissue microarrays and quantitative tissue-based image analysis as a tool for oncology biomarker and diagnostic development.

    PubMed

    Dolled-Filhart, Marisa P; Gustavson, Mark D

    2012-11-01

    Translational oncology has been improved by using tissue microarrays (TMAs), which facilitate biomarker analysis of large cohorts on a single slide. This has allowed for rapid analysis and validation of potential biomarkers for prognostic and predictive value, as well as for evaluation of biomarker prevalence. Coupled with quantitative analysis of immunohistochemical (IHC) staining, objective and standardized biomarker data from tumor samples can further advance companion diagnostic approaches for the identification of drug-responsive or resistant patient subpopulations. This review covers the advantages, disadvantages and applications of TMAs for biomarker research. Research literature and reviews of TMAs and quantitative image analysis methodology have been surveyed for this review (with an AQUA® analysis focus). Applications such as multi-marker diagnostic development and pathway-based biomarker subpopulation analyses are described. Tissue microarrays are a useful tool for biomarker analyses including prevalence surveys, disease progression assessment and addressing potential prognostic or predictive value. By combining quantitative image analysis with TMAs, analyses will be more objective and reproducible, allowing for more robust IHC-based diagnostic test development. Quantitative multi-biomarker IHC diagnostic tests that can predict drug response will allow for greater success of clinical trials for targeted therapies and provide more personalized clinical decision making.

  6. Numerical image manipulation and display in solar astronomy

    NASA Technical Reports Server (NTRS)

    Levine, R. H.; Flagg, J. C.

    1977-01-01

    The paper describes the system configuration and data manipulation capabilities of a solar image display system which allows interactive analysis of visual images and on-line manipulation of digital data. Image processing features include smoothing or filtering of images stored in the display, contrast enhancement, and blinking or flickering between images. A computer with a core memory of 28,672 words provides the capacity to perform complex calculations based on stored images, including computing histograms, selecting subsets of images for further analysis, combining portions of images to produce images with physical meaning, and constructing mathematical models of features in an image. Several of the processing modes are illustrated with image sequences from solar observations.
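
    Two of the listed processing modes, contrast enhancement and histogram computation, can be sketched in a few lines. The percentile stretch below is an assumed, generic implementation on synthetic data, not necessarily what the 1977 system used:

```python
import numpy as np

def contrast_stretch(img, low_pct=2, high_pct=98):
    """Percentile-based contrast enhancement of the kind offered by
    interactive display systems: clip to the [low, high] percentiles
    and rescale to [0, 1]."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

rng = np.random.default_rng(3)
img = rng.normal(0.5, 0.1, (128, 128))           # low-contrast synthetic frame
enhanced = contrast_stretch(img)

# Pixel-value histogram of the kind used for quantitative analysis:
counts, edges = np.histogram(enhanced, bins=16, range=(0.0, 1.0))
print(enhanced.min() == 0.0 and enhanced.max() == 1.0)  # full range used -> True
print(int(counts.sum()))                                # every pixel counted -> 16384
```

Histograms like `counts` also support the subset-selection mode: pixels can be selected by thresholding on a chosen histogram range.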

  7. iSBatch: a batch-processing platform for data analysis and exploration of live-cell single-molecule microscopy images and other hierarchical datasets.

    PubMed

    Caldas, Victor E A; Punter, Christiaan M; Ghodke, Harshad; Robinson, Andrew; van Oijen, Antoine M

    2015-10-01

    Recent technical advances have made it possible to visualize single molecules inside live cells. Microscopes with single-molecule sensitivity enable the imaging of low-abundance proteins, allowing for a quantitative characterization of molecular properties. Such data sets contain information on a wide spectrum of important molecular properties, with different aspects highlighted in different imaging strategies. The time-lapsed acquisition of images provides information on protein dynamics over long time scales, giving insight into expression dynamics and localization properties. Rapid burst imaging reveals properties of individual molecules in real-time, informing on their diffusion characteristics, binding dynamics and stoichiometries within complexes. This richness of information, however, adds significant complexity to analysis protocols. In general, large datasets of images must be collected and processed in order to produce statistically robust results and identify rare events. More importantly, as live-cell single-molecule measurements remain on the cutting edge of imaging, few protocols for analysis have been established and thus analysis strategies often need to be explored for each individual scenario. Existing analysis packages are geared towards either single-cell imaging data or in vitro single-molecule data and typically operate with highly specific algorithms developed for particular situations. Our tool, iSBatch, instead allows users to exploit the inherent flexibility of the popular open-source package ImageJ, providing a hierarchical framework in which existing plugins or custom macros may be executed over entire datasets or portions thereof. This strategy affords users freedom to explore new analysis protocols within large imaging datasets, while maintaining hierarchical relationships between experiments, samples, fields of view, cells, and individual molecules.

  8. Big data in multiple sclerosis: development of a web-based longitudinal study viewer in an imaging informatics-based eFolder system for complex data analysis and management

    NASA Astrophysics Data System (ADS)

    Ma, Kevin; Wang, Ximing; Lerner, Alex; Shiroishi, Mark; Amezcua, Lilyana; Liu, Brent

    2015-03-01

    In the past, we have developed and presented a multiple sclerosis eFolder system for patient data storage, image viewing, and automatic lesion quantification with results stored in DICOM-SR format. The web-based system aims to be integrated in DICOM-compliant clinical and research environments to aid clinicians in patient treatment and disease tracking. This year, we have further developed the eFolder system to handle big data analysis and data mining in today's medical imaging field. The database has been updated to allow data mining and data look-up from DICOM-SR lesion analysis contents. Longitudinal studies are tracked, and any changes in lesion volumes and brain parenchyma volumes are calculated and shown on the web-based user interface as graphical representations. Longitudinal lesion characteristic changes are compared with patients' disease history, including treatments, symptom progression, and any other changes in the disease profile. The image viewer has been updated so that imaging studies can be viewed side-by-side for visual comparison. We aim to use the web-based medical imaging informatics eFolder system to demonstrate big data analysis in medical imaging, and use the analysis results to predict MS disease trends and patterns in Hispanic and Caucasian populations in our pilot study. The discovery of disease patterns among the two ethnicities is a big data analysis result that will help lead to personalized patient care and treatment planning.

  9. Integrating advanced visualization technology into the planetary Geoscience workflow

    NASA Astrophysics Data System (ADS)

    Huffman, John; Forsberg, Andrew; Loomis, Andrew; Head, James; Dickson, James; Fassett, Caleb

    2011-09-01

    Recent advances in computer visualization have allowed us to develop new tools for analyzing the data gathered during planetary missions, which is important, since these data sets have grown exponentially in recent years to tens of terabytes in size. As part of the Advanced Visualization in Solar System Exploration and Research (ADVISER) project, we utilize several advanced visualization techniques created specifically with planetary image data in mind. The Geoviewer application allows real-time active stereo display of images, which in aggregate have billions of pixels. The ADVISER desktop application platform allows fast three-dimensional visualization of planetary images overlain on digital terrain models. Both applications include tools for easy data ingest and real-time analysis in a programmatic manner. Incorporation of these tools into our everyday scientific workflow has proved important for scientific analysis, discussion, and publication, and enabled effective and exciting educational activities for students from high school through graduate school.

  10. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.
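
    SIANN's four-step process can be illustrated end to end. The sketch below substitutes trivial stand-ins for each stage (fixed 8-bit normalization for enhancement, mean/variance features, and a single-neuron perceptron in place of a full neural network); none of this is Ford Aerospace's actual implementation:

    ```python
    def enhance(image):
        # step 1: filtering/enhancement -- here just normalization to [0, 1],
        # assuming 8-bit pixel data
        return [[p / 255.0 for p in row] for row in image]

    def extract_features(image):
        # step 2: feature extraction -- mean intensity and variance
        flat = [p for row in image for p in row]
        mean = sum(flat) / len(flat)
        var = sum((p - mean) ** 2 for p in flat) / len(flat)
        return [mean, var]

    def train_perceptron(samples, labels, epochs=200, lr=0.1):
        # step 3: configure and train a (single-neuron) "network"
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
                err = y - pred
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    def classify(image, model):
        # step 4: apply the trained network to an image
        w, b = model
        x = extract_features(enhance(image))
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    bright = [[200, 210], [220, 205]]   # "scene of interest": high intensity
    dark = [[10, 20], [15, 5]]          # background scene
    features = [extract_features(enhance(img)) for img in (bright, dark)]
    model = train_perceptron(features, [1, 0])
    ```

    A real deployment would use image-scale feature extraction and a multi-layer network, but the four-stage flow is the same.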

  11. Morphological analysis of oligomeric vs. fibrillar forms of α-synuclein aggregates with super-resolution BALM imaging

    NASA Astrophysics Data System (ADS)

    Huh, Hyun; Lee, Jinwoo; Kim, Hyung Jun; Hohng, Sungchul; Kim, Seong Keun

    2017-12-01

    Application of BALM (binding-activated localization microscopy) was shown to allow facile imaging of amyloid fibrils with a typical diameter of ∼14 nm FWHM. We also observed a twisted ribbon-like substructure of mutant amyloid fibrils and even what appear to be toxic amyloid oligomers, with their characteristic morphological features consistent with TEM images. Use of an easily available staining dye in this method greatly enhances the prospect of addressing amyloid-related diseases in their diagnosis and drug tests by allowing facile in situ and in vivo detection by optical imaging.

  12. HTML5 PivotViewer: high-throughput visualization and querying of image data on the web.

    PubMed

    Taylor, Stephen; Noble, Roger

    2014-09-15

    Visualization and analysis of large numbers of biological images has become a bottleneck in research. We present HTML5 PivotViewer, a novel, open-source, platform-independent viewer that uses the latest web technologies to provide seamless access to images and the associated metadata for each image. This gives end users a powerful method for mining their data. Documentation, examples and links to the software are available from http://www.cbrg.ox.ac.uk/data/pivotviewer/. The software is licensed under GPLv2. © The Author 2014. Published by Oxford University Press.

  13. Evaluation of a web based informatics system with data mining tools for predicting outcomes with quantitative imaging features in stroke rehabilitation clinical trials

    NASA Astrophysics Data System (ADS)

    Wang, Ximing; Kim, Bokkyu; Park, Ji Hoon; Wang, Erik; Forsyth, Sydney; Lim, Cody; Ravi, Ragini; Karibyan, Sarkis; Sanchez, Alexander; Liu, Brent

    2017-03-01

    Quantitative imaging biomarkers are widely used in clinical trials for tracking and evaluation of medical interventions. Previously, we have presented a web-based informatics system utilizing quantitative imaging features for predicting outcomes in stroke rehabilitation clinical trials. The system integrates imaging feature extraction tools and a web-based statistical analysis tool. The tools include a generalized linear mixed model (GLMM) that can investigate potential significance and correlation based on features extracted from clinical data and quantitative biomarkers. The imaging feature extraction tools allow the user to collect imaging features, and the GLMM module allows the user to select clinical data and imaging features, such as stroke lesion characteristics, from the database as regressors and regressands. This paper discusses the application scenario and evaluation results of the system in a stroke rehabilitation clinical trial. The system was utilized to manage clinical data and extract imaging biomarkers including stroke lesion volume, location, and ventricle/brain ratio. The GLMM module was validated, and the efficiency of data analysis was also evaluated.
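
    The regressor/regressand idea can be illustrated in the simplest possible terms: the sketch below fits an ordinary least-squares line relating a hypothetical imaging biomarker to an outcome score. This deliberately ignores the mixed-effects structure of the paper's GLMM and uses made-up values:

    ```python
    def ols_fit(x, y):
        # least-squares fit of y = intercept + slope * x
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sxx = sum((xi - mx) ** 2 for xi in x)
        slope = sxy / sxx
        return my - slope * mx, slope

    # hypothetical regressor/regressand pairs: lesion volume (ml) vs. motor score
    volumes = [5.0, 10.0, 20.0, 40.0]
    scores = [50.0, 45.0, 35.0, 15.0]
    intercept, slope = ols_fit(volumes, scores)
    ```

    A GLMM additionally models per-subject random effects and non-Gaussian outcomes, which is why the paper uses it rather than plain least squares.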

  14. On-line 3-dimensional confocal imaging in vivo.

    PubMed

    Li, J; Jester, J V; Cavanagh, H D; Black, T D; Petroll, W M

    2000-09-01

    In vivo confocal microscopy through focusing (CMTF) can provide a 3-D stack of high-resolution corneal images and allows objective measurements of corneal sublayer thickness and backscattering. However, current systems require time-consuming off-line image processing and analysis on multiple software platforms. Furthermore, there is a trade-off between CMTF speed and measurement precision. The purpose of this study was to develop a novel on-line system for in vivo corneal imaging and analysis that overcomes these limitations. A tandem scanning confocal microscope (TSCM) was used for corneal imaging. The TSCM video camera was interfaced directly to a PC image acquisition board to implement real-time digitization. Software was developed to allow in vivo 2-D imaging, CMTF image acquisition, interactive 3-D reconstruction, and analysis of CMTF data to be performed on-line in a single user-friendly environment. A procedure was also incorporated to separate the odd/even video fields, thereby doubling the CMTF sampling rate and theoretically improving the precision of CMTF thickness measurements by a factor of two. In vivo corneal examinations of a normal human and a photorefractive keratectomy patient are presented to demonstrate the capabilities of the new system. Improvements in the convenience, speed, and functionality of in vivo CMTF image acquisition, display, and analysis are demonstrated. This is the first full-featured software package designed for in vivo TSCM imaging of the cornea that performs both 2-D and 3-D image acquisition, display, and processing as well as CMTF analysis. The use of a PC platform and the incorporation of easy-to-use, on-line, interactive features should help improve the clinical utility of this technology.
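
    The odd/even field separation that doubles the CMTF sampling rate amounts to deinterlacing: the even and odd rows of each interlaced video frame were exposed at slightly different times, so each field can serve as its own sample. A minimal sketch (toy frame data, not the authors' code):

    ```python
    def split_fields(frame):
        # even rows form one field, odd rows the other; because the two
        # fields were exposed at different times, each yields a separate
        # CMTF sample, doubling the effective sampling rate
        return frame[0::2], frame[1::2]

    frame = [[10, 10], [20, 20], [11, 11], [21, 21]]
    even_field, odd_field = split_fields(frame)
    ```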

  15. Overview of machine vision methods in x-ray imaging and microtomography

    NASA Astrophysics Data System (ADS)

    Buzmakov, Alexey; Zolotov, Denis; Chukalina, Marina; Nikolaev, Dmitry; Gladkov, Andrey; Ingacheva, Anastasia; Yakimchuk, Ivan; Asadchikov, Victor

    2018-04-01

    Digital X-ray imaging has become widely used in science, medicine, and non-destructive testing, which allows modern digital image analysis to be applied for automatic information extraction and interpretation. We give a short review of machine vision applications in scientific X-ray imaging and microtomography, including image processing, feature detection and extraction, image compression to increase camera throughput, microtomography reconstruction, visualization, and setup adjustment.

  16. MSiReader v1.0: Evolving Open-Source Mass Spectrometry Imaging Software for Targeted and Untargeted Analyses

    NASA Astrophysics Data System (ADS)

    Bokhart, Mark T.; Nazari, Milad; Garrard, Kenneth P.; Muddiman, David C.

    2018-01-01

    A major update to the mass spectrometry imaging (MSI) software MSiReader is presented, offering a multitude of newly added features critical to MSI analyses. MSiReader is a free, open-source, and vendor-neutral software package written on the MATLAB platform and capable of analyzing most common MSI data formats. A standalone version of the software, which does not require a MATLAB license, is also distributed. The newly incorporated data analysis features expand the utility of MSiReader beyond simple visualization of molecular distributions. The MSiQuantification tool allows researchers to calculate absolute concentrations from quantification MSI experiments entirely within MSiReader, significantly reducing data analysis time. An image overlay feature allows complementary imaging modalities to be displayed with the MSI data. A polarity filter has also been incorporated into the data loading step, allowing facile analysis of polarity-switching experiments without the need for data parsing before loading the data file into MSiReader. A quality assurance feature that generates a mass measurement accuracy (MMA) heatmap for an analyte of interest has also been added, allowing MMA to be investigated across the imaging experiment. Most importantly, as new features have been added, performance has not degraded; in fact, it has improved dramatically. These new tools and performance improvements in MSiReader v1.0 enable the MSI community to evaluate their data in greater depth and in less time.
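
    An MMA heatmap reduces to evaluating the standard parts-per-million error at each pixel. A sketch with hypothetical m/z values (not MSiReader code):

    ```python
    def mma_ppm(measured_mz, theoretical_mz):
        # mass measurement accuracy in parts per million
        return 1e6 * (measured_mz - theoretical_mz) / theoretical_mz

    # hypothetical 2x2 grid of measured m/z values for one analyte across an image
    theoretical = 760.5851
    grid = [[760.5853, 760.5849],
            [760.5860, 760.5851]]
    heatmap = [[mma_ppm(mz, theoretical) for mz in row] for row in grid]
    ```

    Rendering `heatmap` as a color map immediately shows pixels where the calibration drifts.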

  17. Colony image acquisition and genetic segmentation algorithm and colony analyses

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2012-01-01

    Colony analysis is used in a large number of fields such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, and sterility testing. To reduce labor and increase analysis accuracy, many researchers and developers have worked on image analysis systems. The main problems in such systems are image acquisition, image segmentation, and image analysis. In this paper, to acquire colony images of good quality, an illumination box was constructed in which the distances between the lights and the dish, the camera lens and the lights, and the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows one to treat the segmentation problem as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All of the above visual colony parameters can be selected and combined to form new engineering parameters, and the colony analysis can be applied to different applications.
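
    Treating threshold selection as a global optimization, as the paper's genetic approach does, can be sketched with a toy genetic algorithm. The fitness function here is Otsu's between-class variance, an assumption for illustration; the paper's actual operators and fitness are not specified here:

    ```python
    import random

    def between_class_variance(pixels, t):
        fg = [p for p in pixels if p > t]
        bg = [p for p in pixels if p <= t]
        if not fg or not bg:
            return 0.0
        wf, wb = len(fg) / len(pixels), len(bg) / len(pixels)
        mf, mb = sum(fg) / len(fg), sum(bg) / len(bg)
        return wf * wb * (mf - mb) ** 2

    def ga_threshold(pixels, pop_size=20, generations=30):
        # evolve candidate thresholds; fitness = Otsu's between-class variance
        random.seed(1)  # fixed seed for reproducibility
        fitness = lambda t: between_class_variance(pixels, t)
        pop = [random.randint(0, 255) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]          # selection: keep top half
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = (a + b) // 2                # crossover: average parents
                if random.random() < 0.3:           # mutation: small shift
                    child = min(255, max(0, child + random.randint(-10, 10)))
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    # bimodal toy "colony image": dark background (~30) and bright colonies (~200)
    pixels = [30, 32, 28, 31, 29, 200, 198, 202, 201, 199]
    threshold = ga_threshold(pixels)
    ```

    Any threshold between the two intensity clusters separates background from colonies, which is what the evolved solution converges to.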

  18. Simulation of realistic abnormal SPECT brain perfusion images: application in semi-quantitative analysis

    NASA Astrophysics Data System (ADS)

    Ward, T.; Fleming, J. S.; Hoffmann, S. M. A.; Kemp, P. M.

    2005-11-01

    Simulation is useful in the validation of functional image analysis methods, particularly given the number of analysis techniques currently available that lack thorough validation. Problems exist with current simulation methods due to long run times or unrealistic results, making it difficult to generate complete datasets. A method is presented for simulating known abnormalities within normal brain SPECT images using a measured point spread function (PSF) and incorporating a stereotactic atlas of the brain for anatomical positioning. This allows the simulation of realistic images through the use of prior information regarding disease progression. SPECT images of cerebral perfusion have been generated consisting of a control database and a group of simulated abnormal subjects to be used in a UK audit of analysis methods. The abnormality is defined in the stereotactic space, transformed to the individual subject space, convolved with a measured PSF, and removed from the normal subject image. The dataset was analysed using SPM99 (Wellcome Department of Imaging Neuroscience, University College London) and the MarsBaR volume of interest (VOI) analysis toolbox. The results were evaluated by comparison with the known ground truth. The analysis showed improvement when using a smoothing kernel equal to the system resolution rather than the slightly larger kernel used routinely. Significant correlation was found between the effective volume of a simulated abnormality and the size detected using SPM99. Improvements in VOI analysis sensitivity were found when using the region median rather than the region mean. The method and dataset provide an efficient methodology for the comparison and cross-validation of semi-quantitative analysis methods in brain SPECT, and allow the optimization of analysis parameters.
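
    The core of the simulation pipeline, defining a defect, blurring it with the measured PSF, and subtracting it from a normal image, can be sketched in one dimension (illustrative values and a Gaussian PSF stand-in, not the authors' code):

    ```python
    import math

    def gaussian_psf(fwhm, width=7):
        # normalized 1-D Gaussian kernel standing in for the measured PSF
        sigma = fwhm / 2.3548
        k = [math.exp(-0.5 * ((i - width // 2) / sigma) ** 2) for i in range(width)]
        s = sum(k)
        return [v / s for v in k]

    def convolve(signal, kernel):
        half = len(kernel) // 2
        out = []
        for i in range(len(signal)):
            acc = 0.0
            for j, kv in enumerate(kernel):
                idx = i + j - half
                if 0 <= idx < len(signal):
                    acc += signal[idx] * kv
            out.append(acc)
        return out

    # 1-D profile through a "normal" perfusion image; the abnormality removes
    # 50 units of uptake at voxel 10 before PSF blurring
    normal = [100.0] * 21
    defect = [0.0] * 21
    defect[10] = 50.0
    abnormal = [n - d for n, d in zip(normal, convolve(defect, gaussian_psf(3.0)))]
    ```

    Because the kernel is normalized, the total removed counts equal the defined defect magnitude, spread realistically by the PSF.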

  19. Three-dimensional intraoperative ultrasound of vascular malformations and supratentorial tumors.

    PubMed

    Woydt, Michael; Horowski, Anja; Krauss, Juergen; Krone, Andreas; Soerensen, Niels; Roosen, Klaus

    2002-01-01

    The benefits and limits of a magnetic sensor-based 3-dimensional (3D) intraoperative ultrasound technique during surgery of vascular malformations and supratentorial tumors were evaluated. Twenty patients with 11 vascular malformations and 9 supratentorial tumors undergoing microsurgical resection or clipping were investigated with an interactive magnetic sensor data acquisition system allowing freehand scanning. An ultrasound probe with a mounted sensor was used after craniotomies to localize lesions, outline tumors or malformation margins, and identify supplying vessels. A 3D data set was obtained allowing reformation of multiple slices in all 3 planes and comparison to 2-dimensional (2D) intraoperative ultrasound images. Off-line gray-scale segmentation analysis allowed differentiation between tissue with different echogenicities. Color-coded information about blood flow was extracted from the images with a reconstruction algorithm. This allowed photorealistic surface displays of perfused tissue, tumor, and surrounding vessels. Three-dimensional intraoperative ultrasound data acquisition was obtained within 5 minutes. Off-line analysis and reconstruction time depends on the type of imaging display and can take up to 30 minutes. The spatial relation between aneurysm sac and surrounding vessels or the skull base could be enhanced in 3 out of 6 aneurysms with 3D intraoperative ultrasound. Perforating arteries were visible in 3 cases only by using 3D imaging. 3D ultrasound provides a promising imaging technique, offering the neurosurgeon an intraoperative spatial orientation of the lesion and its vascular relationships. Thereby, it may improve safety of surgery and understanding of 2D ultrasound images.

  20. Development of an Automated Imaging Pipeline for the Analysis of the Zebrafish Larval Kidney

    PubMed Central

    Westhoff, Jens H.; Giselbrecht, Stefan; Schmidts, Miriam; Schindler, Sebastian; Beales, Philip L.; Tönshoff, Burkhard; Liebel, Urban; Gehrig, Jochen

    2013-01-01

    The analysis of kidney malformation caused by environmental influences during nephrogenesis or by hereditary nephropathies requires animal models allowing the in vivo observation of developmental processes. The zebrafish has emerged as a useful model system for the analysis of vertebrate organ development and function, and it is suitable for the identification of organotoxic or disease-modulating compounds on a larger scale. However, to fully exploit its potential in high content screening applications, dedicated protocols are required allowing the consistent visualization of inner organs such as the embryonic kidney. To this end, we developed a high content screening compatible pipeline for the automated imaging of standardized views of the developing pronephros in zebrafish larvae. Using a custom designed tool, cavities were generated in agarose coated microtiter plates allowing for accurate positioning and orientation of zebrafish larvae. This enabled the subsequent automated acquisition of stable and consistent dorsal views of pronephric kidneys. The established pipeline was applied in a pilot screen for the analysis of the impact of potentially nephrotoxic drugs on zebrafish pronephros development in the Tg(wt1b:EGFP) transgenic line in which the developing pronephros is highlighted by GFP expression. The consistent image data that was acquired allowed for quantification of gross morphological pronephric phenotypes, revealing concentration dependent effects of several compounds on nephrogenesis. In addition, applicability of the imaging pipeline was further confirmed in a morpholino based model for cilia-associated human genetic disorders associated with different intraflagellar transport genes. The developed tools and pipeline can be used to study various aspects in zebrafish kidney research, and can be readily adapted for the analysis of other organ systems. PMID:24324758

  2. Quantitative analysis of single-molecule superresolution images

    PubMed Central

    Coltharp, Carla; Yang, Xinxing; Xiao, Jie

    2014-01-01

    This review highlights the quantitative capabilities of single-molecule localization-based superresolution imaging methods. In addition to revealing fine structural details, the molecule coordinate lists generated by these methods provide the critical ability to quantify the number, clustering, and colocalization of molecules with 10 – 50 nm resolution. Here we describe typical workflows and precautions for quantitative analysis of single-molecule superresolution images. These guidelines include potential pitfalls and essential control experiments, allowing critical assessment and interpretation of superresolution images. PMID:25179006
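
    Counting and clustering from localization coordinate lists can be illustrated with a simple single-linkage grouping (a generic sketch with made-up coordinates, not any specific published workflow):

    ```python
    from math import hypot

    def find_clusters(points, radius):
        # single-linkage grouping: points belong to the same cluster if a
        # chain of hops, each no longer than `radius`, connects them
        unvisited = set(range(len(points)))
        clusters = []
        while unvisited:
            stack = [unvisited.pop()]
            members = []
            while stack:
                i = stack.pop()
                members.append(i)
                near = {j for j in unvisited
                        if hypot(points[i][0] - points[j][0],
                                 points[i][1] - points[j][1]) <= radius}
                unvisited -= near
                stack.extend(near)
            clusters.append(members)
        return clusters

    # two well-separated groups of localizations (coordinates in nm)
    points = [(0, 0), (20, 10), (10, 25), (500, 500), (510, 520)]
    clusters = find_clusters(points, radius=50)
    ```

    Cluster counts and sizes computed this way are exactly the kind of quantity the review's precautions (localization precision, multiple blinking of one fluorophore) apply to.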

  3. Targeted nano analysis of water and ions using cryocorrelative light and scanning transmission electron microscopy.

    PubMed

    Nolin, Frédérique; Ploton, Dominique; Wortham, Laurence; Tchelidze, Pavel; Balossier, Gérard; Banchet, Vincent; Bobichon, Hélène; Lalun, Nathalie; Terryn, Christine; Michel, Jean

    2012-11-01

    Cryo-fluorescence imaging coupled with the cryo-EM technique (cryo-CLEM) avoids chemical fixation and embedding in plastic, and is the gold standard for correlated imaging in a close-to-native state. This multi-modal approach has not previously included elementary nano analysis or evaluation of water content. We developed a new approach allowing in situ analysis of targeted intracellular ions and water measurements at the nanoscale (EDXS and STEM dark-field imaging) within domains identified by examination of specific GFP-tagged proteins. This method allows both water and ions, fundamental to cell biology, to be located and quantified at the subcellular level. We illustrate the potential of this approach by investigating changes in water and ion content in nuclear domains identified by GFP-tagged proteins in cells stressed by Actinomycin D treatment and in controls. The resolution of our approach was sufficient to distinguish clumps of condensed chromatin from the surrounding nucleoplasm by fluorescence imaging and to perform nano analysis in this targeted compartment. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. MilxXplore: a web-based system to explore large imaging datasets.

    PubMed

    Bourgeat, P; Dore, V; Villemagne, V L; Rowe, C C; Salvado, O; Fripp, J

    2013-01-01

    As large-scale medical imaging studies become more common, there is an increasing reliance on automated software to extract quantitative information from these images. As cohort sizes keep increasing with large studies, there is also a need for tools that allow results from automated image processing and analysis to be presented in a way that enables fast and efficient quality checking, tagging and reporting on cases in which automatic processing failed or was problematic. MilxXplore is an open-source visualization platform which provides an interface to navigate and explore imaging data in a web browser, giving the end user the opportunity to perform quality control and reporting in a user-friendly, collaborative and efficient way. Compared to existing software solutions that often provide an overview of the results at the subject level, MilxXplore pools the results of individual subjects and time points together, allowing easy and efficient navigation and browsing through the different acquisitions of a subject over time, and comparison of the results against the rest of the population. MilxXplore is fast and flexible, allows remote quality checks of processed imaging data, facilitates data sharing and collaboration across multiple locations, and can be easily integrated into a cloud computing pipeline. With the growing trend of open data and open science, such a tool will become increasingly important for sharing and publishing results of imaging analysis.

  5. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos run during robberies.
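
    Digital image velocimetry estimates displacement by locating the cross-correlation peak between successive frames. IMAGELAB FFT computes such correlations via FFTs; the dependency-free sketch below uses a direct sum on hypothetical 1-D intensity profiles to show the principle:

    ```python
    def best_shift(profile_a, profile_b, max_shift):
        # displacement = argmax of the cross-correlation of the two profiles
        def corr(shift):
            total = 0.0
            for i, a in enumerate(profile_a):
                j = i + shift
                if 0 <= j < len(profile_b):
                    total += a * profile_b[j]
            return total
        return max(range(-max_shift, max_shift + 1), key=corr)

    # a bright particle at index 4 moves to index 7 between frames
    profile_a = [0, 0, 0, 0, 9, 0, 0, 0, 0, 0]
    profile_b = [0, 0, 0, 0, 0, 0, 0, 9, 0, 0]
    shift = best_shift(profile_a, profile_b, max_shift=5)
    ```

    With the frame interval known, the recovered shift converts directly to a velocity; FFT-based correlation gives the same answer in O(n log n) per window.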

  6. Direct quantitative evaluation of disease symptoms on living plant leaves growing under natural light.

    PubMed

    Matsunaga, Tomoko M; Ogawa, Daisuke; Taguchi-Shiobara, Fumio; Ishimoto, Masao; Matsunaga, Sachihiro; Habu, Yoshiki

    2017-06-01

    Leaf color is an important indicator when evaluating plant growth and responses to biotic/abiotic stress. Acquisition of images by digital cameras allows analysis and long-term storage of the acquired images. However, under field conditions, where light intensity can fluctuate and other factors (shade, reflection, background, etc.) vary, stable and reproducible measurement and quantification of leaf color are hard to achieve. Digital scanners provide fixed conditions for obtaining image data, allowing stable and reliable comparison among samples, but require detached plant materials to capture images, and the destructive processes involved often induce deformation of plant materials (curled leaves, faded colors, etc.). In this study, by using a lightweight digital scanner connected to a mobile computer, we obtained digital image data from intact plant leaves grown in natural-light greenhouses without detaching the targets. We took images of soybean leaves infected by Xanthomonas campestris pv. glycines, and quantified two distinct disease symptoms (brown lesions and yellow halos) using freely available image processing software. The image data were amenable to quantitative and statistical analyses, allowing precise and objective evaluation of disease resistance.
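
    Quantifying the two symptoms comes down to per-pixel color classification and counting. The RGB thresholds below are illustrative guesses, not the study's calibrated values:

    ```python
    def classify_pixel(r, g, b):
        # crude RGB rules separating the two symptoms from healthy leaf;
        # real thresholds would be calibrated on scanner output
        if r > 120 and g < 90 and b < 70:
            return "lesion"      # brown
        if r > 180 and g > 160 and b < 100:
            return "halo"        # yellow
        return "leaf"

    def symptom_fractions(pixels):
        counts = {"lesion": 0, "halo": 0, "leaf": 0}
        for p in pixels:
            counts[classify_pixel(*p)] += 1
        n = len(pixels)
        return {k: v / n for k, v in counts.items()}

    # hypothetical pixel sample: green leaf, brown lesion, yellow halo, green leaf
    pixels = [(40, 140, 50), (139, 69, 19), (220, 200, 60), (35, 120, 45)]
    fractions = symptom_fractions(pixels)
    ```

    The per-class fractions are the quantitative, statistically analyzable outputs the study describes.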

  7. exVis: a visual analysis tool for wind tunnel data

    NASA Astrophysics Data System (ADS)

    Deardorff, D. G.; Keeley, Leslie E.; Uselton, Samuel P.

    1998-05-01

    exVis is a software tool created to support interactive display and analysis of data collected during wind tunnel experiments. It is a result of a continuing project to explore the uses of information technology in improving the effectiveness of aeronautical design professionals. The data analysis goals are accomplished by allowing aerodynamicists to display and query data collected by new data acquisition systems and to create traditional wind tunnel plots from this data by interactively interrogating these images. exVis was built as a collection of distinct modules to allow for rapid prototyping, to foster evolution of capabilities, and to facilitate object reuse within other applications being developed. It was implemented using C++ and Open Inventor, commercially available object-oriented tools. The initial version was composed of three main classes. Two of these modules are autonomous viewer objects intended to display the test images (ImageViewer) and the plots (GraphViewer). The third main class is the Application User Interface (AUI) which manages the passing of data and events between the viewers, as well as providing a user interface to certain features. User feedback was obtained on a regular basis, which allowed for quick revision cycles and appropriately enhanced feature sets. During the development process additional classes were added, including a color map editor and a data set manager. The ImageViewer module was substantially rewritten to add features and to use the data set manager. The use of an object-oriented design was successful in allowing rapid prototyping and easy feature addition.

  8. Evaluation of aortic contractility based on analysis of CT images of the heart

    NASA Astrophysics Data System (ADS)

    Dzierżak, Róża; Maciejewski, Ryszard; Uhlig, Sebastian

    2017-08-01

    The paper presents a method to assess aortic contractility based on the analysis of CT images of the heart. This is an alternative method that can be used for patients who cannot be examined by echocardiography. Use of a medical imaging application for DICOM file processing allows evaluation of the aortic cross-section during systole and diastole, making it possible to assess the level of aortic contractility.
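
    One way to turn systolic and diastolic cross-sections into a contractility index is a fractional area change; the formula choice and the diameters below are hypothetical, not taken from the paper:

    ```python
    import math

    def cross_section_area(diameter_mm):
        # area of a circular cross-section measured on a CT slice
        return math.pi * (diameter_mm / 2.0) ** 2

    def fractional_area_change(systolic_mm, diastolic_mm):
        # relative change in aortic area between diastole and systole
        a_sys = cross_section_area(systolic_mm)
        a_dia = cross_section_area(diastolic_mm)
        return (a_sys - a_dia) / a_dia

    fac = fractional_area_change(systolic_mm=32.0, diastolic_mm=30.0)
    ```

    For these illustrative diameters the index is 31/225 ≈ 0.138, i.e. about a 14% area increase at systole.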

  9. Engineering the Ideal Gigapixel Image Viewer

    NASA Astrophysics Data System (ADS)

    Perpeet, D.; Wassenberg, J.

    2011-09-01

    Despite improvements in automatic processing, analysts are still faced with the task of evaluating gigapixel-scale mosaics or images acquired by telescopes such as Pan-STARRS. Displaying such images in ‘ideal’ form is a major challenge even today, and the amount of data will only increase as sensor resolutions improve. In our opinion, the ideal viewer has several key characteristics. Lossless display, down to individual pixels, ensures all information can be extracted from the image. Support for all relevant pixel formats (integer or floating point) allows displaying data from different sensors. Smooth zooming and panning in the high-resolution data enable rapid screening and navigation in the image. High responsiveness to input commands avoids frustrating delays. Instantaneous image enhancement, e.g. contrast adjustment and image channel selection, helps with analysis tasks. Modest system requirements allow viewing on regular workstation computers or even laptops. To the best of our knowledge, no such software product is currently available. Meeting these goals requires addressing certain realities of current computer architectures. GPU hardware accelerates rendering and allows smooth zooming without high CPU load. Programmable GPU shaders enable instant channel selection and contrast adjustment without any perceptible slowdown or changes to the input data. Relatively low disk transfer speeds suggest the use of compression to decrease the amount of data to transfer. Asynchronous I/O allows decompressing while waiting for previous I/O operations to complete. The slow seek times of magnetic disks motivate optimizing the order of the data on disk. Vectorization and parallelization allow significant increases in computational capacity. Limited memory requires streaming and caching of image regions. We develop a viewer that takes the above issues into account. Its awareness of the computer architecture enables previously unattainable features such as smooth zooming and image enhancement within high-resolution data. We describe our implementation, disclosing its novel file format and lossless image codec whose decompression is faster than copying the raw data in memory. Both provide crucial performance boosts compared to conventional approaches. Usability tests demonstrate the suitability of our viewer for rapid analysis of large SAR datasets, multispectral satellite imagery and mosaics.
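
    The streaming and caching of image regions mentioned above is commonly implemented with a least-recently-used tile cache; the sketch below is a generic version of that idea, not the authors' actual design:

    ```python
    from collections import OrderedDict

    class TileCache:
        # LRU cache bounding the number of decoded tiles held in memory
        def __init__(self, capacity):
            self.capacity = capacity
            self._tiles = OrderedDict()

        def get(self, key, load):
            if key in self._tiles:
                self._tiles.move_to_end(key)       # mark as recently used
                return self._tiles[key]
            tile = load(key)                       # decode/fetch on a miss
            self._tiles[key] = tile
            if len(self._tiles) > self.capacity:
                self._tiles.popitem(last=False)    # evict the oldest tile
            return tile

    cache = TileCache(capacity=2)
    loads = []
    fetch = lambda key: loads.append(key) or f"tile{key}"
    cache.get((0, 0), fetch)
    cache.get((0, 1), fetch)
    cache.get((0, 0), fetch)   # hit: no new load
    cache.get((0, 2), fetch)   # exceeds capacity: evicts (0, 1)
    ```

    As the user pans, only tiles not already cached trigger disk reads and decompression, keeping memory bounded regardless of image size.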

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoaf, S.; APS Engineering Support Division

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  11. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    PubMed Central

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-01-01

    Background Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods Using the combination of microlens-enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large-scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum-resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes. PMID:18627634

  12. Extended Field Laser Confocal Microscopy (EFLCM): combining automated Gigapixel image capture with in silico virtual microscopy.

    PubMed

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-07-16

    Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens-enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large-scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum-resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.
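
    The mosaic assembly described above can be sketched as follows. This is an illustrative Python sketch, not the EFLCM implementation: tiles are placed on a regular stage grid with known overlap, and overlapping pixels are simply overwritten (a real system would register and blend). All names and grid parameters are invented.

```python
# Sketch: assemble a panorama mosaic from tiles captured on a regular grid.
# Tile positions come from the motorized stage, so placement is deterministic;
# overlapping pixels are resolved by keeping the latest tile written.

def build_mosaic(tiles, tile_h, tile_w, step_y, step_x, rows, cols):
    """tiles: dict {(row, col): 2D list of intensities}."""
    H = step_y * (rows - 1) + tile_h
    W = step_x * (cols - 1) + tile_w
    mosaic = [[0] * W for _ in range(H)]
    for (r, c), tile in tiles.items():
        y0, x0 = r * step_y, c * step_x
        for dy in range(tile_h):
            for dx in range(tile_w):
                mosaic[y0 + dy][x0 + dx] = tile[dy][dx]
    return mosaic

# Four 4x4 tiles on a 2x2 grid with a 1-pixel overlap on each seam.
tiles = {(r, c): [[r * 10 + c] * 4 for _ in range(4)]
         for r in range(2) for c in range(2)}
m = build_mosaic(tiles, 4, 4, 3, 3, 2, 2)
```

Scaling the same bookkeeping to thousands of tiles is what turns individually captured fields into a single Gigapixel panorama.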

  13. Automated fine structure image analysis method for discrimination of diabetic retinopathy stage using conjunctival microvasculature images

    PubMed Central

    Khansari, Maziyar M; O’Neill, William; Penn, Richard; Chau, Felix; Blair, Norman P; Shahidi, Mahnaz

    2016-01-01

    The conjunctiva is a densely vascularized mucous membrane covering the sclera of the eye with a unique advantage of accessibility for direct visualization and non-invasive imaging. The purpose of this study is to apply an automated quantitative method for discrimination of different stages of diabetic retinopathy (DR) using conjunctival microvasculature images. Fine structural analysis of conjunctival microvasculature images was performed by ordinary least square regression and Fisher linear discriminant analysis. Conjunctival images between groups of non-diabetic and diabetic subjects at different stages of DR were discriminated. The automated method’s discrimination rates were higher than those determined by human observers. The method allowed sensitive and rapid discrimination by assessment of conjunctival microvasculature images and is potentially useful for DR screening and monitoring. PMID:27446692
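
    The core of Fisher linear discriminant analysis can be sketched for the two-class case. This is a generic, pure-Python illustration with two invented features (the study's actual microvasculature measures are not specified here): the projection vector is the within-class scatter inverse applied to the difference of class means.

```python
# Two-class Fisher LDA sketch: w = Sw^{-1} (m1 - m0) for 2D features.
# Feature values are toy numbers, not real conjunctival measurements.

def fisher_direction(class0, class1):
    """Each class: list of (f1, f2) samples. Returns projection vector w."""
    def mean(xs):
        n = len(xs)
        return [sum(p[i] for p in xs) / n for i in (0, 1)]
    def scatter(xs, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for p in xs:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in (0, 1):
                for j in (0, 1):
                    s[i][j] += d[i] * d[j]
        return s
    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    sw = [[s0[i][j] + s1[i][j] for j in (0, 1)] for i in (0, 1)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

healthy = [(1.0, 0.1), (1.2, 0.2), (0.9, 0.15)]
diabetic = [(2.0, 0.6), (2.2, 0.5), (1.9, 0.7)]
w = fisher_direction(healthy, diabetic)
# Projecting a sample onto w reduces discrimination to a scalar threshold.
```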

  14. Quantitative analysis of cardiovascular MR images.

    PubMed

    van der Geest, R J; de Roos, A; van der Wall, E E; Reiber, J H

    1997-06-01

    The diagnosis of cardiovascular disease requires the precise assessment of both morphology and function. Nearly all aspects of cardiovascular function and flow can be quantified nowadays with fast magnetic resonance (MR) imaging techniques. Conventional and breath-hold cine MR imaging allow the precise and highly reproducible assessment of global and regional left ventricular function. During the same examination, velocity encoded cine (VEC) MR imaging provides measurements of blood flow in the heart and great vessels. Quantitative image analysis often still relies on manual tracing of contours in the images. Reliable automated or semi-automated image analysis software would be very helpful to overcome the limitations associated with the manual and tedious processing of the images. Recent progress in MR imaging of the coronary arteries and myocardial perfusion imaging with contrast media, along with the further development of faster imaging sequences, suggest that MR imaging could evolve into a single technique ('one stop shop') for the evaluation of many aspects of heart disease. As a result, it is very likely that the need for automated image segmentation and analysis software algorithms will further increase. In this paper the developments directed towards the automated image analysis and semi-automated contour detection for cardiovascular MR imaging are presented.
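
    The global-function quantities mentioned above reduce to simple arithmetic once contours are traced. A hedged sketch, with invented slice areas: left-ventricular volume by Simpson's summation over segmented short-axis slices, then ejection fraction.

```python
# Illustrative LV function calculation: volume = sum of (contour area x
# slice thickness) over short-axis slices; EF from end-diastole/end-systole.

def lv_volume(areas_mm2, slice_thickness_mm):
    return sum(areas_mm2) * slice_thickness_mm  # mm^3

ed_areas = [900, 1200, 1300, 1100, 700]  # end-diastolic contour areas (mm^2)
es_areas = [500, 700, 800, 650, 400]     # end-systolic contour areas (mm^2)
edv = lv_volume(ed_areas, 8.0)
esv = lv_volume(es_areas, 8.0)
ef = 100.0 * (edv - esv) / edv           # ejection fraction in percent
```

Automating the contour tracing, rather than this arithmetic, is the hard part the paper addresses.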

  15. Soft-landing ion mobility of silver clusters for small-molecule matrix-assisted laser desorption ionization mass spectrometry and imaging of latent fingerprints.

    PubMed

    Walton, Barbara L; Verbeck, Guido F

    2014-08-19

    Matrix-assisted laser desorption ionization (MALDI) imaging is gaining popularity, but matrix effects such as mass spectral interference and damage to the sample limit its applications. Replacing traditional matrices with silver particles capable of equivalent or increased photon energy absorption from the incoming laser has proven to be beneficial for low mass analysis. Not only can silver clusters be advantageous for low mass compound detection, but they can be used for imaging as well. Conventional matrix application methods can obstruct samples, such as fingerprints, rendering them useless after mass analysis. The ability to image latent fingerprints without causing damage to the ridge pattern is important as it allows for further characterization of the print. The application of silver clusters by soft-landing ion mobility allows for enhanced MALDI and preservation of fingerprint integrity.

  16. Model analysis for the MAGIC telescope

    NASA Astrophysics Data System (ADS)

    Mazin, D.; Bigongiari, C.; Goebel, F.; Moralejo, A.; Wittek, W.

    The MAGIC Collaboration operates the 17m imaging Cherenkov telescope on the Canary Island of La Palma. The main goal of the experiment is an energy threshold below 100 GeV for primary gamma rays. The new analysis technique (model analysis) takes advantage of the high resolution (both in space and time) camera by fitting the averaged expected templates of the shower development to the measured shower images in the camera. This approach allows images just above the level of the night sky background light fluctuations to be recognized and reconstructed. Progress and preliminary results of the model analysis technique will be presented.
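
    The template-fitting idea can be illustrated in miniature. In this sketch (toy 1D profiles, invented template names; the real analysis fits 2D shower images), each candidate template is scaled to the measurement by least squares and the template with the smallest residual is selected.

```python
# Minimal template fit: best amplitude a minimizing sum (image - a*template)^2,
# then pick the template with the lowest residual.

def fit_template(image, template):
    num = sum(i * t for i, t in zip(image, template))
    den = sum(t * t for t in template)
    a = num / den
    resid = sum((i - a * t) ** 2 for i, t in zip(image, template))
    return a, resid

templates = {"50GeV": [1, 2, 3, 2, 1], "100GeV": [1, 3, 5, 3, 1]}
image = [2.1, 5.9, 10.2, 6.1, 1.8]  # noisy toy measurement
best = min(templates, key=lambda k: fit_template(image, templates[k])[1])
```

Because the fit uses the whole expected profile rather than isolated bright pixels, it remains usable near the noise floor, which is the point made in the abstract.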

  17. Big Data and Deep data in scanning and electron microscopies: functionality from multidimensional data sets

    DOE PAGES

    Belianinov, Alex; Vasudevan, Rama K; Strelcov, Evgheni; ...

    2015-05-13

    The development of electron and scanning probe microscopies in the second half of the twentieth century has produced spectacular images of the internal structure and composition of matter at nanometer, molecular, and atomic resolution. Largely, this progress was enabled by computer-assisted methods of microscope operation, data acquisition and analysis. The progress in imaging technologies in the beginning of the twenty-first century has opened the proverbial floodgates of high-veracity information on structure and functionality. High-resolution imaging now allows information on atomic positions with picometer precision, allowing for quantitative measurements of individual bond lengths and angles. Functional imaging often leads to multidimensional data sets containing partial or full information on properties of interest, acquired as a function of multiple parameters (time, temperature, or other external stimuli). Here, we review several recent applications of big and deep data analysis methods to visualize, compress, and translate this imaging data into physically and chemically relevant information.

  18. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    PubMed

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-06-01

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.
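
    The assignment step at the heart of atom counting can be sketched very simply. This is a hedged illustration, not the paper's hybrid estimator: each measured column cross-section is assigned the atom count whose simulated value is nearest; the simulated library values below are invented.

```python
# Nearest-library-value atom counting: simulated scattering cross-sections
# (arbitrary units, invented) indexed by number of atoms in the column.

simulated = {1: 0.8, 2: 1.7, 3: 2.7, 4: 3.8}

def count_atoms(measured):
    return [min(simulated, key=lambda n: abs(simulated[n] - m))
            for m in measured]

counts = count_atoms([0.9, 1.6, 2.9, 3.6])
```

The hybrid method refines this by combining such simulation priors with a statistical model of the measured intensity distribution, which is what makes it robust at low dose.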

  19. Big Data and Deep data in scanning and electron microscopies: functionality from multidimensional data sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belianinov, Alex; Vasudevan, Rama K; Strelcov, Evgheni

    The development of electron and scanning probe microscopies in the second half of the twentieth century has produced spectacular images of the internal structure and composition of matter at nanometer, molecular, and atomic resolution. Largely, this progress was enabled by computer-assisted methods of microscope operation, data acquisition and analysis. The progress in imaging technologies in the beginning of the twenty-first century has opened the proverbial floodgates of high-veracity information on structure and functionality. High-resolution imaging now allows information on atomic positions with picometer precision, allowing for quantitative measurements of individual bond lengths and angles. Functional imaging often leads to multidimensional data sets containing partial or full information on properties of interest, acquired as a function of multiple parameters (time, temperature, or other external stimuli). Here, we review several recent applications of big and deep data analysis methods to visualize, compress, and translate this imaging data into physically and chemically relevant information.

  20. Breast MRI radiogenomics: Current status and research implications.

    PubMed

    Grimm, Lars J

    2016-06-01

    Breast magnetic resonance imaging (MRI) radiogenomics is an emerging area of research that has the potential to directly influence clinical practice. Clinical MRI scanners today are capable of providing excellent temporal and spatial resolution, which allows extraction of numerous imaging features via human extraction approaches or complex computer vision algorithms. Meanwhile, advances in breast cancer genetics research have resulted in the identification of promising genes associated with cancer outcomes. In addition, validated genomic signatures have been developed that allow categorization of breast cancers into distinct molecular subtypes as well as predict the risk of cancer recurrence and response to therapy. Current radiogenomics research has been directed towards exploratory analysis of individual genes, understanding tumor biology, and developing imaging surrogates to genetic analysis with the long-term goal of developing a meaningful tool for clinical care. The background of breast MRI radiogenomics research, image feature extraction techniques, approaches to radiogenomics research, and promising areas of investigation are reviewed. J. Magn. Reson. Imaging 2016;43:1269-1278. © 2015 Wiley Periodicals, Inc.

  1. InterFace: A software package for face image warping, averaging, and principal components analysis.

    PubMed

    Kramer, Robin S S; Jenkins, Rob; Burton, A Mike

    2017-12-01

    We describe InterFace, a software package for research in face recognition. The package supports image warping, reshaping, averaging of multiple face images, and morphing between faces. It also supports principal components analysis (PCA) of face images, along with tools for exploring the "face space" produced by PCA. The package uses a simple graphical user interface, allowing users to perform these sophisticated image manipulations without any need for programming knowledge. The program is available for download in the form of an app, which requires that users also have access to the (freely available) MATLAB Runtime environment.
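
    The PCA step named above can be illustrated generically. This sketch (toy 2D points standing in for high-dimensional face vectors; InterFace itself is MATLAB-based and uses a full PCA) extracts the dominant principal component by power iteration on the covariance matrix.

```python
# Dominant principal component by power iteration: mean-centre the data,
# form the covariance matrix, then repeatedly apply it to a vector and
# renormalize until the vector converges to the leading eigenvector.

def first_component(data, iters=200):
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - mean[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in x) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

data = [[2.0, 1.9], [0.0, 0.2], [1.0, 1.1], [3.0, 2.8]]
pc1 = first_component(data)  # points roughly along the diagonal
```

Projecting face vectors onto the leading components is what produces the "face space" coordinates the package lets users explore.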

  2. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  3. magHD: a new approach to multi-dimensional data storage, analysis, display and exploitation

    NASA Astrophysics Data System (ADS)

    Angleraud, Christophe

    2014-06-01

    The ever increasing amount of data and processing capabilities - following the well-known Moore's law - is challenging the way scientists and engineers are currently exploiting large datasets. Scientific visualization tools, although quite powerful, are often too generic and provide abstract views of phenomena, thus preventing cross-discipline fertilization. On the other hand, Geographic Information Systems allow nice and visually appealing maps to be built, but they often become cluttered as more layers are added. Moreover, the introduction of time as a fourth analysis dimension to allow analysis of time-dependent phenomena such as meteorological or climate models is encouraging real-time data exploration techniques that allow spatial-temporal points of interest to be detected by integration of moving images by the human brain. Magellium has been involved in high performance image processing chains for satellite image processing as well as scientific signal analysis and geographic information management since its creation (2003). We believe that recent work on big data, GPU and peer-to-peer collaborative processing can open a new breakthrough in data analysis and display that will serve many new applications in collaborative scientific computing, environment mapping and understanding. The magHD (for Magellium Hyper-Dimension) project aims at developing software solutions that will bring highly interactive tools for complex dataset analysis and exploration to commodity hardware, targeting small to medium scale clusters with expansion capabilities to large cloud-based clusters.

  4. [Central online quality assurance in radiology: an IT solution exemplified by the German Breast Cancer Screening Program].

    PubMed

    Czwoydzinski, J; Girnus, R; Sommer, A; Heindel, W; Lenzen, H

    2011-09-01

    Physical-technical quality assurance is one of the essential tasks of the National Reference Centers in the German Breast Cancer Screening Program. For this purpose the mammography units are required to transfer the measured values of the constancy tests on a daily basis and all phantom images created for this purpose on a weekly basis to the reference centers. This is a serious logistical challenge. To meet these requirements, we developed an innovative software tool. By the end of 2005, we had already developed web-based software (MammoControl) allowing the transmission of constancy test results via entry forms. For automatic analysis and transmission of the phantom images, we then introduced an extension (MammoControl DIANA). This was based on Java, Java Web Start, the NetBeans Rich Client Platform, the Pixelmed Java DICOM Toolkit and the ImageJ library. MammoControl DIANA was designed to run locally in the mammography units. This allows automated on-site image analysis. Both results and compressed images can then be transmitted to the reference center. We developed analysis modules for the daily and monthly consistency tests and additionally for a homogeneity test. The software we developed facilitates the immediate availability of measurement results, phantom images, and DICOM header data in all reference centers. This allows both targeted guidance and short response time in the case of errors. We achieved a consistent IT-based evaluation with standardized tools for the entire screening program in Germany. © Georg Thieme Verlag KG Stuttgart · New York.
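
    The daily constancy-test evaluation described above is, at its core, a tolerance check against a baseline. A minimal sketch, assuming invented parameter names, baseline values and tolerances (the program's actual test catalogue is defined by the screening protocol):

```python
# Flag each measured constancy-test value that departs from its baseline
# by more than the allowed tolerance; True means "within tolerance".

BASELINE = {"mean_optical_density": 1.60, "contrast": 0.12}
TOLERANCE = {"mean_optical_density": 0.10, "contrast": 0.02}

def check_constancy(measured):
    return {k: abs(measured[k] - BASELINE[k]) <= TOLERANCE[k]
            for k in measured}

flags = check_constancy({"mean_optical_density": 1.75, "contrast": 0.125})
```

Transmitting such per-test results daily is what gives the reference centre its short response time when a unit drifts out of tolerance.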

  5. Automated sub-5 nm image registration in integrated correlative fluorescence and electron microscopy using cathodoluminescence pointers

    NASA Astrophysics Data System (ADS)

    Haring, Martijn T.; Liv, Nalan; Zonnevylle, A. Christiaan; Narvaez, Angela C.; Voortman, Lenard M.; Kruit, Pieter; Hoogenboom, Jacob P.

    2017-03-01

    In the biological sciences, data from fluorescence and electron microscopy is correlated to allow fluorescence biomolecule identification within the cellular ultrastructure and/or ultrastructural analysis following live-cell imaging. High-accuracy (sub-100 nm) image overlay requires the addition of fiducial markers, which makes overlay accuracy dependent on the number of fiducials present in the region of interest. Here, we report an automated method for light-electron image overlay at high accuracy, i.e. below 5 nm. Our method relies on direct visualization of the electron beam position in the fluorescence detection channel using cathodoluminescence pointers. We show that image overlay using cathodoluminescence pointers corrects for image distortions, is independent of user interpretation, and does not require fiducials, allowing image correlation with molecular precision anywhere on a sample.

  6. Automated sub-5 nm image registration in integrated correlative fluorescence and electron microscopy using cathodoluminescence pointers.

    PubMed

    Haring, Martijn T; Liv, Nalan; Zonnevylle, A Christiaan; Narvaez, Angela C; Voortman, Lenard M; Kruit, Pieter; Hoogenboom, Jacob P

    2017-03-02

    In the biological sciences, data from fluorescence and electron microscopy is correlated to allow fluorescence biomolecule identification within the cellular ultrastructure and/or ultrastructural analysis following live-cell imaging. High-accuracy (sub-100 nm) image overlay requires the addition of fiducial markers, which makes overlay accuracy dependent on the number of fiducials present in the region of interest. Here, we report an automated method for light-electron image overlay at high accuracy, i.e. below 5 nm. Our method relies on direct visualization of the electron beam position in the fluorescence detection channel using cathodoluminescence pointers. We show that image overlay using cathodoluminescence pointers corrects for image distortions, is independent of user interpretation, and does not require fiducials, allowing image correlation with molecular precision anywhere on a sample.

  7. Automated sub-5 nm image registration in integrated correlative fluorescence and electron microscopy using cathodoluminescence pointers

    PubMed Central

    Haring, Martijn T.; Liv, Nalan; Zonnevylle, A. Christiaan; Narvaez, Angela C.; Voortman, Lenard M.; Kruit, Pieter; Hoogenboom, Jacob P.

    2017-01-01

    In the biological sciences, data from fluorescence and electron microscopy is correlated to allow fluorescence biomolecule identification within the cellular ultrastructure and/or ultrastructural analysis following live-cell imaging. High-accuracy (sub-100 nm) image overlay requires the addition of fiducial markers, which makes overlay accuracy dependent on the number of fiducials present in the region of interest. Here, we report an automated method for light-electron image overlay at high accuracy, i.e. below 5 nm. Our method relies on direct visualization of the electron beam position in the fluorescence detection channel using cathodoluminescence pointers. We show that image overlay using cathodoluminescence pointers corrects for image distortions, is independent of user interpretation, and does not require fiducials, allowing image correlation with molecular precision anywhere on a sample. PMID:28252673
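
    The overlay principle in the three records above admits a very small sketch. Because the e-beam position is known in EM coordinates and observed as a cathodoluminescence spot in the fluorescence channel, matched pairs give the transform directly; here only a translation is estimated (a full implementation would also fit rotation, scale and distortion). Coordinates are invented.

```python
# Estimate the FM-to-EM offset as the mean displacement of matched
# cathodoluminescence pointer positions.

def estimate_translation(em_points, fm_points):
    n = len(em_points)
    dx = sum(f[0] - e[0] for e, f in zip(em_points, fm_points)) / n
    dy = sum(f[1] - e[1] for e, f in zip(em_points, fm_points)) / n
    return dx, dy

em = [(100, 100), (200, 100), (150, 180)]           # beam positions (EM px)
fm = [(112.1, 96.9), (212.0, 97.1), (161.9, 177.0)]  # CL spots (FM px)
dx, dy = estimate_translation(em, fm)
```

Averaging over many pointer positions is what pushes the registration error below the single-spot localization error, without any fiducials in the sample.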

  8. Single-Cell Analysis Using Hyperspectral Imaging Modalities.

    PubMed

    Mehta, Nishir; Shaik, Shahensha; Devireddy, Ram; Gartia, Manas Ranjan

    2018-02-01

    Almost a decade ago, hyperspectral imaging (HSI) was employed by NASA in satellite imaging applications such as remote sensing technology. This technology has since been extensively used in the exploration of minerals, agricultural purposes, water resources, and urban development needs. Due to recent advancements in optical reconstruction and imaging, HSI can now be applied down to micro- and nanometer scales, possibly allowing for exquisite control and analysis of systems ranging from single cells to complex biological systems. This short review provides a description of the working principle of HSI technology and how HSI can be used to assist, substitute, and validate traditional imaging technologies. This is followed by a description of the use of HSI for biological analysis and medical diagnostics, with emphasis on single-cell analysis using HSI.
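
    A classic HSI classification step, the spectral angle mapper, illustrates how per-pixel spectra are used. This is a generic sketch, not this review's method; the reference spectra and class names are invented.

```python
# Spectral angle mapper: assign each pixel spectrum to the reference
# spectrum with the smallest angle (insensitive to overall brightness).
import math

def spectral_angle(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

refs = {"cytoplasm": [0.2, 0.5, 0.9], "nucleus": [0.9, 0.4, 0.1]}

def classify(pixel):
    return min(refs, key=lambda k: spectral_angle(pixel, refs[k]))

label = classify([0.25, 0.55, 0.8])
```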

  9. 3D thermography imaging standardization technique for inflammation diagnosis

    NASA Astrophysics Data System (ADS)

    Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

    2005-01-01

    We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical digital infrared thermal imaging is a sensitive and reliable means of graphically mapping and displaying skin surface temperature. It allows doctors to visualise in colour and quantify temperature changes in the skin surface. The spectrum of colours indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Here, our technique is based on the combination of a visual 3D imaging technique and a thermal imaging technique, which maps the 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.

  10. Mirion--a software package for automatic processing of mass spectrometric images.

    PubMed

    Paschke, C; Leisner, A; Hester, A; Maass, K; Guenther, S; Bouschen, W; Spengler, B

    2013-08-01

    Mass spectrometric imaging (MSI) techniques are of growing interest for the life sciences. In recent years, the development of new instruments employing ion sources that are tailored for spatial scanning allowed the acquisition of large data sets. Subsequent data processing, however, is still a bottleneck in the analytical process, as manual data interpretation is impossible within a reasonable time frame. The transformation of mass spectrometric data into spatial distribution images of detected compounds turned out to be the most appropriate method to visualize the results of such scans, as humans are able to interpret images faster and easier than plain numbers. Image generation, thus, is a time-consuming and complex yet very efficient task. The free software package "Mirion," presented in this paper, allows the handling and analysis of data sets acquired by mass spectrometry imaging. Mirion can be used for image processing of MSI data obtained from many different sources, as it uses the HUPO-PSI-based standard data format imzML, which is implemented in the proprietary software of most of the mass spectrometer companies. Different graphical representations of the recorded data are available. Furthermore, automatic calculation and overlay of mass spectrometric images promotes direct comparison of different analytes for data evaluation. The program also includes tools for image processing and image analysis.
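
    The transformation from spectra to an ion image can be sketched in a few lines. This is an illustrative reduction, not Mirion's implementation: the peak-list format below is invented (the real program reads imzML), and an m/z window is mapped to summed intensity per pixel.

```python
# Build an ion image: for each pixel, sum the intensity of all peaks whose
# m/z falls inside the selected window.

def ion_image(peaks, mz_lo, mz_hi, width, height):
    """peaks: list of (x, y, mz, intensity). Returns a 2D intensity map."""
    img = [[0.0] * width for _ in range(height)]
    for x, y, mz, inten in peaks:
        if mz_lo <= mz <= mz_hi:
            img[y][x] += inten
    return img

peaks = [(0, 0, 760.5, 10.0), (1, 0, 760.6, 4.0), (1, 1, 524.3, 7.0)]
img = ion_image(peaks, 760.0, 761.0, 2, 2)
```

Overlaying two such images for different m/z windows gives the analyte comparison the abstract mentions.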

  11. Image Accumulation in Pixel Detector Gated by Late External Trigger Signal and its Application in Imaging Activation Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakubek, J.; Cejnarova, A.; Platkevic, M.

    Single quantum counting pixel detectors of Medipix type are starting to be used in various radiographic applications. Compared to standard devices for digital imaging (such as CCDs or CMOS sensors) they present significant advantages: direct conversion of radiation to electric signal, energy sensitivity, noiseless image integration, unlimited dynamic range, absolute linearity. In this article we describe usage of the pixel device TimePix for image accumulation gated by a late trigger signal. Demonstration of the technique is given on imaging coincidence instrumental neutron activation analysis (Imaging CINAA). This method allows one to determine the concentration and distribution of a certain preselected element in an inspected sample.
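
    The gating idea can be sketched generically: frames are buffered with timestamps, and only those falling inside the (delayed) coincidence window are summed into the image. Window values and the frame format are illustrative, not the TimePix readout.

```python
# Accumulate only frames whose timestamp lies inside the trigger gate;
# counting detectors integrate noiselessly, so summation is exact.

def gated_sum(frames, gate_open, gate_close):
    """frames: list of (timestamp, 2D frame of counts)."""
    acc = None
    for t, frame in frames:
        if gate_open <= t <= gate_close:
            if acc is None:
                acc = [row[:] for row in frame]
            else:
                for i, row in enumerate(frame):
                    for j, v in enumerate(row):
                        acc[i][j] += v
    return acc

frames = [(0.5, [[1, 0], [0, 1]]),
          (1.5, [[2, 1], [0, 0]]),
          (9.0, [[5, 5], [5, 5]])]   # last frame is outside the gate
image = gated_sum(frames, 0.0, 2.0)
```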

  12. Image digitising and analysis of outflows from young stars

    NASA Astrophysics Data System (ADS)

    Zealey, W. J.; Mader, S. L.

    1997-08-01

    We present IIIaJ, IIIaF and IVN band images of Herbig-Haro objects digitised from the ESO/SERC Southern Sky Survey plates. These form part of a digital image database of southern HH objects, which allows the identification of emission and reflection nebulosity and the location of the obscured sources of outflows.

  13. Unsupervised analysis of small animal dynamic Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Spinelli, Antonello E.; Boschi, Federico

    2011-12-01

    Clustering analysis (CA) and principal component analysis (PCA) were applied to dynamic Cerenkov luminescence images (dCLI). In order to investigate the performance of the proposed approaches, two distinct dynamic data sets obtained by injecting mice with 32P-ATP and 18F-FDG were acquired using the IVIS 200 optical imager. The k-means clustering algorithm was applied to dCLI and was implemented using Interactive Data Language (IDL) 8.1. We show that cluster analysis allows us to obtain good agreement between the clustered regions and the corresponding emission regions such as the bladder, the liver, and the tumor. We also show a good correspondence between the time-activity curves of the different regions obtained by using CA and manual region-of-interest analysis on dCLI and PCA images. We conclude that CA provides an automatic unsupervised method for the analysis of preclinical dynamic Cerenkov luminescence image data.
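
    The k-means step named above can be sketched on toy time-activity curves (the study's implementation is in IDL; this pure-Python version uses naive initialization and invented data):

```python
# k-means on per-pixel time-activity curves: alternate between assigning
# each curve to the nearest centroid and recomputing centroids as means.

def kmeans(curves, k, iters=20):
    cents = [list(c) for c in curves[:k]]  # naive init: first k curves
    for _ in range(iters):
        labels = [min(range(k),
                      key=lambda j: sum((a - b) ** 2
                                        for a, b in zip(c, cents[j])))
                  for c in curves]
        for j in range(k):
            members = [c for c, l in zip(curves, labels) if l == j]
            if members:
                cents[j] = [sum(v) / len(members) for v in zip(*members)]
    return labels

curves = [[1, 2, 3], [1.1, 2.1, 2.9], [9, 7, 4], [8.8, 7.2, 4.1]]
labels = kmeans(curves, 2)  # rising curves vs. washout curves
```

Grouping pixels by the shape of their kinetics is what lets the clusters line up with anatomical regions such as the bladder, liver and tumor.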

  14. Fibered fluorescence microscopy (FFM) of intra epidermal nerve fibers--translational marker for peripheral neuropathies in preclinical research: processing and analysis of the data

    NASA Astrophysics Data System (ADS)

    Cornelissen, Frans; De Backer, Steve; Lemeire, Jan; Torfs, Berf; Nuydens, Rony; Meert, Theo; Schelkens, Peter; Scheunders, Paul

    2008-08-01

    Peripheral neuropathy can be caused by diabetes or AIDS or be a side-effect of chemotherapy. Fibered Fluorescence Microscopy (FFM) is a recently developed imaging modality using a fiber optic probe connected to a laser scanning unit. It allows for in-vivo scanning of small animal subjects by moving the probe along the tissue surface. In preclinical research, FFM enables non-invasive, longitudinal in-vivo assessment of intra epidermal nerve fibre density in various models for peripheral neuropathies. By moving the probe, FFM allows visualization of larger surfaces: since images are continuously captured during the movement, an area larger than the field of view of the probe can be acquired. For analysis purposes, we need to obtain a single static image from the multiple overlapping frames. We introduce a mosaicing procedure for this kind of video sequence. Construction of mosaic images with sub-pixel alignment is indispensable and must be integrated into a globally consistent image alignment. An additional motivation for the mosaicing is the use of overlapping redundant information to improve the signal-to-noise ratio of the acquisition, because the individual frames tend to have both high noise levels and intensity inhomogeneities. For longitudinal analysis, mosaics captured at different times must be aligned as well. For alignment, global correlation-based matching is compared with interest point matching. Use of algorithms running on multiple CPUs (parallel processor/cluster/grid) is imperative for use in a screening model.
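
    The frame-alignment step underlying such mosaicing can be sketched in one dimension. This is an illustrative exhaustive search for the integer shift minimizing the mean squared difference in the overlap of two scan-line profiles; a real mosaicker works in 2D with sub-pixel refinement, as the abstract notes.

```python
# Find the integer shift s that best aligns profile b to profile a by
# minimizing the mean squared difference over the overlapping samples.

def best_shift(a, b, max_shift):
    def cost(s):
        ov = [(a[i], b[i - s]) for i in range(len(a))
              if 0 <= i - s < len(b)]
        return sum((x - y) ** 2 for x, y in ov) / len(ov)
    return min(range(-max_shift, max_shift + 1), key=cost)

a = [0, 0, 1, 5, 9, 5, 1, 0]
b = [1, 5, 9, 5, 1, 0, 0, 0]  # same profile, shifted relative to a
shift = best_shift(a, b, 3)
```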

  15. Pipeline monitoring with unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Kochetkova, L. I.

    2018-05-01

    Pipeline leakage during transportation of combustible substances leads to explosions and fires, causing loss of life and destruction of production and residential facilities. Continuous pipeline monitoring allows leaks to be identified in time and measures for their elimination to be taken quickly. The paper describes a solution for identifying pipeline leakage using unmanned aerial vehicles. It is recommended to apply spectral analysis of the input RGB signal to identify pipeline damage. The application of multi-zone digital images allows potential spills of oil hydrocarbons, as well as possible soil pollution, to be detected. Multi-temporal digital images within the visible region make it possible to detect changes in soil morphology for subsequent analysis. The given solution is cost-efficient and reliable, reducing the time and labor required in comparison with other methods of pipeline monitoring.
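
    The RGB screening idea can be caricatured as a colour-distance test. This is a hedged sketch only: the threshold rule, reference colour and pixel values are invented, and a real system would use calibrated multi-zone spectral bands rather than raw RGB distances.

```python
# Flag pixels whose colour departs from the expected terrain signature by
# more than a threshold (Euclidean distance in RGB space), as a crude
# proxy for a possible hydrocarbon spill.

def flag_spill(pixels, ref_rgb, max_dist):
    """pixels: list of (r, g, b) tuples."""
    def dist2(p):
        return sum((c - r) ** 2 for c, r in zip(p, ref_rgb))
    return [dist2(p) > max_dist ** 2 for p in pixels]

terrain = (110, 120, 80)   # typical soil/vegetation tone (invented)
pixels = [(112, 118, 82), (40, 38, 35), (108, 125, 79)]
flags = flag_spill(pixels, terrain, 30)
```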

  16. Concurrent Image Processing Executive (CIPE). Volume 1: Design overview

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.

    1990-01-01

    The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation, are described. The target machine for this software is a JPL/Caltech Mark 3fp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.
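
    The data-distribution step described above amounts to partitioning image rows among nodes. A minimal sketch, assuming a simple contiguous-strip decomposition (node and row counts are arbitrary examples; CIPE's actual distribution and tracking scheme is richer):

```python
# Divide n_rows image rows as evenly as possible among n_nodes processors,
# giving each node a contiguous half-open [start, stop) strip.

def row_ranges(n_rows, n_nodes):
    base, extra = divmod(n_rows, n_nodes)
    ranges, start = [], 0
    for node in range(n_nodes):
        count = base + (1 if node < extra else 0)
        ranges.append((start, start + count))
        start += count
    return ranges

ranges = row_ranges(1024, 8)  # e.g. an 8-node hypercube
```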

  17. MilxXplore: a web-based system to explore large imaging datasets

    PubMed Central

    Bourgeat, P; Dore, V; Villemagne, V L; Rowe, C C; Salvado, O; Fripp, J

    2013-01-01

    Objective As large-scale medical imaging studies are becoming more common, there is an increasing reliance on automated software to extract quantitative information from these images. As the size of the cohorts keeps increasing with large studies, there is also a need for tools that allow results from automated image processing and analysis to be presented in a way that enables fast and efficient quality checking, tagging and reporting on cases in which automatic processing failed or was problematic. Materials and methods MilxXplore is an open source visualization platform, which provides an interface to navigate and explore imaging data in a web browser, giving the end user the opportunity to perform quality control and reporting in a user-friendly, collaborative and efficient way. Discussion Compared to existing software solutions that often provide an overview of the results at the subject level, MilxXplore pools the results of individual subjects and time points together, allowing easy and efficient navigation and browsing through the different acquisitions of a subject over time, and comparison of the results against the rest of the population. Conclusions MilxXplore is fast, flexible and allows remote quality checks of processed imaging data, facilitating data sharing and collaboration across multiple locations, and can be easily integrated into a cloud computing pipeline. With the growing trend of open data and open science, such a tool will become increasingly important for sharing and publishing the results of imaging analysis. PMID:23775173

  18. Merging Dietary Assessment with the Adolescent Lifestyle

    PubMed Central

    Schap, TusaRebecca E; Zhu, Fengqing M; Delp, Edward J; Boushey, Carol J

    2013-01-01

    The use of image-based dietary assessment methods shows promise for improving dietary self-report among children. The Technology Assisted Dietary Assessment (TADA) food record application is a self-administered food record specifically designed to address the burden and human error associated with conventional methods of dietary assessment. Users would take images of foods and beverages at all eating occasions using a mobile telephone or mobile device with an integrated camera (e.g., Apple iPhone, Google Nexus One, Apple iPod Touch). Once the images are taken, the images are transferred to a back-end server for automated analysis. The first step in this process, image analysis (i.e., segmentation, feature extraction, and classification), allows for automated food identification. Portion size estimation is also automated via segmentation and geometric shape template modeling. The results of the automated food identification and volume estimation can be indexed with the Food and Nutrient Database for Dietary Studies (FNDDS) to provide a detailed diet analysis for use in epidemiologic or intervention studies. Data collected during controlled feeding studies in a camp-like setting have allowed for formative evaluation and validation of the TADA food record application. This review summarizes the system design and the evidence-based development of image-based methods for dietary assessment among children. PMID:23489518

  19. Blackboard architecture for medical image interpretation

    NASA Astrophysics Data System (ADS)

    Davis, Darryl N.; Taylor, Christopher J.

    1991-06-01

    There is a growing interest in using sophisticated knowledge-based systems for biomedical image interpretation. We present a principled attempt to use artificial intelligence methodologies in interpreting lateral skull x-ray images. Such radiographs are routinely used in cephalometric analysis to provide quantitative measurements useful to clinical orthodontists. Manual and interactive methods of analysis are known to be error prone and previous attempts to automate this analysis typically fail to capture the expertise and adaptability required to cope with the variability in biological structure and image quality. An integrated model-based system has been developed which makes use of a blackboard architecture and multiple knowledge sources. A model definition interface allows quantitative models, of feature appearance and location, to be built from examples as well as more qualitative modelling constructs. Visual task definition and blackboard control modules allow task-specific knowledge sources to act on information available to the blackboard in a hypothesise and test reasoning cycle. Further knowledge-based modules include object selection, location hypothesis, intelligent segmentation, and constraint propagation systems. Alternative solutions to given tasks are permitted.

  20. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532

  1. HTML5 PivotViewer: high-throughput visualization and querying of image data on the web

    PubMed Central

    Taylor, Stephen; Noble, Roger

    2014-01-01

    Motivation: Visualization and analysis of large numbers of biological images has generated a bottleneck in research. We present HTML5 PivotViewer, a novel, open source, platform-independent viewer making use of the latest web technologies that allows seamless access to images and associated metadata for each image. This provides a powerful method to allow end users to mine their data. Availability and implementation: Documentation, examples and links to the software are available from http://www.cbrg.ox.ac.uk/data/pivotviewer/. The software is licensed under GPLv2. Contact:  stephen.taylor@imm.ox.ac.uk and roger@coritsu.com PMID:24849578

  2. Methods: a comparative analysis of radiography, microcomputed tomography, and histology for bone tissue engineering.

    PubMed

    Hedberg, Elizabeth L; Kroese-Deutman, Henriette C; Shih, Charles K; Lemoine, Jeremy J; Liebschner, Michael A K; Miller, Michael J; Yasko, Alan W; Crowther, Roger S; Carney, Darrell H; Mikos, Antonios G; Jansen, John A

    2005-01-01

    This study focused on the assessment of radiography, microcomputed tomography, and histology for the evaluation of bone formation in a 15.0-mm defect in the rabbit radius after the implantation of a tissue-engineered construct. Radiography was found to be useful as a noninvasive method for obtaining images of calcified tissue throughout the time course of the experiment. With this method, however, image quality was low, making it difficult to obtain precise information about the location and quantity of the bone formed. Microcomputed tomography was used to create three-dimensional reconstructions of the bone (25-microm resolution). These reconstructions allowed for greater spatial resolution than the radiography, but did not allow for imaging of the implanted scaffold material or the surrounding, nonmineralized tissue. To visualize all materials within the defect area at the cellular level, histology was used. Histological analysis, however, is a destructive technique that did not allow for any further analysis of the samples. Each technique examined here has its own advantages and limitations, but each yields unique information regarding bone regeneration. It is only through the use of all three techniques that complete characterization of the bone growth and tissue/construct responses after implantation in vivo.

  3. ROCView: prototype software for data collection in jackknife alternative free-response receiver operating characteristic analysis

    PubMed Central

    Thompson, J; Hogg, P; Thompson, S; Manning, D; Szczepura, K

    2012-01-01

    ROCView has been developed as an image display and response capture (IDRC) solution to image display and consistent recording of reader responses in relation to the free-response receiver operating characteristic paradigm. A web-based solution to IDRC for observer response studies allows observations to be completed from any location, assuming that display performance and viewing conditions are consistent with the study being completed. The software's simple functionality allows observations to be completed without supervision. ROCView can display images from multiple modalities, in a randomised order if required. Following registration, observers are prompted to begin their image evaluation. All data are recorded via mouse clicks, one to localise (mark) and one to score confidence (rate) using either an ordinal or continuous rating scale. Up to nine “mark-rating” pairs can be made per image. Unmarked images are given a default score of zero. Upon completion of the study, both true-positive and false-positive reports can be downloaded and adapted for analysis. ROCView has the potential to be a useful tool in the assessment of modality performance difference for a range of imaging methods. PMID:22573294

  4. New public dataset for spotting patterns in medieval document images

    NASA Astrophysics Data System (ADS)

    En, Sovann; Nicolas, Stéphane; Petitjean, Caroline; Jurie, Frédéric; Heutte, Laurent

    2017-01-01

    With advances in technology, a large part of our cultural heritage is becoming digitally available. In particular, in the field of historical document image analysis, there is now a growing need for indexing and data mining tools, thus allowing us to spot and retrieve the occurrences of an object of interest, called a pattern, in a large database of document images. Patterns may present some variability in terms of color, shape, or context, making the spotting of patterns a challenging task. Pattern spotting is a relatively new field of research, still hampered by the lack of available annotated resources. We present a new publicly available dataset named DocExplore dedicated to spotting patterns in historical document images. The dataset contains 1500 images and 1464 queries, and allows the evaluation of two tasks: image retrieval and pattern localization. A standardized benchmark protocol along with ad hoc metrics is provided for a fair comparison of the submitted approaches. We also provide some first results obtained with our baseline system on this new dataset, which show that there is room for improvement and should encourage researchers in the document image analysis community to design new systems and submit improved results.

  5. Histopathological Image Analysis: A Review

    PubMed Central

    Gurcan, Metin N.; Boucheron, Laura; Can, Ali; Madabhushi, Anant; Rajpoot, Nasir; Yener, Bulent

    2010-01-01

    Over the past decade, dramatic increases in computational power and improvement in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has now become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology related problems being pursued in the United States and Europe. PMID:20671804

  6. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. Input data sets for the algorithm are learning data sets of facial images, which are rated by one person. The proposed approach allows one to extract features of the individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness estimation values is 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
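As a rough illustration of the kind of feature space involved, plain PCA on flattened images can be sketched as follows (the paper's modified variant is not reproduced here, so this is standard PCA; the array sizes, the toy data and the `pca_features`/`project_new` helper names are assumptions):

```python
import numpy as np

def pca_features(images, n_components):
    """Project flattened face images onto their top principal components.

    images: (n_samples, n_pixels) array. Plain PCA sketch; the paper's
    modification of the algorithm is not reproduced here.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data matrix yields the principal axes directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]            # (n_components, n_pixels)
    return centered @ components.T, mean, components

def project_new(image, mean, components):
    """Features for a face not in the learning set, for rating prediction."""
    return (image - mean) @ components.T

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))             # 20 toy "images", 64 pixels each
feats, mean, comps = pca_features(faces, n_components=5)
new_feats = project_new(rng.normal(size=64), mean, comps)
```

Attractiveness prediction would then fit a regressor from `feats` to the single rater's scores and apply it to `new_feats`.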

  7. Automated daily quality control analysis for mammography in a multi-unit imaging center.

    PubMed

    Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli

    2018-01-01

    Background The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.
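As a sketch of the kind of multiresolution feature such software might compute, a hand-rolled 2D Haar transform can score detail-band energy per scale (a generic illustration, not the published algorithm; the wavelet choice, level count and energy score are assumptions):

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar transform: approximation plus 3 detail bands."""
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row averages
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row differences
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0     # approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0     # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0     # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0     # diagonal detail
    return ll, (lh, hl, hh)

def detail_energies(image, levels=3):
    """Detail-band energy per scale, finest first: a crude detectability score.

    Fine scales respond to speck-like features (microcalcification-like),
    coarser scales to larger structures (fibers, masses).
    """
    energies, approx = [], np.asarray(image, dtype=float)
    for _ in range(levels):
        approx, details = haar2d(approx)
        energies.append(sum(float((d ** 2).sum()) for d in details))
    return energies

flat = np.ones((64, 64))                 # featureless toy "phantom image"
speck = flat.copy()
speck[30:33, 30:33] += 4.0               # add a synthetic speck cluster
```

A uniform image yields zero detail energy at every level, while the speck cluster raises the finest-scale energy, which is the sense in which such features track phantom detectability.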

  8. CALIPSO: an interactive image analysis software package for desktop PACS workstations

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Huang, H. K.

    1990-07-01

    The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include geometric and densitometric volumes and ejection fraction calculation from radionuclide and cine-angiograms; Fourier analysis of cardiac wall motion; vascular stenosis measurement; color-coded parametric display of regional flow distribution from dynamic coronary angiograms; and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. 1. Rationale and objectives of the project: Developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images. During the past decade

  9. 3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events

    NASA Technical Reports Server (NTRS)

    Brown, Richard; Navard, Andrew; Spruce, Joseph

    2010-01-01

    An analytical, advanced imaging method has been developed for the initial monitoring and identification of foam debris and similar anomalies that occur post-launch in reference to the space shuttle's external tank (ET). Remote sensing technologies have been used to perform image enhancement and analysis on high-resolution, true-color images collected with the DCS 760 Kodak digital camera located in the right umbilical well of the space shuttle. Improvements to the camera, using filters, have added sharpness/definition to the image sets; however, image review/analysis of the ET has been limited by the fact that the images acquired by umbilical cameras during launch are two-dimensional, and are usually nonreferenceable between frames due to rotation and translation of the ET as it falls away from the space shuttle. Use of stereo pairs of these images can provide strong visual indicators that immediately portray depth perception of damaged areas or movement of fragments between frames that is not perceivable in two-dimensional images. A stereoscopic image visualization system has been developed to allow 3D depth perception of stereo-aligned image pairs taken from in-flight umbilical and handheld digital shuttle cameras. This new system has been developed to augment and optimize existing 2D monitoring capabilities. Using this system, candidate sequential image pairs are identified for transformation into stereo viewing pairs. Image orientation is corrected using control points (similar points) between frames to place the two images in proper X-Y viewing perspective. The images are then imported into the WallView stereo viewing software package. The collected control points are used to generate a transformation equation that is used to re-project one image and effectively co-register it to the other image. The co-registered, oriented image pairs are imported into a WallView image set and are used as a 3D stereo analysis slide show.
Multiple sequential image pairs can be used to allow forensic review of temporal phenomena between pairs. The observer, while wearing linear polarized glasses, is able to review image pairs in passive 3D stereo.

  10. Non-invasive imaging of skin cancer with fluorescence lifetime imaging using two photon tomography

    NASA Astrophysics Data System (ADS)

    Patalay, Rakesh; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Breunig, Hans Georg; König, Karsten; Warren, Sean; Neil, Mark A. A.; French, Paul M. W.; Chu, Anthony; Stamp, Gordon W.; Dunsby, Christopher

    2011-07-01

    Multispectral fluorescence lifetime imaging (FLIM) using two photon microscopy as a non-invasive technique for the diagnosis of skin lesions is described. Skin contains fluorophores including elastin, keratin, collagen, FAD and NADH. This endogenous contrast allows tissue to be imaged without the addition of exogenous agents and allows the in vivo state of cells and tissues to be studied. A modified DermaInspect® multiphoton tomography system was used to excite autofluorescence at 760 nm in vivo and on freshly excised ex vivo tissue. This instrument simultaneously acquires fluorescence lifetime images in four spectral channels between 360-655 nm using time-correlated single photon counting and can also provide hyperspectral images. The multispectral fluorescence lifetime images were spatially segmented and binned to determine lifetimes for each cell by fitting to a double exponential lifetime model. A comparative analysis between the cellular lifetimes from different diagnoses demonstrates significant diagnostic potential.

  11. A Robust Actin Filaments Image Analysis Framework

    PubMed Central

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-01-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. the actin, tubulin and intermediate filament cytoskeletons. Understanding cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation on the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least in some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a ‘cartoon’ part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the ‘cartoon’ image, we apply a multi-scale line detector coupled with a (iii) quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks.
Experimental validation was conducted using publicly available datasets, and in osteoblasts grown in two different conditions: static (control) and fluid shear stress. The proposed methodology exhibited higher sensitivity values and similar accuracy compared to state-of-the-art methods. PMID:27551746
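The line-detection step of such a pipeline can be illustrated with a single-scale Hessian ridge filter (a crude stand-in for the paper's cartoon decomposition and multi-scale detector; `sigma`, the synthetic frame and the function name are assumptions):

```python
import numpy as np
from scipy import ndimage

def ridge_strength(image, sigma=2.0):
    """Bright-line (ridge) response from the smoothed Hessian at one scale.

    Gaussian-derivative filtering stands in for the cartoon/texture split;
    a real multi-scale detector would take the maximum over several sigmas.
    """
    img = np.asarray(image, dtype=float)
    hxx = ndimage.gaussian_filter(img, sigma, order=(0, 2))
    hyy = ndimage.gaussian_filter(img, sigma, order=(2, 0))
    hxy = ndimage.gaussian_filter(img, sigma, order=(1, 1))
    # Smaller Hessian eigenvalue: strongly negative across a bright line,
    # near zero along it, so -lambda_min is a line-strength map.
    disc = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    lam_min = (hxx + hyy) / 2.0 - disc
    return np.maximum(-lam_min, 0.0)

frame = np.zeros((64, 64))
frame[32, :] = 1.0                       # a one-pixel-wide toy "filament"
resp = ridge_strength(frame)
```

Individual filaments would then be obtained by thresholding this response and merging the resulting quasi-straight segments, which is the role of step (iii) in the paper.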

  12. In Vivo Myeloperoxidase Imaging and Flow Cytometry Analysis of Intestinal Myeloid Cells.

    PubMed

    Hülsdünker, Jan; Zeiser, Robert

    2016-01-01

    Myeloperoxidase (MPO) imaging is a non-invasive method to detect cells that produce the enzyme MPO, which is most abundant in neutrophils, macrophages, and inflammatory monocytes. While lacking specificity for any of these three cell types, MPO imaging can provide guidance for further flow cytometry-based analysis of tissues where these cell types reside. Isolation of leukocytes from the intestinal tract is an error-prone procedure. Here, we describe a protocol for intestinal leukocyte isolation that works reliably in our hands and allows for flow cytometry-based analysis, in particular of neutrophils.

  13. Dedicated computer system AOTK for image processing and analysis of horse navicular bone

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Fojud, A.; Koszela, K.; Mueller, W.; Górna, K.; Okoń, P.; Piekarska-Boniecka, H.

    2017-07-01

    The aim of this research was to develop AOTK (pol. Analiza Obrazu Trzeszczki Kopytowej), a dedicated application for image processing and analysis of the horse navicular bone. The application was built with Visual Studio 2013 on the .NET platform, using the AForge.NET libraries to implement the image processing and analysis algorithms. The implemented algorithms enable accurate extraction of the characteristics of navicular bones and saving of the data to external files. Modules implemented in AOTK allow the calculation of user-selected distances and a preliminary assessment of how well the structure of the examined objects is preserved. The application interface is designed to give the user the best possible view of the analyzed images.

  14. The Spectral Image Processing System (SIPS) - Interactive visualization and analysis of imaging spectrometer data

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.

    1993-01-01

    The Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, has developed a prototype interactive software system called the Spectral Image Processing System (SIPS) using IDL (the Interactive Data Language) on UNIX-based workstations. SIPS is designed to take advantage of the combination of high spectral resolution and spatial data presentation unique to imaging spectrometers. It streamlines analysis of these data by allowing scientists to rapidly interact with entire datasets. SIPS provides visualization tools for rapid exploratory analysis and numerical tools for quantitative modeling. The user interface is X-Windows-based, user friendly, and provides 'point and click' operation. SIPS is being used for multidisciplinary research concentrating on use of physically based analysis methods to enhance scientific results from imaging spectrometer data. The objective of this continuing effort is to develop operational techniques for quantitative analysis of imaging spectrometer data and to make them available to the scientific community prior to the launch of imaging spectrometer satellite systems such as the Earth Observing System (EOS) High Resolution Imaging Spectrometer (HIRIS).

  15. Covariance of lucky images: performance analysis

    NASA Astrophysics Data System (ADS)

    Cagigal, Manuel P.; Valle, Pedro J.; Cagigas, Miguel A.; Villó-Pérez, Isidro; Colodro-Conde, Carlos; Ginski, C.; Mugrauer, M.; Seeliger, M.

    2017-01-01

    The covariance of ground-based lucky images is a robust and easy-to-use algorithm that allows us to detect faint companions surrounding a host star. In this paper, we analyse the relevance of the number of processed frames, the frames' quality, the atmospheric conditions and the detection noise to the companion detectability. This analysis has been carried out using both experimental and computer-simulated imaging data. Although the technique allows the detection of faint companions, the camera detection noise and the use of a limited number of frames limit the minimum detectable companion intensity to around 1000 times fainter than that of the host star when placed at an angular distance corresponding to the first few Airy rings. The reachable contrast could be even larger when detecting companions with the assistance of an adaptive optics system.
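One simplified reading of the technique is a per-pixel covariance between a stack of short exposures and the host star's fluctuating peak flux; the sketch below uses made-up frame counts, flux ratios and noise levels:

```python
import numpy as np

def covariance_map(frames):
    """Covariance of every pixel with the host star's peak pixel.

    frames: (n_frames, H, W) stack of short exposures. A companion's flux
    fluctuates with the same atmospheric gain as the star, so its pixels
    show positive covariance while uncorrelated noise averages out.
    """
    frames = np.asarray(frames, dtype=float)
    peak = np.unravel_index(frames.mean(axis=0).argmax(), frames.shape[1:])
    star = frames[:, peak[0], peak[1]]
    d_star = star - star.mean()
    d_pix = frames - frames.mean(axis=0)
    return np.tensordot(d_star, d_pix, axes=(0, 0)) / (len(frames) - 1)

# Synthetic stack: a per-frame atmospheric gain modulates star and companion.
rng = np.random.default_rng(3)
gain = rng.uniform(0.5, 1.5, size=400)
stack = rng.normal(0.0, 0.1, size=(400, 16, 16))
stack[:, 8, 8] += 100.0 * gain           # host star peak
stack[:, 8, 12] += 0.5 * gain            # companion, 200x fainter
cov = covariance_map(stack)
```

In the covariance map the companion pixel stands well above the background even though its mean flux is comparable to the noise, which is the effect the paper quantifies against frame count and noise level.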

  16. Method for data analysis in different institutions: example of image guidance of prostate cancer patients.

    PubMed

    Piotrowski, T; Rodrigues, G; Bajon, T; Yartsev, S

    2014-03-01

    Multi-institutional collaborations allow for more information to be analyzed, but the data from different sources may vary in subgroup size and/or measurement conditions. Rigorous statistical analysis is required for pooling the data into a larger set. Careful comparison of all the components of the data acquisition is indispensable: identical conditions allow for enlargement of the database with improved statistical analysis, while clearly defined differences provide an opportunity for establishing a better practice. The optimal sequence of required normality, asymptotic normality, and independence tests is proposed. An example is presented of the analysis of six subgroups of position corrections in three directions obtained during image guidance procedures for 216 prostate cancer patients from two institutions. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
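A minimal version of such a test sequence can be sketched with SciPy (a generic normality-then-comparison cascade, not the authors' exact procedure; the significance level and the synthetic couch-shift data are assumptions):

```python
import numpy as np
from scipy import stats

def can_pool(a, b, alpha=0.05):
    """Decide whether two institutions' measurement sets may be pooled.

    Illustrative sequence: test each subgroup for normality, then compare
    locations with a test appropriate to the outcome.
    """
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        p = stats.ttest_ind(a, b, equal_var=False).pvalue   # Welch's t-test
    else:
        p = stats.mannwhitneyu(a, b, alternative="two-sided").pvalue
    return p > alpha, p   # True: no detected difference, pooling defensible

rng = np.random.default_rng(2)
site_a = rng.normal(0.0, 2.0, 120)   # e.g. couch shifts (mm), institution A
site_b = rng.normal(0.1, 2.0, 150)   # institution B, same protocol
ok, p = can_pool(site_a, site_b)
```

A real pipeline would add the independence checks the paper mentions and correct for multiple comparisons across the six subgroups and three directions.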

  17. Quantitative 3D Analysis of Nuclear Morphology and Heterochromatin Organization from Whole-Mount Plant Tissue Using NucleusJ.

    PubMed

    Desset, Sophie; Poulet, Axel; Tatout, Christophe

    2018-01-01

    Image analysis is a classical way to study nuclear organization. While nuclear organization used to be investigated by colorimetric or fluorescent labeling of DNA or specific nuclear compartments, new methods in microscopy imaging now enable qualitative and quantitative analyses of chromatin pattern, and nuclear size and shape. Several procedures have been developed to prepare samples in order to collect 3D images for the analysis of spatial chromatin organization, but only a few preserve the positional information of the cell within its tissue context. Here, we describe a whole-mount tissue preparation procedure coupled to DNA staining using the PicoGreen ® intercalating agent suitable for image analysis of the nucleus in living and fixed tissues. 3D image analysis is then performed using NucleusJ, an open source ImageJ plugin, which allows for quantifying variations in nuclear morphology such as nuclear volume, sphericity, elongation, and flatness as well as in heterochromatin content and position with respect to the nuclear periphery.
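The morphology measures such a plugin reports can be illustrated on a 3D binary mask; the sketch below estimates surface area by counting exposed voxel faces, a rough stand-in for the mesh-based estimate a tool like NucleusJ computes (the helper name and voxel size are assumptions):

```python
import numpy as np

def shape_stats(mask, voxel=1.0):
    """Volume, surface area and sphericity of a 3D binary nucleus mask."""
    mask = np.asarray(mask, dtype=bool)
    volume = float(mask.sum()) * voxel ** 3
    padded = np.pad(mask, 1).astype(np.int8)
    # An exposed face is any object/background transition along an axis.
    faces = sum(int((np.diff(padded, axis=a) != 0).sum()) for a in range(3))
    area = faces * voxel ** 2
    # Sphericity: surface area of the equal-volume sphere divided by area;
    # 1.0 for a perfect sphere, lower for elongated or flattened nuclei.
    sphericity = np.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / area
    return volume, area, sphericity

cube = np.ones((4, 4, 4), dtype=bool)    # a toy cubic "nucleus"
v, a, s = shape_stats(cube, voxel=0.2)   # e.g. 0.2 um voxels
```

Elongation and flatness would follow from the eigenvalues of the mask's voxel-coordinate covariance matrix; the voxel-face area estimate overstates the area of smooth surfaces, which is why mesh-based estimators are preferred in practice.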

  18. Radiomics: Extracting more information from medical images using advanced feature analysis

    PubMed Central

    Lambin, Philippe; Rios-Velazquez, Emmanuel; Leijenaar, Ralph; Carvalho, Sara; van Stiphout, Ruud G.P.M.; Granton, Patrick; Zegers, Catharina M.L.; Gillies, Robert; Boellard, Ronald; Dekker, André; Aerts, Hugo J.W.L.

    2015-01-01

    Solid cancers are spatially and temporally heterogeneous. This limits the use of invasive biopsy-based molecular assays but gives huge potential to medical imaging, which has the ability to capture intra-tumoural heterogeneity in a non-invasive way. During the past decades, medical imaging innovations, with new hardware, new imaging agents and standardised protocols, have allowed the field to move towards quantitative imaging. This in turn requires the development of automated and reproducible analysis methodologies to extract more information from image-based features. Radiomics, the high-throughput extraction of large amounts of image features from radiographic images, addresses this problem and is one of the approaches that holds great promise but needs further validation in multi-centric settings and in the laboratory. PMID:22257792

  19. 3D Actin Network Centerline Extraction with Multiple Active Contours

    PubMed Central

    Xu, Ting; Vavylonis, Dimitrios; Huang, Xiaolei

    2013-01-01

    Fluorescence microscopy is frequently used to study two- and three-dimensional network structures formed by cytoskeletal polymer fibers such as actin filaments and actin cables. While these cytoskeletal structures are often dilute enough to allow imaging of individual filaments or bundles of filaments, quantitative analysis of these images is challenging. To facilitate quantitative, reproducible and objective analysis of the image data, we propose a semi-automated method to extract actin networks and retrieve their topology in 3D. Our method uses multiple Stretching Open Active Contours (SOACs) that are automatically initialized at image intensity ridges and then evolve along the centerlines of filaments in the network. SOACs can merge, stop at junctions, and reconfigure with others to allow smooth crossing at junctions of filaments. The proposed approach is generally applicable to images of curvilinear networks with low SNR. We demonstrate its potential by extracting the centerlines of synthetic meshwork images, actin networks in 2D Total Internal Reflection Fluorescence Microscopy images, and 3D actin cable meshworks of live fission yeast cells imaged by spinning disk confocal microscopy. Quantitative evaluation of the method using synthetic images shows that for images with SNR above 5.0, the average vertex error, measured by the distance between our result and ground truth, is 1 voxel, and the average Hausdorff distance is below 10 voxels. PMID:24316442

  20. SIMA: Python software for analysis of dynamic fluorescence imaging data.

    PubMed

    Kaifosh, Patrick; Zaremba, Jeffrey D; Danielson, Nathan B; Losonczy, Attila

    2014-01-01

    Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.
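
    SIMA's actual API is not reproduced here; as a hedged illustration of the last step it automates, extracting signals from segmented ROIs, the following numpy sketch computes per-ROI fluorescence traces and a simple dF/F. The percentile baseline and the function names are assumptions for illustration, not SIMA's method:

```python
import numpy as np

def extract_roi_signals(frames, roi_masks):
    """Mean fluorescence per ROI for each frame of a (T, H, W) stack."""
    T = frames.shape[0]
    signals = np.empty((len(roi_masks), T))
    for i, mask in enumerate(roi_masks):
        # boolean mask over (H, W) selects that ROI's pixels in every frame
        signals[i] = frames[:, mask].mean(axis=1)
    return signals

def delta_f_over_f(signal, baseline_percentile=20):
    """dF/F relative to a low-percentile baseline of the trace."""
    f0 = np.percentile(signal, baseline_percentile)
    return (signal - f0) / f0
```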

  1. [MR tomography of the heart].

    PubMed

    Hahn, D; Beer, M; Sandstede, J

    2000-10-01

    The introduction of magnetic resonance (MR) tomography has fundamentally changed radiological diagnosis for many diseases. Invasive digital subtraction angiography has already been widely replaced by noninvasive MR angiography for most vascular diseases. The rapid technical development of MR imaging in recent years has opened up new functional imaging techniques. MR imaging of the heart allows simultaneous measurement of morphological and functional parameters in a single noninvasive examination without any radiation exposure. Because of its high spatial resolution and reproducibility, cine MR imaging is now the gold standard for functional analysis. With the improvement of myocardial perfusion and viability studies, many diseases of the heart can be diagnosed in a single examination. MR spectroscopy is the only method that allows a view of the metabolism of the heart. New techniques for vascular imaging and flow quantification complete the goal of "one-stop-shop" imaging of the heart. MR imaging is the only diagnostic modality that allows a complete evaluation of many diseases of the heart with one technique, for the basic examination as well as for follow-up studies. The very rapid improvement in MRI will overcome most of the current limitations in the near future, especially concerning MR coronary angiography.

  2. Mass spectral analysis and imaging of tissue by ToF-SIMS--The role of buckminsterfullerene, C60+, primary ions

    NASA Astrophysics Data System (ADS)

    Jones, Emrys A.; Lockyer, Nicholas P.; Vickerman, John C.

    2007-02-01

    Recent developments in desorption/ionisation mass spectrometry techniques have made their application to biological analysis a realistic and successful proposition. Developments in primary ion source technology, mainly through the advent of polyatomic ion beams, mean that secondary ion mass spectrometry (SIMS) can now access the depth of information required to make biological imaging a viable option. Here the role of the C60+ primary ion is assessed with regard to molecular imaging of lipids and pharmaceuticals within tissue sections. High secondary ion yields and low surface damage accumulation are demonstrated on both model and real biological samples, indicating the high secondary ion efficiency this primary ion affords the analyst compared with other cluster ion beams used in imaging. The newly developed 40 keV C60+ ion source allows the beam to be focused such that high-resolution imaging is demonstrated on a tissue sample, and the greater yields allow the molecular signal from the drug raclopride to be imaged within a tissue section following in vivo dosing. The localisation shown for this drug points to issues regarding the chemical environment affecting the ionisation probability of the molecule; the importance of this effect is demonstrated with model systems, and laser post-ionisation is demonstrated and discussed as a possible method for mitigating this consequence of bio-sample complexity.

  3. Cognitive approaches for patterns analysis and security applications

    NASA Astrophysics Data System (ADS)

    Ogiela, Marek R.; Ogiela, Lidia

    2017-08-01

    This paper presents new opportunities for developing innovative solutions for semantic pattern classification and visual cryptography, based on cognitive and bio-inspired approaches. Such techniques can be used to evaluate the meaning of analyzed patterns or encrypted information, and allow that meaning to be incorporated into the classification task or the encryption process. They also allow crypto-biometric solutions to be used to extend personalized cryptography methodologies based on visual pattern analysis. In particular, the application of cognitive information systems to the semantic analysis of different patterns is presented, along with a novel application of such systems to visual secret sharing. Visual shares for the divided information can be created with a threshold procedure, which may depend on personal abilities to recognize image details visible in the divided images.

  4. Angular reconstitution-based 3D reconstructions of nanomolecular structures from superresolution light-microscopy images

    PubMed Central

    Salas, Desirée; Le Gall, Antoine; Fiche, Jean-Bernard; Valeri, Alessandro; Ke, Yonggang; Bron, Patrick; Bellot, Gaetan

    2017-01-01

    Superresolution light microscopy allows the imaging of labeled supramolecular assemblies at a resolution surpassing the classical diffraction limit. Serious limitations of the superresolution approach are sample heterogeneity and the stochastic character of the labeling procedure. To increase the reproducibility and the resolution of the superresolution results, we apply multivariate statistical analysis methods and 3D reconstruction approaches originally developed for cryogenic electron microscopy of single particles. These methods allow for the reference-free 3D reconstruction of nanomolecular structures from two-dimensional superresolution projection images. Since these 2D projection images all show the structure in high-resolution directions of the optical microscope, the resulting 3D reconstructions have the best possible isotropic resolution in all directions. PMID:28811371

  5. Currently available methodologies for the processing of intravascular ultrasound and optical coherence tomography images.

    PubMed

    Athanasiou, Lambros; Sakellarios, Antonis I; Bourantas, Christos V; Tsirka, Georgia; Siogkas, Panagiotis; Exarchos, Themis P; Naka, Katerina K; Michalis, Lampros K; Fotiadis, Dimitrios I

    2014-07-01

    Optical coherence tomography and intravascular ultrasound are the most widely used methodologies in clinical practice as they provide high resolution cross-sectional images that allow comprehensive visualization of the lumen and plaque morphology. Several methods have been developed in recent years to process the output of these imaging modalities, which allow fast, reliable and reproducible detection of the luminal borders and characterization of plaque composition. These methods have proven useful in the study of the atherosclerotic process as they have facilitated analysis of a vast amount of data. This review presents currently available intravascular ultrasound and optical coherence tomography processing methodologies for segmenting and characterizing the plaque area, highlighting their advantages and disadvantages, and discusses the future trends in intravascular imaging.

  6. Handling Different Spatial Resolutions in Image Fusion by Multivariate Curve Resolution-Alternating Least Squares for Incomplete Image Multisets.

    PubMed

    Piqueras, Sara; Bedia, Carmen; Beleites, Claudia; Krafft, Christoph; Popp, Jürgen; Maeder, Marcel; Tauler, Romà; de Juan, Anna

    2018-06-05

    Data fusion of different imaging techniques allows a comprehensive description of chemical and biological systems. Yet, joining images acquired with different spectroscopic platforms is complex because of differences in sample orientation and image spatial resolution. Whereas matching sample orientation is often solved by performing suitable affine transformations of rotation, translation, and scaling among images, the main difficulty in image fusion is preserving the spatial detail of the highest-resolution image during multitechnique image analysis. In this work, a special variant of the unmixing algorithm Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) for incomplete multisets is proposed to provide a solution for this kind of problem. This algorithm allows images collected with different spectroscopic platforms to be analyzed simultaneously without losing spatial resolution and while ensuring spatial coherence among the images treated. The incomplete multiset structure concatenates images of the two platforms at the lowest spatial resolution with the image acquired at the highest spatial resolution. As a result, the constituents of the sample analyzed are defined by a single set of distribution maps, common to all platforms used and with the highest spatial resolution, and by their related extended spectral signatures, covering the signals provided by each of the fused techniques. We demonstrate the potential of the new variant of MCR-ALS for multitechnique analysis on three case studies: (i) a model example of MIR and Raman images of a pharmaceutical mixture, (ii) FT-IR and Raman images of palatine tonsil tissue, and (iii) mass spectrometry and Raman images of bean tissue.
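
    The paper's contribution is the incomplete-multiset variant; the core MCR-ALS iteration it builds on can be sketched in a few lines. This is a minimal sketch, with nonnegativity enforced by simple clipping, a simplification of the constraint handling in real MCR-ALS implementations:

```python
import numpy as np

def mcr_als(D, S0, n_iter=100):
    """Minimal MCR-ALS: factor D (pixels x channels) into C @ S.T with
    nonnegative concentration maps C and spectra S, starting from an
    initial spectral estimate S0 (channels x components)."""
    S = S0.copy()
    for _ in range(n_iter):
        # alternate least-squares updates, clipping negatives to zero
        C = np.clip(D @ np.linalg.pinv(S.T), 0, None)
        S = np.clip((np.linalg.pinv(C) @ D).T, 0, None)
    return C, S
```

    On noiseless low-rank data with a reasonable initial estimate, the residual norm of D - C @ S.T drops rapidly over the iterations.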

  7. Ex-vivo imaging of excised tissue using vital dyes and confocal microscopy

    PubMed Central

    Johnson, Simon; Rabinovitch, Peter

    2012-01-01

    Vital dyes routinely used for staining cultured cells can also be used to stain and image live tissue slices ex-vivo. Staining tissue with vital dyes allows researchers to collect structural and functional data simultaneously and can be used for qualitative or quantitative fluorescent image collection. The protocols presented here are useful for structural and functional analysis of viable properties of cells in intact tissue slices, allowing for the collection of data in a structurally relevant environment. With these protocols, vital dyes can be applied as a research tool to disease processes and properties of tissue not amenable to cell culture based studies. PMID:22752953

  8. OpenComet: An automated tool for comet assay image analysis

    PubMed Central

    Gyori, Benjamin M.; Venkatachalam, Gireedhar; Thiagarajan, P.S.; Hsu, David; Clement, Marie-Veronique

    2014-01-01

    Reactive species such as free radicals are constantly generated in vivo and DNA is the most important target of oxidative stress. Oxidative DNA damage is used as a predictive biomarker to monitor the risk of development of many diseases. The comet assay is widely used for measuring oxidative DNA damage at a single cell level. The analysis of comet assay output images, however, poses considerable challenges. Commercial software is costly and restrictive, while free software generally requires laborious manual tagging of cells. This paper presents OpenComet, an open-source software tool providing automated analysis of comet assay images. It uses a novel and robust method for finding comets based on geometric shape attributes and segmenting the comet heads through image intensity profile analysis. Due to automation, OpenComet is more accurate, less prone to human bias, and faster than manual analysis. A live analysis functionality also allows users to analyze images captured directly from a microscope. We have validated OpenComet on both alkaline and neutral comet assay images as well as sample images from existing software packages. Our results show that OpenComet achieves high accuracy with significantly reduced analysis time. PMID:24624335
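
    OpenComet finds the head boundary automatically from the profile shape; assuming a head/tail boundary is already known, the damage metrics it reports can be sketched from a 1D intensity profile. Function and parameter names here are illustrative, not OpenComet's code:

```python
import numpy as np

def comet_metrics(profile, head_end):
    """DNA-damage metrics from a background-subtracted comet intensity
    profile (summed column intensities along the comet axis).
    head_end: index where the head region ends."""
    profile = np.asarray(profile, float)
    total = profile.sum()
    tail = profile[head_end:]
    tail_dna = tail.sum() / total  # fraction of DNA in the tail
    x = np.arange(len(profile))
    head_centroid = np.average(x[:head_end], weights=profile[:head_end])
    tail_centroid = np.average(x[head_end:], weights=tail)
    # tail moment: tail DNA fraction times centroid separation
    tail_moment = tail_dna * (tail_centroid - head_centroid)
    return tail_dna, tail_moment
```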

  10. Affordable Imaging Lab for Noninvasive Analysis of Biomass and Early Vigour in Cereal Crops

    PubMed Central

    2018-01-01

    Plant phenotyping by imaging allows automated analysis of plants for various morphological and physiological traits. In this work, we developed a low-cost RGB imaging phenotyping lab (LCP lab) for low-throughput imaging and analysis using affordable imaging equipment and freely available software. The LCP lab, comprising an RGB imaging and analysis pipeline, is set up and demonstrated with early vigour analysis in wheat. Using this lab, a few hundred pots can be photographed in a day, with the pots tracked via QR codes. The software pipeline for both imaging and analysis is built from freely available software. The LCP lab was evaluated for early vigour analysis of five wheat cultivars. A high coefficient of determination (R2 = 0.94) was obtained between the dry weight and the projected leaf area of 20-day-old wheat plants, and an R2 of 0.9 for the relative growth rate between 10 and 20 days of plant growth. A detailed description of how to set up such a lab is provided, together with the custom scripts built for imaging and analysis. The LCP lab is an affordable alternative for analysis of cereal crops when access to a high-throughput phenotyping facility is unavailable, or when the experiments require growing plants in highly controlled climate chambers. The protocols described in this work are useful for building an affordable imaging system for small-scale research projects and for education. PMID:29850536

  11. Merging dietary assessment with the adolescent lifestyle.

    PubMed

    Schap, T E; Zhu, F; Delp, E J; Boushey, C J

    2014-01-01

    The use of image-based dietary assessment methods shows promise for improving dietary self-report among children. The Technology Assisted Dietary Assessment (TADA) food record application is a self-administered food record specifically designed to address the burden and human error associated with conventional methods of dietary assessment. Users would take images of foods and beverages at all eating occasions using a mobile telephone or mobile device with an integrated camera [e.g. Apple iPhone, Apple iPod Touch (Apple Inc., Cupertino, CA, USA); Nexus One (Google, Mountain View, CA, USA)]. Once the images are taken, the images are transferred to a back-end server for automated analysis. The first step in this process is image analysis (i.e. segmentation, feature extraction and classification), which allows for automated food identification. Portion size estimation is also automated via segmentation and geometric shape template modeling. The results of the automated food identification and volume estimation can be indexed with the Food and Nutrient Database for Dietary Studies to provide a detailed diet analysis for use in epidemiological or intervention studies. Data collected during controlled feeding studies in a camp-like setting have allowed for formative evaluation and validation of the TADA food record application. This review summarises the system design and the evidence-based development of image-based methods for dietary assessment among children. © 2013 The Authors Journal of Human Nutrition and Dietetics © 2013 The British Dietetic Association Ltd.

  12. Spatial/Spectral Identification of Endmembers from AVIRIS Data using Mathematical Morphology

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; Martinez, Pablo; Gualtieri, J. Anthony; Perez, Rosa M.

    2001-01-01

    During the last several years, a number of airborne and satellite hyperspectral sensors have been developed or improved for remote sensing applications. Imaging spectrometry allows the detection of materials, objects and regions in a particular scene with a high degree of accuracy. Hyperspectral data typically consist of hundreds of thousands of spectra, so the analysis of this information is a key issue. Mathematical morphology is a widely used nonlinear technique for image analysis and pattern recognition. Although it is especially well suited to segmenting binary or grayscale images with irregular and complex shapes, its application to the classification/segmentation of multispectral or hyperspectral images has been quite rare. In this paper, we discuss a new, completely automated methodology to find endmembers in the hyperspectral data cube using mathematical morphology. The extension of classic morphology to the hyperspectral domain allows us to integrate spectral and spatial information in the analysis process. Section 2 gives details of the data and of reference results obtained with well-known endmember extraction techniques. Section 3 presents basic concepts of mathematical morphology and the technical details of our algorithm. In Section 4, the accuracy of the proposed method is tested by applying it to real hyperspectral data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Finally, Section 5 presents our main conclusions.
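
    Classic erosion and dilation have no natural ordering for pixel vectors; one way to extend them to hyperspectral data, in the spirit of the paper (the exact ordering used there is not reproduced, so the cumulative-distance scheme below is an assumption for illustration), is to order the vectors in each structuring element by cumulative spectral distance:

```python
import numpy as np

def extended_dilation(cube, win=3):
    """Extended morphological dilation for a hyperspectral cube
    (H, W, bands): within each window, output the pixel vector that is
    spectrally most distinct, i.e. maximises the cumulative Euclidean
    distance to the other pixels in the window."""
    H, W, B = cube.shape
    r = win // 2
    out = np.empty_like(cube)
    for i in range(H):
        for j in range(W):
            block = cube[max(0, i - r):i + r + 1,
                         max(0, j - r):j + r + 1].reshape(-1, B)
            # pairwise spectral distances within the window
            d = np.linalg.norm(block[:, None] - block[None, :], axis=2)
            out[i, j] = block[d.sum(axis=1).argmax()]
    return out
```

    Replacing argmax with argmin gives the corresponding extended erosion; the dilation propagates spectrally "pure" pixels, which is what makes it useful for endmember search.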

  13. Development of a national electronic interval cancer review for breast screening

    NASA Astrophysics Data System (ADS)

    Halling-Brown, M. D.; Patel, M. N.; Wallis, M. G.; Young, K. C.

    2018-03-01

    Review of interval cancers and of prior screening mammograms is a key measure for monitoring screening performance. Radiological analysis of the imaging features in prior mammograms, and their retrospective classification, are an important educational tool that helps readers improve their individual performance. The requirements of remote, collaborative image review sessions, such as those needed to run a remote interval cancer review, are variable and demand a flexible, configurable software solution that is not currently available on commercial workstations. The wide range of requirements for both collection and remote review of interval cancers has prompted the creation of extensible medical image viewers and accompanying systems. To allow remote viewing, an application has been designed for workstation-independent, PACS-less viewing of, and interaction with, medical images in a remote, collaborative manner, providing centralised reporting and web-based feedback. A semi-automated process for centralising interval cancer cases has been developed. This stand-alone, flexible image collection toolkit provides the extremely important function of bespoke, ad-hoc image collection at sites with no dedicated hardware. Web interfaces allow a national or regional administrator to organise, coordinate and administer interval cancer review sessions and to invite session members to participate. The same interface allows feedback to be analysed and distributed. The eICR provides a uniform process for classifying interval cancers across the NHSBSP, facilitating rapid access to a robust 'external' review for patients and their relatives seeking answers about why their cancer was 'missed'.

  14. Microfluidic electrochemical device and process for chemical imaging and electrochemical analysis at the electrode-liquid interface in-situ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Xiao-Ying; Liu, Bingwen; Yang, Li

    2016-03-01

    A microfluidic electrochemical device and process are detailed that provide chemical imaging and electrochemical analysis under vacuum at the surface of the electrode-sample or electrode-liquid interface in-situ. The electrochemical device allows investigation of various surface layers including diffuse layers at selected depths populated with, e.g., adsorbed molecules in which chemical transformation in electrolyte solutions occurs.

  15. Component pattern analysis of chemicals using multispectral THz imaging system

    NASA Astrophysics Data System (ADS)

    Kawase, Kodo; Ogawa, Yuichi; Watanabe, Yuki

    2004-04-01

    We have developed a novel basic technology for terahertz (THz) imaging that allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Furthermore, we applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.

  16. False dyssynchrony: problem with image-based cardiac functional analysis using x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Kidoh, Masafumi; Shen, Zeyang; Suzuki, Yuki; Ciuffo, Luisa; Ashikaga, Hiroshi; Fung, George S. K.; Otake, Yoshito; Zimmerman, Stefan L.; Lima, Joao A. C.; Higuchi, Takahiro; Lee, Okkyun; Sato, Yoshinobu; Becker, Lewis C.; Fishman, Elliot K.; Taguchi, Katsuyuki

    2017-03-01

    We have developed a digitally synthesized patient, the "Zach" (Zero millisecond Adjustable Clinical Heart) phantom, which provides access to the ground truth and allows assessment of image-based cardiac functional analysis (CFA) using CT images with clinically realistic settings. The study using the Zach phantom revealed a major problem with image-based CFA: "false dyssynchrony." Even when the true motion of wall segments is synchronous, it may appear dyssynchronous in the reconstructed cardiac CT images. This is attributable to how cardiac images are reconstructed and how wall locations are updated over cardiac phases. The presence and degree of false dyssynchrony may vary from scan to scan, which could degrade the accuracy and the repeatability (or precision) of image-based CT-CFA exams.

  17. Retina Image Screening and Analysis Software Version 2.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Aykac, Deniz

    2009-04-01

    The software allows physicians or researchers to ground-truth images of retinas, identifying key physiological features and lesions that are indicative of disease. The software features methods to automatically detect the physiological features and lesions. The software contains code to measure the quality of images received from a telemedicine network; create and populate a database for a telemedicine network; review and report the diagnosis of a set of images; and also contains components to transmit images from a Zeiss camera to the network through SFTP.

  18. Spatiotemporal image correlation analysis of blood flow in branched vessel networks of zebrafish embryos

    NASA Astrophysics Data System (ADS)

    Ceffa, Nicolo G.; Cesana, Ilaria; Collini, Maddalena; D'Alfonso, Laura; Carra, Silvia; Cotelli, Franco; Sironi, Laura; Chirico, Giuseppe

    2017-10-01

    Ramification of blood circulation is relevant in a number of physiological and pathological conditions. Oxygen exchange occurs largely in the capillary bed, and cancer progression is closely linked to angiogenesis around the tumor mass. Optical microscopy has made impressive advances in in vivo imaging and in dynamic studies based on correlation analysis of time stacks of images. Here, we develop and test advanced methods that allow mapping of the flow fields in branched vessel networks at a resolution of 10 to 20 μm. The methods, based on spatiotemporal image correlation spectroscopy and its extension to cross-correlation analysis, are applied here to early-stage zebrafish embryos.
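
    At the heart of such correlation-based flow mapping is locating the peak of the spatial cross-correlation between frames. A minimal FFT-based sketch of that single step (full spatiotemporal image correlation spectroscopy fits correlation functions over space and time per subregion, which this omits):

```python
import numpy as np

def displacement(frame_a, frame_b):
    """Displacement of frame_b relative to frame_a, from the peak of
    their circular spatial cross-correlation computed via FFT."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    H, W = corr.shape
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    # wrap shifts larger than half the image to negative values
    if dy > H // 2:
        dy -= H
    if dx > W // 2:
        dx -= W
    return dy, dx
```

    Applying this to successive frames within small windows, and dividing by the frame interval, yields a local velocity estimate per window.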

  19. DiversePathsJ: diverse shortest paths for bioimage analysis.

    PubMed

    Uhlmann, Virginie; Haubold, Carsten; Hamprecht, Fred A; Unser, Michael

    2018-02-01

    We introduce a formulation for the general task of finding diverse shortest paths between two end-points. Our approach is not linked to a specific biological problem and can be applied to a large variety of images thanks to its generic implementation as a user-friendly ImageJ/Fiji plugin. It relies on the introduction of additional layers in a Viterbi path graph, which requires slight modifications to the standard Viterbi algorithm rules. This layered graph construction allows for the specification of various constraints imposing diversity between solutions. The software allows obtaining a collection of diverse shortest paths under some user-defined constraints through a convenient and user-friendly interface. It can be used alone or be integrated into larger image analysis pipelines. http://bigwww.epfl.ch/algorithms/diversepathsj. michael.unser@epfl.ch or fred.hamprecht@iwr.uni-heidelberg.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  20. Self-organizing maps: a versatile tool for the automatic analysis of untargeted imaging datasets.

    PubMed

    Franceschi, Pietro; Wehrens, Ron

    2014-04-01

    MS-based imaging approaches allow for location-specific identification of chemical components in biological samples, opening up possibilities of much more detailed understanding of biological processes and mechanisms. Data analysis, however, is challenging, mainly because of the sheer size of such datasets. This article presents a novel approach based on self-organizing maps, extending previous work in order to be able to handle the large number of variables present in high-resolution mass spectra. The key idea is to generate prototype images, representing spatial distributions of ions, rather than prototypical mass spectra. This allows for a two-stage approach, first generating typical spatial distributions and associated m/z bins, and later analyzing the interesting bins in more detail using accurate masses. The possibilities and advantages of the new approach are illustrated on an in-house dataset of apple slices. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
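
    The key idea, prototypes that are spatial distributions rather than mass spectra, can be illustrated with a minimal self-organizing map in which each training vector is a flattened ion image. The hyperparameters and schedules below are arbitrary illustrative choices, not the paper's settings:

```python
import numpy as np

def train_som(data, grid=(4, 4), n_iter=500, seed=0):
    """Minimal self-organizing map. In the spirit of the paper, each
    row of `data` is the flattened spatial intensity image of one m/z
    bin, so the learned prototypes are typical spatial distributions."""
    rng = np.random.default_rng(seed)
    ny, nx = grid
    protos = rng.random((ny * nx, data.shape[1]))
    coords = np.indices(grid).reshape(2, -1).T  # grid position per unit
    for t in range(n_iter):
        lr = 0.5 * (1 - t / n_iter)               # decaying learning rate
        sigma = max(1.0, 2.0 * (1 - t / n_iter))  # neighbourhood width
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((protos - x) ** 2).sum(axis=1))  # best match
        dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-dist2 / (2 * sigma ** 2))     # neighbourhood kernel
        protos += lr * h[:, None] * (x - protos)
    return protos.reshape(ny, nx, -1)
```

    After training, reshaping each prototype back to the image dimensions gives a small map of "typical" ion distributions, and the m/z bins assigned to each unit can then be inspected at full mass accuracy.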

  1. Informatics in radiology: automated structured reporting of imaging findings using the AIM standard and XML.

    PubMed

    Zimmerman, Stefan L; Kim, Woojin; Boonn, William W

    2011-01-01

    Quantitative and descriptive imaging data are a vital component of the radiology report and are frequently of paramount importance to the ordering physician. Unfortunately, current methods of recording these data in the report are both inefficient and error prone. In addition, the free-text, unstructured format of a radiology report makes aggregate analysis of data from multiple reports difficult or even impossible without manual intervention. A structured reporting work flow has been developed that allows quantitative data created at an advanced imaging workstation to be seamlessly integrated into the radiology report with minimal radiologist intervention. As an intermediary step between the workstation and the reporting software, quantitative and descriptive data are converted into an extensible markup language (XML) file in a standardized format specified by the Annotation and Image Markup (AIM) project of the National Institutes of Health Cancer Biomedical Informatics Grid. The AIM standard was created to allow image annotation data to be stored in a uniform machine-readable format. These XML files containing imaging data can also be stored on a local database for data mining and analysis. This structured work flow solution has the potential to improve radiologist efficiency, reduce errors, and facilitate storage of quantitative and descriptive imaging data for research. Copyright © RSNA, 2011.
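
    The AIM schema itself is not reproduced here; as a minimal sketch of the serialisation step the workflow relies on, the following writes one quantitative measurement to XML. Element and attribute names are illustrative, not the actual AIM model:

```python
import xml.etree.ElementTree as ET

def measurement_to_xml(finding, value, unit):
    """Serialise one quantitative imaging measurement as a small XML
    document, in the spirit of the AIM-based reporting workflow."""
    root = ET.Element("ImageAnnotation")
    m = ET.SubElement(root, "Measurement", name=finding, unit=unit)
    m.text = str(value)
    return ET.tostring(root, encoding="unicode")
```

    Because the output is machine-readable XML rather than free text, the same files can be parsed back for aggregate analysis or loaded into a database for data mining, which is the work flow benefit the abstract describes.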

  2. Systems engineering analysis of five 'as-manufactured' SXI telescopes

    NASA Astrophysics Data System (ADS)

    Harvey, James E.; Atanassova, Martina; Krywonos, Andrey

    2005-09-01

    Four flight models and a spare of the Solar X-ray Imager (SXI) telescope mirrors have been fabricated. The first of these is scheduled to be launched on the NOAA GOES-N satellite on July 29, 2005. A complete systems engineering analysis of the "as-manufactured" telescope mirrors has been performed that includes diffraction effects, residual design errors (aberrations), surface scatter effects, and all of the miscellaneous errors in the mirror manufacturer's error budget tree. Finally, a rigorous analysis of mosaic detector effects has been included. SXI is a staring telescope providing full solar disc images at X-ray wavelengths. For wide-field applications such as this, a field-weighted-average measure of resolution has been modeled. Our performance predictions have allowed us to use metrology data to model the "as-manufactured" performance of the X-ray telescopes and to adjust the final focal plane location to optimize the number of spatial resolution elements in a given operational field-of-view (OFOV) for either the aerial image or the detected image. The resulting performance predictions from five separate mirrors allow us to evaluate and quantify the optical fabrication process for producing these very challenging grazing incidence X-ray optics.

  3. 3D Tracking Based Augmented Reality for Cultural Heritage Data Management

    NASA Astrophysics Data System (ADS)

    Battini, C.; Landi, G.

    2015-02-01

    The development of contactless documentation techniques is allowing researchers to collect high volumes of three-dimensional data in a short time and with high levels of accuracy. The digitalisation of cultural heritage opens up the possibility of using image processing and analysis, and computer graphics techniques, to preserve this heritage for future generations, augmenting it with additional information or with new possibilities for its enjoyment and use. The collection of precise datasets about cultural heritage status is crucial for its interpretation and conservation, and during restoration processes. The application of digital-imaging solutions for feature extraction, image data-analysis techniques, and three-dimensional reconstruction of ancient artworks allows the creation of multidimensional models that can incorporate information coming from heterogeneous data sets, research results and historical sources. Real objects can be scanned and reconstructed virtually, with high levels of data accuracy and resolution. Real-time visualisation software and hardware is rapidly evolving, and complex three-dimensional models can be interactively visualised and explored in applications developed for mobile devices. This paper will show how a 3D reconstruction of an object, with multiple layers of information, can be stored and visualised through a mobile application that will allow interaction with a physical object for its study and analysis, using 3D Tracking based Augmented Reality techniques.

  4. Technical Advance: Live-imaging analysis of human dendritic cell migrating behavior under the influence of immune-stimulating reagents in an organotypic model of lung

    PubMed Central

    Nguyen Hoang, Anh Thu; Chen, Puran; Björnfot, Sofia; Högstrand, Kari; Lock, John G.; Grandien, Alf; Coles, Mark; Svensson, Mattias

    2014-01-01

    This manuscript describes technical advances allowing manipulation and quantitative analyses of human DC migratory behavior in lung epithelial tissue. DCs are hematopoietic cells essential for the maintenance of tissue homeostasis and the induction of tissue-specific immune responses. Important functions include cytokine production and migration in response to infection for the induction of proper immune responses. To design appropriate strategies to exploit human DC functional properties in lung tissue for the purpose of clinical evaluation, e.g., candidate vaccination and immunotherapy strategies, we have developed a live-imaging assay based on our previously described organotypic model of the human lung. This assay allows provocations and subsequent quantitative investigations of DC functional properties under conditions mimicking morphological and functional features of the in vivo parental tissue. We present protocols to set up and prepare tissue models for 4D (x, y, z, time) fluorescence-imaging analysis that allow spatial and temporal studies of human DCs in live epithelial tissue, followed by flow cytometry analysis of DCs retrieved from digested tissue models. This model system can be useful for elucidating incompletely defined pathways controlling DC functional responses to infection and inflammation in lung epithelial tissue, as well as the efficacy of locally administered candidate interventions. PMID:24899587

  5. Design of a Web-tool for diagnostic clinical trials handling medical imaging research.

    PubMed

    Baltasar Sánchez, Alicia; González-Sistal, Angel

    2011-04-01

    New clinical studies in medicine are based on patients and controls using different imaging diagnostic modalities. Medical information systems are not designed for clinical trials employing clinical imaging. Although commercial software and communication systems focus on storage of image data, they are not suitable for storage and mining of new types of quantitative data. We sought to design a Web-tool to support diagnostic clinical trials involving different experts and hospitals or research centres. The image analysis of this project is based on skeletal X-ray imaging. It involves a computerised image method using quantitative analysis of regions of interest in healthy bone and skeletal metastases. The database is implemented with ASP.NET 3.5 and C# technologies for our Web-based application. For data storage, we chose MySQL v.5.0, one of the most popular open source databases. User logins were necessary, and access to patient data was logged for auditing. For security, all data transmissions were carried over encrypted connections. This Web-tool is available to users scattered at different locations; it allows an efficient organisation and storage of data (case report form) and images and allows each user to know precisely what his task is. The advantages of our Web-tool are as follows: (1) sustainability is guaranteed; (2) network locations for collection of data are secured; (3) all clinical information is stored together with the original images and the results derived from processed images and statistical analysis that enable us to perform retrospective studies; (4) changes are easily incorporated because of the modular architecture; and (5) assessment of trial data collected at different sites is centralised to reduce statistical variance.

  6. Quantitative CT imaging for adipose tissue analysis in mouse model of obesity

    NASA Astrophysics Data System (ADS)

    Marchadier, A.; Vidal, C.; Tafani, J.-P.; Ordureau, S.; Lédée, R.; Léger, C.

    2011-03-01

    In obese humans, CT imaging is a validated method for follow up studies of adipose tissue distribution and quantification of visceral and subcutaneous fat. Equivalent methods in murine models of obesity are still lacking. Current small animal micro-CT involves long-term X-ray exposure precluding longitudinal studies. We have overcome this limitation by using a human medical CT which allows very fast 3D imaging (2 sec) and minimal radiation exposure. This work presents novel methods fitted to in vivo investigations of a mouse model of obesity, allowing (i) automated detection of adipose tissue in abdominal regions of interest, (ii) quantification of visceral and subcutaneous fat. For each mouse, 1000 slices (100 μm thickness, 160 μm resolution) were acquired in 2 sec using a Toshiba medical CT (135 kV, 400 mAs). A Gaussian mixture model of the Hounsfield curve of 2D slices was computed with the Expectation Maximization algorithm. Identification of each Gaussian part allowed the automatic classification of adipose tissue voxels. The abdominal region of interest (umbilical) was automatically detected as the slice showing the highest ratio of the Gaussian proportion between adipose and lean tissues. Segmentation of visceral and subcutaneous fat compartments was achieved with 2D 1/2 level set methods. Our results show that the application of human clinical CT to mice is a promising approach for the study of obesity, allowing valuable comparison between species using the same imaging materials and software analysis.
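The classification step above fits a Gaussian mixture to the histogram of Hounsfield values and assigns voxels to the adipose component. A minimal, generic sketch of that idea on synthetic 1D data (a hand-rolled two-component EM, not the authors' implementation; the HU means used here are illustrative) looks like this:

```python
import numpy as np

def fit_gmm_1d(x, mu, sigma, weight, n_iter=50):
    """Two-component 1D Gaussian mixture fitted with Expectation Maximization.
    Returns updated parameters and per-voxel posterior responsibilities."""
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for every voxel
        pdf = np.stack([
            w / (s * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - m) / s) ** 2)
            for m, s, w in zip(mu, sigma, weight)
        ])
        resp = pdf / pdf.sum(axis=0)
        # M-step: re-estimate means, standard deviations and mixing weights
        nk = resp.sum(axis=1)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        weight = nk / x.size
    return mu, sigma, weight, resp

# Synthetic Hounsfield values: adipose tissue near -100 HU, lean tissue near +40 HU
rng = np.random.default_rng(0)
hu = np.concatenate([rng.normal(-100, 20, 500), rng.normal(40, 20, 500)])
mu, sigma, weight, resp = fit_gmm_1d(
    hu, mu=np.array([-50.0, 0.0]), sigma=np.array([50.0, 50.0]),
    weight=np.array([0.5, 0.5]))
# Voxels assigned to the lower-HU component are classified as adipose tissue
adipose_mask = resp[np.argmin(mu)] > 0.5
```

In the study this classification is run per slice, and the ratio of the adipose to lean mixing proportions is what locates the umbilical region of interest.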

  7. FPGA-based multi-channel fluorescence lifetime analysis of Fourier multiplexed frequency-sweeping lifetime imaging

    PubMed Central

    Zhao, Ming; Li, Yu; Peng, Leilei

    2014-01-01

    We report a fast non-iterative lifetime data analysis method for the Fourier multiplexed frequency-sweeping confocal FLIM (Fm-FLIM) system [Opt. Express 22, 10221 (2014)]. The new method, named R-method, allows fast multi-channel lifetime image analysis in the system’s FPGA data processing board. Experimental tests proved that the performance of the R-method is equivalent to that of single-exponential iterative fitting, and its sensitivity is well suited for time-lapse FLIM-FRET imaging of live cells, for example cyclic adenosine monophosphate (cAMP) level imaging with GFP-Epac-mCherry sensors. With the R-method and its FPGA implementation, multi-channel lifetime images can now be generated in real time on the multi-channel frequency-sweeping FLIM system, and live readout of FRET sensors can be performed during time-lapse imaging. PMID:25321778
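The abstract contrasts the R-method with iterative fitting but does not spell out its equations. For background, the textbook frequency-domain relation it competes with recovers a single-exponential lifetime non-iteratively from the phase lag at the modulation frequency, tau = tan(phi)/(2*pi*f). The sketch below demonstrates that relation on synthetic data; it is a stand-in for illustration, not the paper's R-method:

```python
import numpy as np

def lifetime_from_phase(signal, t, f_mod):
    """Estimate a single-exponential fluorescence lifetime from the phase lag
    of the emission at the modulation frequency: tau = tan(phi) / (2*pi*f)."""
    w = 2.0 * np.pi * f_mod
    g = np.mean(signal * np.cos(w * t))   # in-phase Fourier coefficient
    s = np.mean(signal * np.sin(w * t))   # quadrature Fourier coefficient
    phi = np.arctan2(s, g)                # phase lag of the emission
    return np.tan(phi) / w

# Synthetic emission: a 2 ns lifetime probed at 40 MHz over 10 full periods
f_mod, tau_true = 40e6, 2e-9
t = np.arange(4000) * (10.0 / f_mod / 4000)          # 400 samples per period
phi_true = np.arctan(2 * np.pi * f_mod * tau_true)   # phase lag for this tau
emission = 1.0 + 0.5 * np.cos(2 * np.pi * f_mod * t - phi_true)
tau_est = lifetime_from_phase(emission, t, f_mod)
```

Like the R-method, this computation is non-iterative (a few multiply-accumulates per pixel), which is what makes real-time evaluation on an FPGA plausible.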

  8. GelScape: a web-based server for interactively annotating, manipulating, comparing and archiving 1D and 2D gel images.

    PubMed

    Young, Nelson; Chang, Zhan; Wishart, David S

    2004-04-12

    GelScape is a web-based tool that permits facile, interactive annotation, comparison, manipulation and storage of protein gel images. It uses Java applet-servlet technology to allow rapid, remote image handling and image processing in a platform-independent manner. It supports many of the features found in commercial, stand-alone gel analysis software including spot annotation, spot integration, gel warping, image resizing, HTML image mapping, image overlaying as well as the storage of gel image and gel annotation data in compliance with Federated Gel Database requirements.

  9. The integration of a LANDSAT analysis capability with a geographic information system

    NASA Technical Reports Server (NTRS)

    Nordstrand, E. A.

    1981-01-01

    The integration of LANDSAT data was achieved through the development of a flexible, compatible analysis tool and using an existing data base to select the usable data from a LANDSAT analysis. The software package allows manipulation of grid cell data and gives the user the flexibility to include FORTRAN statements for special functions. Using this combination of capabilities the user can classify a LANDSAT image and then selectively merge the results with other data that may exist for the study area.

  10. Motmot, an open-source toolkit for realtime video acquisition and analysis.

    PubMed

    Straw, Andrew D; Dickinson, Michael H

    2009-07-22

    Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Motmot enables realtime image processing and display using the Python computer language. 
In addition to the provided complete applications, the architecture allows the user to write relatively simple plugins, which can accomplish a variety of computer vision tasks and be integrated within larger software systems. The software is available at http://code.astraw.com/projects/motmot.
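Motmot's plugin architecture is described above in prose; its actual API is not reproduced here. The following is a hypothetical sketch of the general pattern (a registry of per-frame callbacks driven by an acquisition loop); all class and function names are invented for illustration:

```python
import numpy as np

class FrameAnalysisPipeline:
    """Toy pluggable realtime-analysis loop in the spirit of FView plugins.
    Names and structure are invented; this is not Motmot's API."""

    def __init__(self):
        self._plugins = []

    def register(self, fn):
        """Register a per-frame analysis callback (usable as a decorator)."""
        self._plugins.append(fn)
        return fn

    def process_frame(self, frame):
        """Run every registered plugin on one frame, collecting results."""
        return {fn.__name__: fn(frame) for fn in self._plugins}

pipeline = FrameAnalysisPipeline()

@pipeline.register
def brightest_pixel(frame):
    # A toy tracker: report the (row, col) of the brightest pixel,
    # e.g. a fluorescent marker or an LED synchronization flash.
    y, x = np.unravel_index(int(np.argmax(frame)), frame.shape)
    return int(y), int(x)

frame = np.zeros((480, 640), dtype=np.uint8)
frame[100, 200] = 255
result = pipeline.process_frame(frame)
```

The appeal of this structure, as the abstract notes, is that users write only the small per-frame callback while acquisition, display, and saving remain in the optimized core.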

  11. Evaluating performance of biomedical image retrieval systems – an overview of the medical image retrieval task at ImageCLEF 2004–2013

    PubMed Central

    Kalpathy-Cramer, Jayashree; de Herrera, Alba García Seco; Demner-Fushman, Dina; Antani, Sameer; Bedrick, Steven; Müller, Henning

    2014-01-01

    Medical image retrieval and classification have been extremely active research topics over the past 15 years. With the ImageCLEF benchmark in medical image retrieval and classification a standard test bed was created that allows researchers to compare their approaches and ideas on increasingly large and varied data sets including generated ground truth. This article describes the lessons learned in ten evaluation campaigns. A detailed analysis of the data also highlights the value of the resources created. PMID:24746250

  12. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    PubMed

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that an inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with a built-in graphics capture board, provides versatile, easy-to-use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.
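The diameter measurement described above amounts to counting foreground pixels across a vessel in a thresholded image and converting by the pixel scale. A minimal sketch of that computation (a simplified stand-in for the manual Photoshop workflow; the scale factor is illustrative):

```python
import numpy as np

def mean_vessel_diameter(mask, microns_per_pixel):
    """Mean diameter of a roughly vertical vessel segment, estimated as the
    average foreground width per image row of a thresholded binary mask."""
    widths = mask.sum(axis=1)      # foreground pixels in each row
    widths = widths[widths > 0]    # ignore rows that contain no vessel
    return float(widths.mean()) * microns_per_pixel

# Synthetic mask: a vessel 6 pixels wide imaged at 1.5 um/pixel
mask = np.zeros((50, 40), dtype=bool)
mask[5:45, 17:23] = True
diameter = mean_vessel_diameter(mask, microns_per_pixel=1.5)
```

Functional capillary density and dye leakage are quantified analogously, as foreground length or intensity per unit area after thresholding.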

  13. High-throughput high-volume nuclear imaging for preclinical in vivo compound screening§.

    PubMed

    Macholl, Sven; Finucane, Ciara M; Hesterman, Jacob; Mather, Stephen J; Pauplis, Rachel; Scully, Deirdre; Sosabowski, Jane K; Jouannot, Erwan

    2017-12-01

    Preclinical single-photon emission computed tomography (SPECT)/CT imaging studies are hampered by low throughput, hence are found typically within small volume feasibility studies. Here, imaging and image analysis procedures are presented that allow profiling of a large volume of radiolabelled compounds within a reasonably short total study time. Particular emphasis was put on quality control (QC) and on fast and unbiased image analysis. Two to three His-tagged proteins were simultaneously radiolabelled by 99mTc-tricarbonyl methodology and injected intravenously (20 nmol/kg; 100 MBq; n = 3) into patient-derived xenograft (PDX) mouse models. Whole-body SPECT/CT images of 3 mice simultaneously were acquired 1, 4, and 24 h post-injection, extended to 48 h and/or by 0-2 h dynamic SPECT for pre-selected compounds. Organ uptake was quantified by automated multi-atlas and manual segmentations. Data were plotted automatically, quality controlled and stored on a collaborative image management platform. Ex vivo uptake data were collected semi-automatically and analysis performed as for imaging data. >500 single animal SPECT images were acquired for 25 proteins over 5 weeks, eventually generating >3500 ROI and >1000 items of tissue data. SPECT/CT images clearly visualized uptake in tumour and other tissues even at 48 h post-injection. Intersubject uptake variability was typically 13% (coefficient of variation, COV). Imaging results correlated well with ex vivo data. The large data set of tumour, background and systemic uptake/clearance data from 75 mice for 25 compounds allows identification of compounds of interest. The number of animals required was reduced considerably by longitudinal imaging compared to dissection experiments. All experimental work and analyses were accomplished within 3 months expected to be compatible with drug development programmes.
QC along all workflow steps, blinding of the imaging contract research organization to compound properties and automation provide confidence in the data set. Additional ex vivo data were useful as a control but could be omitted from future studies in the same centre. For even larger compound libraries, radiolabelling could be expedited and the number of imaging time points adapted to increase weekly throughput. Multi-atlas segmentation could be expanded via SPECT/MRI; however, this would require an MRI-compatible mouse hotel. Finally, analysis of nuclear images of radiopharmaceuticals in clinical trials may benefit from the automated analysis procedures developed.

  14. Challenges and opportunities for quantifying roots and rhizosphere interactions through imaging and image analysis.

    PubMed

    Downie, H F; Adu, M O; Schmidt, S; Otten, W; Dupuy, L X; White, P J; Valentine, T A

    2015-07-01

    The morphology of roots and root systems influences the efficiency by which plants acquire nutrients and water, anchor themselves and provide stability to the surrounding soil. Plant genotype and the biotic and abiotic environment significantly influence root morphology, growth and ultimately crop yield. The challenge for researchers interested in phenotyping root systems is, therefore, not just to measure roots and link their phenotype to the plant genotype, but also to understand how the growth of roots is influenced by their environment. This review discusses progress in quantifying root system parameters (e.g. in terms of size, shape and dynamics) using imaging and image analysis technologies and also discusses their potential for providing a better understanding of root:soil interactions. Significant progress has been made in image acquisition techniques; however, trade-offs exist between sample throughput, sample size, image resolution and information gained. All of these factors impact on downstream image analysis processes. While there have been significant advances in computation power, limitations still exist in statistical processes involved in image analysis. Utilizing and combining different imaging systems, integrating measurements and image analysis where possible, and amalgamating data will allow researchers to gain a better understanding of root:soil interactions. © 2014 John Wiley & Sons Ltd.

  15. Multidimensional Processing and Visual Rendering of Complex 3D Biomedical Images

    NASA Technical Reports Server (NTRS)

    Sams, Clarence F.

    2016-01-01

    The proposed technology uses advanced image analysis techniques to maximize the resolution and utility of medical imaging methods being used during spaceflight. We utilize COTS technology for medical imaging, but our applications require higher resolution assessment of the medical images than is routinely applied with nominal system software. By leveraging advanced data reduction and multidimensional imaging techniques utilized in analysis of Planetary Sciences and Cell Biology imaging, it is possible to significantly increase the information extracted from the onboard biomedical imaging systems. Year 1 focused on application of these techniques to the ocular images collected on ground test subjects and ISS crewmembers. Focus was on the choroidal vasculature and the structure of the optic disc. Methods allowed for increased resolution and quantitation of structural changes enabling detailed assessment of progression over time. These techniques enhance the monitoring and evaluation of crew vision issues during space flight.

  16. Image enhancement and color constancy for a vehicle-mounted change detection system

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Monnin, David

    2016-10-01

    Vehicle-mounted change detection systems help improve situational awareness on outdoor itineraries of interest. Since the visibility of acquired images is often affected by illumination effects (e.g., shadows) it is important to enhance local contrast. For the analysis and comparison of color images depicting the same scene at different time points it is required to compensate color and lightness inconsistencies caused by the different illumination conditions. We have developed an approach for image enhancement and color constancy based on the center/surround Retinex model and the Gray World hypothesis. The combination of the two methods using a color processing function improves color rendition compared with either method alone. The use of stacked integral images (SII) allows local image processing to be performed efficiently. Our combined Retinex/Gray World approach has been successfully applied to image sequences acquired on outdoor itineraries at different time points and a comparison with previous Retinex-based approaches has been carried out.
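Both ingredients named above are classical and easy to sketch: Gray World rescales each channel so its mean matches the global mean, and single-scale center/surround Retinex takes the log ratio of each pixel to a blurred "surround". The sketch below uses a box blur as the surround (the authors use stacked integral images for efficiency and a multi-method combination; none of that is reproduced here):

```python
import numpy as np

def gray_world(img):
    """Gray World hypothesis: rescale each channel so its mean equals the
    image's global mean, discounting the illuminant's color cast."""
    means = img.reshape(-1, img.shape[-1]).mean(axis=0)
    return img * (means.mean() / means)

def box_blur(a, r):
    """Separable box blur via cumulative sums (the surround estimate)."""
    k = 2 * r + 1
    p = np.pad(a, r, mode="edge").cumsum(axis=0)
    rows = np.vstack([p[k - 1:k], p[k:] - p[:-k]])      # windowed row sums
    c = rows.cumsum(axis=1)
    out = np.hstack([c[:, k - 1:k], c[:, k:] - c[:, :-k]])
    return out / (k * k)

def retinex(channel, r=8, eps=1e-6):
    """Single-scale center/surround Retinex on one channel: log ratio of
    each pixel to its local surround."""
    return np.log(channel + eps) - np.log(box_blur(channel, r) + eps)

# Example: balance colors, then flatten illumination on one channel
rng = np.random.default_rng(1)
img = rng.uniform(0.1, 1.0, (32, 32, 3))
balanced = gray_world(img)
enhanced = retinex(balanced[..., 0])
```

The cumulative-sum blur plays the same computational role as the paper's integral images: the cost per pixel is constant regardless of the surround radius.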

  17. ProstateAnalyzer: Web-based medical application for the management of prostate cancer using multiparametric MR imaging.

    PubMed

    Mata, Christian; Walker, Paul M; Oliver, Arnau; Brunotte, François; Martí, Joan; Lalande, Alain

    2016-01-01

    In this paper, we present ProstateAnalyzer, a new web-based medical tool for prostate cancer diagnosis. ProstateAnalyzer allows the visualization and analysis of magnetic resonance images (MRI) in a single framework. ProstateAnalyzer recovers the data from a PACS server and displays all the associated MRI images in the same framework, usually consisting of 3D T2-weighted imaging for anatomy, dynamic contrast-enhanced MRI for perfusion, diffusion-weighted imaging in the form of an apparent diffusion coefficient (ADC) map and MR Spectroscopy. ProstateAnalyzer allows annotating regions of interest in a sequence and propagates them to the others. From a representative case, the results using the four visualization platforms are fully detailed, showing the interaction among them. The tool has been implemented as a Java-based applet to facilitate its portability across computer architectures and software environments and to allow remote work via the web. ProstateAnalyzer enables experts to manage prostate cancer patient data sets more efficiently. The tool allows experts to delineate annotations and displays all the information required for diagnosis. According to the current European Society of Urogenital Radiology guidelines, it also includes the PI-RADS structured reporting scheme.

  18. Fracture risk assessment: improved evaluation of vertebral integrity among metastatic cancer patients to aid in surgical decision-making

    NASA Astrophysics Data System (ADS)

    Augustine, Kurt E.; Camp, Jon J.; Holmes, David R.; Huddleston, Paul M.; Lu, Lichun; Yaszemski, Michael J.; Robb, Richard A.

    2012-03-01

    Failure of the spine's structural integrity from metastatic disease can lead to both pain and neurologic deficit. Fractures that require treatment occur in over 30% of bony metastases. Our objective is to use computed tomography (CT) in conjunction with analytic techniques that have been previously developed to predict fracture risk in cancer patients with metastatic disease to the spine. Current clinical practice for cancer patients with spine metastasis often requires an empirical decision regarding spinal reconstructive surgery. Early image-based software systems used for CT analysis are time consuming and poorly suited for clinical application. The Biomedical Image Resource (BIR) at Mayo Clinic, Rochester, has developed an image analysis computer program that calculates from CT scans the residual load-bearing capacity in a vertebra with metastatic cancer. The Spine Cancer Assessment (SCA) program is built on a platform designed for clinical practice, with a workflow format that allows for rapid selection of patient CT exams, followed by guided image analysis tasks, resulting in a fracture risk report. The analysis features allow the surgeon to quickly isolate a single vertebra and obtain an immediate pre-surgical multiple parallel section composite beam fracture risk analysis based on algorithms developed at Mayo Clinic. The analysis software is undergoing clinical validation studies. We expect this approach will facilitate patient management and utilization of reliable guidelines for selecting among various treatment options based on fracture risk.

  19. The role of PET quantification in cardiovascular imaging.

    PubMed

    Slomka, Piotr; Berman, Daniel S; Alexanderson, Erick; Germano, Guido

    2014-08-01

    Positron Emission Tomography (PET) has several clinical and research applications in cardiovascular imaging. Myocardial perfusion imaging with PET allows accurate global and regional measurements of myocardial perfusion, myocardial blood flow and function at stress and rest in one exam. Simultaneous assessment of function and perfusion by PET with quantitative software is currently the routine practice. Combination of ejection fraction reserve with perfusion information may improve the identification of severe disease. The myocardial viability can be estimated by quantitative comparison of fluorodeoxyglucose (18FDG) and rest perfusion imaging. The myocardial blood flow and coronary flow reserve measurements are becoming routinely included in the clinical assessment due to enhanced dynamic imaging capabilities of the latest PET/CT scanners. Absolute flow measurements allow evaluation of the coronary microvascular dysfunction and provide additional prognostic and diagnostic information for coronary disease. Standard quantitative approaches to compute myocardial blood flow from kinetic PET data in automated and rapid fashion have been developed for 13N-ammonia, 15O-water and 82Rb radiotracers. The agreement between software methods available for such analysis is excellent. Relative quantification of 82Rb PET myocardial perfusion, based on comparisons to normal databases, demonstrates high performance for the detection of obstructive coronary disease. New tracers, such as 18F-flurpiridaz, may allow further improvements in the disease detection. Computerized analysis of perfusion at stress and rest reduces the variability of the assessment as compared to visual analysis. PET quantification can be enhanced by precise coregistration with CT angiography.
In emerging clinical applications, the potential to identify vulnerable plaques by quantification of atherosclerotic plaque uptake of 18FDG and 18F-sodium fluoride tracers in carotids, aorta and coronary arteries has been demonstrated.

  20. Integrated analysis of remote sensing products from basic geological surveys. [Brazil

    NASA Technical Reports Server (NTRS)

    Dasilvafagundesfilho, E. (Principal Investigator)

    1984-01-01

    Recent advances in remote sensing have led to the development of several techniques for obtaining image information. These techniques are analyzed as effective tools in geological mapping. A strategy for optimizing the use of images in basic geological surveying is presented. It embraces an integrated analysis of spatial, spectral, and temporal data through photoptic (color additive viewer) and computer processing at different scales, allowing large areas to be surveyed in a fast, precise, and low-cost manner.

  1. Automated processing of zebrafish imaging data: a survey.

    PubMed

    Mikut, Ralf; Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A; Kausler, Bernhard X; Ledesma-Carbayo, María J; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine

    2013-09-01

    Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines.

  3. Imaging mass spectrometry data reduction: automated feature identification and extraction.

    PubMed

    McDonnell, Liam A; van Remoortere, Alexandra; de Velde, Nico; van Zeijl, René J M; Deelder, André M

    2010-12-01

    Imaging MS now enables the parallel analysis of hundreds of biomolecules, spanning multiple molecular classes, which allows tissues to be described by their molecular content and distribution. When combined with advanced data analysis routines, tissues can be analyzed and classified based solely on their molecular content. Such molecular histology techniques have been used to distinguish regions with differential molecular signatures that could not be distinguished using established histologic tools. However, its potential to provide an independent, complementary analysis of clinical tissues has been limited by the very large file sizes and large number of discrete variables associated with imaging MS experiments. Here we demonstrate data reduction tools, based on automated feature identification and extraction, for peptide, protein, and lipid imaging MS, using multiple imaging MS technologies, that reduce data loads and the number of variables by >100×, and that highlight highly-localized features that can be missed using standard data analysis strategies. It is then demonstrated how these capabilities enable multivariate analysis on large imaging MS datasets spanning multiple tissues. Copyright © 2010 American Society for Mass Spectrometry. Published by Elsevier Inc. All rights reserved.
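    The feature-based data reduction described above can be sketched as follows: detect peaks above a noise threshold in each spectrum, take the union of detected peak channels as the feature set, and keep only those channels for every spectrum. The function names and threshold are illustrative assumptions, not the authors' implementation.

```python
def detect_peaks(spectrum, threshold):
    """Return indices of local maxima above an intensity threshold."""
    peaks = []
    for i in range(1, len(spectrum) - 1):
        if spectrum[i] > threshold and spectrum[i] >= spectrum[i - 1] and spectrum[i] > spectrum[i + 1]:
            peaks.append(i)
    return peaks

def reduce_dataset(spectra, threshold):
    """Reduce each full spectrum to intensities at the union of detected peaks."""
    feature_idx = sorted({i for s in spectra for i in detect_peaks(s, threshold)})
    reduced = [[s[i] for i in feature_idx] for s in spectra]
    return feature_idx, reduced

# Two toy 10-channel spectra sharing peaks at channels 2 and 6.
spectra = [
    [0, 1, 9, 1, 0, 0, 5, 0, 0, 0],
    [0, 0, 8, 2, 0, 0, 6, 1, 0, 0],
]
idx, reduced = reduce_dataset(spectra, threshold=3)
print(idx)      # → [2, 6]
print(reduced)  # → [[9, 5], [8, 6]]
```

On real data with hundreds of thousands of channels, keeping only the peak channels is what yields the >100× reduction in variables reported above.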

  4. A method for the automated processing and analysis of images of ULVWF-platelet strings.

    PubMed

    Reeve, Scott R; Abbitt, Katherine B; Cruise, Thomas D; Hose, D Rodney; Lawford, Patricia V

    2013-01-01

    We present a method for identifying and analysing unusually large von Willebrand factor (ULVWF)-platelet strings in noisy low-quality images. The method requires relatively inexpensive, non-specialist equipment and allows multiple users to be employed in the capture of images. Images are subsequently enhanced and analysed, using custom-written software to perform the processing tasks. The formation and properties of ULVWF-platelet strings released in in vitro flow-based assays have recently become a popular research area. Endothelial cells are incorporated into a flow chamber, chemically stimulated to induce ULVWF release and perfused with isolated platelets which are able to bind to the ULVWF to form strings. The numbers and lengths of the strings released are related to characteristics of the flow. ULVWF-platelet strings are routinely identified by eye from video recordings captured during experiments and analysed manually using basic NIH image software to determine the number of strings and their lengths. This is a laborious, time-consuming task and a single experiment, often consisting of data from four to six dishes of endothelial cells, can take 2 or more days to analyse. The method described here allows analysis of the strings to provide data such as the number and length of strings, number of platelets per string and the distance between each platelet to be found. The software reduces analysis time, and more importantly removes user subjectivity, producing highly reproducible results with an error of less than 2% when compared with detailed manual analysis.
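    The string metrics listed above (number of strings, string length, platelets per string, inter-platelet distances) can be computed directly once platelet centroids have been detected. The following sketch assumes centroids are already ordered along each string; the data structures are hypothetical, not the authors' software.

```python
import math

def string_metrics(strings):
    """Per-string metrics from ordered platelet centroids (x, y) in microns.
    `strings` is a list of strings; each string is a list of coordinates."""
    report = []
    for platelets in strings:
        gaps = [math.dist(a, b) for a, b in zip(platelets, platelets[1:])]
        report.append({
            "platelets": len(platelets),  # platelet count for this string
            "length": sum(gaps),          # length along the string
            "gaps": gaps,                 # distance between consecutive platelets
        })
    return report

# Two toy strings: three platelets spaced 5 um apart, then two platelets 3 um apart.
result = string_metrics([[(0, 0), (5, 0), (10, 0)], [(0, 0), (0, 3)]])
print(len(result), result[0]["length"], result[1]["length"])  # → 2 10.0 3.0
```

The number of strings is simply the length of the outer list, so a whole experiment reduces to one call per captured image.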

  5. BATSE imaging survey of the Galactic plane

    NASA Technical Reports Server (NTRS)

    Grindlay, J. E.; Barret, D.; Bloser, P. F.; Zhang, S. N.; Robinson, C.; Harmon, B. A.

    1997-01-01

    The Burst and Transient Source Experiment (BATSE) onboard the Compton Gamma Ray Observatory (CGRO) provides all-sky monitoring capability, occultation analysis and occultation imaging, which enables new and fainter sources to be searched for in relatively crowded fields. The occultation imaging technique is used in combination with an automated BATSE image scanner, allowing analysis of large data sets of occultation images for detections of candidate sources and for the construction of source catalogs and databases. This automated image scanner system is being tested on archival data in order to optimize the search and detection thresholds. The image search system, its calibration results and preliminary survey results on archival data are reported. The aim of the survey is to identify a complete sample of black hole candidates in the Galaxy and constrain the number of black hole systems and neutron star systems.

  6. Optical Fourier diffractometry applied to degraded bone structure recognition

    NASA Astrophysics Data System (ADS)

    Galas, Jacek; Godwod, Krzysztof; Szawdyn, Jacek; Sawicki, Andrzej

    1993-09-01

    Image processing and recognition methods are useful in many fields. This paper presents a hybrid optical and digital method applied to the recognition of pathological changes in bones affected by metabolic bone diseases. The trabecular bone structure, recorded by x ray on photographic film, is analyzed in a new type of computer-controlled diffractometer. The set of image parameters extracted from the diffractogram is evaluated by statistical analysis. Synthetic image descriptors in discriminant space, constructed by discriminant analysis from three training groups of images (control, osteoporosis, and osteomalacia), allow us to recognize bone samples with degraded bone structure and to identify the disease. About 89% of the images were classified correctly. After optimization, this method will be verified in medical investigations.
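    The group-based classification described above can be illustrated with a minimal nearest-centroid sketch: each image descriptor is assigned to the training group whose mean descriptor is closest. This is a simplified stand-in for the paper's discriminant analysis; the group names match the abstract, but the descriptors and distance rule are illustrative assumptions.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of descriptor vectors."""
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(len(vectors[0]))]

def classify(descriptor, groups):
    """Assign a descriptor to the nearest group centroid (toy discriminant rule)."""
    return min(groups, key=lambda g: math.dist(descriptor, centroid(groups[g])))

# Toy 2D descriptors for the three training groups from the abstract.
groups = {
    "control":      [[1.0, 0.1], [0.9, 0.2]],
    "osteoporosis": [[0.2, 0.9], [0.3, 1.0]],
    "osteomalacia": [[0.5, 0.5], [0.6, 0.4]],
}
print(classify([0.25, 0.95], groups))
```

In the real method the discriminant space itself is learned from the training groups; here the raw descriptor space is used directly for brevity.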

  7. The Image Data Resource: A Bioimage Data Integration and Publication Platform.

    PubMed

    Williams, Eleanor; Moore, Josh; Li, Simon W; Rustici, Gabriella; Tarkowska, Aleksandra; Chessel, Anatole; Leo, Simone; Antal, Bálint; Ferguson, Richard K; Sarkans, Ugis; Brazma, Alvis; Salas, Rafael E Carazo; Swedlow, Jason R

    2017-08-01

    Access to primary research data is vital for the advancement of science. To extend the data types supported by community repositories, we built a prototype Image Data Resource (IDR) that collects and integrates imaging data acquired across many different imaging modalities. IDR links data from several imaging modalities, including high-content screening, super-resolution and time-lapse microscopy, digital pathology, public genetic or chemical databases, and cell and tissue phenotypes expressed using controlled ontologies. Using this integration, IDR facilitates the analysis of gene networks and reveals functional interactions that are inaccessible to individual studies. To enable re-analysis, we also established a computational resource based on Jupyter notebooks that allows remote access to the entire IDR. IDR is also an open source platform that others can use to publish their own image data. Thus IDR provides both a novel on-line resource and a software infrastructure that promotes and extends publication and re-analysis of scientific image data.

  8. Connecting Swath Satellite Data With Imagery in Mapping Applications

    NASA Astrophysics Data System (ADS)

    Thompson, C. K.; Hall, J. R.; Penteado, P. F.; Roberts, J. T.; Zhou, A. Y.

    2016-12-01

    Visualizations of gridded science data products (referred to as Level 3 or Level 4) typically provide a straightforward correlation between image pixels and the source science data. This direct relationship allows users to make initial inferences based on imagery values, facilitating additional operations on the underlying data values, such as data subsetting and analysis. However, that same pixel-to-data relationship for ungridded science data products (referred to as Level 2) is significantly more challenging. These products, also referred to as "swath products", are in orbital "instrument space" and raster visualization pixels do not directly correlate to science data values. Interpolation algorithms are often employed during the gridding or projection of a science dataset prior to image generation, introducing intermediary values that separate the image from the source data values. NASA's Global Imagery Browse Services (GIBS) is researching techniques for efficiently serving "image-ready" data allowing client-side dynamic visualization and analysis capabilities. This presentation will cover some GIBS prototyping work designed to maintain connectivity between Level 2 swath data and its corresponding raster visualizations. Specifically, we discuss the DAta-to-Image-SYstem (DAISY), an indexing approach for Level 2 swath data, and the mechanisms whereby a client may dynamically visualize the data in raster form.
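    The pixel-to-data connectivity problem described above can be illustrated with a simple grid-binning index: each swath sample is mapped to a raster cell, and the sample index is stored so a client can recover the underlying data value from a clicked pixel. This is an illustrative sketch of the general idea, not the actual DAISY indexing scheme.

```python
def build_index(samples, lon0, lat0, dlon, dlat, ncols, nrows):
    """Map swath samples (lon, lat, value) onto raster cells, keeping each
    sample's index so pixels stay linked to source data values."""
    index = {}
    for i, (lon, lat, _value) in enumerate(samples):
        col = int((lon - lon0) / dlon)
        row = int((lat - lat0) / dlat)
        if 0 <= col < ncols and 0 <= row < nrows:
            index[(row, col)] = i  # last sample wins in this simple sketch
    return index

# Two toy Level 2 samples (longitude, latitude, brightness temperature).
samples = [(10.2, 45.1, 273.0), (10.7, 45.6, 275.5)]
idx = build_index(samples, lon0=10.0, lat0=45.0, dlon=0.5, dlat=0.5, ncols=4, nrows=4)
print(samples[idx[(1, 1)]][2])  # → 275.5, the data value behind pixel (1, 1)
```

A production index would also handle overlapping footprints and interpolation weights; the point here is only that the raster cell keeps a reference back to the ungridded sample rather than a resampled value.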

  9. Faster tissue interface analysis from Raman microscopy images using compressed factorisation

    NASA Astrophysics Data System (ADS)

    Palmer, Andrew D.; Bannerman, Alistair; Grover, Liam; Styles, Iain B.

    2013-06-01

    The structure of an artificial ligament was examined using Raman microscopy in combination with novel data analysis. Basis approximation and compressed principal component analysis are shown to provide efficient compression of confocal Raman microscopy images, alongside powerful methods for unsupervised analysis. This scheme allows the acceleration of data-mining operations such as principal component analysis, as they can be performed on the compressed data representation, reducing the factorisation time of a single image from five minutes to under a second. Using this workflow, the interface region between a chemically engineered ligament construct and a bone-mimic anchor was examined. Natural ligament contains a striated interface between bone and tissue that improves mechanical load tolerance; a similar interface was found in the ligament construct.
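    The compression step behind this speed-up can be sketched in a few lines: each spectrum (a row of the data matrix) is replaced by its coefficients in a small orthonormal basis, so subsequent factorisation runs on an n × k coefficient matrix instead of the full n × d data. The toy basis below is an assumption for illustration; the paper's basis is fitted to Raman spectra.

```python
def matmul(A, B):
    """Plain-Python matrix product (adequate for this toy example)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def compress(X, basis):
    """Project rows of X (n x d) onto k orthonormal basis vectors (rows of
    `basis`), giving an n x k coefficient matrix for cheap factorisation."""
    return matmul(X, [list(col) for col in zip(*basis)])  # multiply by basis^T

# Three toy "spectra" of length 4, and 2 orthonormal basis vectors.
X = [[1, 1, 0, 0], [2, 2, 0, 0], [0, 0, 1, 1]]
basis = [[0.5 ** 0.5, 0.5 ** 0.5, 0, 0], [0, 0, 0.5 ** 0.5, 0.5 ** 0.5]]
C = compress(X, basis)
print(len(C), len(C[0]))  # → 3 2: PCA now factorises a 3 x 2 matrix, not 3 x 4
```

Because the basis is orthonormal, principal components computed from the coefficients correspond to components of the data restricted to the basis span, which is what makes factorising the compressed representation legitimate.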

  10. Cell surface and cell outline imaging in plant tissues using the backscattered electron detector in a variable pressure scanning electron microscope

    PubMed Central

    2013-01-01

    Background Scanning electron microscopy (SEM) has been used for high-resolution imaging of plant cell surfaces for many decades. Most SEM imaging employs the secondary electron detector under high vacuum to provide pseudo-3D images of plant organs and especially of surface structures such as trichomes and stomatal guard cells; these samples generally have to be metal-coated to avoid charging artefacts. Variable pressure-SEM allows examination of uncoated tissues, and provides a flexible range of options for imaging, either with a secondary electron detector or backscattered electron detector. In one application, we used the backscattered electron detector under low vacuum conditions to collect images of uncoated barley leaf tissue followed by simple quantification of cell areas. Results Here, we outline methods for backscattered electron imaging of a variety of plant tissues with particular focus on collecting images for quantification of cell size and shape. We demonstrate the advantages of this technique over other methods to obtain high contrast cell outlines, and define a set of parameters for imaging Arabidopsis thaliana leaf epidermal cells together with a simple image analysis protocol. We also show how to vary parameters such as accelerating voltage and chamber pressure to optimise imaging in a range of other plant tissues. Conclusions Backscattered electron imaging of uncoated plant tissue allows acquisition of images showing details of plant morphology together with images of high contrast cell outlines suitable for semi-automated image analysis. The method is easily adaptable to many types of tissue and suitable for any laboratory with standard SEM preparation equipment and a variable-pressure-SEM or tabletop SEM. PMID:24135233
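    The "high contrast cell outlines suitable for semi-automated image analysis" step above amounts to thresholding the outline image and measuring each enclosed region. A minimal sketch, assuming a pre-binarised image (1 = cell wall, 0 = interior), counts interior regions and their areas with a 4-connected flood fill; it is not the authors' protocol, only the standard technique it relies on.

```python
from collections import deque

def cell_areas(outline):
    """Areas (in pixels) of connected interior regions in a binary outline image."""
    h, w = len(outline), len(outline[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if outline[y][x] == 0 and not seen[y][x]:
                area, q = 0, deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    area += 1
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and outline[ny][nx] == 0 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                areas.append(area)
    return areas

# Toy image: two 2-pixel "cells" separated by a wall column.
img = [
    [1, 1, 1, 1, 1],
    [1, 0, 1, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 1, 1, 1, 1],
]
print(sorted(cell_areas(img)))  # → [2, 2]
```

Multiplying pixel areas by the calibrated pixel size then gives cell areas in physical units.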

  11. An analytical tool that quantifies cellular morphology changes from three-dimensional fluorescence images.

    PubMed

    Haass-Koffler, Carolina L; Naeemuddin, Mohammad; Bartlett, Selena E

    2012-08-31

    The most common software analysis tools available for measuring fluorescence images are for two-dimensional (2D) data that rely on manual settings for inclusion and exclusion of data points, and on computer-aided pattern recognition to support the interpretation and findings of the analysis. It has become increasingly important to be able to measure fluorescence images constructed from three-dimensional (3D) datasets in order to capture the complexity of cellular dynamics and understand the basis of cellular plasticity within biological systems. Sophisticated microscopy instruments have permitted the visualization of 3D fluorescence images through the acquisition of multispectral fluorescence images and powerful analytical software that reconstructs the images from confocal stacks to provide a 3D representation of the collected 2D images. Advanced design-based stereology methods have progressed beyond the approximations and assumptions of the original model-based stereology, even in complex tissue sections. Despite these scientific advances in microscopy, a need remains for an automated analytic method that fully exploits the intrinsic 3D data to allow for the analysis and quantification of the complex changes in cell morphology, protein localization and receptor trafficking. Current techniques available to quantify fluorescence images include MetaMorph (Molecular Devices, Sunnyvale, CA) and ImageJ (NIH), which provide manual analysis. Imaris (Andor Technology, Belfast, Northern Ireland) software provides the feature MeasurementPro, which allows the manual creation of measurement points that can be placed in a volume image or drawn on a series of 2D slices to create a 3D object. This method is useful for single-click point measurements to measure a line distance between two objects or to create a polygon that encloses a region of interest, but it is difficult to apply to complex cellular network structures. 
Filament Tracer (Andor) allows automatic detection of 3D neuronal filament-like structures; however, this module has been developed to measure defined structures such as neurons, which are comprised of dendrites, axons and spines (tree-like structures). This module has been ingeniously utilized to make morphological measurements of non-neuronal cells; however, the output data describe an extended cellular network using software that depends on a defined cell shape rather than an amorphous-shaped cellular model. To overcome the issue of analyzing amorphous-shaped cells and to make the software more suitable for biological applications, Imaris developed Imaris Cell. This was a scientific project with the Eidgenössische Technische Hochschule, developed to calculate the relationship between cells and organelles. While the software enables the detection of biological constraints, by forcing one nucleus per cell and using cell membranes to segment cells, it cannot be utilized to analyze fluorescence data that are not continuous, because it ideally builds the cell surface without void spaces. To our knowledge, at present no user-modifiable automated approach has been developed that provides morphometric information from 3D fluorescence images and achieves cellular spatial information for an undefined shape (Figure 1). We have developed an analytical platform using the Imaris core software module and Imaris XT interfaced to MATLAB (MathWorks, Inc.). These tools allow the 3D measurement of cells without a pre-defined shape and with inconsistent fluorescence network components. Furthermore, this method will allow researchers who have extensive expertise in biological systems, but little familiarity with computer applications, to quantify morphological changes in cell dynamics.

  12. Automatic neuron segmentation and neural network analysis method for phase contrast microscopy images.

    PubMed

    Pang, Jincheng; Özkucur, Nurdan; Ren, Michael; Kaplan, David L; Levin, Michael; Miller, Eric L

    2015-11-01

    Phase Contrast Microscopy (PCM) is an important tool for the long term study of living cells. Unlike fluorescence methods which suffer from photobleaching of fluorophore or dye molecules, PCM image contrast is generated by the natural variations in optical index of refraction. Unfortunately, the same physical principles which allow for these studies give rise to complex artifacts in the raw PCM imagery. Of particular interest in this paper are neuron images where these image imperfections manifest in very different ways for the two structures of specific interest: cell bodies (somas) and dendrites. To address these challenges, we introduce a novel parametric image model using the level set framework and an associated variational approach which simultaneously restores and segments this class of images. Using this technique as the basis for an automated image analysis pipeline, results for both the synthetic and real images validate and demonstrate the advantages of our approach.

  13. A new approach to modeling the influence of image features on fixation selection in scenes

    PubMed Central

    Nuthmann, Antje; Einhäuser, Wolfgang

    2015-01-01

    Which image characteristics predict where people fixate when memorizing natural images? To answer this question, we introduce a new analysis approach that combines a novel scene-patch analysis with generalized linear mixed models (GLMMs). Our method allows for (1) directly describing the relationship between continuous feature value and fixation probability, and (2) assessing each feature's unique contribution to fixation selection. To demonstrate this method, we estimated the relative contribution of various image features to fixation selection: luminance and luminance contrast (low-level features); edge density (a mid-level feature); visual clutter and image segmentation to approximate local object density in the scene (higher-level features). An additional predictor captured the central bias of fixation. The GLMM results revealed that edge density, clutter, and the number of homogenous segments in a patch can independently predict whether image patches are fixated or not. Importantly, neither luminance nor contrast had an independent effect above and beyond what could be accounted for by the other predictors. Since the parcellation of the scene and the selection of features can be tailored to the specific research question, our approach allows for assessing the interplay of various factors relevant for fixation selection in scenes in a powerful and flexible manner. PMID:25752239
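    The core of the approach above is modelling fixation (a binary outcome per patch) as a function of continuous feature values. A full GLMM with random effects is beyond a short sketch, but the fixed-effect part reduces to logistic regression; the toy sketch below fits P(fixated) = sigmoid(w0 + w1 · edge_density) by gradient descent on hypothetical patch data.

```python
import math

def fit_logistic(features, fixated, lr=0.5, steps=200):
    """Fit P(fixation) = sigmoid(w0 + w1 * feature) by batch gradient descent.
    (Single-predictor GLM sketch; the paper's GLMMs add random effects and
    multiple simultaneous predictors.)"""
    w0, w1 = 0.0, 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(features, fixated):
            p = 1 / (1 + math.exp(-(w0 + w1 * x)))
            g0 += p - y
            g1 += (p - y) * x
        w0 -= lr * g0 / len(features)
        w1 -= lr * g1 / len(features)
    return w0, w1

# Toy data: patches with higher edge density are fixated more often.
edge_density = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
fixated      = [0,   0,   0,   1,   1,   1]
w0, w1 = fit_logistic(edge_density, fixated)
print(w1 > 0)  # → True: fixation probability rises with edge density
```

Adding further predictors (luminance, clutter, central bias) to the linear term is what lets the full model assess each feature's unique contribution while the others are held fixed.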

  14. A fast global fitting algorithm for fluorescence lifetime imaging microscopy based on image segmentation.

    PubMed

    Pelet, S; Previte, M J R; Laiho, L H; So, P T C

    2004-10-01

    Global fitting algorithms have been shown to effectively improve the accuracy and precision of the analysis of fluorescence lifetime imaging microscopy data. Global analysis performs better than unconstrained data fitting when prior information exists, such as the spatial invariance of the lifetimes of individual fluorescent species. The highly coupled nature of global analysis often results in significantly slower convergence of the data fitting algorithm compared with unconstrained analysis. Convergence speed can be greatly accelerated by providing appropriate initial guesses. Realizing that image morphology often correlates with fluorophore distribution, a global fitting algorithm has been developed that assigns initial guesses throughout an image based on a segmentation analysis. This algorithm was tested on both simulated data sets and time-domain lifetime measurements. We have successfully measured fluorophore distribution in fibroblasts stained with Hoechst and calcein. This method further allows second harmonic generation from collagen and elastin autofluorescence to be differentiated in fluorescence lifetime imaging microscopy images of ex vivo human skin. On our experimental measurements, this algorithm increased convergence speed by over two orders of magnitude and achieved significantly better fits. Copyright 2004 Biophysical Society
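    The segmentation-seeded initialisation idea can be sketched simply: pool the decay curves of all pixels within a morphological segment, estimate one lifetime per segment by a log-linear fit to a mono-exponential decay, and use that as the initial guess for every pixel in the segment. The segment labels and decay model below are illustrative assumptions, not the paper's exact pipeline.

```python
import math

def estimate_tau(times, decay):
    """Log-linear least-squares estimate of a mono-exponential lifetime tau,
    using log I(t) = log A - t / tau."""
    logs = [math.log(v) for v in decay]
    n = len(times)
    mt, ml = sum(times) / n, sum(logs) / n
    slope = sum((t - mt) * (l - ml) for t, l in zip(times, logs)) / \
            sum((t - mt) ** 2 for t in times)
    return -1.0 / slope

def segment_initial_guesses(pixel_decays, segment_of_pixel, times):
    """Pool decays within each segment; return one tau guess per segment."""
    pooled = {}
    for decay, seg in zip(pixel_decays, segment_of_pixel):
        pooled.setdefault(seg, [0.0] * len(times))
        for i, v in enumerate(decay):
            pooled[seg][i] += v
    return {seg: estimate_tau(times, d) for seg, d in pooled.items()}

times = [0.0, 1.0, 2.0, 3.0]
decays = [[math.exp(-t / 2.0) for t in times],   # pixel in segment A, tau = 2
          [math.exp(-t / 0.5) for t in times]]   # pixel in segment B, tau = 0.5
guesses = segment_initial_guesses(decays, ["A", "B"], times)
print(round(guesses["A"], 3), round(guesses["B"], 3))  # → 2.0 0.5
```

Starting the coupled global fit from these per-segment estimates, rather than arbitrary values, is what accelerates convergence in the scheme described above.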

  15. Computer system for scanning tunneling microscope automation

    NASA Astrophysics Data System (ADS)

    Aguilar, M.; García, A.; Pascual, P. J.; Presa, J.; Santisteban, A.

    1987-03-01

    A computerized system for the automation of a scanning tunneling microscope is presented. It is based on an IBM personal computer (PC), either an XT or an AT, which performs the control, data acquisition and storage operations, displays the STM "images" in real time, and provides image processing tools for the restoration and analysis of data. It supports different data acquisition and control cards and image display cards. The software has been designed in a modular way to allow the replacement of these cards and other equipment improvements, as well as the inclusion of user routines for data analysis.

  16. Quality assessment of digital X-ray chest images using an anthropomorphic chest phantom

    NASA Astrophysics Data System (ADS)

    Vodovatov, A. V.; Kamishanskaya, I. G.; Drozdov, A. A.; Bernhardsson, C.

    2017-02-01

    The current study focuses on determining the optimal tube voltage for conventional digital X-ray chest screening examinations, using a visual grading analysis method. Chest images of an anthropomorphic phantom were acquired in the posterior-anterior projection on four digital X-ray units with different detector types. The X-ray images obtained with the anthropomorphic phantom were accepted by the radiologists as corresponding to normal human anatomy, hence allowing phantoms to be used in image quality trials without limitations.

  17. Feasibility of imaging superficial palmar arch using micro-ultrasound, 7T and 3T magnetic resonance imaging.

    PubMed

    Pruzan, Alison N; Kaufman, Audrey E; Calcagno, Claudia; Zhou, Yu; Fayad, Zahi A; Mani, Venkatesh

    2017-02-28

    To demonstrate the feasibility of vessel wall imaging of the superficial palmar arch using high-frequency micro-ultrasound, 7T and 3T magnetic resonance imaging (MRI). Four subjects (ages 22-50 years) were scanned on a micro-ultrasound system with a 45-MHz transducer (Vevo 2100, VisualSonics). Subjects' hands were then imaged on a 3T clinical MR scanner (Siemens Biograph mMR) using an 8-channel special-purpose phased-array carotid coil. Lastly, subjects' hands were imaged on a 7T clinical MR scanner (Siemens Magnetom 7T Whole Body Scanner) using a custom-built 8-channel transmit-receive carotid coil. All three imaging modalities were subjectively analyzed for image quality and visualization of the vessel wall. The results of this very preliminary study indicated that vessel wall imaging of the superficial palmar arch was feasible with whole-body 7T and 3T MRI in comparison with micro-ultrasound. Subjective analysis of image quality (1-5 scale; 1: poorest, 5: best) from B-mode ultrasound, 3T SPACE MRI and 7T SPACE MRI indicated that the image quality obtained at 7T was superior to both 3T MRI and micro-ultrasound. The 3D SPACE sequence at both 7T and 3T, with isotropic voxels, allowed multi-planar reformatting of images and gave less operator-dependent results than high-frequency micro-ultrasound imaging. Although quantitative analysis revealed no significant difference between the three methods, 7T MRI tended to show better visibility of the vessel and its wall. Imaging of smaller arteries at 7T is feasible for evaluating atherosclerosis burden and may be of clinical relevance in multiple diseases.

  18. Computer-Aided Diagnostic System For Mass Survey Chest Images

    NASA Astrophysics Data System (ADS)

    Yasuda, Yoshizumi; Kinoshita, Yasuhiro; Emori, Yasufumi; Yoshimura, Hitoshi

    1988-06-01

    In order to support the screening of chest radiographs in mass surveys, a computer-aided diagnostic system has been developed that automatically detects abnormality in candidate images using digital image analysis techniques. Extracting the boundary lines of the lung fields and examining their shapes allowed various kinds of abnormalities to be detected. Correction and expansion were facilitated by describing the system control, image analysis control and judgement of abnormality in a rule-type programming language. In experiments using typical samples of students' radiographs, good results were obtained for the detection of abnormal lung field shape, cardiac hypertrophy and scoliosis. For the detection of diaphragmatic abnormality, relatively good results were obtained, but further improvements will be necessary.

  19. The computer treatment of remotely sensed data: An introduction to techniques which have geologic applications. [image enhancement and thematic classification in Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Paradella, W. R.; Vitorello, I.

    1982-01-01

    Several aspects of computer-assisted analysis techniques for image enhancement and thematic classification, by which LANDSAT MSS imagery may be treated quantitatively, are explained. For geological applications, computer processing of digital data arguably allows the fullest use of LANDSAT data, by displaying enhanced and corrected data for visual analysis and by evaluating each spectral pixel and assigning it to a given class.

  20. A Graphical User Interface for Software-assisted Tracking of Protein Concentration in Dynamic Cellular Protrusions.

    PubMed

    Saha, Tanumoy; Rathmann, Isabel; Galic, Milos

    2017-07-11

    Filopodia are dynamic, finger-like cellular protrusions associated with migration and cell-cell communication. In order to better understand the complex signaling mechanisms underlying filopodial initiation, elongation and subsequent stabilization or retraction, it is crucial to determine the spatio-temporal protein activity in these dynamic structures. To analyze protein function in filopodia, we recently developed a semi-automated tracking algorithm that adapts to filopodial shape-changes, thus allowing parallel analysis of protrusion dynamics and relative protein concentration along the whole filopodial length. Here, we present a detailed step-by-step protocol for optimized cell handling, image acquisition and software analysis. We further provide instructions for the use of optional features during image analysis and data representation, as well as troubleshooting guidelines for all critical steps along the way. Finally, we also include a comparison of the described image analysis software with other programs available for filopodia quantification. Together, the presented protocol provides a framework for accurate analysis of protein dynamics in filopodial protrusions using image analysis software.

  1. Hierarchical storage of large volumes of multidetector CT data using distributed servers

    NASA Astrophysics Data System (ADS)

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris; Bandon, David

    2006-03-01

    Multidetector scanners and hybrid multimodality scanners can generate large numbers of high-resolution images, resulting in very large data sets. In most cases, these datasets are generated for the sole purpose of producing secondary processed images and 3D rendered images, as well as oblique and curved multiplanar reformatted images. It is therefore not essential to archive the original images after they have been processed. We have developed an architecture of distributed archive servers for temporary storage of large image datasets for 3D rendering and image processing, without the need for long-term storage in a PACS archive. Given the relatively low cost of storage devices, it is possible to configure these servers to hold several months or even years of data, long enough to allow subsequent re-processing if required by specific clinical situations. We tested the latest generation of RAID servers provided by Apple with a capacity of 5 TBytes. We implemented peer-to-peer data-access software based on our open-source image management software OsiriX, allowing remote workstations to directly access DICOM image files located on the server through a technology called Bonjour. This architecture offers seamless integration of multiple servers and workstations without the need for a central database or complex workflow management tools. It allows efficient access to image data from multiple workstations for image analysis and visualization without the need for image data transfer. It provides a convenient alternative to a centralized PACS architecture while avoiding complex and time-consuming data transfer and storage.

  2. Automated Modular Magnetic Resonance Imaging Clinical Decision Support System (MIROR): An Application in Pediatric Cancer Diagnosis

    PubMed Central

    Zarinabad, Niloufar; Meeus, Emma M; Manias, Karen; Foster, Katharine

    2018-01-01

    Background Advances in magnetic resonance imaging and the introduction of clinical decision support systems have underlined the need for an analysis tool to extract and analyze relevant information from magnetic resonance imaging data to aid decision making, prevent errors, and enhance health care. Objective The aim of this study was to design and develop a modular medical image region-of-interest analysis tool and repository (MIROR) for automatic processing, classification, evaluation, and representation of advanced magnetic resonance imaging data. Methods The clinical decision support system was developed and evaluated for diffusion-weighted imaging of body tumors in children (a cohort of 48 children, with 37 malignant and 11 benign tumors). MeVisLab software and Python were used for the development of MIROR. Regions of interest were drawn around benign and malignant body tumors on different diffusion parametric maps, and the extracted information was used to discriminate malignant tumors from benign tumors. Results Using MIROR, the various histogram parameters derived for each tumor case, when compared with the information in the repository, provided additional information for tumor characterization and facilitated the discrimination between benign and malignant tumors. Clinical decision support system cross-validation showed high sensitivity and specificity in discriminating between these tumor groups using histogram parameters. Conclusions MIROR, as a diagnostic tool and repository, allowed the interpretation and analysis of magnetic resonance images to be more accessible and comprehensive for clinicians. It aims to increase clinicians' skillset by introducing newer techniques and up-to-date findings to their repertoire, and to make information from previous cases available to aid decision making. The modular format of the tool allows the integration of analyses that are not readily available clinically and streamlines future development. 
PMID:29720361
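    The histogram parameters mentioned above can be sketched with the Python standard library: summarise the voxel values inside a region of interest by simple distribution statistics and compare tumor groups. The parameter set and toy ADC values below are illustrative assumptions, not MIROR's exact feature list.

```python
import statistics

def histogram_parameters(roi_values):
    """Summarise an ROI on a diffusion parametric map by histogram statistics."""
    qs = statistics.quantiles(roi_values, n=4)  # quartile cut points
    return {
        "mean": statistics.fmean(roi_values),
        "median": statistics.median(roi_values),
        "p25": qs[0],
        "p75": qs[2],
        "stdev": statistics.stdev(roi_values),
    }

# Toy ADC values (x10^-6 mm^2/s); malignant tumors often show lower ADC.
benign    = [1400, 1500, 1550, 1600, 1700]
malignant = [700, 750, 800, 850, 900]
b, m = histogram_parameters(benign), histogram_parameters(malignant)
print(b["median"] > m["median"])  # → True in this toy data
```

In the study itself, such per-case parameters are compared against the repository of previous cases rather than against a fixed threshold.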

  3. Utilization of a high-throughput shoot imaging system to examine the dynamic phenotypic responses of a C4 cereal crop plant to nitrogen and water deficiency over time

    PubMed Central

    Neilson, E. H.; Edwards, A. M.; Blomstedt, C. K.; Berger, B.; Møller, B. Lindberg; Gleadow, R. M.

    2015-01-01

    The use of high-throughput phenotyping systems and non-destructive imaging is widely regarded as a key technology allowing scientists and breeders to develop crops with the ability to perform well under diverse environmental conditions. However, many of these phenotyping studies have been optimized using the model plant Arabidopsis thaliana. In this study, The Plant Accelerator® at The University of Adelaide, Australia, was used to investigate the growth and phenotypic response of the important cereal crop Sorghum bicolor L. Moench and related hybrids to water-limited conditions and different levels of fertilizer. Imaging in different spectral ranges was used to monitor plant composition, chlorophyll, and moisture content. Phenotypic image analysis accurately measured plant biomass. The data set obtained enabled the responses of the different sorghum varieties to the experimental treatments to be differentiated and modelled. Plant architectural elements, for example diurnal leaf curling and leaf area index, were determined using imaging and found to correlate with an improved tolerance to stress. Analysis of colour images revealed that leaf ‘greenness’ correlated with foliar nitrogen and chlorophyll, while near infrared reflectance (NIR) analysis was a good predictor of water content and leaf thickness, and correlated with plant moisture content. It is shown that imaging sorghum using a high-throughput system can accurately identify and differentiate between growth and specific phenotypic traits. R scripts for robust, parsimonious models are provided to allow other users of phenomic imaging systems to extract useful data readily, and thus relieve a bottleneck in the phenotypic screening of multiple genotypes of key crop plants. PMID:25697789
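    The greenness-to-nitrogen correlation reported above can be sketched as follows: compute a green-dominance index per plant from colour-image pixels, then correlate it with measured foliar nitrogen. The index definition and the toy per-plant values are illustrative assumptions (the study's own models are provided as R scripts).

```python
import math

def greenness(rgb_pixels):
    """Mean green-dominance index G / (R + G + B) over an image's pixels."""
    return sum(g / (r + g + b) for r, g, b in rgb_pixels) / len(rgb_pixels)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One mean greenness score per plant, with measured foliar nitrogen (% dry mass).
g = greenness([(120, 180, 60), (100, 200, 80)])   # index for one toy image
greenness_scores = [0.38, 0.41, 0.45, 0.47, 0.52]
foliar_nitrogen  = [1.1, 1.4, 1.9, 2.1, 2.6]
r = pearson(greenness_scores, foliar_nitrogen)
print(r > 0.9)  # → True: strongly correlated in this toy data
```

The same correlation machinery applies to the NIR-versus-moisture relationship described in the abstract, with a different index per plant.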

  4. Toshiba TDF-500 High Resolution Viewing And Analysis System

    NASA Astrophysics Data System (ADS)

    Roberts, Barry; Kakegawa, M.; Nishikawa, M.; Oikawa, D.

    1988-06-01

    A high resolution, operator-interactive medical viewing and analysis system has been developed by Toshiba and Bio-Imaging Research. This system provides many advanced features including high resolution displays, a very large image memory and advanced image processing capability. In particular, the system provides CRT frame buffers capable of update in one frame period, an array processor capable of image processing at operator-interactive speeds, and a memory system capable of updating multiple frame buffers at frame rates whilst supporting multiple array processors. The display system provides 1024 x 1536 display resolution at 40Hz frame and 80Hz field rates. In particular, the ability to provide whole or partial update of the screen at the scanning rate is a key feature. This allows multiple viewports or windows in the display buffer with both fixed and cine capability. To support image processing features such as windowing, pan, zoom, minification, filtering, ROI analysis, multiplanar and 3D reconstruction, a high performance CPU is integrated into the system. This CPU is an array processor capable of up to 400 million instructions per second. To support the instantaneous high memory bandwidth requirements of the multiple viewers and array processors, an ultra-fast memory system is used. This memory system has a bandwidth capability of 400MB/sec and a total capacity of 256MB. This bandwidth is more than adequate to support several high resolution CRTs and also the fast processing unit. This fully integrated approach allows effective real-time image processing. The integrated design of the viewing system, memory system and array processor is key to the imaging system, and this paper describes its architecture.

  5. Technical advance: live-imaging analysis of human dendritic cell migrating behavior under the influence of immune-stimulating reagents in an organotypic model of lung.

    PubMed

    Nguyen Hoang, Anh Thu; Chen, Puran; Björnfot, Sofia; Högstrand, Kari; Lock, John G; Grandien, Alf; Coles, Mark; Svensson, Mattias

    2014-09-01

    This manuscript describes technical advances allowing manipulation and quantitative analyses of human DC migratory behavior in lung epithelial tissue. DCs are hematopoietic cells essential for the maintenance of tissue homeostasis and the induction of tissue-specific immune responses. Important functions include cytokine production and migration in response to infection for the induction of proper immune responses. To design appropriate strategies to exploit human DC functional properties in lung tissue for the purpose of clinical evaluation, e.g., candidate vaccination and immunotherapy strategies, we have developed a live-imaging assay based on our previously described organotypic model of the human lung. This assay allows provocations and subsequent quantitative investigations of DC functional properties under conditions mimicking morphological and functional features of the in vivo parental tissue. We present protocols to set up and prepare tissue models for 4D (x, y, z, time) fluorescence-imaging analysis that allow spatial and temporal studies of human DCs in live epithelial tissue, followed by flow cytometry analysis of DCs retrieved from digested tissue models. This model system can be useful for elucidating incompletely defined pathways controlling DC functional responses to infection and inflammation in lung epithelial tissue, as well as the efficacy of locally administered candidate interventions. © 2014 Society for Leukocyte Biology.

  6. A quantitative framework for flower phenotyping in cultivated carnation (Dianthus caryophyllus L.).

    PubMed

    Chacón, Borja; Ballester, Roberto; Birlanga, Virginia; Rolland-Lagan, Anne-Gaëlle; Pérez-Pérez, José Manuel

    2013-01-01

    The most important breeding goals in ornamental crops are plant appearance and flower characteristics, for which selection is performed visually on the direct offspring of crossings. We developed an image analysis toolbox for the acquisition of flower and petal images from cultivated carnation (Dianthus caryophyllus L.) that was validated by a detailed analysis of flower and petal size and shape in 78 commercial cultivars of D. caryophyllus, including 55 standard, 22 spray and 1 pot carnation cultivars. Correlation analyses allowed us to reduce the number of parameters accounting for the observed variation in flower and petal morphology. Convexity was used as a descriptor for the level of serration in flowers and petals. We used a landmark-based approach that allowed us to identify eight main principal components (PCs) accounting for most of the variance observed in petal shape. The effect and the strength of these PCs in standard and spray carnation cultivars are consistent with shared underlying mechanisms involved in the morphological diversification of petals in both subpopulations. Our results also indicate that neighbor-joining trees built with morphological data might infer certain phylogenetic relationships among carnation cultivars. Based on estimated broad-sense heritability values for some flower and petal features, different genetic determinants appear to modulate the responses of flower and petal morphology to environmental cues in this species. We believe our image analysis toolbox could allow flower variation to be captured in other species of high ornamental value.
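    Convexity as a serration descriptor is commonly computed as the ratio of an outline's area to the area of its convex hull. A minimal NumPy sketch follows; the toolbox's own implementation is not given in the abstract, so this is illustrative only.

```python
import numpy as np

def polygon_area(pts):
    """Shoelace area of a closed polygon given as an (N, 2) array."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def convex_hull(pts):
    """Andrew's monotone-chain convex hull; returns hull vertices."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    pts = sorted(map(tuple, pts))
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(list(reversed(pts)))
    return np.array(lower[:-1] + upper[:-1])

def convexity(outline):
    """Shape area over convex-hull area: 1.0 for a smooth convex petal
    margin, lower for a serrated one."""
    return polygon_area(outline) / polygon_area(convex_hull(outline))

# a square is perfectly convex; cutting a notch into one side lowers it
square = np.array([[0, 0], [4, 0], [4, 4], [0, 4]], float)
notched = np.array([[0, 0], [4, 0], [4, 4], [2.5, 4], [2, 2], [1.5, 4], [0, 4]], float)
c1, c2 = convexity(square), convexity(notched)
```

    Applied to a traced petal outline, values near 1 indicate entire margins while deeply serrated cultivars score noticeably lower.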

  7. Open source tools for fluorescent imaging.

    PubMed

    Hamilton, Nicholas A

    2012-01-01

    As microscopy becomes increasingly automated and imaging expands in the spatial and time dimensions, quantitative analysis tools for fluorescent imaging are becoming critical, both to remove bottlenecks in throughput and to fully extract and exploit the information contained in the imaging. In recent years there has been a flurry of activity in the development of bio-image analysis tools and methods, with the result that there are now many high-quality, well-documented, and well-supported open source bio-image analysis projects with large user bases that cover essentially every aspect from image capture to publication. These open source solutions are now providing a viable alternative to commercial solutions. More importantly, they are forming an interoperable and interconnected network of tools that allow data and analysis methods to be shared between many of the major projects. Just as researchers build on, transmit, and verify knowledge through publication, open source analysis methods and software are creating a foundation that can be built upon, transmitted, and verified. Here we describe many of the major projects, their capabilities, and features. We also give an overview of the current state of open source software for fluorescent microscopy analysis and the many reasons to use and develop open source methods. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. GRAPE: a graphical pipeline environment for image analysis in adaptive magnetic resonance imaging.

    PubMed

    Gabr, Refaat E; Tefera, Getaneh B; Allen, William J; Pednekar, Amol S; Narayana, Ponnada A

    2017-03-01

    We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols. GRAPE is an open-source project implemented in the Qt C++ framework to enable graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, build and execute an array of image analysis routines, and include existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer. GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast. GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.

  9. Ratiometric spectral imaging for fast tumor detection and chemotherapy monitoring in vivo

    PubMed Central

    Hwang, Jae Youn; Gross, Zeev; Gray, Harry B.; Medina-Kauwe, Lali K.; Farkas, Daniel L.

    2011-01-01

    We report a novel in vivo spectral imaging approach to cancer detection and chemotherapy assessment. We describe and characterize a ratiometric spectral imaging and analysis method and evaluate its performance for tumor detection and delineation by quantitatively monitoring the specific accumulation of targeted gallium corrole (HerGa) into HER2-positive (HER2 +) breast tumors. HerGa temporal accumulation in nude mice bearing HER2 + breast tumors was monitored comparatively by (a) this new ratiometric imaging and analysis method; (b) established (reflectance and fluorescence) spectral imaging; and (c) the more commonly used fluorescence intensity imaging. We also tested the feasibility of HerGa imaging in vivo using the ratiometric spectral imaging method for tumor detection and delineation. Our results show that the new method not only provides better quantitative information than typical spectral imaging, but also better specificity than standard fluorescence intensity imaging, thus allowing enhanced in vivo outlining of tumors and dynamic, quantitative monitoring of targeted chemotherapy agent accumulation into them. PMID:21721808
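    The principle behind ratiometric imaging can be shown with a toy example: dividing a target band by a reference band cancels multiplicative factors, such as uneven illumination, that scale both bands equally. This sketch is a simplified stand-in for the authors' method, not their actual formulation.

```python
import numpy as np

def ratiometric_map(signal_band, reference_band, eps=1e-6):
    """Pixel-wise ratio of a target spectral band to a reference band.
    Ratioing cancels spatially varying gain that multiplies both bands."""
    s = np.asarray(signal_band, float)
    r = np.asarray(reference_band, float)
    return s / (r + eps)  # eps avoids division by zero in dark pixels

# toy scene: an uneven illumination field multiplies both bands by the
# same gain, so the ratio recovers the underlying 2x signal contrast
gain = np.array([[1.0, 0.5], [2.0, 1.0]])
target = gain * np.array([[2.0, 2.0], [1.0, 1.0]])  # accumulation signal
ref = gain * np.ones((2, 2))                        # reference band
ratio = ratiometric_map(target, ref)
```

    In the toy scene the raw target band varies fourfold with illumination, but the ratio map reads 2.0 wherever the signal is high and 1.0 elsewhere, which is why ratioing improves quantitative tumor delineation.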

  10. Design Criteria For Networked Image Analysis System

    NASA Astrophysics Data System (ADS)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

    Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance at low cost offered by advances in semiconductor technology. Another key issue is a maturing understanding of problems and the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with orientation toward the hospital environment. The three main areas are image data base management, viewing of image data and image data processing. This is followed by a survey of the current state of the art, covering image display systems, data base techniques, communications networks and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  11. Artificial intelligence for geologic mapping with imaging spectrometers

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.

    1993-01-01

    This project was a three year study at the Center for the Study of Earth from Space (CSES) within the Cooperative Institute for Research in Environmental Science (CIRES) at the University of Colorado, Boulder. The goal of this research was to develop an expert system to allow automated identification of geologic materials based on their spectral characteristics in imaging spectrometer data such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). This requirement was dictated by the volume of data produced by imaging spectrometers, which prohibits manual analysis. The research described is based on the development of automated techniques for analysis of imaging spectrometer data that emulate the analytical processes used by a human observer. The research tested the feasibility of such an approach, implemented an operational system, and tested the validity of the results for selected imaging spectrometer data sets.

  12. Software and Algorithms for Biomedical Image Data Processing and Visualization

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Lambert, James; Lam, Raymond

    2004-01-01

    A new software package equipped with novel image processing algorithms and graphical-user-interface (GUI) tools has been designed for automated analysis and processing of large amounts of biomedical image data. The software, called PlaqTrak, has been specifically used for analysis of plaque on the teeth of patients. New algorithms have been developed and implemented to segment teeth of interest from surrounding gum, and a real-time image-based morphing procedure is used to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The PlaqTrak system integrates these components into a single software suite with an easy-to-use GUI (see Figure 1) that allows users to do an end-to-end run of a patient's record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image. The automated and accurate processing of the captured images to segment each tooth [see Figure 2(a)] and then detect plaque on a tooth-by-tooth basis is a critical component of the PlaqTrak system for clinical trials and analysis with minimal human intervention. These features offer distinct advantages over other competing systems that analyze groups of teeth or synthetic teeth. PlaqTrak divides each segmented tooth into eight regions using an advanced graphics morphing procedure [see results on a chipped tooth in Figure 2(b)], and a pattern recognition classifier is then used to locate plaque [red regions in Figure 2(d)] and enamel regions. The morphing allows analysis within regions of teeth, thereby facilitating detailed statistical analysis such as the amount of plaque present on the biting surfaces of teeth.
This software system is applicable to a host of biomedical applications, such as cell analysis and life detection, or robotic applications, such as product inspection or assembly of parts in space and industry.

  13. High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.

    PubMed

    Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi

    2010-12-15

    A complementary metal oxide semiconductor (CMOS) image sensor was applied to high-content analysis of single cells which were assembled closely or directly onto the CMOS sensor surface. The direct assembling of cell groups on CMOS sensor surface allows large-field (6.66 mm×5.32 mm in entire active area of CMOS sensor) imaging within a second. Trypan blue-stained and non-stained cells in the same field area on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED light irradiation. Furthermore, the chemiluminescent signals of each cell were successfully visualized as blue-colored images on CMOS sensor only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. Our proposed approach will be a promising technique for real-time and high-content analysis of single cells in a large-field area based on color imaging. Copyright © 2010 Elsevier B.V. All rights reserved.
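    The stained/unstained discrimination described above amounts to a per-cell colour test. A minimal sketch, assuming a simple blue-versus-red dominance rule with a hypothetical tuning margin (the paper's actual decision rule is not given):

```python
def classify_stain(mean_rgb, blue_margin=1.2):
    """Label a cell 'stained' when its mean blue intensity clearly
    dominates the red channel, mimicking trypan-blue vs unstained
    discrimination in a white-light colour image. blue_margin is an
    assumed tuning constant, not a published value."""
    r, g, b = mean_rgb
    return "stained" if b > blue_margin * r else "viable"

labels = [classify_stain((80, 90, 200)),    # blue-tinted dead cell
          classify_stain((180, 180, 190))]  # whitish viable cell
```

    Averaging RGB over each detected cell region and applying such a rule is all that large-field viability counting on the sensor would require.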

  14. A new background subtraction method for Western blot densitometry band quantification through image analysis software.

    PubMed

    Gallo-Oller, Gabriel; Ordoñez, Raquel; Dotor, Javier

    2018-06-01

    Since its first description, Western blot has been widely used in molecular biology labs. It is a multistep method that allows the detection and/or quantification of proteins from simple to complex protein mixtures. The quantification step is critical for obtaining accurate and reproducible results. Given the technical knowledge required for densitometry analysis and the constraints on available resources, standard office scanners are often used for image acquisition of developed Western blot films. Furthermore, the use of semi-quantitative software such as ImageJ (Java-based image-processing and analysis software) is clearly increasing in different scientific fields. In this work, we describe the use of an office scanner coupled with ImageJ, together with a new image background subtraction method, for accurate Western blot quantification. The proposed method represents an affordable, accurate and reproducible approach that can be used when resources are limited. Copyright © 2018 Elsevier B.V. All rights reserved.
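    To show why background subtraction matters for band quantification, the sketch below removes a slowly varying background estimated as a running minimum along a lane-intensity profile. This is a generic illustration, not the authors' method, which the paper itself describes.

```python
import numpy as np

def subtract_background(profile, window=5):
    """Estimate a slowly varying background as the running minimum of a
    lane-intensity profile, then subtract it. A simplified stand-in for
    a densitometry background-subtraction step."""
    profile = np.asarray(profile, float)
    n = len(profile)
    bg = np.array([min(profile[max(0, i - window): i + window + 1])
                   for i in range(n)])
    return profile - bg

# synthetic lane: flat film background of 10 with a band of height 50
lane = np.full(21, 10.0)
lane[8:13] += 50.0
corrected = subtract_background(lane)
```

    After subtraction the band rises from a zero baseline, so integrating the corrected profile gives a band volume that does not depend on film fog or scanner offset.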

  15. Time-resolved multispectral imaging of combustion reactions

    NASA Astrophysics Data System (ADS)

    Huot, Alexandrine; Gagnon, Marc-André; Jahjah, Karl-Alexandre; Tremblay, Pierre; Savary, Simon; Farley, Vincent; Lagueux, Philippe; Guyot, Éric; Chamberland, Martin; Marcotte, Frédérick

    2015-10-01

    Thermal infrared imaging is a rapidly evolving field of science. For years, scientists have used the simplest tool, thermal broadband cameras, which allow target characterization in both the longwave (LWIR) and midwave (MWIR) infrared spectral ranges. Infrared thermal imaging is used for a wide range of applications, especially in the combustion domain. For example, it can be used to follow combustion reactions, to characterize injection and ignition in a combustion chamber, or to observe gases produced by a flare or smokestack. Most combustion gases, such as carbon dioxide (CO2), selectively absorb/emit infrared radiation at discrete energies, i.e. over a very narrow spectral range. Therefore, temperatures derived from broadband imaging are not reliable without prior knowledge of spectral emissivity. This information is not directly available from broadband images. However, spectral information is available using spectral filters. In this work, combustion analysis was carried out using a Telops MS-IR MW camera, which allows multispectral imaging at a high frame rate. A motorized filter wheel allowing synchronized acquisitions on eight (8) different channels was used to provide time-resolved multispectral imaging of the combustion products of a candle in which black powder had been burnt to create a burst. It was then possible to estimate the temperature by modeling spectral profiles derived from information obtained with the different spectral filters. Comparison with temperatures obtained using conventional broadband imaging illustrates the benefits of time-resolved multispectral imaging for the characterization of combustion processes.
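    Temperature estimation from narrow-band radiance rests on inverting Planck's law. The sketch below performs a single-band brightness-temperature round trip; the authors' multi-filter fit is more elaborate, and the physical constants here are rounded.

```python
import math

H = 6.626e-34   # Planck constant (J s), rounded
C = 2.998e8     # speed of light (m/s), rounded
KB = 1.381e-23  # Boltzmann constant (J/K), rounded

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a blackbody (W sr^-1 m^-3)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

def brightness_temperature(wavelength_m, radiance):
    """Invert Planck's law for a single narrow band: the temperature a
    blackbody would need to emit the measured radiance."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * KB * math.log(a / radiance + 1.0))

# round trip at 4.3 um (a CO2 emission band) and 1200 K
wl = 4.3e-6
t_rec = brightness_temperature(wl, planck_radiance(wl, 1200.0))
```

    Real flame gases are not blackbodies, which is exactly the abstract's point: without spectral emissivity, a single broadband radiance cannot be converted into a reliable temperature, whereas several narrow bands constrain both quantities.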

  16. Time-resolved multispectral imaging of combustion reaction

    NASA Astrophysics Data System (ADS)

    Huot, Alexandrine; Gagnon, Marc-André; Jahjah, Karl-Alexandre; Tremblay, Pierre; Savary, Simon; Farley, Vincent; Lagueux, Philippe; Guyot, Éric; Chamberland, Martin; Marcotte, Fréderick

    2015-05-01

    Thermal infrared imaging is a rapidly evolving field of science. For years, scientists have used the simplest tool, thermal broadband cameras, which allow target characterization in both the longwave (LWIR) and midwave (MWIR) infrared spectral ranges. Infrared thermal imaging is used for a wide range of applications, especially in the combustion domain. For example, it can be used to follow combustion reactions, to characterize injection and ignition in a combustion chamber, or to observe gases produced by a flare or smokestack. Most combustion gases, such as carbon dioxide (CO2), selectively absorb/emit infrared radiation at discrete energies, i.e. over a very narrow spectral range. Therefore, temperatures derived from broadband imaging are not reliable without prior knowledge of spectral emissivity. This information is not directly available from broadband images. However, spectral information is available using spectral filters. In this work, combustion analysis was carried out using a Telops MS-IR MW camera, which allows multispectral imaging at a high frame rate. A motorized filter wheel allowing synchronized acquisitions on eight (8) different channels was used to provide time-resolved multispectral imaging of the combustion products of a candle in which black powder had been burnt to create a burst. It was then possible to estimate the temperature by modeling spectral profiles derived from information obtained with the different spectral filters. Comparison with temperatures obtained using conventional broadband imaging illustrates the benefits of time-resolved multispectral imaging for the characterization of combustion processes.

  17. Linking brain, mind and behavior.

    PubMed

    Makeig, Scott; Gramann, Klaus; Jung, Tzyy-Ping; Sejnowski, Terrence J; Poizner, Howard

    2009-08-01

    Cortical brain areas and dynamics evolved to organize motor behavior in our three-dimensional environment also support more general human cognitive processes. Yet traditional brain imaging paradigms typically allow and record only minimal participant behavior, then reduce the recorded data to single map features of averaged responses. To more fully investigate the complex links between distributed brain dynamics and motivated natural behavior, we propose the development of wearable mobile brain/body imaging (MoBI) systems that continuously capture the wearer's high-density electrical brain and muscle signals, three-dimensional body movements, audiovisual scene and point of regard, plus new data-driven analysis methods to model their interrelationships. The new imaging modality should allow new insights into how spatially distributed brain dynamics support natural human cognition and agency.

  18. Microvascular Autonomic Composites

    DTIC Science & Technology

    2012-01-06

    thermogravimetric analysis (TGA) was employed. The double wall allowed for increased thermal stability of the microcapsules, which was...fluorescent nanoparticles (Berfield et al. 2006). Digital Image Correlation (DIC) is a data analysis method, which applies a mathematical...Theme IV: Experimental Assessment & Analysis 2.4.1 Optical diagnostics for complex microfluidic systems pg. 50 2.4.2 Fluorescent thermometry

  19. LAS - LAND ANALYSIS SYSTEM, VERSION 5.0

    NASA Technical Reports Server (NTRS)

    Pease, P. B.

    1994-01-01

    The Land Analysis System (LAS) is an image analysis system designed to manipulate and analyze digital data in raster format and provide the user with a wide spectrum of functions and statistical tools for analysis. LAS offers these features under VMS with optional image display capabilities for IVAS and other display devices as well as the X-Windows environment. LAS provides a flexible framework for algorithm development as well as for the processing and analysis of image data. Users may choose between mouse-driven commands or the traditional command line input mode. LAS functions include supervised and unsupervised image classification, film product generation, geometric registration, image repair, radiometric correction and image statistical analysis. Data files accepted by LAS include formats such as Multi-Spectral Scanner (MSS), Thematic Mapper (TM) and Advanced Very High Resolution Radiometer (AVHRR). The enhanced geometric registration package now includes both image to image and map to map transformations. The over 200 LAS functions fall into image processing scenario categories which include: arithmetic and logical functions, data transformations, fourier transforms, geometric registration, hard copy output, image restoration, intensity transformation, multispectral and statistical analysis, file transfer, tape profiling and file management among others. Internal improvements to the LAS code have eliminated the VAX VMS dependencies and improved overall system performance. The maximum LAS image size has been increased to 20,000 lines by 20,000 samples with a maximum of 256 bands per image. The catalog management system used in earlier versions of LAS has been replaced by a more streamlined and maintenance-free method of file management. This system is not dependent on VAX/VMS and relies on file naming conventions alone to allow the use of identical LAS file names on different operating systems. 
While the LAS code has been improved, the original capabilities of the system have been preserved. These include maintaining associated image history, session logging, and batch, asynchronous and interactive modes of operation. The LAS application programs are integrated under version 4.1 of an interface called the Transportable Applications Executive (TAE). TAE 4.1 has four modes of user interaction: menu, direct command, tutor (or help), and dynamic tutor. In addition, TAE 4.1 allows the operation of LAS functions using mouse-driven commands under the TAE-Facelift environment provided with TAE 4.1. These modes of operation allow users, from the beginner to the expert, to exercise specific application options. LAS is written in C and FORTRAN 77 for use with DEC VAX computers running VMS with approximately 16Mb of physical memory. This program runs under TAE 4.1. Since TAE 4.1 is not a current version of TAE, TAE 4.1 is included within the LAS distribution. Approximately 130,000 blocks (65Mb) of disk storage space are necessary to store the source code and files generated by the installation procedure for LAS, and 44,000 blocks (22Mb) of disk storage space are necessary for TAE 4.1 installation. The only other dependencies for LAS are the subroutine libraries for the specific display device(s) that will be used with LAS/DMS (e.g. X-Windows and/or IVAS). The standard distribution medium for LAS is a set of two 9-track 6250 BPI magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format. This program was developed in 1986 and last updated in 1992.

  20. Spectral embedding-based registration (SERg) for multimodal fusion of prostate histology and MRI

    NASA Astrophysics Data System (ADS)

    Hwuang, Eileen; Rusu, Mirabela; Karthigeyan, Sudha; Agner, Shannon C.; Sparks, Rachel; Shih, Natalie; Tomaszewski, John E.; Rosen, Mark; Feldman, Michael; Madabhushi, Anant

    2014-03-01

    Multi-modal image registration is needed to align medical images collected from different protocols or imaging sources, thereby allowing the mapping of complementary information between images. One challenge of multimodal image registration is that typical similarity measures rely on statistical correlations between image intensities to determine anatomical alignment. The use of alternate image representations could allow for mapping of intensities into a space or representation such that the multimodal images appear more similar, thus facilitating their co-registration. In this work, we present a spectral embedding based registration (SERg) method that uses non-linearly embedded representations obtained from independent components of statistical texture maps of the original images to facilitate multimodal image registration. Our methodology comprises the following main steps: 1) image-derived textural representation of the original images, 2) dimensionality reduction using independent component analysis (ICA), 3) spectral embedding to generate the alternate representations, and 4) image registration. The rationale behind our approach is that SERg yields embedded representations that can allow for very different looking images to appear more similar, thereby facilitating improved co-registration. Statistical texture features are derived from the image intensities and then reduced to a smaller set by using independent component analysis to remove redundant information. Spectral embedding generates a new representation by eigendecomposition from which only the most important eigenvectors are selected. This helps to accentuate areas of salience based on modality-invariant structural information and therefore better identifies corresponding regions in both the template and target images. The spirit behind SERg is that image registration driven by these areas of salience and correspondence should improve alignment accuracy. 
In this work, SERg is implemented using Demons to allow the algorithm to more effectively register multimodal images. SERg is also tested within the free-form deformation framework driven by mutual information. Nine pairs of synthetic T1-weighted to T2-weighted brain MRI were registered under the following conditions: five levels of noise (0%, 1%, 3%, 5%, and 7%) and two levels of bias field (20% and 40%) each with and without noise. We demonstrate that across all of these conditions, SERg yields a mean squared error that is 81.51% lower than that of Demons driven by MRI intensity alone. We also spatially align twenty-six ex vivo histology sections and in vivo prostate MRI in order to map the spatial extent of prostate cancer onto corresponding radiologic imaging. SERg performs better than intensity registration by decreasing the root mean squared distance of annotated landmarks in the prostate gland via both Demons algorithm and mutual information-driven free-form deformation. In both synthetic and clinical experiments, the observed improvement in alignment of the template and target images suggest the utility of parametric eigenvector representations and hence SERg for multimodal image registration.
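    Step 3 of the pipeline, spectral embedding, can be sketched with NumPy alone: build Gaussian affinities between feature vectors, form the symmetric normalized graph Laplacian, and keep its lowest non-trivial eigenvectors. The kernel choice and parameters below are assumptions, not the paper's settings; steps 1-2 (texture extraction, ICA) and step 4 (registration) are assumed to happen upstream and downstream.

```python
import numpy as np

def spectral_embedding(features, n_components=1, sigma=1.0):
    """Embed feature vectors using eigenvectors of the symmetric
    normalized graph Laplacian built from Gaussian affinities."""
    X = np.asarray(features, float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))           # pairwise affinities
    D = W.sum(axis=1)
    # L = I - D^{-1/2} W D^{-1/2}
    L = np.eye(len(X)) - W / np.sqrt(D[:, None] * D[None, :])
    vals, vecs = np.linalg.eigh(L)                 # ascending eigenvalues
    # drop the trivial near-constant eigenvector, keep the next ones
    return vecs[:, 1:1 + n_components]

# two groups of similar feature vectors embed to two distinct values
X = np.vstack([np.zeros((5, 3)), np.full((5, 3), 2.0)])
emb = spectral_embedding(X)
```

    Pixels with similar texture signatures collapse to nearby embedding coordinates regardless of their raw intensities, which is the property that lets very different-looking modalities appear similar before registration.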

  1. Creation of a virtual cutaneous tissue bank

    NASA Astrophysics Data System (ADS)

    LaFramboise, William A.; Shah, Sujal; Hoy, R. W.; Letbetter, D.; Petrosko, P.; Vennare, R.; Johnson, Peter C.

    2000-04-01

    Cellular and non-cellular constituents of skin contain fundamental morphometric features and structural patterns that correlate with tissue function. High-resolution digital image acquisition is performed using an automated system and proprietary software to assemble adjacent images and create a contiguous, lossless digital representation of individual microscope slide specimens. Serial extraction, evaluation and statistical analysis of cutaneous features are performed using an automated analysis system to derive normal cutaneous parameters comprising essential structural skin components. Automated digital cutaneous analysis allows fast extraction of microanatomic data with accuracy approximating manual measurement. The process provides rapid assessment of features both within individual specimens and across sample populations. The images, component data, and statistical analysis comprise a bioinformatics database to serve as an architectural blueprint for skin tissue engineering and as a diagnostic standard of comparison for pathologic specimens.

  2. Morphological and Compositional (S)TEM Analysis of Multiple Exciton Generation Solar Cells

    NASA Astrophysics Data System (ADS)

    Wisnivesky-Rocca-Rivarola, F.; Davis, N. J. L. K.; Bohm, M.; Ducati, C.

    2015-10-01

    Quantum confinement of charge carriers in semiconductor nanocrystals produces optical and electronic properties that have the potential to enhance the power conversion efficiency of solar cells. One of these properties is the efficient formation of more than one electron-hole pair from a single absorbed photon, in a process called multiple exciton generation (MEG). In this work we studied the morphology of nanocrystal multilayers of PbSe treated with CdCl2, using complementary imaging and spectroscopy techniques to characterise the chemical composition and morphology of full MEG devices made with PbSe nanorods (NRs). In the scanning TEM (STEM), plan-view images and chemical maps were obtained of the nanocrystal layers, which allowed analysis of crystal structure and orientation, as well as size distribution and aspect ratio. These results were complemented by cross-sectional images of full devices, which gave access to the structure of each layer composing the device, including the nanorod packing in the active nanocrystal layer.

  3. The Images and Emotions of Bilingual Chinese Readers: A Dual Coding Analysis.

    ERIC Educational Resources Information Center

    Steffensen, Margaret S.; Goetz, Ernest T.; Cheng, Xiaoguang

    1999-01-01

    Investigates the nonverbal aspects of bilingual reading with 24 Chinese students who rated text segments for strength of imagery and emotional response. Provides insights into how the bilingual mind accomplishes the task of transforming images on a page into a message that allows the reader to enter and live in a created world. (NH)

  4. Identification of phases, symmetries and defects through local crystallography

    DOE PAGES

    Belianinov, Alex; He, Qian; Kravchenko, Mikhail; ...

    2015-07-20

    Here we report that advances in electron and probe microscopies allow 10 pm or higher precision in measurements of atomic positions. This level of fidelity is sufficient to correlate the length (and hence energy) of bonds, as well as bond angles, to functional properties of materials. Traditionally, this relied on mapping locally measured parameters to macroscopic variables, for example, the average unit cell. This description effectively ignores the information contained in the microscopic degrees of freedom available in a high-resolution image. Here we introduce an approach for local analysis of material structure based on statistical analysis of individual atomic neighbourhoods. Clustering and multivariate algorithms such as principal component analysis explore the connectivity of lattice and bond structure, as well as identify minute structural distortions, thus allowing for chemical description and identification of phases. This analysis lays the framework for building image genomes and structure–property libraries, based on conjoining structural and spectral realms through local atomic behaviour.

  5. Uterus segmentation in dynamic MRI using LBP texture descriptors

    NASA Astrophysics Data System (ADS)

    Namias, R.; Bellemare, M.-E.; Rahim, M.; Pirró, N.

    2014-03-01

    Pelvic floor disorders cover pathologies whose physiopathology is not well understood, yet cases are increasingly prevalent in an ageing population. Within the context of a project aiming at modelling the dynamics of pelvic organs, we have developed an efficient segmentation process. It aims at relieving the radiologist of a tedious image-by-image analysis. From a first contour delineating the uterus-vagina set, the organ border is tracked along a dynamic MRI sequence. The process combines movement prediction, local intensity and texture analysis, and active contour geometry control. Movement prediction allows a contour initialization for the next image in the sequence. Intensity analysis provides image-based local contour detection enhanced by local binary pattern (LBP) texture descriptors. Geometry control prohibits self-intersections and smoothes the contour. Results show the efficiency of the method with images produced in clinical routine.
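
    The local binary pattern descriptor mentioned in this record can be illustrated with a minimal sketch. This is not the authors' implementation: it is the basic 8-neighbour LBP on a grey-level image stored as nested lists, with neighbours visited clockwise from the top-left.

```python
def lbp_codes(img):
    """Compute basic 8-neighbour local binary pattern codes for interior pixels.

    img: 2D list of grey levels. Returns a dict mapping (row, col) -> 8-bit code.
    A bit is set when the corresponding neighbour is >= the centre pixel.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = {}
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            centre = img[r][c]
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if img[r + dr][c + dc] >= centre:
                    code |= 1 << bit
            codes[(r, c)] = code
    return codes

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
print(lbp_codes(flat)[(1, 1)])  # uniform patch: all neighbours >= centre -> 255
```

    A texture descriptor for a region is then the histogram of these codes, which a segmentation process can compare between the inside and outside of a tracked contour.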

  6. Computer vision-based analysis of foods: a non-destructive colour measurement tool to monitor quality and safety.

    PubMed

    Mogol, Burçe Ataç; Gökmen, Vural

    2014-05-01

    Computer vision-based image analysis has been widely used in the food industry to monitor food quality. It allows low-cost and non-contact measurements of colour to be performed. In this paper, two computer vision-based image analysis approaches are discussed to extract mean colour or featured colour information from the digital images of foods. These types of information may be of particular importance as colour indicates certain chemical changes or physical properties in foods. As exemplified here, the mean CIE a* value or browning ratio determined by means of computer vision-based image analysis algorithms can be correlated with the acrylamide content of potato chips or cookies. Likewise, the porosity index, an important physical property of breadcrumb, can be calculated easily. In this respect, computer vision-based image analysis provides a useful tool for automatic inspection of food products in a manufacturing line, and it can be actively involved in the decision-making process where rapid quality/safety evaluation is needed. © 2013 Society of Chemical Industry.
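
    The mean-colour and browning-ratio measurements described in this record reduce to simple per-pixel arithmetic. The sketch below is illustrative only; in particular, the "brownness" test (red exceeding blue by a threshold) is a hypothetical stand-in for the CIE-based criterion the paper actually uses.

```python
def mean_colour(pixels):
    """Mean R, G, B of an image given as a list of (r, g, b) tuples."""
    n = len(pixels)
    sums = [0, 0, 0]
    for p in pixels:
        for i in range(3):
            sums[i] += p[i]
    return tuple(s / n for s in sums)

def browning_ratio(pixels, threshold=60):
    """Fraction of pixels classed as 'brown': here, hypothetically, pixels
    whose red channel exceeds the blue channel by more than `threshold`."""
    brown = sum(1 for r, g, b in pixels if r - b > threshold)
    return brown / len(pixels)

pix = [(200, 120, 40), (90, 90, 90), (180, 100, 30), (50, 60, 70)]
print(mean_colour(pix))      # (130.0, 92.5, 57.5)
print(browning_ratio(pix))   # 0.5
```

    In a production line, either statistic could be monitored per frame and correlated against laboratory measurements, as the abstract describes for acrylamide content.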

  7. Information theoretic analysis of linear shift-invariant edge-detection operators

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2012-06-01

    Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influence of the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretic analysis of visual communication channels, where the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end information-theoretic system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling and additive noise, that define the image gathering system. The edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches its maximum possible value. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.
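
    The linear shift-invariant operators compared in this analysis are all convolutions of the image with a small kernel. A minimal sketch, not tied to the paper's specific operators, applying a 3x3 Laplacian kernel in "valid" mode (for this symmetric kernel, convolution and correlation coincide):

```python
def convolve2d(img, k):
    """Valid-mode 2D filtering of a grey image with a 3x3 kernel --
    the linear shift-invariant operation underlying edge detectors."""
    out = []
    for r in range(1, len(img) - 1):
        row = []
        for c in range(1, len(img[0]) - 1):
            acc = 0
            for i in range(3):
                for j in range(3):
                    acc += k[i][j] * img[r - 1 + i][c - 1 + j]
            row.append(acc)
        out.append(row)
    return out

laplacian = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
step = [[0, 0, 10, 10]] * 4  # vertical intensity step edge
print(convolve2d(step, laplacian))  # [[10, -10], [10, -10]]
```

    The zero crossing between the +10/-10 responses marks the edge; in the paper's framework, noise added by the image gathering system perturbs exactly these responses, which is why the operators must be assessed end to end.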

  8. Current and future trends in marine image annotation software

    NASA Astrophysics Data System (ADS)

    Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.

    2016-12-01

    Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation - the process of transposing objects or events represented in a video or still image to the semantic level - may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) have enabled over 500 publications to date. We review the functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input are basically a graphical user interface, with a video player or image browser that recognizes a specific time code or image code, allowing events to be logged in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software in their capability of integrating data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating after acquisition, and interacting with a database. These range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow input and display of data from multiple sensors or multiple annotators via intranet or internet. Posterior human-mediated annotation often includes tools for data display and image analysis, e.g. length, area, image segmentation, point count, and in a few cases the possibility of browsing and editing previous dive logs or analyzing the annotations. The interaction with a database allows the automatic integration of annotations from different surveys, repeated annotation and collaborative annotation of shared datasets, and browsing and querying of data. 
Progress in the field of automated annotation is mostly in post processing, for stable platforms or still images. Integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics aiding the choice of a specific software are outlined, the ideal software is discussed and future trends are presented.

  9. Collaborative SDOCT Segmentation and Analysis Software.

    PubMed

    Yun, Yeyi; Carass, Aaron; Lang, Andrew; Prince, Jerry L; Antony, Bhavna J

    2017-02-01

    Spectral domain optical coherence tomography (SDOCT) is routinely used in the management and diagnosis of a variety of ocular diseases. This imaging modality also finds widespread use in research, where quantitative measurements obtained from the images are used to track disease progression. In recent years, the number of available scanners and imaging protocols has grown, and there is a distinct absence of a unified tool that is capable of visualizing, segmenting, and analyzing the data. This is especially noteworthy in longitudinal studies, where data from older scanners and/or protocols may need to be analyzed. Here, we present a graphical user interface (GUI) that allows users to visualize and analyze SDOCT images obtained from two commonly used scanners. The retinal surfaces in the scans can be segmented using a previously described method, and the retinal layer thicknesses can be compared to a normative database. If necessary, the segmented surfaces can also be corrected and the changes applied. The interface also allows users to import and export retinal layer thickness data to an SQL database, thereby allowing for the collation of data from a number of collaborating sites.
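
    The SQL export described in this record can be sketched with Python's built-in sqlite3 module. The table schema, column names, and values here are hypothetical, not those of the actual tool; the point is only that per-layer thickness measurements collate naturally into a relational table that multiple sites can query.

```python
import sqlite3

# Hypothetical schema: one row per (subject, layer) thickness measurement.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE layer_thickness (
    subject_id TEXT, scan_date TEXT, layer TEXT, thickness_um REAL)""")
rows = [("S001", "2016-05-01", "RNFL", 94.2),
        ("S001", "2016-05-01", "GCL", 81.7),
        ("S002", "2016-05-03", "RNFL", 88.9)]
conn.executemany("INSERT INTO layer_thickness VALUES (?, ?, ?, ?)", rows)
conn.commit()

# Query back the mean RNFL thickness across subjects, the kind of
# cross-site aggregate a normative database comparison relies on.
(mean_rnfl,) = conn.execute(
    "SELECT AVG(thickness_um) FROM layer_thickness WHERE layer = 'RNFL'"
).fetchone()
print(round(mean_rnfl, 2))  # 91.55
```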

  10. Filling the gap: adding super-resolution to array tomography for correlated ultrastructural and molecular identification of electrical synapses at the C. elegans connectome.

    PubMed

    Markert, Sebastian Matthias; Britz, Sebastian; Proppert, Sven; Lang, Marietta; Witvliet, Daniel; Mulcahy, Ben; Sauer, Markus; Zhen, Mei; Bessereau, Jean-Louis; Stigloher, Christian

    2016-10-01

    Correlating molecular labeling at the ultrastructural level with high confidence remains challenging. Array tomography (AT) allows for a combination of fluorescence and electron microscopy (EM) to visualize subcellular protein localization on serial EM sections. Here, we describe an application for AT that combines near-native tissue preservation via high-pressure freezing and freeze substitution with super-resolution light microscopy and high-resolution scanning electron microscopy (SEM) analysis on the same section. We established protocols that combine SEM with structured illumination microscopy (SIM) and direct stochastic optical reconstruction microscopy (dSTORM). We devised a method for easy, precise, and unbiased correlation of EM images and super-resolution imaging data using endogenous cellular landmarks and freely available image processing software. We demonstrate that these methods allow us to identify and label gap junctions in Caenorhabditis elegans with precision and confidence, and imaging of even smaller structures is feasible. With the emergence of connectomics, these methods will allow us to fill in the gap-acquiring the correlated ultrastructural and molecular identity of electrical synapses.

  11. Spatially resolved chemical analysis of cicada wings using laser-ablation electrospray ionization (LAESI) imaging mass spectrometry (IMS).

    PubMed

    Román, Jessica K; Walsh, Callee M; Oh, Junho; Dana, Catherine E; Hong, Sungmin; Jo, Kyoo D; Alleyne, Marianne; Miljkovic, Nenad; Cropek, Donald M

    2018-03-01

    Laser-ablation electrospray ionization (LAESI) imaging mass spectrometry (IMS) is an emerging bioanalytical tool for direct imaging and analysis of biological tissues. Performing ionization in an ambient environment, this technique requires little sample preparation and no additional matrix, and can be performed on natural, uneven surfaces. When combined with optical microscopy, the investigation of biological samples by LAESI allows for spatially resolved compositional analysis. We demonstrate here the applicability of LAESI-IMS for the chemical analysis of thin, desiccated biological samples, specifically Neotibicen pruinosus cicada wings. Positive-ion LAESI-IMS accurate ion-map data was acquired from several wing cells and superimposed onto optical images allowing for compositional comparisons across areas of the wing. Various putative chemical identifications were made indicating the presence of hydrocarbons, lipids/esters, amines/amides, and sulfonated/phosphorylated compounds. With the spatial resolution capability, surprising chemical distribution patterns were observed across the cicada wing, which may assist in correlating trends in surface properties with chemical distribution. Observed ions were either (1) equally dispersed across the wing, (2) more concentrated closer to the body of the insect (proximal end), or (3) more concentrated toward the tip of the wing (distal end). These findings demonstrate LAESI-IMS as a tool for the acquisition of spatially resolved chemical information from fragile, dried insect wings. This LAESI-IMS technique has important implications for the study of functional biomaterials, where understanding the correlation between chemical composition, physical structure, and biological function is critical. Graphical abstract Positive-ion laser-ablation electrospray ionization mass spectrometry coupled with optical imaging provides a powerful tool for the spatially resolved chemical analysis of cicada wings.

  12. CognitionMaster: an object-based image analysis framework

    PubMed Central

    2013-01-01

    Background Automated image analysis methods are becoming more and more important for extracting and quantifying image features in microscopy-based biomedical studies, and several commercial or open-source tools are available. However, most of the approaches rely on pixel-wise operations, a concept that has limitations when high-level object features and relationships between objects are studied and when user interactivity on the object level is desired. Results In this paper we present an open-source software that facilitates the analysis of content features and object relationships by using objects as the basic processing unit instead of individual pixels. Our approach also enables users without programming knowledge to compose "analysis pipelines" that exploit the object-level approach. We demonstrate the design and use of example pipelines for immunohistochemistry-based cell proliferation quantification in breast cancer and for two-photon fluorescence microscopy data on bone-osteoclast interaction, which underline the advantages of the object-based concept. Conclusions We introduce an open-source software system that offers object-based image analysis. The object-based concept allows for a straightforward development of object-related interactive or fully automated image analysis solutions. The presented software may therefore serve as a basis for various applications in the field of digital image analysis. PMID:23445542

  13. System of technical vision for autonomous unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Bondarchuk, A. S.

    2018-05-01

    This paper is devoted to the implementation of an image recognition algorithm using the LabVIEW software. The created virtual instrument is designed to detect objects in frames from the camera mounted on the UAV. The trained classifier is invariant to changes in rotation, as well as to small changes in the camera's viewing angle. Finding objects in the image using particle analysis allows regions of different sizes to be classified. This method allows the system of technical vision to determine more accurately the location of the objects of interest and their movement relative to the camera.
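
    The particle analysis mentioned in this record amounts to connected-component labelling of a binary detection mask, after which regions can be classified by size. The paper uses LabVIEW's vision toolkit; the Python sketch below only illustrates the underlying idea with a minimal 4-connected flood fill.

```python
def label_regions(mask):
    """4-connected component labelling on a binary mask (list of lists).
    Returns a dict of label -> pixel count, a simple stand-in for the
    per-particle measurements produced by particle analysis."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    sizes, next_label = {}, 1
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                stack, count = [(r, c)], 0
                labels[r][c] = next_label
                while stack:
                    y, x = stack.pop()
                    count += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
                sizes[next_label] = count
                next_label += 1
    return sizes

mask = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 1, 1]]
print(sorted(label_regions(mask).values()))  # [3, 3]
```

    Filtering the resulting size dictionary by thresholds is what lets regions of different sizes be assigned to different object classes.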

  14. Development of Nomarski microscopy for quantitative determination of surface topography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartman, J. S.; Gordon, R. L.; Lessor, D. L.

    1979-01-01

    The use of Nomarski differential interference contrast (DIC) microscopy has been extended to provide nondestructive, quantitative analysis of a sample's surface topography. Theoretical modeling has determined the dependence of the image intensity on the microscope's optical components, the sample's optical properties, and the sample's surface orientation relative to the microscope. Results include expressions to allow the inversion of image intensity data to determine sample surface slopes. A commercial Nomarski system has been modified and characterized to allow the evaluation of the optical model. Data have been recorded with smooth, planar samples that verify the theoretical predictions.

  15. The influence of biological and technical factors on quantitative analysis of amyloid PET: Points to consider and recommendations for controlling variability in longitudinal data.

    PubMed

    Schmidt, Mark E; Chiao, Ping; Klein, Gregory; Matthews, Dawn; Thurfjell, Lennart; Cole, Patricia E; Margolin, Richard; Landau, Susan; Foster, Norman L; Mason, N Scott; De Santi, Susan; Suhy, Joyce; Koeppe, Robert A; Jagust, William

    2015-09-01

    In vivo imaging of amyloid burden with positron emission tomography (PET) provides a means for studying the pathophysiology of Alzheimer's and related diseases. Measurement of subtle changes in amyloid burden requires quantitative analysis of image data. Reliable quantitative analysis of amyloid PET scans acquired at multiple sites and over time requires rigorous standardization of acquisition protocols, subject management, tracer administration, image quality control, and image processing and analysis methods. We review critical points in the acquisition and analysis of amyloid PET, identify ways in which technical factors can contribute to measurement variability, and suggest methods for mitigating these sources of noise. Improved quantitative accuracy could reduce the sample size necessary to detect intervention effects when amyloid PET is used as a treatment end point and allow more reliable interpretation of change in amyloid burden and its relationship to clinical course. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  16. An image processing and analysis tool for identifying and analysing complex plant root systems in 3D soil using non-destructive analysis: Root1.

    PubMed

    Flavel, Richard J; Guppy, Chris N; Rabbi, Sheikh M R; Young, Iain M

    2017-01-01

    The objective of this study was to develop a flexible and free image processing and analysis solution, based on the public domain ImageJ platform, for the segmentation and analysis of complex biological plant root systems in soil from X-ray tomography 3D images. Contrasting root architectures from wheat, barley and chickpea root systems were grown in soil and scanned using a high-resolution micro-tomography system. A macro (Root1) was developed that reliably identified complex root systems with good to high accuracy (10% overestimation for chickpea, 1% underestimation for wheat, 8% underestimation for barley) and provided analysis of root length and angle. In-built flexibility allowed user interaction to (a) amend any aspect of the macro to account for specific user preferences, and (b) take account of computational limitations of the platform. The platform is free, flexible and accurate in analysing root system metrics.

  17. Visualizing Microbial Biogeochemistry: NanoSIMS and Stable Isotope Probing (Invited)

    NASA Astrophysics Data System (ADS)

    Pett-Ridge, J.; Weber, P. K.

    2009-12-01

    Linking phylogenetic information to function in microbial communities is a key challenge for microbial ecology. Isotope-labeling experiments provide a useful means to investigate the ecophysiology of microbial populations and cells in the environment and allow measurement of nutrient transfers between cell types, symbionts and consortia. The combination of Nano-Secondary Ion Mass Spectrometry (NanoSIMS) analysis, in situ labeling and high resolution microscopy allows isotopic analysis to be linked to phylogeny and morphology and holds great promise for fine-scale studies of microbial systems. In NanoSIMS analysis, samples are sputtered with an energetic primary beam (Cs+, O-) liberating secondary ions that are separated by the mass spectrometer and detected in a suite of electron multipliers. Five isotopic species may be analyzed concurrently with spatial resolution as fine as 50nm. A high sensitivity isotope ratio ‘map’ can then be generated for the analyzed area. NanoSIMS images of 13C, 15N and Mo (a nitrogenase co-factor) localization in diazotrophic cyanobacteria show how cells differentially allocate resources within filaments and allow calculation of nutrient uptake rates on a cell by cell basis. Images of AM fungal hyphae-root and cyanobacteria-rhizobia associations indicate the mobilization and sharing (stealing?) of newly fixed C and N. In a related technique, “El-FISH”, stable isotope labeled biomass is probed with oligonucleotide-elemental labels and then imaged by NanoSIMS. In microbial consortia and cyanobacterial mats, this technique helps link microbial structure and function simultaneously even in systems with unknown and uncultivated microbes. Finally, the combination of re-engineered universal 16S oligonucleotide microarrays with NanoSIMS analyses may allow microbial identity to be linked to functional roles in complex systems such as mats and cellulose degrading hindgut communities. 
These newly developed methods provide correlated oligonucleotide, functional enzyme and metabolic image data and should help unravel the metabolic processes of complex microbial communities in soils, biofilms and aquatic systems.

  18. Amplitude image processing by diffractive optics.

    PubMed

    Cagigal, Manuel P; Valle, Pedro J; Canales, V F

    2016-02-22

    In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, like low-pass or high-pass filtering, is carried out using diffractive optical elements (DOE), since they allow operations on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we present an analysis of amplitude image processing performance. In particular, a DOE Laplacian filter is applied to simulated astronomical images for detecting two stars one Airy ring apart. We also check by numerical simulations that the use of a Laplacian amplitude filter produces less noisy images than standard digital image processing.

  19. An x ray archive on your desk: The Einstein CD-ROM's

    NASA Technical Reports Server (NTRS)

    Prestwich, A.; Mcdowell, J.; Plummer, D.; Manning, K.; Garcia, M.

    1992-01-01

    Data from the Einstein Observatory imaging proportional counter (IPC) and high resolution imager (HRI) were released on several CD-ROM sets. The sets released so far include pointed IPC and HRI observations in both simple image and detailed photon event list format, as well as the IPC slew survey. With the data on these CD-ROMs the user can perform spatial analysis (e.g., surface brightness distributions), spectral analysis (with the IPC event lists), and timing analysis (with the IPC and HRI event lists). The next CD-ROM set will contain IPC unscreened data, allowing the user to perform custom screening to recover, for instance, data during times of lost aspect data or high particle background rates.

  20. A 2D/3D image analysis system to track fluorescently labeled structures in rod-shaped cells: application to measure spindle pole asymmetry during mitosis.

    PubMed

    Schmitter, Daniel; Wachowicz, Paulina; Sage, Daniel; Chasapi, Anastasia; Xenarios, Ioannis; Simanis; Unser, Michael

    2013-01-01

    The yeast Schizosaccharomyces pombe is frequently used as a model for studying the cell cycle. The cells are rod-shaped and divide by medial fission. The process of cell division, or cytokinesis, is controlled by a network of signaling proteins called the Septation Initiation Network (SIN); SIN proteins associate with the spindle pole bodies (SPBs) during nuclear division (mitosis). Some SIN proteins associate with both SPBs early in mitosis, and then display strongly asymmetric signal intensity at the SPBs in late mitosis, just before cytokinesis. This asymmetry is thought to be important for correct regulation of SIN signaling, and coordination of cytokinesis and mitosis. In order to study the dynamics of organelles or large protein complexes such as the SPB, which have been labeled with a fluorescent protein tag in living cells, a number of image analysis problems must be solved: the cell outline must be detected automatically, and the position and signal intensity associated with the structures of interest within the cell must be determined. We present a new 2D and 3D image analysis system that permits versatile and robust analysis of motile, fluorescently labeled structures in rod-shaped cells. We have designed an image analysis system that we have implemented as a user-friendly software package allowing the fast and robust image analysis of large numbers of rod-shaped cells. We have developed new robust algorithms, which we combined with existing methodologies to facilitate fast and accurate analysis. Our software permits the detection and segmentation of rod-shaped cells in either static or dynamic (i.e. time-lapse) multi-channel images. It enables tracking of two structures (for example SPBs) in two different image channels. 
For 2D or 3D static images, the locations of the structures are identified, and intensity values are then extracted together with several quantitative parameters, such as length, width, cell orientation, background fluorescence and the distance between the structures of interest. Furthermore, two kinds of kymographs of the tracked structures can be established, one representing their migration with respect to their relative position, the other representing their individual trajectories inside the cell. This software package, called "RodCellJ", allowed us to analyze a large number of S. pombe cells to understand the rules that govern SIN protein asymmetry. "RodCellJ" is freely available to the community as a package of several ImageJ plugins to simultaneously analyze the behavior of a large number of rod-shaped cells in an extensive manner. The integration of different image-processing techniques in a single package, as well as the development of novel algorithms, not only speeds up the analysis relative to existing tools but also provides higher accuracy. Its utility was demonstrated on both 2D and 3D, static and dynamic, images to study the septation initiation network of the yeast Schizosaccharomyces pombe. More generally, it can be used in any biological context where fluorescent-protein-labeled structures need to be analyzed in rod-shaped cells. RodCellJ is freely available under http://bigwww.epfl.ch/algorithms.html.
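
    One of the quantitative parameters listed in this record, the distance between the two tracked structures, is straightforward once matched position tracks exist. A sketch on hypothetical data (this is not RodCellJ code, which is an ImageJ plugin suite):

```python
import math

def pairwise_distances(track_a, track_b):
    """Distance between two tracked structures (e.g. the two SPBs) at each
    time point, given matched lists of (x, y) positions."""
    return [math.dist(a, b) for a, b in zip(track_a, track_b)]

# Hypothetical SPB tracks over three time points.
spb1 = [(0.0, 0.0), (0.0, 0.0), (1.0, 0.0)]
spb2 = [(3.0, 4.0), (6.0, 8.0), (4.0, 4.0)]
print(pairwise_distances(spb1, spb2))  # [5.0, 10.0, 5.0]
```

    Plotting such a distance series against time, or against the per-structure signal intensities, is the kind of readout used to quantify SPB signal asymmetry in late mitosis.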

  1. Imaging plant leaves to determine changes in radioactive contamination status in Fukushima, Japan.

    PubMed

    Nakajima, Hiroo; Fujiwara, Mamoru; Tanihata, Isao; Saito, Tadashi; Matsuda, Norihiro; Todo, Takeshi

    2014-05-01

    The chemical composition of plant leaves often reflects environmental contamination. The authors analyzed images of plant leaves to investigate the regional radioactivity ecology resulting from the 2011 accident at the Fukushima No. 1 nuclear power plant, Japan. The present study is not an evaluation of the macro radiation dose per weight, which has been performed previously, but rather an image analysis of the radioactive dose per leaf, allowing the capture of various gradual changes in radioactive contamination as a function of elapsed time. In addition, the leaf analysis method has potential applications in the decontamination of food plants or other materials.

  2. Comparison between non-invasive methods used on paintings by Goya and his contemporaries: hyperspectral imaging vs. point-by-point spectroscopic analysis.

    PubMed

    Daniel, Floréal; Mounier, Aurélie; Pérez-Arantegui, Josefina; Pardos, Carlos; Prieto-Taboada, Nagore; Fdez-Ortiz de Vallejuelo, Silvia; Castro, Kepa

    2017-06-01

    The development of non-invasive techniques for the characterization of pigments is crucial in order to preserve the integrity of the artwork. In this sense, the usefulness of hyperspectral imaging was demonstrated. It allows pigment characterization of the whole painting. However, it also sometimes requires complementary point-by-point techniques. In the present article, the advantages of hyperspectral imaging over point-by-point spectroscopic analysis were evaluated. For that purpose, three paintings were analysed by hyperspectral imaging, handheld X-ray fluorescence and handheld Raman spectroscopy in order to determine the best non-invasive technique for pigment identification. Thanks to this work, the main pigments used in Aragonese artworks, and especially in Goya's paintings, were identified and mapped by imaging reflection spectroscopy. All the analysed pigments corresponded to those used at the time of Goya. Regarding the techniques used, the information obtained by the hyperspectral imaging and point-by-point analysis has been, in general, different and complementary. Given this fact, selecting only one technique is not recommended, and the present work demonstrates the usefulness of the combination of all the techniques used as the best non-invasive methodology for pigment characterization. Moreover, the proposed methodology is a relatively quick procedure that allows a larger number of Goya's paintings in the museum to be surveyed, increasing the possibility of obtaining significant results and providing a chance for extensive comparisons, which are relevant from the point of view of art history issues.

  3. Image guided constitutive modeling of the silicone brain phantom

    NASA Astrophysics Data System (ADS)

    Puzrin, Alexander; Skrinjar, Oskar; Ozan, Cem; Kim, Sihyun; Mukundan, Srinivasan

    2005-04-01

    The goal of this work is to develop reliable constitutive models of the mechanical behavior of the in-vivo human brain tissue for applications in neurosurgery. We propose to define the mechanical properties of the brain tissue in-vivo by taking global MR or CT images of the brain's response to ventriculostomy - the relief of elevated intracranial pressure. 3D image analysis translates these images into displacement fields, which by using inverse analysis allow for the constitutive models of the brain tissue to be developed. We term this approach Image Guided Constitutive Modeling (IGCM). The presented paper demonstrates the performance of the IGCM in a controlled environment: on silicone brain phantoms closely simulating the in-vivo brain geometry, mechanical properties and boundary conditions. The phantom of the left hemisphere of a human brain was cast using silicone gel. An inflatable rubber membrane was placed inside the phantom to model the lateral ventricle. The experiments were carried out in a specially designed setup in a CT scanner with submillimeter isotropic voxels. Non-communicating hydrocephalus and ventriculostomy were simulated by sequentially inflating and deflating the internal rubber membrane. The obtained images were analyzed to derive displacement fields, meshed, and incorporated into ABAQUS. The subsequent Inverse Finite Element Analysis (based on the Levenberg-Marquardt algorithm) allowed for optimization of the parameters of the Mooney-Rivlin non-linear elastic model for the phantom material. The calculated mechanical properties were consistent with those obtained from the element tests, providing justification for the future application of the IGCM to in-vivo brain tissue.

  4. An automated, high-throughput plant phenotyping system using machine learning-based plant segmentation and image analysis.

    PubMed

    Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan

    2018-01-01

    A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing is very important to accurately determine the characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-imaging-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed using a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to analyze plant growth via large-scale plant image data easily.
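
    The segmentation step described above pairs superpixel-level features with a Random Forest classifier. A minimal sketch of that idea, using a regular block grid as a stand-in for true superpixels and invented colors and sizes:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Toy "plant vs background" image: greenish foreground blob on brownish soil.
    img = rng.normal([0.45, 0.30, 0.20], 0.03, size=(64, 64, 3))        # soil
    img[16:48, 16:48] = rng.normal([0.25, 0.60, 0.25], 0.03, (32, 32, 3))  # plant
    truth = np.zeros((64, 64), dtype=int)
    truth[16:48, 16:48] = 1

    # Stand-in for SLIC superpixels: a regular 8x8 block grid. Each block yields
    # one feature vector (mean RGB + excess-green index), mirroring the paper's
    # superpixel-level classification.
    def block_features(image):
        feats = []
        for i in range(0, 64, 8):
            for j in range(0, 64, 8):
                r, g, b = image[i:i+8, j:j+8].reshape(-1, 3).mean(axis=0)
                feats.append([r, g, b, 2 * g - r - b])
        return np.array(feats)

    X = block_features(img)
    y = np.array([int(truth[i:i+8, j:j+8].mean() > 0.5)
                  for i in range(0, 64, 8) for j in range(0, 64, 8)])

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    pred = clf.predict(X)
    area_px = pred.sum() * 8 * 8   # projected plant area in pixels
    print(area_px)
    ```

    Tracking such an area estimate over successive time points gives the growth curves the pipeline visualizes.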

  5. Semantic focusing allows fully automated single-layer slide scanning of cervical cytology slides.

    PubMed

    Lahrmann, Bernd; Valous, Nektarios A; Eisenmann, Urs; Wentzensen, Nicolas; Grabe, Niels

    2013-01-01

    Liquid-based cytology (LBC) in conjunction with Whole-Slide Imaging (WSI) enables the objective, sensitive and quantitative evaluation of biomarkers in cytology. However, the complex three-dimensional distribution of cells on LBC slides requires manual focusing, long scanning times, and multi-layer scanning. Here, we present a solution that overcomes these limitations in two steps: first, we make sure that focus points are only set on cells; second, we check the total slide focus quality. An initial analysis showed that superficial dust can be separated from the cell layer (the thin layer of cells on the glass slide) itself. We then analyzed 2,295 individual focus points from 51 LBC slides stained for p16 and Ki67. Using the number of edges in a focus point image, specific color values and size-inclusion filters, focus points detecting cells could be distinguished from focus points on artifacts (accuracy 98.6%). Sharpness, as the total focus quality of a virtual LBC slide, is computed from 5 sharpness features. We trained a multi-parameter SVM classifier on 1,600 images. On an independent validation set of 3,232 cell images, we achieved an accuracy of 94.8% for classifying images as focused. Our results show that single-layer scanning of LBC slides is possible and how it can be achieved. We assembled focus point analysis and sharpness classification into a fully automatic, iterative workflow, free of user intervention, which performs repetitive slide scanning as necessary. On 400 LBC slides we achieved a scanning time of 13.9±10.1 min with 29.1±15.5 focus points. In summary, the integration of semantic focus information into whole-slide imaging allows automatic high-quality imaging of LBC slides and subsequent biomarker analysis.
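
    The edge-count criterion used above to recognize cell-bearing focus points can be illustrated with a simple gradient-based sharpness feature; this is a generic sketch, not the paper's exact 5-feature measure:

    ```python
    import numpy as np

    def edge_density(img, thresh=0.1):
        """Fraction of pixels with strong gradients -- a simple sharpness
        feature in the spirit of the paper's edge-count criterion."""
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        return (mag > thresh).mean()

    rng = np.random.default_rng(1)
    # Synthetic "cell layer": high-frequency texture (in focus) vs a blurred copy.
    sharp = rng.random((128, 128))
    blurred = sharp.copy()
    for _ in range(5):   # crude blur: repeated 5-point neighbour averaging
        blurred = (blurred
                   + np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0)
                   + np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1)) / 5.0

    print(edge_density(sharp), edge_density(blurred))
    ```

    A focus point landing on dust or empty glass would score low on such a feature, so it can be rejected before scanning.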

  6. McIDAS-eXplorer: A version of McIDAS for planetary applications

    NASA Technical Reports Server (NTRS)

    Limaye, Sanjay S.; Saunders, R. Stephen; Sromovsky, Lawrence A.; Martin, Michael

    1994-01-01

    McIDAS-eXplorer is a set of software tools developed for analysis of planetary data published by the Planetary Data System on CD-ROMs. It is built upon McIDAS-X, an environment that has been in use for nearly two decades in earth weather satellite data applications, both in research and routine operations. The environment allows convenient access, navigation, analysis, display, and animation of planetary data by utilizing the full calibration data accompanying the planetary data. Support currently exists for Voyager images of the giant planets and their satellites; Magellan radar images (F-MIDRs and C-MIDRs), global map products (GxDRs), and altimetry data (ARCDRs); Galileo SSI images of the earth, moon, and Venus; Viking Mars images and MDIMs; as well as most earth-based telescopic images of solar system objects (FITS). The NAIF/JPL SPICE kernels are used for image navigation when available. For data without SPICE kernels (such as the bulk of the Voyager Jupiter and Saturn imagery and Pioneer Orbiter images of Venus), tools based on the NAIF toolkit allow the user to navigate the images interactively. Multiple navigation types can be attached to a given image (e.g., for ring navigation and planet navigation in the same image). Tools are available to perform common image processing tasks such as digital filtering, cartographic mapping, map overlays, and data extraction. It is also possible to assign different planetary radii to an object such as Venus, which requires different radii for the surface and for the cloud level. A graphical user interface based on the Tcl/Tk scripting language is provided (UNIX only at present) for using the environment and for on-line help. End users can add applications of their own to the environment at any time.

  7. A Parallel Independent Component Analysis Approach to Investigate Genomic Influence on Brain Function

    PubMed Central

    Liu, Jingyu; Demirci, Oguz; Calhoun, Vince D.

    2009-01-01

    Relationships between genomic data and functional brain images are of great interest but require new analysis approaches to integrate the high-dimensional data types. This letter presents an extension of a technique called parallel independent component analysis (paraICA), which enables the joint analysis of multiple modalities, including the interconnections between them. We extend our earlier work by allowing for multiple interconnections and by providing important overfitting controls. Performance was assessed by simulations under different conditions, which indicated that reliable results can be extracted by properly balancing overfitting and underfitting. An application to functional magnetic resonance images and single nucleotide polymorphism arrays produced interesting findings. PMID:19834575
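
    paraICA itself is not available in standard libraries, but the independent component decomposition it extends can be sketched with scikit-learn's FastICA on a synthetic two-source mixture; the sources and mixing matrix below are invented stand-ins:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n = 2000
    t = np.linspace(0, 8, n)

    # Two independent, non-Gaussian sources (stand-ins for an fMRI component
    # and a SNP-array component; paraICA adds a modeled link between the two
    # decompositions, which this base sketch omits).
    s1 = np.sign(np.sin(3 * t))          # square wave
    s2 = rng.laplace(size=n)             # sparse noise
    S = np.c_[s1, s2]

    A = np.array([[1.0, 0.5], [0.4, 1.0]])   # mixing matrix
    X = S @ A.T                               # observed mixtures

    ica = FastICA(n_components=2, random_state=0)
    S_hat = ica.fit_transform(X)

    # Recovered components match the sources up to order, sign and scale.
    corr = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
    print(corr.max(axis=1))
    ```

    In paraICA the two modality-specific unmixing problems are solved jointly while a correlation term between their mixing coefficients is maximized.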

  8. A Parallel Independent Component Analysis Approach to Investigate Genomic Influence on Brain Function.

    PubMed

    Liu, Jingyu; Demirci, Oguz; Calhoun, Vince D

    2008-01-01

    Relationships between genomic data and functional brain images are of great interest but require new analysis approaches to integrate the high-dimensional data types. This letter presents an extension of a technique called parallel independent component analysis (paraICA), which enables the joint analysis of multiple modalities, including the interconnections between them. We extend our earlier work by allowing for multiple interconnections and by providing important overfitting controls. Performance was assessed by simulations under different conditions, which indicated that reliable results can be extracted by properly balancing overfitting and underfitting. An application to functional magnetic resonance images and single nucleotide polymorphism arrays produced interesting findings.

  9. A novel iris transillumination grading scale allowing flexible assessment with quantitative image analysis and visual matching.

    PubMed

    Wang, Chen; Brancusi, Flavia; Valivullah, Zaheer M; Anderson, Michael G; Cunningham, Denise; Hedberg-Buenz, Adam; Power, Bradley; Simeonov, Dimitre; Gahl, William A; Zein, Wadih M; Adams, David R; Brooks, Brian

    2018-01-01

    To develop a sensitive scale of iris transillumination suitable for clinical and research use, with the capability of either quantitative analysis or visual matching of images. Iris transillumination photographic images were used from 70 study subjects with ocular or oculocutaneous albinism. Subjects represented a broad range of ocular pigmentation. A subset of images was subjected to image analysis and ranking by both expert and nonexpert reviewers. Quantitative ordering of images was compared with ordering by visual inspection. Images were binned to establish an 8-point scale. Ranking consistency was evaluated using the Kendall rank correlation coefficient (Kendall's tau). Visual ranking results were assessed using Kendall's coefficient of concordance (Kendall's W) analysis. There was a high degree of correlation among the image analysis, expert-based and non-expert-based image rankings. Pairwise comparisons of the quantitative ranking with each reviewer generated an average Kendall's tau of 0.83 ± 0.04 (SD). Inter-rater correlation was also high with Kendall's W of 0.96, 0.95, and 0.95 for nonexpert, expert, and all reviewers, respectively. The current standard for assessing iris transillumination is expert assessment of clinical exam findings. We adapted an image-analysis technique to generate quantitative transillumination values. Quantitative ranking was shown to be highly similar to a ranking produced by both expert and nonexpert reviewers. This finding suggests that the image characteristics used to quantify iris transillumination do not require expert interpretation. Inter-rater rankings were also highly similar, suggesting that varied methods of transillumination ranking are robust in terms of producing reproducible results.
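
    The two agreement statistics used above are straightforward to compute; a sketch with invented 8-point rankings (SciPy provides Kendall's tau, and Kendall's W follows from its textbook formula):

    ```python
    import numpy as np
    from scipy.stats import kendalltau

    # Hypothetical rankings of 8 iris images (the paper's 8-point scale) by a
    # quantitative method and three reviewers; all values are illustrative.
    quantitative = np.array([1, 2, 3, 4, 5, 6, 7, 8])
    reviewers = np.array([
        [1, 2, 3, 4, 5, 6, 7, 8],
        [2, 1, 3, 4, 5, 6, 8, 7],
        [1, 2, 4, 3, 5, 6, 7, 8],
    ])

    # Pairwise agreement with the quantitative ordering (Kendall's tau).
    taus = [kendalltau(quantitative, r)[0] for r in reviewers]
    print(np.mean(taus))

    # Inter-rater agreement (Kendall's W): W = 12 S / (m^2 (n^3 - n)), where S
    # is the sum of squared deviations of the rank sums from their mean.
    m, n = reviewers.shape
    rank_sums = reviewers.sum(axis=0)
    S = ((rank_sums - rank_sums.mean()) ** 2).sum()
    W = 12 * S / (m ** 2 * (n ** 3 - n))
    print(W)
    ```

    Values near 1 for both statistics correspond to the high agreement the study reports between quantitative and visual rankings.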

  10. Quantitative assessment of image motion blur in diffraction images of moving biological cells

    NASA Astrophysics Data System (ADS)

    Wang, He; Jin, Changrong; Feng, Yuanming; Qi, Dandan; Sa, Yu; Hu, Xin-Hua

    2016-02-01

    Motion blur (MB) presents a significant challenge for obtaining high-contrast image data from biological cells with a polarization diffraction imaging flow cytometry (p-DIFC) method. A new p-DIFC experimental system has been developed to evaluate MB and its effect on image analysis using a time-delay-integration (TDI) CCD camera. Diffraction images of MCF-7 and K562 cells have been acquired with different speed-mismatch ratios and compared to characterize MB quantitatively. Frequency analysis of the diffraction images shows that the degree of MB can be quantified by bandwidth variations of the diffraction images along the motion direction. The analytical results were confirmed by the p-DIFC image data acquired at different speed-mismatch ratios and used to validate a method for numerical simulation of MB on blur-free diffraction images, which provides a useful tool to examine the blurring effect on diffraction images acquired from the same cell. These results provide insights into the dependence of diffraction images on MB and allow significant improvement in rapid biological cell assay with the p-DIFC method.
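
    The bandwidth-based quantification of motion blur can be sketched by comparing the half-energy frequency of an image before and after simulated blur along the motion axis; this is a generic illustration, not the paper's exact measure:

    ```python
    import numpy as np

    def bandwidth(img, axis, frac=0.5):
        """Frequency bin below which `frac` of the spectral energy lies,
        averaged over the other axis -- a simple bandwidth proxy."""
        img = img - img.mean()                    # drop the dominant DC term
        spec = np.abs(np.fft.rfft(img, axis=axis)) ** 2
        prof = spec.mean(axis=1 - axis)           # 1-D power profile
        cum = np.cumsum(prof) / prof.sum()
        return int(np.searchsorted(cum, frac))

    rng = np.random.default_rng(0)
    img = rng.random((128, 128))                  # blur-free "diffraction image"
    # Simulated motion blur along axis 0: average 8 successive rows, a crude
    # proxy for a speed mismatch between the cell and the TDI line transfer.
    blurred = np.mean([np.roll(img, s, axis=0) for s in range(8)], axis=0)

    print(bandwidth(img, axis=0), bandwidth(blurred, axis=0))
    ```

    The blurred image's bandwidth along the motion direction collapses while the transverse direction is unaffected, which is the asymmetry the paper exploits.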

  11. Automatic removal of cosmic ray signatures in Deep Impact images

    NASA Astrophysics Data System (ADS)

    Ipatov, S. I.; A'Hearn, M. F.; Klaasen, K. P.

    The results of recognition of cosmic ray (CR) signatures on single images made during the Deep Impact mission were analyzed for several codes written by several authors. For automatic removal of CR signatures on many images, we suggest using the code imgclean ( http://pdssbn.astro.umd.edu/volume/didoc_0001/document/calibration_software/dical_v5/) written by E. Deutsch, as the other codes considered do not work properly in automatic mode with a large number of images and do not run to completion for some images; however, other codes can be better for analysis of certain specific images. Sometimes imgclean detects false CR signatures near the edge of a comet nucleus, and it often does not recognize all pixels of long CR signatures. Our code rmcr is the only code among those considered that allows one to work with raw images. For most visual images made during low solar activity at exposure time t > 4 s, the number of clusters of bright pixels on an image per second per sq. cm of CCD was about 2-4, both for dark and normal sky images. At high solar activity, it sometimes exceeded 10. The ratio of the number of CR signatures consisting of n pixels obtained at high solar activity to that at low solar activity was greater for greater n. The number of clusters detected as CR signatures on a single infrared image is at least several times greater than the actual number of CR signatures; the number of clusters based on analysis of two successive dark infrared frames agrees with the expected number of CR signatures. Some false CR signatures correspond to glitches - bright pixels repeatedly present on different infrared images. Our interactive code imr allows a user to choose the regions on an image where glitches detected by imgclean as CR signatures are ignored. In other regions chosen by the user, the brightness of a pixel is replaced by the local median brightness if it exceeds the median brightness by some factor. The interactive code allows one to delete long CR signatures and prevents removal of false CR signatures near the edge of the comet nucleus. The interactive code can be applied to editing any digital images. The results obtained can be used for other missions to comets.
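
    The thresholded median replacement described for the interactive imr code can be sketched as follows; the threshold factor and window size are illustrative, and the original applies this only within user-chosen regions:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def remove_cosmic_rays(img, factor=3.0, size=5):
        """Replace pixels much brighter than their local median with that
        median, mirroring the replacement rule described for imr (the factor
        and window size here are assumed values)."""
        med = median_filter(img, size=size)
        mask = img > factor * med
        cleaned = img.copy()
        cleaned[mask] = med[mask]
        return cleaned, mask

    rng = np.random.default_rng(0)
    img = rng.normal(100.0, 2.0, size=(64, 64))   # quiet sky background
    img[10, 20] = 5000.0           # single-pixel cosmic ray hit
    img[30, 40:44] = 4000.0        # short CR track

    cleaned, mask = remove_cosmic_rays(img)
    print(mask.sum(), cleaned.max())
    ```

    Because only a few pixels in each window are bright, the local median stays near the background level and the CR pixels stand out cleanly above the threshold.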

  12. Leucocyte classification for leukaemia detection using image processing techniques.

    PubMed

    Putzu, Lorenzo; Caocci, Giovanni; Di Ruberto, Cecilia

    2014-11-01

    The counting and classification of blood cells allow for the evaluation and diagnosis of a vast number of diseases. The analysis of white blood cells (WBCs) allows for the detection of acute lymphoblastic leukaemia (ALL), a blood cancer that can be fatal if left untreated. Currently, the morphological analysis of blood cells is performed manually by skilled operators. However, this method has numerous drawbacks, such as slow analysis, non-standard accuracy, and dependence on the operator's skill. Few examples of automated systems that can analyse and classify blood cells have been reported in the literature, and most of these systems are only partially developed. This paper presents a complete and fully automated method for WBC identification and classification using microscopic images. In contrast to other approaches that identify the nuclei first, which are more prominent than other components, the proposed approach isolates the whole leucocyte and then separates the nucleus and cytoplasm. This approach is necessary to analyse each cell component in detail. From each cell component, different features, such as shape, colour and texture, are extracted using a new approach for background pixel removal. This feature set was used to train different classification models in order to determine which one is most suitable for the detection of leukaemia. Using our method, 245 of 267 total leucocytes were properly identified (92% accuracy) from 33 images taken with the same camera and under the same lighting conditions. Performing this evaluation using different classification models allowed us to establish that the support vector machine with a Gaussian radial basis kernel is the most suitable model for the identification of ALL, with an accuracy of 93% and a sensitivity of 98%. Furthermore, we evaluated the goodness of our new feature set, which displayed better performance with each evaluated classification model. 
The proposed method permits the analysis of blood cells automatically via image processing techniques, and it represents a medical tool to avoid the numerous drawbacks associated with manual observation. This process could also be used for counting, as it provides excellent performance and allows for early diagnostic suspicion, which can then be confirmed by a haematologist through specialised techniques.
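
    The classification stage named above, an SVM with a Gaussian radial basis kernel, can be sketched with scikit-learn on invented stand-in features; the paper's actual shape, colour and texture features are not reproduced here:

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Toy stand-ins for per-cell feature vectors: two Gaussian clusters for
    # healthy lymphocytes vs lymphoblasts (all values illustrative).
    healthy = rng.normal([0.3, 0.4, 0.2], 0.05, size=(150, 3))
    blast = rng.normal([0.6, 0.7, 0.5], 0.05, size=(150, 3))
    X = np.vstack([healthy, blast])
    y = np.array([0] * 150 + [1] * 150)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # SVM with a Gaussian radial basis kernel, the model the paper found
    # most suitable for ALL detection.
    clf = SVC(kernel='rbf', gamma='scale').fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    print(acc)
    ```

    In practice the held-out accuracy, sensitivity and specificity would be compared across several classifier families, as the paper does.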

  13. Semivariogram Analysis of Bone Images Implemented on FPGA Architectures.

    PubMed

    Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang

    2017-03-01

    Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real time implementation of the algorithm. A semi-variance, γ(h), is defined as half of the expected squared difference of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. 
The current implementation is focused on isotropic semivariogram computations only. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development Kit, which utilizes the Virtex5 FPGA. Medical image data from DXA scans are utilized for the experiments. Implementation results show that a significant advantage in computational speed is attained by the architectures with respect to implementation on a personal computer with an Intel i7 multi-core processor.
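
    The O(n²) pairwise computation that the FPGA architectures accelerate can be written directly in software; a minimal reference implementation of the isotropic empirical semivariogram:

    ```python
    import numpy as np

    def semivariogram(window, max_lag):
        """Isotropic empirical semivariogram of an image window.

        gamma(h) = 0.5 * E[(z(p) - z(q))^2] over all pixel pairs whose
        rounded Euclidean separation equals h -- the O(n^2) pairwise form
        that the paper's FPGA architectures parallelize.
        """
        coords = np.argwhere(np.ones_like(window, dtype=bool))
        values = window.ravel().astype(float)
        gamma = np.zeros(max_lag + 1)
        counts = np.zeros(max_lag + 1)
        n = len(values)
        for i in range(n):                      # O(n^2) pair loop
            d = np.rint(np.hypot(*(coords[i + 1:] - coords[i]).T)).astype(int)
            sq = (values[i + 1:] - values[i]) ** 2
            for h in range(1, max_lag + 1):
                sel = d == h
                gamma[h] += sq[sel].sum()
                counts[h] += sel.sum()
        np.divide(gamma, 2 * counts, out=gamma, where=counts > 0)
        return gamma

    # A smooth gradient has low semivariance at short lags, growing with h.
    window = np.add.outer(np.arange(8), np.arange(8)).astype(float)
    g = semivariogram(window, max_lag=4)
    print(np.round(g, 2))
    ```

    The rising gamma(h) curve of trabecular texture is what the diagnostic features are derived from; the FPGA design evaluates many such pixel pairs concurrently instead of in this serial loop.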

  14. Semivariogram Analysis of Bone Images Implemented on FPGA Architectures

    PubMed Central

    Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang

    2016-01-01

    Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real time implementation of the algorithm. A semi-variance, γ(h), is defined as half of the expected squared difference of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. 
The current implementation is focused on isotropic semivariogram computations only. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development Kit, which utilizes the Virtex5 FPGA. Medical image data from DXA scans are utilized for the experiments. Implementation results show that a significant advantage in computational speed is attained by the architectures with respect to implementation on a personal computer with an Intel i7 multi-core processor. PMID:28428829

  15. Quantitative Imaging In Pathology (QUIP) | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    This site hosts web-accessible applications, tools and data designed to support analysis, management, and exploration of whole slide tissue images for cancer research. The following tools are included: caMicroscope: A digital pathology data management and visualization platform that enables interactive viewing of whole slide tissue images and segmentation results. caMicroscope can also be used independently of QUIP. FeatureExplorer: An interactive tool to allow patient-level feature exploration across multiple dimensions.

  16. Exact image theory for the problem of dielectric/magnetic slab

    NASA Technical Reports Server (NTRS)

    Lindell, I. V.

    1987-01-01

    The exact image method, recently introduced for the exact solution of electromagnetic field problems involving homogeneous half spaces and microstrip-like geometries, is developed for the problem of a homogeneous slab of dielectric and/or magnetic material in free space. Expressions for the image sources, creating the exact reflected and transmitted fields, are given and their numerical evaluation is demonstrated. Nonradiating modes, guided by the slab and responsible for the loss of convergence of the image functions, are considered and extracted. The theory allows, for example, an analysis of finite ground planes in microstrip antenna structures.

  17. BioImageXD: an open, general-purpose and high-throughput image-processing platform.

    PubMed

    Kankaanpää, Pasi; Paavolainen, Lassi; Tiitta, Silja; Karjalainen, Mikko; Päivärinne, Joacim; Nieminen, Jonna; Marjomäki, Varpu; Heino, Jyrki; White, Daniel J

    2012-06-28

    BioImageXD puts open-source computer science tools for three-dimensional visualization and analysis into the hands of all researchers, through a user-friendly graphical interface tuned to the needs of biologists. BioImageXD has no restrictive licenses or undisclosed algorithms and enables publication of precise, reproducible and modifiable workflows. It allows simple construction of processing pipelines and should enable biologists to perform challenging analyses of complex processes. We demonstrate its performance in a study of integrin clustering in response to selected inhibitors.

  18. Fluorescence lifetime imaging of skin cancer

    NASA Astrophysics Data System (ADS)

    Patalay, Rakesh; Talbot, Clifford; Munro, Ian; Breunig, Hans Georg; König, Karsten; Alexandrov, Yuri; Warren, Sean; Neil, Mark A. A.; French, Paul M. W.; Chu, Anthony; Stamp, Gordon W.; Dunsby, Chris

    2011-03-01

    Fluorescence intensity imaging and fluorescence lifetime imaging microscopy (FLIM) using two photon microscopy (TPM) have been used to study tissue autofluorescence in ex vivo skin cancer samples. A commercially available system (DermaInspect®) was modified to collect fluorescence intensity and lifetimes in two spectral channels using time correlated single photon counting and depth-resolved steady state measurements of the fluorescence emission spectrum. Uniquely, image segmentation has been used to allow fluorescence lifetimes to be calculated for each cell. An analysis of lifetime values obtained from a range of pigmented and non-pigmented lesions will be presented.

  19. Accuracy of DSM based on digital aerial image matching. (Polish Title: Dokładność NMPT tworzonego metodą automatycznego dopasowania cyfrowych zdjęć lotniczych)

    NASA Astrophysics Data System (ADS)

    Kubalska, J. L.; Preuss, R.

    2013-12-01

    Digital Surface Models (DSM) are increasingly used in GIS databases as a standalone product. They are also necessary to create other products such as 3D city models, true-ortho images and object-oriented classifications. This article presents the results of DSM generation for classification of vegetation in urban areas. The source data allowed DSM production using both an image-matching method and ALS data. The creation of the DSM from digital images, obtained by the Ultra Cam-D digital Vexcel camera, was carried out in Match-T by INPHO. This program optimizes the configuration of the image-matching process, which ensures high accuracy and minimizes gap areas. The accuracy of this process was analysed by comparing the DSM generated in Match-T with a DSM generated from ALS data. Given the intended use of the generated DSM, it was decided to create the model in a GRID structure with a cell size of 1 m. A differential model of the two DSMs was also built at this resolution, which allowed the relative accuracy of the compared models to be determined. The analysis indicates that DSM generation with a multi-image matching method is competitive with surface model creation from ALS data. Thus, when digital images with high overlap are available, the additional registration of ALS data seems to be unnecessary.

  20. Small PACS implementation using publicly available software

    NASA Astrophysics Data System (ADS)

    Passadore, Diego J.; Isoardi, Roberto A.; Gonzalez Nicolini, Federico J.; Ariza, P. P.; Novas, C. V.; Omati, S. A.

    1998-07-01

    Building cost-effective PACS solutions is a main concern in developing countries. Hardware and software components are generally much more expensive than in developed countries, and tighter financial constraints further contribute to a slow rate of PACS implementation. The extensive use of the Internet for sharing resources and information has brought a broad range of freely available software packages to an ever-increasing number of users. In the field of medical imaging it is possible to find image format conversion packages, DICOM-compliant servers for all kinds of service classes, databases, web servers, image visualization, manipulation and analysis tools, etc. This paper describes a PACS implementation for review and storage built on freely available software. It currently integrates four diagnostic modalities (PET, CT, MR and NM), a Radiotherapy Treatment Planning workstation and several computers in a local area network, for image storage, database management and image review, processing and analysis. It also includes a web-based application that allows remote users to query the archive for studies from any workstation and to view the corresponding images and reports. We conclude that the advantage of this approach is twofold: it allows a full understanding of all the issues involved in the implementation of a PACS, and it keeps costs down while enabling the development of a functional system for storage, distribution and review that can prove helpful for radiologists and referring physicians.

  1. MALDI Imaging Mass Spectrometry (MALDI-IMS)—Application of Spatial Proteomics for Ovarian Cancer Classification and Diagnosis

    PubMed Central

    Gustafsson, Johan O. R.; Oehler, Martin K.; Ruszkiewicz, Andrew; McColl, Shaun R.; Hoffmann, Peter

    2011-01-01

    MALDI imaging mass spectrometry (MALDI-IMS) allows acquisition of mass data for metabolites, lipids, peptides and proteins directly from tissue sections. IMS is typically performed either as a multiple spot profiling experiment to generate tissue specific mass profiles, or a high resolution imaging experiment where relative spatial abundance for potentially hundreds of analytes across virtually any tissue section can be measured. Crucially, imaging can be achieved without prior knowledge of tissue composition and without the use of antibodies. In effect MALDI-IMS allows generation of molecular data which complement and expand upon the information provided by histology including immuno-histochemistry, making its application valuable to both cancer biomarker research and diagnostics. The current state of MALDI-IMS, key biological applications to ovarian cancer research and practical considerations for analysis of peptides and proteins on ovarian tissue are presented in this review. PMID:21340013

  2. MALDI Imaging Mass Spectrometry (MALDI-IMS)-application of spatial proteomics for ovarian cancer classification and diagnosis.

    PubMed

    Gustafsson, Johan O R; Oehler, Martin K; Ruszkiewicz, Andrew; McColl, Shaun R; Hoffmann, Peter

    2011-01-21

    MALDI imaging mass spectrometry (MALDI-IMS) allows acquisition of mass data for metabolites, lipids, peptides and proteins directly from tissue sections. IMS is typically performed either as a multiple spot profiling experiment to generate tissue specific mass profiles, or a high resolution imaging experiment where relative spatial abundance for potentially hundreds of analytes across virtually any tissue section can be measured. Crucially, imaging can be achieved without prior knowledge of tissue composition and without the use of antibodies. In effect MALDI-IMS allows generation of molecular data which complement and expand upon the information provided by histology including immuno-histochemistry, making its application valuable to both cancer biomarker research and diagnostics. The current state of MALDI-IMS, key biological applications to ovarian cancer research and practical considerations for analysis of peptides and proteins on ovarian tissue are presented in this review.

  3. Preclinical Imaging for the Study of Mouse Models of Thyroid Cancer

    PubMed Central

    Greco, Adelaide; Orlandella, Francesca Maria; Iervolino, Paola Lucia Chiara; Klain, Michele; Salvatore, Giuliana

    2017-01-01

    Thyroid cancer, which represents the most common tumor among endocrine malignancies, comprises a wide range of neoplasms with different clinical aggressiveness. One of the most important challenges in research is to identify mouse models that most closely resemble human pathology; other goals include finding a way to detect disease markers that are common to humans and mice and identifying the most appropriate and least invasive therapeutic strategies for specific tumor types. Preclinical thyroid imaging includes a wide range of techniques that allow for morphological and functional characterization of thyroid disease as well as targeting; in most cases, this imaging allows quantitative analysis of the molecular pattern of the thyroid cancer. The aim of this review paper is to provide an overview of all of the imaging techniques used to date for both diagnostic and theranostic purposes in mouse models of thyroid cancer. PMID:29258188

  4. Deferred slanted-edge analysis: a unified approach to spatial frequency response measurement on distorted images and color filter array subsets.

    PubMed

    van den Bergh, F

    2018-03-01

    The slanted-edge method of spatial frequency response (SFR) measurement is usually applied to grayscale images under the assumption that any distortion of the expected straight edge is negligible. By decoupling the edge orientation and position estimation step from the edge spread function construction step, it is shown in this paper that the slanted-edge method can be extended to images suffering from significant geometric distortion, such as that produced by equiangular fisheye lenses. This same decoupling also allows the slanted-edge method to be applied directly to Bayer-mosaicked images so that the SFR of the color filter array subsets can be measured directly without the unwanted influence of demosaicking artifacts. Numerical simulation results are presented to demonstrate the efficacy of the proposed deferred slanted-edge method in relation to existing methods.
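
    The edge-to-SFR chain at the heart of the slanted-edge method can be illustrated in one dimension (a minimal sketch with synthetic data; it shows the classic ESF-to-LSF-to-SFR pipeline, not the deferred, decoupled variant proposed in the paper):

```python
import numpy as np

def sfr_from_edge(esf):
    """Estimate the spatial frequency response from a 1-D edge spread function.

    The line spread function (LSF) is the derivative of the ESF; the SFR is
    the magnitude of its Fourier transform, normalized to 1 at DC.
    """
    lsf = np.diff(esf)                   # ESF -> LSF
    lsf = lsf * np.hanning(lsf.size)     # window to reduce truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                  # normalize so SFR(0) = 1

# Synthetic Gaussian-blurred step edge (illustrative data, not from the paper)
x = np.arange(-64, 65)
lsf_true = np.exp(-0.5 * (x / 6.0) ** 2)   # Gaussian blur, sigma = 6 px
esf = np.cumsum(lsf_true)                  # integrating the LSF gives the ESF
sfr = sfr_from_edge(esf)
```

    For a Gaussian-blurred edge the computed SFR falls off smoothly from 1 at DC, as expected; real measurements add the edge-angle estimation and supersampled binning steps omitted here.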

  5. In vivo neuronal calcium imaging in C. elegans.

    PubMed

    Chung, Samuel H; Sun, Lin; Gabel, Christopher V

    2013-04-10

    The nematode worm C. elegans is an ideal model organism for relatively simple, low cost neuronal imaging in vivo. Its small transparent body and simple, well-characterized nervous system allows identification and fluorescence imaging of any neuron within the intact animal. Simple immobilization techniques with minimal impact on the animal's physiology allow extended time-lapse imaging. The development of genetically-encoded calcium sensitive fluorophores such as cameleon and GCaMP allow in vivo imaging of neuronal calcium relating both cell physiology and neuronal activity. Numerous transgenic strains expressing these fluorophores in specific neurons are readily available or can be constructed using well-established techniques. Here, we describe detailed procedures for measuring calcium dynamics within a single neuron in vivo using both GCaMP and cameleon. We discuss advantages and disadvantages of both as well as various methods of sample preparation (animal immobilization) and image analysis. Finally, we present results from two experiments: 1) Using GCaMP to measure the sensory response of a specific neuron to an external electrical field and 2) Using cameleon to measure the physiological calcium response of a neuron to traumatic laser damage. Calcium imaging techniques such as these are used extensively in C. elegans and have been extended to measurements in freely moving animals, multiple neurons simultaneously and comparison across genetic backgrounds. C. elegans presents a robust and flexible system for in vivo neuronal imaging with advantages over other model systems in technical simplicity and cost.

  6. Proton radiography for inline treatment planning and positioning verification of small animals.

    PubMed

    Müller, Johannes; Neubert, Christian; von Neubeck, Cläre; Baumann, Michael; Krause, Mechthild; Enghardt, Wolfgang; Bütof, Rebecca; Dietrich, Antje; Lühr, Armin

    2017-11-01

    As proton therapy becomes increasingly well established, there is a need for high-quality clinically relevant in vivo data to gain better insight into the radiobiological effects of proton irradiation on both healthy and tumor tissue. This requires the development of easily applicable setups that allow for efficient, fractionated, image-guided proton irradiation of small animals, the most widely used pre-clinical model. Here, a method is proposed to perform dual-energy proton radiography for inline positioning verification and treatment planning. Dual-energy proton radiography exploits the differential enhancement of object features in two successively measured two-dimensional (2D) dose distributions at two different proton energies. The two raw images show structures that are dominated by energy absorption (absorption mode) or scattering (scattering mode) of protons in the object, respectively. Data post-processing allowed for the separation of both signal contributions in the respective images. The images were evaluated regarding recognizable object details and feasibility of rigid registration to acquired planar X-ray scans. Robust, automated rigid registration of proton radiography and planar X-ray images in scattering mode could be reliably achieved with the animal bedding unit used as registration landmark. Distinguishable external and internal features of the imaged mouse included the outer body contour, the skull with substructures, the lung, abdominal structures and the hind legs. Image analysis based on the combined information of both imaging modes allowed image enhancement and calculation of 2D water-equivalent path length (WEPL) maps of the object along the beam direction. Fractionated irradiation of exposed target volumes (e.g., subcutaneous tumor model or brain) can be realized with the suggested method being used for daily positioning and range determination. 
Robust registration of X-ray and proton radiography images allows for the irradiation of tumor entities that require conventional computed tomography (CT)-based planning, such as orthotopic lung or brain tumors, similar to conventional patient treatment.

  7. Low-level processing for real-time image analysis

    NASA Technical Reports Server (NTRS)

    Eskenazi, R.; Wilf, J. M.

    1979-01-01

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges and represents them as chain codes. Image statistics, useful for higher-level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real-time image analysis that uses this system is given.

  8. MUNSELL COLOR ANALYSIS OF LANDSAT COLOR-RATIO-COMPOSITE IMAGES OF LIMONITIC AREAS IN SOUTHWEST NEW MEXICO.

    USGS Publications Warehouse

    Kruse, Fred A.

    1984-01-01

    Green areas on Landsat 4/5 - 4/6 - 6/7 (red - blue - green) color-ratio-composite (CRC) images represent limonite on the ground. Color variation on such images was analyzed to determine the causes of the color differences within and between the green areas. Digital transformation of the CRC data into the modified cylindrical Munsell color coordinates - hue, value, and saturation - was used to correlate image color characteristics with properties of surficial materials. The amount of limonite visible to the sensor is the primary cause of color differences in green areas on the CRCs. Vegetation density is a secondary cause of color variation of green areas on Landsat CRC images. Digital color analysis of Landsat CRC images can be used to map unknown areas. Color variation of green pixels allows discrimination among limonitic bedrock, nonlimonitic bedrock, nonlimonitic alluvium, and limonitic alluvium.
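
    As an illustrative stand-in for the modified cylindrical Munsell transformation described above, a per-pixel classifier based on the standard hue-saturation-value coordinates might look like the following (the hue range and saturation threshold are hypothetical, not the study's values):

```python
import colorsys

def classify_pixel(r, g, b, green_hue=(1 / 6, 1 / 2), sat_min=0.2):
    """Classify an RGB ratio-composite pixel by its cylindrical color coordinates.

    Hue/saturation/value here are a simple stand-in for the modified Munsell
    coordinates used in the paper; thresholds are illustrative only.
    """
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if green_hue[0] <= h <= green_hue[1] and s >= sat_min:
        return "limonitic (green)", h, s, v
    return "non-limonitic", h, s, v

label, h, s, v = classify_pixel(0.1, 0.8, 0.2)   # a green-dominated pixel
```

    In a real workflow the hue and saturation ranges for each surficial-material class would be calibrated against field observations, as the paper does for limonitic and nonlimonitic units.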

  9. AutoCellSeg: robust automatic colony forming unit (CFU)/cell analysis using adaptive image segmentation and easy-to-use post-editing techniques.

    PubMed

    Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert

    2018-05-08

    In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems caused by drifting image acquisition conditions, background noise and high variation in colony features across experiments demand a user-friendly, adaptive and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm that takes segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
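
    A drastically simplified sketch of the threshold-and-label core of such CFU counting is shown below (AutoCellSeg's multi-thresholding, feedback-based watershed and plausibility checks are omitted; the threshold and minimum area are illustrative):

```python
import numpy as np
from scipy import ndimage

def count_colonies(img, threshold, min_area=5):
    """Count colony-like blobs via thresholding + connected-component labeling.

    A simplified stand-in for AutoCellSeg's pipeline; components smaller than
    min_area pixels are rejected as noise.
    """
    mask = img > threshold
    labels, n = ndimage.label(mask)
    areas = np.asarray(ndimage.sum(mask, labels, range(1, n + 1)))
    return int(np.sum(areas >= min_area))

# Synthetic plate: three bright, well-separated spots on a dark background
yy, xx = np.mgrid[0:100, 0:100]
img = np.zeros((100, 100))
for cy, cx in [(20, 20), (50, 70), (80, 30)]:
    img += np.exp(-(((yy - cy) ** 2 + (xx - cx) ** 2) / 30.0))
```

    With the three synthetic spots above, `count_colonies(img, 0.5)` returns 3; touching colonies are where the watershed step of the real tool becomes necessary.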

  10. Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis

    PubMed Central

    Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.

    2006-01-01

    In recent years multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than a widely-used T-field, has been implemented in the correlation analysis for more accurate results. An example with in-vivo data is presented demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709
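
    The voxel-wise general linear model at the core of BPM, with one modality entering as a regressor for another at each voxel, can be sketched as follows (a minimal illustration on synthetic data; the toolbox itself adds covariates, ANCOVA, and random-field inference through SPM):

```python
import numpy as np

def voxelwise_glm(y_imgs, x_imgs):
    """Voxel-wise GLM: regress modality Y on modality X (plus intercept).

    y_imgs, x_imgs: arrays of shape (n_subjects, n_voxels). Returns the
    per-voxel slope estimate for the X regressor.
    """
    n, v = y_imgs.shape
    betas = np.empty(v)
    for j in range(v):
        X = np.column_stack([np.ones(n), x_imgs[:, j]])  # intercept + regressor
        coef, *_ = np.linalg.lstsq(X, y_imgs[:, j], rcond=None)
        betas[j] = coef[1]
    return betas

# Synthetic coupled modalities: Y = 2*X + noise at every voxel
rng = np.random.default_rng(0)
x = rng.normal(size=(30, 100))                        # 30 subjects, 100 voxels
y = 2.0 * x + rng.normal(scale=0.1, size=x.shape)
slopes = voxelwise_glm(y, x)
```

    Each recovered slope is close to the true coupling of 2.0; statistical inference on such maps is what BPM delegates to SPM's machinery.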

  11. Syntactic methods of shape feature description and its application in analysis of medical images

    NASA Astrophysics Data System (ADS)

    Ogiela, Marek R.; Tadeusiewicz, Ryszard

    2000-02-01

    The paper presents specialist algorithms of morphologic analysis of shapes of selected organs of the abdominal cavity, proposed in order to diagnose disease symptoms occurring in the main pancreatic ducts and upper segments of the ureters. Analysis of the correct morphology of these structures has been conducted with the use of syntactic methods of pattern recognition. Its main objective is computer-aided support for early diagnosis of neoplastic lesions and pancreatitis based on images taken in the course of examination with the endoscopic retrograde cholangiopancreatography (ERCP) method, and diagnosis of morphological lesions in the ureter based on kidney radiogram analysis. In the analysis of ERCP images, the main objective is to recognize morphological lesions in the pancreatic ducts characteristic of carcinoma and chronic pancreatitis. In the case of kidney radiogram analysis the aim is to diagnose local irregularity of the ureter lumen. Diagnosis of the above-mentioned lesions has been conducted with the use of syntactic methods of pattern recognition, in particular languages of shape feature description and context-free attributed grammars. These methods allow the aforementioned lesions to be recognized and described very efficiently on images obtained as a result of initial image processing into diagrams of widths of the examined structures.

  12. High-volume image quality assessment systems: tuning performance with an interactive data visualization tool

    NASA Astrophysics Data System (ADS)

    Bresnahan, Patricia A.; Pukinskis, Madeleine; Wiggins, Michael

    1999-03-01

    Image quality assessment systems differ greatly with respect to the number and types of images they need to evaluate, and their overall architectures. Managers of these systems, however, all need to be able to tune and evaluate system performance, requirements often overlooked or under-designed during project planning. Performance tuning tools allow users to define acceptable quality standards for image features and attributes by adjusting parameter settings. Performance analysis tools allow users to evaluate and/or predict how well a system performs in a given parameter state. While image assessment algorithms are becoming quite sophisticated, duplicating or surpassing the human decision making process in their speed and reliability, they often require a greater investment in 'training' or fine tuning of parameters in order to achieve optimum performance. This process may involve the analysis of hundreds or thousands of images, generating a large database of files and statistics that can be difficult to sort through and interpret. Compounding the difficulty is the fact that personnel charged with tuning and maintaining the production system may not have the statistical or analytical background required for the task. Meanwhile, hardware innovations have greatly increased the volume of images that can be handled in a given time frame, magnifying the consequences of running a production site with an inadequately tuned system. In this paper, some general requirements for a performance evaluation and tuning data visualization system are discussed. A custom engineered solution to the tuning and evaluation problem is then presented, developed within the context of a high volume image quality assessment, data entry, OCR, and image archival system. A key factor influencing the design of the system was the context-dependent definition of image quality, as perceived by a human interpreter.
    This led to the development of a five-level, hierarchical approach to image quality evaluation. Lower-level pass-fail conditions and decision rules were coded into the system. Higher-level image quality states were defined by allowing the users to interactively adjust the system's sensitivity to various image attributes by manipulating graphical controls. Results were presented in easily interpreted bar graphs. These graphs were mouse-sensitive, allowing the user to more fully explore the subsets of data indicated by various color blocks. In order to simplify the performance evaluation and tuning process, users could choose to view the results of (1) the existing system parameter state, (2) the results of any arbitrary parameter values they chose, or (3) the results of a quasi-optimum parameter state, derived by applying a decision rule to a large set of possible parameter states. Giving managers easy-to-use tools for defining the more subjective aspects of quality resulted in a system that responded to contextual cues that are difficult to hard-code. It had the additional advantage of allowing the definition of quality to evolve over time, as users became more knowledgeable as to the strengths and limitations of an automated quality inspection system.

  13. Hyperspectral imaging and multivariate analysis in the dried blood spots investigations

    NASA Astrophysics Data System (ADS)

    Majda, Alicja; Wietecha-Posłuszny, Renata; Mendys, Agata; Wójtowicz, Anna; Łydżba-Kopczyńska, Barbara

    2018-04-01

    The aim of this study was to apply a new methodology combining hyperspectral imaging with dried blood spot (DBS) collection. Application of hyperspectral imaging is fast and non-destructive. The DBS method also offers the advantages of micro-invasive blood collection and a low volume of required sample. During the experimental step, the reflected light was recorded by two hyperspectral systems. The collection of 776 spectral bands in the VIS-NIR range (400-1000 nm) and 256 spectral bands in the SWIR range (970-2500 nm) was applied. The pixel size was 8 × 8 µm for the VIS-NIR camera and 30 × 30 µm for the SWIR camera. The obtained data, in the form of hyperspectral cubes, were treated with chemometric methods, i.e., minimum noise fraction and principal component analysis. It has been shown that the application of these methods to this type of data, by analyzing the scatter plots, allows rapid analysis of the homogeneity of the DBS and the selection of representative areas for further analysis. It also gives the possibility of tracking the dynamics of changes occurring in biological traces applied to a surface. For the 28 blood samples analyzed, the described method allowed the blood stains to be distinguished by time of application.

  14. High-resolution in-situ thermal imaging of microbial mats at El Tatio Geyser, Chile shows coupling between community color and temperature

    NASA Astrophysics Data System (ADS)

    Dunckel, Anne E.; Cardenas, M. Bayani; Sawyer, Audrey H.; Bennett, Philip C.

    2009-12-01

    Microbial mats have spatially heterogeneous structured communities that manifest visually through vibrant color zonation often associated with environmental gradients. We report the first use of high-resolution thermal infrared imaging to map temperature at four hot springs within the El Tatio Geyser Field, Chile. Thermal images with millimeter resolution show drastic variability and pronounced patterning in temperature, with changes on the order of 30°C within a square decimeter. Paired temperature and visual images show that zones with specific coloration occur within distinct temperature ranges. Unlike previous studies where maximum, minimum, and optimal temperatures for microorganisms are based on isothermally-controlled laboratory cultures, thermal imaging allows for mapping thousands of temperature values in a natural setting. This allows for efficiently constraining natural temperature bounds for visually distinct mat zones. This approach expands current understanding of thermophilic microbial communities and opens doors for detailed analysis of biophysical controls on microbial ecology.
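
    Pairing a co-registered zone-label map with the thermal image to constrain per-zone temperature bounds reduces to a per-label min/max (an illustrative sketch with toy data, not the study's imagery):

```python
import numpy as np

def zone_temperature_bounds(temp_img, zone_labels):
    """Min/max temperature for each visually distinct mat zone.

    temp_img: 2-D array of temperatures (deg C) from the thermal camera.
    zone_labels: co-registered 2-D integer array of color-zone IDs.
    Returns {zone_id: (t_min, t_max)}.
    """
    return {
        z: (float(temp_img[zone_labels == z].min()),
            float(temp_img[zone_labels == z].max()))
        for z in np.unique(zone_labels)
    }

# Toy example: two zones with distinct temperature ranges
temps = np.array([[30.0, 31.0], [55.0, 62.0]])
zones = np.array([[1, 1], [2, 2]])
bounds = zone_temperature_bounds(temps, zones)
# bounds[1] == (30.0, 31.0); bounds[2] == (55.0, 62.0)
```

    With millimeter-resolution imagery, each zone contributes thousands of temperature samples, which is what allows natural temperature bounds to be constrained efficiently.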

  15. Flexcam Image Capture Viewing and Spot Tracking

    NASA Technical Reports Server (NTRS)

    Rao, Shanti

    2008-01-01

    Flexcam software was designed to allow continuous monitoring of the mechanical deformation of the telescope structure at Palomar Observatory. Flexcam allows the user to watch the motion of a star with a low-cost astronomical camera, to measure the motion of the star on the image plane, and to feed this data back into the telescope's control system. This automatic interaction between the camera and a user interface facilitates integration and testing. Flexcam is a CCD image capture and analysis tool for the ST-402 camera from Santa Barbara Instruments Group (SBIG). This program will automatically take a dark exposure and then continuously display corrected images. The image size, bit depth, magnification, exposure time, resolution, and filter are always displayed on the title bar. Flexcam locates the brightest pixel and then computes the centroid position of the pixels falling in a box around that pixel. This tool continuously writes the centroid position to a network file that can be used by other instruments.
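
    The brightest-pixel-plus-box-centroid computation described above can be sketched as follows (a hedged illustration; the box size and the synthetic star image are assumptions, since Flexcam's actual parameters are not given in the abstract):

```python
import numpy as np

def spot_centroid(img, box=7):
    """Locate the brightest pixel, then refine with an intensity-weighted
    centroid over a (box x box) window around it."""
    r, c = np.unravel_index(np.argmax(img), img.shape)
    h = box // 2
    # assumes the spot lies at least `h` pixels from the image border
    win = img[r - h:r + h + 1, c - h:c + h + 1]
    yy, xx = np.mgrid[r - h:r + h + 1, c - h:c + h + 1]
    total = win.sum()
    return float((yy * win).sum() / total), float((xx * win).sum() / total)

# Synthetic star image: Gaussian spot centered at row 40, column 25
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-(((yy - 40.0) ** 2 + (xx - 25.0) ** 2) / 8.0))
cy, cx = spot_centroid(img)
```

    The centroid refinement gives sub-pixel positions, which is what makes such a measurement useful for closing a control loop on slow structural deformation.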

  16. Principal component and spatial correlation analysis of spectroscopic-imaging data in scanning probe microscopy.

    PubMed

    Jesse, Stephen; Kalinin, Sergei V

    2009-02-25

    An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
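
    An SVD-based PCA of a pixels-by-channels data matrix, of the kind described, can be sketched as follows (illustrative synthetic spectra, not scanning-probe data):

```python
import numpy as np

def pca(spectra, n_components=3):
    """PCA via SVD: rank response components by the variance they explain.

    spectra: (n_pixels, n_channels) array, one spectrum per image pixel.
    Returns (scores, components, explained_variance_ratio).
    """
    centered = spectra - spectra.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    var_ratio = s ** 2 / np.sum(s ** 2)
    return (u[:, :n_components] * s[:n_components],
            vt[:n_components],
            var_ratio[:n_components])

# Synthetic data: every spectrum is a scaled copy of one base response + noise
rng = np.random.default_rng(1)
base = np.sin(np.linspace(0, np.pi, 50))
weights = rng.uniform(0.5, 2.0, size=(200, 1))
spectra = weights * base + rng.normal(scale=0.01, size=(200, 50))
scores, comps, ratio = pca(spectra)
```

    For this one-component dataset the first principal component captures nearly all the variance; mapping `scores` back onto pixel coordinates gives the component maps whose spatial correlations the paper analyzes.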

  17. Contrast enhancement of bite mark images using the grayscale mixer in ACR in Photoshop®.

    PubMed

    Evans, Sam; Noorbhai, Suzanne; Lawson, Zoe; Stacey-Jones, Seren; Carabott, Romina

    2013-05-01

    Enhanced images may improve bite mark edge definition, assisting forensic analysis. Current contrast enhancement involves color extraction, viewing layered images by channel. A novel technique, producing a single enhanced image using the grayscale mix panel within Adobe Camera Raw®, has been developed and assessed here, allowing adjustments of multiple color channels simultaneously. Stage 1 measured RGB values in 72 versions of a color chart image; eight sliders in Photoshop® were adjusted at 25% intervals, all corresponding colors affected. Stage 2 used a bite mark image, and found only red, orange, and yellow sliders had discernable effects. Stage 3 assessed modality preference between color, grayscale, and enhanced images; on average, the 22 survey participants chose the enhanced image as better defined for nine out of 10 bite marks. The study has shown potential benefits for this new technique. However, further research is needed before use in the analysis of bite marks. © 2013 American Academy of Forensic Sciences.
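
    The idea of mixing color channels into a single enhanced grayscale image can be sketched as follows (a simplified stand-in: Adobe Camera Raw exposes eight hue-range sliders, whereas this sketch uses three RGB weights chosen arbitrarily for illustration):

```python
import numpy as np

def grayscale_mix(rgb, weights=(0.9, 0.3, -0.2)):
    """Single-image grayscale conversion with adjustable channel weights.

    rgb: (..., 3) float array. A negative weight darkens that channel's
    contribution, increasing contrast for features of the opposite hue.
    The result is rescaled to [0, 1]. Weights are illustrative, not ACR's.
    """
    gray = np.asarray(rgb, dtype=float) @ np.asarray(weights, dtype=float)
    gray -= gray.min()
    span = gray.max()
    return gray / span if span > 0 else gray

img = np.zeros((2, 2, 3))
img[0, 0] = (1.0, 0.2, 0.2)   # reddish, bruise-like pixel
img[1, 1] = (0.8, 0.8, 0.8)   # neutral, skin-like pixel
out = grayscale_mix(img)
```

    With red-emphasizing weights the reddish pixel becomes the brightest in the output, which mirrors how boosting the red, orange, and yellow sliders improved bite mark edge definition in the study.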

  18. If the skull fits: magnetic resonance imaging and microcomputed tomography for combined analysis of brain and skull phenotypes in the mouse

    PubMed Central

    Blank, Marissa C.; Roman, Brian B.; Henkelman, R. Mark; Millen, Kathleen J.

    2012-01-01

    The mammalian brain and skull develop concurrently in a coordinated manner, consistently producing a brain and skull that fit tightly together. It is common that abnormalities in one are associated with related abnormalities in the other. However, this is not always the case. A complete characterization of the relationship between brain and skull phenotypes is necessary to understand the mechanisms that cause them to be coordinated or divergent and to provide perspective on the potential diagnostic or prognostic significance of brain and skull phenotypes. We demonstrate the combined use of magnetic resonance imaging and microcomputed tomography for analysis of brain and skull phenotypes in the mouse. Co-registration of brain and skull images allows comparison of the relationship between phenotypes in the brain and those in the skull. We observe a close fit between the brain and skull of two genetic mouse models that both show abnormal brain and skull phenotypes. Application of these three-dimensional image analyses in a broader range of mouse mutants will provide a map of the relationships between brain and skull phenotypes generally and allow characterization of patterns of similarities and differences. PMID:22947655

  19. In vivo confocal microscopy of the cornea: New developments in image acquisition, reconstruction and analysis using the HRT-Rostock Corneal Module

    PubMed Central

    Petroll, W. Matthew; Robertson, Danielle M.

    2015-01-01

    The optical sectioning ability of confocal microscopy allows high magnification images to be obtained from different depths within a thick tissue specimen, and is thus ideally suited to the study of intact tissue in living subjects. In vivo confocal microscopy has been used in a variety of corneal research and clinical applications since its development over 25 years ago. In this article we review the latest developments in quantitative corneal imaging with the Heidelberg Retinal Tomograph with Rostock Corneal Module (HRT-RCM). We provide an overview of the unique strengths and weaknesses of the HRT-RCM. We discuss techniques for performing 3-D imaging with the HRT-RCM, including hardware and software modifications that allow full thickness confocal microscopy through focusing (CMTF) of the cornea, which can provide quantitative measurements of corneal sublayer thicknesses, stromal cell and extracellular matrix backscatter, and depth dependent changes in corneal keratocyte density. We also review current approaches for quantitative imaging of the subbasal nerve plexus, which require a combination of advanced image acquisition and analysis procedures, including wide field mapping and 3-D reconstruction of nerve structures. The development of new hardware, software, and acquisition techniques continues to expand the number of applications of the HRT-RCM for quantitative in vivo corneal imaging at the cellular level. Knowledge of these rapidly evolving strategies should benefit corneal clinicians and basic scientists alike. PMID:25998608

  20. Quantitative assessment of corneal vibrations during intraocular pressure measurement with the air-puff method in patients with keratoconus.

    PubMed

    Koprowski, Robert; Ambrósio, Renato

    2015-11-01

    One of the current methods for measuring intraocular pressure is the air-puff method. A tonometer which uses this method is the Corvis device. With the ultra-high-speed (UHS) Scheimpflug camera, it is also possible to observe corneal deformation during measurement. The use of modern image analysis and processing methods allows for analysis of higher harmonics of corneal deflection above 100 Hz. 493 eyes of healthy subjects and 279 eyes of patients with keratoconus were used in the measurements. For each eye, 140 corneal deformation images were recorded during intraocular pressure measurement. Each image was recorded every 230 µs and had a resolution of 200 × 576 pixels. A new, original algorithm for image analysis and processing has been proposed. It enables the eyeball reaction, as well as low-frequency and high-frequency corneal deformations, to be separated from the eye's response to an air puff. Furthermore, a method for classification of healthy subjects and patients with keratoconus based on decision trees has been proposed. The obtained results confirm the possibility of distinguishing between patients with keratoconus and healthy subjects. The features used in this classification are directly related to corneal vibrations. They are only available in the proposed software and provide a specificity of 98%, a sensitivity of 85%, and an accuracy of 92%. This confirms the usefulness of the proposed method in this type of classification that uses corneal vibrations during intraocular pressure measurement with the Corvis tonometer. With the newly proposed algorithm for image analysis and processing, which allows individual features to be separated from a corneal deformation image, it is possible to: automatically measure corneal vibrations at a few characteristic points of the cornea, obtain fully repeatable measurement of vibrations for the same registered sequence of images, and measure vibration parameters despite large inter-individual variability in patients.
Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Automated retinal image quality assessment on the UK Biobank dataset for epidemiological studies.

    PubMed

    Welikala, R A; Fraz, M M; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A

    2016-04-01

    Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. This includes an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle aged adults; where 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement, and at low cost. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Identifying fecal matter contamination in produce fields using multispectral reflectance imaging under ambient solar illumination

    NASA Astrophysics Data System (ADS)

    Everard, Colm D.; Kim, Moon S.; Lee, Hoonsoo; O'Donnell, Colm P.

    2016-05-01

    An imaging device to detect fecal contamination in fresh produce fields could allow the producer to avoid harvesting fecally contaminated produce. E. coli O157:H7 outbreaks have been associated with fecally contaminated leafy greens. In this study, in-field spectral profiles of bovine fecal matter, soil, and spinach leaves are compared. A common-aperture imager designed with two identical monochromatic cameras, a beam splitter, and optical filters was used to simultaneously capture two spectral images of leaves contaminated with both fecal matter and soil. The optical filters were 10 nm full-width-at-half-maximum bandpass filters, one at 690 nm and the second at 710 nm. These were mounted in front of the object lenses. New images were created using the ratio of these two spectral images on a pixel-by-pixel basis. Image analysis results showed that the fecal matter contamination could be distinguished from soil and leaf on the ratio images. The use of this technology has the potential to allow detection of fecal contamination in produce fields, which can be a source of foodborne illnesses. It has the added benefit of mitigating cross-contamination during harvesting and processing.
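
    The pixel-by-pixel band-ratio operation can be sketched in a few lines (the reflectance values and the decision threshold below are hypothetical illustrations, not the study's calibration):

```python
import numpy as np

def band_ratio(img_690, img_710, eps=1e-6):
    """Pixel-by-pixel ratio of two co-registered narrow-band images.

    Mirrors the two-camera common-aperture design: ratio = I(710)/I(690).
    eps guards against division by zero in dark pixels.
    """
    return img_710 / (img_690 + eps)

# Hypothetical reflectance values: healthy leaves reflect strongly near the
# 710 nm red edge but absorb at 690 nm; fecal matter and soil do not.
leaf = band_ratio(np.array([0.10]), np.array([0.45]))[0]
feces = band_ratio(np.array([0.12]), np.array([0.13]))[0]
suspect = feces < 2.0 < leaf   # flag low-ratio pixels on a leafy background
```

    The high contrast between the red-edge ratio of vegetation and that of fecal matter or soil is what makes the two-band design workable under ambient solar illumination.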

  3. DETECTING UNSPECIFIED STRUCTURE IN LOW-COUNT IMAGES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Nathan M.; Dyk, David A. van; Kashyap, Vinay L.

    Unexpected structure in images of astronomical sources often presents itself upon visual inspection of the image, but such apparent structure may either correspond to true features in the source or be due to noise in the data. This paper presents a method for testing whether inferred structure in an image with Poisson noise represents a significant departure from a baseline (null) model of the image. To infer image structure, we conduct a Bayesian analysis of a full model that uses a multiscale component to allow flexible departures from the posited null model. As a test statistic, we use a tail probability of the posterior distribution under the full model. This choice of test statistic allows us to estimate a computationally efficient upper bound on a p-value that enables us to draw strong conclusions even when there are limited computational resources that can be devoted to simulations under the null model. We demonstrate the statistical performance of our method on simulated images. Applying our method to an X-ray image of the quasar 0730+257, we find significant evidence against the null model of a single point source and uniform background, lending support to the claim of an X-ray jet.

  4. Secure annotation for medical images based on reversible watermarking in the Integer Fibonacci-Haar transform domain

    NASA Astrophysics Data System (ADS)

    Battisti, F.; Carli, M.; Neri, A.

    2011-03-01

    The increasing use of digital image-based applications is resulting in huge databases that are often difficult to use and prone to misuse and privacy concerns. These issues are especially crucial in medical applications. The most commonly adopted solution is the encryption of both the image and the patient data in separate files that are then linked. This practice is inefficient since, in order to retrieve patient data or analysis details, it is necessary to decrypt both files. In this contribution, an alternative solution for secure medical image annotation is presented. The proposed framework is based on the joint use of a key-dependent wavelet transform, the Integer Fibonacci-Haar transform, a secure cryptographic scheme, and a reversible watermarking scheme. The system allows: i) the insertion of the patient data into the encrypted image without requiring knowledge of the original image, ii) the encryption of annotated images without loss of the embedded information, and iii) the recovery of the original image after mark removal, owing to the complete reversibility of the process. Experimental results show the effectiveness of the proposed scheme.
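    The complete reversibility the record relies on can be illustrated with the plain integer Haar (S-) transform implemented by lifting. The Integer Fibonacci-Haar transform of the paper is a key-dependent variant, so this is only a minimal stand-in showing lossless round-tripping on integer pixel pairs:

```python
def int_haar_forward(pairs):
    """One level of the integer Haar (S-) transform via lifting.
    Maps integer pairs (a, b) to (approximation, detail) losslessly."""
    out = []
    for a, b in pairs:
        d = a - b          # detail (difference)
        s = b + d // 2     # approximation (floor-averaged)
        out.append((s, d))
    return out

def int_haar_inverse(coeffs):
    """Exact inverse of the lifting steps, restoring the original pairs."""
    out = []
    for s, d in coeffs:
        b = s - d // 2
        a = b + d
        out.append((a, b))
    return out

pixels = [(7, 2), (0, 255), (128, 128), (255, 0)]
coeffs = int_haar_forward(pixels)
restored = int_haar_inverse(coeffs)
```

Because every lifting step is an integer operation undone exactly by its inverse, a watermark embedded in the transform coefficients can be removed and the original image recovered bit-for-bit.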

  5. Cryo-Imaging and Software Platform for Analysis of Molecular MR Imaging of Micrometastases

    PubMed Central

    Qutaish, Mohammed Q.; Zhou, Zhuxian; Prabhu, David; Liu, Yiqiao; Busso, Mallory R.; Izadnegahdar, Donna; Gargesha, Madhusudhana; Lu, Hong; Lu, Zheng-Rong

    2018-01-01

    We created and evaluated a preclinical, multimodality imaging, and software platform to assess molecular imaging of small metastases. This included experimental methods (e.g., GFP-labeled tumor and high resolution multispectral cryo-imaging), nonrigid image registration, and interactive visualization of imaging agent targeting. We describe technological details of an approach applied earlier to GFP-labeled metastatic tumor targeting by molecular MR (CREKA-Gd) and red fluorescent (CREKA-Cy5) imaging agents. Optimized nonrigid cryo-MRI registration enabled nonambiguous association of MR signals to GFP tumors. Interactive visualization of out-of-RAM volumetric image data allowed one to zoom to a GFP-labeled micrometastasis, determine its anatomical location from color cryo-images, and establish the presence/absence of targeted CREKA-Gd and CREKA-Cy5. In a mouse with >160 GFP-labeled tumors, we determined that in the MR images every tumor in the lung >0.3 mm2 had visible signal and that some metastases as small as 0.1 mm2 were also visible. More tumors were visible in CREKA-Cy5 than in CREKA-Gd MRI. The tape transfer method and nonrigid registration allowed accurate (<11 μm error) registration of whole mouse histology to corresponding cryo-images. Histology showed inflammation and necrotic regions not labeled by imaging agents. This mouse-to-cells multiscale and multimodality platform should uniquely enable more informative and accurate studies of metastatic cancer imaging and therapy. PMID:29805438

  6. Course for undergraduate students: analysis of the retinal image quality of a human eye model

    NASA Astrophysics Data System (ADS)

    del Mar Pérez, Maria; Yebra, Ana; Fernández-Oliveras, Alicia; Ghinea, Razvan; Ionescu, Ana M.; Cardona, Juan C.

    2014-07-01

    In the teaching of Vision Physics or Physiological Optics, knowledge and analysis of the aberrations that the human eye presents are of great interest, since this information allows a proper evaluation of the quality of the retinal image. The objective of the present work is that students acquire the competencies required to evaluate the optical quality of the human visual system for emmetropic and ametropic eyes, both with and without optical compensation. For this purpose, an optical system corresponding to the Navarro-Escudero eye model, which allows calculating and evaluating the aberrations of this eye model under different ametropic conditions, was developed using the OSLO LT software. The optical quality of the visual system is assessed through determination of the third- and fifth-order aberration coefficients, the spot diagram, wavefront analysis, and calculation of the Point Spread Function and the Modulation Transfer Function for ametropic individuals, with myopia or hyperopia, both with and without optical compensation. This course is expected to be of great interest for students of Optics and Optometry, students in the final years of Physics, and those in medical sciences related to human vision.
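    The Modulation Transfer Function mentioned above follows from the Point Spread Function by Fourier transform: the MTF is the normalized magnitude of the optical transfer function. The sketch below uses a hypothetical Gaussian PSF rather than one computed from the Navarro-Escudero model in OSLO:

```python
import numpy as np

def mtf_from_psf(psf):
    """Modulation transfer function: normalized magnitude of the
    Fourier transform of the point spread function."""
    otf = np.fft.fftshift(np.fft.fft2(psf))   # optical transfer function
    mtf = np.abs(otf)
    return mtf / mtf.max()                    # normalize to 1 at DC

# Hypothetical Gaussian PSF standing in for a computed eye-model PSF.
x = np.linspace(-1, 1, 64)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 0.05**2))
mtf = mtf_from_psf(psf)
```

A wider PSF (a more aberrated eye) yields an MTF that falls off faster with spatial frequency, i.e., lower contrast transfer for fine detail.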

  7. Method for evaluation of human induced pluripotent stem cell quality using image analysis based on the biological morphology of cells.

    PubMed

    Wakui, Takashi; Matsumoto, Tsuyoshi; Matsubara, Kenta; Kawasaki, Tomoyuki; Yamaguchi, Hiroshi; Akutsu, Hidenori

    2017-10-01

    We propose an image analysis method for quality evaluation of human pluripotent stem cells based on biologically interpretable features. It is important to maintain the undifferentiated state of induced pluripotent stem cells (iPSCs) while culturing the cells during propagation. Cell culture experts visually select good quality cells exhibiting the morphological features characteristic of undifferentiated cells. Experts have empirically determined that these features comprise prominent and abundant nucleoli, less intercellular spacing, and fewer differentiating cellular nuclei. We quantified these features based on experts' visual inspection of phase contrast images of iPSCs and found that these features are effective for evaluating iPSC quality. We then developed an iPSC quality evaluation method using an image analysis technique. The method allowed accurate classification, equivalent to visual inspection by experts, of three iPSC cell lines.

  8. Automated three-dimensional quantification of myocardial perfusion and brain SPECT.

    PubMed

    Slomka, P J; Radau, P; Hurwitz, G A; Dey, D

    2001-01-01

    To allow automated and objective reading of nuclear medicine tomography, we have developed a set of tools for clinical analysis of myocardial perfusion tomography (PERFIT) and Brain SPECT/PET (BRASS). We exploit algorithms for image registration and use three-dimensional (3D) "normal models" for individual patient comparisons to composite datasets on a "voxel-by-voxel basis" in order to automatically determine the statistically significant abnormalities. A multistage, 3D iterative inter-subject registration of patient images to normal templates is applied, including automated masking of the external activity before final fit. In separate projects, the software has been applied to the analysis of myocardial perfusion SPECT, as well as brain SPECT and PET data. Automatic reading was consistent with visual analysis; it can be applied to the whole spectrum of clinical images, and aid physicians in the daily interpretation of tomographic nuclear medicine images.
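    The voxel-by-voxel comparison against a composite normal model can be sketched as a z-score map. The array names, sizes, and the z threshold here are illustrative assumptions, not PERFIT/BRASS internals:

```python
import numpy as np

def abnormality_map(patient, normal_mean, normal_std, z_thresh=3.0):
    """Voxel-by-voxel z-scores of a registered patient volume against a
    composite normal model; voxels beyond the threshold are flagged."""
    z = (patient - normal_mean) / np.maximum(normal_std, 1e-6)
    return z, np.abs(z) > z_thresh

rng = np.random.default_rng(0)
normals = rng.normal(100.0, 5.0, size=(20, 8, 8, 8))  # 20 normal volumes
mean, std = normals.mean(axis=0), normals.std(axis=0)
patient = mean.copy()
patient[4, 4, 4] -= 50.0   # simulate a focal perfusion defect
z, flags = abnormality_map(patient, mean, std)
```

This presumes the patient volume has already been registered to the template, which is what the multistage 3D inter-subject registration step provides.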

  9. Ultrafast Method for the Analysis of Fluorescence Lifetime Imaging Microscopy Data Based on the Laguerre Expansion Technique

    PubMed Central

    Jo, Javier A.; Fang, Qiyin; Marcu, Laura

    2007-01-01

    We report a new deconvolution method for fluorescence lifetime imaging microscopy (FLIM) based on the Laguerre expansion technique. The performance of this method was tested on synthetic and real FLIM images. The following interesting properties of this technique were demonstrated. 1) The fluorescence intensity decay can be estimated simultaneously for all pixels, without a priori assumption of the decay functional form. 2) The computation speed is extremely fast, performing at least two orders of magnitude faster than current algorithms. 3) The estimated maps of Laguerre expansion coefficients provide a new domain for representing FLIM information. 4) The number of images required for the analysis is relatively small, allowing reduction of the acquisition time. These findings indicate that the developed Laguerre expansion technique for FLIM analysis represents a robust and extremely fast deconvolution method that enables practical applications of FLIM in medicine, biology, biochemistry, and chemistry. PMID:19444338
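    The speed of this approach comes from projecting every pixel's decay onto one fixed orthonormal basis, so all pixels are fit with a single matrix product. The sketch below substitutes an orthonormalized set of exponentials for the discrete Laguerre basis; the time constants and decay data are arbitrary assumptions, not FLIM measurements:

```python
import numpy as np

def make_basis(n_samples, taus):
    """Orthonormal basis spanning a few exponential decays (an
    illustrative stand-in for the discrete Laguerre basis)."""
    t = np.arange(n_samples)
    cols = np.stack([np.exp(-t / tau) for tau in taus], axis=1)
    q, _ = np.linalg.qr(cols)   # orthonormalize the columns
    return q                    # shape (n_samples, n_basis)

def fit_all_pixels(decays, basis):
    """Expansion coefficients for every pixel at once: one matrix
    product suffices because the basis is orthonormal (B^T B = I)."""
    return decays @ basis       # (n_pixels, n_basis)

t = np.arange(128)
basis = make_basis(128, taus=(5.0, 15.0, 40.0))
decays = np.stack([np.exp(-t / 10.0), 2.0 * np.exp(-t / 30.0)])
coeffs = fit_all_pixels(decays, basis)
recon = coeffs @ basis.T        # reconstructed decays from coefficients
```

Because no per-pixel iterative optimization is needed, the cost is one dense matrix multiply for the whole image, which is where the orders-of-magnitude speedup over pixelwise nonlinear fitting comes from.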

  10. Large-Scale Overlays and Trends: Visually Mining, Panning and Zooming the Observable Universe.

    PubMed

    Luciani, Timothy Basil; Cherinka, Brian; Oliphant, Daniel; Myers, Sean; Wood-Vasey, W Michael; Labrinidis, Alexandros; Marai, G Elisabeta

    2014-07-01

    We introduce a web-based computing infrastructure to assist the visual integration, mining and interactive navigation of large-scale astronomy observations. Following an analysis of the application domain, we design a client-server architecture to fetch distributed image data and to partition local data into a spatial index structure that allows prefix-matching of spatial objects. In conjunction with hardware-accelerated pixel-based overlays and an online cross-registration pipeline, this approach allows the fetching, displaying, panning and zooming of gigabit panoramas of the sky in real time. To further facilitate the integration and mining of spatial and non-spatial data, we introduce interactive trend images: compact visual representations for identifying outlier objects and for studying trends within large collections of spatial objects of a given class. In a demonstration, images from three sky surveys (SDSS, FIRST and simulated LSST results) are cross-registered and integrated as overlays, allowing cross-spectrum analysis of astronomy observations. Trend images are interactively generated from catalog data and used to visually mine astronomy observations of similar type. The front-end of the infrastructure uses the web technologies WebGL and HTML5 to enable cross-platform, web-based functionality. Our approach attains interactive rendering framerates; its power and flexibility enable it to serve the needs of the astronomy community. Evaluation on three case studies, as well as feedback from domain experts, emphasizes the benefits of this visual approach to the observational astronomy field and its potential benefits to large-scale geospatial visualization in general.
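    Prefix-matching of spatial objects can be sketched with quadtree keys: a point's key at fine depth begins with the key of every coarser tile that contains it, so tile queries reduce to string-prefix tests. The object names and coordinates below are made up for illustration; the paper's actual index layout may differ:

```python
def quadkey(x, y, depth):
    """Quadtree key for a point in the unit square [0,1)x[0,1).
    Keys of objects inside a tile start with that tile's key."""
    key = ""
    for _ in range(depth):
        x, y = x * 2, y * 2
        qx, qy = int(x), int(y)   # which quadrant at this level
        key += str(qx + 2 * qy)   # one digit 0..3 per level
        x, y = x - qx, y - qy
    return key

def in_tile(objects, tile_key):
    """Prefix matching: all objects whose key begins with tile_key."""
    return [name for name, key in objects if key.startswith(tile_key)]

objs = [("galaxy_a", quadkey(0.10, 0.10, 8)),
        ("galaxy_b", quadkey(0.90, 0.90, 8)),
        ("galaxy_c", quadkey(0.12, 0.08, 8))]
hits = in_tile(objs, quadkey(0.1, 0.1, 3))   # coarse tile key as prefix
```

Sorting keys lexicographically then turns every tile query into a contiguous range scan, which suits both database indexes and client-side caching.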

  11. Laser scanning cytometry as a tool for biomarker validation

    NASA Astrophysics Data System (ADS)

    Mittag, Anja; Füldner, Christiane; Lehmann, Jörg; Tarnok, Attila

    2013-03-01

    Biomarkers are essential for diagnosis, prognosis, and therapy. As diverse as the range of diseases is the range of biomarkers and the materials used for analysis. Whereas body fluids can be obtained and analyzed relatively easily, the investigation of tissue is in most cases more complicated. The same applies to the screening and evaluation of new biomarkers and to assessing whether the binding of biomarkers found in animal models transfers to applications in humans. The latter in particular is difficult if the biomarker recognizes proteins or cells in tissue. A better way to find suitable cellular biomarkers for immunoscintigraphy or PET analyses may therefore be the in situ analysis of the cells in the respective tissue. In this study we present a method for biomarker validation using Laser Scanning Cytometry which allows the emulation of future in vivo analysis. The biomarker validation is shown exemplarily for rheumatoid arthritis (RA) on synovial membrane. Cryosections were scanned and analyzed by phantom contouring. Adequate statistical methods allowed the identification of suitable markers and marker combinations. The fluorescence analysis of the phantoms allowed discrimination between synovial membrane of RA patients and non-RA control sections using the median fluorescence intensity and the "affected area". As intensity and area are also relevant parameters of in vivo imaging (e.g., PET scans), the presented method allows emulation of a probable outcome of in vivo imaging, i.e., the binding of the target protein, and hence validation of the potential of the respective biomarker.

  12. Rapid discrimination of different Apiaceae species based on HPTLC fingerprints and targeted flavonoids determination using multivariate image analysis.

    PubMed

    Shawky, Eman; Abou El Kheir, Rasha M

    2018-02-11

    Apiaceae species are used as spices, in folk medicine, and in officinal medicinal preparations. They are an excellent source of phenolics exhibiting antioxidant activity, which are of great benefit to human health. Discrimination among Apiaceae medicinal herbs remains an intricate challenge due to their morphological similarity. In this study, a combined "untargeted" and "targeted" approach to investigating different Apiaceae plant species was proposed, merging high-performance thin layer chromatography (HPTLC)-image analysis with pattern recognition methods for fingerprinting and classification of 42 different Apiaceae samples collected from Egypt. Image processing software was applied for fingerprinting and data acquisition. HPTLC fingerprinting assisted by principal component analysis (PCA) and hierarchical cluster analysis (HCA)-heat maps provided a reliable untargeted approach for discrimination and classification of the different samples. The "targeted" approach was performed by developing and validating an HPTLC method allowing the quantification of eight flavonoids. The combination of quantitative data with PCA and HCA-heat maps allowed the different samples to be discriminated from each other. The use of chemometric tools for evaluation of fingerprints reduced expense and analysis time. The proposed method can be adopted for routine discrimination and evaluation of the phytochemical variability in different Apiaceae species extracts. Copyright © 2018 John Wiley & Sons, Ltd.

  13. A Quantitative Framework for Flower Phenotyping in Cultivated Carnation (Dianthus caryophyllus L.)

    PubMed Central

    Chacón, Borja; Ballester, Roberto; Birlanga, Virginia; Rolland-Lagan, Anne-Gaëlle; Pérez-Pérez, José Manuel

    2013-01-01

    The most important breeding goals in ornamental crops concern plant appearance and flower characteristics, for which selection is performed visually on the direct offspring of crossings. We developed an image analysis toolbox for the acquisition of flower and petal images from cultivated carnation (Dianthus caryophyllus L.) that was validated by a detailed analysis of flower and petal size and shape in 78 commercial cultivars of D. caryophyllus, including 55 standard, 22 spray and 1 pot carnation cultivars. Correlation analyses allowed us to reduce the number of parameters accounting for the observed variation in flower and petal morphology. Convexity was used as a descriptor for the level of serration in flowers and petals. We used a landmark-based approach that allowed us to identify eight main principal components (PCs) accounting for most of the variance observed in petal shape. The effect and the strength of these PCs in standard and spray carnation cultivars are consistent with shared underlying mechanisms involved in the morphological diversification of petals in both subpopulations. Our results also indicate that neighbor-joining trees built with morphological data might infer certain phylogenetic relationships among carnation cultivars. Based on estimated broad-sense heritability values for some flower and petal features, different genetic determinants likely modulate the responses of flower and petal morphology to environmental cues in this species. We believe our image analysis toolbox could allow capturing flower variation in other species of high ornamental value. PMID:24349209
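    A convexity descriptor for serration is commonly computed as the ratio of an outline's area to the area of its convex hull: near 1 for smooth convex petals, smaller for serrated ones. A sketch under the assumption of a closed 2D outline, with synthetic shapes standing in for real petal contours:

```python
import numpy as np

def shoelace_area(pts):
    """Area of a simple polygon given its ordered vertices (shoelace)."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def convex_hull(pts):
    """Monotone-chain convex hull of 2D points."""
    pts = sorted(map(tuple, pts))
    def half(points):
        out = []
        for p in points:
            while len(out) >= 2 and ((out[-1][0] - out[-2][0]) * (p[1] - out[-2][1])
                    - (out[-1][1] - out[-2][1]) * (p[0] - out[-2][0])) <= 0:
                out.pop()
            out.append(p)
        return out
    lower, upper = half(pts), half(pts[::-1])
    return np.array(lower[:-1] + upper[:-1])

def convexity(outline):
    """Outline area over convex-hull area: 1 = convex, <1 = serrated."""
    return shoelace_area(outline) / shoelace_area(convex_hull(outline))

square = np.array([(0, 0), (1, 0), (1, 1), (0, 1)], dtype=float)
angles = np.linspace(0, 2 * np.pi, 10, endpoint=False)
radii = np.where(np.arange(10) % 2 == 0, 1.0, 0.4)   # star: serrated edge
star = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
```

The same measure applies to whole-flower outlines, where deep indentations between petals lower the convexity in the same way serrations do.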

  14. Ganalyzer: A tool for automatic galaxy image analysis

    NASA Astrophysics Data System (ADS)

    Shamir, Lior

    2011-05-01

    Ganalyzer is a model-based tool that automatically analyzes and classifies galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ~10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large datasets of galaxy images collected by autonomous sky surveys such as SDSS, LSST or DES.
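    The radial intensity plot at the core of this pipeline can be sketched by binning pixels by their distance from the detected center. The synthetic exponential-disk image below stands in for a real galaxy image:

```python
import numpy as np

def radial_intensity(img, cx, cy, n_bins=32):
    """Median intensity as a function of distance from the center:
    the radial intensity plot whose peaks Ganalyzer-style analysis uses."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - cx, yy - cy)
    edges = np.linspace(0, r.max(), n_bins + 1)
    bins = np.digitize(r.ravel(), edges) - 1
    flat = img.ravel()
    return np.array([np.median(flat[bins == i]) if np.any(bins == i)
                     else 0.0 for i in range(n_bins)])

# Synthetic exponential-disk "galaxy" (not real survey data).
yy, xx = np.indices((64, 64))
img = np.exp(-np.hypot(xx - 32, yy - 32) / 8.0)
profile = radial_intensity(img, 32, 32)
```

In the full method, the angular positions of peaks in such profiles, tracked across radii, give the slopes from which spirality is measured.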

  15. Mass spectrometry imaging for visualizing organic analytes in food.

    PubMed

    Handberg, Eric; Chingin, Konstantin; Wang, Nannan; Dai, Ximo; Chen, Huanwen

    2015-01-01

    The demand for rapid chemical imaging of food products steadily increases. Mass spectrometry (MS) offers excellent molecular specificity of analysis and is therefore a very attractive method for chemical profiling. The use of MS for food imaging has increased significantly over the past decade, aided by the emergence of various ambient ionization techniques that allow direct and rapid analysis in the ambient environment. In this article, the current status of food imaging with mass spectrometry imaging (MSI) is reviewed. The described approaches include matrix-assisted laser desorption/ionization (MALDI), but emphasize desorption atmospheric pressure photoionization (DAPPI), electrospray-assisted laser desorption/ionization (ELDI), probe electrospray ionization (PESI), surface desorption atmospheric pressure chemical ionization (SDAPCI), and laser ablation flowing atmospheric pressure afterglow (LA-FAPA). The methods are compared with regard to spatial resolution; analysis speed and time; limit of detection; and technical aspects. The performance of each method is illustrated with the description of a related application. Specific requirements in food imaging are discussed. © 2014 Wiley Periodicals, Inc.

  16. Interpolation of longitudinal shape and image data via optimal mass transport

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Zhu, Liang-Jia; Bouix, Sylvain; Tannenbaum, Allen

    2014-03-01

    Longitudinal analysis of medical imaging data has become central to the study of many disorders. Unfortunately, various constraints (study design, patient availability, technological limitations) restrict the acquisition of data to only a few time points, limiting the study of continuous disease/treatment progression. Having the ability to produce a sensible time interpolation of the data can lead to improved analysis, such as intuitive visualizations of anatomical changes, or the creation of more samples to improve statistical analysis. In this work, we model interpolation of medical image data, in particular shape data, using the theory of optimal mass transport (OMT), which can construct a continuous transition from two time points while preserving "mass" (e.g., image intensity, shape volume) during the transition. The theory even allows a short extrapolation in time and may help predict short-term treatment impact or disease progression on anatomical structure. We apply the proposed method to the hippocampus-amygdala complex in schizophrenia, the heart in atrial fibrillation, and full head MR images in traumatic brain injury.
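    In one dimension, the OMT (displacement) interpolation has a closed form: the interpolant's quantile function is the linear blend of the endpoint quantile functions, F_t^{-1} = (1-t) F_0^{-1} + t F_1^{-1}, which moves mass rather than cross-fading it. A sketch with hypothetical histograms (real shape or image data would be higher-dimensional and need a full OMT solver):

```python
import numpy as np

def omt_interpolate_1d(h0, h1, t, n_samples=10000):
    """Displacement interpolation between two 1-D histograms at time t,
    via the quantile blend F_t^{-1} = (1-t) F_0^{-1} + t F_1^{-1}."""
    bins = np.arange(len(h0), dtype=float)
    q = np.linspace(0, 1, n_samples)
    inv0 = np.interp(q, np.cumsum(h0) / h0.sum(), bins)  # quantiles of h0
    inv1 = np.interp(q, np.cumsum(h1) / h1.sum(), bins)  # quantiles of h1
    samples = (1 - t) * inv0 + t * inv1
    ht, _ = np.histogram(samples, bins=len(h0), range=(0, len(h0)))
    return ht / ht.sum()

h0 = np.zeros(100); h0[10:15] = 1.0   # mass near x = 10
h1 = np.zeros(100); h1[80:85] = 1.0   # mass near x = 80
mid = omt_interpolate_1d(h0, h1, 0.5)
```

At t = 0.5 the mass sits halfway between the endpoints instead of splitting into two fading lumps, which is the "mass-preserving transition" property the text describes; t slightly outside [0, 1] gives the short extrapolation mentioned.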

  17. Automated segmentation of retinal pigment epithelium cells in fluorescence adaptive optics images.

    PubMed

    Rangel-Fonseca, Piero; Gómez-Vieyra, Armando; Malacara-Hernández, Daniel; Wilson, Mario C; Williams, David R; Rossi, Ethan A

    2013-12-01

    Adaptive optics (AO) imaging methods allow the histological characteristics of retinal cell mosaics, such as photoreceptors and retinal pigment epithelium (RPE) cells, to be studied in vivo. The high-resolution images obtained with ophthalmic AO imaging devices are rich with information that is difficult and/or tedious to quantify using manual methods. Thus, robust, automated analysis tools that can provide reproducible quantitative information about the cellular mosaics under examination are required. Automated algorithms have been developed to detect the position of individual photoreceptor cells; however, most of these methods are not well suited for characterizing the RPE mosaic. We have developed an algorithm for RPE cell segmentation and show its performance here on simulated and real fluorescence AO images of the RPE mosaic. Algorithm performance was compared to manual cell identification and yielded better than 91% correspondence. This method can be used to segment RPE cells for morphometric analysis of the RPE mosaic and speed the analysis of both healthy and diseased RPE mosaics.

  18. Single photon detection imaging of Cherenkov light emitted during radiation therapy

    NASA Astrophysics Data System (ADS)

    Adamson, Philip M.; Andreozzi, Jacqueline M.; LaRochelle, Ethan; Gladstone, David J.; Pogue, Brian W.

    2018-03-01

    Cherenkov imaging during radiation therapy has been developed as a tool for dosimetry, which could have applications in patient delivery verification or in regular quality audit. The cameras used are intensified imaging sensors, either ICCD or ICMOS cameras, which offer (1) nanosecond time gating and (2) amplification by 10^3-10^4. Together these allow (1) real-time capture at 10-30 frames per second, (2) sensitivity at the single-photon-event level, and (3) suppression of background light from the ambient room. However, the capability to achieve single photon imaging has not been fully analyzed to date, and as such was the focus of this study. How a single photon event appears in amplified camera imaging of Cherenkov light was quantitatively characterized with image processing. The signal seen at normal gain levels appears as a blur of about 90 counts in the CCD detector, after going through the chain of photocathode detection, amplification through a microchannel plate, excitation of a phosphor screen, and imaging onto the CCD. The analysis of single photon events requires careful interpretation of the fixed pattern noise, statistical quantum noise distributions, and the spatial spread of each pulse through the ICCD.

  19. Image analysis of the blood cells for cytomorphodiagnostics and control of the effectiveness treatment

    NASA Astrophysics Data System (ADS)

    Zhukotsky, Alexander V.; Kogan, Emmanuil M.; Kopylov, Victor F.; Marchenko, Oleg V.; Lomakin, O. A.

    1994-07-01

    A new method for morphodensitometric analysis of blood cells was applied to medical screening for ecological exposures and infectious pathologies. A complex algorithm of computational image processing was created for research on supramolecular restructuring of the interphase chromatin of lymphocytes. It includes specific staining methods and unifies different quantitative analysis methods. Our experience with the use of a television image analyzer in cytological and immunological studies made it possible to carry out research in morphometric analysis of chromatin structure in interphase lymphocyte nuclei in genetic and viral pathologies. To characterize lymphocytes as an image-forming system by a rigorous mathematical description, we used an approach involving concomitant evaluation of the topography of the chromatin network in intact lymphocytes and in those of affected individuals. It is also possible to digitize the data, which revealed significant distinctions between control and experiment. The method allows us to observe minute structural changes in chromatin, especially in euchromatin and heterochromatin, that were previously studied by geneticists only in chromosomes.

  20. Auroral Observations from the POLAR Ultraviolet Imager (UVI)

    NASA Technical Reports Server (NTRS)

    Germany, G. A.; Spann, J. F.; Parks, G. K.; Brittnacher, M. J.; Elsen, R.; Chen, L.; Lummerzheim, D.; Rees, M. H.

    1998-01-01

    Because of the importance of the auroral regions as a remote diagnostic of near-Earth plasma processes and magnetospheric structure, space-based instrumentation for imaging the auroral regions has been designed and operated for the last twenty-five years. The latest generation of imagers, including those flown on the POLAR satellite, extends this quest for multispectral resolution by providing three separate imagers for visible, ultraviolet, and X-ray imaging of the aurora. The ability to observe extended regions allows imaging missions to significantly extend the observations available from in situ or ground-based instrumentation. The complementary nature of imaging and other observations is illustrated below using results from the GGS Ultraviolet Imager (UVI). Details of the requisite energy and intensity analysis are also presented.

  1. In vivo imaging of the Drosophila Melanogaster heart using a novel optical coherence tomography microscope

    NASA Astrophysics Data System (ADS)

    Izatt, Susan D.; Choma, Michael A.; Israel, Steven; Wessells, Robert J.; Bodmer, Rolf; Izatt, Joseph A.

    2005-03-01

    Real-time in vivo optical coherence tomography (OCT) imaging of the adult fruit fly Drosophila melanogaster heart using a newly designed OCT microscope allows accurate assessment of cardiac anatomy and function. D. melanogaster has been used extensively in genetic research for over a century, but in vivo evaluation of the heart has been limited by available imaging technology. The ability to noninvasively assess phenotypic changes with micrometer-scale resolution in genetic models such as D. melanogaster is needed in the advancing fields of developmental biology and genetics. We have developed a dedicated small-animal OCT imaging system incorporating a state-of-the-art, real-time OCT scanner integrated into a standard stereo zoom microscope, which allows for simultaneous OCT and video imaging. System capabilities include A-scan, B-scan, and M-scan imaging as well as automated 3D volumetric acquisition and visualization. Transverse and sagittal B-mode scans of the four-chambered D. melanogaster heart have been obtained with the OCT microscope and are consistent with detailed anatomical studies from the literature. Further analysis by M-mode scanning is currently under way to assess cardiac function as a function of age and sex by determination of shortening fraction and ejection fraction. These studies create control cardiac data on the wild-type D. melanogaster, allowing subsequent evaluation of phenotypic cardiac changes in this model after regulated genetic mutation.
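    The functional indices named above are simple ratios of M-mode measurements. The formulas are standard, but the diameter and volume values below are hypothetical, not measured fly data:

```python
def shortening_fraction(edd, esd):
    """Fractional shortening from M-mode end-diastolic (edd) and
    end-systolic (esd) chamber diameters."""
    return (edd - esd) / edd

def ejection_fraction(edv, esv):
    """Ejection fraction from end-diastolic and end-systolic volumes."""
    return (edv - esv) / edv

# Hypothetical M-mode diameters (micrometers) for a fly heart tube.
sf = shortening_fraction(edd=80.0, esd=45.0)
```

Both indices are dimensionless, so they can be compared across ages and sexes without calibrating the absolute scale of the OCT images.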

  2. High throughput on-chip analysis of high-energy charged particle tracks using lensfree imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Wei; Shabbir, Faizan; Gong, Chao

    2015-04-13

    We demonstrate a high-throughput charged particle analysis platform, which is based on lensfree on-chip microscopy for rapid ion track analysis using allyl diglycol carbonate, i.e., CR-39 plastic polymer, as the sensing medium. By adopting a wide-area opto-electronic image sensor together with a source-shifting based pixel super-resolution technique, a large CR-39 sample volume (i.e., 4 cm × 4 cm × 0.1 cm) can be imaged in less than 1 min using a compact lensfree on-chip microscope, which detects partially coherent in-line holograms of the ion tracks recorded within the CR-39 detector. After the image capture, using highly parallelized reconstruction and ion track analysis algorithms running on graphics processing units, we reconstruct and analyze the entire volume of a CR-39 detector within ∼1.5 min. This significant reduction in the entire imaging and ion track analysis time not only increases our throughput but also allows us to perform time-resolved analysis of the etching process to monitor and optimize the growth of ion tracks during etching. This computational lensfree imaging platform can provide a much higher throughput and more cost-effective alternative to traditional lens-based scanning optical microscopes for ion track analysis using CR-39 and other passive high energy particle detectors.

  3. Learning Photogrammetry with Interactive Software Tool PhoX

    NASA Astrophysics Data System (ADS)

    Luhmann, T.

    2016-06-01

    Photogrammetry is a complex topic in high-level university teaching, especially in the fields of geodesy, geoinformatics and metrology where high-quality results are demanded. In addition, more and more black-box solutions for 3D image processing and point cloud generation are available that generate nice results easily, e.g. by structure-from-motion approaches. Within this context, the classical approach to teaching photogrammetry (e.g. focusing on aerial stereophotogrammetry) has to be reformed in order to educate students and professionals in new topics and provide them with more information behind the scenes. For around 20 years, photogrammetry courses at the Jade University of Applied Sciences in Oldenburg, Germany, have included digital photogrammetry software that provides individual exercises, deep analysis of calculation results and a wide range of visualization tools for almost all standard tasks in photogrammetry. In recent years the software package PhoX has been developed as part of a new didactic concept in photogrammetry and related subjects. It also serves as an analysis tool in recent research projects. PhoX consists of a project-oriented data structure for images, image data, measured points and features, and 3D objects. It provides almost all basic photogrammetric measurement tools, image processing, calculation methods, graphical analysis functions, simulations and much more. Students use the program to conduct predefined exercises where they have the opportunity to analyse results in a high level of detail. This includes the analysis of statistical quality parameters but also the meaning of transformation parameters, rotation matrices, calibration and orientation data. As one specific advantage, PhoX allows for the interactive modification of single parameters and direct viewing of the resulting effect in image or object space.

  4. Microfluidic Imaging Flow Cytometry by Asymmetric-detection Time-stretch Optical Microscopy (ATOM).

    PubMed

    Tang, Anson H L; Lai, Queenie T K; Chung, Bob M F; Lee, Kelvin C M; Mok, Aaron T Y; Yip, G K; Shum, Anderson H C; Wong, Kenneth K Y; Tsia, Kevin K

    2017-06-28

    Scaling the number of measurable parameters, which allows for multidimensional data analysis and thus higher-confidence statistical results, has been the main trend in the advanced development of flow cytometry. Notably, adding high-resolution imaging capabilities allows for the complex morphological analysis of cellular/sub-cellular structures. This is not possible with standard flow cytometers. However, it is valuable for advancing our knowledge of cellular functions and can benefit life science research, clinical diagnostics, and environmental monitoring. Incorporating imaging capabilities into flow cytometry compromises the assay throughput, primarily due to the limitations on speed and sensitivity in the camera technologies. To overcome this speed or throughput challenge facing imaging flow cytometry while preserving the image quality, asymmetric-detection time-stretch optical microscopy (ATOM) has been demonstrated to enable high-contrast, single-cell imaging with sub-cellular resolution, at an imaging throughput as high as 100,000 cells/s. Based on the imaging concept of conventional time-stretch imaging, which relies on all-optical image encoding and retrieval through the use of ultrafast broadband laser pulses, ATOM further advances imaging performance by enhancing the image contrast of unlabeled/unstained cells. This is achieved by accessing the phase-gradient information of the cells, which is spectrally encoded into single-shot broadband pulses. Hence, ATOM is particularly advantageous in high-throughput measurements of single-cell morphology and texture - information indicative of cell types, states, and even functions. Ultimately, this could become a powerful imaging flow cytometry platform for the biophysical phenotyping of cells, complementing the current state-of-the-art biochemical-marker-based cellular assay. 
This work describes a protocol to establish the key modules of an ATOM system (from optical frontend to data processing and visualization backend), as well as the workflow of imaging flow cytometry based on ATOM, using human cells and micro-algae as the examples.

  5. Development of Land Analysis System display modules

    NASA Technical Reports Server (NTRS)

    Gordon, Douglas; Hollaren, Douglas; Huewe, Laurie

    1986-01-01

    The Land Analysis System (LAS) display modules were developed to allow a user to interactively display, manipulate, and store image and image related data. To help accomplish this task, these modules utilize the Transportable Applications Executive and the Display Management System software to interact with the user and the display device. The basic characteristics of a display are outlined and some of the major modifications and additions made to the display management software are discussed. Finally, all available LAS display modules are listed along with a short description of each.

  6. Content-addressable read/write memories for image analysis

    NASA Technical Reports Server (NTRS)

    Snyder, W. E.; Savage, C. D.

    1982-01-01

    The commonly encountered image analysis problems of region labeling and clustering are found to be cases of the search-and-rename problem, which can be solved in parallel by a system architecture that is inherently suitable for VLSI implementation. This architecture is a novel form of content-addressable memory (CAM) which provides parallel search and update functions, allowing execution time to be reduced to constant time per operation. It has been proposed in related investigations by Hall (1981) that, with VLSI, CAM-based structures with enhanced instruction sets for general-purpose processing will be feasible.
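The search-and-rename task that the CAM architecture parallelizes can be sketched serially. The following is a minimal two-pass, union-find region labeling in Python; the grid, 4-connectivity, and label scheme are illustrative and are not the hardware algorithm itself.

```python
def label_regions(grid):
    """4-connected region labeling of a binary grid (list of lists)."""
    rows, cols = len(grid), len(grid[0])
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    labels = [[0] * cols for _ in range(rows)]
    next_label = 1
    # First pass: assign provisional labels, record equivalences.
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c]:
                continue
            up = labels[r - 1][c] if r > 0 else 0
            left = labels[r][c - 1] if c > 0 else 0
            if up == 0 and left == 0:
                parent[next_label] = next_label
                labels[r][c] = next_label
                next_label += 1
            else:
                labels[r][c] = min(x for x in (up, left) if x > 0)
                if up and left:
                    union(up, left)  # the "rename" step
    # Second pass: resolve equivalences to canonical labels.
    for r in range(rows):
        for c in range(cols):
            if labels[r][c]:
                labels[r][c] = find(labels[r][c])
    return labels

grid = [
    [1, 1, 0, 1],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
]
labels = label_regions(grid)   # two distinct regions
```

The serial version visits each pixel twice; the point of the CAM design is that the search and rename of equivalent labels happens in parallel across all memory words.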

  7. ClearedLeavesDB: an online database of cleared plant leaf images

    PubMed Central

    2014-01-01

    Background Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. Description The Cleared Leaf Image Database (ClearedLeavesDB) is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. Conclusions We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface.
The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org. PMID:24678985

  8. ClearedLeavesDB: an online database of cleared plant leaf images.

    PubMed

    Das, Abhiram; Bucksch, Alexander; Price, Charles A; Weitz, Joshua S

    2014-03-28

    Leaf vein networks are critical to both the structure and function of leaves. A growing body of recent work has linked leaf vein network structure to the physiology, ecology and evolution of land plants. In the process, multiple institutions and individual researchers have assembled collections of cleared leaf specimens in which vascular bundles (veins) are rendered visible. In an effort to facilitate analysis and digitally preserve these specimens, high-resolution images are usually created, either of entire leaves or of magnified leaf subsections. In a few cases, collections of digital images of cleared leaves are available for use online. However, these collections do not share a common platform nor is there a means to digitally archive cleared leaf images held by individual researchers (in addition to those held by institutions). Hence, there is a growing need for a digital archive that enables online viewing, sharing and disseminating of cleared leaf image collections held by both institutions and individual researchers. The Cleared Leaf Image Database (ClearedLeavesDB) is an online web-based resource for a community of researchers to contribute, access and share cleared leaf images. ClearedLeavesDB leverages resources of large-scale, curated collections while enabling the aggregation of small-scale collections within the same online platform. ClearedLeavesDB is built on Drupal, an open source content management platform. It allows plant biologists to store leaf images online with corresponding meta-data, share image collections with a user community and discuss images and collections via a common forum. We provide tools to upload processed images and results to the database via a web services client application that can be downloaded from the database. We developed ClearedLeavesDB, a database focusing on cleared leaf images that combines interactions between users and data via an intuitive web interface.
The web interface allows storage of large collections and integrates with leaf image analysis applications via an open application programming interface (API). The open API allows uploading of processed images and other trait data to the database, further enabling distribution and documentation of analyzed data within the community. The initial database is seeded with nearly 19,000 cleared leaf images representing over 40 GB of image data. Extensible storage and growth of the database is ensured by using the data storage resources of the iPlant Discovery Environment. ClearedLeavesDB can be accessed at http://clearedleavesdb.org.

  9. Column ratio mapping: a processing technique for atomic resolution high-angle annular dark-field (HAADF) images.

    PubMed

    Robb, Paul D; Craven, Alan J

    2008-12-01

    An image processing technique is presented for atomic resolution high-angle annular dark-field (HAADF) images that have been acquired using scanning transmission electron microscopy (STEM). This technique is termed column ratio mapping and involves the automated process of measuring atomic column intensity ratios in high-resolution HAADF images. This technique was developed to provide a fuller analysis of HAADF images than the usual method of drawing single intensity line profiles across a few areas of interest. For instance, column ratio mapping reveals the compositional distribution across the whole HAADF image and allows a statistical analysis and an estimation of errors. This has proven to be a very valuable technique as it can provide a more detailed assessment of the sharpness of interfacial structures from HAADF images. The technique of column ratio mapping is described in terms of a [110]-oriented zinc-blende structured AlAs/GaAs superlattice using the 1 angstrom-scale resolution capability of the aberration-corrected SuperSTEM 1 instrument.
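A minimal sketch of the column-ratio idea, assuming the atomic-column positions are already located (the paper automates that step): integrate the intensity in a small window around each column of a dumbbell pair and take the ratio. The synthetic image and positions below are illustrative.

```python
import numpy as np

def column_intensity(img, cy, cx, radius=2):
    """Sum image intensity in a (2*radius+1)^2 window around a column."""
    return img[cy - radius:cy + radius + 1,
               cx - radius:cx + radius + 1].sum()

def ratio_map(img, pairs, radius=2):
    """pairs: list of ((y1, x1), (y2, x2)) dumbbell column positions."""
    return [column_intensity(img, *a, radius) /
            column_intensity(img, *b, radius)
            for a, b in pairs]

# Synthetic frame: two point-like columns with a 2:1 intensity ratio.
img = np.zeros((20, 20))
img[5, 5] = 2.0
img[5, 10] = 1.0
ratios = ratio_map(img, [((5, 5), (5, 10))], radius=1)
```

Mapping such ratios over every dumbbell in the frame, rather than along a single line profile, is what yields the whole-image compositional distribution and its error statistics.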

  10. Analysis of Fringe Field Formed Inside LDA Measurement Volume Using Compact Two Hololens Imaging Systems

    NASA Astrophysics Data System (ADS)

    Ghosh, Abhijit; Nirala, A. K.; Yadav, H. L.

    2018-03-01

    We have designed and fabricated four LDA optical setups consisting of four different aberration-compensated compact two-hololens imaging systems. We have experimentally investigated and realized a hololens recording geometry that is the interferogram of a converging spherical wavefront with a mutually coherent planar wavefront. The proposed real-time monitoring and actual fringe-field analysis techniques allow complete characterization of the fringes formed at the measurement volume and permit evaluation of beam quality, alignment and fringe uniformity with greater precision. After experimentally analyzing the fringes formed at the measurement volume by all four imaging systems, it is found that the fringes obtained using the compact two-hololens imaging systems are improved both qualitatively and quantitatively compared to those obtained using the conventional imaging system. Results indicate qualitative improvement of the non-uniformity in fringe thickness and of micro intensity variations perpendicular to the fringes, and a quantitative improvement of 39.25% in the overall average normalized standard deviation of the fringe width formed by the compact two-hololens imaging systems compared to that of the conventional imaging system.

  11. An overview of state-of-the-art image restoration in electron microscopy.

    PubMed

    Roels, J; Aelterman, J; Luong, H Q; Lippens, S; Pižurica, A; Saeys, Y; Philips, W

    2018-06-08

    In Life Science research, electron microscopy (EM) is an essential tool for morphological analysis at the subcellular level as it allows for visualization at nanometer resolution. However, electron micrographs contain image degradations such as noise and blur caused by electromagnetic interference, electron counting errors, magnetic lens imperfections, electron diffraction, etc. These imperfections in raw image quality are inevitable and hamper subsequent image analysis and visualization. In an effort to mitigate these artefacts, many electron microscopy image restoration algorithms have been proposed in recent years. Most of these methods rely on generic assumptions about the image or degradations and are therefore outperformed by advanced methods that are based on more accurate models. Ideally, a method will accurately model the specific degradations that fit the physical acquisition settings. In this overview paper, we discuss different electron microscopy image degradation solutions and demonstrate that dedicated artefact regularisation results in higher quality restoration and is applicable through recently developed probabilistic methods. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.

  12. A Computational Observer For Performing Contrast-Detail Analysis Of Ultrasound Images

    NASA Astrophysics Data System (ADS)

    Lopez, H.; Loew, M. H.

    1988-06-01

    Contrast-Detail (C/D) analysis allows the quantitative determination of an imaging system's ability to display a range of varying-size targets as a function of contrast. Using this technique, a contrast-detail plot is obtained which can, in theory, be used to compare image quality from one imaging system to another. The C/D plot, however, is usually obtained by using data from human observer readings. We have shown earlier(7) that the performance of human observers in the task of threshold detection of simulated lesions embedded in random ultrasound noise is highly inaccurate and non-reproducible for untrained observers. We present an objective, computational method for the determination of the C/D curve for ultrasound images. This method utilizes digital images of the C/D phantom developed at CDRH, and lesion-detection algorithms that simulate the Bayesian approach using the likelihood function for an ideal observer. We present the results of this method, and discuss the relationship to the human observer and to the comparability of image quality between systems.
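For a known lesion template in additive white Gaussian noise, the ideal (Bayesian) observer's likelihood-ratio test reduces to a matched filter: the log-likelihood ratio is monotone in the template-image inner product. A hedged numerical sketch follows; the template, lesion contrast, and noise level are illustrative, not the CDRH phantom.

```python
import numpy as np

rng = np.random.default_rng(0)
template = np.zeros((9, 9))
template[3:6, 3:6] = 1.0            # small square "lesion" template

def decision_statistic(image):
    """Matched-filter statistic: inner product of template and image."""
    return float((template * image).sum())

sigma = 1.0                         # white-noise standard deviation
contrast = 0.5                      # lesion amplitude
absent = [decision_statistic(rng.normal(0, sigma, (9, 9)))
          for _ in range(500)]
present = [decision_statistic(contrast * template +
                              rng.normal(0, sigma, (9, 9)))
           for _ in range(500)]
# Detectability index d': separation of the two statistic distributions.
d_prime = (np.mean(present) - np.mean(absent)) / np.std(absent)
```

Sweeping lesion size and contrast, and thresholding d' at a criterion value, traces out a computational contrast-detail curve without a human reader.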

  13. Multi-color electron microscopy by element-guided identification of cells, organelles and molecules.

    PubMed

    Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I; de Boer, Pascal; Hagen, Kees C W; Hoogenboom, Jacob P; Giepmans, Ben N G

    2017-04-07

    Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased, biomedically relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale 'color-EM' as a promising tool to unravel molecular (de)regulation in biomedicine.

  14. Multi-color electron microscopy by element-guided identification of cells, organelles and molecules

    PubMed Central

    Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I.; de Boer, Pascal; Hagen, Kees (C.) W.; Hoogenboom, Jacob P.; Giepmans, Ben N. G.

    2017-01-01

    Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased, biomedically relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale ‘color-EM’ as a promising tool to unravel molecular (de)regulation in biomedicine. PMID:28387351

  15. Comprehensive Analysis of Immunological Synapse Phenotypes Using Supported Lipid Bilayers.

    PubMed

    Valvo, Salvatore; Mayya, Viveka; Seraia, Elena; Afrose, Jehan; Novak-Kotzer, Hila; Ebner, Daniel; Dustin, Michael L

    2017-01-01

    Supported lipid bilayers (SLB) formed on glass substrates have been a useful tool for the study of immune cell signaling since the early 1980s. The mobility of lipid-anchored proteins in the system, first described for antibodies binding to synthetic phospholipid head groups, allows for the measurement of two-dimensional binding reactions and signaling processes in a single imaging plane over time or for fixed samples. The fragility of SLB and the challenges of building and validating individual substrates limit most experimenters to ~10 samples per day, perhaps increasing this a few-fold when examining fixed samples. Successful experiments might then require further days to fully analyze. We present methods for automation of many steps in SLB formation, imaging in 96-well glass bottom plates, and analysis that enables a >100-fold increase in throughput for fixed samples and wide-field fluorescence. This increased throughput will allow better coverage of relevant parameters and more comprehensive analysis of aspects of the immunological synapse that are well reconstituted by SLB.

  16. Quantitative image feature variability amongst CT scanners with a controlled scan protocol

    NASA Astrophysics Data System (ADS)

    Ger, Rachel B.; Zhou, Shouhao; Chi, Pai-Chun Melinda; Goff, David L.; Zhang, Lifei; Lee, Hannah J.; Fuller, Clifton D.; Howell, Rebecca M.; Li, Heng; Stafford, R. Jason; Court, Laurence E.; Mackin, Dennis S.

    2018-02-01

    Radiomics studies often analyze patient computed tomography (CT) images acquired from different CT scanners. This may introduce differences in imaging parameters, e.g., different manufacturers, acquisition protocols, etc. However, quantifiable differences in radiomics features can occur based on acquisition parameters. A controlled protocol may allow for minimization of these effects, thus allowing for larger patient cohorts from many different CT scanners. In order to test radiomics feature variability across different CT scanners, a radiomics phantom was developed with six different cartridges encased in high density polystyrene. A harmonized protocol was developed to control for tube voltage, tube current, scan type, pitch, CTDIvol, convolution kernel, display field of view, and slice thickness across different manufacturers. The radiomics phantom was imaged on 18 scanners using the control protocol. A linear mixed effects model was created to assess the impact of inter-scanner variability with decomposition of feature variation between scanners and cartridge materials. The inter-scanner variability was compared to the residual variability (the unexplained variability) and to the inter-patient variability using two different patient cohorts. The patient cohorts consisted of 20 non-small cell lung cancer (NSCLC) and 30 head and neck squamous cell carcinoma (HNSCC) patients. The inter-scanner standard deviation was at least half of the residual standard deviation for 36 of 49 quantitative image features. The ratio of inter-scanner to patient coefficient of variation was above 0.2 for 22 and 28 of the 49 features for NSCLC and HNSCC patients, respectively. Inter-scanner variability was a significant factor compared to patient variation in this small study for many of the features. Further analysis with a larger cohort will allow a more thorough assessment, with additional variables in the model to truly isolate the inter-scanner difference.
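The scanner-versus-patient comparison reported above can be sketched as a ratio of coefficients of variation: the spread of a feature measured on the phantom across scanners against the spread of the same feature across patients. The values below are synthetic, not from the study.

```python
import numpy as np

# Synthetic feature values: one phantom measurement per scanner,
# and one value per patient (illustrative numbers only).
scanner_vals = np.array([102.0, 99.0, 101.0, 98.0, 100.0])
patient_vals = np.array([80.0, 120.0, 95.0, 140.0, 60.0, 105.0])

# Coefficient of variation = sample standard deviation / mean.
cv_scanner = scanner_vals.std(ddof=1) / scanner_vals.mean()
cv_patient = patient_vals.std(ddof=1) / patient_vals.mean()

# Features with ratio above ~0.2 were flagged as scanner-sensitive.
ratio = cv_scanner / cv_patient
```

A feature whose inter-scanner CV is a substantial fraction of the inter-patient CV cannot distinguish biology from hardware, which is why the study compares the two spreads feature by feature.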

  17. Cardiac function and perfusion dynamics measured on a beat-by-beat basis in the live mouse using ultra-fast 4D optoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Ford, Steven J.; Deán-Ben, Xosé L.; Razansky, Daniel

    2015-03-01

    The fast heart rate (~7 Hz) of the mouse makes cardiac imaging and functional analysis difficult when studying mouse models of cardiovascular disease, and cannot be done truly in real-time and 3D using established imaging modalities. Optoacoustic imaging, on the other hand, provides ultra-fast imaging at up to 50 volumetric frames per second, allowing for acquisition of several frames per mouse cardiac cycle. In this study, we combined a recently-developed 3D optoacoustic imaging array with novel analytical techniques to assess cardiac function and perfusion dynamics of the mouse heart at high, 4D spatiotemporal resolution. In brief, the heart of an anesthetized mouse was imaged over a series of multiple volumetric frames. In another experiment, an intravenous bolus of indocyanine green (ICG) was injected and its distribution was subsequently imaged in the heart. Unique temporal features of the cardiac cycle and ICG distribution profiles were used to segment the heart from background and to assess cardiac function. The 3D nature of the experimental data allowed for determination of cardiac volumes at ~7-8 frames per mouse cardiac cycle, providing important cardiac function parameters (e.g., stroke volume, ejection fraction) on a beat-by-beat basis, which had not previously been achieved by any other cardiac imaging modality. Furthermore, ICG distribution dynamics allowed for the determination of pulmonary transit time and thus additional quantitative measures of cardiovascular function. This work demonstrates the potential for optoacoustic cardiac imaging and is expected to contribute significantly to future preclinical studies of animal models of cardiovascular health and disease.
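Given per-frame ventricular volumes segmented from such 4D data, the beat-by-beat function parameters follow directly from the end-diastolic and end-systolic extremes of each cycle. A minimal sketch with synthetic volumes (the segmentation step itself is not reproduced):

```python
def beat_metrics(volumes_ul):
    """Stroke volume and ejection fraction from one cardiac cycle.

    volumes_ul: ventricular volume (microliters) sampled over one beat.
    """
    edv = max(volumes_ul)        # end-diastolic volume
    esv = min(volumes_ul)        # end-systolic volume
    sv = edv - esv               # stroke volume
    ef = sv / edv                # ejection fraction
    return sv, ef

# Synthetic beat sampled at ~8 volumetric frames, as in the study.
beat = [52.0, 48.0, 40.0, 30.0, 24.0, 28.0, 38.0, 50.0]
sv, ef = beat_metrics(beat)
```

Applying this per beat, rather than averaging over many cycles, is what the ~7-8 frames per cardiac cycle makes possible.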

  18. Computer simulation of schlieren images of rotationally symmetric plasma systems: a simple method.

    PubMed

    Noll, R; Haas, C R; Weikl, B; Herziger, G

    1986-03-01

    Schlieren techniques are commonly used methods for quantitative analysis of cylindrical or spherical index of refraction profiles. Many schlieren objects, however, are characterized by more complex geometries, so we have investigated the more general case of noncylindrical, rotationally symmetric distributions of index of refraction n(r,z). Assuming straight ray paths in the schlieren object we have calculated 2-D beam deviation profiles. It is shown that experimental schlieren images of the noncylindrical plasma generated by a plasma focus device can be simulated with these deviation profiles. The computer simulation allows a quantitative analysis of these schlieren images, which yields, for example, the plasma parameters, electron density, and electron density gradients.
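The straight-ray calculation can be sketched numerically: for an index field n(r, z) close to 1 and a probe ray travelling along x, the transverse deflection is the path integral eps_y(y, z) ≈ ∫ (∂n/∂y) dx. The Gaussian plasma profile and its z-dependent width below are illustrative, not the plasma-focus data.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 401)   # mm, coordinate along the ray
y = np.linspace(-3.0, 3.0, 121)   # mm, transverse offset of the ray
dx = x[1] - x[0]
X, Y = np.meshgrid(x, y, indexing="xy")   # shape (len(y), len(x))

def n_minus_one(z_mm):
    """Index perturbation of a rotationally symmetric, noncylindrical
    plasma column whose width varies with z (illustrative profile)."""
    w = 1.0 + 0.2 * z_mm**2               # mm, column width at height z
    return -1e-4 * np.exp(-(X**2 + Y**2) / w**2)

# Deviation profile in the z = 0 plane: integrate the transverse
# index gradient along the (assumed straight) ray path.
dn = n_minus_one(0.0)
dn_dy = np.gradient(dn, y, axis=0)
eps_y = (dn_dy * dx).sum(axis=1)          # deflection angle (rad) vs y
```

Evaluating eps_y at each z slice builds the 2-D beam-deviation profile that the simulated schlieren image is derived from; for a plasma (index below 1) the rays are deflected away from the axis, as the signs here show.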

  19. Fractal and Gray Level Cooccurrence Matrix Computational Analysis of Primary Osteosarcoma Magnetic Resonance Images Predicts the Chemotherapy Response.

    PubMed

    Djuričić, Goran J; Radulovic, Marko; Sopta, Jelena P; Nikitović, Marina; Milošević, Nebojša T

    2017-01-01

    The prediction of induction chemotherapy response at the time of diagnosis may improve outcomes in osteosarcoma by allowing for personalized tailoring of therapy. The aim of this study was thus to investigate the predictive potential of the so far unexploited computational analysis of osteosarcoma magnetic resonance (MR) images. Fractal and gray level cooccurrence matrix (GLCM) algorithms were employed in retrospective analysis of MR images of primary osteosarcoma localized in the distal femur prior to the OsteoSa induction chemotherapy. The predicted and actual chemotherapy response outcomes were then compared by means of receiver operating characteristic (ROC) analysis and accuracy calculation. Dbin, Λ, and SCN were the standard fractal and GLCM features that were significantly associated with the chemotherapy outcome, but only in one of the analyzed planes. Our newly developed normalized fractal dimension, called the space-filling ratio (SFR), showed independent and much better predictive value, with significant prediction achieved in two of the three imaging planes, an accuracy of 82%, and an area under the ROC curve of 0.20 (95% confidence interval 0-0.41). In conclusion, SFR as the newly designed fractal coefficient provided superior predictive performance in comparison to standard image analysis features, presumably by compensating for the tumor size variation in MR images.
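A box-counting estimate underlies fractal descriptors of this kind. A minimal sketch on synthetic masks follows; the tumor contour is replaced by simple shapes, and the paper's SFR normalization itself is not reproduced here.

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary mask by box counting:
    the slope of log(box count) versus log(1/box size)."""
    n = mask.shape[0]
    counts = []
    for s in sizes:
        c = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                # count boxes containing at least one foreground pixel
                if mask[i:i + s, j:j + s].any():
                    c += 1
        counts.append(c)
    slope = np.polyfit(np.log(1.0 / np.array(sizes)),
                       np.log(counts), 1)[0]
    return slope

# A filled region should give a dimension near 2.
mask = np.zeros((64, 64), dtype=bool)
mask[8:56, 8:56] = True
d = box_count_dimension(mask)
```

Real tumor boundaries fall between the line and area cases, and the study's SFR then normalizes such a dimension against tumor size.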

  20. Quantitative histogram analysis of images

    NASA Astrophysics Data System (ADS)

    Holub, Oliver; Ferreira, Sérgio T.

    2006-11-01

    A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: The analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western-blot. The possibilities and limitations of this kind of analysis are discussed.
Program summary
Title of program: HAWGC
Catalogue identifier: ADXG_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computers: Mobile Intel Pentium III, AMD Duron
Installations: No installation necessary; executable file together with necessary files for the LabVIEW Run-time engine
Operating systems or monitors under which the program has been tested: Windows ME/2000/XP
Programming language used: LabVIEW 7.0
Memory required to execute with typical data: ~16 MB for starting and ~160 MB used for loading of an image
No. of bits in a word: 32
No. of processors used: 1
Has the code been vectorized or parallelized?: No
No. of lines in distributed program, including test data, etc.: 138,946
No. of bytes in distributed program, including test data, etc.: 15,166,675
Distribution format: tar.gz
Nature of physical problem: Quantification of image data (e.g., for discrimination of molecular species in gels or fluorescent molecular probes in cell cultures) requires proprietary or complex software packages, which might not include the relevant statistical parameters or make the analysis of multiple images a tedious procedure for the general user.
Method of solution: Tool for conversion of an RGB bitmap image into a luminance-linear image and extraction of the luminance histogram, probability distribution, and statistical parameters (average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram, and median of the probability distribution), with possible selection of a region of interest (ROI) and lower and upper threshold levels.
Restrictions on the complexity of the problem: Does not incorporate application-specific functions (e.g., morphometric analysis)
Typical running time: Seconds (depending on image size and processor speed)
Unusual features of the program: None
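The statistics this program reports can be sketched with NumPy on a synthetic greyscale image; this is a hedged stand-in, not the LabVIEW code, and the thresholding window mimics the program's lower/upper threshold levels.

```python
import numpy as np

def histogram_stats(grey, lo=0, hi=255):
    """Mean, std, skewness, excess kurtosis, mode and median of the
    brightness of pixels inside the [lo, hi] threshold window."""
    px = grey[(grey >= lo) & (grey <= hi)].astype(float)
    mean = px.mean()
    std = px.std()
    skew = ((px - mean) ** 3).mean() / std ** 3
    kurt = ((px - mean) ** 4).mean() / std ** 4 - 3.0
    hist = np.bincount(px.astype(int), minlength=256)
    mode = int(hist.argmax())
    median = float(np.median(px))
    return mean, std, skew, kurt, mode, median

# Synthetic 8-bit image with brightness uniform on 90..110.
rng = np.random.default_rng(1)
img = rng.integers(90, 111, size=(64, 64))
mean, std, skew, kurt, mode, median = histogram_stats(img)
```

For a uniform brightness distribution the skewness should be near zero and the excess kurtosis near -1.2, which is a quick sanity check on any such implementation.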

  1. Visualization and correction of automated segmentation, tracking and lineaging from 5-D stem cell image sequences.

    PubMed

    Wait, Eric; Winter, Mark; Bjornsson, Chris; Kokovay, Erzsebet; Wang, Yue; Goderie, Susan; Temple, Sally; Cohen, Andrew R

    2014-10-03

    Neural stem cells are motile and proliferative cells that undergo mitosis, dividing to produce daughter cells and ultimately generating differentiated neurons and glia. Understanding the mechanisms controlling neural stem cell proliferation and differentiation will play a key role in the emerging fields of regenerative medicine and cancer therapeutics. Stem cell studies in vitro from 2-D image data are well established. Visualizing and analyzing large three dimensional images of intact tissue is a challenging task. It becomes more difficult as the dimensionality of the image data increases to include time and additional fluorescence channels. There is a pressing need for 5-D image analysis and visualization tools to study cellular dynamics in the intact niche and to quantify the role that environmental factors play in determining cell fate. We present an application that integrates visualization and quantitative analysis of 5-D (x,y,z,t,channel) and large montage confocal fluorescence microscopy images. The image sequences show stem cells together with blood vessels, enabling quantification of the dynamic behaviors of stem cells in relation to their vascular niche, with applications in developmental and cancer biology. Our application automatically segments, tracks, and lineages the image sequence data and then allows the user to view and edit the results of automated algorithms in a stereoscopic 3-D window while simultaneously viewing the stem cell lineage tree in a 2-D window. Using the GPU to store and render the image sequence data enables a hybrid computational approach. An inference-based approach utilizing user-provided edits to automatically correct related mistakes executes interactively on the system CPU while the GPU handles 3-D visualization tasks. By exploiting commodity computer gaming hardware, we have developed an application that can be run in the laboratory to facilitate rapid iteration through biological experiments. 
We combine unsupervised image analysis algorithms with an interactive visualization of the results. Our validation interface allows for each data set to be corrected to 100% accuracy, ensuring that downstream data analysis is accurate and verifiable. Our tool is the first to combine all of these aspects, leveraging the synergies obtained by utilizing validation information from stereo visualization to improve the low level image processing tasks.
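The tracking step alone can be sketched as greedy nearest-neighbour association of detected centroids between consecutive frames. This is a much-simplified, hypothetical stand-in for the paper's automated tracker; the gating distance and centroids are illustrative.

```python
def track(frames, max_dist=5.0):
    """frames: list per time point of (x, y) centroids.
    Returns tracks as lists of (frame_index, centroid)."""
    tracks = [[(0, c)] for c in frames[0]]
    for t in range(1, len(frames)):
        unclaimed = list(frames[t])
        for tr in tracks:
            last_t, (lx, ly) = tr[-1]
            if last_t != t - 1 or not unclaimed:
                continue   # track already ended, or nothing left to claim
            # nearest detection to the track's last position
            d2, best = min(((cx - lx) ** 2 + (cy - ly) ** 2, (cx, cy))
                           for cx, cy in unclaimed)
            if d2 <= max_dist ** 2:
                tr.append((t, best))
                unclaimed.remove(best)
        # detections claimed by no track start new tracks
        tracks.extend([[(t, c)] for c in unclaimed])
    return tracks

frames = [[(0.0, 0.0), (10.0, 10.0)],
          [(1.0, 0.5), (10.5, 9.5)],
          [(2.0, 1.0), (11.0, 9.0)]]
tracks = track(frames)   # two tracks, three time points each
```

Lineaging then amounts to linking a track that ends at a mitosis to the two tracks that begin there; the paper's contribution is letting user edits to any of these results propagate back and correct related automated decisions.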

  2. Current and evolving echocardiographic techniques for the quantitative evaluation of cardiac mechanics: ASE/EAE consensus statement on methodology and indications endorsed by the Japanese Society of Echocardiography.

    PubMed

    Mor-Avi, Victor; Lang, Roberto M; Badano, Luigi P; Belohlavek, Marek; Cardim, Nuno Miguel; Derumeaux, Genevieve; Galderisi, Maurizio; Marwick, Thomas; Nagueh, Sherif F; Sengupta, Partho P; Sicari, Rosa; Smiseth, Otto A; Smulevitz, Beverly; Takeuchi, Masaaki; Thomas, James D; Vannan, Mani; Voigt, Jens-Uwe; Zamorano, Jose Luis

    2011-03-01

    Echocardiographic imaging is ideally suited for the evaluation of cardiac mechanics because of its intrinsically dynamic nature. Because for decades, echocardiography has been the only imaging modality that allows dynamic imaging of the heart, it is only natural that new, increasingly automated techniques for sophisticated analysis of cardiac mechanics have been driven by researchers and manufacturers of ultrasound imaging equipment. Several such techniques have emerged over the past decades to address the issue of reader's experience and inter-measurement variability in interpretation. Some were widely embraced by echocardiographers around the world and became part of the clinical routine, whereas others remained limited to research and exploration of new clinical applications. Two such techniques have dominated the research arena of echocardiography: (1) Doppler-based tissue velocity measurements, frequently referred to as tissue Doppler or myocardial Doppler, and (2) speckle tracking on the basis of displacement measurements. Both types of measurements lend themselves to the derivation of multiple parameters of myocardial function. The goal of this document is to focus on the currently available techniques that allow quantitative assessment of myocardial function via image-based analysis of local myocardial dynamics, including Doppler tissue imaging and speckle-tracking echocardiography, as well as integrated backscatter analysis. This document describes the current and potential clinical applications of these techniques and their strengths and weaknesses, briefly surveys a selection of the relevant published literature while highlighting normal and abnormal findings in the context of different cardiovascular pathologies, and summarizes the unresolved issues, future research priorities, and recommended indications for clinical use.

  3. Current and evolving echocardiographic techniques for the quantitative evaluation of cardiac mechanics: ASE/EAE consensus statement on methodology and indications endorsed by the Japanese Society of Echocardiography.

    PubMed

    Mor-Avi, Victor; Lang, Roberto M; Badano, Luigi P; Belohlavek, Marek; Cardim, Nuno Miguel; Derumeaux, Geneviève; Galderisi, Maurizio; Marwick, Thomas; Nagueh, Sherif F; Sengupta, Partho P; Sicari, Rosa; Smiseth, Otto A; Smulevitz, Beverly; Takeuchi, Masaaki; Thomas, James D; Vannan, Mani; Voigt, Jens-Uwe; Zamorano, José Luis

    2011-03-01

    Echocardiographic imaging is ideally suited for the evaluation of cardiac mechanics because of its intrinsically dynamic nature. Because for decades, echocardiography has been the only imaging modality that allows dynamic imaging of the heart, it is only natural that new, increasingly automated techniques for sophisticated analysis of cardiac mechanics have been driven by researchers and manufacturers of ultrasound imaging equipment. Several such techniques have emerged over the past decades to address the issue of reader's experience and inter-measurement variability in interpretation. Some were widely embraced by echocardiographers around the world and became part of the clinical routine, whereas others remained limited to research and exploration of new clinical applications. Two such techniques have dominated the research arena of echocardiography: (1) Doppler-based tissue velocity measurements, frequently referred to as tissue Doppler or myocardial Doppler, and (2) speckle tracking on the basis of displacement measurements. Both types of measurements lend themselves to the derivation of multiple parameters of myocardial function. The goal of this document is to focus on the currently available techniques that allow quantitative assessment of myocardial function via image-based analysis of local myocardial dynamics, including Doppler tissue imaging and speckle-tracking echocardiography, as well as integrated backscatter analysis. This document describes the current and potential clinical applications of these techniques and their strengths and weaknesses, briefly surveys a selection of the relevant published literature while highlighting normal and abnormal findings in the context of different cardiovascular pathologies, and summarizes the unresolved issues, future research priorities, and recommended indications for clinical use.

  4. A theoretical-experimental methodology for assessing the sensitivity of biomedical spectral imaging platforms, assays, and analysis methods.

    PubMed

    Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C

    2018-01-01

    Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
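    The abstract does not name the analysis algorithms that were compared, but a common baseline for spectral image analysis is linear unmixing of each pixel into endmember abundances by ordinary least squares. A minimal pure-Python sketch, assuming two illustrative endmember spectra (`e1`, `e2`, and `unmix` are hypothetical names, not from the paper):

```python
def unmix(pixel, e1, e2):
    # Solve the 2x2 normal equations for least squares: minimise
    # ||a*e1 + b*e2 - pixel||^2 over the abundances a, b.
    s11 = sum(x * x for x in e1)
    s22 = sum(x * x for x in e2)
    s12 = sum(x * y for x, y in zip(e1, e2))
    p1 = sum(x * y for x, y in zip(e1, pixel))
    p2 = sum(x * y for x, y in zip(e2, pixel))
    det = s11 * s22 - s12 * s12
    a = (p1 * s22 - p2 * s12) / det
    b = (p2 * s11 - p1 * s12) / det
    return a, b
```

For a pixel synthesised as a known mixture, the recovered abundances match the mixing weights, which is the kind of fixed ground truth the methodology above exploits.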

  5. Topographic profiling and refractive-index analysis by use of differential interference contrast with bright-field intensity and atomic force imaging.

    PubMed

    Axelrod, Noel; Radko, Anna; Lewis, Aaron; Ben-Yosef, Nissim

    2004-04-10

    A methodology is described for phase restoration of an object function from differential interference contrast (DIC) images. The methodology involves collecting a set of DIC images in the same plane with different bias retardation between the two illuminating light components produced by a Wollaston prism. These images, together with one conventional bright-field image, allow for reduction of the phase deconvolution restoration problem from a highly complex nonlinear mathematical formulation to a set of linear equations that can be applied to resolve the phase for images with a relatively large number of pixels. Additionally, under certain conditions, an on-line atomic force imaging system that does not interfere with the standard DIC illumination modes resolves uncertainties in large topographical variations that generally lead to a basic problem in DIC imaging, i.e., phase unwrapping. Furthermore, the availability of confocal detection allows for a three-dimensional reconstruction with high accuracy of the refractive-index measurement of the object that is to be imaged. This has been applied to reconstruction of the refractive index of an arrayed waveguide in a region in which a defect in the sample is present. The results of this paper highlight the synergism of far-field microscopies integrated with scanned probe microscopies and restoration algorithms for phase reconstruction.

  6. Imaging mass spectrometry in drug development and toxicology.

    PubMed

    Karlsson, Oskar; Hanrieder, Jörg

    2017-06-01

    During the last decades, imaging mass spectrometry has gained significant relevance in biomedical research. Recent advances in imaging mass spectrometry have paved the way for in situ studies on drug development, metabolism and toxicology. In contrast to whole-body autoradiography that images the localization of radiolabeled compounds, imaging mass spectrometry provides the possibility to simultaneously determine the discrete tissue distribution of the parent compound and its metabolites. In addition, imaging mass spectrometry features high molecular specificity and allows comprehensive, multiplexed detection and localization of hundreds of proteins, peptides and lipids directly in tissues. Toxicologists traditionally screen for adverse findings by histopathological examination. However, studies of the molecular and cellular processes underpinning toxicological and pathologic findings induced by candidate drugs or toxins are important to reach a mechanistic understanding and an effective risk assessment strategy. One of the strengths of imaging mass spectrometry is the ability to directly overlay the molecular information from the mass spectrometric analysis with the tissue section and allow correlative comparisons of molecular and histologic information. Imaging mass spectrometry could therefore be a powerful tool for omics profiling of pharmacological/toxicological effects of drug candidates and toxicants in discrete tissue regions. The aim of the present review is to provide an overview of imaging mass spectrometry, with particular focus on MALDI imaging mass spectrometry, and its use in drug development and toxicology in general.

  7. Using optical coherence tomography to evaluate glaucoma implant healing response in rabbit eyes

    NASA Astrophysics Data System (ADS)

    Gossage, Kirk W.; Tkaczyk, Tomasz S.; Barton, Jennifer K.

    2002-06-01

    Glaucoma is a set of diseases that cause optic nerve damage and visual field loss. The most important risk factor for the development of glaucoma is elevated intraocular pressure. One approach used to alleviate the pressure increase is to surgically install glaucoma implants. Two standard Ahmed and ten experimental ePTFE implants were evaluated in this study in rabbit eyes. The implants were imaged with optical coherence tomography (OCT) at 0, 7, 15, 30, and 90 days after implantation. Histology was collected at days 7, 15, 30, and 90 and compared to the OCT images. Preliminary analysis of images indicates that OCT can visualize the development of fibrous encapsulation of the implant, tissue erosion, fibrin accumulation in the implant tube, and tube position in the anterior chamber. A new OCT handheld probe was developed to facilitate in vivo imaging in rabbit eye studies. The OCT probe consists of a mechanical scaffold designed to allow the imaging fiber to be held in a fixed position with respect to the rabbit eye, with minimal anesthesia. A piezo electric lateral scanning device allows the imaging fiber to be scanned across the tissue so that 2D images may be acquired.

  8. Diagnostic imaging advances in murine models of colitis.

    PubMed

    Brückner, Markus; Lenz, Philipp; Mücke, Marcus M; Gohar, Faekah; Willeke, Peter; Domagk, Dirk; Bettenworth, Dominik

    2016-01-21

    Inflammatory bowel diseases (IBD) such as Crohn's disease and ulcerative colitis are chronic-remittent inflammatory disorders of the gastrointestinal tract still evoking challenging clinical diagnostic and therapeutic situations. Murine models of experimental colitis are a vital component of research into human IBD concerning questions of its complex pathogenesis or the evaluation of potential new drugs. To the present day, monitoring the course of colitis with classical parameters such as histological tissue alterations or analysis of mucosal cytokine/chemokine expression often requires euthanasia of the animals. Thanks to recent advances, revolutionary non-invasive imaging techniques for in vivo diagnostics of murine colitis are increasingly available. These novel and emerging imaging techniques not only allow direct visualization of intestinal inflammation, but also enable molecular imaging and targeting of specific alterations of the inflamed murine mucosa. For the first time, in vivo imaging techniques allow for longitudinal examinations and evaluation of intra-individual therapeutic response. This review discusses the latest developments in the different fields of ultrasound, molecularly targeted contrast agent ultrasound, fluorescence endoscopy, confocal laser endomicroscopy as well as tomographic imaging with magnetic resonance imaging, computed tomography and fluorescence-mediated tomography, discussing their individual limitations and potential future diagnostic applications in the management of human patients with IBD.

  9. 3D digital image correlation methods for full-field vibration measurement

    NASA Astrophysics Data System (ADS)

    Helfrick, Mark N.; Niezrecki, Christopher; Avitabile, Peter; Schmidt, Timothy

    2011-04-01

    In the area of modal test/analysis/correlation, significant effort has been expended over the past twenty years in order to make reduced models and to expand test data for correlation and eventual updating of the finite element models. This has been restricted by vibration measurements which are traditionally limited to the location of relatively few applied sensors. Advances in computers and digital imaging technology have allowed 3D digital image correlation (DIC) methods to measure the shape and deformation of a vibrating structure. This technique allows for full-field measurement of structural response, thus providing a wealth of simultaneous test data. This paper presents some preliminary results for the test/analysis/correlation of data measured using the DIC approach along with traditional accelerometers and a scanning laser vibrometer for comparison to a finite element model. The results indicate that all three approaches correlated well with the finite element model and provide validation for the DIC approach for full-field vibration measurement. Some of the advantages and limitations of the technique are presented and discussed.
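    DIC recovers shape and deformation by matching small speckle-pattern subsets between camera images, typically scoring candidate matches with a zero-normalised cross-correlation. A minimal sketch of that score for two flattened, equal-size patches (pure Python; `zncc` is an illustrative name, not taken from the paper):

```python
def zncc(a, b):
    # Zero-normalised cross-correlation between two equal-size image
    # patches flattened to 1-D lists; returns a value in [-1, 1],
    # with 1 meaning the patches match up to brightness and contrast.
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)
```

A full DIC pipeline would sweep this score over candidate subset positions in the second image and keep the best match; this sketch shows only the scoring step.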

  10. Random, double- and single-strand DNA breaks can be differentiated in the method of Comet assay by the shape of the comet image.

    PubMed

    Georgieva, Milena; Zagorchev, Plamen; Miloshev, George

    2015-10-01

    Comet assay is an invaluable tool in DNA research. It is widely used to detect DNA damage as an indicator of exposure to genotoxic stress. A canonical set of parameters and specialized software programs exist for Comet assay data quantification and analysis. None of them so far has proven its potential to employ a computer-based algorithm for assessment of the shape of the comet as an indicator of the exact mechanism by which the studied genotoxins cut in the molecule of DNA. Here, we present 14 unique measurements of the comet image based on the comet morphology. Their mathematical derivation and statistical analysis allowed precise description of the shape of the comet image which in turn discriminated the cause of genotoxic stress. This algorithm led to the development of the "CometShape" software which allowed easy discrimination among different genotoxins depending on the type of DNA damage they induce. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
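    The paper's 14 shape measurements are not spelled out in the abstract; for flavor, here is a sketch of two classic profile-based comet metrics (percent DNA in tail and tail moment), assuming a hypothetical head/tail split index:

```python
def comet_profile_stats(profile, head_end):
    # profile: integrated intensity along the comet axis, head first.
    # head_end: index separating head from tail (an assumed input;
    # real software estimates this split from the image itself).
    total = sum(profile)
    tail = profile[head_end:]
    tail_dna = sum(tail) / total * 100.0           # % DNA in the tail
    # Tail moment: tail %DNA scaled by tail length, a classic metric.
    tail_moment = tail_dna * len(tail) / 100.0
    return tail_dna, tail_moment
```

Shape-based discrimination as described above would add morphological descriptors of the 2-D comet outline on top of such profile statistics.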

  11. CellProfiler Tracer: exploring and validating high-throughput, time-lapse microscopy image data.

    PubMed

    Bray, Mark-Anthony; Carpenter, Anne E

    2015-11-04

    Time-lapse analysis of cellular images is an important and growing need in biology. Algorithms for cell tracking are widely available; what researchers have been missing is a single open-source software package to visualize standard tracking output (from software like CellProfiler) in a way that allows convenient assessment of track quality, especially for researchers tuning tracking parameters for high-content time-lapse experiments. This makes quality assessment and algorithm adjustment a substantial challenge, particularly when dealing with hundreds of time-lapse movies collected in a high-throughput manner. We present CellProfiler Tracer, a free and open-source tool that complements the object tracking functionality of the CellProfiler biological image analysis package. Tracer allows multi-parametric morphological data to be visualized on object tracks, providing visualizations that have already been validated within the scientific community for time-lapse experiments, and combining them with simple graph-based measures for highlighting possible tracking artifacts. CellProfiler Tracer is a useful, free tool for inspection and quality control of object tracking data, available from http://www.cellprofiler.org/tracer/.

  12. Automatic treatment of flight test images using modern tools: SAAB and Aeritalia joint approach

    NASA Astrophysics Data System (ADS)

    Kaelldahl, A.; Duranti, P.

    The use of onboard cine cameras, as well as that of on-ground cinetheodolites, is very popular in flight tests. The high resolution of film and the high frame rate of cine cameras are still not exceeded by video technology. Video technology can successfully enter the flight test scenario now that solid-state optical sensors have dramatically reduced the dimensions and weight of TV cameras, making it possible to mount them in positions compatible with space or operational limitations (e.g., HUD cameras). A proper combination of cine and video cameras is the typical solution for a complex flight test program. The output of such devices is very helpful in many flight areas. Several successful applications of this technology are summarized. Analysis of the large amount of data produced (frames of images) requires a very long time. The analysis is normally carried out manually. In order to improve the situation, in the last few years, several flight test centers have devoted their attention to possible techniques which allow for quicker and more effective image treatment.

  13. Optical Coherence Tomography in the UK Biobank Study - Rapid Automated Analysis of Retinal Thickness for Large Population-Based Studies.

    PubMed

    Keane, Pearse A; Grossi, Carlota M; Foster, Paul J; Yang, Qi; Reisman, Charles A; Chan, Kinpui; Peto, Tunde; Thomas, Dhanes; Patel, Praveen J

    2016-01-01

    To describe an approach to the use of optical coherence tomography (OCT) imaging in large, population-based studies, including methods for OCT image acquisition, storage, and the remote, rapid, automated analysis of retinal thickness. In UK Biobank, OCT images were acquired between 2009 and 2010 using a commercially available "spectral domain" OCT device (3D OCT-1000, Topcon). Images were obtained using a raster scan protocol, 6 mm x 6 mm in area, and consisting of 128 B-scans. OCT image sets were stored on UK Biobank servers in a central repository, adjacent to high performance computers. Rapid, automated analysis of retinal thickness was performed using custom image segmentation software developed by the Topcon Advanced Biomedical Imaging Laboratory (TABIL). This software employs dual-scale gradient information to allow for automated segmentation of nine intraretinal boundaries in a rapid fashion. 67,321 participants (134,642 eyes) in UK Biobank underwent OCT imaging of both eyes as part of the ocular module. 134,611 images were successfully processed with 31 images failing segmentation analysis due to corrupted OCT files or withdrawal of subject consent for UKBB study participation. Average time taken to call up an image from the database and complete segmentation analysis was approximately 120 seconds per data set per login, and analysis of the entire dataset was completed in approximately 28 days. We report an approach to the rapid, automated measurement of retinal thickness from nearly 140,000 OCT image sets from the UK Biobank. In the near future, these measurements will be publicly available for utilization by researchers around the world, and thus for correlation with the wealth of other data collected in UK Biobank. The automated analysis approaches we describe may be of utility for future large population-based epidemiological studies, clinical trials, and screening programs that employ OCT imaging.
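    The quoted figures imply substantial parallelism: a quick back-of-envelope check, taking the reported ~120 seconds per data set and ~28-day wall-clock time at face value:

```python
# Serial processing time for 134,611 image sets at ~120 s each,
# and the degree of parallelism implied by a 28-day turnaround.
n_sets = 134_611
secs_per_set = 120
serial_days = n_sets * secs_per_set / 86_400   # ~187 days single-threaded
workers = serial_days / 28                     # ~7 concurrent analyses
print(round(serial_days), round(workers, 1))   # prints: 187 6.7
```

In other words, completing the dataset in 28 days implies roughly seven analyses running concurrently on the high-performance computers co-located with the repository.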

  14. A comparative study of 2 computer-assisted methods of quantifying brightfield microscopy images.

    PubMed

    Tse, George H; Marson, Lorna P

    2013-10-01

    Immunohistochemistry continues to be a powerful tool for the detection of antigens. There are several commercially available software packages that allow image analysis; however, these can be complex, require relatively high level of computer skills, and can be expensive. We compared 2 commonly available software packages, Adobe Photoshop CS6 and ImageJ, in their ability to quantify percentage positive area after picrosirius red (PSR) staining and 3,3'-diaminobenzidine (DAB) staining. On analysis of DAB-stained B cells in the mouse spleen, with a biotinylated primary rat anti-mouse-B220 antibody, there was no significant difference on converting images from brightfield microscopy to binary images to measure black and white pixels using ImageJ compared with measuring a range of brown pixels with Photoshop (Student t test, P=0.243, correlation r=0.985). When analyzing mouse kidney allografts stained with PSR, Photoshop achieved a greater interquartile range while maintaining a lower 10th percentile value compared with analysis with ImageJ. A lower 10th percentile reflects that Photoshop analysis is better at analyzing tissues with low levels of positive pixels; particularly relevant for control tissues or negative controls, whereas after ImageJ analysis the same images would result in spuriously high levels of positivity. Furthermore, comparing the 2 methods by Bland-Altman plot revealed that these 2 methodologies did not agree when measuring images with a higher percentage of positive staining and correlation was poor (r=0.804). We conclude that for computer-assisted analysis of images of DAB-stained tissue there is no difference between using Photoshop or ImageJ. However, for analysis of color images where differentiation into a binary pattern is not easy, such as with PSR, Photoshop is superior at identifying higher levels of positivity while maintaining differentiation of low levels of positive staining.
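    The ImageJ workflow described above (convert to a binary image, then count black versus white pixels) reduces to a simple threshold-and-count; a minimal sketch, with the threshold value as an assumption the analyst must choose per stain:

```python
def percent_positive(pixels, threshold):
    # pixels: flat list of 8-bit grayscale values (0-255). A pixel
    # counts as positive stain when it is darker than the threshold,
    # mimicking the binary pixel count of the ImageJ workflow.
    positive = sum(1 for p in pixels if p < threshold)
    return 100.0 * positive / len(pixels)
```

Photoshop-style colour selection would instead test each RGB pixel against a range of brown hues, which is why the two approaches can diverge on stains like PSR that do not binarise cleanly.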

  15. Optical Coherence Tomography in the UK Biobank Study – Rapid Automated Analysis of Retinal Thickness for Large Population-Based Studies

    PubMed Central

    Grossi, Carlota M.; Foster, Paul J.; Yang, Qi; Reisman, Charles A.; Chan, Kinpui; Peto, Tunde; Thomas, Dhanes; Patel, Praveen J.

    2016-01-01

    Purpose To describe an approach to the use of optical coherence tomography (OCT) imaging in large, population-based studies, including methods for OCT image acquisition, storage, and the remote, rapid, automated analysis of retinal thickness. Methods In UK Biobank, OCT images were acquired between 2009 and 2010 using a commercially available “spectral domain” OCT device (3D OCT-1000, Topcon). Images were obtained using a raster scan protocol, 6 mm x 6 mm in area, and consisting of 128 B-scans. OCT image sets were stored on UK Biobank servers in a central repository, adjacent to high performance computers. Rapid, automated analysis of retinal thickness was performed using custom image segmentation software developed by the Topcon Advanced Biomedical Imaging Laboratory (TABIL). This software employs dual-scale gradient information to allow for automated segmentation of nine intraretinal boundaries in a rapid fashion. Results 67,321 participants (134,642 eyes) in UK Biobank underwent OCT imaging of both eyes as part of the ocular module. 134,611 images were successfully processed with 31 images failing segmentation analysis due to corrupted OCT files or withdrawal of subject consent for UKBB study participation. Average time taken to call up an image from the database and complete segmentation analysis was approximately 120 seconds per data set per login, and analysis of the entire dataset was completed in approximately 28 days. Conclusions We report an approach to the rapid, automated measurement of retinal thickness from nearly 140,000 OCT image sets from the UK Biobank. In the near future, these measurements will be publicly available for utilization by researchers around the world, and thus for correlation with the wealth of other data collected in UK Biobank. The automated analysis approaches we describe may be of utility for future large population-based epidemiological studies, clinical trials, and screening programs that employ OCT imaging. PMID:27716837

  16. Automated Modular Magnetic Resonance Imaging Clinical Decision Support System (MIROR): An Application in Pediatric Cancer Diagnosis.

    PubMed

    Zarinabad, Niloufar; Meeus, Emma M; Manias, Karen; Foster, Katharine; Peet, Andrew

    2018-05-02

    Advances in magnetic resonance imaging and the introduction of clinical decision support systems have underlined the need for an analysis tool to extract and analyze relevant information from magnetic resonance imaging data to aid decision making, prevent errors, and enhance health care. The aim of this study was to design and develop a modular medical image region of interest analysis tool and repository (MIROR) for automatic processing, classification, evaluation, and representation of advanced magnetic resonance imaging data. The clinical decision support system was developed and evaluated for diffusion-weighted imaging of body tumors in children (cohort of 48 children, with 37 malignant and 11 benign tumors). Mevislab software and Python have been used for the development of MIROR. Regions of interests were drawn around benign and malignant body tumors on different diffusion parametric maps, and extracted information was used to discriminate the malignant tumors from benign tumors. Using MIROR, the various histogram parameters derived for each tumor case when compared with the information in the repository provided additional information for tumor characterization and facilitated the discrimination between benign and malignant tumors. Clinical decision support system cross-validation showed high sensitivity and specificity in discriminating between these tumor groups using histogram parameters. MIROR, as a diagnostic tool and repository, allowed the interpretation and analysis of magnetic resonance imaging images to be more accessible and comprehensive for clinicians. It aims to increase clinicians' skillset by introducing newer techniques and up-to-date findings to their repertoire and make information from previous cases available to aid decision making. The modular-based format of the tool allows integration of analyses that are not readily available clinically and streamlines future developments. 
©Niloufar Zarinabad, Emma M Meeus, Karen Manias, Katharine Foster, Andrew Peet. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 02.05.2018.
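    The exact histogram parameters MIROR derives per region of interest are not listed in the abstract; as an illustration, the first four standardised moments of the ROI voxel values can be computed as follows (`histogram_params` is a hypothetical name, not from the paper):

```python
def histogram_params(values):
    # Mean, variance, skewness, and kurtosis of ROI voxel values;
    # such moments are typical candidates for histogram-based tumor
    # characterization, though the paper's parameter set may differ.
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    sd = var ** 0.5
    skew = sum(((v - mean) / sd) ** 3 for v in values) / n
    kurt = sum(((v - mean) / sd) ** 4 for v in values) / n
    return {"mean": mean, "variance": var, "skewness": skew, "kurtosis": kurt}
```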

  17. Multi-object segmentation framework using deformable models for medical imaging analysis.

    PubMed

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

    Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures on medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows integrating several deformable models to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing a suitable combination to be selected in different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. 
Experimental results on both computed tomography and magnetic resonance imaging show that the proposed framework has a wide range of applications especially in the presence of adjacent structures of interest or under intra-structure inhomogeneities giving excellent quantitative results.

  18. Web-accessible cervigram automatic segmentation tool

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.

    2010-03-01

    Uterine cervix image analysis is of great importance to the study of uterine cervix cancer, which is among the leading cancers affecting women worldwide. In this paper, we describe our proof-of-concept, Web-accessible system for automated segmentation of significant tissue regions in uterine cervix images, which also demonstrates our research efforts toward promoting collaboration between engineers and physicians for medical image analysis projects. Our design and implementation unifies the merits of two commonly used languages, MATLAB and Java. It circumvents the heavy workload of recoding the sophisticated segmentation algorithms originally developed in MATLAB into Java while allowing remote users who are not experienced programmers and algorithms developers to apply those processing methods to their own cervicographic images and evaluate the algorithms. Several other practical issues of the systems are also discussed, such as the compression of images and the format of the segmentation results.

  19. Trans-dimensional MCMC methods for fully automatic motion analysis in tagged MRI.

    PubMed

    Smal, Ihor; Carranza-Herrezuelo, Noemí; Klein, Stefan; Niessen, Wiro; Meijering, Erik

    2011-01-01

    Tagged magnetic resonance imaging (tMRI) is a well-known noninvasive method allowing quantitative analysis of regional heart dynamics. Its clinical use has so far been limited, in part due to the lack of robustness and accuracy of existing tag tracking algorithms in dealing with low (and intrinsically time-varying) image quality. In this paper, we propose a novel probabilistic method for tag tracking, implemented by means of Bayesian particle filtering and a trans-dimensional Markov chain Monte Carlo (MCMC) approach, which efficiently combines information about the imaging process and tag appearance with prior knowledge about the heart dynamics obtained by means of non-rigid image registration. Experiments using synthetic image data (with ground truth) and real data (with expert manual annotation) from preclinical (small animal) and clinical (human) studies confirm that the proposed method yields higher consistency, accuracy, and intrinsic tag reliability assessment in comparison with other frequently used tag tracking methods.
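    A bootstrap particle filter alternates prediction from a motion prior, likelihood weighting, and resampling. A toy scalar sketch of one such cycle (real tag tracking would use 2-D tag positions and an appearance-based likelihood, per the paper, so everything here is a simplified assumption):

```python
import math
import random

def particle_filter_step(particles, observation, motion_sd, obs_sd, rng):
    # One predict-weight-resample cycle of a bootstrap particle filter
    # for a scalar state (a toy stand-in for 2-D tag positions).
    # Predict: diffuse each particle under a Gaussian motion prior.
    particles = [p + rng.gauss(0.0, motion_sd) for p in particles]
    # Weight: Gaussian likelihood of the observation given each particle.
    weights = [math.exp(-0.5 * ((observation - p) / obs_sd) ** 2)
               for p in particles]
    total = sum(weights)
    if total == 0.0:                 # guard against numerical underflow
        weights = [1.0] * len(particles)
    # Resample: draw a new particle set proportionally to the weights.
    return rng.choices(particles, weights=weights, k=len(particles))
```

Iterating this step with a fixed observation pulls the particle cloud toward the observed position, which is the mechanism the MCMC-augmented tracker builds on.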

  20. MOPEX: a software package for astronomical image processing and visualization

    NASA Astrophysics Data System (ADS)

    Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley

    2006-06-01

    We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, point source extraction, as well as a number of minor image processing tasks. The combination of the image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows for the control over the image processing and display to be closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing a prompt feedback of the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though originally designed for the Spitzer Space Telescope mission, many of its functionalities are of general usefulness and can be used for working with existing astronomical data and for new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. 
The software package has been developed by a small group of software developers and scientists at the Spitzer Science Center. It is available for distribution at the Spitzer Science Center web page.

  1. Autofluorescence endoscopy with "real-time" digital image processing in differential diagnostics of selected benign and malignant lesions in the oesophagus.

    PubMed

    Sieroń-Stołtny, Karolina; Kwiatek, Sebastian; Latos, Wojciech; Kawczyk-Krupka, Aleksandra; Cieślar, Grzegorz; Stanek, Agata; Ziaja, Damian; Bugaj, Andrzej M; Sieroń, Aleksander

    2012-03-01

    Oesophageal papilloma and Barrett's oesophagus are benign lesions known as risk factors for carcinoma of the oesophagus. Therefore, it is important to diagnose these early changes before neoplastic transformation. Autofluorescence endoscopy is a fast and non-invasive method of imaging tissues based on the natural fluorescence of endogenous fluorophores. The aim of this study was to prove the diagnostic utility of autofluorescence endoscopy with digital image processing in the histological diagnosis of endoscopic findings in the upper digestive tract, primarily in the imaging of oesophageal papilloma. During the retrospective analysis of about 200 endoscopic procedures in the upper digestive tract, 67 cases of benign, precancerous or cancerous changes were found. The white light endoscopy (WLE) image, single-channel (red or green) autofluorescence images, as well as green and red fluorescence intensities in the two-modal fluorescence image and the red-to-green (R/G) ratio (Numerical Colour Value, NCV) were correlated with histopathologic results. The NCV analysis in autofluorescence imaging (AFI) showed an increased R/G ratio in cancerous changes in 96% vs. 85% in WLE. Simultaneous analysis with digital image processing allowed us to diagnose suspicious tissue as cancerous in all cases. Barrett's metaplasia was confirmed in 90% vs. 79% (AFI vs. WLE), and in 98% in imaging with digital image processing. In benign lesions, WLE allowed us to exclude tissue as malignant in 85%. Using autofluorescence endoscopy, the R/G ratio was increased in only 10% of benign changes, causing the picture to be interpreted as suspicious, but when both methods were used together, 97.5% of cases were excluded as malignancies. Mean R/G ratios were estimated to be 2.5 in cancers, 1.25 in Barrett's metaplasia and 0.75 in benign changes, and the differences were statistically significant (p=0.04). 
Autofluorescence imaging is a sensitive method to diagnose precancerous and cancerous early stages of the diseases located in oesophagus. Especially in two-modal imaging including white light endoscopy, autofluorescence imaging with digital image processing seems to be a useful modality of early diagnostics. Also in observation of papilloma changes, it facilitates differentiation between neoplastic and benign lesions and more accurate estimation of the risk of potential malignancy. Copyright © 2011 Elsevier B.V. All rights reserved.
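    The R/G-ratio (NCV) analysis described above can be sketched numerically. This is a minimal illustration, assuming an RGB autofluorescence image as a NumPy array; the classification cut-offs are hypothetical, loosely motivated by the reported mean ratios (2.5, 1.25, 0.75), and are not the study's actual thresholds:

```python
import numpy as np

def red_to_green_ratio(rgb, eps=1e-6):
    """Mean red intensity over mean green intensity (NCV-style R/G ratio)."""
    r = rgb[..., 0].astype(float).mean()
    g = rgb[..., 1].astype(float).mean()
    return r / (g + eps)

def classify_by_ncv(ratio):
    # Illustrative cut-offs motivated by the reported mean ratios
    # (2.5 cancer, 1.25 Barrett's metaplasia, 0.75 benign).
    if ratio >= 1.9:
        return "suspicious for cancer"
    if ratio >= 1.0:
        return "Barrett's metaplasia pattern"
    return "benign pattern"

# Toy image: strong red, weak green channel -> ratio of 2.5
img = np.zeros((4, 4, 3))
img[..., 0] = 200.0
img[..., 1] = 80.0
print(classify_by_ncv(red_to_green_ratio(img)))  # -> suspicious for cancer
```

In practice the ratio would be computed per lesion region of interest rather than over the whole frame, but the arithmetic is the same.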

  2. Automated Dermoscopy Image Analysis of Pigmented Skin Lesions

    PubMed Central

    Baldi, Alfonso; Quartulli, Marco; Murace, Raffaele; Dragonetti, Emanuele; Manganaro, Mario; Guerra, Oscar; Bizzi, Stefano

    2010-01-01

    Dermoscopy (dermatoscopy, epiluminescence microscopy) is a non-invasive diagnostic technique for the in vivo observation of pigmented skin lesions (PSLs), allowing a better visualization of surface and subsurface structures (from the epidermis to the papillary dermis). This diagnostic tool permits the recognition of morphologic structures not visible by the naked eye, thus opening a new dimension in the analysis of the clinical morphologic features of PSLs. In order to reduce the learning curve of non-expert clinicians and to mitigate problems inherent in the reliability and reproducibility of the diagnostic criteria used in pattern analysis, several indicative methods based on diagnostic algorithms have been introduced in the last few years. Recently, numerous systems designed to provide computer-aided analysis of digital images obtained by dermoscopy have been reported in the literature. The goal of this article is to review these systems, focusing on the most recent approaches based on content-based image retrieval (CBIR) systems. PMID:24281070

  3. Using CT Data to Improve the Quantitative Analysis of 18F-FBB PET Neuroimages

    PubMed Central

    Segovia, Fermín; Sánchez-Vañó, Raquel; Górriz, Juan M.; Ramírez, Javier; Sopena-Novales, Pablo; Testart Dardel, Nathalie; Rodríguez-Fernández, Antonio; Gómez-Río, Manuel

    2018-01-01

    18F-FBB PET is a neuroimaging modality that is increasingly used to assess brain amyloid deposits in potential patients with Alzheimer's disease (AD). In this work, we analyze the usefulness of these data to distinguish between AD and non-AD patients. A dataset with 18F-FBB PET brain images from 94 subjects diagnosed with AD and other disorders was evaluated by means of multiple analyses based on t-test, ANOVA, Fisher Discriminant Analysis and Support Vector Machine (SVM) classification. In addition, we propose to calculate amyloid standardized uptake values (SUVs) using only gray-matter voxels, which can be estimated using Computed Tomography (CT) images. This approach allows assessing potential brain amyloid deposits along with the gray matter loss and takes advantage of the structural information provided by most of the scanners used for PET examination, which allow simultaneous PET and CT data acquisition. The results obtained in this work suggest that SUVs calculated according to the proposed method allow AD and non-AD subjects to be more accurately differentiated than using SUVs calculated with standard approaches. PMID:29930505
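    The gray-matter-restricted SUV computation described above can be sketched as follows. The volumes, body weight, injected dose and mask below are hypothetical, and the simplified SUV formula (activity concentration x body weight / injected dose) stands in for the study's actual pipeline:

```python
import numpy as np

def suv_map(activity_kbq_per_ml, weight_kg, dose_mbq):
    """Simplified standardized uptake value:
    activity concentration (kBq/ml) * body weight (kg) / injected dose (MBq)."""
    return activity_kbq_per_ml * weight_kg / dose_mbq

def gray_matter_suv(pet_activity, gm_mask, weight_kg, dose_mbq):
    """Mean SUV restricted to CT-derived gray-matter voxels."""
    suv = suv_map(pet_activity, weight_kg, dose_mbq)
    return suv[gm_mask].mean()

rng = np.random.default_rng(0)
pet = rng.uniform(2.0, 6.0, size=(8, 8, 8))   # kBq/ml, toy PET volume
mask = np.zeros_like(pet, dtype=bool)
mask[2:6, 2:6, 2:6] = True                     # hypothetical GM segmentation
print(round(gray_matter_suv(pet, mask, 70.0, 300.0), 3))
```

The point of the masking step is that voxels outside gray matter no longer dilute the mean, so atrophy and amyloid burden are assessed together.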

  4. A New Submersible Imaging-in-flow Instrument to Monitor Nano- and Microplankton: Imaging FlowCytobot

    NASA Astrophysics Data System (ADS)

    Olson, R. J.; Sosik, H. M.; Shalapyonok, A.

    2004-12-01

    Understanding of how coastal plankton communities are regulated has traditionally been limited by undersampling, but cabled observatories now provide opportunities to deploy submersible sensors that have high power and data transmission requirements. We have developed an in situ instrument to carry out high-resolution, long term monitoring of phytoplankton and microzooplankton in the size range 10 to 100 micrometers, to be deployed at cabled research facilities such as the Martha's Vineyard Coastal Observatory (MVCO). The new instrument is designed to complement FlowCytobot, a submersible flow cytometer currently deployed at MVCO that uses fluorescence and light scattering signals from a laser beam to characterize the smallest phytoplankton cells (less than 10 micrometers). Imaging FlowCytobot uses a combination of flow cytometric and video technology to capture images of organisms for identification and to measure chlorophyll fluorescence associated with each image. Images will be classified using neural net software, while the measurements of chlorophyll fluorescence will allow us to discriminate heterotrophic from phototrophic cells. The new instrument, like the original FlowCytobot, is autonomous but remotely programmable. It utilizes a computer-controlled syringe pump and distribution valve that allows periodic anti-fouling treatment and analysis of standard beads. Samples are analyzed continuously (0.25 to 2.5 ml per min) and data are sent over a fiber optic link to a remote computer for analysis. Preliminary results indicate that we can detect cells as small as 5 micrometers and discriminate several taxa of diatoms and dinoflagellates.

  5. [Image-guided radiotherapy and partial delegation to radiotherapy technicians: Clermont-Ferrand experience].

    PubMed

    Loos, G; Moreau, J; Miroir, J; Benhaïm, C; Biau, J; Caillé, C; Bellière, A; Lapeyre, M

    2013-10-01

    The various image-guided radiotherapy techniques raise the question of how to achieve the control of patient positioning before each irradiation session and the sharing of tasks between radiation oncologists and radiotherapy technicians. We have put in place procedures and operating methods to make a partial delegation of tasks to radiotherapy technicians and secure the process in three situations: control by orthogonal kV imaging (kV-kV) of bony landmarks, control by kV-kV imaging of intraprostatic fiducial gold markers and control by cone beam CT (CBCT) imaging for prostate cancer. Significant medical overtime is required to control these three IGRT techniques. Because of their competence in imaging, these daily controls can be delegated to radiotherapy technicians. However, to secure the process, initial training and regular evaluation are essential. The analysis of the comparison of the use of kV/kV on bone structures allowed us to achieve a partial delegation of control to radiotherapy technicians. Controlling the positioning of the prostate through the use and automatic registration of fiducial gold markers allows better tracking of the prostate and can be easily delegated to radiotherapy technicians. The analysis of the use of daily cone beam CT for patients treated with intensity modulated irradiation is underway, and a comparison of practices between radiotherapy technicians and radiation oncologists is ongoing to determine whether a partial delegation of this control is possible. Copyright © 2013. Published by Elsevier SAS.

  6. Optimization of oncological {sup 18}F-FDG PET/CT imaging based on a multiparameter analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menezes, Vinicius O., E-mail: vinicius@radtec.com.br; Machado, Marcos A. D.; Queiroz, Cleiton C.

    2016-02-15

    Purpose: This paper describes a method to achieve consistent clinical image quality in {sup 18}F-FDG scans accounting for patient habitus, dose regimen, image acquisition, and processing techniques. Methods: Oncological PET/CT scan data for 58 subjects were evaluated retrospectively to derive analytical curves that predict image quality. Patient noise equivalent count rate and coefficient of variation (CV) were used as metrics in their analysis. Optimized acquisition protocols were identified and prospectively applied to 179 subjects. Results: The adoption of different schemes for three body mass ranges (<60 kg, 60–90 kg, >90 kg) allows improved image quality with both point spread function and ordered-subsets expectation maximization-3D reconstruction methods. The application of this methodology showed that CV improved significantly (p < 0.0001) in clinical practice. Conclusions: Consistent oncological PET/CT image quality on a high-performance scanner was achieved from an analysis of the relations existing between dose regimen, patient habitus, acquisition, and processing techniques. The proposed methodology may be used by PET/CT centers to develop protocols to standardize PET/CT imaging procedures and achieve better patient management and cost-effective operations.
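    The CV metric and the mass-band scheme mentioned above can be sketched as follows. The coefficient of variation of a uniform reference region is a standard image-noise metric; the per-band acquisition times below are hypothetical placeholders, not the values derived in the paper:

```python
import numpy as np

def coefficient_of_variation(roi):
    """CV (%) of a uniform reference region: an image-noise metric."""
    roi = np.asarray(roi, dtype=float)
    return 100.0 * roi.std(ddof=1) / roi.mean()

def time_per_bed(weight_kg):
    # Hypothetical acquisition-time scheme for the three body mass bands
    # named in the abstract (<60 kg, 60-90 kg, >90 kg).
    if weight_kg < 60:
        return 90    # seconds per bed position
    if weight_kg <= 90:
        return 120
    return 180

roi = np.array([10.2, 9.8, 10.5, 9.6, 10.1])   # toy uptake samples
print(round(coefficient_of_variation(roi), 2), time_per_bed(95))  # -> 3.49 180
```

Holding CV roughly constant across habitus is what motivates scaling acquisition time (or dose) with body mass.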

  7. Development of an ultralow-light-level luminescence image analysis system for dynamic measurements of transcriptional activity in living and migrating cells.

    PubMed

    Maire, E; Lelièvre, E; Brau, D; Lyons, A; Woodward, M; Fafeur, V; Vandenbunder, B

    2000-04-10

    We have developed an approach to study in single living epithelial cells both cell migration and transcriptional activation, which was evidenced by the detection of luminescence emission from cells transfected with luciferase reporter vectors. The image acquisition chain consists of an epifluorescence inverted microscope, connected to an ultralow-light-level photon-counting camera and an image-acquisition card associated with specialized image analysis software running on a PC. Using a simple method based on a thin calibrated light source, the image acquisition chain has been optimized following comparisons of the performance of microscopy objectives and photon-counting cameras designed to observe luminescence. This setup allows us to measure by image analysis the luminescent light emitted by individual cells stably expressing a luciferase reporter vector. The sensitivity of the camera was adjusted to a high value, which required the use of a segmentation algorithm to eliminate the background noise. Following mathematical morphology treatments, kinetic changes of luminescent sources were analyzed and then correlated with the distance and speed of migration. Our results highlight the usefulness of our image acquisition chain and mathematical morphology software to quantify the kinetics of luminescence changes in migrating cells.

  8. Temporal and spatial resolution required for imaging myocardial function

    NASA Astrophysics Data System (ADS)

    Eusemann, Christian D.; Robb, Richard A.

    2004-05-01

    4-D functional analysis of myocardial mechanics is an area of significant interest and research in cardiology and vascular/interventional radiology. Current multidimensional analysis is limited by the insufficient temporal resolution of x-ray and magnetic resonance based techniques, but recent improvements in system design hold promise for faster and higher-resolution scans that improve images of moving structures, allowing more accurate functional studies, such as in the heart. This paper provides a basis for the requisite temporal and spatial resolution for useful imaging during individual segments of the cardiac cycle. Multiple sample rates during systole and diastole are compared to determine an adequate sample frequency to reduce regional myocardial tracking errors. Concurrently, out-of-plane resolution has to be sufficiently high to minimize the partial volume effect. Temporal resolution and out-of-plane spatial resolution are related factors that must be considered together. The data used for this study is a DSR dynamic volume image dataset with high temporal and spatial resolution using implanted fiducial markers to track myocardial motion. The results of this study suggest a reduced exposure and scan time for x-ray and magnetic resonance imaging methods, since a lower sample rate during systole is sufficient, whereas the period of rapid filling during diastole requires higher sampling. This could potentially reduce the cost of these procedures and allow higher patient throughput.

  9. Quantitative phase-digital holographic microscopy: a new imaging modality to identify original cellular biomarkers of diseases

    NASA Astrophysics Data System (ADS)

    Marquet, P.; Rothenfusser, K.; Rappaz, B.; Depeursinge, C.; Jourdain, P.; Magistretti, P. J.

    2016-03-01

    Quantitative phase microscopy (QPM) has recently emerged as a powerful label-free technique in the field of living cell imaging, allowing non-invasive measurement of cell structure and dynamics with a nanometric axial sensitivity. Since the phase retardation of a light wave transmitted through the observed cells, namely the quantitative phase signal (QPS), is sensitive to both cellular thickness and the intracellular refractive index related to the cellular content, its accurate analysis allows derivation of various cell parameters and monitoring of specific cell processes, which is very likely to identify new cell biomarkers. Specifically, quantitative phase-digital holographic microscopy (QP-DHM), thanks to its numerical flexibility facilitating parallelization and automation processes, represents an appealing imaging modality both to identify original cellular biomarkers of diseases and to explore the underlying pathophysiological processes.

  10. A litmus-type colorimetric and fluorometric volatile organic compound sensor based on inkjet-printed polydiacetylenes on paper substrates.

    PubMed

    Yoon, Bora; Park, In Sung; Shin, Hyora; Park, Hye Jin; Lee, Chan Woo; Kim, Jong-Man

    2013-05-14

    Inkjet-printed paper-based volatile organic compound (VOC) sensor strips imaged with polydiacetylenes (PDAs) are developed. A microemulsion ink containing bisurethane-substituted diacetylene (DA) monomers, 4BCMU, was inkjet printed onto paper using a conventional inkjet office printer. UV irradiation of the printed image allowed fabrication of blue-colored poly-4BCMU on the paper, and the polymer was found to display colorimetric responses to VOCs. Interestingly, a blue-to-yellow color change was observed when the strip was exposed to chloroform vapor, accompanied by the generation of green fluorescence. The principal component analysis plot of the color and fluorescence images of the VOC-exposed polymers allowed a more precise discrimination of VOC vapors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
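    The principal component analysis step can be illustrated with synthetic color/fluorescence feature vectors. The response values below are invented for the sketch (they are not the paper's measurements), and scikit-learn's PCA stands in for whatever implementation the authors used:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical mean (R, G, B, green-fluorescence) responses per VOC exposure;
# each row is one repeated measurement of a sensor strip.
rng = np.random.default_rng(1)
chloroform = np.array([200, 180, 40, 150.0]) + rng.normal(0, 5, (10, 4))
thf        = np.array([120, 60, 160, 20.0]) + rng.normal(0, 5, (10, 4))
hexane     = np.array([40, 40, 200, 5.0])  + rng.normal(0, 5, (10, 4))

X = np.vstack([chloroform, thf, hexane])
scores = PCA(n_components=2).fit_transform(X)   # project to 2-D PC space

# Class means in PC space are well separated, allowing discrimination of vapors.
means = scores.reshape(3, 10, 2).mean(axis=1)
print(np.round(means, 1))
```

Plotting the scores gives the kind of discrimination map the abstract refers to: replicate exposures of the same VOC cluster together, and different VOCs occupy distinct regions.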

  11. Advances in Surface Plasmon Resonance Imaging allowing for quantitative measurement of laterally heterogeneous samples

    NASA Astrophysics Data System (ADS)

    Raegen, Adam; Reiter, Kyle; Clarke, Anthony; Lipkowski, Jacek; Dutcher, John

    2012-02-01

    The Surface Plasmon Resonance (SPR) phenomenon is routinely exploited to qualitatively probe changes to materials on metallic surfaces for use in probes and sensors. Unfortunately, extracting truly quantitative information is usually limited to a select few cases -- uniform absorption/desorption of small biomolecules and films, in which a continuous "slab" model is a good approximation. We present advancements in the SPR technique that expand the number of cases for which the technique can provide meaningful results. Use of a custom, angle-scanning SPR imaging system, together with a refined data analysis method, allow for quantitative kinetic measurements of laterally heterogeneous systems. The degradation of cellulose microfibrils and bundles of microfibrils due to the action of cellulolytic enzymes will be presented as an excellent example of the capabilities of the SPR imaging system.

  12. Analysis of neoplastic lesions in magnetic resonance imaging using self-organizing maps.

    PubMed

    Mei, Paulo Afonso; de Carvalho Carneiro, Cleyton; Fraser, Stephen J; Min, Li Li; Reis, Fabiano

    2015-12-15

    To provide an improved method for the identification and analysis of brain tumors in MRI scans using a semi-automated computational approach, which has the potential to provide a more objective, precise and quantitatively rigorous analysis compared to human visual analysis. Self-Organizing Maps (SOM) is an unsupervised, exploratory data analysis tool that can automatically partition an image into self-similar regions or clusters based on measures of similarity. It can be used to perform segmentation of brain tissue on MR images without prior knowledge. We used SOM to analyze T1, T2 and FLAIR acquisitions from two MRI machines in our service from 14 patients with brain tumors confirmed by biopsies--three lymphomas, six glioblastomas, one meningioma, one ganglioglioma, two oligoastrocytomas and one astrocytoma. The SOM software was used to analyze the data from the three image acquisitions from each patient and generated a self-organized map for each, containing 25 clusters. Damaged tissue was separated from normal tissue using the SOM technique. Furthermore, in some cases it allowed separation of different areas within the tumor, such as edema/peritumoral infiltration and necrosis. In lesions with less precise boundaries in FLAIR, the estimated damaged tissue area in the resulting map appears bigger. Our results showed that SOM has the potential to be a powerful MR imaging analysis technique for the assessment of brain tumors. Copyright © 2015. Published by Elsevier B.V.
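    A minimal SOM of the kind described can be sketched in a few lines of NumPy. The 5x5 grid (25 clusters, as in the abstract), learning schedule, and toy "voxel" features (T1, T2, FLAIR intensities) are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

def train_som(X, grid=(5, 5), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map: returns a codebook of grid[0]*grid[1] units."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    W = rng.uniform(X.min(), X.max(), (n_units, X.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))      # best-matching unit
        frac = t / iters
        lr = lr0 * (1 - frac)                            # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5                # shrinking neighbourhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))               # neighbourhood kernel
        W += lr * h[:, None] * (x - W)
    return W

def assign_clusters(X, W):
    return np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)

# Toy "voxels": two tissue classes in (T1, T2, FLAIR) feature space
rng = np.random.default_rng(3)
normal = rng.normal([0.2, 0.3, 0.2], 0.02, (200, 3))
lesion = rng.normal([0.8, 0.7, 0.9], 0.02, (200, 3))
X = np.vstack([normal, lesion])
W = train_som(X)
labels = assign_clusters(X, W)
```

Mapping each voxel to its best-matching unit yields the cluster map; distinct tissue types end up in distinct groups of units, which is what allows damaged tissue to be separated from normal tissue.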

  13. Visual grading characteristics and ordinal regression analysis during optimisation of CT head examinations.

    PubMed

    Zarb, Francis; McEntee, Mark F; Rainford, Louise

    2015-06-01

    To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that optimised protocols yielded image quality similar to that of current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion, allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
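    The VGC analysis reduces to plotting cumulative rating proportions of one protocol against the other, ROC-style, and measuring the area under the curve, with 0.5 indicating similar image quality. A minimal sketch with invented ratings on a 5-point scale:

```python
import numpy as np

def vgc_auc(ratings_a, ratings_b, scale=(1, 2, 3, 4, 5)):
    """Area under the visual grading characteristics curve.

    Cumulative rating proportions of protocol B (y) are plotted against
    protocol A (x); an area near 0.5 means similar image quality."""
    a = np.asarray(ratings_a)
    b = np.asarray(ratings_b)
    x, y = [0.0], [0.0]
    for threshold in sorted(scale, reverse=True):
        x.append(float(np.mean(a >= threshold)))
        y.append(float(np.mean(b >= threshold)))
    # Trapezoidal area under the (x, y) curve
    auc = 0.0
    for k in range(1, len(x)):
        auc += (x[k] - x[k - 1]) * (y[k] + y[k - 1]) / 2.0
    return auc

current   = [3, 4, 3, 5, 4, 3, 4, 4]   # invented VGA scores, current protocol
optimised = [4, 3, 4, 4, 3, 4, 5, 3]   # invented scores, optimised protocol
print(round(vgc_auc(current, optimised), 2))  # -> 0.5
```

An ordinal regression model fitted to the same ratings (e.g. with protocol and criterion as predictors) would then give the criterion-by-criterion evaluation the abstract describes.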

  14. Analysis of x-ray tomography data of an extruded low density styrenic foam: an image analysis study

    NASA Astrophysics Data System (ADS)

    Lin, Jui-Ching; Heeschen, William

    2016-10-01

    Extruded styrenic foams are low density foams that are widely used for thermal insulation. It is difficult to precisely characterize the structure of the cells in low density foams by traditional cross-section viewing due to the frailty of the walls of the cells. X-ray computed tomography (CT) is a non-destructive, three dimensional structure characterization technique that has great potential for structure characterization of styrenic foams. Unfortunately the intrinsic artifacts of the data and the artifacts generated during image reconstruction are often comparable in size and shape to the thin walls of the foam, making robust and reliable analysis of cell sizes challenging. We explored three different image processing methods to clean up artifacts in the reconstructed images, thus allowing quantitative three dimensional determination of cell size in a low density styrenic foam. Three image processing approaches - an intensity-based approach, an intensity-variance-based approach, and a machine-learning-based approach - are explored in this study, and the machine-learning image feature classification method was shown to be the best. Individual cells are segmented within the images after the images were cleaned up using the three different methods, and the cell sizes are measured and compared in the study. Although the collected data did not yield enough measurements for good cell-size statistics, the problem can be resolved by measuring multiple samples or increasing the imaging field of view.

  15. Starshade Assembly Enabled by the Deep Space Gateway Architecture

    NASA Astrophysics Data System (ADS)

    Grunsfeld, J. M.; Siegler, N.; Mukherjee, R.

    2018-02-01

    A starshade is a large external coronagraph which will allow the direct imaging and analysis of planets around nearby stars. We present how the Deep Space Gateway would enable the robotic/astronaut construction of a starshade.

  16. Blind source separation of ex-vivo aorta tissue multispectral images

    PubMed Central

    Galeano, July; Perez, Sandra; Montoya, Yonatan; Botina, Deivid; Garzón, Johnson

    2015-01-01

    Blind Source Separation (BSS) methods aim for the decomposition of a given signal into its main components or source signals. Those techniques have been widely used in the literature for the analysis of biomedical images, in order to extract the main components of an organ or tissue under study. The analysis of skin images for the extraction of melanin and hemoglobin is an example of the use of BSS. This paper presents a proof of concept of the use of source separation on ex-vivo aorta tissue multispectral images. The images are acquired with an interference filter-based imaging system. The images are processed by means of two algorithms: Independent Component Analysis and Non-negative Matrix Factorization. In both cases, it is possible to obtain maps that quantify the concentration of the main chromophores present in aortic tissue. The algorithms also allow estimation of the spectral absorbance of the main tissue components. Those spectral signatures were compared against the theoretical ones by using correlation coefficients. Those coefficients yield values close to 0.9, which indicates good performance of the method. Also, the correlation coefficients lead to the identification of the concentration maps according to the evaluated chromophore. The results suggest that multi/hyper-spectral systems together with image processing techniques are a potential tool for the analysis of cardiovascular tissue. PMID:26137366
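    The NMF branch of this kind of analysis can be sketched on synthetic data. The two Gaussian "chromophore" spectra and the mixing concentrations below are invented for the illustration, and scikit-learn's NMF stands in for the authors' implementation:

```python
import numpy as np
from sklearn.decomposition import NMF

# Two hypothetical non-negative chromophore absorbance spectra
wl = np.linspace(450, 700, 60)
s1 = np.exp(-((wl - 540) / 30.0) ** 2)      # e.g. a haemoglobin-like band
s2 = np.exp(-((wl - 640) / 50.0) ** 2)      # e.g. a lipid/collagen-like band
S = np.vstack([s1, s2])

# Mixed "pixels": random non-negative concentrations of the two sources
rng = np.random.default_rng(0)
C = rng.uniform(0, 1, (500, 2))
X = C @ S + rng.uniform(0, 0.01, (500, 60))  # small non-negative noise

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
C_hat = model.fit_transform(X)              # concentration maps (flattened)
S_hat = model.components_                   # recovered spectral signatures

# Correlate recovered spectra with the true ones (component order may swap)
corr = np.corrcoef(np.vstack([S, S_hat]))[:2, 2:]
print(np.round(np.abs(corr).max(axis=1), 2))
```

Comparing the recovered signatures against reference spectra via correlation coefficients, as the abstract describes, is exactly the last step above; reshaping `C_hat` back to image dimensions gives the concentration maps.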

  17. Regional Lung Ventilation Analysis Using Temporally Resolved Magnetic Resonance Imaging.

    PubMed

    Kolb, Christoph; Wetscherek, Andreas; Buzan, Maria Teodora; Werner, René; Rank, Christopher M; Kachelrieß, Marc; Kreuter, Michael; Dinkel, Julien; Heußel, Claus Peter; Maier-Hein, Klaus

    We propose a computer-aided method for regional ventilation analysis and observation of lung diseases in temporally resolved magnetic resonance imaging (4D MRI). A shape model-based segmentation and registration workflow was used to create an atlas-derived reference system in which regional tissue motion can be quantified and multimodal image data can be compared regionally. Model-based temporal registration of the lung surfaces in 4D MRI data was compared with the registration of 4D computed tomography (CT) images. A ventilation analysis was performed on 4D MR images of patients with lung fibrosis; 4D MR ventilation maps were compared with corresponding diagnostic 3D CT images of the patients and 4D CT maps of subjects without impaired lung function (serving as reference). Comparison between the computed patient-specific 4D MR regional ventilation maps and diagnostic CT images shows good correlation in conspicuous regions. Comparison to 4D CT-derived ventilation maps supports the plausibility of the 4D MR maps. Dynamic MRI-based flow-volume loops and spirograms further visualize the free-breathing behavior. The proposed methods allow for 4D MR-based regional analysis of tissue dynamics and ventilation in spontaneous breathing and comparison of patient data. The proposed atlas-based reference coordinate system provides an automated manner of annotating and comparing multimodal lung image data.

  18. Platform for Post-Processing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    2010-01-01

    Signal- and image-processing methods are commonly needed to extract information from the waves, improve image resolution, and highlight defects in an image. Since some similarity exists among all waveform-based nondestructive evaluation (NDE) methods, a common software platform containing multiple signal- and image-processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data. This software offers the user hundreds of basic and advanced signal- and image-processing capabilities including esoteric 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included for signal- and image-processing capability so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data such as that from Raman spectroscopy. An extensive joint time-frequency module is included for analysis of non-stationary or transient data such as that from acoustic emission, vibration, or earthquake measurements.

  19. imzML: Imaging Mass Spectrometry Markup Language: A common data format for mass spectrometry imaging.

    PubMed

    Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard

    2011-01-01

    Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided in two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools. One is no longer limited to proprietary software, but is able to use the processing software which is best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org .
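    The two-file design described (binary spectra, XML metadata carrying byte offsets and a shared UUID) can be illustrated with a toy reader/writer. This is schematic only; the element and attribute names are invented and do not follow the actual imzML/mzML schema or controlled vocabulary:

```python
import struct
import uuid
import xml.etree.ElementTree as ET

def write_pair(spectra, binary_path, xml_path):
    """Toy illustration of the imzML split: spectra in a binary file,
    metadata with byte offsets and a shared UUID in an XML file."""
    uid = str(uuid.uuid4())                      # links the two files
    root = ET.Element("imagingMS", uuid=uid)
    with open(binary_path, "wb") as f:
        for (x, y), peaks in spectra.items():
            offset = f.tell()                    # byte offset recorded in XML
            for mz, inten in peaks:
                f.write(struct.pack("<dd", mz, inten))
            ET.SubElement(root, "spectrum", x=str(x), y=str(y),
                          offset=str(offset), length=str(len(peaks)))
    ET.ElementTree(root).write(xml_path)

def read_spectrum(binary_path, xml_path, x, y):
    """Look up a pixel's offset in the XML, then seek into the binary file."""
    root = ET.parse(xml_path).getroot()
    for s in root.iter("spectrum"):
        if s.get("x") == str(x) and s.get("y") == str(y):
            off, n = int(s.get("offset")), int(s.get("length"))
            with open(binary_path, "rb") as f:
                f.seek(off)
                raw = f.read(16 * n)             # two little-endian doubles/peak
            return [struct.unpack_from("<dd", raw, 16 * i) for i in range(n)]

spectra = {(0, 0): [(100.0, 5.0), (200.0, 7.0)], (0, 1): [(150.0, 3.0)]}
write_pair(spectra, "toy.ibd", "toy.xml")
print(read_spectrum("toy.ibd", "toy.xml", 0, 1))   # -> [(150.0, 3.0)]
```

The payoff of the split is visible even in the toy: the metadata file stays small and human-readable while the bulk spectral data remains compact binary, and random access to any pixel costs a single seek.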

  20. Managing biomedical image metadata for search and retrieval of similar images.

    PubMed

    Korenblum, Daniel; Rubin, Daniel; Napel, Sandy; Rodriguez, Cesar; Beaulieu, Chris

    2011-08-01

    Radiology images are generally disconnected from the metadata describing their contents, such as imaging observations ("semantic" metadata), which are usually described in text reports that are not directly linked to the images. We developed a system, the Biomedical Image Metadata Manager (BIMM) to (1) address the problem of managing biomedical image metadata and (2) facilitate the retrieval of similar images using semantic feature metadata. Our approach allows radiologists, researchers, and students to take advantage of the vast and growing repositories of medical image data by explicitly linking images to their associated metadata in a relational database that is globally accessible through a Web application. BIMM receives input in the form of standards-based metadata files via a Web service and parses and stores the metadata in a relational database, allowing efficient data query and maintenance capabilities. Upon querying BIMM for images, 2D regions of interest (ROIs) stored as metadata are automatically rendered onto preview images included in search results. The system's "match observations" function retrieves images with similar ROIs based on specific semantic features describing imaging observation characteristics (IOCs). We demonstrate that the system, using IOCs alone, can accurately retrieve images with diagnoses matching the query images, and we evaluate its performance on a set of annotated liver lesion images. BIMM has several potential applications, e.g., computer-aided detection and diagnosis, content-based image retrieval, automating medical analysis protocols, and gathering population statistics like disease prevalences. The system provides a framework for decision support systems, potentially improving their diagnostic accuracy and selection of appropriate therapies.
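    The relational storage and the "match observations" query can be sketched with SQLite. The table layout and feature names below are illustrative inventions, not BIMM's actual schema:

```python
import sqlite3

# Schematic sketch of the described design: image semantic metadata in a
# relational table, queried for images sharing the query's features.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE roi_metadata (
    image_id TEXT, margin TEXT, enhancement TEXT, diagnosis TEXT)""")
rows = [
    ("img1", "sharp",          "rim",         "metastasis"),
    ("img2", "poorly_defined", "homogeneous", "cyst"),
    ("img3", "sharp",          "rim",         "abscess"),
]
db.executemany("INSERT INTO roi_metadata VALUES (?,?,?,?)", rows)

def match_observations(image_id):
    """Return other images sharing the query's imaging-observation features."""
    margin, enh = db.execute(
        "SELECT margin, enhancement FROM roi_metadata WHERE image_id=?",
        (image_id,)).fetchone()
    cur = db.execute(
        "SELECT image_id FROM roi_metadata WHERE margin=? AND enhancement=? "
        "AND image_id<>?", (margin, enh, image_id))
    return [r[0] for r in cur]

print(match_observations("img1"))   # -> ['img3']
```

A production system would score partial feature overlap rather than require exact matches, but the exact-match query already shows how semantic metadata, once in a relational store, makes similarity retrieval a plain SQL operation.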

  1. Spheres of Earth: An Introduction to Making Observations of Earth Using an Earth System's Science Approach. Student Guide

    NASA Technical Reports Server (NTRS)

    Graff, Paige Valderrama; Baker, Marshalyn (Editor); Graff, Trevor (Editor); Lindgren, Charlie (Editor); Mailhot, Michele (Editor); McCollum, Tim (Editor); Runco, Susan (Editor); Stefanov, William (Editor); Willis, Kim (Editor)

    2010-01-01

    Scientists from the Image Science and Analysis Laboratory (ISAL) at NASA's Johnson Space Center (JSC) work with astronauts onboard the International Space Station (ISS) who take images of Earth. Astronaut photographs, sometimes referred to as Crew Earth Observations, are taken using hand-held digital cameras onboard the ISS. These digital images allow scientists to study our Earth from the unique perspective of space. Astronauts have taken images of Earth since the 1960s. There is a database of over 900,000 astronaut photographs available at http://eol.jsc.nasa.gov . Images are requested by ISAL scientists at JSC and astronauts in space personally frame and acquire them from the Destiny Laboratory or other windows in the ISS. By having astronauts take images, they can specifically frame them according to a given request and need. For example, they can choose to use different lenses to vary the amount of area (field of view) an image will cover. Images can be taken at different times of the day which allows different lighting conditions to bring out or highlight certain features. The viewing angle at which an image is acquired can also be varied to show the same area from different perspectives. Pointing the camera straight down gives you a nadir shot. Pointing the camera at an angle to get a view across an area would be considered an oblique shot. Being able to change these variables makes astronaut photographs a unique and useful data set. Astronaut photographs are taken from the ISS from altitudes of 300 - 400 km (185 to 250 miles). One of the current cameras being used, the Nikon D3X digital camera, can take images using a 50, 100, 250, 400 or 800mm lens. These different lenses allow for a wider or narrower field of view. The higher the focal length (800mm for example) the narrower the field of view (less area will be covered). Higher focal lengths also show greater detail of the area on the surface being imaged. 
There are four major systems or spheres of Earth. They are: Atmosphere, Biosphere, Hydrosphere, and Litho/Geosphere.
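The focal-length/field-of-view trade-off described above follows the simple pinhole-camera relation: ground footprint = altitude × sensor width / focal length. A quick sketch, assuming a 36 mm full-frame sensor width and a 400 km orbital altitude (both illustrative values):

```python
# Ground footprint of a nadir photograph from the pinhole-camera relation
# footprint = altitude * sensor_width / focal_length.
def ground_width_km(altitude_km, focal_length_mm, sensor_width_mm=36.0):
    return altitude_km * sensor_width_mm / focal_length_mm

for f in (50, 100, 250, 400, 800):
    print(f"{f:4d} mm lens -> {ground_width_km(400, f):6.1f} km swath")
```

Under these assumptions the 50 mm lens covers a swath of about 288 km while the 800 mm lens covers only about 18 km, matching the text's point that higher focal lengths narrow the field of view while resolving finer surface detail.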

  2. Microanalysis of dental caries using laser-scanned fluorescence

    NASA Astrophysics Data System (ADS)

    Barron, Joseph R.; Paton, Barry E.; Zakariasen, Kenneth L.

    1992-06-01

It is well known that enamel and dentin fluoresce when illuminated by short-wavelength optical radiation. Fluorescence emission from carious and non-carious regions of teeth has been studied using a new experimental scanning technique for fluorescence analysis of dental sections. Scanning in two dimensions allows surface maps of dental caries to be created. These surface images are then enhanced using conventional and newer image processing techniques. Carious regions can be readily identified, and contour maps can be used to graphically display the degree of damage on both surfaces and transverse sections. Numerous studies have shown that fluorescence from carious regions differs significantly from that of non-carious regions. The scanning laser fluorescence spectrometer focuses light from a 25 mW He-Cd laser at 442 nm through an objective lens onto a cross-sectional area as small as 3 micrometers in diameter. Microtome-prepared dental samples 100 micrometers thick are laid flat on an optical bench perpendicular to the incident beam. The sample is moved under computer control in X and Y with an absolute precision of 0.1 micrometers. The backscattered light is both spatially and spectrally filtered before being measured on a long-wavelength-sensitized photomultiplier tube. High-precision analysis of dental samples allows detailed maps of carious regions to be determined. Successive images allow time studies of caries growth and even the potential for remineralization studies of decalcified regions.

  3. Wide-field OCT imaging of oral lesions in vivo: quantification and classification (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Raizada, Rashika; Lee, Anthony M. D.; Liu, Kelly Y.; MacAulay, Calum E.; Ng, Samson; Poh, Catherine F.; Lane, Pierre M.

    2017-02-01

Worldwide, there are over 450,000 new cases of oral cancer reported each year. Late-stage diagnosis remains a significant factor responsible for its high mortality rate (>50%). In-vivo, non-invasive, rapid imaging techniques that can visualise clinically significant changes in the oral mucosa may improve the management of oral cancer. We present an analysis of features extracted from oral images obtained using our hand-held wide-field Optical Coherence Tomography (OCT) instrument. The images were analyzed for epithelial scattering, overall tissue scattering, and 3D basement membrane topology. The associations between these three features and disease state (benign, pre-cancer, or cancer), as measured by clinical assessment or pathology, were determined. While the scattering coefficient has previously been shown to be sensitive to cancer and dysplasia, likely due to changes in nuclear and cellular density, the addition of basement membrane topology may increase diagnostic ability, as the presence of bulbous rete pegs in the basement membrane is known to be characteristic of dysplasia. The resolution and field-of-view of our oral OCT system allowed analysis of these features over large areas of up to 2.5 mm x 90 mm in a timely fashion, allowing for application in clinical settings.

  4. PACS-based interface for 3D anatomical structure visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Koehl, Christophe; Soler, Luc; Marescaux, Jacques

    2002-05-01

The interpretation of radiological images is routine but remains a rather difficult task for physicians. It requires complex mental processes that permit translation from 2D slices into 3D localization and volume determination of visible diseases. An easier and more extensive visualization and exploitation of medical images can be reached through the use of computer-based systems that provide real help from patient admission to post-operative follow-up. To this end, we have developed a 3D visualization interface linked to a PACS database that allows manipulation of, and interaction with, virtual organs delineated from CT scans or MRI. This software provides 3D real-time surface rendering of anatomical structures, an accurate evaluation of volumes and distances, and the improvement of radiological image analysis and exam annotation through a negatoscope tool. It also provides a tool for surgical planning allowing the positioning of an interactive laparoscopic instrument and organ resection. The software system could revolutionize the field of computerized imaging technology. Indeed, it provides a handy and portable tool for pre-operative and intra-operative analysis of anatomy and pathology in various medical fields. This constitutes the first step of the future development of augmented reality and surgical simulation systems.

  5. GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database

    NASA Astrophysics Data System (ADS)

    Bottigli, U.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M. E.; Fauci, F.; Golosio, B.; Lauria, A.; Lopez Torres, E.; Magro, R.; Masala, G. L.; Oliva, P.; Palmiero, R.; Raso, G.; Retico, A.; Stumbo, S.; Tangaro, S.

    2003-09-01

The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several physics departments, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed CAD (Computer Aided Detection) software which is integrated in a station that can also be used to acquire new images, serve as an archive, and perform statistical analysis. The images (18×24 cm2, digitised by a CCD linear scanner with an 85 μm pitch and 4096 gray levels) are completely described: pathological ones have a consistent characterization with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology. In each hospital, digital images of local patients are stored in the local database. Using the GRID connection, GPCALMA will allow each node to work on distributed database data as well as local database data. Using its database, the GPCALMA tools perform several analyses. A texture analysis, i.e. an automated classification of adipose, dense, or glandular texture, can be provided by the system. GPCALMA software also allows classification of pathological features, in particular analysis of massive lesions (both opacities and spiculated lesions) and of microcalcification clusters. The detection of pathological features is made using neural network software that provides a selection of areas showing a given "suspicion level" of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves. The results of the GPCALMA system as a "second reader" will also be presented.
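The ROC evaluation mentioned above can be illustrated with a minimal sketch: sweep a threshold over per-region "suspicion level" scores, record sensitivity against false-positive rate, and integrate. The scores and labels below are made up for illustration; this is not GPCALMA's code.

```python
# Turn per-region suspicion scores plus ground-truth labels into ROC points.
def roc_points(scores, labels):
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for score, label in pairs:          # lower the threshold one score at a time
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))  # (false-positive rate, sensitivity)
    return points

def auc(points):
    # trapezoidal integration under the ROC curve
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]   # made-up suspicion levels
labels = [1, 1, 0, 1, 1, 0, 0, 0]                    # 1 = true lesion
print(round(auc(roc_points(scores, labels)), 3))     # 0.875
```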

  6. Histopathology mapping of biochemical changes in myocardial infarction by Fourier transform infrared spectral imaging.

    PubMed

    Yang, Tian T; Weng, Shi F; Zheng, Na; Pan, Qing H; Cao, Hong L; Liu, Liang; Zhang, Hai D; Mu, Da W

    2011-04-15

Fourier transform infrared (FTIR) imaging and microspectroscopy have been extensively applied in the identification and investigation of both healthy and diseased tissues. FTIR imaging can be used to determine the biodistribution of several molecules of interest (carbohydrates, lipids, proteins) for tissue analysis, without the need for prior staining of these tissues. Molecular structure data, such as protein secondary structure and collagen triple-helix features, can also be obtained from the same analysis. Thus, several histopathological lesions, for example myocardial infarction, can be identified from FTIR-analyzed tissue images, which can allow for more accurate discrimination between healthy tissues and pathological lesions. Accordingly, we propose FTIR imaging as a new tool integrating both molecular and histopathological assessment to investigate the degree of pathological changes in tissues. In this study, myocardial infarction is presented as an illustrative example of the wide potential of FTIR imaging for biomedical applications. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  7. Reducing noise component on medical images

    NASA Astrophysics Data System (ADS)

    Semenishchev, Evgeny; Voronin, Viacheslav; Dub, Vladimir; Balabaeva, Oksana

    2018-04-01

Medical visualization and analysis of medical data is an active research direction. Medical images are used in microbiology, genetics, roentgenology, oncology, surgery, ophthalmology, etc. Initial data processing is a major step towards obtaining a good diagnostic result. This paper presents an approach that allows image filtering while preserving object borders. The proposed algorithm is based on sequential data processing. At the first stage, local areas are determined; for this purpose threshold processing, as well as the classical ICI algorithm, is applied. The second stage uses a method based on two criteria, namely the L2 norm and the first-order square difference. To preserve the boundaries of objects, the transition boundary and its local neighborhood are processed with a fixed-coefficient filtering algorithm. As examples, reconstructed images from CT, X-ray, and microbiological studies are shown. The test images demonstrate the effectiveness of the proposed algorithm and its applicability to many medical imaging applications.
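The paper's specific two-criteria filter is not reproduced here; as a generic illustration of the edge-preserving idea, the sketch below averages each pixel only with neighbours whose intensity is close to its own, so smoothing never crosses an object border. The window radius and threshold are arbitrary choices.

```python
import numpy as np

# Generic edge-preserving smoothing: average each pixel with neighbours on
# the same side of an intensity edge (difference below a threshold).
def edge_preserving_smooth(img, radius=1, thresh=30.0):
    img = img.astype(float)
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1]
            mask = np.abs(patch - img[y, x]) <= thresh  # same-side neighbours
            out[y, x] = patch[mask].mean()
    return out

# A step edge survives smoothing: pixels never average across the jump.
step = np.array([[10, 10, 10, 200, 200, 200]] * 4, dtype=float)
smoothed = edge_preserving_smooth(step)
print(smoothed[0])  # [ 10.  10.  10. 200. 200. 200.]
```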

  8. The Magneto-optical Filter, Working Principles and Recent Progress

    NASA Technical Reports Server (NTRS)

    Cacciani, A.; Rhodes, E. J., Jr.

    1984-01-01

The Magneto-Optical Filter, which allows simultaneous magnetic and velocity measurements (in both imaging and non-imaging modes) without the need for a spectrograph, is described. In this way the stability and alignment problems of the spectrograph are completely avoided. Its major advantages are: an absolute wavelength reference and stability, a high signal-to-noise ratio, and independence of the transmission profile from the incidence angle of the solar beam. It is an imaging instrument allowing high-wavenumber analysis of the solar oscillation spectrum and continuous monitoring of the image position through the chromospheric facular structures. The apparatus in use at Mt. Wilson is assembled in modular form. Its most important part is a glass cell containing the sodium vapor. The filter is easy to use, but the cell is not easy to construct in an optimal way. Work is in progress both to use Na and K together and to prevent the windows from becoming coated during long-term operation.

  9. Application of Particle Image Velocimetry and Reference Image Topography to jet shock cells using the hydraulic analogy

    NASA Astrophysics Data System (ADS)

    Kumar, Vaibhav; Ng, Ivan; Sheard, Gregory J.; Brocher, Eric; Hourigan, Kerry; Fouras, Andreas

    2011-08-01

    This paper examines the shock cell structure, vorticity and velocity field at the exit of an underexpanded jet nozzle using a hydraulic analogy and the Reference Image Topography technique. Understanding the flow in this region is important for the mitigation of screech, an aeroacoustic problem harmful to aircraft structures. Experiments are conducted on a water table, allowing detailed quantitative investigation of this important flow regime at a greatly reduced expense. Conventional Particle Image Velocimetry is employed to determine the velocity and vorticity fields of the nozzle exit region. Applying Reference Image Topography, the wavy water surface is reconstructed and when combined with the hydraulic analogy, provides a pressure map of the region. With this approach subtraction of surfaces is used to highlight the unsteady regions of the flow, which is not as convenient or quantitative with conventional Schlieren techniques. This allows a detailed analysis of the shock cell structures and their interaction with flow instabilities in the shear layer that are the underlying cause of jet screech.

  10. Ship Speed Retrieval From Single Channel TerraSAR-X Data

    NASA Astrophysics Data System (ADS)

    Soccorsi, Matteo; Lehner, Susanne

    2010-04-01

A method to estimate the speed of a moving ship is presented. The technique, introduced in Kirscht (1998), is extended to marine applications and validated on TerraSAR-X High-Resolution (HR) data. The generation of a sequence of single-look SAR images from a single-channel image corresponds to an image time series with reduced resolution. This allows applying change detection techniques to the time series to evaluate the velocity components of the ship in range and azimuth. The evaluation of the displacement vector of a moving target in consecutive images of the sequence allows the estimation of the azimuth velocity component. The range velocity component is estimated by evaluating the variation of the signal amplitude during the sequence. In order to apply the technique to TerraSAR-X Spot Light (SL) data, a further processing step is needed: the phase has to be corrected as presented in Eineder et al. (2009) due to the SL acquisition mode; otherwise the image sequence cannot be generated. The analysis, validated where possible by the Automatic Identification System (AIS), was performed in the framework of the ESA project MARISS.
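The azimuth-velocity step can be sketched as fitting the drift of a target's tracked position across the single-look sequence. The pixel spacing and frame interval below are hypothetical values for illustration, not TerraSAR-X parameters.

```python
# Azimuth velocity from target positions tracked across an image sequence:
# least-squares slope of azimuth position (metres) over time (seconds).
def azimuth_velocity(positions_px, pixel_spacing_m, frame_interval_s):
    n = len(positions_px)
    t = [i * frame_interval_s for i in range(n)]
    x = [p * pixel_spacing_m for p in positions_px]
    t_mean = sum(t) / n
    x_mean = sum(x) / n
    num = sum((ti - t_mean) * (xi - x_mean) for ti, xi in zip(t, x))
    den = sum((ti - t_mean) ** 2 for ti in t)
    return num / den

# A target drifting 2 px per 0.1 s frame at 1.0 m pixel spacing -> 20 m/s.
print(azimuth_velocity([0, 2, 4, 6, 8], 1.0, 0.1))
```

Fitting a slope over all frames, rather than differencing one consecutive pair, averages down the position-tracking noise.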

  11. A Stochastic-Variational Model for Soft Mumford-Shah Segmentation

    PubMed Central

    2006-01-01

    In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059
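The soft-assignment idea, each pixel belonging to each image pattern with some probability rather than a hard label, can be sketched with a toy membership function. The class means and the scale parameter beta below are illustrative choices, not the paper's energy functional.

```python
import math

# Soft (fuzzy) assignment: each pixel gets a probability of belonging to
# each pattern, computed here from its distance to illustrative class means.
def soft_memberships(pixel, class_means, beta=1e-4):
    weights = [math.exp(-beta * (pixel - m) ** 2) for m in class_means]
    total = sum(weights)
    return [w / total for w in weights]   # probabilities sum to 1

means = [50.0, 200.0]                 # two image patterns
p = soft_memberships(120.0, means)    # ambiguous pixel: mixed membership
q = soft_memberships(55.0, means)     # clearly pattern 0
print([round(v, 3) for v in p], [round(v, 3) for v in q])
```

Taking the argmax of the memberships recovers a hard segmentation, which is the sense in which the soft model is more general.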

  12. Multispectral UV imaging for fast and non-destructive quality control of chemical and physical tablet attributes.

    PubMed

    Klukkert, Marten; Wu, Jian X; Rantanen, Jukka; Carstensen, Jens M; Rades, Thomas; Leopold, Claudia S

    2016-07-30

Monitoring of tablet quality attributes in the direct vicinity of the production process requires analytical techniques that allow fast, non-destructive, and accurate tablet characterization. The overall objective of this study was to investigate the applicability of multispectral UV imaging as a reliable, rapid technique for estimation of the tablet API content and tablet hardness, as well as determination of tablet intactness and the tablet surface density profile. One of the aims was to establish an image analysis approach based on multivariate image analysis and pattern recognition to evaluate the potential of UV imaging for automated quality control of tablets with respect to their intactness and surface density profile. Various tablets of different composition and different quality regarding their API content, radial tensile strength, intactness, and surface density profile were prepared using an eccentric as well as a rotary tablet press at compression pressures from 20 MPa up to 410 MPa. It was found that UV imaging can provide relevant information on both chemical and physical tablet attributes. The tablet API content and radial tensile strength could be estimated by UV imaging combined with partial least squares analysis. Furthermore, an image analysis routine was developed and successfully applied to the UV images that provided qualitative information on physical tablet surface properties such as intactness and surface density profiles, as well as quantitative information on variations in the surface density. In conclusion, this study demonstrates that UV imaging combined with image analysis is an effective and non-destructive method to determine chemical and physical quality attributes of tablets and is a promising approach for (near) real-time monitoring of the tablet compaction process and formulation optimization purposes. Copyright © 2015 Elsevier B.V. All rights reserved.
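The calibration step, relating measured spectra to API content via partial least squares, can be sketched with a minimal one-component PLS1 fit. The spectra below are synthetic and every number is an assumption; none of it comes from the study.

```python
import numpy as np

# One-component PLS1 calibration (NIPALS form) on synthetic spectra.
def pls1_fit(X, y):
    Xc, yc = X - X.mean(0), y - y.mean()
    w = Xc.T @ yc
    w /= np.linalg.norm(w)            # weight vector (first latent direction)
    t = Xc @ w                        # sample scores
    b = (t @ yc) / (t @ t)            # inner regression coefficient
    return w, b, X.mean(0), y.mean()

def pls1_predict(X, model):
    w, b, x_mean, y_mean = model
    return (X - x_mean) @ w * b + y_mean

rng = np.random.default_rng(0)
true_spectrum = np.array([0.2, 1.0, 0.5, 0.1])     # made-up band intensities
content = rng.uniform(80, 120, size=30)            # API content, % of claim
X = np.outer(content, true_spectrum) + rng.normal(0, 0.5, (30, 4))

model = pls1_fit(X, content)
pred = pls1_predict(X, model)
print(round(float(np.corrcoef(pred, content)[0, 1]), 3))
```

Real chemometric practice would add cross-validated selection of the number of latent components; one component suffices here because the synthetic data have a single source of variation.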

  13. High-definition Fourier Transform Infrared (FT-IR) Spectroscopic Imaging of Human Tissue Sections towards Improving Pathology

    PubMed Central

    Nguyen, Peter L.; Davidson, Bennett; Akkina, Sanjeev; Guzman, Grace; Setty, Suman; Kajdacsy-Balla, Andre; Walsh, Michael J.

    2015-01-01

High-definition Fourier Transform Infrared (FT-IR) spectroscopic imaging is an emerging approach to obtain detailed images that have associated biochemical information. FT-IR imaging of tissue is based on the principle that different regions of the mid-infrared are absorbed by different chemical bonds (e.g., C=O, C-H, N-H) within cells or tissue that can then be related to the presence and composition of biomolecules (e.g., lipids, DNA, glycogen, protein, collagen). In an FT-IR image, every pixel within the image comprises an entire infrared (IR) spectrum that can give information on the biochemical status of the cells, which can then be exploited for cell-type or disease-type classification. In this paper, we show: how to obtain IR images from human tissues using an FT-IR system, how to modify existing instrumentation to allow for high-definition imaging capabilities, and how to visualize FT-IR images. We then present some applications of FT-IR for pathology using the liver and kidney as examples. FT-IR imaging holds exciting applications in providing a novel route to obtain biochemical information from cells and tissue in an entirely label-free, non-perturbing manner, giving new insight into biomolecular changes as part of disease processes. Additionally, this biochemical information can potentially allow for objective and automated analysis of certain aspects of disease diagnosis. PMID:25650759
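Because every pixel carries a full spectrum, an FT-IR data cube is often visualised by integrating a single absorption band at each pixel. The sketch below does this for a band around 1650 cm⁻¹ (roughly the protein amide I region); the data cube and band limits are synthetic assumptions, not values from the paper.

```python
import numpy as np

# Per-pixel band integration of a synthetic FT-IR cube (rows, cols, spectrum).
wavenumbers = np.linspace(1000, 1800, 200)
band = (wavenumbers > 1600) & (wavenumbers < 1700)   # assumed amide I window

def band_image(cube):
    """Collapse the spectral axis to the area under one absorption band."""
    return cube[:, :, band].sum(axis=2)

cube = np.zeros((2, 2, 200))
# One pixel gets a Gaussian absorption peak centred at 1650 1/cm.
cube[0, 0, :] = np.exp(-((wavenumbers - 1650) / 20.0) ** 2)

img = band_image(cube)
print(img[0, 0] > img[1, 1])  # True: the protein-rich pixel lights up
```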

  14. Flow Structure and Surface Topology on a UCAV Planform

    NASA Astrophysics Data System (ADS)

    Elkhoury, Michel; Yavuz, Metin; Rockwell, Donald

    2003-11-01

Flow past an X-45 UCAV planform involves the complex generation and interaction of vortices, their breakdown, and the occurrence of surface separation and stall. A cinema technique of high-image-density particle image velocimetry, in conjunction with dye visualization, allows characterization of the time-averaged and instantaneous states of the flow, in terms of critical points of the near-surface streamlines. These features are related to patterns of surface-normal vorticity and velocity fluctuation. Spectral analysis of the naturally occurring unsteadiness of the flow allows definition of the most effective frequencies for small-amplitude perturbation of the wing, which leads to substantial alterations of the aforementioned patterns of flow structure and topology adjacent to the surface.

  15. One Size Fits All: Evaluation of the Transferability of a New "Learning" Histologic Image Analysis Application.

    PubMed

    Arlt, Janine; Homeyer, André; Sänger, Constanze; Dahmen, Uta; Dirsch, Olaf

    2016-01-01

    Quantitative analysis of histologic slides is of importance for pathology and also to address surgical questions. Recently, a novel application was developed for the automated quantification of whole-slide images. The aim of this study was to test and validate the underlying image analysis algorithm with respect to user friendliness, accuracy, and transferability to different histologic scenarios. The algorithm splits the images into tiles of a predetermined size and identifies the tissue class of each tile. In the training procedure, the user specifies example tiles of the different tissue classes. In the subsequent analysis procedure, the algorithm classifies each tile into the previously specified classes. User friendliness was evaluated by recording training time and testing reproducibility of the training procedure of users with different background. Accuracy was determined with respect to single and batch analysis. Transferability was demonstrated by analyzing tissue of different organs (rat liver, kidney, small bowel, and spleen) and with different stainings (glutamine synthetase and hematoxylin-eosin). Users of different educational background could apply the program efficiently after a short introduction. When analyzing images with similar properties, accuracy of >90% was reached in single images as well as in batch mode. We demonstrated that the novel application is user friendly and very accurate. With the "training" procedure the application can be adapted to novel image characteristics simply by giving examples of relevant tissue structures. Therefore, it is suitable for the fast and efficient analysis of high numbers of fully digitalized histologic sections, potentially allowing "high-throughput" quantitative "histomic" analysis.
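The train-on-example-tiles / classify-every-tile loop described above can be sketched as a nearest-centroid classifier on a single per-tile feature. The real application's features and classifier are not specified in the abstract, so this is only an illustrative stand-in.

```python
import numpy as np

# "Training": the user supplies example tiles per tissue class; we keep one
# centroid per class (here just the mean intensity, a deliberately simple
# stand-in feature).
def train(example_tiles):
    """example_tiles: {class_name: [tile arrays]} -> class centroids."""
    return {c: np.mean([t.mean() for t in tiles])
            for c, tiles in example_tiles.items()}

# "Analysis": every tile of the whole-slide image is assigned to the
# nearest class centroid.
def classify(tile, centroids):
    return min(centroids, key=lambda c: abs(tile.mean() - centroids[c]))

examples = {
    "tissue":     [np.full((8, 8), 40.0), np.full((8, 8), 60.0)],
    "background": [np.full((8, 8), 220.0), np.full((8, 8), 240.0)],
}
centroids = train(examples)
print(classify(np.full((8, 8), 55.0), centroids))  # tissue
```

Adapting to a new staining then amounts to supplying new example tiles, which is exactly the "learning" property the study evaluates.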

  16. Empirical validation of statistical parametric mapping for group imaging of fast neural activity using electrical impedance tomography.

    PubMed

    Packham, B; Barnes, G; Dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D

    2016-06-01

Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have  >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues and still maintain the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p  <  0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that such an analysis is a feasible approach for EIT images of neural activity.
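The non-parametric side of such a comparison can be sketched as a sign-flip permutation test on the maximum voxel statistic, which controls the family-wise error across all voxels without random-field assumptions. The data below are synthetic and the test is generic, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 8, 200
data = rng.normal(0, 1, (n_subjects, n_voxels))
data[:, 0] += 4.0                     # one truly active voxel

def max_t(d):
    # largest one-sample t statistic across all voxels
    return np.max(d.mean(0) / (d.std(0, ddof=1) / np.sqrt(len(d))))

observed = max_t(data)

# Null distribution: randomly flip each subject's sign and recompute the
# maximum statistic; the max over voxels handles multiple testing.
null = []
for _ in range(200):
    signs = rng.choice([-1.0, 1.0], size=(n_subjects, 1))
    null.append(max_t(data * signs))

p = (1 + sum(t >= observed for t in null)) / (1 + len(null))
print("significant" if p < 0.05 else "not significant")
```

Random field theory approximates this same max-statistic distribution analytically, which is why the two approaches can be compared voxel for voxel.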

  17. Empirical validation of statistical parametric mapping for group imaging of fast neural activity using electrical impedance tomography

    PubMed Central

    Packham, B; Barnes, G; dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D

    2016-01-01

Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have  >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues and still maintain the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but never validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p  <  0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that such an analysis is a feasible approach for EIT images of neural activity. PMID:27203477

  18. Volumetric image interpretation in radiology: scroll behavior and cognitive processes.

    PubMed

    den Boer, Larissa; van der Schaaf, Marieke F; Vincken, Koen L; Mol, Chris P; Stuijfzand, Bobby G; van der Gijp, Anouk

    2018-05-16

The interpretation of medical images is a primary task for radiologists. Besides two-dimensional (2D) images, current imaging technologies allow for volumetric display of medical images. Whereas current radiology practice increasingly uses volumetric images, the majority of studies on medical image interpretation are conducted on 2D images. The current study aimed to gain deeper insight into the volumetric image interpretation process by examining this process in twenty radiology trainees who all completed four volumetric image cases. Two types of data were obtained, concerning scroll behavior and think-aloud data. Types of scroll behavior concerned oscillations, half runs, full runs, image manipulations, and interruptions. Think-aloud data were coded by a framework of knowledge and skills in radiology including three cognitive processes: perception, analysis, and synthesis. Relating scroll behavior to cognitive processes showed that oscillations and half runs coincided more often with analysis and synthesis than full runs, whereas full runs coincided more often with perception than oscillations and half runs. Interruptions were characterized by synthesis and image manipulations by perception. In addition, we investigated relations between cognitive processes and found an overall bottom-up way of reasoning with dynamic interactions between cognitive processes, especially between perception and analysis. In sum, our results highlight the dynamic interactions between these processes and the grounding of cognitive processes in scroll behavior. This suggests that the types of scroll behavior are relevant to describe how radiologists interact with and manipulate volumetric images.

  19. Hyperspectral image analysis for plant stress detection

    USDA-ARS?s Scientific Manuscript database

    Abiotic and disease-induced stress significantly reduces plant productivity. Automated on-the-go mapping of plant stress allows timely intervention and mitigating of the problem before critical thresholds are exceeded, thereby, maximizing productivity. A hyperspectral camera analyzed the spectral ...
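A typical first step in hyperspectral stress mapping (not necessarily the one used in this manuscript, whose abstract is truncated here) is a per-pixel vegetation index such as NDVI, computed from near-infrared and red reflectance:

```python
# Normalized Difference Vegetation Index from two reflectance bands.
# Healthy vegetation reflects strongly in the near-infrared; stress lowers
# NIR reflectance and raises red reflectance, pushing NDVI down.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

healthy = ndvi(0.50, 0.05)   # illustrative reflectance values
stressed = ndvi(0.30, 0.15)
print(round(healthy, 3), round(stressed, 3))
```

Mapping such an index across the field, pixel by pixel, is what makes on-the-go stress detection possible before visible symptoms appear.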

  20. Video Traffic Analysis for Abnormal Event Detection

    DOT National Transportation Integrated Search

    2010-01-01

    We propose the use of video imaging sensors for the detection and classification of abnormal events to be used primarily for mitigation of traffic congestion. Successful detection of such events will allow for new road guidelines; for rapid deploymen...

  2. A geometric level set model for ultrasounds analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarti, A.; Malladi, R.

    We propose a partial differential equation (PDE) for filtering and segmentation of echocardiographic images based on a geometric-driven scheme. The method allows edge-preserving image smoothing and a semi-automatic segmentation of the heart chambers that regularizes the shapes and improves edge fidelity, especially in the presence of distinct gaps in the edge map, as is common in ultrasound imagery. A numerical scheme for solving the proposed PDE is borrowed from level set methods. Results on human in vivo acquired 2D, 2D+time, 3D, and 3D+time echocardiographic images are shown.
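
    The edge-preserving behavior described above can be sketched as a curvature flow modulated by an edge-stopping function, in the spirit of geometric level set schemes. The following NumPy sketch is illustrative only; the stopping function g and all parameters are assumptions, not the authors' exact PDE:

```python
import numpy as np

def edge_stopping(grad_mag, k=0.2):
    # g -> 1 in flat regions, g -> 0 at strong edges (assumed form)
    return 1.0 / (1.0 + (grad_mag / k) ** 2)

def curvature_flow_step(I, dt=0.1, k=0.2, eps=1e-8):
    # Central-difference gradient (np.gradient returns d/drow, d/dcol)
    Iy, Ix = np.gradient(I)
    grad = np.sqrt(Ix ** 2 + Iy ** 2)
    # Unit normal field of the intensity level sets
    nx = Ix / (grad + eps)
    ny = Iy / (grad + eps)
    # Divergence of the unit normal = curvature of the level sets
    curv = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
    # Evolve: smooth level sets where g is large, stall at strong edges
    return I + dt * edge_stopping(grad, k) * grad * curv

def smooth(I, steps=20):
    for _ in range(steps):
        I = curvature_flow_step(I)
    return I
```

    Noise in flat regions is flattened by the curvature term, while the multiplicative g suppresses evolution across strong intensity edges.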

  3. Picosecond imaging of signal propagation in integrated circuits

    NASA Astrophysics Data System (ADS)

    Frohmann, Sven; Dietz, Enrico; Dittrich, Helmar; Hübers, Heinz-Wilhelm

    2017-04-01

    Optical analysis of integrated circuits (ICs) is a powerful tool for analyzing security functions implemented in an IC. We present a photon emission microscope for picosecond imaging of hot-carrier luminescence in ICs in the near-infrared spectral range from 900 to 1700 nm. It allows semi-invasive signal tracking in fully operational ICs at the gate or transistor level with a timing precision of approximately 6 ps. The capabilities of the microscope are demonstrated by imaging the operation of two ICs fabricated in 180 nm and 60 nm process technologies.

  4. Single-cell in vivo imaging of adult neural stem cells in the zebrafish telencephalon.

    PubMed

    Barbosa, Joana S; Di Giaimo, Rossella; Götz, Magdalena; Ninkovic, Jovica

    2016-08-01

    Adult neural stem cells (aNSCs) in zebrafish produce mature neurons throughout their entire life span, in both the intact and regenerating brain. An understanding of the behavior of aNSCs in their intact niche and during regeneration in vivo should facilitate the identification of the molecular mechanisms controlling regeneration-specific cellular events. A greater understanding of the process in regeneration-competent species may enable regeneration to be achieved in regeneration-incompetent species, including humans. Here we describe a protocol for labeling and repetitive imaging of aNSCs in vivo. We label single aNSCs by electroporation of a red-reporter plasmid in Tg(gfap:GFP)mi2001 transgenic fish expressing GFP in aNSCs, allowing unambiguous re-identification of single cells across repetitive imaging sessions. We image using two-photon microscopy through the thinned skull of anesthetized and immobilized fish. Our protocol allows imaging every 2 d for a period of up to 1 month. This methodology allowed visualization of aNSC behavior in vivo in its natural niche, in contrast to previously available technologies, which rely on imaging of either dissociated cells or tissue slices. We used this protocol to follow the mode of aNSC division, fate changes and cell death in both the intact and injured zebrafish telencephalon. This experimental setup can be widely used, with minimal prior experience, to assess key factors for processes that modulate aNSC behavior. A typical experiment with data analysis takes up to 1.5 months.

  5. Imaging patients with glaucoma using spectral-domain optical coherence tomography and optical microangiography

    NASA Astrophysics Data System (ADS)

    Auyeung, Kris; Auyeung, Kelsey; Kono, Rei; Chen, Chieh-Li; Zhang, Qinqin; Wang, Ruikang K.

    2015-03-01

    In ophthalmology, a reliable means of diagnosing glaucoma in its early stages is still an open issue. Past efforts to develop a potential biomarker for the disease, including forays into fluorescent angiography (FA) and early optical coherence tomography (OCT) systems, have been explored. However, this development has been hindered by the inability of current techniques to provide useful depth and microvasculature information of the optic nerve head (ONH), which have been debated as possible hallmarks of glaucoma progression. We reasoned that a system incorporating a spectral-domain OCT (SD-OCT) based Optical Microangiography (OMAG) system could allow an effective, non-invasive methodology to evaluate the effects of glaucoma on microvasculature. SD-OCT follows the principle of light reflection and interference to produce detailed cross-sectional and 3D images of the eye. OMAG produces imaging contrast via endogenous light scattering from moving particles, allowing 3D reconstruction of dynamic blood perfusion at capillary-level resolution. The purpose of this study was to investigate the optic cup perfusion (flow) differences in glaucomatous and normal eyes. Images from three normal and five glaucomatous subjects were analyzed with our OCT-based OMAG system to obtain blood perfusion and structural images, allowing for comparisons. Preliminary results from blood flow analysis revealed reduced blood perfusion within the whole-depth region encompassing the lamina cribrosa in glaucomatous cases as compared to normal ones. We conclude that our OCT-OMAG system may provide promise and viability for glaucoma screening.

  6. Neutron tomography of particulate filters: A non-destructive investigation tool for applied and industrial research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toops, Todd J.; Bilheux, Hassina Z.; Voisin, Sophie

    2013-08-19

    This research describes the development and implementation of high-fidelity neutron imaging and the associated analysis of the images. This advanced capability allows the non-destructive, non-invasive imaging of particulate filters (PFs) and of how the deposition of particulate and catalytic washcoat occurs within the filter. The majority of the efforts described here were performed at the High Flux Isotope Reactor (HFIR) CG-1D neutron imaging beamline at Oak Ridge National Laboratory; the current spatial resolution is approximately 50 μm. The sample holder is equipped with a high-precision rotation stage that allows 3D imaging (i.e., computed tomography) of the sample when combined with computerized reconstruction tools. What enables neutron-based imaging is the ability of some elements to absorb or scatter neutrons while other elements allow neutrons to pass through with negligible interaction. Of particular interest in this study is the scattering of neutrons by hydrogen-containing molecules, such as hydrocarbons (HCs) and/or water, which are adsorbed to the surface of soot, ash and catalytic washcoat. Even so, the interactions with this adsorbed water/HC are low, and computational techniques were required to enhance the contrast, primarily a modified simultaneous iterative reconstruction technique (SIRT). Lastly, this effort describes the following systems: particulate randomly distributed in a PF, ash deposition in PFs, a catalyzed washcoat layer in a PF, and three particulate loadings in a SiC PF.

  7. Fully automatic three-dimensional visualization of intravascular optical coherence tomography images: methods and feasibility in vivo

    PubMed Central

    Ughi, Giovanni J; Adriaenssens, Tom; Desmet, Walter; D’hooge, Jan

    2012-01-01

    Intravascular optical coherence tomography (IV-OCT) is an imaging modality that can be used for the assessment of intracoronary stents. Recent publications pointed to the fact that 3D visualizations have potential advantages compared to conventional 2D representations. However, 3D imaging still requires a time-consuming manual procedure not suitable for on-line application during coronary interventions. We propose an algorithm for rapid and fully automatic 3D visualization of IV-OCT pullbacks. IV-OCT images are first processed for the segmentation of the different structures; this also allows for automatic pullback calibration. Then, according to the segmentation results, different structures are depicted in different colors to visualize the vessel wall, the stent and the guide-wire in detail. Final 3D rendering results are obtained through the use of a commercial 3D DICOM viewer. Manual analysis was used as ground truth for the validation of the segmentation algorithms. A correlation value of 0.99 and good limits of agreement (Bland-Altman statistics) were found over 250 images randomly extracted from 25 in vivo pullbacks. Moreover, 3D rendering was compared to angiography, to pictures of deployed stents made available by the manufacturers, and to conventional 2D imaging, corroborating the visualization results. Computational time for visualization of an entire data set was ~74 s. The proposed method allows for the on-line use of 3D IV-OCT during percutaneous coronary interventions, potentially allowing treatment optimization. PMID:23243578

  8. Performance of a gaseous detector based energy dispersive X-ray fluorescence imaging system: Analysis of human teeth treated with dental amalgam

    NASA Astrophysics Data System (ADS)

    Silva, A. L. M.; Figueroa, R.; Jaramillo, A.; Carvalho, M. L.; Veloso, J. F. C. A.

    2013-08-01

    Energy dispersive X-ray fluorescence (EDXRF) imaging systems are of great interest in many application areas, since they provide images of the spatial elemental distribution in samples. The detector system used in this study is based on a micro-patterned gas detector, named the Micro-Hole and Strip Plate. The full-field-of-view system, with an active area of 28 × 28 mm2, presents some important features for EDXRF imaging applications, such as a position resolution below 125 μm, an intrinsic energy resolution of about 14% full width at half maximum for 5.9 keV X-rays, and a counting rate capability of 0.5 MHz. In this work, analysis of human teeth treated with dental amalgam was performed using the EDXRF imaging system mentioned above. The goal of the analysis is to evaluate the system's capabilities in the biomedical field by measuring the drift of the major constituents of a dental amalgam, Zn and Hg, throughout the tooth structures. The elemental distribution pattern of these elements obtained during the analysis suggests diffusion of these elements from the amalgam to tooth tissues.

  9. Autonomous stress imaging cores: from concept to reality

    NASA Astrophysics Data System (ADS)

    van der Velden, Stephen; Rajic, Nik; Brooks, Chris; Galea, Steve

    2016-04-01

    The historical reliance of thermoelastic stress analysis (TSA) on cooled infrared detection has created significant cost and practical impediments to the widespread use of this powerful full-field stress measurement technique. The emergence of low-cost microbolometers as a practical alternative has allowed an expansion of the traditional role of TSA, and raises the possibility that it may in future become a viable structural health monitoring modality. Experimental results are shown to confirm that high-resolution stress imagery can be obtained from an uncooled thermal camera core significantly smaller than any infrared imaging device previously applied to TSA. The paper provides a summary of progress toward the development of an autonomous stress-imaging capability based on this core.

  10. Data Analysis of the Gated-LEH X-Ray Imaging Diagnostic at the NIF

    NASA Astrophysics Data System (ADS)

    Thibodeau, Matthew; Chen, Hui

    2017-10-01

    The Gated Laser Entrance Hole (G-LEH) x-ray imaging diagnostic in use at the NIF offers a desirable combination of spatial and temporal resolution. By looking inside NIF hohlraums with time resolution, G-LEH measures target features including LEH size and capsule size. A framework is presented for automated and systematic analysis of G-LEH images that measures several physical parameters of interest and their evolution over time. The results from these analyses enable comparisons with hohlraum models and allow model validation of LEH closure velocity and the extent of capsule blow-off. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
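
    The closure-velocity extraction mentioned above amounts, at its simplest, to fitting the measured LEH radius against time. A minimal least-squares sketch (the function name and units are assumptions, not part of the actual G-LEH analysis framework):

```python
import numpy as np

def closure_velocity(t_ns, radius_um):
    """Estimate LEH closure velocity (um/ns) from a time series of
    measured radii via a linear least-squares fit.
    Positive when the hole is shrinking."""
    slope, _ = np.polyfit(t_ns, radius_um, 1)
    return -slope
```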

  11. Image-based red cell counting for wild animals blood.

    PubMed

    Mauricio, Claudio R M; Schneider, Fabio K; Dos Santos, Leonilda Correia

    2010-01-01

    An image-based red blood cell (RBC) automatic counting system is presented for wild animal blood analysis. Images with 2048×1536-pixel resolution, acquired on an optical microscope using Neubauer chambers, are used to evaluate RBC counting for three animal species (Leopardus pardalis, Cebus apella and Nasua nasua); the error found using the proposed method is similar to that obtained with the inter-observer visual counting method, i.e., around 10%. Smaller errors (e.g., 3%) can be obtained in regions with fewer grid artifacts. These promising results allow the use of the proposed method either as a fully automatic counting tool in laboratories for wild animal blood analysis or as a first counting stage in a semi-automatic counting tool.
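
    The core of such a counter is threshold-and-label with a size filter to reject grid artifacts and debris. A minimal sketch using SciPy's connected-component labeling (the threshold and minimum area are placeholders, not the paper's tuned values):

```python
import numpy as np
from scipy import ndimage as ndi

def count_cells(img, thresh=0.5, min_area=10):
    """Count bright blobs in a grayscale image scaled to [0, 1].

    Pixels above `thresh` are grouped into connected components; only
    components of at least `min_area` pixels are counted as cells.
    """
    mask = img > thresh
    labels, n = ndi.label(mask)
    if n == 0:
        return 0
    areas = ndi.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(areas >= min_area))
```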

  12. A statistical model for radar images of agricultural scenes

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.; Stiles, J. A.

    1982-01-01

    The presently derived and validated statistical model for radar images containing many different homogeneous fields predicts the probability density functions of radar images of entire agricultural scenes, thereby allowing histograms of large scenes composed of a variety of crops to be described. Seasat-A SAR images of agricultural scenes are accurately predicted by the model on the basis of three assumptions: each field has the same SNR, all target classes cover approximately the same area, and the true reflectivity characterizing each individual target class is a uniformly distributed random variable. The model is expected to be useful in the design of data processing algorithms and for scene analysis using radar images.

  13. Quantitative Analysis of Venus Radar Backscatter Data in ArcGIS

    NASA Technical Reports Server (NTRS)

    Long, S. M.; Grosfils, E. B.

    2005-01-01

    Ongoing mapping of the Ganiki Planitia (V14) quadrangle of Venus and definition of material units has involved an integrated but qualitative analysis of Magellan radar backscatter images and topography using standard geomorphological mapping techniques. However, such analyses do not take full advantage of the quantitative information contained within the images. Analysis of the backscatter coefficient allows a much more rigorous statistical comparison between mapped units, permitting first-order self-similarity tests of geographically separated materials assigned identical geomorphological labels. Such analyses cannot be performed directly on pixel (DN) values from Magellan backscatter images, because the pixels are scaled to the Muhleman law for radar echoes on Venus and are not corrected for latitudinal variations in incidence angle. Therefore, DN values must be converted, based on pixel latitude, back to their backscatter coefficient values before accurate statistical analysis can occur. Here we present a method for performing the conversions and analysis of Magellan backscatter data using commonly available ArcGIS software and illustrate the advantages of the process for geological mapping.
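
    The incidence-angle correction hinges on the Muhleman scattering law. The sketch below uses the constants commonly quoted for the Magellan-era Muhleman law, but the linear DN-to-dB scaling (scale and offset) is a placeholder assumption, not the actual Magellan product specification:

```python
import numpy as np

def muhleman(theta_deg):
    """Muhleman-law backscatter function for Venus at incidence angle
    theta (degrees), with the commonly quoted empirical constants."""
    t = np.radians(theta_deg)
    return 0.0118 * np.cos(t) / (np.sin(t) + 0.111 * np.cos(t)) ** 3

def dn_to_sigma0_db(dn, theta_deg, scale=0.2, offset=-20.0):
    """Convert a pixel DN to an absolute backscatter coefficient in dB.

    Assumes (hypothetically) that DN encodes backscatter in dB relative
    to the Muhleman law via a linear scale/offset; the incidence-angle
    dependence is restored by adding back the Muhleman-law value.
    """
    rel_db = dn * scale + offset
    return rel_db + 10.0 * np.log10(muhleman(theta_deg))
```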

  14. Wavelength feature mapping as a proxy to mineral chemistry for investigating geologic systems: An example from the Rodalquilar epithermal system

    NASA Astrophysics Data System (ADS)

    van der Meer, Freek; Kopačková, Veronika; Koucká, Lucie; van der Werff, Harald M. A.; van Ruitenbeek, Frank J. A.; Bakker, Wim H.

    2018-02-01

    The final product of a geologic remote sensing data analysis using multispectral and hyperspectral images is a mineral (abundance) map. Multispectral data, such as ASTER, Landsat, SPOT and Sentinel-2, typically allow qualitative estimates of which minerals are present in a pixel, while hyperspectral data allow these estimates to be quantified. Most image classification or spectral processing approaches require endmembers as input. An alternative approach to classification is to derive absorption-feature characteristics from hyperspectral data, such as the wavelength position of the deepest absorption, the depth of the absorption, and the symmetry of the absorption feature. Two approaches are presented, tested and compared in this paper: the 'Wavelength Mapper' and the 'QuanTools'. Although these algorithms use different mathematical solutions to derive absorption-feature wavelength and depth, and different image post-processing, the results are consistent, comparable and reproducible. The wavelength images can be directly linked to mineral type and abundance, but more importantly also to mineral chemical composition and subtle changes thereof. This in turn allows hyperspectral data to be interpreted in terms of mineral chemistry changes, which are a proxy for the pressure and temperature of mineral formation. We show the case of the Rodalquilar epithermal system of the southern Spanish Cabo de Gata volcanic area using HyMAP airborne hyperspectral images.
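
    The absorption-feature characteristics named above (wavelength position of the deepest absorption, its depth) can be extracted per pixel with continuum removal plus sub-band interpolation of the minimum. A simplified sketch (a straight-line continuum between the window endpoints and uniform band spacing are simplifying assumptions; neither tool's exact algorithm is reproduced here):

```python
import numpy as np

def min_wavelength_feature(wl, refl):
    """Return (position, depth) of the deepest absorption feature.

    wl, refl: 1-D arrays of band wavelengths and reflectances. The
    continuum is the straight line between the endpoints; the minimum
    position is refined by a parabola through the three deepest bands.
    """
    cont = np.interp(wl, [wl[0], wl[-1]], [refl[0], refl[-1]])
    cr = refl / cont                       # continuum-removed spectrum
    i = int(np.argmin(cr))
    depth = 1.0 - cr[i]
    if 0 < i < len(wl) - 1:
        y0, y1, y2 = cr[i - 1], cr[i], cr[i + 1]
        denom = y0 - 2.0 * y1 + y2
        shift = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
        pos = wl[i] + shift * (wl[i + 1] - wl[i])   # assumes uniform spacing
    else:
        pos = wl[i]
    return pos, depth
```

    Spatial maps of the interpolated position then make shifts of even a few nanometres visible, which is what links the wavelength images to mineral chemistry.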

  15. A novel model for ectopic, chronic, intravital multiphoton imaging of bone marrow vasculature and architecture in split femurs

    PubMed Central

    Bălan, Mirela; Kiefer, Friedemann

    2015-01-01

    Creating a model for intravital visualization of femoral bone marrow, a major site of hematopoiesis in adult mammalian organisms, poses a serious challenge, in that it needs to overcome bone opacity and the inaccessibility of marrow. Furthermore, meaningful analysis of bone marrow developmental and differentiation processes requires the repetitive observation of the same site over long periods of time, which we refer to as chronic imaging. To surmount these issues, we developed a chronic intravital imaging model that allows the observation of split femurs, ectopically transplanted into a dorsal skinfold chamber of a host mouse. Repeated, long-term observations are facilitated by multiphoton microscopy, an imaging technique that combines superior imaging capacity at greater tissue depth with low phototoxicity. The transplanted, ectopic femur was stabilized by its sterile environment and rapidly connected to the host vasculature, allowing further development and observation of extended processes. After optimizing transplant age and grafting procedure, we observed the development of new woven bone and maturation of secondary ossification centers in the transplanted femurs, preceded by the sprouting of a sinusoidal-like vascular network, which was almost entirely composed of femoral endothelial cells. After two weeks, the transplant was still populated with stromal and haematopoietic cells belonging to both donor and host. Over this time frame, the transplant partially retained myeloid progenitor cells with single- and multi-lineage differentiation capacity. In summary, our model allowed repeated intravital imaging of bone marrow angiogenesis and hematopoiesis. It represents a promising starting point for the development of improved chronic optical imaging models for femoral bone marrow. PMID:28243515

  16. SamuROI, a Python-Based Software Tool for Visualization and Analysis of Dynamic Time Series Imaging at Multiple Spatial Scales.

    PubMed

    Rueckl, Martin; Lenzi, Stephen C; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W

    2017-01-01

    The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging-based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales.
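
    The ROI-based trace extraction such a tool performs reduces, at its core, to averaging fluorescence over each ROI mask and normalizing to a baseline. A minimal ΔF/F sketch (the baseline choice and the names are assumptions, not SamuROI's API):

```python
import numpy as np

def roi_dff(stack, roi_masks, f0_frames=10):
    """Compute dF/F traces for a set of ROIs from an image time series.

    stack: (T, H, W) array; roi_masks: iterable of (H, W) boolean masks.
    The baseline F0 is the mean of the first `f0_frames` frames, one
    common (but not universal) choice.
    """
    traces = []
    for mask in roi_masks:
        f = stack[:, mask].mean(axis=1)   # mean fluorescence per frame
        f0 = f[:f0_frames].mean()
        traces.append((f - f0) / f0)
    return np.array(traces)               # shape (n_rois, T)
```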

  18. Hyperspectral imaging coupled with chemometric analysis for non-invasive differentiation of black pens

    NASA Astrophysics Data System (ADS)

    Chlebda, Damian K.; Majda, Alicja; Łojewski, Tomasz; Łojewska, Joanna

    2016-11-01

    Differentiation of written text can be performed with a non-invasive and non-contact tool that connects conventional imaging methods with spectroscopy. Hyperspectral imaging (HSI) is a relatively new and rapid analytical technique that can be applied in forensic science disciplines. It allows an image of the sample to be acquired with full spectral information within every pixel. For this paper, HSI and three statistical methods (hierarchical cluster analysis, principal component analysis, and spectral angle mapper) were used to distinguish between traces of modern black gel pen inks. Non-invasiveness and high efficiency are among the unquestionable advantages of ink differentiation using HSI. It is also less time-consuming than traditional methods such as chromatography. In this study, a set of 45 modern gel pen ink marks deposited on a paper sheet was recorded. The spectral characteristics embodied in every pixel were extracted from an image and analysed using statistical methods, externally and directly on the hypercube. As a result, different black gel inks deposited on paper can be distinguished and classified into several groups, in a non-invasive manner.
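
    Of the three statistical methods, the spectral angle mapper is the most compact to state: it scores each pixel by the angle between its spectrum and a reference spectrum, making it insensitive to overall brightness. A minimal sketch (names are illustrative):

```python
import numpy as np

def spectral_angle(pixels, ref):
    """Spectral angle (radians) between each pixel spectrum and a reference.

    pixels: (N, B) array of spectra; ref: (B,) reference spectrum.
    Small angles indicate similar spectral shape regardless of brightness.
    """
    num = pixels @ ref
    den = np.linalg.norm(pixels, axis=1) * np.linalg.norm(ref)
    cos = np.clip(num / den, -1.0, 1.0)
    return np.arccos(cos)
```

    A brighter or fainter copy of the same ink spectrum maps to the same (near-zero) angle, which is exactly the property that lets SAM group ink traces by composition rather than deposition thickness.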

  19. Modelling the degree of porosity of the ceramic surface intended for implants.

    PubMed

    Stach, Sebastian; Kędzia, Olga; Garczyk, Żaneta; Wróbel, Zygmunt

    2018-05-18

    The main goal of the study was to develop a model of the degree of surface porosity of a biomaterial intended for implants. The model was implemented in MATLAB. A computer simulation was carried out based on the developed model, which produced a two-dimensional image of the modelled surface. Then, an algorithm for computerized image analysis of the surface of the actual oxide bioceramic layer was developed, which enabled determination of its degree of porosity. To obtain confocal micrographs of several areas of the biomaterial, measurements were performed using the LEXT OLS4000 confocal laser microscope. The image analysis was carried out using MountainsMap Premium and SPIP. The results allowed determination of the input parameters of the program, on the basis of which porous biomaterial surface images were generated. The last part of the study involved verification of the developed model: the modelling method was tested by comparing the obtained results with the experimental data from the analysis of surface images of the test material.
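
    A stripped-down version of such a porosity model can be sketched by stamping random circular pores onto a binary surface until a target areal porosity is reached (an illustrative toy model, not the MATLAB implementation described above):

```python
import numpy as np

def simulate_porous_surface(shape, target_porosity, pore_radius, rng=None):
    """Generate a binary image of a porous surface by stamping random
    circular pores until the areal pore fraction reaches the target."""
    if rng is None:
        rng = np.random.default_rng()
    img = np.zeros(shape, dtype=bool)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    while img.mean() < target_porosity:
        cy = rng.integers(0, shape[0])
        cx = rng.integers(0, shape[1])
        img |= (yy - cy) ** 2 + (xx - cx) ** 2 <= pore_radius ** 2
    return img

def porosity(img):
    """Degree of porosity = fraction of pore pixels."""
    return img.mean()
```

    The same `porosity` measure applied to a thresholded micrograph is what the comparison between model and experiment rests on.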

  20. Cutting-edge analysis of extracellular microparticles using ImageStream(X) imaging flow cytometry.

    PubMed

    Headland, Sarah E; Jones, Hefin R; D'Sa, Adelina S V; Perretti, Mauro; Norling, Lucy V

    2014-06-10

    Interest in extracellular vesicle biology has exploded in the past decade, since these microstructures seem endowed with multiple roles, from blood coagulation to inter-cellular communication in pathophysiology. For microparticle research to evolve as a preclinical and clinical tool, accurate quantification of microparticle levels is a fundamental requirement, but their size and the complexity of sample fluids present major technical challenges. Flow cytometry is commonly used, but suffers from low sensitivity and accuracy. Use of the Amnis ImageStream(X) Mk II imaging flow cytometer afforded accurate analysis of calibration beads ranging from 1 μm down to 20 nm, and of microparticles, which could be observed and quantified in whole blood, platelet-rich and platelet-free plasma, and in leukocyte supernatants. Another advantage was the minimal sample preparation and volume required. Use of this high-throughput analyzer allowed simultaneous phenotypic definition of the parent cells and offspring microparticles along with real-time microparticle generation kinetics. With the current paucity of reliable techniques for the analysis of microparticles, we propose that the ImageStream(X) could be used effectively to advance this scientific field.

  1. Multispectral UV imaging for surface analysis of MUPS tablets with special focus on the pellet distribution.

    PubMed

    Novikova, Anna; Carstensen, Jens M; Rades, Thomas; Leopold, Claudia S

    2016-12-30

    In the present study the applicability of multispectral UV imaging in combination with multivariate image analysis for surface evaluation of MUPS tablets was investigated with respect to the differentiation of the API pellets from the excipients matrix, estimation of the drug content as well as pellet distribution, and influence of the coating material and tablet thickness on the predictive model. Different formulations consisting of coated drug pellets with two coating polymers (Aquacoat® ECD and Eudragit® NE 30 D) at three coating levels each were compressed to MUPS tablets with various amounts of coated pellets and different tablet thicknesses. The coated drug pellets were clearly distinguishable from the excipients matrix using a partial least squares approach regardless of the coating layer thickness and coating material used. Furthermore, the number of the detected drug pellets on the tablet surface allowed an estimation of the true drug content in the respective MUPS tablet. In addition, the pellet distribution in the MUPS formulations could be estimated by UV image analysis of the tablet surface. In conclusion, this study revealed that UV imaging in combination with multivariate image analysis is a promising approach for the automatic quality control of MUPS tablets during the manufacturing process. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Variable Threshold Method for Determining the Boundaries of Imaged Subvisible Particles.

    PubMed

    Cavicchi, Richard E; Collett, Cayla; Telikepalli, Srivalli; Hu, Zhishang; Carrier, Michael; Ripple, Dean C

    2017-06-01

    An accurate assessment of particle characteristics and concentrations in pharmaceutical products by flow imaging requires accurate particle sizing and morphological analysis. Analysis of images begins with the definition of particle boundaries. Commonly a single threshold defines the level for a pixel in the image to be included in the detection of particles, but depending on the threshold level, this results in either missing translucent particles or oversizing of less transparent particles due to the halos and gradients in intensity near the particle boundaries. We have developed an image analysis algorithm that sets the threshold for a particle based on the maximum gray value of that particle. We show that this results in tighter boundaries for particles with high contrast, while conserving the number of highly translucent particles detected. The method is implemented as a plugin for FIJI, an open-source image analysis software package. The method is tested on calibration beads in water and glycerol/water solutions, a suspension of microfabricated rods, and stir-stressed aggregates made from IgG. The result is that appropriate thresholds are automatically set for solutions with a range of particle properties, and the improved boundaries will allow for more accurate sizing results and potentially improved particle classification studies. Published by Elsevier Inc.
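
    The idea of tying each particle's threshold to its own maximum gray value can be sketched as a two-pass procedure: detect with a permissive global threshold, then re-threshold each detected particle at a fraction of its peak intensity. The fraction and helper names below are assumptions, not the published algorithm:

```python
import numpy as np
from scipy import ndimage as ndi

def variable_threshold_mask(img, detect_thresh, frac=0.5):
    """Per-particle thresholding of a grayscale image.

    Pass 1 finds candidate particles above a low global threshold;
    pass 2 keeps, within each particle, only pixels above `frac` times
    that particle's maximum gray value. High-contrast particles get
    tighter boundaries; faint ones are largely preserved.
    """
    labels, n = ndi.label(img > detect_thresh)
    out = np.zeros(img.shape, dtype=bool)
    for i in range(1, n + 1):
        region = labels == i
        local_t = max(frac * img[region].max(), detect_thresh)
        out |= region & (img >= local_t)
    return out
```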

  3. Direct tissue analysis by matrix-assisted laser desorption ionization mass spectrometry: application to kidney biology.

    PubMed

    Herring, Kristen D; Oppenheimer, Stacey R; Caprioli, Richard M

    2007-11-01

    Direct tissue analysis using matrix-assisted laser desorption ionization mass spectrometry (MALDI MS) provides in situ molecular analysis of a wide variety of biological molecules including xenobiotics. This technology allows measurement of these species in their native biological environment without the use of target-specific reagents such as antibodies. It can be used to profile discrete cellular regions and obtain region-specific images, providing information on the relative abundance and spatial distribution of proteins, peptides, lipids, and drugs. In this article, we report the sample preparation, MS data acquisition and analysis, and protein identification methodologies used in our laboratory for profiling/imaging MS and how this has been applied to kidney disease and toxicity.

  4. TLM-Tracker: software for cell segmentation, tracking and lineage analysis in time-lapse microscopy movies.

    PubMed

    Klein, Johannes; Leupold, Stefan; Biegler, Ilona; Biedendieck, Rebekka; Münch, Richard; Jahn, Dieter

    2012-09-01

    Time-lapse imaging in combination with fluorescence microscopy techniques enables the investigation of gene regulatory circuits and uncovers phenomena such as culture heterogeneity. In this context, computational image processing for the analysis of single-cell behaviour plays an increasing role in systems biology and mathematical modelling approaches. Consequently, we developed a software package with a graphical user interface for the analysis of single bacterial cell behaviour. The new software, TLM-Tracker, allows flexible and user-friendly segmentation, tracking and lineage analysis of microbial cells in time-lapse movies. The software package, including manual, tutorial video and examples, is available as Matlab code or executable binaries at http://www.tlmtracker.tu-bs.de.
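
    The tracking step of such a pipeline can be sketched as greedy nearest-centroid linking between consecutive frames. This is a toy version for illustration only; real trackers such as TLM-Tracker also handle cell division, appearance and disappearance.

```python
import numpy as np

def link_frames(cents_a, cents_b, max_dist=5.0):
    """Greedy nearest-centroid linking of cells between two consecutive
    frames.  Returns {index in frame A: index in frame B}."""
    links, used = {}, set()
    for i, ca in enumerate(cents_a):
        dists = [np.hypot(cb[0] - ca[0], cb[1] - ca[1]) for cb in cents_b]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist and j not in used:
            links[i] = j  # cell i in frame A continues as cell j in frame B
            used.add(j)
    return links
```

    Chaining these frame-to-frame links over a whole movie yields the cell lineages.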

  5. Big Data in Reciprocal Space: Sliding Fast Fourier Transforms for Determining Periodicity

    DOE PAGES

    Vasudevan, Rama K.; Belianinov, Alex; Gianfrancesco, Anthony G.; ...

    2015-03-03

    Significant advances in atomically resolved imaging of crystals and surfaces have occurred in the last decade, allowing unprecedented insight into local crystal structures and periodicity. Yet the analysis of long-range periodicity from local imaging data, critical to correlating functional properties and chemistry with the local crystallography, remains a challenge. Here, we introduce a Sliding Fast Fourier Transform (FFT) filter to analyze atomically resolved images of in-situ grown La5/8Ca3/8MnO3 films. We demonstrate the ability of the sliding FFT algorithm to differentiate two sub-lattices resulting from a mixed-terminated surface. Principal Component Analysis (PCA) and Independent Component Analysis (ICA) of the Sliding FFT dataset reveal the distinct changes in crystallography, step edges and boundaries between the multiple sub-lattices. The method is universal for images with any periodicity, and is especially amenable to atomically resolved probe and electron-microscopy data for rapid identification of the sub-lattices present.
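
    The core of a sliding FFT analysis can be sketched as follows: the magnitude spectrum of each window becomes one row of a feature matrix, on which PCA (here via SVD) then separates regions of different periodicity. A minimal NumPy sketch, with illustrative window sizes, not the authors' implementation:

```python
import numpy as np

def sliding_fft(img, win=16, step=8):
    """Sliding FFT: magnitude spectrum of each win x win window,
    flattened into one row of a feature matrix."""
    feats, pos = [], []
    for y in range(0, img.shape[0] - win + 1, step):
        for x in range(0, img.shape[1] - win + 1, step):
            patch = img[y:y + win, x:x + win]
            feats.append(np.abs(np.fft.fftshift(np.fft.fft2(patch))).ravel())
            pos.append((y, x))
    return np.array(feats), pos

def pca_scores(feats, k=2):
    """First k principal-component scores via SVD of the centred data."""
    centred = feats - feats.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:k].T
```

    Mapping the component scores back to the window positions `pos` produces the spatial maps of sub-lattices described above.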

  6. Big Data in Reciprocal Space: Sliding Fast Fourier Transforms for Determining Periodicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasudevan, Rama K.; Belianinov, Alex; Gianfrancesco, Anthony G.

    Significant advances in atomically resolved imaging of crystals and surfaces have occurred in the last decade, allowing unprecedented insight into local crystal structures and periodicity. Yet the analysis of long-range periodicity from local imaging data, critical to correlating functional properties and chemistry with the local crystallography, remains a challenge. Here, we introduce a Sliding Fast Fourier Transform (FFT) filter to analyze atomically resolved images of in-situ grown La5/8Ca3/8MnO3 films. We demonstrate the ability of the sliding FFT algorithm to differentiate two sub-lattices resulting from a mixed-terminated surface. Principal Component Analysis (PCA) and Independent Component Analysis (ICA) of the Sliding FFT dataset reveal the distinct changes in crystallography, step edges and boundaries between the multiple sub-lattices. The method is universal for images with any periodicity, and is especially amenable to atomically resolved probe and electron-microscopy data for rapid identification of the sub-lattices present.

  7. DNA origami-based shape IDs for single-molecule nanomechanical genotyping

    NASA Astrophysics Data System (ADS)

    Zhang, Honglu; Chao, Jie; Pan, Dun; Liu, Huajie; Qiang, Yu; Liu, Ke; Cui, Chengjun; Chen, Jianhua; Huang, Qing; Hu, Jun; Wang, Lianhui; Huang, Wei; Shi, Yongyong; Fan, Chunhai

    2017-04-01

    Variations on DNA sequences profoundly affect how we develop diseases and respond to pathogens and drugs. Atomic force microscopy (AFM) provides a nanomechanical imaging approach for genetic analysis with nanometre resolution. However, unlike fluorescence imaging that has wavelength-specific fluorophores, the lack of shape-specific labels largely hampers widespread applications of AFM imaging. Here we report the development of a set of differentially shaped, highly hybridizable self-assembled DNA origami nanostructures serving as shape IDs for magnified nanomechanical imaging of single-nucleotide polymorphisms. Using these origami shape IDs, we directly genotype single molecules of human genomic DNA with an ultrahigh resolution of ~10 nm and the multiplexing ability. Further, we determine three types of disease-associated, long-range haplotypes in samples from the Han Chinese population. Single-molecule analysis allows robust haplotyping even for samples with low labelling efficiency. We expect this generic shape ID-based nanomechanical approach to hold great potential in genetic analysis at the single-molecule level.

  8. DNA origami-based shape IDs for single-molecule nanomechanical genotyping

    PubMed Central

    Zhang, Honglu; Chao, Jie; Pan, Dun; Liu, Huajie; Qiang, Yu; Liu, Ke; Cui, Chengjun; Chen, Jianhua; Huang, Qing; Hu, Jun; Wang, Lianhui; Huang, Wei; Shi, Yongyong; Fan, Chunhai

    2017-01-01

    Variations on DNA sequences profoundly affect how we develop diseases and respond to pathogens and drugs. Atomic force microscopy (AFM) provides a nanomechanical imaging approach for genetic analysis with nanometre resolution. However, unlike fluorescence imaging that has wavelength-specific fluorophores, the lack of shape-specific labels largely hampers widespread applications of AFM imaging. Here we report the development of a set of differentially shaped, highly hybridizable self-assembled DNA origami nanostructures serving as shape IDs for magnified nanomechanical imaging of single-nucleotide polymorphisms. Using these origami shape IDs, we directly genotype single molecules of human genomic DNA with an ultrahigh resolution of ∼10 nm and the multiplexing ability. Further, we determine three types of disease-associated, long-range haplotypes in samples from the Han Chinese population. Single-molecule analysis allows robust haplotyping even for samples with low labelling efficiency. We expect this generic shape ID-based nanomechanical approach to hold great potential in genetic analysis at the single-molecule level. PMID:28382928

  9. Development of a Time Domain Fluorimeter for Fluorescent Lifetime Multiplexing Analysis

    PubMed Central

    Weissleder, Ralph; Mahmood, Umar

    2009-01-01

    A portable, inexpensive, USB-powered time domain fluorimeter (TDF) and accompanying analysis scheme were developed for evaluating a new class of fluorescent lifetime-multiplexed dyes. Fluorescent proteins, organic dyes, and quantum dots allow the labeling of ever more individual features within biological systems, but the wide absorption and emission spectra of these fluorophores limit the number of distinct processes that may be simultaneously imaged using spectral separation alone. By additionally separating reporters in a second dimension, fluorescent lifetime multiplexing provides a means to multiply the number of available imaging channels. PMID:19830273
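
    The lifetime analysis underlying such multiplexing can be sketched as fitting a single-exponential decay and assigning the nearest nominal lifetime. This is a simplified illustration (real TDF data need background and instrument-response handling), and the dye names are hypothetical.

```python
import numpy as np

def estimate_lifetime(t, counts):
    """Single-exponential lifetime from a time-domain decay
    I(t) = A exp(-t / tau), via a log-linear least-squares fit."""
    slope, _intercept = np.polyfit(t, np.log(counts), 1)
    return -1.0 / slope

def assign_dye(tau, nominal):
    """Lifetime multiplexing step: pick the dye whose nominal
    lifetime is closest to the fitted tau."""
    return min(nominal, key=lambda name: abs(nominal[name] - tau))
```

    Two dyes with overlapping emission spectra but distinct lifetimes thus occupy separate channels in the lifetime dimension.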

  10. The Open Microscopy Environment: open image informatics for the biological sciences

    NASA Astrophysics Data System (ADS)

    Blackburn, Colin; Allan, Chris; Besson, Sébastien; Burel, Jean-Marie; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gault, David; Gillen, Kenneth; Leigh, Roger; Leo, Simone; Li, Simon; Lindner, Dominik; Linkert, Melissa; Moore, Josh; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Swedlow, Jason R.

    2016-07-01

    Despite significant advances in biological imaging and analysis, major informatics challenges remain unsolved: file formats are proprietary, storage and analysis facilities are lacking, as are standards for sharing image data and results. While the open FITS file format is ubiquitous in astronomy, astronomical imaging shares many challenges with biological imaging, including the need to share large image sets using secure, cross-platform APIs, and the need for scalable applications for processing and visualization. The Open Microscopy Environment (OME) is an open-source software framework developed to address these challenges. OME tools include: an open data model for multidimensional imaging (OME Data Model); an open file format (OME-TIFF) and library (Bio-Formats) enabling free access to images (5D+) written in more than 145 formats from many imaging domains, including FITS; and a data management server (OMERO). The Java-based OMERO client-server platform comprises an image metadata store, an image repository, visualization and analysis by remote access, allowing sharing and publishing of image data. OMERO provides a means to manage the data through a multi-platform API. OMERO's model-based architecture has enabled its extension into a range of imaging domains, including light and electron microscopy, high content screening, digital pathology and recently into applications using non-image data from clinical and genomic studies. This is made possible using the Bio-Formats library. The current release includes a single mechanism for accessing image data of all types, regardless of original file format, via Java, C/C++ and Python and a variety of applications and environments (e.g. ImageJ, Matlab and R).

  11. Covariance of lucky images for increasing objects contrast: diffraction-limited images in ground-based telescopes

    NASA Astrophysics Data System (ADS)

    Cagigal, Manuel P.; Valle, Pedro J.; Colodro-Conde, Carlos; Villó-Pérez, Isidro; Pérez-Garrido, Antonio

    2016-01-01

    Images of stars adopt shapes far from the ideal Airy pattern due to atmospheric density fluctuations. Hence, diffraction-limited images can only be achieved by telescopes without atmospheric influence, e.g. space telescopes, or by using techniques like adaptive optics or lucky imaging. In this paper, we propose a new computational technique based on the evaluation of the COvariancE of Lucky Images (COELI). This technique allows us to discover companions to main stars by taking advantage of the atmospheric fluctuations. We describe the algorithm and carry out a theoretical analysis of the improvement in contrast. We have used images taken with the 2.2-m Calar Alto telescope as a test bed for the technique, finding that, under certain conditions, the telescope's diffraction limit is clearly reached.
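
    The covariance computation at the heart of such a technique can be sketched as follows: across a stack of short exposures, a faint companion flickers in step with the star, so its pixel covaries with the brightest (reference) pixel while uncorrelated background averages out. A minimal sketch of the idea, not the published algorithm:

```python
import numpy as np

def covariance_map(stack):
    """Covariance of every pixel's intensity time series with the
    series of the brightest pixel, over a (frames, H, W) stack."""
    mean_img = stack.mean(axis=0)
    ry, rx = np.unravel_index(np.argmax(mean_img), mean_img.shape)
    ref = stack[:, ry, rx] - mean_img[ry, rx]     # demeaned reference series
    # Covariance per pixel: mean over frames of ref * (pixel - its mean).
    return np.tensordot(ref, stack - mean_img, axes=(0, 0)) / stack.shape[0]
```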

  12. Optically gated beating-heart imaging

    PubMed Central

    Taylor, Jonathan M.

    2014-01-01

    The constant motion of the beating heart presents an obstacle to clear optical imaging, especially 3D imaging, in small animals where direct optical imaging would otherwise be possible. Gating techniques exploit the periodic motion of the heart to computationally “freeze” this movement and overcome motion artifacts. Optically gated imaging represents a recent development of this, where image analysis is used to synchronize acquisition with the heartbeat in a completely non-invasive manner. This article will explain the concept of optical gating and discuss a range of implementation strategies, together with their strengths and weaknesses. Finally, we will illustrate the usefulness of the technique by discussing applications where optical gating has facilitated novel biological findings by allowing 3D in vivo imaging of cardiac myocytes in their natural environment of the beating heart. PMID:25566083
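
    The image-analysis core of optical gating can be sketched as assigning each incoming frame a heartbeat phase by matching it against a reference sequence spanning one cardiac cycle. A minimal sum-of-squared-differences sketch of the idea:

```python
import numpy as np

def assign_phase(frame, reference_stack):
    """Match a frame against a reference heartbeat sequence by minimum
    sum of squared differences; return the phase in [0, 1)."""
    scores = [np.sum((frame - ref) ** 2) for ref in reference_stack]
    return np.argmin(scores) / len(reference_stack)
```

    Triggering acquisition (or retrospectively sorting frames) at a fixed phase then computationally "freezes" the heart.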

  13. Optimization of a Biometric System Based on Acoustic Images

    PubMed Central

    Izquierdo Fuente, Alberto; Del Val Puente, Lara; Villacorta Calvo, Juan J.; Raboso Mateos, Mariano

    2014-01-01

    On the basis of an acoustic biometric system that captures 16 acoustic images of a person for 4 frequencies and 4 positions, a study was carried out to improve the performance of the system. In a first stage, an analysis was carried out to determine which images provide more information to the system, showing that a set of 12 images allows the system to obtain results equivalent to using all 16 images. Finally, optimization techniques were used to obtain the set of weights associated with each acoustic image that maximizes the performance of the biometric system. These results significantly improve the performance of the preliminary system, while reducing the acquisition time and computational burden, since the number of acoustic images was reduced. PMID:24616643

  14. Three-dimensional micro-scale strain mapping in living biological soft tissues.

    PubMed

    Moo, Eng Kuan; Sibole, Scott C; Han, Sang Kuy; Herzog, Walter

    2018-04-01

    Non-invasive characterization of the mechanical micro-environment surrounding cells in biological tissues at multiple length scales is important for the understanding of the role of mechanics in regulating the biosynthesis and phenotype of cells. However, there is a lack of imaging methods that allow for characterization of the cell micro-environment in three-dimensional (3D) space. The aims of this study were (i) to develop a multi-photon laser microscopy protocol capable of imprinting 3D grid lines onto living tissue at a high spatial resolution, and (ii) to develop image processing software capable of analyzing the resulting microscopic images and performing high resolution 3D strain analyses. Using articular cartilage as the biological tissue of interest, we present a novel two-photon excitation imaging technique for measuring the internal 3D kinematics in intact cartilage at sub-micrometer resolution, spanning length scales from the tissue to the cell level. Using custom image processing software, we provide accurate and robust 3D micro-strain analysis that allows for detailed qualitative and quantitative assessment of the 3D tissue kinematics. This novel technique preserves tissue structural integrity post-scanning, therefore allowing for multiple strain measurements at different time points in the same specimen. The proposed technique is versatile and opens doors for experimental and theoretical investigations on the relationship between tissue deformation and cell biosynthesis. Studies of this nature may enhance our understanding of the mechanisms underlying cell mechano-transduction, and thus, adaptation and degeneration of soft connective tissues. We presented a novel two-photon excitation imaging technique for measuring the internal 3D kinematics in intact cartilage at sub-micrometer resolution, spanning from tissue length scale to cellular length scale. 
Using custom image processing software (lsmgridtrack), we provide accurate and robust micro-strain analysis that allows for detailed qualitative and quantitative assessment of the 3D tissue kinematics. The approach presented here can also be applied to other biological tissues, such as the meniscus and annulus fibrosus, as well as tissue-engineered constructs, for the characterization of their mechanical properties. This imaging technique opens doors for experimental and theoretical investigations of the relationship between tissue deformation and cell biosynthesis. Studies of this nature may enhance our understanding of the mechanisms underlying cell mechano-transduction, and thus, adaptation and degeneration of soft connective tissues. Copyright © 2018 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
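
    The strain computation underlying such grid-based analysis can be sketched as fitting a deformation gradient to matched grid points and forming the Green-Lagrange strain tensor. A minimal sketch of the kinematics, not the lsmgridtrack implementation:

```python
import numpy as np

def deformation_gradient(ref_pts, def_pts):
    """Least-squares fit of a homogeneous deformation x = F X + c to
    matched grid points (rows are points; works in 2D or 3D)."""
    X = ref_pts - ref_pts.mean(axis=0)
    x = def_pts - def_pts.mean(axis=0)
    FT, *_ = np.linalg.lstsq(X, x, rcond=None)  # solves X @ F.T = x
    return FT.T

def green_lagrange_strain(F):
    """Green-Lagrange strain tensor E = (F^T F - I) / 2."""
    return 0.5 * (F.T @ F - np.eye(F.shape[0]))
```

    Applying the fit to small neighbourhoods of photobleached grid intersections yields a local 3D strain field across the tissue.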

  15. Physical vs. photolithographic patterning of plasma polymers: an investigation by ToF-SSIMS and multivariate analysis

    PubMed Central

    Mishra, Gautam; Easton, Christopher D.; McArthur, Sally L.

    2009-01-01

    Physical and photolithographic techniques are commonly used to create chemical patterns for a range of technologies including cell culture studies, bioarrays and other biomedical applications. In this paper, we describe the fabrication of chemical micropatterns from commonly used plasma polymers. Atomic force microscopy (AFM) imaging, Time-of-Flight Static Secondary Ion Mass Spectrometry (ToF-SSIMS) imaging and multivariate analysis have been employed to visualize the chemical boundaries created by these patterning techniques and assess the spatial and chemical resolution of the patterns. ToF-SSIMS analysis demonstrated that well-defined chemical and spatial boundaries were obtained from photolithographic patterning, while the resolution of physical patterning via a transmission electron microscopy (TEM) grid varied depending on the properties of the plasma system, including the substrate material. In general, physical masking allowed diffusion of the plasma species below the mask and bleeding of the surface chemistries. Multivariate analysis techniques including Principal Component Analysis (PCA) and Region of Interest (ROI) assessment were used to investigate the ToF-SSIMS images of a range of different plasma polymer patterns. In the most challenging case, where two strongly reacting polymers, allylamine and acrylic acid, were deposited, PCA confirmed the fabrication of micropatterns with defined spatial resolution. ROI analysis allowed for the identification of an interface between the two plasma polymers for patterns fabricated using the photolithographic technique, which had previously been overlooked. This study clearly demonstrated the versatility of photolithographic patterning for the production of multichemistry plasma polymer arrays and highlighted the need for complementary characterization and analytical techniques during the fabrication of plasma polymer micropatterns. PMID:19950941

  16. Features and limitations of mobile tablet devices for viewing radiological images.

    PubMed

    Grunert, J H

    2015-03-01

    Mobile radiological image display systems are becoming increasingly common, necessitating a comparison of the features of these systems, specifically the operating system employed, the connection to stationary PACS, data security, and the range of image display and image analysis functions. In the fall of 2013, a total of 17 PACS suppliers were surveyed regarding the technical features of 18 mobile radiological image display systems using a standardized questionnaire. The study also examined to what extent the technical specifications of the mobile image display systems satisfy the provisions of the German Medical Devices Act as well as the provisions of the German X-ray ordinance (RöV). There are clear differences in terms of how the mobile systems connect to the stationary PACS. Web-based solutions allow the mobile image display systems to function independently of their operating systems. The examined systems differed very little in terms of image display and image analysis functions. Mobile image display systems complement stationary PACS and can be used to view images. The impacts of the new quality assurance guidelines (QS-RL) as well as the upcoming new standard DIN 6868-157 on the acceptance testing of mobile image display units for the purpose of image evaluation are discussed. © Georg Thieme Verlag KG Stuttgart · New York.

  17. Automated vessel segmentation using cross-correlation and pooled covariance matrix analysis.

    PubMed

    Du, Jiang; Karimi, Afshin; Wu, Yijing; Korosec, Frank R; Grist, Thomas M; Mistretta, Charles A

    2011-04-01

    Time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA) provides contrast dynamics in the vasculature and allows vessel segmentation based on temporal correlation analysis. Here we present an automated vessel segmentation algorithm including automated generation of regions of interest (ROIs), cross-correlation and pooled sample covariance matrix analysis. The dynamic images are divided into multiple equal-sized regions. In each region, ROIs for artery, vein and background are generated using an iterative thresholding algorithm based on the contrast arrival time map and contrast enhancement map. Region-specific multi-feature cross-correlation analysis and pooled covariance matrix analysis are performed to calculate the Mahalanobis distances (MDs), which are used to automatically separate arteries from veins. This segmentation algorithm is applied to a dual-phase dynamic imaging acquisition scheme where low-resolution time-resolved images are acquired during the dynamic phase followed by high-frequency data acquisition at the steady-state phase. The segmented low-resolution arterial and venous images are then combined with the high-frequency data in k-space and inverse Fourier transformed to form the final segmented arterial and venous images. Results from volunteer and patient studies demonstrate the advantages of this automated vessel segmentation and dual phase data acquisition technique. Copyright © 2011 Elsevier Inc. All rights reserved.
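
    The artery/vein separation step can be sketched as a Mahalanobis-distance classifier with a pooled sample covariance, as the abstract describes. A minimal sketch on generic feature vectors (e.g. correlations with arterial and venous reference curves), not the authors' full pipeline:

```python
import numpy as np

def mahalanobis_classify(feats, artery_feats, vein_feats):
    """Classify feature vectors as artery (True) or vein (False) by the
    smaller Mahalanobis distance under a pooled sample covariance."""
    mu_a = artery_feats.mean(axis=0)
    mu_v = vein_feats.mean(axis=0)
    na, nv = len(artery_feats), len(vein_feats)
    pooled = ((na - 1) * np.cov(artery_feats.T) +
              (nv - 1) * np.cov(vein_feats.T)) / (na + nv - 2)
    inv = np.linalg.inv(np.atleast_2d(pooled))

    def md2(x, mu):
        d = x - mu
        return d @ inv @ d  # squared Mahalanobis distance

    return np.array([md2(x, mu_a) < md2(x, mu_v) for x in feats])
```

    The training vectors come from the automatically generated artery and vein ROIs; every remaining vessel voxel is then labelled by its smaller distance.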

  18. Ganalyzer: A Tool for Automatic Galaxy Image Analysis

    NASA Astrophysics Data System (ADS)

    Shamir, Lior

    2011-08-01

    We describe Ganalyzer, a model-based tool that can automatically analyze and classify galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ~10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large data sets of galaxy images collected by autonomous sky surveys such as SDSS, LSST, or DES. The software is available for free download at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer, and the data used in the experiment are available at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer/GalaxyImages.zip.
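
    The core of a radial intensity analysis like Ganalyzer's can be sketched as follows: for each polar angle around the galaxy centre, record the radius at which intensity peaks; in a spiral galaxy these radii drift with angle, and the slope of that drift measures spirality, while for a ring or elliptical they stay flat. A simplified sketch, not the published implementation:

```python
import numpy as np

def radial_peak_positions(img, cx, cy, r_max, n_angles=360):
    """For each polar angle around (cx, cy), the radius of peak
    intensity -- one column of a radial intensity plot."""
    radii = np.arange(1, r_max)
    peaks = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        xs = np.clip(np.round(cx + radii * np.cos(theta)).astype(int),
                     0, img.shape[1] - 1)
        ys = np.clip(np.round(cy + radii * np.sin(theta)).astype(int),
                     0, img.shape[0] - 1)
        peaks.append(radii[np.argmax(img[ys, xs])])
    return np.array(peaks)
```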

  19. Structure and properties of clinical coralline implants measured via 3D imaging and analysis.

    PubMed

    Knackstedt, Mark Alexander; Arns, Christoph H; Senden, Tim J; Gross, Karlis

    2006-05-01

    The development and design of advanced porous materials for biomedical applications requires a thorough understanding of how material structure impacts on mechanical and transport properties. This paper illustrates a 3D imaging and analysis study of two clinically proven coral bone graft samples (Porites and Goniopora). Images are obtained from X-ray micro-computed tomography (micro-CT) at a resolution of 16.8 μm. A visual comparison of the two images shows very different structure; Porites has a homogeneous structure and consistent pore size while Goniopora has a bimodal pore size and a strongly disordered structure. A number of 3D structural characteristics are measured directly on the images including pore volume-to-surface-area, pore and solid size distributions, chord length measurements and tortuosity. Computational results made directly on the digitized tomographic images are presented for the permeability, diffusivity and elastic modulus of the coral samples. The results allow one to quantify differences between the two samples. 3D digital analysis can provide a more thorough assessment of biomaterial structure including the pore wall thickness, local flow, mechanical properties and diffusion pathways. We discuss the implications of these results to the development of optimal scaffold design for tissue ingrowth.
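
    Simple structural measures such as porosity and pore volume-to-surface-area can be computed directly from a binary micro-CT image. A crude voxel-counting sketch (faces on the array border are ignored for brevity):

```python
import numpy as np

def porosity(pore):
    """Pore volume fraction of a binary 3D image (True = pore)."""
    return pore.mean()

def pore_surface_faces(pore):
    """Crude surface-area estimate: count interior pore/solid voxel
    interfaces along each axis."""
    faces = 0
    for axis in range(pore.ndim):
        faces += np.abs(np.diff(pore.astype(np.int8), axis=axis)).sum()
    return int(faces)
```

    The ratio `pore.sum() / pore_surface_faces(pore)` then gives a voxel-based volume-to-surface measure comparable between samples imaged at the same resolution.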

  20. microMS: A Python Platform for Image-Guided Mass Spectrometry Profiling

    NASA Astrophysics Data System (ADS)

    Comi, Troy J.; Neumann, Elizabeth K.; Do, Thanh D.; Sweedler, Jonathan V.

    2017-09-01

    Image-guided mass spectrometry (MS) profiling provides a facile framework for analyzing samples ranging from single cells to tissue sections. The fundamental workflow utilizes a whole-slide microscopy image to select targets of interest, determine their spatial locations, and subsequently perform MS analysis at those locations. Improving upon prior reported methodology, a software package was developed for working with microscopy images. microMS, for microscopy-guided mass spectrometry, allows the user to select and profile diverse samples using a variety of target patterns and mass analyzers. Written in Python, the program provides an intuitive graphical user interface to simplify image-guided MS for novice users. The class hierarchy of instrument interactions permits integration of new MS systems while retaining the feature-rich image analysis framework. microMS is a versatile platform for performing targeted profiling experiments using a series of mass spectrometers. The flexibility in mass analyzers greatly simplifies serial analyses of the same targets by different instruments. The current capabilities of microMS are presented, and its application for off-line analysis of single cells on three distinct instruments is demonstrated. The software has been made freely available for research purposes.
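
    A key step in such a workflow is mapping target locations from microscopy pixel coordinates to mass-spectrometer stage coordinates. This can be sketched as a least-squares affine registration on fiducial pairs; a generic illustration, not the microMS implementation:

```python
import numpy as np

def fit_affine(pixel_pts, stage_pts):
    """Least-squares affine map from pixel to stage coordinates,
    fitted on matched fiducial pairs (rows are points)."""
    A = np.hstack([pixel_pts, np.ones((len(pixel_pts), 1))])
    M, *_ = np.linalg.lstsq(A, stage_pts, rcond=None)
    return M  # shape (3, 2); apply as [x, y, 1] @ M

def to_stage(M, pts):
    """Map pixel targets through the fitted affine transform."""
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M
```

    Once fitted, every target selected on the whole-slide image can be translated into a stage position for automated MS acquisition.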

  1. microMS: A Python Platform for Image-Guided Mass Spectrometry Profiling.

    PubMed

    Comi, Troy J; Neumann, Elizabeth K; Do, Thanh D; Sweedler, Jonathan V

    2017-09-01

    Image-guided mass spectrometry (MS) profiling provides a facile framework for analyzing samples ranging from single cells to tissue sections. The fundamental workflow utilizes a whole-slide microscopy image to select targets of interest, determine their spatial locations, and subsequently perform MS analysis at those locations. Improving upon prior reported methodology, a software package was developed for working with microscopy images. microMS, for microscopy-guided mass spectrometry, allows the user to select and profile diverse samples using a variety of target patterns and mass analyzers. Written in Python, the program provides an intuitive graphical user interface to simplify image-guided MS for novice users. The class hierarchy of instrument interactions permits integration of new MS systems while retaining the feature-rich image analysis framework. microMS is a versatile platform for performing targeted profiling experiments using a series of mass spectrometers. The flexibility in mass analyzers greatly simplifies serial analyses of the same targets by different instruments. The current capabilities of microMS are presented, and its application for off-line analysis of single cells on three distinct instruments is demonstrated. The software has been made freely available for research purposes.

  2. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies.

    PubMed

    Atkinson, Jonathan A; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E; Griffiths, Marcus; Wells, Darren M

    2017-10-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. © The Authors 2017. Published by Oxford University Press.
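
    The strategy described above can be sketched in a few lines, assuming scikit-learn is available: train a Random Forest to map cheap, automatically extracted image descriptors to a trait measured on a labelled subset, then predict the trait for the rest of the dataset. The descriptor and trait values below are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
descriptors = rng.uniform(0.0, 1.0, size=(200, 5))       # automatic image descriptors
trait = 10.0 * descriptors[:, 0] + descriptors[:, 1]     # toy architectural trait

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(descriptors[:150], trait[:150])   # semi-automated (labelled) subset
predicted = model.predict(descriptors[150:])  # remainder of the dataset
```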

  3. Spectral Properties and Dynamics of Gold Nanorods Revealed by EMCCD Based Spectral-Phasor Method

    PubMed Central

    Chen, Hongtao; Digman, Michelle A.

    2015-01-01

    Gold nanorods (NRs) with tunable plasmon-resonant absorption in the near-infrared region have considerable advantages over organic fluorophores as imaging agents. However, the luminescence spectral properties of NRs have not been fully explored at the single-particle level in bulk due to the lack of proper analytical tools. Here we present a global spectral phasor analysis method that allows investigation of NRs' spectra at the single-particle level, together with their statistical behavior and spatial information, during imaging. The wide phasor distribution obtained by the spectral phasor analysis indicates that the spectra of NRs differ from particle to particle. NRs with different spectra can be identified graphically in the corresponding spatial images with high spectral resolution. Furthermore, the spectral behavior of NRs under different imaging conditions, e.g. different excitation powers and wavelengths, was carefully examined by our laser-scanning multiphoton microscope with spectral imaging capability. Our results prove that the spectral phasor method is an easy and efficient tool in hyper-spectral imaging analysis to unravel subtle changes in the emission spectrum. Moreover, we applied this method to study the spectral dynamics of NRs during direct optical trapping and optothermal trapping. Interestingly, spectral shifts were observed in both trapping phenomena. PMID:25684346
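
    The spectral phasor transform itself is compact: each spectrum maps to a point (G, S) given by the real and imaginary parts of its first Fourier harmonic, normalised by total intensity. A minimal sketch of that transform:

```python
import numpy as np

def spectral_phasor(spectrum):
    """Spectral-phasor coordinates (G, S) of an emission spectrum:
    first Fourier harmonic over the spectral channels, normalised
    by total intensity."""
    spectrum = np.asarray(spectrum, dtype=float)
    n = len(spectrum)
    w = 2.0 * np.pi * np.arange(n) / n
    total = spectrum.sum()
    return ((spectrum * np.cos(w)).sum() / total,
            (spectrum * np.sin(w)).sum() / total)
```

    Narrow spectra land near the unit circle and broad ones near the origin, so clusters in the (G, S) plane separate emitters with different spectra while retaining a link back to the image pixels they came from.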

  4. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies

    PubMed Central

    Atkinson, Jonathan A.; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E.; Griffiths, Marcus

    2017-01-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. PMID:29020748

  5. Automatic Image Processing Workflow for the Keck/NIRC2 Vortex Coronagraph

    NASA Astrophysics Data System (ADS)

    Xuan, Wenhao; Cook, Therese; Ngo, Henry; Zawol, Zoe; Ruane, Garreth; Mawet, Dimitri

    2018-01-01

    The Keck/NIRC2 camera, equipped with the vortex coronagraph, is an instrument targeted at high-contrast imaging of extrasolar planets. To uncover a faint planet signal from the overwhelming starlight, we utilize the Vortex Image Processing (VIP) library, which carries out principal component analysis to model and remove the stellar point spread function. To bridge the gap between data acquisition and data reduction, we implement a workflow that 1) downloads, sorts, and processes data with VIP, 2) stores the analysis products into a database, and 3) displays the reduced images, contrast curves, and auxiliary information on a web interface. Both angular differential imaging and reference star differential imaging are implemented in the analysis module. A real-time version of the workflow runs during observations, allowing observers to make educated decisions about time distribution on different targets, hence optimizing science yield. The post-night version performs a standardized reduction after the observation, building up a valuable database that not only helps uncover new discoveries, but also enables a statistical study of the instrument itself. We present the workflow, and an examination of the contrast performance of the NIRC2 vortex with respect to factors including target star properties and observing conditions.
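
The PCA-based PSF subtraction at the heart of this reduction can be sketched in a few lines of NumPy (an illustrative low-rank subtraction, not the VIP implementation): model the stellar PSF as the leading principal components of the frame stack and subtract each frame's projection onto them.

```python
import numpy as np

def pca_psf_subtract(cube, n_comp=5):
    """Subtract a low-rank stellar PSF model from an image cube.

    cube: (n_frames, ny, nx) stack of coronagraphic frames.
    Returns the residual cube after removing the projection of each
    frame onto the first n_comp principal components of the stack.
    """
    n, ny, nx = cube.shape
    flat = cube.reshape(n, ny * nx)
    mean = flat.mean(axis=0)
    centered = flat - mean
    # SVD of the frame stack; rows of vt are the principal components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_comp]                  # (n_comp, ny*nx)
    coeffs = centered @ basis.T          # project each frame on the basis
    model = coeffs @ basis + mean        # low-rank PSF model
    return (flat - model).reshape(n, ny, nx)
```

In angular differential imaging the sky rotates between frames while the PSF stays fixed, so the quasi-static starlight is captured by the leading components and the planet signal largely survives the subtraction.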

  6. [The procedure for documentation of digital images in forensic medical histology].

    PubMed

    Putintsev, V A; Bogomolov, D V; Fedulova, M V; Gribunov, Iu P; Kul'bitskiĭ, B N

    2012-01-01

    This paper is devoted to the novel computer technologies employed in the study of histological preparations. These technologies make it possible to visualize digital images, structure the data obtained, and store the results in computer memory. The authors emphasize the necessity of properly documenting digital images obtained during forensic histological studies and propose a procedure for formulating electronic documents in conformity with the relevant technical and legal requirements. It is concluded that the use of digital images as a new study object makes it possible to avoid the drawbacks inherent in working with traditional preparations and to pass from descriptive microscopy to quantitative analysis.

  7. Multispectral Imaging Broadens Cellular Analysis

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Amnis Corporation, a Seattle-based biotechnology company, developed ImageStream to produce sensitive fluorescence images of cells in flow. The company responded to an SBIR solicitation from Ames Research Center, and proposed to evaluate several methods of extending the depth of field for its ImageStream system and implement the best as an upgrade to its commercial products. This would allow users to view whole cells at the same time, rather than just one section of each cell. Through Phase I and II SBIR contracts, Ames provided Amnis the funding the company needed to develop this extended functionality. For NASA, the resulting high-speed image flow cytometry process made its way into Medusa, a life-detection instrument built to collect, store, and analyze sample organisms from erupting hydrothermal vents, and has the potential to benefit space flight health monitoring. On the commercial end, Amnis has implemented the process in ImageStream, combining high-resolution microscopy and flow cytometry in a single instrument, giving researchers the power to conduct quantitative analyses of individual cells and cell populations at the same time, in the same experiment. ImageStream is also built for many other applications, including cell signaling and pathway analysis; classification and characterization of peripheral blood mononuclear cell populations; quantitative morphology; apoptosis (cell death) assays; gene expression analysis; analysis of cell conjugates; molecular distribution; and receptor mapping and distribution.

  8. Thermal imaging of Al-CuO thermites

    NASA Astrophysics Data System (ADS)

    Densmore, John; Sullivan, Kyle; Kuntz, Joshua; Gash, Alex

    2013-06-01

    We have performed spatial in-situ temperature measurements of aluminum-copper oxide thermite reactions using high-speed color pyrometry. Electrophoretic deposition was used to create thermite microstructures. Tests were performed with micron- and nano-sized particles at different stoichiometries. The color pyrometry was performed using a high-speed color camera whose color filter array collects light within three spectral bands. Assuming a gray-body emission spectrum, a multi-wavelength ratio analysis allows a temperature to be calculated. An advantage of using a two-dimensional image sensor is that it allows heterogeneous flames to be measured with high spatial resolution. Light from the initial combustion of the Al-CuO can be differentiated from the light created by late-time oxidation with the atmosphere. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
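
The gray-body ratio analysis can be illustrated with the two-band case under the Wien approximation (a sketch of the general technique; the actual camera calibration, band responses, and fitting procedure of the work are not reproduced here). For a gray body the unknown emissivity cancels in the intensity ratio of two bands, leaving temperature as the only unknown.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def planck_wien(lam, T):
    """Spectral radiance shape in the Wien approximation (arb. units)."""
    return lam ** -5 * np.exp(-C2 / (lam * T))

def ratio_temperature(i1, i2, lam1, lam2):
    """Two-color gray-body temperature from the intensity ratio of two
    spectral bands centered at lam1 and lam2 (wavelengths in meters).
    Emissivity cancels in the ratio for a gray body."""
    return C2 * (1.0 / lam2 - 1.0 / lam1) / np.log((i1 / i2) * (lam1 / lam2) ** 5)
```

With three spectral bands, as on a color camera, the same ratio can be formed for each band pair and the results averaged or fit jointly per pixel.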

  9. Vision-based in-line fabric defect detection using yarn-specific shape features

    NASA Astrophysics Data System (ADS)

    Schneider, Dorian; Aach, Til

    2012-01-01

    We develop a methodology for automatic in-line flaw detection in industrial woven fabrics. Whereas state-of-the-art detection algorithms apply texture analysis methods to low-resolution (~200 ppi) image data, we describe here a process flow that segments single yarns in high-resolution (~1000 ppi) textile images. Four yarn shape features are extracted, allowing a precise detection and measurement of defects. The degree of precision reached allows a classification of detected defects according to their nature, providing an innovation in the field of automatic fabric flaw detection. The design has been carried out to meet real-time requirements and face adverse conditions caused by loom vibrations and dirt. The entire process flow is discussed, followed by an evaluation using a database with real-life industrial fabric images. This work pertains to the construction of an on-loom defect detection system to be used in manufacturing practice.

  10. Platform for Postprocessing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don

    2008-01-01

    Taking advantage of the similarities that exist among all waveform-based non-destructive evaluation (NDE) methods, a common software platform has been developed containing multiple signal- and image-processing techniques for waveforms and images. The NASA NDE Signal and Image Processing software has been developed using the latest versions of LabVIEW and its associated Advanced Signal Processing and Vision Toolkits. The software is usable on a PC with Windows XP and Windows Vista. The software has been designed with a commercial-grade interface in which two main windows, Waveform Window and Image Window, are displayed if the user chooses a waveform file to display. Within these two main windows, most actions are chosen through logically conceived run-time menus. The Waveform Window has plots for both the raw time-domain waves and their frequency-domain transformations (fast Fourier transform and power spectral density). The Image Window shows the C-scan image formed from information of the time-domain waveform (such as peak amplitude) or its frequency-domain transformation at each scan location. The user also has the ability to open an image, a series of images, or a simple X-Y paired data set in text format. Each of the Waveform and Image Windows contains menus from which to perform many user actions. An option exists to use raw waves obtained directly from scan, or waves after deconvolution if the system wave response is provided. Two types of deconvolution, time-based subtraction or inverse-filter, can be performed to arrive at a deconvolved wave set. Additionally, the menu on the Waveform Window allows preprocessing of waveforms prior to image formation, scaling and display of waveforms, formation of different types of images (including non-standard types such as velocity), gating of portions of waves prior to image formation, and several other miscellaneous and specialized operations.
The menu available on the Image Window allows many further image processing and analysis operations, some of which are found in commercially-available image-processing software programs (such as Adobe Photoshop), and some that are not (removing outliers, Bscan information, region-of-interest analysis, line profiles, and precision feature measurements).
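
The inverse-filter deconvolution option mentioned above can be sketched in a few lines of NumPy (an illustration of the general technique, not the NASA software's implementation; the regularization constant is a hypothetical choice): divide the waveform's spectrum by the system response's spectrum, with a small stabilizing term so that near-zero response frequencies do not amplify noise.

```python
import numpy as np

def inverse_filter_deconvolve(wave, system_response, eps=1e-6):
    """Regularized inverse-filter deconvolution of a 1-D waveform.

    wave: measured time-domain waveform.
    system_response: measured system wave response (impulse response).
    eps: Tikhonov-style constant guarding against division by ~0.
    """
    n = len(wave)
    W = np.fft.rfft(wave)
    H = np.fft.rfft(system_response, n=n)
    # Wiener-like regularized division W / H.
    D = W * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(D, n=n)
```

Time-based subtraction, the other option named in the record, would instead subtract a reference waveform directly in the time domain.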

  11. Introduction of statistical information in a syntactic analyzer for document image recognition

    NASA Astrophysics Data System (ADS)

    Maroneze, André O.; Coüasnon, Bertrand; Lemaitre, Aurélie

    2011-01-01

    This paper presents an improvement to document layout analysis systems, offering a possible solution to Sayre's paradox (which states that an element "must be recognized before it can be segmented; and it must be segmented before it can be recognized"). This improvement, based on stochastic parsing, allows integration of statistical information, obtained from recognizers, during syntactic layout analysis. We present how this fusion of numeric and symbolic information in a feedback loop can be applied to syntactic methods to improve document description expressiveness. To limit combinatorial explosion during exploration of solutions, we devised an operator that allows optional activation of the stochastic parsing mechanism. Our evaluation on 1250 handwritten business letters shows that this method improves global recognition scores.

  12. A gradient method for the quantitative analysis of cell movement and tissue flow and its application to the analysis of multicellular Dictyostelium development.

    PubMed

    Siegert, F; Weijer, C J; Nomura, A; Miike, H

    1994-01-01

    We describe the application of a novel image processing method, which allows quantitative analysis of cell and tissue movement in a series of digitized video images. The result is a vector velocity field showing average direction and velocity of movement for every pixel in the frame. We apply this method to the analysis of cell movement during different stages of the Dictyostelium developmental cycle. We analysed time-lapse video recordings of cell movement in single cells, mounds and slugs. The program can correctly assess the speed and direction of movement of either unlabelled or labelled cells in a time series of video images depending on the illumination conditions. Our analysis of cell movement during multicellular development shows that the entire morphogenesis of Dictyostelium is characterized by rotational cell movement. The analysis of cell and tissue movement by the velocity field method should be applicable to the analysis of morphogenetic processes in other systems such as gastrulation and neurulation in vertebrate embryos.
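
The gradient method can be illustrated with a brightness-constancy sketch (a Lucas-Kanade-style local least squares, offered as an assumption-laden stand-in rather than the authors' exact algorithm): each pixel's velocity (u, v) is estimated from the constraint Ix*u + Iy*v + It = 0 solved over a small window.

```python
import numpy as np

def velocity_field(frame1, frame2, win=5):
    """Per-pixel velocity field from two frames via the gradient
    constraint Ix*u + Iy*v + It = 0, solved by least squares over a
    small window around each pixel."""
    Iy, Ix = np.gradient(frame1.astype(float))   # spatial gradients
    It = frame2.astype(float) - frame1.astype(float)
    h, w = frame1.shape
    u = np.zeros((h, w)); v = np.zeros((h, w))
    r = win // 2
    for i in range(r, h - r):
        for j in range(r, w - r):
            ix = Ix[i-r:i+r+1, j-r:j+r+1].ravel()
            iy = Iy[i-r:i+r+1, j-r:j+r+1].ravel()
            it = It[i-r:i+r+1, j-r:j+r+1].ravel()
            A = np.stack([ix, iy], axis=1)
            ATA = A.T @ A
            if np.linalg.cond(ATA) < 1e6:        # skip flat regions
                u[i, j], v[i, j] = np.linalg.solve(ATA, -A.T @ it)
    return u, v
```

Averaging such a field over a region gives the kind of tissue-flow vectors described in the abstract; rotational motion shows up as a vortex pattern in (u, v).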

  13. Digitizing an Analog Radiography Teaching File Under Time Constraint: Trade-Offs in Efficiency and Image Quality.

    PubMed

    Loehfelm, Thomas W; Prater, Adam B; Debebe, Tequam; Sekhar, Aarti K

    2017-02-01

    We digitized the radiography teaching file at Black Lion Hospital (Addis Ababa, Ethiopia) during a recent trip, using a standard digital camera and a fluorescent light box. Our goal was to photograph every radiograph in the existing library while optimizing the final image size to the maximum resolution of a high quality tablet computer, preserving the contrast resolution of the radiographs, and minimizing total library file size. A secondary important goal was to minimize the cost and time required to take and process the images. Three workers were able to efficiently remove the radiographs from their storage folders, hang them on the light box, operate the camera, catalog the image, and repack the radiographs back to the storage folder. Zoom, focal length, and film speed were fixed, while aperture and shutter speed were manually adjusted for each image, allowing for efficiency and flexibility in image acquisition. Keeping zoom and focal length fixed, which kept the view box at the same relative position in all of the images acquired during a single photography session, allowed unused space to be batch-cropped, saving considerable time in post-processing, at the expense of final image resolution. We present an analysis of the trade-offs in workflow efficiency and final image quality, and demonstrate that a few people with minimal equipment can efficiently digitize a teaching file library.
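
The batch-cropping step exploits the fixed view-box position that a fixed zoom and focal length guarantee; a minimal NumPy sketch (the box coordinates and target resolution below are hypothetical examples):

```python
import numpy as np

def crop_and_downsample(img, box, max_dim=2048):
    """Crop a fixed region from a session's photograph and downsample
    it so the longest side fits a tablet-resolution target.

    img: (H, W) or (H, W, 3) array; box: (top, left, height, width),
    chosen once per photography session since the light box sits at
    the same position in every frame."""
    top, left, h, w = box
    crop = img[top:top + h, left:left + w]
    # Integer-stride decimation down to at most max_dim pixels per side.
    step = max(1, int(np.ceil(max(h, w) / max_dim)))
    return crop[::step, ::step]
```

Applying the same box to every image in a session is what makes the post-processing a batch operation, at the cost of the resolution lost to the fixed framing.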

  14. Fusion of UAV photogrammetry and digital optical granulometry for detection of structural changes in floodplains

    NASA Astrophysics Data System (ADS)

    Langhammer, Jakub; Lendzioch, Theodora; Mirijovsky, Jakub

    2016-04-01

    Granulometric analysis is a traditional and important method for describing sedimentary material, with various applications in sedimentology, hydrology, and geomorphology. However, conventional granulometric field survey methods are time consuming, laborious, costly, and invasive to the surface being sampled, which can be a limiting factor for their applicability in protected areas. Optical granulometry has recently emerged as an image analysis technique enabling non-invasive survey: clasts are identified semi-automatically from calibrated digital imagery taken on site with a conventional high-resolution digital camera and a calibrated frame. The image processing allows detection and measurement of mixed-size natural grains, their sorting, and quantitative analysis using standard granulometric approaches. Despite known limitations, the technique today is a reliable tool that significantly eases and speeds field survey in fluvial geomorphology. However, such surveys are still limited in their spatial coverage and in their applicability to multitemporal research. In this study, we present a novel approach based on the fusion of two image analysis techniques, optical granulometry and UAV-based photogrammetry, which bridges the gap between the need for high-resolution structural information for granulometric analysis and the need for spatially accurate, seamless data coverage. We have developed and tested a workflow that uses a UAV imaging platform to deliver seamless, high-resolution, and spatially accurate imagery of the study site, from which the granulometric properties of the sedimentary material can be derived.
We set up a workflow modeling chain providing (i) the optimum flight parameters for UAV imagery, balancing the two key divergent requirements of imagery resolution and seamless spatial coverage, (ii) the workflow for processing UAV-acquired imagery by means of optical granulometry, and (iii) the workflow for analyzing the spatial distribution and temporal changes of granulometric properties across the point bar. The proposed technique was tested in a case study of an active point bar of a mid-latitude mountain stream in the Sumava Mountains, Czech Republic, exposed to repeated flooding. UAV photogrammetry was used to acquire very high resolution imagery from which high-precision digital terrain models and an orthoimage were built. The orthoimage was then analyzed using the digital optical granulometric tool BaseGrain. This approach allowed us (i) to analyze the spatial distribution of grain size in seamless transects over an active point bar and (ii) to assess the multitemporal changes in the granulometric properties of the point bar material resulting from flooding. The tested framework proved the applicability of the proposed method for granulometric analysis, with accuracy comparable to field optical granulometry. The seamless nature of the data enables study of the spatial distribution of granulometric properties across the site as well as analysis of multitemporal changes through repeated imaging.
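
Once per-grain axis lengths have been extracted (for example by a tool such as BaseGrain), the granulometric statistics themselves are straightforward; a small NumPy sketch (D16/D50/D84 are the conventional percentile statistics, and the input values in the test are hypothetical):

```python
import numpy as np

def grain_size_percentiles(b_axes_mm, percentiles=(16, 50, 84)):
    """Standard granulometric statistics (e.g. D50, the median grain
    size) from per-grain intermediate (b-) axis lengths in mm, as
    measured on an orthoimage by an optical granulometry tool."""
    sizes = np.sort(np.asarray(b_axes_mm, dtype=float))
    return {f"D{p}": float(np.percentile(sizes, p)) for p in percentiles}
```

Comparing these percentiles between flight dates is one way to quantify the multitemporal change of the point-bar material described above.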

  15. A Framework of Hyperspectral Image Compression using Neural Networks

    DOE PAGES

    Masalmah, Yahya M.; Martínez Nieves, Christian; Rivera Soto, Rafael; ...

    2015-01-01

    Hyperspectral image analysis has gained great attention due to its wide range of applications. Hyperspectral images provide a vast amount of information about underlying objects in an image by using a large range of the electromagnetic spectrum for each pixel. However, since the same image is taken multiple times using distinct electromagnetic bands, the size of such images tends to be significant, which leads to greater processing requirements. The aim of this paper is to present a proposed framework for image compression and to study the possible effects of spatial compression on the quality of unmixing results. Image compression allows us to reduce the dimensionality of an image while still preserving most of the original information, which could lead to faster image processing. Lastly, this paper presents preliminary results of different training techniques used in an Artificial Neural Network (ANN) based compression algorithm.
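
A minimal sketch of ANN-based spectral compression, assuming a single-hidden-layer linear autoencoder trained by gradient descent on synthetic pixels (the paper's actual network architecture, training techniques, and data are not reproduced here): the hidden layer is the compressed representation, and the decoder reconstructs the full band vector.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 500 hyperspectral pixels with 64 bands that lie
# near a low-dimensional subspace (a few spectral endmembers).
n_pixels, bands, latent = 500, 64, 4
X = rng.random((n_pixels, latent)) @ rng.random((latent, bands))
X = (X - X.mean(axis=0)) / (X.std() + 1e-9)   # normalize for training

# Single-hidden-layer linear autoencoder: 64 bands -> 8 coefficients.
hidden, lr = 8, 1e-3
W_enc = rng.normal(scale=0.1, size=(bands, hidden))
W_dec = rng.normal(scale=0.1, size=(hidden, bands))

losses = []
for _ in range(500):
    code = X @ W_enc                  # compressed representation
    recon = code @ W_dec              # reconstructed spectra
    err = recon - X
    losses.append(float((err ** 2).mean()))
    g_dec = code.T @ err / n_pixels   # gradients of mean squared error
    g_enc = X.T @ (err @ W_dec.T) / n_pixels
    W_enc -= lr * g_enc
    W_dec -= lr * g_dec
```

Storing the 8-value code per pixel instead of 64 bands is the compression; a nonlinear network with more layers follows the same encode/decode pattern.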

  16. [Self-images and perceptions of other professions among students of nursing, physiotherapy, and occupational therapy and their importance for interprofessional cooperation].

    PubMed

    Boggatz, Thomas; Altmeppen, Sandra; Unger, Angelika

    2010-07-01

    Interdisciplinary cooperation is necessary to provide effective and high quality treatment for clients of the health care system. Interaction between professional groups depends on how their members perceive their self-image and the image of other professions. Within the framework of the project "Quality in the education of health-professionals", a qualitative study with 23 nurses, 24 physiotherapists, and 15 occupational therapists in the second or third year of training was conducted. Participants were asked to report their self-image and their image of the other two professions. A qualitative content analysis according to Mayring was used for data analysis. Four categories emerged that allowed description of the self-image and the image of the other professions: the roles of the respective health professions, the relation of the health professionals to their clients, the attributes that typically characterize members of a particular profession, and the relationship between the health professions. Latent conflicts between professional groups became apparent. Contradictory perceptions are due to subjective bias in favour of one's own professional group. Interdisciplinary collaboration requires a new culture of co-operation.

  17. Computerized image analysis for acetic acid induced intraepithelial lesions

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Ferris, Daron G.; Lieberman, Rich W.

    2008-03-01

    Cervical Intraepithelial Neoplasia (CIN) exhibits certain morphologic features that can be identified during a visual inspection exam. Immature and dysplastic cervical squamous epithelium turns white after application of acetic acid during the exam. The whitening process occurs visually over several minutes and subjectively discriminates between dysplastic and normal tissue. Digital imaging technologies allow us to assist the physician in analyzing acetic acid-induced lesions (acetowhite regions) in a fully automatic way. This paper reports a study designed to measure multiple parameters of the acetowhitening process from two images captured with a digital colposcope: one before and one after the acetic acid application. The spatial change of the acetowhitening is extracted using color and texture information in the post-acetic-acid image; the temporal change is extracted from the intensity and color changes between the post- and pre-acetic-acid images after automatic alignment. The imaging and data analysis system has been evaluated with a total of 99 human subjects and demonstrates its potential for screening underserved women where access to skilled colposcopists is limited.

  18. Large scale track analysis for wide area motion imagery surveillance

    NASA Astrophysics Data System (ADS)

    van Leeuwen, C. J.; van Huis, J. R.; Baan, J.

    2016-10-01

    Wide Area Motion Imagery (WAMI) enables image-based surveillance of areas that can cover multiple square kilometers. Interpreting and analyzing information from such sources becomes increasingly time consuming as more data is added from newly developed methods for information extraction. Captured from a moving Unmanned Aerial Vehicle (UAV), the high-resolution images allow detection and tracking of moving vehicles, but this is a highly challenging task. By using a chain of computer vision detectors and machine learning techniques, we are capable of producing high-quality track information for more than 40,000 vehicles per five minutes. When faced with such a vast number of vehicular tracks, it is useful for analysts to be able to quickly query information based on region of interest, color, maneuvers, or other high-level types of information, to gain insight and find relevant activities in the flood of information. In this paper we propose a set of tools, combined in a graphical user interface, which allows data analysts to survey vehicles in a large observed area. In order to retrieve (parts of) images from the high-resolution data, we developed a multi-scale tile-based video file format that allows us to quickly obtain only a part, or a sub-sampling, of the original high-resolution image. By storing tiles of a still image according to a predefined order, we can quickly retrieve a particular region of the image at any relevant scale, by skipping to the correct frames and reconstructing the image. Location-based queries allow a user to select tracks around a particular region of interest such as a landmark, building, or street. By using an integrated search engine, users can quickly select tracks that are in the vicinity of locations of interest. Another time-reducing method when searching for a particular vehicle is to filter on color or color intensity.
Automatic maneuver detection adds information to the tracks that can be used to find vehicles based on their behavior.
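
The multi-scale tile-based retrieval idea can be sketched as an index computation (the tile size, level scheme, and ordering below are hypothetical, not the authors' file format): given a region of interest and a pyramid level, compute which stored tiles must be read to reconstruct that region.

```python
def tiles_for_region(x, y, w, h, level, tile=256):
    """Indices of the tiles needed to reconstruct a region of the
    full-resolution image at a given pyramid level.

    level 0 is full resolution; each level halves both dimensions.
    (x, y, w, h) are full-resolution pixel coordinates of the region.
    Returns (tile_x, tile_y) pairs in row-major order."""
    s = 2 ** level
    x0, y0 = x // s // tile, y // s // tile
    x1 = (x + w - 1) // s // tile
    y1 = (y + h - 1) // s // tile
    return [(tx, ty) for ty in range(y0, y1 + 1)
                     for tx in range(x0, x1 + 1)]
```

Because the tile order is fixed, the reader can seek directly to the frames holding these tiles and stitch them, instead of decoding the full gigapixel image.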

  19. Ultra-wide-field imaging in diabetic retinopathy.

    PubMed

    Ghasemi Falavarjani, Khalil; Tsui, Irena; Sadda, Srinivas R

    2017-10-01

    Since 1991, 7-field images captured with 30-50 degree cameras in the Early Treatment Diabetic Retinopathy Study were the gold standard for fundus imaging to study diabetic retinopathy. Ultra-wide-field images cover significantly more area (up to 82%) of the fundus and with ocular steering can in many cases image 100% of the fundus ("panretinal"). Recent advances in image analysis of ultra-wide-field imaging allow for precise measurements of the peripheral retinal lesions. There is a growing consensus in the literature that ultra-wide-field imaging improves detection of peripheral lesions in diabetic retinopathy and leads to more accurate classification of the disease. There is discordance among studies, however, on the correlation between peripheral diabetic lesions and diabetic macular edema and optimal management strategies to treat diabetic retinopathy.

  20. The comparative effectiveness of conventional and digital image libraries.

    PubMed

    McColl, R I; Johnson, A

    2001-03-01

    Before introducing a hospital-wide image database to improve access, navigation and retrieval speed, a comparative study between a conventional slide library and a matching image database was undertaken to assess its relative benefits. Paired time trials and personal questionnaires revealed faster retrieval rates, higher image quality, and easier viewing for the pilot digital image database. Analysis of confidentiality, copyright and data protection exposed similar issues for both systems, thus concluding that the digital image database is a more effective library system. The authors suggest that in the future, medical images will be stored on large, professionally administered, centrally located file servers, allowing specialist image libraries to be tailored locally for individual users. The further integration of the database with web technology will enable cheap and efficient remote access for a wide range of users.

  1. High-resolution, continuous field-of-view (FOV), non-rotating imaging system

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance L. (Inventor); Stirbl, Robert C. (Inventor); Aghazarian, Hrand (Inventor); Padgett, Curtis W. (Inventor)

    2010-01-01

    A high resolution CMOS imaging system especially suitable for use in a periscope head. The imaging system includes a sensor head for scene acquisition, and a control apparatus inclusive of distributed processors and software for device-control, data handling, and display. The sensor head encloses a combination of wide field-of-view CMOS imagers and narrow field-of-view CMOS imagers. Each bank of imagers is controlled by a dedicated processing module in order to handle information flow and image analysis of the outputs of the camera system. The imaging system also includes an automated or manually controlled display system and software for providing an interactive graphical user interface (GUI) that displays a full 360-degree field of view and allows the user or automated ATR system to select regions for higher resolution inspection.

  2. Minimizing Interpolation Bias and Precision Error in In Vivo μCT-based Measurements of Bone Structure and Dynamics

    PubMed Central

    de Bakker, Chantal M. J.; Altman, Allison R.; Li, Connie; Tribble, Mary Beth; Lott, Carina; Tseng, Wei-Ju; Liu, X. Sherry

    2016-01-01

    In vivo μCT imaging allows for high-resolution, longitudinal evaluation of bone properties. Based on this technology, several recent studies have developed in vivo dynamic bone histomorphometry techniques that utilize registered μCT images to identify regions of bone formation and resorption, allowing for longitudinal assessment of bone remodeling. However, this analysis requires a direct voxel-by-voxel subtraction between image pairs, necessitating rotation of the images into the same coordinate system, which introduces interpolation errors. We developed a novel image transformation scheme, matched-angle transformation (MAT), whereby the interpolation errors are minimized by equally rotating both the follow-up and baseline images instead of the standard of rotating one image while the other remains fixed. This new method greatly reduced interpolation biases caused by the standard transformation. Additionally, our study evaluated the reproducibility and precision of bone remodeling measurements made via in vivo dynamic bone histomorphometry. Although bone remodeling measurements showed moderate baseline noise, precision was adequate to measure physiologically relevant changes in bone remodeling, and measurements had relatively good reproducibility, with intra-class correlation coefficients of 0.75-0.95. This indicates that, when used in conjunction with MAT, in vivo dynamic histomorphometry provides a reliable assessment of bone remodeling. PMID:26786342

  3. Minimizing Interpolation Bias and Precision Error in In Vivo µCT-Based Measurements of Bone Structure and Dynamics.

    PubMed

    de Bakker, Chantal M J; Altman, Allison R; Li, Connie; Tribble, Mary Beth; Lott, Carina; Tseng, Wei-Ju; Liu, X Sherry

    2016-08-01

    In vivo µCT imaging allows for high-resolution, longitudinal evaluation of bone properties. Based on this technology, several recent studies have developed in vivo dynamic bone histomorphometry techniques that utilize registered µCT images to identify regions of bone formation and resorption, allowing for longitudinal assessment of bone remodeling. However, this analysis requires a direct voxel-by-voxel subtraction between image pairs, necessitating rotation of the images into the same coordinate system, which introduces interpolation errors. We developed a novel image transformation scheme, matched-angle transformation (MAT), whereby the interpolation errors are minimized by equally rotating both the follow-up and baseline images instead of the standard of rotating one image while the other remains fixed. This new method greatly reduced interpolation biases caused by the standard transformation. Additionally, our study evaluated the reproducibility and precision of bone remodeling measurements made via in vivo dynamic bone histomorphometry. Although bone remodeling measurements showed moderate baseline noise, precision was adequate to measure physiologically relevant changes in bone remodeling, and measurements had relatively good reproducibility, with intra-class correlation coefficients of 0.75-0.95. This indicates that, when used in conjunction with MAT, in vivo dynamic histomorphometry provides a reliable assessment of bone remodeling.
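
The matched-angle idea can be sketched with SciPy, under the simplifying assumption of a pure in-plane rotation between scans (the published method operates on registered 3D µCT volumes): split the registration angle evenly so that both images suffer the same interpolation, rather than degrading only one of them.

```python
import numpy as np
from scipy import ndimage

def matched_angle_rotate(baseline, followup, angle_deg):
    """Matched-angle transformation (MAT) sketch for 2-D images.

    Instead of rotating the follow-up image by the full registration
    angle onto a fixed baseline, rotate BOTH images by half the angle
    in opposite directions, so interpolation blurs them equally and
    the voxel-wise subtraction is not biased toward one time point."""
    half = angle_deg / 2.0
    base_r = ndimage.rotate(baseline, +half, reshape=False, order=3)
    foll_r = ndimage.rotate(followup, -half, reshape=False, order=3)
    return base_r, foll_r
```

The subtraction base_r - foll_r then highlights formation and resorption sites with interpolation error shared symmetrically between the two scans.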

  4. Image Stability Requirements For a Geostationary Imaging Fourier Transform Spectrometer (GIFTS)

    NASA Technical Reports Server (NTRS)

    Bingham, G. E.; Cantwell, G.; Robinson, R. C.; Revercomb, H. E.; Smith, W. L.

    2001-01-01

    A Geostationary Imaging Fourier Transform Spectrometer (GIFTS) has been selected for the NASA New Millennium Program (NMP) Earth Observing-3 (EO-3) mission. Our paper will discuss one of the key GIFTS measurement requirements, Field of View (FOV) stability, and its impact on required system performance. The GIFTS NMP mission is designed to demonstrate new and emerging sensor and data processing technologies with the goal of making revolutionary improvements in meteorological observational capability and forecasting accuracy. The GIFTS payload is a versatile imaging FTS with programmable spectral resolution and spatial scene selection that allows radiometric accuracy and atmospheric sounding precision to be traded in near real time for area coverage. The GIFTS sensor combines high sensitivity with a massively parallel spatial data collection scheme to allow high spatial resolution measurement of the Earth's atmosphere and rapid broad area coverage. An objective of the GIFTS mission is to demonstrate the advantages of high spatial resolution (4 km ground sample distance - gsd) on temperature and water vapor retrieval by allowing sampling in broken cloud regions. This small gsd, combined with the relatively long scan time required (approximately 10 s) to collect high resolution spectra from geostationary (GEO) orbit, may require extremely good pointing control. This paper discusses the analysis of this requirement.

  5. Digital image analysis: improving accuracy and reproducibility of radiographic measurement.

    PubMed

    Bould, M; Barnard, S; Learmonth, I D; Cunningham, J L; Hardy, J R

    1999-07-01

    To assess the accuracy and reproducibility of a digital image analyser and the human eye in measuring radiographic dimensions, we experimentally compared radiographic measurement using either an image analyser system or the human eye with a digital caliper. The assessment of total hip arthroplasty wear from radiographs relies on both the accuracy of radiographic images and the accuracy of radiographic measurement. Radiographs were taken of a slip gauge (30+/-0.00036 mm) and a slip gauge with a femoral stem. The projected dimensions of the radiographic images were calculated by trigonometry. The radiographic dimensions were then measured by blinded observers using both techniques. For a single radiograph, the human eye was accurate to 0.26 mm and reproducible to +/-0.1 mm. In comparison, the digital image analyser system was accurate to 0.01 mm with a reproducibility of +/-0.08 mm. In an arthroplasty model, where the dimensions of an object were corrected for magnification using the known dimensions of a femoral head, the human eye was accurate to 0.19 mm, whereas the image analyser system was accurate to 0.04 mm. The digital image analysis system is up to 20 times more accurate than the human eye, and in an arthroplasty model the accuracy of measurement increases four-fold. We believe such image analysis may allow more accurate and reproducible measurement of wear from standard follow-up radiographs.
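
The magnification correction used in the arthroplasty model reduces to a similar-triangles rescaling against an object of known size in the same plane; a trivial sketch (the 28 mm femoral head diameter is a hypothetical example, not a value from the study):

```python
def corrected_dimension(measured_mm, measured_head_mm, true_head_mm=28.0):
    """Correct a radiographic measurement for projection magnification
    using an object of known size in the same plane (here a prosthetic
    femoral head of known diameter).

    measured_mm: dimension of interest as measured on the radiograph.
    measured_head_mm: femoral head diameter measured on the same film.
    true_head_mm: actual head diameter (hypothetical 28 mm default)."""
    scale = true_head_mm / measured_head_mm
    return measured_mm * scale
```

Because both objects share the same magnification factor, the ratio of true to measured head diameter rescales any co-planar measurement to true size.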

  6. Dextran or Saline Can Replace Contrast for Intravascular Optical Coherence Tomography in Lower Extremity Arteries.

    PubMed

    Kendrick, Daniel E; Allemang, Matthew T; Gosling, Andre F; Nagavalli, Anil; Kim, Ann H; Nishino, Setsu; Parikh, Sahil A; Bezerra, Hiram G; Kashyap, Vikram S

    2016-10-01

    To examine the hypothesis that alternative flush media could be used for lower extremity optical coherence tomography (OCT) imaging in long lesions that would normally require excessive use of contrast. The OPTical Imaging Measurement of Intravascular Solution Efficacy (OPTIMISE) trial was a single-center, prospective study (ClinicalTrials.gov identifier NCT01743872) that enrolled 23 patients (mean age 68±11 years; 14 men) undergoing endovascular intervention involving the superficial femoral artery. Four flush media (heparinized saline, dextran, carbon dioxide, and contrast) were used in succession in random order for each image pullback. Quality was defined as ≥270° visualization of vessel wall layers from each axial image. Mean proportions (± standard deviation) of image quality for each flush medium were assessed using 1-way analysis of variance and are reported with the 95% confidence intervals (CI). Four OCT catheters failed, leaving 19 patients who completed the OCT imaging protocol; from this cohort, 51 highest quality runs were selected for analysis. Average vessel diameter was 3.99±1.01 mm. OCT imaging allowed 10- to 15-μm resolution of the lumen border, with diminishing quality as vessel diameter increased. Plaque characterization revealed fibrotic lesions. Mean proportions of image quality were dextran 87.2%±12% (95% CI 0.81 to 0.94), heparinized saline 74.3%±24.8% (95% CI 0.66 to 0.93), contrast 70.1%±30.5% (95% CI 0.52 to 0.88), and carbon dioxide 10.0%±10.4% (95% CI 0.00 to 0.26). Dextran, saline, and contrast provided better quality than carbon dioxide (p<0.001). OCT is feasible in peripheral vessels <5 mm in diameter. Dextran or saline flush media can allow lesion characterization, avoiding iodinated contrast. Carbon dioxide is inadequate for peripheral OCT imaging. Axial imaging may aid in enhancing durability of peripheral endovascular interventions. © The Author(s) 2016.

  7. A Flexible Method for Producing F.E.M. Analysis of Bone Using Open-Source Software

    NASA Technical Reports Server (NTRS)

    Boppana, Abhishektha; Sefcik, Ryan; Meyers, Jerry G.; Lewandowski, Beth E.

    2016-01-01

This project, performed in support of the NASA GRC Space Academy summer program, sought to develop an open-source workflow methodology that segmented medical image data, created a 3D model from the segmented data, and prepared the model for finite-element analysis. In an initial step, a technological survey evaluated the performance of various existing open-source software packages that claim to perform these tasks. However, the survey concluded that no single package exhibited the wide array of functionality required for the potential NASA application in the area of bone, muscle and biofluidic studies. As a result, a series of Python scripts provided the bridging mechanism to address the shortcomings of the available open-source tools. The VTK library provided the quickest and most effective means of segmenting regions of interest from the medical images; it allowed for the export of a 3D model by using the marching cubes algorithm to build a surface mesh. Developing the model domain from this extracted information required the surface mesh to be processed in the open-source packages Blender and Gmsh. The Preview program of the FEBio suite proved sufficient for filling the model volume with an unstructured mesh and preparing boundary specifications for finite-element analysis. To complete the FEM model, an in-house Python script assigned material properties on an element-by-element basis by performing a weighted interpolation of voxel intensities from the parent medical image, mapped to material properties such as ash density via published intensity-to-property correlations. A graphical user interface combined the Python scripts and other software into a user-friendly interface. The work using Python scripts provides a potential alternative to expensive commercial software and limited open-source freeware programs for the creation of 3D computational models. More work will be needed to validate this approach in creating finite-element models.
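The element-wise material assignment described above can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' script: the linear intensity-to-ash-density coefficients `a` and `b` are placeholders (the paper relies on published correlations, not these values), and the element/voxel overlap structure is simplified to precomputed weights.

```python
def assign_material_properties(elements, voxel_intensity, a=0.001, b=0.0):
    """For each element, average the intensities of the voxels it overlaps
    (weighted by overlap fraction), then map mean intensity to ash density
    via a linear calibration. Coefficients a, b are hypothetical placeholders.

    elements: list of (voxel_ids, weights) pairs, one per mesh element.
    voxel_intensity: flat list of image intensities indexed by voxel id.
    """
    properties = []
    for voxel_ids, weights in elements:
        total_w = sum(weights)
        mean_intensity = sum(voxel_intensity[v] * w
                             for v, w in zip(voxel_ids, weights)) / total_w
        ash_density = a * mean_intensity + b  # hypothetical calibration
        properties.append(ash_density)
    return properties
```

In a real pipeline the `(voxel_ids, weights)` pairs would come from intersecting each tetrahedral element with the parent image grid.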

  8. An augmented parametric response map with consideration of image registration error: towards guidance of locally adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Lausch, Anthony; Chen, Jeff; Ward, Aaron D.; Gaede, Stewart; Lee, Ting-Yim; Wong, Eugene

    2014-11-01

    Parametric response map (PRM) analysis is a voxel-wise technique for predicting overall treatment outcome, which shows promise as a tool for guiding personalized locally adaptive radiotherapy (RT). However, image registration error (IRE) introduces uncertainty into this analysis which may limit its use for guiding RT. Here we extend the PRM method to include an IRE-related PRM analysis confidence interval and also incorporate multiple graded classification thresholds to facilitate visualization. A Gaussian IRE model was used to compute an expected value and confidence interval for PRM analysis. The augmented PRM (A-PRM) was evaluated using CT-perfusion functional image data from patients treated with RT for glioma and hepatocellular carcinoma. Known rigid IREs were simulated by applying one thousand different rigid transformations to each image set. PRM and A-PRM analyses of the transformed images were then compared to analyses of the original images (ground truth) in order to investigate the two methods in the presence of controlled IRE. The A-PRM was shown to help visualize and quantify IRE-related analysis uncertainty. The use of multiple graded classification thresholds also provided additional contextual information which could be useful for visually identifying adaptive RT targets (e.g. sub-volume boosts). The A-PRM should facilitate reliable PRM guided adaptive RT by allowing the user to identify if a patient’s unique IRE-related PRM analysis uncertainty has the potential to influence target delineation.
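The IRE-related confidence interval described above can be illustrated with a toy Monte Carlo sketch. This is not the authors' implementation: it uses 1-D profiles and circular integer shifts as a stand-in for the paper's 3-D rigid transformations, with a Gaussian IRE model as described.

```python
import random
import statistics

def prm_with_ire_ci(baseline, follow_up, thresh, ire_sigma=1.0,
                    n_sim=1000, seed=0):
    """Monte Carlo sketch of an IRE-aware PRM: apply random integer shifts
    (Gaussian, std ire_sigma, in voxels) to the follow-up profile, recompute
    the fraction of voxels classified as responding (change > thresh) each
    time, and report the mean and an approximate 95% interval."""
    rng = random.Random(seed)
    fractions = []
    for _ in range(n_sim):
        s = round(rng.gauss(0.0, ire_sigma))
        # circular shift models a misregistration of s voxels
        shifted = follow_up[-s:] + follow_up[:-s] if s else list(follow_up)
        deltas = [f - b for b, f in zip(baseline, shifted)]
        fractions.append(sum(d > thresh for d in deltas) / len(deltas))
    fractions.sort()
    return (statistics.mean(fractions),
            (fractions[int(0.025 * n_sim)], fractions[int(0.975 * n_sim)]))
```

A wide interval signals that the PRM classification at that threshold is sensitive to registration error, which is the situation the A-PRM is designed to flag.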

  9. Scanning laser ophthalmoscope fundus cyclometry in near-natural viewing conditions.

    PubMed

    Ehrt, O; Boergen, K P

    2001-09-01

For a better understanding of motor and sensory adaptations in cyclodeviations, subjective and objective ocular torsion have to be measured under the same conditions. The search coil technique and video-oculography allow natural viewing but only assess relative cycloduction, the dynamics of torsion over a short period of time. Cycloposition, on the other hand, can be measured by analysing the position of the foveola relative to the optic disc in fundus photographs, but only under nonphysiological viewing conditions. The aim of the study was to develop a technique that allows natural viewing conditions during fundus cyclometry. The scanning laser beam of the SLO was deflected by 90 degrees with a semitransparent mirror in front of the patient's eyes. The patient was able to look through the semitransparent mirror with both eyes into the room, e.g. at Harms' tangent screen. The infrared SLO images the central retina via the mirror through the undilated pupil. Digital image analysis quantifies the cycloposition of the eye. Controlled head movements while fixating the centre of Harms' tangent screen allow measurements in reproducible gaze positions. The semitransparent mirror reduces SLO image brightness, but image quality is sufficient for cyclometry after contrast enhancement. The laser light can be vaguely perceived by the patient but does not interfere with natural viewing. Reproducibility of the measurement is within +/- 1 degree SD. Our modification of SLO fundus cyclometry allows direct measurements of cycloposition in natural viewing conditions. This opens a new field for investigations of cyclodeviations and their sensory and motor adaptations.

  10. Cloud solution for histopathological image analysis using region of interest based compression.

    PubMed

    Kanakatte, Aparna; Subramanya, Rakshith; Delampady, Ashik; Nayak, Rajarama; Purushothaman, Balamuralidhar; Gubbi, Jayavardhana

    2017-07-01

Recent technological gains have led to the adoption of innovative cloud-based solutions in the medical imaging field. Once a medical image is acquired, it can be viewed, modified, annotated and shared on many devices. This advancement is mainly due to the introduction of Cloud computing in the medical domain. Tissue pathology images are complex and are normally collected at different focal lengths using a microscope. A single whole-slide image contains many multi-resolution images stored in a pyramidal structure, with the highest-resolution image at the base and the smallest thumbnail image at the top of the pyramid. The highest-resolution image is used for tissue pathology diagnosis and analysis. Transferring and storing such huge images is a big challenge. Compression is a very useful and effective technique to reduce the size of these images. As pathology images are used for diagnosis, no information can be lost during compression (lossless compression). A novel method of extracting the tissue region and applying lossless compression to this region and lossy compression to the empty regions is proposed in this paper. The resulting compression ratio, together with lossless compression of the tissue region, is in an acceptable range, allowing efficient storage and transmission to and from the Cloud.
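The core idea, lossless coding inside the tissue region and lossy coding elsewhere, can be sketched as below. This is a minimal illustration, not the paper's codec: tissue is found by a simple intensity threshold, the tissue pixels and mask are zlib-compressed exactly, and the background is reduced to a constant fill value.

```python
import zlib

def roi_compress(image, threshold=20):
    """image: flat list of 0-255 grayscale values.
    Tissue pixels (>= threshold) are kept exactly and zlib-compressed
    (lossless); background pixels are replaced by a fill value on decode
    (lossy). Returns (compressed_mask, compressed_tissue, ratio)."""
    mask = bytes(1 if p >= threshold else 0 for p in image)
    tissue = bytes(p for p in image if p >= threshold)
    c_mask = zlib.compress(mask, 9)
    c_tissue = zlib.compress(tissue, 9)
    # ratio is only meaningful for realistically large images
    ratio = len(image) / (len(c_mask) + len(c_tissue))
    return c_mask, c_tissue, ratio

def roi_decompress(c_mask, c_tissue, fill=0):
    """Rebuild the image: exact values inside the tissue mask, fill outside."""
    mask = zlib.decompress(c_mask)
    tissue = iter(zlib.decompress(c_tissue))
    return [next(tissue) if m else fill for m in mask]
```

A production system would instead use a standards-based codec (e.g. JPEG 2000 region-of-interest coding), but the accounting is the same: only the diagnostically relevant region pays the lossless storage cost.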

  11. Objective measurement of accommodative biometric changes using ultrasound biomicroscopy

    PubMed Central

    Ramasubramanian, Viswanathan; Glasser, Adrian

    2015-01-01

    PURPOSE To demonstrate that ultrasound biomicroscopy (UBM) can be used for objective quantitative measurements of anterior segment accommodative changes. SETTING College of Optometry, University of Houston, Houston, Texas, USA. DESIGN Prospective cross-sectional study. METHODS Anterior segment biometric changes in response to 0 to 6.0 diopters (D) of accommodative stimuli in 1.0 D steps were measured in eyes of human subjects aged 21 to 36 years. Imaging was performed in the left eye using a 35 MHz UBM (Vumax) and an A-scan ultrasound (A-5500) while the right eye viewed the accommodative stimuli. An automated Matlab image-analysis program was developed to measure the biometry parameters from the UBM images. RESULTS The UBM-measured accommodative changes in anterior chamber depth (ACD), lens thickness, anterior lens radius of curvature, posterior lens radius of curvature, and anterior segment length were statistically significantly (P < .0001) linearly correlated with accommodative stimulus amplitudes. Standard deviations of the UBM-measured parameters were independent of the accommodative stimulus demands (ACD 0.0176 mm, lens thickness 0.0294 mm, anterior lens radius of curvature 0.3350 mm, posterior lens radius of curvature 0.1580 mm, and anterior segment length 0.0340 mm). The mean difference between the A-scan and UBM measurements was −0.070 mm for ACD and 0.166 mm for lens thickness. CONCLUSIONS Accommodating phakic eyes imaged using UBM allowed visualization of the accommodative response, and automated image analysis of the UBM images allowed reliable, objective, quantitative measurements of the accommodative intraocular biometric changes. PMID:25804579

  12. From the Cover: Visualization of maltose uptake in living yeast cells by fluorescent nanosensors

    NASA Astrophysics Data System (ADS)

    Fehr, Marcus; Frommer, Wolf B.; Lalonde, Sylvie

    2002-07-01

    Compartmentation of metabolic reactions and thus transport within and between cells can be understood only if we know subcellular distribution based on nondestructive dynamic monitoring. Currently, methods are not available for in vivo metabolite imaging at cellular or subcellular levels. Limited information derives from methods requiring fixation or fractionation of tissue (1, 2). We thus developed a flexible strategy for designing protein-based nanosensors for a wide spectrum of solutes, allowing analysis of changes in solute concentration in living cells. We made use of bacterial periplasmic binding proteins (PBPs), where we show that, on binding of the substrate, PBPs transform their hinge-bend movement into increased fluorescence resonance energy transfer (FRET) between two coupled green fluorescent proteins. By using the maltose-binding protein as a prototype, nanosensors were constructed allowing in vitro determination of FRET changes in a concentration-dependent fashion. For physiological applications, mutants with different binding affinities were generated, allowing dynamic in vivo imaging of the increase in cytosolic maltose concentration in single yeast cells. Control sensors allow the exclusion of the effect from other cellular or environmental parameters on ratio imaging. Thus the myriad of PBPs recognizing a wide spectrum of different substrates is suitable for FRET-based in vivo detection, providing numerous scientific, medical, and environmental applications.

  13. Computer-based analysis of microvascular alterations in a mouse model for Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Heinzer, Stefan; Müller, Ralph; Stampanoni, Marco; Abela, Rafael; Meyer, Eric P.; Ulmann-Schuler, Alexandra; Krucker, Thomas

    2007-03-01

    Vascular factors associated with Alzheimer's disease (AD) have recently gained increased attention. To investigate changes in vascular, particularly microvascular architecture, we developed a hierarchical imaging framework to obtain large-volume, high-resolution 3D images from brains of transgenic mice modeling AD. In this paper, we present imaging and data analysis methods which allow compiling unique characteristics from several hundred gigabytes of image data. Image acquisition is based on desktop micro-computed tomography (µCT) and local synchrotron-radiation µCT (SRµCT) scanning with a nominal voxel size of 16 µm and 1.4 µm, respectively. Two visualization approaches were implemented: stacks of Z-buffer projections for fast data browsing, and progressive-mesh based surface rendering for detailed 3D visualization of the large datasets. In a first step, image data was assessed visually via a Java client connected to a central database. Identified characteristics of interest were subsequently quantified using global morphometry software. To obtain even deeper insight into microvascular alterations, tree analysis software was developed providing local morphometric parameters such as number of vessel segments or vessel tortuosity. In the context of ever increasing image resolution and large datasets, computer-aided analysis has proven both powerful and indispensable. The hierarchical approach maintains the context of local phenomena, while proper visualization and morphometry provide the basis for detailed analysis of the pathology related to structure. Beyond analysis of microvascular changes in AD this framework will have significant impact considering that vascular changes are involved in other neurodegenerative diseases as well as in cancer, cardiovascular disease, asthma, and arthritis.

  14. Blue intensity matters for cell cycle profiling in fluorescence DAPI-stained images.

    PubMed

    Ferro, Anabela; Mestre, Tânia; Carneiro, Patrícia; Sahumbaiev, Ivan; Seruca, Raquel; Sanches, João M

    2017-05-01

In the past decades, there has been amazing progress in the understanding of the molecular mechanisms of the cell cycle. This has been possible largely due to a better conceptualization of the cycle itself, but also as a consequence of technological advances. Herein, we propose a new fluorescence image-based framework targeted at the identification and segmentation of stained nuclei with the purpose to determine DNA content in distinct cell cycle stages. The method is based on discriminative features, such as total intensity and area, retrieved from in situ stained nuclei by fluorescence microscopy, allowing the determination of the cell cycle phase of both single cells and sub-populations of cells. The analysis framework was built on a modified k-means clustering strategy and refined with a Gaussian mixture model classifier, which enabled the definition of highly accurate classification clusters corresponding to G1, S and G2 phases. Using the information retrieved from area and fluorescence total intensity, the modified k-means (k=3) cluster imaging framework classified 64.7% of the imaged nuclei as being in G1 phase, 12.0% in G2 phase and 23.2% in S phase. Performance of the imaging framework was ascertained with normal murine mammary gland cells constitutively expressing the Fucci2 technology, exhibiting an overall sensitivity of 94.0%. Further, the results indicate that the imaging framework has a robust capacity to both assign a given DAPI-stained nucleus to its correct cell cycle phase and determine, with very high probability, true negatives. Importantly, this novel imaging approach is a non-disruptive method that allows an integrative and simultaneous quantitative analysis of molecular and morphological parameters, thus affording the possibility of cell cycle profiling in cytological and histological samples.
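Clustering nuclei on (area, total intensity) pairs can be sketched with plain k-means, as below. This is a minimal, deterministic stand-in for the paper's modified k-means plus Gaussian-mixture refinement; the initialization (first k points) is a simplifying assumption, not the authors' strategy.

```python
def kmeans(points, k=3, iters=50):
    """Plain k-means on (area, total_intensity) feature pairs.
    Initial centers are simply the first k points; a real run should
    use a better seeding scheme (e.g. k-means++)."""
    centers = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each nucleus to the nearest center (squared distance)
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                  + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # recompute centers as cluster means (keep old center if empty)
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters
```

In the paper's setting the three resulting clusters would be labeled G1, S and G2 by their DNA-content ordering, and a Gaussian mixture model fitted on top would soften the hard cluster boundaries.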

  15. Radiotracer Imaging Allows for Noninvasive Detection and Quantification of Abnormalities in Angiosome Foot Perfusion in Diabetic Patients With Critical Limb Ischemia and Nonhealing Wounds

    PubMed Central

    Alvelo, Jessica L.; Papademetris, Xenophon; Mena-Hurtado, Carlos; Jeon, Sangchoon; Sumpio, Bauer E.; Sinusas, Albert J.

    2018-01-01

    Background: Single photon emission computed tomography (SPECT)/computed tomography (CT) imaging allows for assessment of skeletal muscle microvascular perfusion but has not been quantitatively assessed in angiosomes, or 3-dimensional vascular territories, of the foot. This study assessed and compared resting angiosome foot perfusion between healthy subjects and diabetic patients with critical limb ischemia (CLI). Additionally, the relationship between SPECT/CT imaging and the ankle–brachial index—a standard tool for evaluating peripheral artery disease—was assessed. Methods and Results: Healthy subjects (n=9) and diabetic patients with CLI and nonhealing ulcers (n=42) underwent SPECT/CT perfusion imaging of the feet. CT images were segmented into angiosomes for quantification of relative radiotracer uptake, expressed as standardized uptake values. Standardized uptake values were assessed in ulcerated angiosomes of patients with CLI and compared with whole-foot standardized uptake values in healthy subjects. Serial SPECT/CT imaging was performed to assess uptake kinetics of technetium-99m-tetrofosmin. The relationship between angiosome perfusion and ankle–brachial index was assessed via correlational analysis. Resting perfusion was significantly lower in CLI versus healthy subjects (P=0.0007). Intraclass correlation coefficients of 0.95 (healthy) and 0.93 (CLI) demonstrated excellent agreement between serial perfusion measurements. Correlational analysis, including healthy and CLI subjects, demonstrated a significant relationship between ankle–brachial index and SPECT/CT (P=0.01); however, this relationship was not significant for diabetic CLI patients only (P=0.2). Conclusions: SPECT/CT imaging assesses regional foot perfusion and detects abnormalities in microvascular perfusion that may be undetectable by conventional ankle–brachial index in patients with diabetes mellitus. SPECT/CT may provide a novel approach for evaluating responses to targeted therapies. 
PMID:29748311
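The standardized uptake value used above to express relative radiotracer uptake has a simple definition, sketched below. The decay-correction and tissue-density assumptions are noted in the comments; this is the generic SUV formula, not code from the study.

```python
def standardized_uptake_value(activity_conc_bq_per_ml,
                              injected_dose_bq,
                              body_weight_g):
    """SUV = tissue activity concentration / (injected dose / body weight).
    Assumes tissue density of ~1 g/mL so units cancel, and that the
    injected dose has already been decay-corrected to scan time."""
    return activity_conc_bq_per_ml * body_weight_g / injected_dose_bq
```

An SUV of 1.0 corresponds to the tracer being distributed uniformly throughout the body; angiosome-level SUVs below the healthy reference range would indicate the perfusion deficits reported in the study.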

  16. Measurement of Vibrated Bulk Density of Coke Particle Blends Using Image Texture Analysis

    NASA Astrophysics Data System (ADS)

    Azari, Kamran; Bogoya-Forero, Wilinthon; Duchesne, Carl; Tessier, Jayson

    2017-09-01

    A rapid and nondestructive machine vision sensor was developed for predicting the vibrated bulk density (VBD) of petroleum coke particles based on image texture analysis. It could be used for making corrective adjustments to a paste plant operation to reduce green anode variability (e.g., changes in binder demand). Wavelet texture analysis (WTA) and gray level co-occurrence matrix (GLCM) algorithms were used jointly for extracting the surface textural features of coke aggregates from images. These were correlated with the VBD using partial least-squares (PLS) regression. Coke samples of several sizes and from different sources were used to test the sensor. Variations in the coke surface texture introduced by coke size and source allowed for making good predictions of the VBD of individual coke samples and mixtures of them (blends involving two sources and different sizes). Promising results were also obtained for coke blends collected from an industrial-baked carbon anode manufacturer.
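A gray level co-occurrence matrix of the kind used above can be computed as follows. This is a generic GLCM sketch with two classic Haralick-style features (contrast, energy), not the authors' WTA/GLCM/PLS pipeline; offset and quantization level are illustrative choices.

```python
def glcm_features(img, offset=(0, 1), levels=8):
    """Build a normalized gray-level co-occurrence matrix for one pixel
    offset and return (contrast, energy). img is a 2-D list of integer
    gray levels in [0, levels)."""
    glcm = [[0.0] * levels for _ in range(levels)]
    dr, dc = offset
    rows, cols = len(img), len(img[0])
    n = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                glcm[img[r][c]][img[r2][c2]] += 1  # count co-occurring pair
                n += 1
    glcm = [[v / n for v in row] for row in glcm]  # normalize to probabilities
    contrast = sum(((i - j) ** 2) * glcm[i][j]
                   for i in range(levels) for j in range(levels))
    energy = sum(v * v for row in glcm for v in row)
    return contrast, energy
```

Feature vectors built from several offsets (and, in the paper, wavelet sub-bands) would then feed the PLS regression that predicts the vibrated bulk density.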

  17. TASI: A software tool for spatial-temporal quantification of tumor spheroid dynamics.

    PubMed

    Hou, Yue; Konen, Jessica; Brat, Daniel J; Marcus, Adam I; Cooper, Lee A D

    2018-05-08

    Spheroid cultures derived from explanted cancer specimens are an increasingly utilized resource for studying complex biological processes like tumor cell invasion and metastasis, representing an important bridge between the simplicity and practicality of 2-dimensional monolayer cultures and the complexity and realism of in vivo animal models. Temporal imaging of spheroids can capture the dynamics of cell behaviors and microenvironments, and when combined with quantitative image analysis methods, enables deep interrogation of biological mechanisms. This paper presents a comprehensive open-source software framework for Temporal Analysis of Spheroid Imaging (TASI) that allows investigators to objectively characterize spheroid growth and invasion dynamics. TASI performs spatiotemporal segmentation of spheroid cultures, extraction of features describing spheroid morpho-phenotypes, mathematical modeling of spheroid dynamics, and statistical comparisons of experimental conditions. We demonstrate the utility of this tool in an analysis of non-small cell lung cancer spheroids that exhibit variability in metastatic and proliferative behaviors.

  18. Target identification by image analysis.

    PubMed

    Fetz, V; Prochnow, H; Brönstrup, M; Sasse, F

    2016-05-04

Covering: 1997 to the end of 2015. Each biologically active compound induces phenotypic changes in target cells that are characteristic for its mode of action. These phenotypic alterations can be directly observed under the microscope or made visible by labelling structural elements or selected proteins of the cells with dyes. A comparison of the cellular phenotype induced by a compound of interest with the phenotypes of reference compounds with known cellular targets allows its mode of action to be predicted. While this approach has been successfully applied to the characterization of natural products based on a visual inspection of images, recent studies used automated microscopy and analysis software to increase speed and to reduce subjective interpretation. In this review, we give a general outline of the workflow for manual and automated image analysis, and we highlight natural products whose bacterial and eukaryotic targets could be identified through such approaches.

  19. A New Method for Automated Identification and Morphometry of Myelinated Fibers Through Light Microscopy Image Analysis.

    PubMed

    Novas, Romulo Bourget; Fazan, Valeria Paula Sassoli; Felipe, Joaquim Cezar

    2016-02-01

Nerve morphometry is known to produce relevant information for the evaluation of several phenomena, such as nerve repair, regeneration, implant, transplant, aging, and different human neuropathies. Manual morphometry is laborious, tedious, time-consuming, and subject to many sources of error. Therefore, in this paper, we propose a new method for the automated morphometry of myelinated fibers in cross-section light microscopy images. Images from the recurrent laryngeal nerve of adult rats and the vestibulocochlear nerve of adult guinea pigs were used herein. The proposed pipeline for fiber segmentation is based on the techniques of competitive clustering and concavity analysis. The proposed segmentation method was evaluated by comparing the automatic segmentation with manual segmentation. To further evaluate the proposed method, the distributions of morphometric features extracted from the segmented images were tested for statistically significant differences. The method achieved high overall sensitivity and very low false-positive rates per image. We detected no statistically significant difference between the distributions of the features extracted from the manual and the pipeline segmentations. The method presented good overall performance, showing widespread potential in experimental and clinical settings, allowing large-scale image analysis and, thus, leading to more reliable results.

  20. Measurement of migration of soft tissue by modified Roentgen stereophotogrammetric analysis (RSA): validation of a new technique to monitor rotator cuff tears.

    PubMed

    Cashman, P M M; Baring, T; Reilly, P; Emery, R J H; Amis, A A

    2010-04-01

    The purpose of this study was to develop a technique to use Roentgen stereophotogrammetric analysis (RSA) to measure migration of soft-tissue structures after rotator cuff repair. RSA stereo films were obtained; images were analysed using a semi-automatic software program allowing 3D viewing of results. RSA imaging experiments were performed to validate the technique, using a glass phantom with implanted RSA beads and an animal model with steel sutures as RSA markers which were moved known distances. Repeated measurements allowed assessment of inter- and intra-observer variability at a maximum of 1.06 mm. RSA analysis of the phantom showed a variation up to 0.22 mm for static and 0.28 mm for dynamic studies. The ovine tissue specimen demonstrated that using steel sutures as RSA markers in soft tissue is feasible, although less accurate than when measuring bone motion. This novel application of RSA to measure soft tissue migration is practicable and can be extended to in vivo studies.

  1. The RABiT: a rapid automated biodosimetry tool for radiological triage. II. Technological developments.

    PubMed

    Garty, Guy; Chen, Youhua; Turner, Helen C; Zhang, Jian; Lyulko, Oleksandra V; Bertucci, Antonella; Xu, Yanping; Wang, Hongliang; Simaan, Nabil; Randers-Pehrson, Gerhard; Lawrence Yao, Y; Brenner, David J

    2011-08-01

Over the past five years the Center for Minimally Invasive Radiation Biodosimetry at Columbia University has developed the Rapid Automated Biodosimetry Tool (RABiT), a completely automated, ultra-high throughput biodosimetry workstation. This paper describes recent upgrades and reliability testing of the RABiT. The RABiT analyses fingerstick-derived blood samples to estimate past radiation exposure or to identify individuals exposed above or below a cut-off dose. Through automated robotics, lymphocytes are extracted from fingerstick blood samples into filter-bottomed multi-well plates. Depending on the time since exposure, the RABiT scores either micronuclei or phosphorylation of the histone H2AX, in an automated robotic system, using filter-bottomed multi-well plates. Following lymphocyte culturing, fixation and staining, the filter bottoms are removed from the multi-well plates and sealed prior to automated high-speed imaging. Image analysis is performed online using dedicated image processing hardware. Both the sealed filters and the images are archived. We have developed a new robotic system for lymphocyte processing, making use of upgraded laser power and parallel processing of four capillaries at once. This system has allowed acceleration of lymphocyte isolation, the main bottleneck of the RABiT operation, from 12 to 2 sec/sample. Reliability tests have been performed on all robotic subsystems. Parallel handling of multiple samples through the use of dedicated, purpose-built robotics and high-speed imaging allows analysis of up to 30,000 samples per day.

  2. Coupling image processing and stress analysis for damage identification in a human premolar tooth.

    PubMed

    Andreaus, U; Colloca, M; Iacoviello, D

    2011-08-01

Non-carious cervical lesions are characterized by the loss of dental hard tissue at the cement-enamel junction (CEJ). Exceeding stresses are therefore generated in the cervical region of the tooth that cause disruption of the bonds between the hydroxyapatite crystals, leading to crack formation and eventual loss of enamel and the underlying dentine. Damage identification was performed by image analysis techniques and allowed quantitative assessment of changes in teeth. A computerized two-step procedure was generated and applied to the first left maxillary human premolar. In the first step, dental images were digitally processed by a segmentation method in order to identify the damage. The considered morphological properties were the enamel thickness, the total area, and the number of fragments into which the enamel chipped. The information retrieved by processing the section images allowed the stress investigation to be oriented toward selected portions of the tooth. In the second step, a three-dimensional finite element model based on CT images of both the tooth and the periodontal ligament was employed to compare the changes occurring in the stress distributions in normal occlusion and malocclusion. The stress states were analyzed exclusively in the critical zones designated in the first step. The risk of failure at the CEJ and of crack initiation at the dentin-enamel junction was also estimated through the quantification of first and third principal stresses, von Mises stress, and normal and tangential stresses. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  4. Operation Guiding Light-Scientific Program and Field Plan. The Pilot Field Experiment for NORDA Project Chemical Dynamics in Ocean Frontal Areas

    DTIC Science & Technology

    1985-03-01

    distribution. Samples of suspended particulates will also be collected for later image and elemental analysis. ... Method of analysis for particle... will be flow injection analysis. This method will allow rapid, continuous analysis of seawater nutrients. Measurements will be made at one minute... (5 m intervals) as well as from the underway pumping system. Method of pigment analysis for porphyrin and carotenoid pigments will be separation by

  5. Gold nanoparticle-enhanced target (AuNPET) as universal solution for laser desorption/ionization mass spectrometry analysis and imaging of low molecular weight compounds.

    PubMed

    Sekuła, Justyna; Nizioł, Joanna; Rode, Wojciech; Ruman, Tomasz

    2015-05-22

    The preparation of a durable surface of cationic gold nanoparticles (AuNPs) covering commercial and custom-made MALDI targets is described, along with characterization of the nanoparticle surface properties and examples of its use in MS analyses and MS imaging (IMS) of low-molecular-weight (LMW) organic compounds. Tested compounds include nucleosides, saccharides, amino acids, glycosides, and nucleic bases for MS measurements, as well as over one hundred endogenous compounds in an imaging experiment. The nanoparticles covering the target plate were enriched in sodium in order to promote sodium-adduct formation. The new surface allows fast analysis, high detection sensitivity and high mass determination accuracy. An example of the application of the new Au nanoparticle-enhanced target for fast and simple MS imaging of a fingerprint is also presented. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Imaging challenges in biomaterials and tissue engineering

    PubMed Central

    Appel, Alyssa A.; Anastasio, Mark A.; Larson, Jeffery C.; Brey, Eric M.

    2013-01-01

    Biomaterials are employed in the fields of tissue engineering and regenerative medicine (TERM) in order to enhance the regeneration or replacement of tissue function and/or structure. The unique environments created by the presence of biomaterials, cells, and tissues pose distinct challenges for monitoring and assessing the results of these interventions. Imaging technologies for three-dimensional (3D) analysis have been identified as a strategic priority in TERM research. Traditionally, histological and immunohistochemical techniques have been used to evaluate engineered tissues. However, these methods do not allow for an accurate volume assessment, are invasive, and do not provide information on functional status. Imaging techniques are needed that enable non-destructive, longitudinal, quantitative, and three-dimensional analysis of TERM strategies. This review focuses on evaluating the application of available imaging modalities for assessment of biomaterials and tissue in TERM applications. Included is a discussion of limitations of these techniques and identification of areas for further development. PMID:23768903

  7. Image analysis for microelectronic retinal prosthesis.

    PubMed

    Hallum, L E; Cloherty, S L; Lovell, N H

    2008-01-01

    By way of extracellular, stimulating electrodes, a microelectronic retinal prosthesis aims to render discrete, luminous spots (so-called phosphenes) in the visual field, thereby providing a phosphene image (PI) as a rudimentary remediation of profound blindness. As part thereof, a digital camera, or some other photosensitive array, captures frames, frames are analyzed, and phosphenes are actuated accordingly by way of modulated charge injections. Here, we present a method that allows the assessment of image analysis schemes for integration with a prosthetic device, that is, the means of converting the captured image (high resolution) to modulated charge injections (low resolution). We use the mutual-information function to quantify the amount of information conveyed to the PI observer (device implantee), while accounting for the statistics of visual stimuli. We demonstrate an effective scheme involving overlapping, Gaussian kernels, and discuss extensions of the method to account for short-term visual memory in observers, and their perceptual errors of omission and commission.
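    The mutual-information measure described above can be illustrated with a generic histogram-based estimator (a sketch, not the authors' implementation; the bin count and sample data are illustrative assumptions):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate I(X;Y) in bits from paired samples via a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
stim = rng.normal(size=10_000)
identical = mutual_information(stim, stim)                        # lossless "PI": high MI
independent = mutual_information(stim, rng.normal(size=10_000))   # unrelated PI: near zero
```

An image analysis scheme that conveys more stimulus information to the implantee scores a higher value of this function.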

  8. Physics-based deformable organisms for medical image analysis

    NASA Astrophysics Data System (ADS)

    Hamarneh, Ghassan; McIntosh, Chris

    2005-04-01

    Previously, "Deformable organisms" were introduced as a novel paradigm for medical image analysis that uses artificial life modelling concepts. Deformable organisms were designed to complement the classical bottom-up deformable models methodologies (geometrical and physical layers), with top-down intelligent deformation control mechanisms (behavioral and cognitive layers). However, a true physical layer was absent and in order to complete medical image segmentation tasks, deformable organisms relied on pure geometry-based shape deformations guided by sensory data, prior structural knowledge, and expert-generated schedules of behaviors. In this paper we introduce the use of physics-based shape deformations within the deformable organisms framework yielding additional robustness by allowing intuitive real-time user guidance and interaction when necessary. We present the results of applying our physics-based deformable organisms, with an underlying dynamic spring-mass mesh model, to segmenting and labelling the corpus callosum in 2D midsagittal magnetic resonance images.

  9. Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization

    NASA Technical Reports Server (NTRS)

    Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)

    2008-01-01

    A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.

  10. Combinational pixel-by-pixel and object-level classifying, segmenting, and agglomerating in performing quantitative image analysis that distinguishes between healthy non-cancerous and cancerous cell nuclei and delineates nuclear, cytoplasm, and stromal material objects from stained biological tissue materials

    DOEpatents

    Boucheron, Laura E

    2013-07-16

    Quantitative object- and spatial arrangement-level analysis of tissue is detailed, using expert (pathologist) input to guide the classification process. A two-step method is disclosed for imaging tissue: classifying one or more biological materials, e.g. nuclei, cytoplasm, and stroma, in the tissue into one or more identified classes on a pixel-by-pixel basis, and segmenting the identified classes to agglomerate one or more sets of identified pixels into segmented regions. Typically, the one or more biological materials comprise nuclear material, cytoplasm material, and stromal material. The method further allows a user to mark up the image subsequent to the classification to re-classify said materials. The markup is performed via a graphical user interface to edit designated regions in the image.
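    A minimal sketch of the disclosed two-step idea, with a toy intensity threshold standing in for the expert-trained pixel classifier and connected-component labeling as the agglomeration step (both stand-ins are simplifying assumptions, not the patented algorithms):

```python
import numpy as np
from scipy import ndimage

def classify_and_segment(image, threshold=0.5):
    """Step 1: classify each pixel (toy threshold in place of a trained classifier).
    Step 2: agglomerate identically classified pixels into segmented regions."""
    nuclei_mask = image > threshold                   # pixel-by-pixel classification
    regions, n_regions = ndimage.label(nuclei_mask)   # agglomeration into regions
    return regions, n_regions

img = np.zeros((10, 10))
img[1:4, 1:4] = 1.0   # one "nucleus"
img[6:9, 6:9] = 1.0   # another
labels, n = classify_and_segment(img)   # two separate nuclear regions
```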

  11. Spatial Phase Imaging

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Frequently, scientists grow crystals by dissolving a protein in a specific liquid solution, and then allowing that solution to evaporate. The methods used next have been, variously, invasive (adding a dye that is absorbed by the protein), destructive (crushing protein/salt-crystal mixtures and observing differences between the crushing of salt and protein), or costly and time-consuming (X-ray crystallography). In contrast to these methods, a new technology for monitoring protein growth, developed in part through NASA Small Business Innovation Research (SBIR) funding from Marshall Space Flight Center, is noninvasive, nondestructive, rapid, and more cost effective than X-ray analysis. The partner for this SBIR, Photon-X, Inc., of Huntsville, Alabama, developed spatial phase imaging technology that can monitor crystal growth in real time and in an automated mode. Spatial phase imaging scans for flaws quickly and produces a 3-D structured image of a crystal, showing volumetric growth analysis for future automated growth.

  12. High-throughput Analysis of Large Microscopy Image Datasets on CPU-GPU Cluster Platforms

    PubMed Central

    Teodoro, George; Pan, Tony; Kurc, Tahsin M.; Kong, Jun; Cooper, Lee A. D.; Podhorszki, Norbert; Klasky, Scott; Saltz, Joel H.

    2014-01-01

    Analysis of large pathology image datasets offers significant opportunities for the investigation of disease morphology, but the resource requirements of analysis pipelines limit the scale of such studies. Motivated by a brain cancer study, we propose and evaluate a parallel image analysis application pipeline for high throughput computation of large datasets of high resolution pathology tissue images on distributed CPU-GPU platforms. To achieve efficient execution on these hybrid systems, we have built runtime support that allows us to express the cancer image analysis application as a hierarchical data processing pipeline. The application is implemented as a coarse-grain pipeline of stages, where each stage may be further partitioned into another pipeline of fine-grain operations. The fine-grain operations are efficiently managed and scheduled for computation on CPUs and GPUs using performance aware scheduling techniques along with several optimizations, including architecture aware process placement, data locality conscious task assignment, data prefetching, and asynchronous data copy. These optimizations are employed to maximize the utilization of the aggregate computing power of CPUs and GPUs and minimize data copy overheads. Our experimental evaluation shows that the cooperative use of CPUs and GPUs achieves significant improvements on top of GPU-only versions (up to 1.6×) and that the execution of the application as a set of fine-grain operations provides more opportunities for runtime optimizations and attains better performance than coarser-grain, monolithic implementations used in other works. An implementation of the cancer image analysis pipeline using the runtime support was able to process an image dataset consisting of 36,848 4Kx4K-pixel image tiles (about 1.8TB uncompressed) in less than 4 minutes (150 tiles/second) on 100 nodes of a state-of-the-art hybrid cluster system. PMID:25419546

  13. Non-invasive intraoperative optical coherence tomography of the resection cavity during surgery of intrinsic brain tumors

    NASA Astrophysics Data System (ADS)

    Giese, A.; Böhringer, H. J.; Leppert, J.; Kantelhardt, S. R.; Lankenau, E.; Koch, P.; Birngruber, R.; Hüttmann, G.

    2006-02-01

    Optical coherence tomography (OCT) is a non-invasive imaging technique with micrometer resolution. It allows non-contact, non-invasive analysis of central nervous system tissues with a penetration depth of 1-3.5 mm at a spatial resolution of approximately 4-15 μm. We have adapted spectral-domain OCT (SD-OCT) and time-domain OCT (TD-OCT) for intraoperative detection of residual tumor during brain tumor surgery. Human brain tumor tissue and areas of the resection cavity were analyzed during the resection of gliomas using this new technology. The site of analysis was registered using a neuronavigation system, and biopsies were taken and submitted to routine histology. We have used post-acquisition image processing to compensate for movements of the brain and to realign A-scan images for calculation of a light attenuation factor. OCT imaging of normal cortex and white matter showed a typical light attenuation profile. Tumor tissue, depending on the cellularity of the specimen, showed a loss of the normal light attenuation profile, resulting in altered light attenuation coefficients compared to normal brain. Based on this parameter and on the microstructure of the tumor tissue, which was entirely absent in normal tissue, OCT analysis allowed the discrimination of normal brain tissue, invaded brain, solid tumor tissue, and necrosis. Following macroscopically complete resections, OCT analysis of the resection cavity displayed the typical microstructure and light attenuation profile of tumor tissue in some specimens, which in routine histology contained microscopic residual tumor tissue. We have demonstrated that this technology may be applied to the intraoperative detection of residual tumor during resection of human gliomas.
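    The light attenuation factor mentioned above can be estimated, under a simple single-exponential (Beer-Lambert-type) model, by a log-linear fit of A-scan intensity against depth. This is a generic sketch under that assumed model, not the authors' processing chain:

```python
import numpy as np

def attenuation_coefficient(a_scan, dz):
    """Fit I(z) = I0 * exp(-2*mu*z) to an OCT A-scan and return mu (1/mm).
    The factor of 2 accounts for the round-trip path of backscattered light."""
    z = np.arange(a_scan.size) * dz                 # depth axis (mm)
    slope, _ = np.polyfit(z, np.log(a_scan), 1)     # log-linear least squares
    return -slope / 2.0

# Synthetic noiseless A-scan with mu = 1.5 /mm, 10 um axial sampling
z = np.arange(200) * 0.01
scan = np.exp(-2 * 1.5 * z)
mu = attenuation_coefficient(scan, 0.01)   # recovers 1.5
```

Tissue whose fitted coefficient deviates from the normal-brain profile would, in such a scheme, flag possible residual tumor.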

  14. Fast super-resolution with affine motion using an adaptive Wiener filter and its application to airborne imaging.

    PubMed

    Hardie, Russell C; Barnard, Kenneth J; Ordonez, Raul

    2011-12-19

    Fast nonuniform interpolation based super-resolution (SR) has traditionally been limited to applications with translational interframe motion. This is in part because such methods are based on an underlying assumption that the warping and blurring components in the observation model commute. For translational motion this is the case, but it is not true in general. This presents a problem for applications such as airborne imaging where translation may be insufficient. Here we present a new Fourier domain analysis to show that, for many image systems, an affine warping model with limited zoom and shear approximately commutes with the point spread function when diffraction effects are modeled. Based on this important result, we present a new fast adaptive Wiener filter (AWF) SR algorithm for non-translational motion and study its performance with affine motion. The fast AWF SR method employs a new smart observation window that allows us to precompute all the needed filter weights for any type of motion without sacrificing much of the full performance of the AWF. We evaluate the proposed algorithm using simulated data and real infrared airborne imagery that contains a thermal resolution target allowing for objective resolution analysis.

  15. Aberration correction in wide-field fluorescence microscopy by segmented-pupil image interferometry.

    PubMed

    Scrimgeour, Jan; Curtis, Jennifer E

    2012-06-18

    We present a new technique for the correction of optical aberrations in wide-field fluorescence microscopy. Segmented-Pupil Image Interferometry (SPII) uses a liquid crystal spatial light modulator placed in the microscope's pupil plane to split the wavefront originating from a fluorescent object into an array of individual beams. Distortion of the wavefront arising from either system or sample aberrations results in displacement of the images formed from the individual pupil segments. Analysis of image registration allows for the local tilt in the wavefront at each segment to be corrected with respect to a central reference. A second correction step optimizes the image intensity by adjusting the relative phase of each pupil segment through image interferometry. This ensures that constructive interference between all segments is achieved at the image plane. Improvements in image quality are observed when Segmented-Pupil Image Interferometry is applied to correct aberrations arising from the microscope's optical path.

  16. SPAM- SPECTRAL ANALYSIS MANAGER (DEC VAX/VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Solomon, J. E.

    1994-01-01

    The Spectral Analysis Manager (SPAM) was developed to allow easy qualitative analysis of multi-dimensional imaging spectrometer data. Imaging spectrometers provide sufficient spectral sampling to define unique spectral signatures on a per-pixel basis. Thus direct material identification becomes possible for geologic studies. SPAM provides a variety of capabilities for carrying out interactive analysis of the massive and complex datasets associated with multispectral remote sensing observations. In addition to normal image processing functions, SPAM provides multiple levels of on-line help, flexible command interpretation, graceful error recovery, and a program structure which can be implemented in a variety of environments. SPAM was designed to be visually oriented and user friendly, with liberal employment of graphics for rapid and efficient exploratory analysis of imaging spectrometry data. SPAM provides functions to enable arithmetic manipulations of the data, such as normalization, linear mixing, band ratio discrimination, and low-pass filtering. SPAM can be used to examine the spectra of an individual pixel or the average spectra over a number of pixels. SPAM also supports image segmentation, fast spectral signature matching, spectral library usage, mixture analysis, and feature extraction. High-speed spectral signature matching is performed by using a binary spectral encoding algorithm to separate and identify mineral components present in the scene. The same binary encoding allows automatic spectral clustering. Spectral data may be entered from a digitizing tablet, stored in a user library, compared to the master library containing mineral standards, and then displayed as a time-sequence spectral movie. The output plots, histograms, and stretched histograms produced by SPAM can be sent to a line printer, stored as separate RGB disk files, or sent to a Quick Color Recorder. 
SPAM is written in C for interactive execution and is available for two different machine environments. There is a DEC VAX/VMS version with a central memory requirement of approximately 242K of 8-bit bytes and a machine-independent UNIX 4.2 version. The display device currently supported is the Raster Technologies display processor. Other 512 x 512 resolution color display devices, such as De Anza, may be added with minor code modifications. This program was developed in 1986.
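    The binary spectral encoding described above can be sketched as follows; the one-bit-per-band rule (above/below the spectrum's own mean) and the mineral spectra are illustrative assumptions, not SPAM's actual encoding:

```python
import numpy as np

def encode(spectrum):
    """Binary-encode a spectrum: 1 where a band exceeds the spectral mean."""
    return (np.asarray(spectrum) > np.mean(spectrum)).astype(np.uint8)

def best_match(pixel_spectrum, library):
    """Return the library entry with minimum Hamming distance to the pixel."""
    code = encode(pixel_spectrum)
    return min(library, key=lambda name: int(np.sum(encode(library[name]) ^ code)))

library = {  # hypothetical mineral standards
    "kaolinite": [0.6, 0.7, 0.3, 0.2, 0.8, 0.9],
    "hematite":  [0.2, 0.1, 0.8, 0.9, 0.3, 0.2],
}
match = best_match([0.55, 0.65, 0.25, 0.2, 0.7, 0.85], library)   # "kaolinite"
```

Because matching reduces to XOR and bit counting, whole scenes can be classified far faster than with full floating-point spectral comparisons.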

  17. SPAM- SPECTRAL ANALYSIS MANAGER (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Solomon, J. E.

    1994-01-01

    The Spectral Analysis Manager (SPAM) was developed to allow easy qualitative analysis of multi-dimensional imaging spectrometer data. Imaging spectrometers provide sufficient spectral sampling to define unique spectral signatures on a per-pixel basis. Thus direct material identification becomes possible for geologic studies. SPAM provides a variety of capabilities for carrying out interactive analysis of the massive and complex datasets associated with multispectral remote sensing observations. In addition to normal image processing functions, SPAM provides multiple levels of on-line help, flexible command interpretation, graceful error recovery, and a program structure which can be implemented in a variety of environments. SPAM was designed to be visually oriented and user friendly, with liberal employment of graphics for rapid and efficient exploratory analysis of imaging spectrometry data. SPAM provides functions to enable arithmetic manipulations of the data, such as normalization, linear mixing, band ratio discrimination, and low-pass filtering. SPAM can be used to examine the spectra of an individual pixel or the average spectra over a number of pixels. SPAM also supports image segmentation, fast spectral signature matching, spectral library usage, mixture analysis, and feature extraction. High-speed spectral signature matching is performed by using a binary spectral encoding algorithm to separate and identify mineral components present in the scene. The same binary encoding allows automatic spectral clustering. Spectral data may be entered from a digitizing tablet, stored in a user library, compared to the master library containing mineral standards, and then displayed as a time-sequence spectral movie. The output plots, histograms, and stretched histograms produced by SPAM can be sent to a line printer, stored as separate RGB disk files, or sent to a Quick Color Recorder. 
SPAM is written in C for interactive execution and is available for two different machine environments. There is a DEC VAX/VMS version with a central memory requirement of approximately 242K of 8-bit bytes and a machine-independent UNIX 4.2 version. The display device currently supported is the Raster Technologies display processor. Other 512 x 512 resolution color display devices, such as De Anza, may be added with minor code modifications. This program was developed in 1986.

  18. Insight into plant cell wall chemistry and structure by combination of multiphoton microscopy with Raman imaging.

    PubMed

    Heiner, Zsuzsanna; Zeise, Ingrid; Elbaum, Rivka; Kneipp, Janina

    2018-04-01

    Spontaneous Raman scattering microspectroscopy, second harmonic generation (SHG) and 2-photon excited fluorescence (2PF) were used in combination to characterize the morphology together with the chemical composition of the cell wall in native plant tissues. As the data obtained with unstained sections of Sorghum bicolor root and leaf tissues illustrate, nonresonant as well as pre-resonant Raman microscopy in combination with hyperspectral analysis reveals details about the distribution and composition of the major cell wall constituents. Multivariate analysis of the Raman data allows separation of different tissue regions, specifically the endodermis, xylem and lumen. The orientation of cellulose microfibrils is obtained from polarization-resolved SHG signals. Furthermore, 2-photon autofluorescence images can be used to image lignification. The combined compositional, morphological and orientational information in the proposed coupling of SHG, Raman imaging and 2PF presents an extension of existing vibrational microspectroscopic imaging and multiphoton microscopic approaches not only for plant tissues. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Multiparticle imaging technique for two-phase fluid flows using pulsed laser speckle velocimetry. Final report, September 1988--November 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassan, T.A.

    1992-12-01

    The practical use of Pulsed Laser Velocimetry (PLV) requires fast, reliable computer-based methods for tracking numerous particles suspended in a fluid flow. Two methods for performing tracking are presented. One method tracks a particle through multiple sequential images (minimum of four required) by prediction and verification of particle displacement and direction. The other method, requiring only two sequential images, uses a dynamic, binary, spatial cross-correlation technique. The algorithms were tested on computer-generated synthetic data and on experimental data obtained with traditional PLV methods, allowing error analysis and testing of the algorithms on real engineering flows. A novel method is proposed which eliminates tedious, undesirable manual operator assistance in removing erroneous vectors. This method uses an iterative process involving an interpolated field produced from the most reliable vectors. Methods are developed to allow fast analysis and presentation of sets of PLV image data. An experimental investigation of a two-phase, horizontal, stratified flow regime was performed to determine the interface drag force and, correspondingly, the drag coefficient. A horizontal, stratified flow test facility using water and air was constructed to allow interface shear measurements with PLV techniques. The experimentally obtained local drag measurements were compared with theoretical results given by conventional interfacial drag theory. Close agreement was shown when local conditions near the interface were similar to space-averaged conditions. However, theory based on macroscopic, space-averaged flow behavior was shown to give incorrect results when the local gas velocity near the interface was unstable, transient, and dissimilar from the average gas velocity through the test facility.
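    The two-image cross-correlation idea can be sketched for the simplest case of a uniform shift between two binarized frames, with the peak of an FFT-based circular cross-correlation giving the displacement (a toy uniform-flow case, not the report's dynamic algorithm):

```python
import numpy as np

def displacement(frame_a, frame_b):
    """Integer pixel shift between two binary particle images, from the
    peak of their circular cross-correlation (computed via FFT)."""
    corr = np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = frame_a.shape
    # Map shifts past the half-frame point to negative displacements
    return (dy if dy <= h // 2 else dy - h, dx if dx <= w // 2 else dx - w)

a = np.zeros((32, 32))
a[5, 5] = a[10, 20] = a[25, 8] = 1.0            # three "particles"
b = np.roll(np.roll(a, 3, axis=0), -2, axis=1)  # same field, shifted
shift = displacement(a, b)                      # (3, -2)
```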

  20. Multiparticle imaging technique for two-phase fluid flows using pulsed laser speckle velocimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassan, T.A.

    1992-12-01

    The practical use of Pulsed Laser Velocimetry (PLV) requires fast, reliable computer-based methods for tracking numerous particles suspended in a fluid flow. Two methods for performing tracking are presented. One method tracks a particle through multiple sequential images (minimum of four required) by prediction and verification of particle displacement and direction. The other method, requiring only two sequential images, uses a dynamic, binary, spatial cross-correlation technique. The algorithms were tested on computer-generated synthetic data and on experimental data obtained with traditional PLV methods, allowing error analysis and testing of the algorithms on real engineering flows. A novel method is proposed which eliminates tedious, undesirable manual operator assistance in removing erroneous vectors. This method uses an iterative process involving an interpolated field produced from the most reliable vectors. Methods are developed to allow fast analysis and presentation of sets of PLV image data. An experimental investigation of a two-phase, horizontal, stratified flow regime was performed to determine the interface drag force and, correspondingly, the drag coefficient. A horizontal, stratified flow test facility using water and air was constructed to allow interface shear measurements with PLV techniques. The experimentally obtained local drag measurements were compared with theoretical results given by conventional interfacial drag theory. Close agreement was shown when local conditions near the interface were similar to space-averaged conditions. However, theory based on macroscopic, space-averaged flow behavior was shown to give incorrect results when the local gas velocity near the interface was unstable, transient, and dissimilar from the average gas velocity through the test facility.

  1. 3D imaging of cleared human skin biopsies using light-sheet microscopy: A new way to visualize in-depth skin structure.

    PubMed

    Abadie, S; Jardet, C; Colombelli, J; Chaput, B; David, A; Grolleau, J-L; Bedos, P; Lobjois, V; Descargues, P; Rouquette, J

    2018-05-01

    Human skin is composed of the superimposition of tissue layers of various thicknesses and components. Histological staining of skin sections is the benchmark approach to analyse the organization and integrity of human skin biopsies; however, this approach does not allow 3D tissue visualization. Alternatively, confocal or two-photon microscopy is an effective approach to perform fluorescence-based 3D imaging. However, owing to light scattering, these methods display limited light penetration in depth. The objectives of this study were therefore to combine optical clearing and light-sheet fluorescence microscopy (LSFM) to perform in-depth optical sectioning of 5-mm-thick human skin biopsies and generate 3D images of entire human skin biopsies. A benzyl alcohol and benzyl benzoate solution was used to successfully optically clear entire formalin-fixed human skin biopsies, making them transparent. In-depth optical sectioning was performed with LSFM on the basis of tissue-autofluorescence observations. 3D image analysis of optical sections generated with LSFM was performed using the Amira® software. This new approach allowed us to observe in situ the different layers and compartments of human skin, such as the stratum corneum, the dermis and epidermal appendages. With this approach, we easily performed 3D reconstruction to visualise an entire human skin biopsy. Finally, we demonstrated that this method is useful to visualise and quantify histological anomalies, such as epidermal hyperplasia. The combination of optical clearing and LSFM has new applications in dermatology and dermatological research by allowing 3D visualization and analysis of whole human skin biopsies. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Using Multispectral False Color Imaging to Characterize Tropical Cyclone Structure and Environment

    NASA Astrophysics Data System (ADS)

    Cossuth, J.; Bankert, R.; Richardson, K.; Surratt, M. L.

    2016-12-01

    The Naval Research Laboratory's (NRL) tropical cyclone (TC) web page (http://www.nrlmry.navy.mil/TC.html) has provided nearly two decades of near-real-time access to TC-centric images and products for TC forecasters and enthusiasts around the world. In particular, the microwave imager and sounder information featured on this site provides crucial internal storm structure information by allowing users to perceive hydrometeor structure, providing key details beyond the cloud-top information given by visible and infrared channels. Towards improving TC analysis techniques and advancing the utility of the NRL TC webpage resource, new research efforts are presented. This work demonstrates results as well as the methodology used to develop new automated, objective satellite-based TC structure and intensity guidance and enhanced data fusion imagery products that aim to bolster and streamline TC forecast operations. This presentation focuses on the creation and interpretation of false color RGB composite imagery that leverages the different emissive and scattering properties of atmospheric ice, liquid, and vapor water, as well as ocean surface roughness, as seen by microwave radiometers. Specifically, a combination of near-real-time data and a standardized digital database of global TCs in microwave imagery from 1987-2012 is employed as a climatology of TC structures. The broad range of TC structures, from pinhole eyes through multiple eyewall configurations, is characterized as resolved by passive microwave sensors. The extraction of these characteristic features from historical data also lends itself to statistical analysis. For example, histograms of brightness temperature distributions allow a rigorous examination of how structural features are conveyed in image products, allowing a better representation of colors and breakpoints as they relate to physical features. 
Such climatological work also suggests steps to better inform the near-real time application of upcoming satellite datasets to TC analyses.

  3. Automated Diatom Analysis Applied to Traditional Light Microscopy: A Proof-of-Concept Study

    NASA Astrophysics Data System (ADS)

    Little, Z. H. L.; Bishop, I.; Spaulding, S. A.; Nelson, H.; Mahoney, C.

    2017-12-01

    Diatom identification and enumeration by high-resolution light microscopy is required for many areas of research and water quality assessment. Such analyses, however, are both expertise- and labor-intensive. These challenges motivate the need for an automated process to efficiently and accurately identify and enumerate diatoms. Improvements in particle analysis software have increased the likelihood that diatom enumeration can be automated. VisualSpreadsheet software provides a possible solution for automated particle analysis of high-resolution light microscope diatom images. We applied the software, independent of its complementary FlowCam hardware, to automated analysis of light microscope images containing diatoms. Through numerous trials, we arrived at threshold settings to correctly segment 67% of the total possible diatom valves and fragments from broad fields of view. (183 light microscope images containing 255 diatom particles were examined. Of the 255 diatom particles present, 216 diatom valves and valve fragments were processed, with 170 properly analyzed and focused upon by the software.) Manual analysis of the images yielded 255 particles in 400 seconds, whereas the software yielded a total of 216 particles in 68 seconds, highlighting an approximate five-fold efficiency advantage in particle analysis time. As in past efforts, incomplete or incorrect recognition was found for images with multiple valves in contact or valves with little contrast. The software has potential to be an effective tool in assisting taxonomists with diatom enumeration by completing a large portion of the analyses. Benefits and limitations of the approach are presented to allow for development of future work in image analysis and automated enumeration of traditional light microscope images containing diatoms.

  4. Photoacoustic image reconstruction: a quantitative analysis

    NASA Astrophysics Data System (ADS)

    Sperl, Jonathan I.; Zell, Karin; Menzenbach, Peter; Haisch, Christoph; Ketzer, Stephan; Marquart, Markus; Koenig, Hartmut; Vogel, Mika W.

    2007-07-01

    Photoacoustic imaging is a promising new way to generate unprecedented contrast in ultrasound diagnostic imaging. It differs from other medical imaging approaches, in that it provides spatially resolved information about optical absorption of targeted tissue structures. Because the data acquisition process deviates from standard clinical ultrasound, choice of the proper image reconstruction method is crucial for successful application of the technique. In the literature, multiple approaches have been advocated, and the purpose of this paper is to compare four reconstruction techniques. Thereby, we focused on resolution limits, stability, reconstruction speed, and SNR. We generated experimental and simulated data and reconstructed images of the pressure distribution using four different methods: delay-and-sum (DnS), circular backprojection (CBP), generalized 2D Hough transform (HTA), and Fourier transform (FTA). All methods were able to depict the point sources properly. DnS and CBP produce blurred images containing typical superposition artifacts. The HTA provides excellent SNR and allows a good point source separation. The FTA is the fastest and shows the best FWHM. In our study, we found the FTA to show the best overall performance. It allows a very fast and theoretically exact reconstruction. Only a hardware-implemented DnS might be faster and enable real-time imaging. A commercial system may also perform several methods to fully utilize the new contrast mechanism and guarantee optimal resolution and fidelity.
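
    Of the four methods compared, delay-and-sum (DnS) is the simplest to state: each image pixel accumulates each detector's signal sampled at the pixel-to-detector time of flight. The sketch below is a minimal, idealized DnS backprojection in Python/NumPy, assuming point detectors on a line, a uniform sound speed, and noise-free delta-pulse signals; it is not the authors' implementation:

```python
import numpy as np

def delay_and_sum(signals, det_x, times, grid_x, grid_z, c=1500.0):
    """For each pixel, sum each detector's signal at the
    time of flight from pixel to detector (no apodization)."""
    img = np.zeros((len(grid_z), len(grid_x)))
    for i, z in enumerate(grid_z):
        for j, x in enumerate(grid_x):
            for k, xd in enumerate(det_x):
                t = np.hypot(x - xd, z) / c  # time of flight
                idx = np.searchsorted(times, t)
                if idx < len(times):
                    img[i, j] += signals[k, idx]
    return img

# Synthetic test: one point source at (0, 0.01) m, 8 detectors on z = 0
c = 1500.0
det_x = np.linspace(-0.01, 0.01, 8)
times = np.linspace(0, 2e-5, 2000)
src = (0.0, 0.01)
signals = np.zeros((8, len(times)))
for k, xd in enumerate(det_x):
    t0 = np.hypot(src[0] - xd, src[1]) / c
    signals[k, np.searchsorted(times, t0)] = 1.0  # ideal delta pulse

grid_x = np.linspace(-0.005, 0.005, 11)
grid_z = np.linspace(0.005, 0.015, 11)
img = delay_and_sum(signals, det_x, times, grid_x, grid_z, c)
zi, xi = np.unravel_index(np.argmax(img), img.shape)
print(grid_x[xi], grid_z[zi])  # brightest pixel lands at the true source
```

    The superposition artifacts the abstract mentions are visible in such reconstructions as arcs through each point source, since every pixel on an equal-time-of-flight arc receives the same contribution.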

  5. Analysis of scene distortions in stereoscopic images due to the variation of the ideal viewing conditions

    NASA Astrophysics Data System (ADS)

    Viale, Alberto; Villa, Dario

    2011-03-01

    Stereoscopy has recently grown greatly in popularity, and technologies for viewing stereoscopic images and movies are spreading in theaters and homes, becoming affordable even for home users. There are, however, some golden rules that users should follow to better enjoy stereoscopic images: above all, the viewing conditions should not differ too much from the ideal ones assumed during production. To let the user perceive stereo depth instead of a flat image, two different views of the same scene are shown to the subject, one seen only through the left eye and the other only through the right; the visual system does the work of merging the two images into a virtual three-dimensional scene, giving the user the perception of depth. The two images presented to the user were created, whether by image synthesis or by more traditional techniques, following the rules of perspective. These rules require some boundary conditions to be made explicit, such as eye separation, field of view, parallax distance, and viewer position and orientation. In this paper we study how deviation of the viewer's position and orientation from the ideal ones, expressed as specified parameters in the image creation process, affects the correctness of the reconstruction of the three-dimensional virtual scene.

  6. Model-based estimation of breast percent density in raw and processed full-field digital mammography images from image-acquisition physics and patient-image characteristics

    NASA Astrophysics Data System (ADS)

    Keller, Brad M.; Nathan, Diane L.; Conant, Emily F.; Kontos, Despina

    2012-03-01

    Breast percent density (PD%), as measured mammographically, is one of the strongest known risk factors for breast cancer. While the majority of studies to date have focused on PD% assessment from digitized film mammograms, digital mammography (DM) is becoming increasingly common, and allows for direct PD% assessment at the time of imaging. This work investigates the accuracy of a generalized linear model-based (GLM) estimation of PD% from raw and post-processed digital mammograms, utilizing image acquisition physics, patient characteristics and gray-level intensity features of the specific image. The model is trained in a leave-one-woman-out fashion on a series of 81 cases for which bilateral, mediolateral-oblique DM images were available in both raw and post-processed format. Baseline continuous and categorical density estimates were provided by a trained breast-imaging radiologist. Regression analysis is performed and Pearson's correlation, r, and Cohen's kappa, κ, are computed. The GLM PD% estimation model performed well on both processed (r=0.89, p<0.001) and raw (r=0.75, p<0.001) images. Model agreement with radiologist-assigned density categories was also high for processed (κ=0.79, p<0.001) and raw (κ=0.76, p<0.001) images. Model-based prediction of breast PD% could allow for a reproducible estimation of breast density, providing a rapid risk assessment tool for clinical practice.
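
    The modelling step can be illustrated with ordinary least squares standing in for the paper's generalized linear model, evaluated leave-one-out as described; the feature columns and coefficients below are invented stand-ins for the acquisition-physics and gray-level features, not the study's actual variables:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 81                                     # same cohort size as the study
X = np.column_stack([
    np.ones(n),                            # intercept
    rng.uniform(24, 32, n),                # stand-in for an acquisition factor
    rng.uniform(0, 1, n),                  # stand-in for a gray-level feature
])
beta_true = np.array([5.0, 0.8, 30.0])     # invented ground-truth model
y = X @ beta_true + rng.normal(0, 1.0, n)  # "radiologist PD%" with noise

preds = np.empty(n)
for i in range(n):                         # leave-one-woman-out training
    keep = np.arange(n) != i
    beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    preds[i] = X[i] @ beta

r = np.corrcoef(preds, y)[0, 1]            # Pearson's r, as in the paper
print(round(r, 2))
```

    Leave-one-out evaluation matters here because the same 81 cases supply both training and test data; scoring each case with a model that never saw it avoids an optimistic bias in r.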

  7. Textural and Mineralogical Analysis of Volcanic Rocks by µ-XRF Mapping.

    PubMed

    Germinario, Luigi; Cossio, Roberto; Maritan, Lara; Borghi, Alessandro; Mazzoli, Claudio

    2016-06-01

    In this study, µ-XRF was applied as a novel surface technique for quick acquisition of elemental X-ray maps of rocks, image analysis of which provides quantitative information on texture and rock-forming minerals. Bench-top µ-XRF is cost-effective, fast, and non-destructive, can be applied to both large (up to a few tens of cm) and fragile samples, and yields major and trace element analysis with good sensitivity. Here, X-ray mapping was performed with a resolution of 103.5 µm and spot size of 30 µm over sample areas of about 5×4 cm of Euganean trachyte, a volcanic porphyritic rock from the Euganean Hills (NE Italy) traditionally used in cultural heritage. The relative abundance of phenocrysts and groundmass, as well as the size and shape of the various mineral phases, were obtained from image analysis of the elemental maps. The quantified petrographic features allowed identification of various extraction sites, revealing an objective method for archaeometric provenance studies exploiting µ-XRF imaging.

  8. Observation of FeGe skyrmions by electron phase microscopy with hole-free phase plate

    NASA Astrophysics Data System (ADS)

    Kotani, Atsuhiro; Harada, Ken; Malac, Marek; Salomons, Mark; Hayashida, Misa; Mori, Shigeo

    2018-05-01

    We report the application of a hole-free phase plate (HFPP) to the imaging of magnetic skyrmion lattices. Using HFPP imaging, we observed skyrmions in FeGe and obtained phase contrast images that reflect the sample's magnetization distribution. According to the Aharonov-Bohm effect, the electron phase is shifted by the magnetic flux due to sample magnetization. Differential processing of the intensity in a HFPP image allowed us to successfully reconstruct the magnetization map of the skyrmion lattice. Furthermore, the calculated phase shift due to the magnetization of the thin film was consistent with that measured by electron holography, demonstrating that HFPP imaging can be used for analysis of magnetic fields and electrostatic potential distributions at the nanoscale.

  9. Deep and Structured Robust Information Theoretic Learning for Image Analysis.

    PubMed

    Deng, Yue; Bao, Feng; Deng, Xuesong; Wang, Ruiping; Kong, Youyong; Dai, Qionghai

    2016-07-07

    This paper presents a robust information theoretic (RIT) model to reduce uncertainties, i.e., missing and noisy labels, in general discriminative data representation tasks. The fundamental pursuit of our model is to simultaneously learn a transformation function and a discriminative classifier that maximize the mutual information of data and their labels in the latent space. In this general paradigm, we discuss three types of RIT implementation: linear subspace embedding, deep transformation, and structured sparse learning. In practice, the RIT and deep RIT are used to solve the image categorization task, and their performance is verified on various benchmark datasets. The structured sparse RIT is further applied to a medical image analysis task, brain MRI segmentation, which allows group-level feature selection on the brain tissues.

  10. Measurement of Galactic Logarithmic Spiral Arm Pitch Angle Using Two-dimensional Fast Fourier Transform Decomposition

    NASA Astrophysics Data System (ADS)

    Davis, Benjamin L.; Berrier, Joel C.; Shields, Douglas W.; Kennefick, Julia; Kennefick, Daniel; Seigar, Marc S.; Lacy, Claud H. S.; Puerari, Ivânio

    2012-04-01

    A logarithmic spiral is a prominent feature appearing in a majority of observed galaxies. This feature has long been associated with the traditional Hubble classification scheme, but historical quotes of pitch angle of spiral galaxies have been almost exclusively qualitative. We have developed a methodology, utilizing two-dimensional fast Fourier transformations of images of spiral galaxies, in order to isolate and measure the pitch angles of their spiral arms. Our technique provides a quantitative way to measure this morphological feature. This will allow comparison of spiral galaxy pitch angle to other galactic parameters and test spiral arm genesis theories. In this work, we detail our image processing and analysis of spiral galaxy images and discuss the robustness of our analysis techniques.
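
    The paper measures pitch angle by 2D fast Fourier decomposition of galaxy images; as a simplified illustration of the underlying geometry only, a logarithmic spiral satisfies r = r0·exp(θ·tan φ), where φ is the pitch angle, so φ for a clean synthetic arm can be recovered from the slope of ln r versus θ:

```python
import numpy as np

# A logarithmic spiral satisfies r = r0 * exp(theta * tan(phi)),
# phi being the pitch angle, so ln(r) is linear in theta with
# slope tan(phi).
phi_true = np.radians(15.0)
theta = np.linspace(0, 4 * np.pi, 400)
r = 1.0 * np.exp(theta * np.tan(phi_true))   # noise-free synthetic arm

slope, intercept = np.polyfit(theta, np.log(r), 1)
phi_est = np.degrees(np.arctan(slope))
print(phi_est)  # ~15 degrees, the pitch angle we put in
```

    The FFT approach in the paper generalizes this idea to whole images, where the arm is not given as a clean curve: log-spiral basis functions of different pitch pick out the dominant pitch angle as the strongest Fourier mode.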

  11. Performance of an image analysis processing system for hen tracking in an environmental preference chamber.

    PubMed

    Kashiha, Mohammad Amin; Green, Angela R; Sales, Tatiana Glogerley; Bahr, Claudia; Berckmans, Daniel; Gates, Richard S

    2014-10-01

    Image processing systems have been widely used in monitoring livestock for many applications, including identification, tracking, behavior analysis, occupancy rates, and activity calculations. The primary goal of this work was to quantify image processing performance when monitoring laying hens by comparing length of stay in each compartment as detected by the image processing system with the actual occurrences registered by human observations. In this work, an image processing system was implemented and evaluated for use in an environmental animal preference chamber to detect hen navigation between 4 compartments of the chamber. One camera was installed above each compartment to produce top-view images of the whole compartment. An ellipse-fitting model was applied to captured images to detect whether the hen was present in a compartment. During a choice-test study, mean ± SD success detection rates of 95.9 ± 2.6% were achieved when considering total duration of compartment occupancy. These results suggest that the image processing system is currently suitable for determining the response measures for assessing environmental choices. Moreover, the image processing system offered a comprehensive analysis of occupancy while substantially reducing data processing time compared with the time-intensive alternative of manual video analysis. The above technique was used to monitor ammonia aversion in the chamber. As a preliminary pilot study, different levels of ammonia were applied to different compartments while hens were allowed to navigate between compartments. Using the automated monitor tool to assess occupancy, a negative trend of compartment occupancy with ammonia level was revealed, though further examination is needed. ©2014 Poultry Science Association Inc.

  12. A combined use of multispectral and SAR images for ship detection and characterization through object based image analysis

    NASA Astrophysics Data System (ADS)

    Aiello, Martina; Gianinetto, Marco

    2017-10-01

    Marine routes carry a huge portion of commercial and human trade, so surveillance, security and environmental protection are gaining increasing importance. Because it can overcome the limits imposed by terrestrial means of monitoring, ship detection from satellite has recently prompted renewed interest for continuous monitoring of illegal activities. This paper describes an automatic Object-Based Image Analysis (OBIA) approach to detect vessels made of different materials in various sea environments. The combined use of multispectral and SAR images allows regular observation unrestricted by lighting and atmospheric conditions, with complementarity in terms of geographic coverage and geometric detail. The method adopts a region-growing algorithm to segment the image into homogeneous objects, which are then classified through a decision tree algorithm based on spectral and geometrical properties. A spatial analysis then retrieves each vessel's position, length and heading and associates a speed range. Optimization of the image processing chain is performed by selecting image tiles through a statistical index. Vessel candidates are detected over amplitude SAR images using an adaptive-threshold Constant False Alarm Rate (CFAR) algorithm prior to the object-based analysis. Validation is carried out by comparing the retrieved parameters with the information provided by the Automatic Identification System (AIS), when available, or with manual measurement when AIS data are not available. Length estimation shows R2=0.85 and heading estimation R2=0.92, computed as the average of R2 values obtained for optical and radar images.
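
    The CFAR pre-detection step can be sketched in one dimension. The paper does not state which CFAR variant was used, so the cell-averaging (CA-CFAR) version below, with invented guard/training/scale parameters and Rayleigh-distributed synthetic clutter, is an assumption:

```python
import numpy as np

def ca_cfar(x, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR: a cell is a detection if it exceeds
    `scale` times the mean of the training cells around it
    (guard cells adjacent to the cell under test are excluded)."""
    n = len(x)
    hits = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        window = np.concatenate([x[lo:max(0, i - guard)],
                                 x[min(n, i + guard + 1):hi]])
        if window.size and x[i] > scale * window.mean():
            hits[i] = True
    return hits

# Sea-clutter-like background with two bright "vessels"
rng = np.random.default_rng(0)
clutter = rng.rayleigh(1.0, 200)
clutter[50] = 40.0
clutter[150] = 35.0
hits = ca_cfar(clutter)
print(np.nonzero(hits)[0])  # detections at the injected targets
```

    Because the threshold adapts to the local clutter mean, the false-alarm rate stays roughly constant across sea states, which is the property that makes CFAR the standard pre-screen before object-based analysis.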

  13. A new set of wavelet- and fractals-based features for Gleason grading of prostate cancer histopathology images

    NASA Astrophysics Data System (ADS)

    Mosquera Lopez, Clara; Agaian, Sos

    2013-02-01

    Prostate cancer detection and staging is an important step towards patient treatment selection. Advancements in digital pathology allow the application of new quantitative image analysis algorithms for computer-assisted diagnosis (CAD) on digitized histopathology images. In this paper, we introduce a new set of features to automatically grade pathological images using the well-known Gleason grading system. The goal of this study is to classify biopsy images belonging to Gleason patterns 3, 4, and 5 by using a combination of wavelet and fractal features. For image classification we use pairwise coupling Support Vector Machine (SVM) classifiers. The accuracy of the system, which is close to 97%, is estimated through three different cross-validation schemes. The proposed system offers the potential for automating classification of histological images and supporting prostate cancer diagnosis.

  14. Image-guided automatic triggering of a fractional CO2 laser in aesthetic procedures.

    PubMed

    Wilczyński, Sławomir; Koprowski, Robert; Wiernek, Barbara K; Błońska-Fajfrowska, Barbara

    2016-09-01

    Laser procedures in dermatology and aesthetic medicine require manual laser triggering, which leads to pulse overlapping and side effects. Automatic laser triggering based on image analysis can ensure a precise fit of each successive dose of radiation. A fractional CO2 laser was used in the study, and 500 images of the skin of healthy subjects were acquired. Automatic triggering was initiated by an application together with a camera that tracks and analyses the skin in visible light. The tracking algorithm uses image analysis methods to overlap images. After locating the characteristic points in the analysed adjacent areas, the correspondence of graphs is found. The point coordinates derived from the images are the vertices of graphs with respect to which isomorphism is sought. When the correspondence of graphs is found, it is possible to overlap the neighbouring parts of the image. The proposed method of laser triggering, owing to the automatic image fitting method, allows for 100% repeatability. To meet this requirement, there must be at least 13 graph vertices obtained from the image; for this number of vertices, the analysis time for a single image is less than 0.5 s. Applied in practice, the proposed method may help reduce the number of side effects during dermatological laser procedures caused by laser pulse overlapping. In addition, it reduces treatment time and enables new treatment techniques based on controlled, precise laser pulse overlapping. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Label-free observation of tissues by high-speed stimulated Raman spectral microscopy and independent component analysis

    NASA Astrophysics Data System (ADS)

    Ozeki, Yasuyuki; Otsuka, Yoichi; Sato, Shuya; Hashimoto, Hiroyuki; Umemura, Wataru; Sumimura, Kazuhiko; Nishizawa, Norihiko; Fukui, Kiichi; Itoh, Kazuyoshi

    2013-02-01

    We have developed a video-rate stimulated Raman scattering (SRS) microscope with frame-by-frame wavenumber tunability. The system uses a 76-MHz picosecond Ti:sapphire laser and a subharmonically synchronized, 38-MHz Yb fiber laser. The Yb fiber laser pulses are spectrally sliced by a fast wavelength-tunable filter, which consists of a galvanometer scanner, a 4-f optical system and a reflective grating. The spectral resolution of the filter is ~3 cm-1. The wavenumber was scanned from 2800 to 3100 cm-1 with an arbitrary waveform synchronized to the frame trigger. For imaging, we introduced an 8-kHz resonant scanner and a galvanometer scanner. We were able to acquire SRS images of 500 x 480 pixels at a frame rate of 30.8 frames/s. These images were then processed by principal component analysis followed by a modified algorithm of independent component analysis. This algorithm allows blind separation of constituents with overlapping Raman bands from SRS spectral images. The independent component (IC) spectra give spectroscopic information, and IC images can be used to produce pseudo-color images. We demonstrate various label-free imaging modalities such as 2D spectral imaging of the rat liver, two-color 3D imaging of a vessel in the rat liver, and spectral imaging of several sections of intestinal villi in the mouse. Various structures in the tissues such as lipid droplets, cytoplasm, fibrous texture, nuclei, and water-rich regions were successfully visualized.
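
    The PCA stage of the PCA-then-ICA pipeline can be sketched via the SVD; the ICA refinement (the authors' modified algorithm) is omitted here, and the two Gaussian "constituent" bands are synthetic stand-ins for overlapping Raman bands:

```python
import numpy as np

def pca(spectra, n_components):
    """PCA via SVD: rows are per-pixel spectra, columns wavenumbers.
    Returns (scores, components) -- the dimensionality-reduction
    step that precedes ICA in the pipeline described above."""
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return u[:, :n_components] * s[:n_components], vt[:n_components]

# Two synthetic "constituents" with overlapping bands, mixed randomly
wn = np.linspace(2800, 3100, 60)
band1 = np.exp(-((wn - 2850) / 20.0) ** 2)
band2 = np.exp(-((wn - 2930) / 25.0) ** 2)
rng = np.random.default_rng(1)
weights = rng.uniform(0, 1, (500, 2))
spectra = weights @ np.vstack([band1, band2])

scores, comps = pca(spectra, 2)
recon = scores @ comps + spectra.mean(axis=0)
print(np.abs(recon - spectra).max())  # ~0: two components suffice
```

    PCA alone only decorrelates the data; the ICA step is what recovers statistically independent constituent spectra from the reduced representation, which is why the authors chain the two.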

  16. PDS Archive Release of Apollo 11, Apollo 12, and Apollo 17 Lunar Rock Sample Images

    NASA Technical Reports Server (NTRS)

    Garcia, P. A.; Stefanov, W. L.; Lofgren, G. E.; Todd, N. S.; Gaddis, L. R.

    2013-01-01

    Scientists at the Johnson Space Center (JSC) Lunar Sample Laboratory, Information Resources Directorate, and Image Science & Analysis Laboratory have been working to digitize (scan) the original film negatives of Apollo Lunar Rock Sample photographs [1, 2]. The rock samples, and associated regolith and lunar core samples, were obtained during the Apollo 11, 12, 14, 15, 16 and 17 missions. The images allow scientists to view the individual rock samples in their original or subdivided state prior to requesting physical samples for their research. In cases where access to the actual physical samples is not practical, the images provide an alternate mechanism for study of the subject samples. As the negatives are being scanned, they have been formatted and documented for permanent archive in the NASA Planetary Data System (PDS). The Astromaterials Research and Exploration Science Directorate (which includes the Lunar Sample Laboratory and Image Science & Analysis Laboratory) at JSC is working collaboratively with the Imaging Node of the PDS on the archiving of these valuable data. The PDS Imaging Node is now pleased to announce the release of the image archives for Apollo missions 11, 12, and 17.

  17. Instant Grainification: Real-Time Grain-Size Analysis from Digital Images in the Field

    NASA Astrophysics Data System (ADS)

    Rubin, D. M.; Chezar, H.

    2007-12-01

    Over the past few years, digital cameras and underwater microscopes have been developed to collect in-situ images of sand-sized bed sediment, and software has been developed to measure grain size from those digital images (Chezar and Rubin, 2004; Rubin, 2004; Rubin et al., 2006). Until now, all image processing and grain-size analysis was done back in the office, where images were uploaded from cameras and processed on desktop computers. Computer hardware has become small and rugged enough to process images in the field, which for the first time allows real-time grain-size analysis of sand-sized bed sediment. We present such a system consisting of a weatherproof tablet computer, open-source image-processing software (autocorrelation code of Rubin, 2004, running under Octave and Cygwin), and a digital camera with macro lens. Chezar, H., and Rubin, D., 2004, Underwater microscope system: U.S. Patent and Trademark Office, patent number 6,680,795, January 20, 2004. Rubin, D.M., 2004, A simple autocorrelation algorithm for determining grain size from digital images of sediment: Journal of Sedimentary Research, v. 74, p. 160-165. Rubin, D.M., Chezar, H., Harney, J.N., Topping, D.J., Melis, T.S., and Sherwood, C.R., 2006, Underwater microscope for measuring spatial and temporal changes in bed-sediment grain size: USGS Open-File Report 2006-1360.
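
    Rubin's (2004) algorithm relates grain size to how quickly an image decorrelates with spatial offset: coarse sediment stays correlated over larger shifts than fine sediment. Below is a minimal Python/NumPy sketch of that autocorrelation curve, run on synthetic textures rather than real sediment images (calibration against sieved samples, as in the paper, is omitted):

```python
import numpy as np

def autocorrelation_curve(img, max_offset):
    """Correlation of an image with itself shifted by 1..max_offset
    pixels -- the basis of Rubin's (2004) grain-size algorithm:
    coarse sediment decorrelates more slowly than fine sediment."""
    curve = []
    for d in range(1, max_offset + 1):
        a = img[:, :-d].ravel()
        b = img[:, d:].ravel()
        curve.append(np.corrcoef(a, b)[0, 1])
    return np.array(curve)

# Synthetic "sediment": fine = white noise, coarse = smoothed noise
rng = np.random.default_rng(2)
noise = rng.normal(size=(64, 64))
fine = noise
kernel = np.ones(9) / 9.0
coarse = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, "same"), 1, noise)

fine_curve = autocorrelation_curve(fine, 5)
coarse_curve = autocorrelation_curve(coarse, 5)
print(fine_curve[0], coarse_curve[0])  # coarse stays correlated longer
```

    In the field system described above, curves like these are compared against a calibration set of known grain sizes to report a size estimate in real time.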

  18. Combined application of imaging techniques for the characterization and authentication of ancient weapons

    NASA Astrophysics Data System (ADS)

    Salvemini, Filomena; Grazzi, Francesco; Kardjilov, Nikolay; Wieder, Frank; Manke, Ingo; Edge, David; Williams, Alan; Zoppi, Marco

    2017-05-01

    Non-invasive experimental methods play an important role in the field of cultural heritage. Benefiting from the technical progress in recent years, neutron imaging has been demonstrated to complement effectively studies based on surface analysis, allowing for a non-invasive characterization of the whole three-dimensional volume. This study focuses on a kris and a kanjar, two weapons from ancient Asia, to show the potential of the combined use of X-ray and neutron imaging techniques for the characterisation of the manufacturing methods and the authentication of objects of cultural and historical interest.

  19. Analysis of lithology: Vegetation mixes in multispectral images

    NASA Technical Reports Server (NTRS)

    Adams, J. B.; Smith, M.; Adams, J. D.

    1982-01-01

    Discrimination and identification of lithologies from multispectral images is discussed. Rock/soil identification can be facilitated by removing the component of the signal in the images that is contributed by the vegetation. Mixing models were developed to predict the spectra of combinations of pure end members, and those models were refined using laboratory measurements of real mixtures. Models in use include a simple linear (checkerboard) mix, granular mixing, semi-transparent coatings, and combinations of the above. The use of interactive computer techniques that allow quick comparison of the spectrum of a pixel stack (in a multiband set) with laboratory spectra is discussed.

  20. Non-destructive terahertz imaging of illicit drugs using spectral fingerprints

    NASA Astrophysics Data System (ADS)

    Kawase, Kodo; Ogawa, Yuichi; Watanabe, Yuuki; Inoue, Hiroyuki

    2003-10-01

    The absence of non-destructive inspection techniques for illicit drugs hidden in mail envelopes has resulted in such drugs being smuggled across international borders freely. We have developed a novel basic technology for terahertz imaging, which allows detection and identification of drugs concealed in envelopes, by introducing the component spatial pattern analysis. The spatial distributions of the targets are obtained from terahertz multispectral transillumination images, using absorption spectra measured with a tunable terahertz-wave source. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.

  1. Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data

    PubMed Central

    Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.

    2005-01-01

    The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787

  2. Invariant approach to the character classification

    NASA Astrophysics Data System (ADS)

    Šariri, Kristina; Demoli, Nazif

    2008-04-01

    Image moment analysis is a very useful tool that allows image description invariant to translation, rotation, scale change, and some types of image distortion. The aim of this work was the development of a simple method for fast and reliable classification of characters using Hu's and affine moment invariants. Euclidean distance was used as the discrimination feature, with statistical parameters estimated. The method was tested on classification of Times New Roman letters as well as sets of handwritten characters. It is shown that using all of Hu's invariants and three affine invariants as the discrimination set improves the recognition rate by 30%.
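
    As an illustration, the first two Hu invariants can be computed from normalized central moments and used for nearest-template classification by Euclidean distance. This Python/NumPy sketch covers only a subset of the feature set the paper uses (all seven Hu invariants plus three affine invariants), and the glyphs are synthetic:

```python
import numpy as np

def hu_first_two(img):
    """First two Hu moment invariants of a 2-D intensity image
    (invariant to translation, scale and rotation)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):                      # central moment
        return (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
    def eta(p, q):                     # normalized central moment
        return mu(p, q) / m00 ** ((p + q) / 2 + 1)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([phi1, phi2])

def classify(img, templates):
    """Nearest template by Euclidean distance in invariant space."""
    feats = hu_first_two(img)
    return min(templates, key=lambda k: np.linalg.norm(feats - templates[k]))

# A bar glyph and a square glyph as two "character" classes
bar = np.zeros((32, 32)); bar[8:24, 14:18] = 1.0
square = np.zeros((32, 32)); square[10:22, 10:22] = 1.0
templates = {"bar": hu_first_two(bar), "square": hu_first_two(square)}
rotated_bar = bar.T                    # 90-degree rotation of the bar
print(classify(rotated_bar, templates))  # -> "bar"
```

    Because phi1 and phi2 are built from rotation-invariant combinations of second-order moments, the rotated bar lands at exactly the same point in feature space as the upright one.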

  3. Systems Imaging of the Immune Synapse.

    PubMed

    Ambler, Rachel; Ruan, Xiangtao; Murphy, Robert F; Wülfing, Christoph

    2017-01-01

    Three-dimensional live cell imaging of the interaction of T cells with antigen-presenting cells (APCs) visualizes the subcellular distributions of signaling intermediates during T cell activation at thousands of resolved positions within a cell. These information-rich maps of local protein concentrations are a valuable resource in understanding T cell signaling. Here, we describe a protocol for the efficient acquisition of such imaging data and their computational processing to create four-dimensional maps of local concentrations. This protocol allows quantitative analysis of T cell signaling as it occurs inside live cells with resolution in time and space across thousands of cells.

  4. High Definition Confocal Imaging Modalities for the Characterization of Tissue-Engineered Substitutes.

    PubMed

    Mayrand, Dominique; Fradette, Julie

    2018-01-01

    Optimal imaging methods are necessary in order to perform a detailed characterization of thick tissue samples from either native or engineered tissues. Tissue-engineered substitutes are featuring increasing complexity including multiple cell types and capillary-like networks. Therefore, technical approaches allowing the visualization of the inner structural organization and cellular composition of tissues are needed. This chapter describes an optical clearing technique which facilitates the detailed characterization of whole-mount samples from skin and adipose tissues (ex vivo tissues and in vitro tissue-engineered substitutes) when combined with spectral confocal microscopy and quantitative analysis on image renderings.

  5. Remote Sensing and Imaging Physics

    DTIC Science & Technology

    2012-03-07

    Model analysis process: wire-frame shape model with assumed a priori knowledge; no material BRDF library employed in retrieval. Report date: 07 Mar 2012. Addresses imaging estimation problems and allows properties of local maxima (speckles) to be derived from the Kolmogorov model of atmospheric turbulence.

  6. Effect of Picture Archiving and Communication System Image Manipulation on the Agreement of Chest Radiograph Interpretation in the Neonatal Intensive Care Unit.

    PubMed

    Castro, Denise A; Naqvi, Asad Ahmed; Vandenkerkhof, Elizabeth; Flavin, Michael P; Manson, David; Soboleski, Donald

    2016-01-01

    Variability in image interpretation has been attributed to differences in the interpreters' knowledge base, experience level, and access to the clinical scenario. Picture archiving and communication system (PACS) software allows the user to manipulate images while developing their impression of the radiograph. The aim of this study was to determine the agreement of chest radiograph (CXR) impressions among radiologists and neonatologists and to determine the effect of image manipulation with PACS on report impression. This prospective cohort study included 60 patients from the Neonatal Intensive Care Unit undergoing CXRs. Three radiologists and three neonatologists reviewed two consecutive frontal CXRs of each patient. Each physician was allowed to manipulate the images as needed to provide a decision of "improved," "unchanged," or "disease progression" for each patient's lung disease. Each physician then repeated the process; this time, they were not allowed to individually manipulate the images, but an independent radiologist preset the image brightness and contrast to best optimize the CXR appearance. Percent agreement and opposing reporting views were calculated between all six physicians for each of the two methods (allowing and not allowing image manipulation). One hundred percent agreement in image impression between all six observers was seen in only 5% of cases when image manipulation was allowed, and in 13% of cases when there was no manipulation of the images. Agreement in CXR interpretation is poor; the ability to manipulate the images on PACS results in a decrease in agreement in the interpretation of these studies. New methods to standardize image appearance and allow improved comparison with previous studies should be sought to improve interpretation consistency among clinicians and advance patient care.
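
    The brightness/contrast manipulation that PACS offers is typically a window/level transform on the raw pixel values. A minimal sketch of that mapping (the window and level values here are arbitrary examples, not clinical presets):

```python
import numpy as np

def window_level(img, window, level):
    """Map raw pixel values to the display range [0, 255] using a
    window/level transform, the standard PACS brightness/contrast
    control: values below level - window/2 clip to black, values
    above level + window/2 clip to white."""
    lo = level - window / 2.0
    out = (img - lo) / window * 255.0
    return np.clip(out, 0, 255)

raw = np.array([100.0, 500.0, 900.0, 1300.0])
display = window_level(raw, window=800, level=700)
print(display)  # clipped to [0, 255]: 0, 63.75, 191.25, 255
```

    Two observers choosing different window/level settings see different renderings of the same raw data, which is one plausible mechanism for the interpretation disagreement the study measures.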

  7. Impact of CT perfusion imaging on the assessment of peripheral chronic pulmonary thromboembolism: clinical experience in 62 patients.

    PubMed

    Le Faivre, Julien; Duhamel, Alain; Khung, Suonita; Faivre, Jean-Baptiste; Lamblin, Nicolas; Remy, Jacques; Remy-Jardin, Martine

    2016-11-01

    To evaluate the impact of CT perfusion imaging on the detection of peripheral chronic pulmonary embolism (CPE), 62 patients underwent a dual-energy chest CT angiographic examination with reconstruction of both diagnostic and perfusion images, enabling depiction of vascular features of peripheral CPE on diagnostic images and of perfusion defects (20 segments/patient; total: 1240 segments examined). The interpretation of diagnostic images was of two types: (a) standard (i.e., based on cross-sectional images alone) or (b) detailed (i.e., based on cross-sectional images and MIPs). The segment-based analysis showed (a) 1179 segments analyzable on both imaging modalities and 61 segments rated as nonanalyzable on perfusion images; and (b) the percentage of diseased segments was increased by 7.2 % when perfusion imaging was compared with the detailed reading of diagnostic images, and by 26.6 % when compared with the standard reading. At the patient level, the extent of peripheral CPE was higher on perfusion imaging, with a greater impact when compared with the standard reading of diagnostic images (number of patients with a greater number of diseased segments: n = 45; 72.6 % of the study population). Perfusion imaging allows recognition of a greater extent of peripheral CPE than diagnostic imaging. • Dual-energy computed tomography generates standard diagnostic imaging and lung perfusion analysis. • Depiction of CPE on central arteries relies on standard diagnostic imaging. • Detection of peripheral CPE is improved by perfusion imaging.

  8. Semi-automatic mapping of fault rocks on a Digital Outcrop Model, Gole Larghe Fault Zone (Southern Alps, Italy)

    NASA Astrophysics Data System (ADS)

    Mittempergher, Silvia; Vho, Alice; Bistacchi, Andrea

    2016-04-01

    A quantitative analysis of fault-rock distribution in outcrops of exhumed fault zones is of fundamental importance for studies of fault zone architecture, fault and earthquake mechanics, and fluid circulation. We present a semi-automatic workflow for fault-rock mapping on a Digital Outcrop Model (DOM), developed on the Gole Larghe Fault Zone (GLFZ), a well exposed strike-slip fault in the Adamello batholith (Italian Southern Alps). The GLFZ has been exhumed from ca. 8-10 km depth and consists of hundreds of individual seismogenic slip surfaces lined by green cataclasites (crushed wall rocks cemented by hydrothermal epidote and K-feldspar) and black pseudotachylytes (solidified frictional melts, considered a marker for seismic slip). A digital model of selected outcrop exposures was reconstructed with photogrammetric techniques, using a large number of high-resolution digital photographs processed with the VisualSFM software. The resulting DOM has a resolution of up to 0.2 mm/pixel. Most of the outcrop was imaged with photographs each covering a 1 x 1 m2 area, while selected structural features, such as sidewall ripouts or stepovers, were covered with higher-resolution images of 30 x 40 cm2 areas. Image processing algorithms were preliminarily tested using the ImageJ-Fiji package, and a Matlab workflow was then developed to process a large collection of images sequentially. Particularly in the detailed 30 x 40 cm images, cataclasites and hydrothermal veins were successfully identified using spectral analysis in the RGB and HSV color spaces. This allows mapping of the network of cataclasites and veins that provided the pathway for hydrothermal fluid circulation, and also estimation of the volume of mineralization, since the thickness of cataclasites and veins can be measured on the outcrop surface.
The spectral signature of pseudotachylyte veins is indistinguishable from that of biotite grains in the wall rock (tonalite), so we tested morphological analysis tools to discriminate pseudotachylyte from biotite. In higher-resolution images this could be done using circularity and size thresholds; however, it could not easily be automated, since the thresholds must be adjusted by the interpreter for almost every image. In the 1 x 1 m images the resolution is generally too low to distinguish cataclasite from pseudotachylyte, so the fault rocks were usually treated together. For this analysis we developed a fully automated workflow that, after applying noise correction, classification, and skeletonization algorithms, returns labeled edge images of fault segments together with vector polylines associated with edge properties. Vectors and edge properties provide a useful format for further quantitative analysis, for instance classifying fault segments on structural criteria, detecting continuous fault traces, and identifying the type of fault/fracture terminations. This approach allows the collection of statistically relevant datasets for further quantitative structural analysis.
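
The RGB/HSV spectral classification step can be sketched as a per-pixel hue and saturation threshold; the green band and saturation cutoff below are hypothetical stand-ins for the study's calibrated thresholds:

```python
import colorsys

def classify_pixel(r, g, b, hue_range=(0.20, 0.45), min_sat=0.25):
    """Label a pixel as vein/cataclasite if its hue falls in a green band.

    r, g, b are in [0, 1]; the hue band and saturation cutoff are
    illustrative values, not those used in the study.
    """
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return hue_range[0] <= h <= hue_range[1] and s >= min_sat

# A saturated green pixel falls in the band; a neutral gray one does not.
print(classify_pixel(0.2, 0.8, 0.3))  # True
print(classify_pixel(0.5, 0.5, 0.5))  # False
```
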

  9. Modeling semantic aspects for cross-media image indexing.

    PubMed

    Monay, Florent; Gatica-Perez, Daniel

    2007-10-01

    To go beyond the query-by-example paradigm in image retrieval, there is a need for semantic indexing of large image collections for intuitive text-based image search. Different models have been proposed to learn the dependencies between the visual content of an image set and the associated text captions, allowing for the automatic creation of semantic indices for unannotated images. The task, however, remains unsolved. In this paper, we present three alternatives to learn a Probabilistic Latent Semantic Analysis (PLSA) model for annotated images, and evaluate their respective performance for automatic image indexing. Under the PLSA assumptions, an image is modeled as a mixture of latent aspects that generates both image features and text captions, and we investigate three ways to learn the mixture of aspects. We also propose a more discriminative image representation than the traditional Blob histogram, concatenating quantized local color information and quantized local texture descriptors. The first learning procedure of a PLSA model for annotated images is a standard EM algorithm, which implicitly assumes that the visual and the textual modalities can be treated equivalently. The other two models are based on an asymmetric PLSA learning, allowing one to constrain the definition of the latent space on the visual or on the textual modality. We demonstrate that the textual modality is more appropriate for learning a semantically meaningful latent space, which translates into improved annotation performance. A comparison of our learning algorithms with recent methods on a standard dataset is presented, and a detailed evaluation of the performance shows the validity of our framework.
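
The standard (symmetric) EM procedure mentioned above can be sketched in pure Python on a toy co-occurrence matrix; the aspect count, data, and initialization are illustrative, not the paper's code:

```python
import random

def plsa(n, K, iters=50, seed=0):
    """Symmetric PLSA fit by EM on a count matrix n[d][w].

    Models p(w, d) = sum_z p(z) p(w|z) p(d|z); returns (pz, pw_z, pd_z).
    A toy sketch of the standard EM variant discussed in the abstract.
    """
    rng = random.Random(seed)
    D, W = len(n), len(n[0])
    norm = lambda v: [x / sum(v) for x in v]
    pz = [1.0 / K] * K
    pw_z = [norm([rng.random() + 0.1 for _ in range(W)]) for _ in range(K)]
    pd_z = [norm([rng.random() + 0.1 for _ in range(D)]) for _ in range(K)]
    for _ in range(iters):
        nz = [0.0] * K
        nw = [[0.0] * W for _ in range(K)]
        nd = [[0.0] * D for _ in range(K)]
        for d in range(D):
            for w in range(W):
                if n[d][w] == 0:
                    continue
                # E-step: posterior over aspects for this (word, doc) pair
                post = norm([pz[z] * pw_z[z][w] * pd_z[z][d] for z in range(K)])
                for z in range(K):
                    c = n[d][w] * post[z]
                    nz[z] += c
                    nw[z][w] += c
                    nd[z][d] += c
        # M-step: re-estimate the multinomials from expected counts
        pz = norm(nz)
        pw_z = [norm(row) for row in nw]
        pd_z = [norm(row) for row in nd]
    return pz, pw_z, pd_z

# Toy corpus with two clearly separated "topics".
n = [[5, 4, 0, 0], [4, 5, 0, 0], [0, 0, 5, 4], [0, 0, 4, 5]]
pz, pw_z, pd_z = plsa(n, K=2)
```

The asymmetric variants the paper favors would instead fix one modality's conditional distributions while folding in the other; that refinement is omitted here.
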

  10. Forensic analysis of latent fingermarks by silver-assisted LDI imaging MS on nonconductive surfaces.

    PubMed

    Lauzon, N; Dufresne, M; Beaudoin, A; Chaurand, P

    2017-06-01

    Silver-assisted laser desorption ionization (AgLDI) imaging mass spectrometry (IMS) has been demonstrated to be a useful technology for fingermark analysis, allowing for the detection of several classes of endogenous as well as exogenous compounds. Ideally, in IMS analyses, the fingermarks are deposited under controlled conditions on metallized conductive target slides. However, in forensic investigations, fingermarks are often found on a variety of nonconductive surfaces. A sputtered silver layer renders the target surface conductive, which allows the analysis of insulating surfaces by time-of-flight IMS. Ultimately, the major consideration when developing analytical methods for the analysis of latent fingermarks is their capability to be incorporated within forensic standard operational procedures. To demonstrate the potential of AgLDI IMS for forensic applications, fingermarks deposited on nonconductive surfaces commonly found during an investigation, including paper, cardboard, plastic bags and lifting tape, were first revealed by the Sûreté du Québec by using forensic enhancement techniques (FET) prior to the IMS analyses. Numerous endogenous compounds including fatty acids, cholesterol, squalene, wax esters, triglycerides and several exogenous substances were detected and imaged. Here, we show that silver sputtering can provide visual enhancements of fingerprint patterns after FET procedures through different scenarios in which AgLDI IMS can contribute to forensic investigations. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Medical image integrity control and forensics based on watermarking--approximating local modifications and identifying global image alterations.

    PubMed

    Huang, H; Coatrieux, G; Shu, H Z; Luo, L M; Roux, Ch

    2011-01-01

    In this paper we present a medical image integrity verification system that not only allows detection and approximation of malevolent local image alterations (e.g. removal or addition of findings) but is also capable of identifying the nature of global image processing applied to the image (e.g. lossy compression, filtering …). For that purpose, we propose an image signature derived from the geometric moments of pixel blocks. Such a signature is computed over regions of interest of the image and then watermarked in regions of non-interest. Image integrity analysis is conducted by comparing embedded and recomputed signatures. Local modifications, if any, are approximated through the determination of the parameters of the nearest generalized 2D Gaussian. Image moments are taken as image features and serve as inputs to a classifier trained to discriminate the type of global image processing. Experimental results with both local and global modifications illustrate the overall performance of our approach.
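
The raw geometric moments underlying such a block signature can be computed directly; the moment order and tiny block below are illustrative, and the paper's exact moment set and signature construction are not reproduced here:

```python
def block_moments(block, order=2):
    """Raw geometric moments m_pq = sum_y sum_x x**p * y**q * I(x, y)
    of a pixel block, for all p + q <= order."""
    m = {}
    for p in range(order + 1):
        for q in range(order + 1 - p):
            m[(p, q)] = sum(
                (x ** p) * (y ** q) * v
                for y, row in enumerate(block)
                for x, v in enumerate(row)
            )
    return m

block = [[1, 2], [3, 4]]
m = block_moments(block)
print(m[(0, 0)])  # 10  (total block intensity)
```
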

  12. In situ microscopy for on-line determination of biomass.

    PubMed

    Bittner, C; Wehnert, G; Scheper, T

    1998-10-05

    A sensor is presented, which allows on-line microscopic observation of microorganisms during fermentations in bioreactors. This sensor, an In Situ Microscope (ISM), consists of a direct-light microscope with a measuring chamber, integrated in a 25 mm stainless steel tube, two CCD-cameras, and two frame-grabbers. The data obtained are processed by an automatic image analysis system. The ISM is connected to the bioreactor via a standard port, and it is immersed directly in the culture liquid-in our case Saccharomyces cerevisiae in a synthetic medium. The microscopic examination of the liquid is performed in the measuring chamber, which is situated near the front end of the sensor head. The measuring chamber is opened and closed periodically. In the open state, the liquid in the bioreactor flows unrestricted through the chamber. When it closes, a defined volume of 2.2 x 10(-8) mL of the liquid becomes enclosed. After a few seconds, when the movement of the cells in the enclosed culture has stopped, they are examined with the microscope. The microscopic images of the cells are registered with the CCD-cameras and are visualized on a monitor, allowing a direct view of the cell population. After detection, the measuring chamber reopens, and the enclosed liquid is released. The images obtained are evaluated as to cell concentration, cell size, cell volume, biomass, and other relevant parameters simultaneously by automatic image analysis. With a PC (486/33 MHz), image processing takes about 15 s per image. The detection range tested when measuring cells of S. cerevisiae is about 10(6) to 10(9) cells/mL (equivalent to a biomass of 0.01 g/L to 12 g/L). The calculated biomass values correlate very well with those obtained using dry weight analysis. Furthermore, histograms can be calculated, which are comparable to those obtained by flow cytometry. Copyright 1998 John Wiley & Sons, Inc.
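
Because the chamber encloses a known volume, the concentration estimate follows directly from the cell count:

```python
def cells_per_ml(count, chamber_volume_ml):
    """Concentration from a cell count in a chamber of known volume."""
    return count / chamber_volume_ml

# With the 2.2 x 10^-8 mL chamber volume quoted above, 22 counted cells
# correspond to 10^9 cells/mL, the upper end of the tested range.
print(cells_per_ml(22, 2.2e-8))  # 1000000000.0
```
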

  13. An integrative approach for analyzing hundreds of neurons in task performing mice using wide-field calcium imaging.

    PubMed

    Mohammed, Ali I; Gritton, Howard J; Tseng, Hua-an; Bucklin, Mark E; Yao, Zhaojie; Han, Xue

    2016-02-08

    Advances in neurotechnology have been integral to the investigation of neural circuit function in systems neuroscience. Recent improvements in high-performance fluorescent sensors and scientific CMOS cameras enable optical imaging of neural networks at a much larger scale. While exciting technical advances demonstrate the potential of this technique, further improvement in data acquisition and analysis, especially improvements that allow effective processing of increasingly large datasets, would greatly promote the application of optical imaging in systems neuroscience. Here we demonstrate the ability of wide-field imaging to capture the concurrent dynamic activity of hundreds to thousands of neurons over millimeters of brain tissue in behaving mice. This system allows the visualization of morphological details at a higher spatial resolution than has previously been achieved using similar functional imaging modalities. To analyze the expansive datasets, we developed software to facilitate rapid downstream data processing. Using this system, we show that a large fraction of anatomically distinct hippocampal neurons respond to discrete environmental stimuli associated with classical conditioning, and that the observed temporal dynamics of transient calcium signals are sufficient for exploring certain spatiotemporal features of large neural networks.
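
Transient calcium signals like those described above are commonly expressed as a ΔF/F normalization of the raw fluorescence trace; this sketch uses a simple first-N-frames baseline convention, which is an assumption and not necessarily the authors' pipeline:

```python
def delta_f_over_f(trace, baseline_frames=10):
    """Convert a raw fluorescence trace to dF/F, taking the mean of the
    first baseline_frames samples as the baseline F0."""
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]

# A flat 100-unit baseline followed by a transient peaking at 150.
trace = [100.0] * 10 + [150.0, 120.0]
print(delta_f_over_f(trace)[10])  # 0.5
```
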

  14. Reconstruction of vessel structures from serial whole slide sections of murine liver samples

    NASA Astrophysics Data System (ADS)

    Schwier, Michael; Hahn, Horst K.; Dahmen, Uta; Dirsch, Olaf

    2013-03-01

    Image-based analysis of the vascular structures of murine liver samples is an important tool for scientists to understand liver physiology and morphology. Typical assessment methods are MicroCT, which allows for acquiring images of the whole organ while lacking resolution for fine details, and confocal laser scanning microscopy, which allows detailed insights into fine structures while lacking the broader context. Imaging of histological serial whole slide sections is a recent technology able to fill this gap, since it provides a fine resolution up to the cellular level, but on a whole organ scale. However, whole slide imaging is a modality providing only 2D images. Therefore the challenge is to use stacks of serial sections from which to reconstruct the 3D vessel structures. In this paper we present a semi-automatic procedure to achieve this goal. We employ an automatic method that detects vessel structures based on continuity and shape characteristics. Furthermore it supports the user to perform manual corrections where required. With our methods we were able to successfully extract and reconstruct vessel structures from a stack of 100 and a stack of 397 serial sections of a mouse liver lobe, thus proving the potential of our approach.
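
The section-to-section matching underlying such a reconstruction can be illustrated with a greedy nearest-centroid linker; the distance threshold and matching rule here are a simplified stand-in for the continuity- and shape-based detection described above:

```python
import math

def link_sections(centroids_a, centroids_b, max_dist=5.0):
    """Greedily link vessel cross-section centroids between two adjacent
    serial sections; returns (index_in_a, index_in_b) pairs."""
    links = []
    used = set()
    for i, (xa, ya) in enumerate(centroids_a):
        best, best_d = None, max_dist
        for j, (xb, yb) in enumerate(centroids_b):
            if j in used:
                continue
            d = math.hypot(xa - xb, ya - yb)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            links.append((i, best))
    return links

# The first vessel continues into the next section; the second does not.
print(link_sections([(0, 0), (10, 10)], [(1, 0), (30, 30)]))  # [(0, 0)]
```

Chaining such links across a full stack of sections yields the 3D vessel paths, with manual correction where the automatic matching fails.
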

  15. Neuroimaging to Investigate Multisystem Involvement and Provide Biomarkers in Amyotrophic Lateral Sclerosis

    PubMed Central

    Pradat, Pierre-François; El Mendili, Mohamed-Mounir

    2014-01-01

    Neuroimaging allows investigation of the extent of neurological system degeneration in amyotrophic lateral sclerosis (ALS). Advanced MRI methods can detect changes related to the degeneration of upper motor neurons but have also demonstrated the participation of other systems such as the sensory system or basal ganglia, demonstrating in vivo that ALS is a multisystem disorder. Structural and functional imaging also allows the study of dysfunction of brain areas associated with cognitive signs. From a biomarker perspective, numerous studies using diffusion tensor imaging showed a decrease of fractional anisotropy in the intracranial portion of the corticospinal tract, but its diagnostic value at the individual level remains limited. A multiparametric approach will be required to use MRI in the diagnostic workup of ALS. A promising avenue is the new methodological developments of spinal cord imaging, which has the advantage of investigating the two motor system components involved in ALS, that is, the lower and upper motor neurons. For all neuroimaging modalities, due to the intrinsic heterogeneity of ALS, larger pooled banks of images with standardized image acquisition and analysis procedures are needed. In this paper, we will review the main findings obtained with MRI, PET, SPECT, and nuclear magnetic resonance spectroscopy in ALS. PMID:24949452

  16. Geometric processing of digital images of the planets

    NASA Technical Reports Server (NTRS)

    Edwards, Kathleen

    1987-01-01

    New procedures and software have been developed for geometric transformation of images to support digital cartography of the planets. The procedures involve the correction of spacecraft camera orientation of each image with the use of ground control and the transformation of each image to a Sinusoidal Equal-Area map projection with an algorithm which allows the number of transformation calculations to vary as the distortion varies within the image. When the distortion is low in an area of an image, few transformation computations are required, and most pixels can be interpolated. When distortion is extreme, the location of each pixel is computed. Mosaics are made of these images and stored as digital databases. Completed Sinusoidal databases may be used for digital analysis and registration with other spatial data. They may also be reproduced as published image maps by digitally transforming them to appropriate map projections.
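
The forward Sinusoidal Equal-Area mapping used in the transformation is compact; a sketch assuming a spherical body of unit radius (the adaptive interpolation scheme described above is not reproduced):

```python
import math

def to_sinusoidal(lat_deg, lon_deg, radius=1.0, lon0_deg=0.0):
    """Forward Sinusoidal (Sanson-Flamsteed) equal-area projection:
    x = R * (lon - lon0) * cos(lat), y = R * lat, with angles in radians."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg - lon0_deg)
    return radius * lon * math.cos(lat), radius * lat

# On the equator the x coordinate is simply the longitude in radians.
x, y = to_sinusoidal(0.0, 90.0)
print(round(x, 6), round(y, 6))  # 1.570796 0.0
```

Distortion grows with distance from the central meridian and the equator, which is why the number of exact per-pixel transformation computations is varied across the image.
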

  17. Local/non-local regularized image segmentation using graph-cuts: application to dynamic and multispectral MRI.

    PubMed

    Hanson, Erik A; Lundervold, Arvid

    2013-11-01

    Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information, causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties, region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.

  18. 3D real-time visualization of blood flow in cerebral aneurysms by light field particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Carlsohn, Matthias F.; Kemmling, André; Petersen, Arne; Wietzke, Lennart

    2016-04-01

    Cerebral aneurysms require endovascular treatment to eliminate potentially lethal hemorrhagic rupture by hemostasis of blood flow within the aneurysm. Devices (e.g. coils and flow diverters) promote hemostasis; however, measurement of blood flow within an aneurysm or cerebral vessel before and after device placement on a microscopic level has not been possible so far. Such measurement would allow better individualized treatment planning and improve the design of devices. For experimental analysis, direct measurement of real-time microscopic cerebrovascular flow in micro-structures may be an alternative to computed flow simulations. Applying microscopic aneurysm flow measurement on a regular basis, to empirically assess a large number of different anatomic shapes and the corresponding effect of different devices, would require a fast and reliable low-cost method with high-throughput assessment. Transparent three-dimensional (3D) models of brain vessels and aneurysms may be used for microscopic flow measurements by particle image velocimetry (PIV); however, up to now the size of the structures has set the limits for conventional 3D-imaging camera set-ups. On-line flow assessment requires additional computational power to cope with processing the large amounts of data generated by sequences of multi-view stereo images, e.g. generated by a light field camera capturing the 3D information of complex flow processes by plenoptic imaging. Recently, a fast and low-cost workflow for producing patient-specific three-dimensional models of cerebral arteries has been established by stereo-lithographic (SLA) 3D printing. These 3D arterial models are transparent and exhibit a replication precision within the submillimeter range required for accurate flow measurements under physiological conditions. We therefore test the feasibility of microscopic flow measurements by PIV analysis using a plenoptic camera system capturing light field image sequences.
Averaging across a sequence of single, double, or triple shots of flashed images enables reconstruction of the real-time corpuscular flow through the vessel system before and after device placement. This approach could enable 3D insight into microscopic flow within blood vessels and aneurysms at submillimeter resolution. We present an approach that allows real-time assessment of 3D particle flow by high-speed light field image analysis, including a solution that addresses the high computational load of the image processing. The imaging set-up accomplishes fast and reliable PIV analysis in transparent 3D models of brain aneurysms at low cost. High-throughput microscopic flow assessment of differently shaped brain aneurysms may thus become possible, as required for patient-specific device designs.
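
The displacement estimate at the heart of PIV is the lag that maximizes the cross-correlation between two interrogation windows taken a known time apart; reduced here to one dimension for illustration (real PIV correlates 2D or, as above, 3D windows):

```python
def piv_shift(window_a, window_b, max_shift=4):
    """Estimate particle displacement between two 1-D interrogation
    windows as the lag maximizing their cross-correlation."""
    best_lag, best_score = 0, float("-inf")
    n = len(window_a)
    for lag in range(-max_shift, max_shift + 1):
        score = sum(
            window_a[i] * window_b[i + lag]
            for i in range(n)
            if 0 <= i + lag < n
        )
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A particle image shifted by two pixels between the two exposures.
a = [0, 0, 1, 5, 1, 0, 0, 0]
b = [0, 0, 0, 0, 1, 5, 1, 0]
print(piv_shift(a, b))  # 2
```

Dividing the estimated displacement by the inter-frame interval and the image scale yields the local velocity.
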

  19. Combined X-ray CT and mass spectrometry for biomedical imaging applications

    NASA Astrophysics Data System (ADS)

    Schioppa, E., Jr.; Ellis, S.; Bruinen, A. L.; Visser, J.; Heeren, R. M. A.; Uher, J.; Koffeman, E.

    2014-04-01

    Imaging technologies play a key role in many branches of science, especially in biology and medicine. They provide an invaluable insight into both internal structure and processes within a broad range of samples. There are many techniques that allow one to obtain images of an object. Different techniques are based on the analysis of a particular sample property by means of a dedicated imaging system, and as such, each imaging modality provides the researcher with different information. The use of multimodal imaging (imaging with several different techniques) can provide additional and complementary information that is not possible when employing a single imaging technique alone. In this study, we present for the first time a multi-modal imaging technique where X-ray computerized tomography (CT) is combined with mass spectrometry imaging (MSI). While X-ray CT provides 3-dimensional information regarding the internal structure of the sample based on X-ray absorption coefficients, MSI of thin sections acquired from the same sample allows the spatial distribution of many elements/molecules, each distinguished by its unique mass-to-charge ratio (m/z), to be determined within a single measurement and with a spatial resolution as low as 1 μm or even less. The aim of the work is to demonstrate how molecular information from MSI can be spatially correlated with 3D structural information acquired from X-ray CT. In these experiments, frozen samples are imaged in an X-ray CT setup using Medipix based detectors equipped with a CO2 cooled sample holder. Single projections are pre-processed before tomographic reconstruction using a signal-to-thickness calibration. In the second step, the object is sliced into thin sections (circa 20 μm) that are then imaged using both matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) and secondary ion (SIMS) mass spectrometry, where the spatial distribution of specific molecules within the sample is determined. 
The combination of two vastly different imaging approaches provides complementary information (i.e., anatomical and molecular distributions) that allows the correlation of distinct structural features with specific molecule distributions, leading to unique insights into disease development.

  20. FISSA: A neuropil decontamination toolbox for calcium imaging signals.

    PubMed

    Keemink, Sander W; Lowe, Scott C; Pakan, Janelle M P; Dylda, Evelyn; van Rossum, Mark C W; Rochefort, Nathalie L

    2018-02-22

    In vivo calcium imaging has become a method of choice to image neuronal population activity throughout the nervous system. These experiments generate large sequences of images. Their analysis is computationally intensive and typically involves motion correction, image segmentation into regions of interest (ROIs), and extraction of fluorescence traces from each ROI. Out-of-focus fluorescence from surrounding neuropil and other cells can strongly contaminate the signal assigned to a given ROI. In this study, we introduce the FISSA toolbox (Fast Image Signal Separation Analysis) for neuropil decontamination. Given pre-defined ROIs, the FISSA toolbox automatically extracts the surrounding local neuropil and performs blind-source separation with non-negative matrix factorization. Using both simulated and in vivo data, we show that this toolbox performs similarly to or better than existing published methods. FISSA requires little RAM and allows fast processing of large datasets even on a standard laptop. The FISSA toolbox is available in Python, with an option for MATLAB-format outputs, and can easily be integrated into existing workflows. It is available from Github and the standard Python repositories.
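
The non-negative matrix factorization at the core of such blind-source separation can be sketched with the classic multiplicative updates; this is a generic toy version, not FISSA's implementation, and the rank and data are illustrative:

```python
import random

def nmf(V, K=2, iters=300, seed=1):
    """Factor a non-negative matrix V ~= W * H (list-of-lists) using
    Lee-Seung multiplicative updates for the Frobenius objective."""
    rng = random.Random(seed)
    R, C = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(K)] for _ in range(R)]
    H = [[rng.random() + 0.1 for _ in range(C)] for _ in range(K)]
    eps = 1e-9
    recon = lambda: [[sum(W[r][k] * H[k][c] for k in range(K))
                      for c in range(C)] for r in range(R)]
    for _ in range(iters):
        WH = recon()
        # H update: H *= (W^T V) / (W^T W H)
        for k in range(K):
            for c in range(C):
                num = sum(W[r][k] * V[r][c] for r in range(R))
                den = sum(W[r][k] * WH[r][c] for r in range(R)) + eps
                H[k][c] *= num / den
        WH = recon()
        # W update: W *= (V H^T) / (W H H^T)
        for r in range(R):
            for k in range(K):
                num = sum(V[r][c] * H[k][c] for c in range(C))
                den = sum(WH[r][c] * H[k][c] for c in range(C)) + eps
                W[r][k] *= num / den
    return W, H

# A rank-2 non-negative matrix, e.g. two mixed fluorescence sources.
V = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0], [4.0, 4.0, 4.0]]
W, H = nmf(V, K=2)
```

In the FISSA setting, the rows of such a factorization correspond to the separated cell and neuropil signals extracted from the ROI and its surround.
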
