Sample records for basic image analysis

  1. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  2. Research relative to automated multisensor image registration

    NASA Technical Reports Server (NTRS)

    Kanal, L. N.

    1983-01-01

    The basic approaches to image registration are surveyed. Three image models are presented as models of the subpixel problem. A variety of approaches to subpixel analysis are presented using these models.

  3. Basics of image analysis

    USDA-ARS's Scientific Manuscript database

    Hyperspectral imaging technology has emerged as a powerful tool for quality and safety inspection of food and agricultural products and in precision agriculture over the past decade. Image analysis is a critical step in implementing hyperspectral imaging technology; it aims to improve the qualit...

  4. TH-A-16A-01: Image Quality for the Radiation Oncology Physicist: Review of the Fundamentals and Implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seibert, J; Imbergamo, P

    The expansion and integration of diagnostic imaging technologies such as On Board Imaging (OBI) and Cone Beam Computed Tomography (CBCT) into radiation oncology has required radiation oncology physicists to be responsible for and become familiar with assessing image quality. Unfortunately, many radiation oncology physicists have had little or no training or experience in measuring and assessing image quality. Many physicists have turned to automated QA analysis software without having a fundamental understanding of image quality measures. This session will review the basic image quality measures of imaging technologies used in the radiation oncology clinic, such as low contrast resolution, high contrast resolution, uniformity, noise, and contrast scale, and how to measure and assess them in a meaningful way. Additionally, a discussion of the implementation of an image quality assurance program in compliance with Task Group recommendations will be presented along with the advantages and disadvantages of automated analysis methods. Learning Objectives: Review and understanding of the fundamentals of image quality. Review and understanding of the basic image quality measures of imaging modalities used in the radiation oncology clinic. Understand how to implement an image quality assurance program and to assess basic image quality measures in a meaningful way.

  5. Digital Radiographic Image Processing and Analysis.

    PubMed

    Yoon, Douglas C; Mol, André; Benn, Douglas K; Benavides, Erika

    2018-07-01

    This article describes digital radiographic imaging and analysis from the basics of image capture to examples of some of the most advanced digital technologies currently available. The principles underlying the imaging technologies are described to provide a better understanding of their strengths and limitations. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Integrated analysis of remote sensing products from basic geological surveys. [Brazil]

    NASA Technical Reports Server (NTRS)

    Dasilvafagundesfilho, E. (Principal Investigator)

    1984-01-01

    Recent advances in remote sensing led to the development of several techniques for obtaining image information. These techniques are analyzed as effective tools in geological mapping. A strategy for optimizing the use of images in basic geological surveying is presented. It embraces an integrated analysis of spatial, spectral, and temporal data through photoptic (color additive viewer) and computer processing at different scales, allowing large areas to be surveyed in a fast, precise, and low-cost manner.

  7. Comparative Analysis of Reconstructed Image Quality in a Simulated Chromotomographic Imager

    DTIC Science & Technology

    2014-03-01

    quality. This example uses five basic images: a backlit bar chart with random intensity, 100 nm separation. A total of 54 initial target...compared for a variety of scenes. Reconstructed image quality is highly dependent on the initial target hypercube, so a total of 54 initial target...

  8. Memory-Augmented Cellular Automata for Image Analysis.

    DTIC Science & Technology

    1978-11-01

    case in which each cell has memory size proportional to the logarithm of the input size, showing the increased capabilities of these machines for executing a variety of basic image analysis and recognition tasks. (Author)

  9. IMAGE EXPLORER: Astronomical Image Analysis on an HTML5-based Web Application

    NASA Astrophysics Data System (ADS)

    Gopu, A.; Hayashi, S.; Young, M. D.

    2014-05-01

    Large datasets produced by recent astronomical imagers cause the traditional paradigm for basic visual analysis - typically downloading one's entire image dataset and using desktop clients like DS9, Aladin, etc. - to not scale, despite advances in desktop computing power and storage. This paper describes Image Explorer, a web framework that offers several of the basic visualization and analysis functions commonly provided by tools like DS9, on any HTML5 capable web browser on various platforms. It uses a combination of the modern HTML5 canvas, JavaScript, and several layers of lossless PNG tiles produced from the FITS image data. Astronomers are able to rapidly and simultaneously open up several images on their web browser, adjust the intensity min/max cutoffs, scaling function, and zoom level, apply color-maps, view position and FITS header information, execute typically used data reduction codes on the corresponding FITS data using the FRIAA framework, and overlay tiles for source catalog objects, etc.
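
    The cutoff-scale-and-tile step described above can be sketched in a few lines. The fragment below is an illustration only, not the framework's server-side code; the file name, percentile cutoffs, and tile size are assumptions, and it uses astropy and Pillow:

```python
# Sketch: read a FITS image, apply min/max intensity cutoffs, and write
# lossless PNG tiles. Hypothetical file name and tile size; the production
# framework (FRIAA, tile layers, etc.) does much more.
import numpy as np
from astropy.io import fits
from PIL import Image

TILE = 256
data = fits.getdata("field.fits").astype(float)        # hypothetical file
lo, hi = np.percentile(data, [1, 99])                  # intensity cutoffs
scaled = np.clip((data - lo) / (hi - lo), 0, 1)

for ty in range(0, scaled.shape[0], TILE):
    for tx in range(0, scaled.shape[1], TILE):
        tile = (scaled[ty:ty + TILE, tx:tx + TILE] * 255).astype(np.uint8)
        Image.fromarray(tile).save(f"tile_{ty}_{tx}.png")  # PNG is lossless
```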

  10. Quantitative Assay for Starch by Colorimetry Using a Desktop Scanner

    ERIC Educational Resources Information Center

    Matthews, Kurt R.; Landmark, James D.; Stickle, Douglas F.

    2004-01-01

    The procedure for producing a standard curve for starch concentration measurement by image analysis, using a color scanner and a computer for data acquisition and color analysis, is described. Color analysis is performed by a Visual Basic program that measures red, green, and blue (RGB) color intensities for pixels within the scanner image.
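
    The RGB measurement and standard-curve fit lend themselves to a compact illustration. The original program was written in Visual Basic; the sketch below redoes the core steps in Python, with a hypothetical scan file, ROI coordinates, and standard concentrations:

```python
# Sketch of the colorimetry step: average R/G/B intensities inside a region
# of a scanned image, then fit a linear standard curve against known
# concentrations. All file names, ROIs, and concentrations are hypothetical.
from PIL import Image
import numpy as np

def mean_rgb(path, box):
    """Mean (R, G, B) pixel intensities inside box=(left, top, right, bottom)."""
    img = np.asarray(Image.open(path).convert("RGB").crop(box), dtype=float)
    return img.reshape(-1, 3).mean(axis=0)

concentrations = [0.0, 0.5, 1.0, 2.0]             # mg/mL, hypothetical standards
wells = [(10, 10, 60, 60), (80, 10, 130, 60),
         (150, 10, 200, 60), (220, 10, 270, 60)]  # hypothetical ROIs
blue = [mean_rgb("scan.png", w)[2] for w in wells]
slope, intercept = np.polyfit(concentrations, blue, 1)  # linear standard curve
```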

  11. The application of digital techniques to the analysis of metallurgical experiments

    NASA Technical Reports Server (NTRS)

    Rathz, T. J.

    1977-01-01

    The application of a specific digital computer system (known as the Image Data Processing System) to the analysis of three NASA-sponsored metallurgical experiments is discussed in some detail. The basic hardware and software components of the Image Data Processing System are presented. Many figures are presented in the discussion of each experimental analysis in an attempt to show the accuracy and speed that the Image Data Processing System affords in analyzing photographic images dealing with metallurgy, and in particular with material processing.

  12. A modeling analysis program for the JPL table mountain Io sodium cloud data

    NASA Technical Reports Server (NTRS)

    Smyth, W. H.; Goldberg, B. A.

    1984-01-01

    A detailed review of 110 of the 263 Region B/C images of the 1981 data set is undertaken and a preliminary assessment of 39 images of the 1976-79 data set is presented. The basic spatial characteristics of these images are discussed. Modeling analysis of these images after further data processing will provide useful information about Io and the planetary magnetosphere. Plans for data processing and modeling analysis are outlined. Results of very preliminary modeling activities are presented.

  13. The Graduate Training Programme "Molecular Imaging for the Analysis of Gene and Protein Expression": A Case Study with an Insight into the Participation of Universities of Applied Sciences

    ERIC Educational Resources Information Center

    Hafner, Mathias

    2008-01-01

    Cell biology and molecular imaging technologies have made enormous progress in basic research. However, the transfer of this knowledge to the pharmaceutical drug discovery process, or even therapeutic improvements for disorders such as neuronal diseases, is still in its infancy. This transfer needs scientists who can integrate basic research with…

  14. Photon Limited Images and Their Restoration

    DTIC Science & Technology

    1976-03-01

    arises from noise inherent in the detected image data. In the first part of this report a model is developed which can be used to mathematically and...statistically describe an image detected at low light levels. This model serves to clarify some basic properties of photon noise, and provides a basis...for the analysis of image restoration. In the second part the problem of linear least-square restoration of imagery limited by photon noise is

  15. Flightspeed Integral Image Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2009-01-01

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image. This integral frame facilitates a wide range of fast image-processing functions. This toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order-of-magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in intelligent surveillance. It is also useful for object recognition by robots or other autonomous vehicles.
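
    The "integral image" caching trick at the heart of the toolkit is easy to illustrate. The sketch below is Python rather than FIIAT's C API, and is not the library's actual code:

```python
# Sketch of the integral-image technique: one cumulative-sum pass, after
# which any rectangular region sum costs four table lookups.
import numpy as np

def integral_image(img):
    """ii[r, c] = sum of img[:r, :c] (exclusive), with a zero border."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom, left:right] in O(1)."""
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]

img = np.arange(25, dtype=np.int64).reshape(5, 5)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 4, 4) == img[1:4, 1:4].sum()
```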

  16. Basic research planning in mathematical pattern recognition and image analysis

    NASA Technical Reports Server (NTRS)

    Bryant, J.; Guseman, L. F., Jr.

    1981-01-01

    Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization, computer architectures and parallel processing, and the applicability of "expert systems" to interactive analysis.

  17. Multi-Sensor Scene Synthesis and Analysis

    DTIC Science & Technology

    1981-09-01

    Table-of-contents fragments: Quad Trees for Image Representation and Processing; 2.6.2 Databases; 2.6.2.1 Definitions and Basic Concepts; 2.6.3 Use of Databases in Hierarchical Scene Analysis; 2.6.4 Use of Relational Tables...; Multisensor Image Database Systems (MIDAS); 2.7.2 Relational Database System for Pictures; 2.7.3 Relational Pictorial Database

  18. Fundamental remote sensing science research program. Part 1: Status report of the mathematical pattern recognition and image analysis project

    NASA Technical Reports Server (NTRS)

    Heydorn, R. D.

    1984-01-01

    The Mathematical Pattern Recognition and Image Analysis (MPRIA) Project is concerned with basic research problems related to the study of the Earth from remotely sensed measurements of its surface characteristics. The program goal is to better understand how to analyze the digital image that represents the spatial, spectral, and temporal arrangement of these measurements for purposes of making selected inferences about the Earth.

  19. Methods of training the graduate level and professional geologist in remote sensing technology

    NASA Technical Reports Server (NTRS)

    Kolm, K. E.

    1981-01-01

    Requirements for a basic course in remote sensing to accommodate the needs of the graduate-level and professional geologist are described. The course should stress the general topics of basic remote sensing theory, the theory and data types relating to different remote sensing systems, an introduction to the basic concepts of computer image processing and analysis, the characteristics of different data types, the development of methods for geological interpretations, the integration of all scales and data types of remote sensing in a given study, the integration of other data bases (geophysical and geochemical) into a remote sensing study, and geological remote sensing applications. The laboratories should stress hands-on experience to reinforce the concepts and procedures presented in the lecture. The geologist should then be encouraged to pursue a second course in computer image processing and analysis of remotely sensed data.

  20. Analysis of live cell images: Methods, tools and opportunities.

    PubMed

    Nketia, Thomas A; Sailem, Heba; Rohde, Gustavo; Machiraju, Raghu; Rittscher, Jens

    2017-02-15

    Advances in optical microscopy, biosensors and cell culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking and data visualisation methods. The potential opportunities provided by recent advances in machine learning, especially deep learning, and computer vision are discussed. This review includes an overview of the different available software packages and toolkits. Copyright © 2017. Published by Elsevier Inc.

  1. [Quantitative data analysis for live imaging of bone].

    PubMed

    Seno, Shigeto

    Because bone is a hard tissue, it has long been difficult to observe the interior of living bone tissue. With the progress of microscopy and fluorescent probe technology in recent years, it has become possible to observe the various activities of the cells that form bone tissue. On the other hand, the quantitative increase in data and the diversification and complexity of the images make it difficult to perform quantitative analysis by visual inspection, and the development of methodologies for microscopic image processing and data analysis has been awaited. In this article, we introduce the research field of bioimage informatics, which lies at the boundary of biology and information science, and then outline basic image processing techniques for the quantitative analysis of live imaging data of bone.

  2. A manual for microcomputer image analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rich, P.M.; Ranken, D.M.; George, J.S.

    1989-12-01

    This manual is intended to serve three basic purposes: as a primer in microcomputer image analysis theory and techniques, as a guide to the use of IMAGE©, a public domain microcomputer program for image analysis, and as a stimulus to encourage programmers to develop microcomputer software suited for scientific use. Topics discussed include the principles of image processing and analysis, use of standard video for input and display, spatial measurement techniques, and the future of microcomputer image analysis. A complete reference guide that lists the commands for IMAGE is provided. IMAGE includes capabilities for digitization, input and output of images, hardware display lookup table control, editing, edge detection, histogram calculation, measurement along lines and curves, measurement of areas, examination of intensity values, output of analytical results, conversion between raster and vector formats, and region movement and rescaling. The control structure of IMAGE emphasizes efficiency, precision of measurement, and scientific utility. 18 refs., 18 figs., 2 tabs.

  3. Characterization and recognition of mixed emotional expressions in thermal face image

    NASA Astrophysics Data System (ADS)

    Saha, Priya; Bhattacharjee, Debotosh; De, Barin K.; Nasipuri, Mita

    2016-05-01

    Facial expressions in infrared imaging have been introduced to solve the problem of illumination, which is an integral constituent of visual imagery. The paper investigates facial skin temperature distribution on mixed thermal facial expressions in our created face database, in which six are basic expressions and the remaining 12 are mixtures of those basic expressions. Temperature analysis has been performed on three facial regions of interest (ROIs): periorbital, supraorbital and mouth. Temperature variability of the ROIs in different expressions has been measured using statistical parameters. The temperature variation measurements in the ROIs of a particular expression form a vector, which is later used in recognition of mixed facial expressions. Investigations show that facial features in mixed facial expressions can be characterized by positive-emotion-induced facial features and negative-emotion-induced facial features. The supraorbital region is a useful facial region that can differentiate basic expressions from mixed expressions. Analysis and interpretation of mixed expressions have been conducted with the help of box-and-whisker plots. A facial region containing a mixture of two expressions generally induces less temperature change than the corresponding facial region in a basic expression.

  4. EduGATE - basic examples for educative purpose using the GATE simulation platform.

    PubMed

    Pietrzyk, Uwe; Zakhnini, Abdelhamid; Axer, Markus; Sauerzapf, Sophie; Benoit, Didier; Gaens, Michaela

    2013-02-01

    EduGATE is a collection of basic examples to introduce students to the fundamental physical aspects of medical imaging devices. It is based on the GATE platform, which has received wide acceptance in the field of simulating medical imaging devices, including SPECT, PET, CT, and also applications in radiation therapy. GATE is configured by commands which are, for the sake of simplicity, listed in a collection of one or more macro files to set up phantoms, multiple types of sources, the detection device, and acquisition parameters. The aim of EduGATE is to use all these helpful features of GATE to provide insights into the physics of medical imaging by means of a collection of very basic and simple GATE macros, in connection with analysis programs based on ROOT, a framework for data processing. A graphical user interface to define a configuration is also included. Copyright © 2012. Published by Elsevier GmbH.

  5. Colony image acquisition and genetic segmentation algorithm and colony analyses

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2012-01-01

    Colony analysis is used in a large number of fields such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, and sterility testing. In order to reduce labor and increase analysis accuracy, many researchers and developers have made efforts toward image analysis systems. The main problems in these systems are image acquisition, image segmentation and image analysis. In this paper, to acquire colony images with good quality, an illumination box was constructed. In the box, the distances between the lights and the dish, the camera lens and the lights, and the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows one to consider the segmentation problem as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All the above visual colony parameters can be selected and combined together to form new engineering parameters. The colony analysis can be applied to different applications.

  6. Methods for scalar-on-function regression.

    PubMed

    Reiss, Philip T; Goldsmith, Jeff; Shang, Han Lin; Ogden, R Todd

    2017-08-01

    Recent years have seen an explosion of activity in the field of functional data analysis (FDA), in which curves, spectra, images, etc. are considered as basic functional data units. A central problem in FDA is how to fit regression models with scalar responses and functional data points as predictors. We review some of the main approaches to this problem, categorizing the basic model types as linear, nonlinear and nonparametric. We discuss publicly available software packages, and illustrate some of the procedures by application to a functional magnetic resonance imaging dataset.
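
    For readers unfamiliar with the linear model type reviewed here, a minimal sketch may help: the coefficient function beta(t) is expanded in a small basis and fit by ordinary least squares. This is an illustration on synthetic data under assumed settings, not the authors' software:

```python
# Minimal sketch of linear scalar-on-function regression:
# y_i = integral of X_i(t) * beta(t) dt + noise, with beta(t) in a
# small Fourier basis, estimated by least squares. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 101)                     # common evaluation grid
dt = t[1] - t[0]
X = rng.normal(size=(200, t.size))             # functional predictors X_i(t)
beta_true = np.sin(2 * np.pi * t)
y = X @ beta_true * dt + rng.normal(scale=0.1, size=200)

# Basis for beta(t): constant plus a few Fourier terms.
B = np.column_stack([np.ones_like(t)] +
                    [f(2 * np.pi * k * t) for k in (1, 2, 3)
                     for f in (np.sin, np.cos)])
Z = X @ B * dt                                 # design: integrals of X_i * basis
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = B @ coef                            # estimated coefficient function
```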

  7. A learning tool for optical and microwave satellite image processing and analysis

    NASA Astrophysics Data System (ADS)

    Dashondhi, Gaurav K.; Mohanty, Jyotirmoy; Eeti, Laxmi N.; Bhattacharya, Avik; De, Shaunak; Buddhiraju, Krishna M.

    2016-04-01

    This paper presents a self-learning tool which contains a number of virtual experiments for the processing and analysis of Optical/Infrared and Synthetic Aperture Radar (SAR) images. The tool is named Virtual Satellite Image Processing and Analysis Lab (v-SIPLAB). Experiments included in the Learning Tool are related to: Optical/Infrared - Image and Edge enhancement, smoothing, PCT, vegetation indices, Mathematical Morphology, Accuracy Assessment, Supervised/Unsupervised classification, etc.; Basic SAR - Parameter extraction and range spectrum estimation, Range compression, Doppler centroid estimation, Azimuth reference function generation and compression, Multilooking, image enhancement, texture analysis, and edge detection, etc.; SAR Interferometry - Baseline calculation, Extraction of single-look SAR images, Registration, Resampling, and Interferogram generation; SAR Polarimetry - Conversion of AirSAR or Radarsat data to S2/C3/T3 matrix, Speckle Filtering, Power/Intensity image generation, Decomposition of S2/C3/T3, and Classification of S2/C3/T3 using the Wishart Classifier [3]. A professional-quality polarimetric SAR software package can be found at [8], part of whose functionality is included in our system. The learning tool also contains other modules, besides executable software experiments, such as aim, theory, procedure, interpretation, quizzes, links to additional reading material, and user feedback. Students can gain an understanding of Optical and SAR remotely sensed images through discussion of basic principles, supported by a structured procedure for running and interpreting the experiments. Quizzes for self-assessment and a provision for online feedback are also provided to make this Learning tool self-contained. One can download results after performing experiments.

  8. CCDs in the Mechanics Lab--A Competitive Alternative? (Part I).

    ERIC Educational Resources Information Center

    Pinto, Fabrizio

    1995-01-01

    Reports on the implementation of a relatively low-cost, versatile, and intuitive system to teach basic mechanics based on the use of a Charge-Coupled Device (CCD) camera and inexpensive image-processing and analysis software. Discusses strengths and limitations of CCD imaging technologies. (JRH)

  9. Methods in Astronomical Image Processing

    NASA Astrophysics Data System (ADS)

    Jörsäter, S.

    A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future

  10. An introduction to diffusion tensor image analysis.

    PubMed

    O'Donnell, Lauren J; Westin, Carl-Fredrik

    2011-04-01

    Diffusion tensor magnetic resonance imaging (DTI) is a relatively new technology that is popular for imaging the white matter of the brain. This article provides a basic and broad overview of DTI to enable the reader to develop an intuitive understanding of these types of data, and an awareness of their strengths and weaknesses. Copyright © 2011 Elsevier Inc. All rights reserved.
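
    The scalar maps most often derived from such data follow directly from the tensor's eigenvalues. A minimal sketch of the standard mean-diffusivity and fractional-anisotropy formulas, with hypothetical tensor values:

```python
# Sketch: mean diffusivity (MD) and fractional anisotropy (FA) from a
# 3x3 diffusion tensor, using the standard eigenvalue formulas.
import numpy as np

def fa_md(D):
    lam = np.linalg.eigvalsh(D)          # eigenvalues of the symmetric tensor
    md = lam.mean()                      # mean diffusivity
    fa = np.sqrt(1.5 * ((lam - md) ** 2).sum() / (lam ** 2).sum())
    return fa, md

D = np.diag([1.7e-3, 0.3e-3, 0.2e-3])   # hypothetical anisotropic tensor (mm^2/s)
fa, md = fa_md(D)                        # fa ~ 0.84: strongly anisotropic
```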

  11. Quantitative analysis of phosphoinositide 3-kinase (PI3K) signaling using live-cell total internal reflection fluorescence (TIRF) microscopy.

    PubMed

    Johnson, Heath E; Haugh, Jason M

    2013-12-02

    This unit focuses on the use of total internal reflection fluorescence (TIRF) microscopy and image analysis methods to study the dynamics of signal transduction mediated by class I phosphoinositide 3-kinases (PI3Ks) in mammalian cells. The first four protocols cover live-cell imaging experiments, image acquisition parameters, and basic image processing and segmentation. These methods are generally applicable to live-cell TIRF experiments. The remaining protocols outline more advanced image analysis methods, which were developed in our laboratory for the purpose of characterizing the spatiotemporal dynamics of PI3K signaling. These methods may be extended to analyze other cellular processes monitored using fluorescent biosensors. Copyright © 2013 John Wiley & Sons, Inc.

  12. Efficacy of a Single Dose of Basic Fibroblast Growth Factor: Clinical Observation for 1 Year.

    PubMed

    Suzuki, Hirotaka; Makiyama, Kiyoshi; Hirai, Ryoji; Matsuzaki, Hiroumi; Furusaka, Toru; Oshima, Takeshi

    2016-11-01

    Basic fibroblast growth factor promotes wound healing by accelerating healthy granulation and epithelialization. However, the duration of the effects of a single intracordal injection of basic fibroblast growth factor has not been established, and administration intervals and timing have yet to be standardized. Here, we administered a single injection to patients with insufficient glottic closure and conducted follow-up examinations with high-speed digital imaging to determine the duration of the treatment response. Case series. For treatment, 20 µg/mL recombinant human basic fibroblast growth factor was injected into two vocal cords. The following examinations were performed before the procedure and at 3-month intervals for 12 months starting at 1 month postinjection: Grade, Roughness, Breathiness, Asthenia, and Strain (GRBAS) scale assessment, maximum phonation time, acoustic analysis, high-speed digital imaging, glottal wave analysis, and kymographic analysis. Postinjection, the GRBAS scale score decreased, and the maximum phonation time was prolonged. In addition, the mean minimum glottal area and mean minimum glottal distance decreased. These changes were significant at 12 months postinjection compared with preinjection. However, there were no significant changes in the vibrations of the vocal cord margins. The intracordal injection of basic fibroblast growth factor improved insufficient glottic closure without reducing the vibrations of the vocal cord margins. This effect remained evident at 12 months postinjection. A single injection can be expected to yield a sufficient and persistent long-term effect. Copyright © 2016 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  13. Satellite images to aircraft in flight. [GOES image transmission feasibility analysis]

    NASA Technical Reports Server (NTRS)

    Camp, D.; Luers, J. K.; Kadlec, P. W.

    1977-01-01

    A study has been initiated to evaluate the feasibility of transmitting selected GOES images to aircraft in flight. Pertinent observations that could be made from satellite images on board aircraft include jet stream activity, cloud/wind motion, cloud temperatures, tropical storm activity, and location of severe weather. The basic features of the Satellite Aircraft Flight Environment System (SAFES) are described. This system uses East GOES and West GOES satellite images, which are interpreted, enhanced, and then retransmitted to designated aircraft.

  14. [Investigation of the accurate measurement of the basic imaging properties for the digital radiographic system based on flat panel detector].

    PubMed

    Katayama, R; Sakai, S; Sakaguchi, T; Maeda, T; Takada, K; Hayabuchi, N; Morishita, J

    2008-07-20

    PURPOSE/AIM OF THE EXHIBIT: The purpose of this exhibit is: 1. To explain "resampling", an image data processing step performed by digital radiographic systems based on flat panel detectors (FPD). 2. To show the influence of "resampling" on the basic imaging properties. 3. To present accurate measurement methods for the basic imaging properties of FPD systems. 1. The relationship between the matrix sizes of the output image and the image data acquired on the FPD, which changes automatically depending on the selected image size (FOV). 2. An explanation of the "resampling" image data processing. 3. Evaluation results for the basic imaging properties of an FPD system using two types of DICOM images to which "resampling" was applied: characteristic curves, presampled MTFs, noise power spectra, and detective quantum efficiencies. CONCLUSION/SUMMARY: The major points of the exhibit are as follows: 1. The influence of "resampling" should not be disregarded in the evaluation of the basic imaging properties of a flat panel detector system. 2. The basic imaging properties should be measured using DICOM images to which no "resampling" has been applied.

  15. Component pattern analysis of chemicals using multispectral THz imaging system

    NASA Astrophysics Data System (ADS)

    Kawase, Kodo; Ogawa, Yuichi; Watanabe, Yuki

    2004-04-01

    We have developed a novel basic technology for terahertz (THz) imaging which allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Further, we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.

  16. IDIMS/GEOPAK: Users manual for a geophysical data display and analysis system

    NASA Technical Reports Server (NTRS)

    Libert, J. M.

    1982-01-01

    The application of an existing image analysis system to the display and analysis of geophysical data is described, and the potential for expanding the capabilities of such a system toward more advanced computer analytic and modeling functions is investigated. The major features of IDIMS (Interactive Display and Image Manipulation System) and its applicability to image-type analysis of geophysical data are described. The development of a basic geophysical data processing system is described; it permits the image representation, coloring, interdisplay and comparison of geophysical data sets using existing IDIMS functions and provides for the production of hard copies of processed images. An instruction manual and documentation for the GEOPAK subsystem were produced. A training course for personnel in the use of IDIMS/GEOPAK was conducted. The effectiveness of the current IDIMS/GEOPAK system for geophysical data analysis was evaluated.

  17. A Critical Analysis of USAir's Image Repair Discourse.

    ERIC Educational Resources Information Center

    Benoit, William L.; Czerwinski, Anne

    1997-01-01

    Applies the theory of image restoration to a case study of USAir's response to media coverage of a 1994 crash. Argues that introducing such case studies in the classroom helps students to understand the basic tenets of persuasion in the highly charged context of repairing a corporate reputation after an attack. (SR)

  18. Analysis of Variance in Statistical Image Processing

    NASA Astrophysics Data System (ADS)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.

  19. Aural analysis of image texture via cepstral filtering and sonification

    NASA Astrophysics Data System (ADS)

    Rangayyan, Rangaraj M.; Martins, Antonio C. G.; Ruschioni, Ruggero A.

    1996-03-01

    Texture plays an important role in image analysis and understanding, with many applications in medical imaging and computer vision. However, analysis of texture by image processing is a rather difficult issue, with most techniques being oriented towards statistical analysis which may not have readily comprehensible perceptual correlates. We propose new methods for auditory display (AD) and sonification of (quasi-)periodic texture (where a basic texture element or 'texton' is repeated over the image field) and random texture (which could be modeled as filtered or 'spot' noise). Although the AD designed is not intended to be speech-like or musical, we draw analogies between the two types of texture mentioned above and voiced/unvoiced speech, and design a sonification algorithm which incorporates physical and perceptual concepts of texture and speech. More specifically, we present a method for AD of texture where the projections of the image at various angles (Radon transforms or integrals) are mapped to audible signals and played in sequence. In the case of random texture, the spectral envelopes of the projections are related to the filter spot characteristics, and convey the essential information for texture discrimination. In the case of periodic texture, the AD provides timbre and pitch related to the texton and periodicity. In another procedure for sonification of periodic texture, we propose to first deconvolve the image using cepstral analysis to extract information about the texton and the horizontal and vertical periodicities. The projections of individual textons at various angles are used to create a voiced-speech-like signal with each projection mapped to a basic wavelet, the horizontal period to pitch, and the vertical period to rhythm on a longer time scale. The sound pattern then consists of a serial, melody-like sonification of the patterns for each projection. We believe that our approaches provide the much-desired 'natural' connection between the image data and the sounds generated. We have evaluated the sonification techniques with a number of synthetic textures. The sound patterns created have demonstrated the potential of the methods in distinguishing between different types of texture. We are investigating the application of these techniques to auditory analysis of texture in medical images such as magnetic resonance images.
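
    The projection-to-sound mapping can be caricatured in a few lines: a Radon-style projection of the image at each angle weights a bank of sinusoidal partials. The sketch below is only a rough analogue of the paper's method (which uses wavelets and pitch/rhythm mappings); every parameter here is invented:

```python
# Sketch of the projection-to-audio idea: the image's projection at each
# angle modulates the amplitudes of harmonics of a base tone. Illustrative
# only; not the authors' sonification algorithm.
import numpy as np
from scipy.ndimage import rotate

def projection(img, angle_deg):
    """Integral of the image along one direction (a Radon transform sample)."""
    return rotate(img, angle_deg, reshape=False, order=1).sum(axis=0)

def sonify(profile, duration=0.5, rate=44100, f0=220.0):
    """Map projection bins to amplitudes of harmonics of f0."""
    tt = np.arange(int(duration * rate)) / rate
    amps = profile / (profile.max() + 1e-12)
    sig = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * tt)
              for k, a in enumerate(amps[::8]))   # subsample bins -> partials
    return sig / np.abs(sig).max()

texture = np.random.rand(64, 64)                  # stand-in for a texture image
audio = np.concatenate([sonify(projection(texture, a))
                        for a in range(0, 180, 45)])
```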

  20. APPLEPIPS /Apple Personal Image Processing System/ - An interactive digital image processing system for the Apple II microcomputer

    NASA Technical Reports Server (NTRS)

    Masuoka, E.; Rose, J.; Quattromani, M.

    1981-01-01

    Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.

  1. All Source Sensor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, Harold (PNNL)

    2012-10-10

    ASSA is a software application that processes binary data into summarized index tables that can be used to organize features contained within the data. ASSA's index tables can also be used to search for user-specified features. ASSA is designed to organize and search for patterns in unstructured binary data streams or archives, such as video, images, audio, and network traffic. ASSA is basically a very general search engine used to search for any pattern in any binary data stream. It has uses in video analytics, image analysis, audio analysis, searching hard drives, monitoring network traffic, etc.

  2. Fundamental remote science research program. Part 2: Status report of the mathematical pattern recognition and image analysis project

    NASA Technical Reports Server (NTRS)

    Heydorn, R. P.

    1984-01-01

    The Mathematical Pattern Recognition and Image Analysis (MPRIA) Project is concerned with basic research problems related to the study of the Earth from remotely sensed measurements of its surface characteristics. The program goal is to better understand how to analyze the digital image that represents the spatial, spectral, and temporal arrangement of these measurements for purposes of making selected inferences about the Earth. This report summarizes the progress that has been made toward this program goal by each of the principal investigators in the MPRIA Program.

  3. Cardiovascular imaging environment: will the future be cloud-based?

    PubMed

    Kawel-Boehm, Nadine; Bluemke, David A

    2017-07-01

    In cardiovascular CT and MR imaging, large datasets have to be stored, post-processed, analyzed and distributed. Besides basic assessment of volume and function in cardiac magnetic resonance imaging, for example, more sophisticated quantitative analysis is requested, requiring specific software. Many institutions cannot afford various types of software or provide the expertise to perform sophisticated analysis. Areas covered: Various cloud services exist related to data storage and analysis specifically for cardiovascular CT and MR imaging. Instead of on-site data storage, cloud providers offer flexible storage services on a pay-per-use basis. To avoid the purchase and maintenance of specialized software for cardiovascular image analysis (e.g., to assess myocardial iron overload, MR 4D flow and fractional flow reserve), evaluation can be performed with cloud-based software by the consumer, or the complete analysis is performed by the cloud provider. However, challenges to widespread implementation of cloud services include regulatory issues regarding patient privacy and data security. Expert commentary: If patient privacy and data security are guaranteed, cloud imaging is a valuable option to cope with storage of large image datasets and offer sophisticated cardiovascular image analysis for institutions of all sizes.

  4. Analysis of the basic science section of the orthopaedic in-training examination.

    PubMed

    Sheibani-Rad, Shahin; Arnoczky, Steven Paul; Walter, Norman E

    2012-08-01

    Since 1963, the Orthopaedic In-Training Examination (OITE) has been administered to orthopedic residents to assess residents' knowledge and measure the quality of teaching within individual programs. The OITE currently consists of 275 questions divided among 12 domains. This study analyzed all OITE basic science questions between 2006 and 2010. The following data were recorded: number of questions, question taxonomy, category of question, type of imaging modality, and recommended journal and book references. Between 2006 and 2010, the basic science section constituted 12.2% of the OITE. The assessment of taxonomy classification showed that recall-type questions were the most common, at 81.4%. Imaging modalities typically involved questions on radiographs and constituted 6.2% of the OITE basic science section. The majority of questions were basic science questions (eg, genetics, cell replication, and bone metabolism), with an average of 26.4 questions per year. The Journal of Bone & Joint Surgery (American Volume) and the American Academy of Orthopaedic Surgeons' Orthopaedic Basic Science were the most commonly and consistently cited journal and review book, respectively. This study provides the first review of the question content and recommended references of the OITE basic science section. This information will provide orthopedic trainees, orthopedic residency programs, and the American Academy of Orthopaedic Surgeons Evaluation Committee with valuable information related to improving residents' knowledge and performance and optimizing basic science educational curricula. Copyright 2012, SLACK Incorporated.

  5. Structural Image Analysis of the Brain in Neuropsychology Using Magnetic Resonance Imaging (MRI) Techniques.

    PubMed

    Bigler, Erin D

    2015-09-01

    Magnetic resonance imaging (MRI) of the brain provides exceptional image quality for visualization and neuroanatomical classification of brain structure. A variety of image analysis techniques provide both qualitative as well as quantitative methods to relate brain structure with neuropsychological outcome and are reviewed herein. Of particular importance are more automated methods that permit analysis of a broad spectrum of anatomical measures including volume, thickness and shape. The challenge for neuropsychology is which metric to use, for which disorder and the timing of when image analysis methods are applied to assess brain structure and pathology. A basic overview is provided as to the anatomical and pathoanatomical relations of different MRI sequences in assessing normal and abnormal findings. Some interpretive guidelines are offered including factors related to similarity and symmetry of typical brain development along with size-normalcy features of brain anatomy related to function. The review concludes with a detailed example of various quantitative techniques applied to analyzing brain structure for neuropsychological outcome studies in traumatic brain injury.

  6. Processing Cones: A Computational Structure for Image Analysis.

    DTIC Science & Technology

    1981-12-01

    image analysis applications, referred to as a processing cone, is described and sample algorithms are presented. A fundamental characteristic of the structure is its hierarchical organization into two-dimensional arrays of decreasing resolution. In this architecture, a prototypical function is defined on a local window of data and applied uniformly to all windows in a parallel manner. Three basic modes of processing are supported in the cone: reduction operations (upward processing), horizontal operations (processing at a single level) and projection operations (downward processing).
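
    The "upward processing" mode is the easiest of the three to sketch: a local window function applied uniformly to all windows produces each next, lower-resolution level of the hierarchy. A minimal illustration (not the original architecture's code, and with an assumed 2x2 window):

```python
# Sketch of a processing cone's reduction (upward) mode: apply a window
# function uniformly over non-overlapping 2x2 windows to build each level.
import numpy as np

def reduce_level(level, fn=np.mean):
    """Apply fn to every non-overlapping 2x2 window (even dimensions assumed)."""
    h, w = level.shape
    windows = (level.reshape(h // 2, 2, w // 2, 2)
                    .swapaxes(1, 2)
                    .reshape(h // 2, w // 2, 4))
    return fn(windows, axis=-1)

def build_cone(image, fn=np.mean):
    """All levels from full resolution down to a single 1x1 apex."""
    levels = [image.astype(float)]
    while levels[-1].shape[0] > 1:
        levels.append(reduce_level(levels[-1], fn))
    return levels

cone = build_cone(np.random.rand(64, 64))   # 64x64 -> 32x32 -> ... -> 1x1
```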

  7. Principles of Quantitative MR Imaging with Illustrated Review of Applicable Modular Pulse Diagrams.

    PubMed

    Mills, Andrew F; Sakai, Osamu; Anderson, Stephan W; Jara, Hernan

    2017-01-01

    Continued improvements in diagnostic accuracy using magnetic resonance (MR) imaging will require development of methods for tissue analysis that complement traditional qualitative MR imaging studies. Quantitative MR imaging is based on measurement and interpretation of tissue-specific parameters independent of experimental design, compared with qualitative MR imaging, which relies on interpretation of tissue contrast that results from experimental pulse sequence parameters. Quantitative MR imaging represents a natural next step in the evolution of MR imaging practice, since quantitative MR imaging data can be acquired using currently available qualitative imaging pulse sequences without modifications to imaging equipment. The article presents a review of the basic physical concepts used in MR imaging and how quantitative MR imaging is distinct from qualitative MR imaging. Subsequently, the article reviews the hierarchical organization of major applicable pulse sequences used in this article, with the sequences organized into conventional, hybrid, and multispectral sequences capable of calculating the main tissue parameters of T1, T2, and proton density. While this new concept offers the potential for improved diagnostic accuracy and workflow, awareness of this extension to qualitative imaging is generally low. This article reviews the basic physical concepts in MR imaging, describes commonly measured tissue parameters in quantitative MR imaging, and presents the major available pulse sequences used for quantitative MR imaging, with a focus on the hierarchical organization of these sequences. © RSNA, 2017.

  8. Diffraction enhanced x-ray imaging for quantitative phase contrast studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agrawal, A. K.; Singh, B., E-mail: balwants@rrcat.gov.in; Kashyap, Y. S.

    2016-05-23

    Conventional X-ray imaging based on absorption contrast permits limited visibility of features having small density and thickness variations. For imaging of weakly absorbing materials or materials possessing similar densities, a novel phase contrast imaging technique called diffraction enhanced imaging has been designed and developed at the imaging beamline of Indus-2, RRCAT, Indore. The technique provides improved visibility of interfaces and shows high contrast in the image for small density or thickness gradients in the bulk. This paper presents the basic principle, instrumentation and analysis methods for this technique. Initial results of quantitative phase retrieval carried out on various samples are also presented.

  9. Thread concept for automatic task parallelization in image analysis

    NASA Astrophysics Data System (ADS)

    Lueckenhaus, Maximilian; Eckstein, Wolfgang

    1998-09-01

    Parallel processing of image analysis tasks is an essential method to speed up image processing and helps to exploit the full capacity of distributed systems. However, writing parallel code is a difficult and time-consuming process and often leads to an architecture-dependent program that has to be re-implemented when the hardware changes. Therefore it is highly desirable to do the parallelization automatically. For this we have developed a special kind of thread concept for image analysis tasks. Threads derived from one subtask may share objects and run in the same context but may process different threads of execution and work on different data in parallel. In this paper we describe the basics of our thread concept and show how it can be used as the basis of an automatic task parallelization to speed up image processing. We further illustrate the design and implementation of an agent-based system that uses image analysis threads for generating and processing parallel programs while taking into account the available hardware. The tests made with our system prototype show that the thread concept, combined with the agent paradigm, is suitable for speeding up image processing by an automatic parallelization of image analysis tasks.

  10. Quantitative Hyperspectral Reflectance Imaging

    PubMed Central

    Klein, Marvin E.; Aalderink, Bernard J.; Padoan, Roberto; de Bruin, Gerrit; Steemers, Ted A.G.

    2008-01-01

    Hyperspectral imaging is a non-destructive optical analysis technique that can, for instance, be used to obtain information from cultural heritage objects that is unavailable with conventional colour or multi-spectral photography. This technique can be used to distinguish and recognize materials, to enhance the visibility of faint or obscured features, to detect signs of degradation and study the effect of environmental conditions on the object. We describe the basic concept, working principles, construction and performance of a laboratory instrument specifically developed for the analysis of historical documents. The instrument measures calibrated spectral reflectance images at 70 wavelengths ranging from 365 to 1100 nm (near-ultraviolet, visible and near-infrared). By using a wavelength-tunable narrow-bandwidth light source, the light energy used to illuminate the measured object is minimal, so that any light-induced degradation can be excluded. Basic analysis of the hyperspectral data includes a qualitative comparison of the spectral images and the extraction of quantitative data such as mean spectral reflectance curves and statistical information from user-defined regions-of-interest. More sophisticated mathematical feature extraction and classification techniques can be used to map areas on the document, where different types of ink had been applied or where one ink shows various degrees of degradation. The developed quantitative hyperspectral imager is currently in use by the Nationaal Archief (National Archives of The Netherlands) to study degradation effects of artificial samples and original documents, exposed in their permanent exhibition area or stored in their deposit rooms. PMID:27873831
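
    The basic quantitative extraction mentioned above, a mean spectral reflectance curve over a user-defined region of interest, reduces to a simple array operation. The sketch below assumes a hypothetical calibrated hypercube with the instrument's 70 bands; the shapes and ROI are invented:

```python
# Sketch: mean (and standard deviation) spectral reflectance curves from a
# user-defined ROI of a calibrated hypercube. Hypothetical array shapes;
# the instrument records 70 bands spanning 365-1100 nm.
import numpy as np

wavelengths = np.linspace(365, 1100, 70)          # nm, one per band
cube = np.random.rand(70, 512, 512)               # (band, row, col) reflectance
roi = (slice(100, 150), slice(200, 260))          # user-defined region

pixels = cube[:, roi[0], roi[1]].reshape(70, -1)
mean_curve = pixels.mean(axis=1)                  # reflectance at wavelengths[i]
std_curve = pixels.std(axis=1)                    # per-band spread within the ROI
```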

  11. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented.

  12. Imaging Tumor Cell Movement In Vivo

    PubMed Central

    Entenberg, David; Kedrin, Dmitriy; Wyckoff, Jeffrey; Sahai, Erik; Condeelis, John; Segall, Jeffrey E.

    2013-01-01

    This unit describes the methods that we have been developing for analyzing tumor cell motility in mouse and rat models of breast cancer metastasis. Rodents are commonly used both to provide a mammalian system for studying human tumor cells (as xenografts in immunocompromised mice) and to follow the development of tumors from a specific tissue type in transgenic lines. The Basic Protocol in this unit describes the standard methods used for generating mammary tumors and imaging them. Additional protocols for labeling macrophages, blood vessel imaging, and image analysis are also included. PMID:23456602

  13. Use of satellite images in the evaluation of farmlands. [in Mexico

    NASA Technical Reports Server (NTRS)

    Lozano H., A. E.

    1978-01-01

    Remote sensing techniques in the evaluation of farmland in Mexico are discussed. Electronic analysis techniques and photointerpretation techniques are analyzed. Characteristics of the basic crops in Mexico as related to remote sensing are described.

  14. Topochemical Analysis of Cell Wall Components by TOF-SIMS.

    PubMed

    Aoki, Dan; Fukushima, Kazuhiko

    2017-01-01

    Time-of-flight secondary ion mass spectrometry (TOF-SIMS) is a recently developed analytical tool and a type of imaging mass spectrometry. TOF-SIMS provides mass spectral information with a lateral resolution on the order of submicrons, with widespread applicability. It is sometimes described as a surface analysis method that requires no sample pretreatment; however, several points need to be taken into account to make full use of the capabilities of TOF-SIMS. In this chapter, we introduce methods for TOF-SIMS sample treatment, as well as basic knowledge of TOF-SIMS spectral and image data analysis for wood samples.

  15. The horse-collar aurora - A frequent pattern of the aurora in quiet times

    NASA Technical Reports Server (NTRS)

    Hones, E. W., Jr.; Craven, J. D.; Frank, L. A.; Evans, D. S.; Newell, P. T.

    1989-01-01

    The frequent appearance of the 'horse-collar aurora' pattern in quiet-time DE 1 images is reported, presenting a two-hour image sequence that displays the basic features and shows that it sometimes evolves toward the theta configuration. There is some evidence for interplanetary magnetic field B(y) influence on the temporal development of the pattern. A preliminary statistical analysis finds the pattern appearing in one-third or more of the image sequences recorded during quiet times.

  16. Introduction to Modern Methods in Light Microscopy.

    PubMed

    Ryan, Joel; Gerhold, Abby R; Boudreau, Vincent; Smith, Lydia; Maddox, Paul S

    2017-01-01

    For centuries, light microscopy has been a key method in biological research, from the early work of Robert Hooke describing biological organisms as cells, to the latest in live-cell and single-molecule systems. Here, we introduce some of the key concepts related to the development and implementation of modern microscopy techniques. We briefly discuss the basics of optics in the microscope, super-resolution imaging, quantitative image analysis, live-cell imaging, and provide an outlook on active research areas pertaining to light microscopy.

  17. Clock Scan Protocol for Image Analysis: ImageJ Plugins.

    PubMed

    Dobretsov, Maxim; Petkau, Georg; Hayar, Abdallah; Petkau, Eugen

    2017-06-19

    The clock scan protocol for image analysis is an efficient tool to quantify the average pixel intensity within, at the border of, and outside (background) a closed or segmented convex-shaped region of interest, leading to the generation of an averaged integral radial pixel-intensity profile. This protocol was originally developed in 2006 as a Visual Basic 6 script, but as such it had limited distribution. To address this problem, and to join similar recent efforts by others, we converted the original clock scan protocol code into two Java-based plugins compatible with NIH-sponsored and freely available image analysis programs like ImageJ or Fiji ImageJ. Furthermore, these plugins have several new functions, further expanding the range of capabilities of the original protocol, such as analysis of multiple regions of interest and image stacks. The latter feature of the program is especially useful in applications in which it is important to determine changes related to time and location. Thus, the clock scan analysis of stacks of biological images may potentially be applied to the spreading of Na+ or Ca++ within a single cell, as well as to the analysis of spreading activity (e.g., Ca++ waves) in populations of synaptically connected or gap-junction-coupled cells. Here, we describe these new clock scan plugins and show some examples of their applications in image analysis.
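
    The protocol's core idea, averaging pixel intensity along rays ("clock hands") from the ROI centre to produce one radial profile, can be sketched briefly. This Python fragment is an illustration under assumed parameters, not the published ImageJ plugin code, and it omits the border/background handling:

```python
# Sketch of the clock-scan idea: sample intensity along rays from the ROI
# centre and average across angles, yielding one radial intensity profile.
import numpy as np

def clock_scan(img, center, radius, n_angles=360, n_steps=100):
    cy, cx = center
    r = np.linspace(0, radius, n_steps)            # radial sample positions
    profile = np.zeros(n_steps)
    for theta in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        ys = np.clip((cy + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
        xs = np.clip((cx + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
        profile += img[ys, xs]                     # intensities along one ray
    return profile / n_angles                      # averaged radial profile

img = np.random.rand(256, 256)                     # stand-in image
profile = clock_scan(img, center=(128, 128), radius=100)
```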

  18. Frequency domain analysis of knock images

    NASA Astrophysics Data System (ADS)

    Qi, Yunliang; He, Xin; Wang, Zhi; Wang, Jianxin

    2014-12-01

    High speed imaging-based knock analysis has mainly focused on time domain information, e.g. the spark-triggered flame speed, the time when end-gas auto-ignition occurs, and the end-gas flame speed after auto-ignition. This study presents a frequency domain analysis of knock images recorded using a high speed camera with direct photography in a rapid compression machine (RCM). To clearly visualize the pressure wave oscillation in the combustion chamber, the images were high-pass-filtered to extract the luminosity oscillation. The luminosity spectrum was then obtained by applying a fast Fourier transform (FFT) to the three basic colour components (red, green and blue) of the high-pass-filtered images. Compared to the pressure spectrum, the luminosity spectra better identify the resonant modes of pressure wave oscillation. More importantly, the resonant mode shapes can be clearly visualized by reconstructing the images based on the amplitudes of the luminosity spectra at the corresponding resonant frequencies, which agree well with the analytical solutions for mode shapes of gas vibration in a cylindrical cavity.
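
    A minimal sketch of the frequency-domain step described above, assuming `frames` is an (n_frames, h, w, 3) RGB image stack and `fps` is the camera frame rate (both hypothetical placeholders; the paper's filtering details are more sophisticated):

```python
# Sketch: per-channel luminosity oscillation spectra from a knock image stack.
import numpy as np

def luminosity_spectra(frames, fps):
    traces = frames.mean(axis=(1, 2))             # mean R, G, B per frame
    kernel = np.ones(15) / 15.0                   # crude high-pass via moving-
    spectra = []                                  # average trend removal
    for c in range(3):
        trend = np.convolve(traces[:, c], kernel, mode="same")
        osc = traces[:, c] - trend                # luminosity oscillation
        spectra.append(np.abs(np.fft.rfft(osc)))  # amplitude spectrum
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    return freqs, np.array(spectra)

# freqs, spectra = luminosity_spectra(frames, fps=150_000)  # hypothetical data
```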

  19. Imaging through Fog Using Polarization Imaging in the Visible/NIR/SWIR Spectrum

    DTIC Science & Technology

    2017-01-11

    [Abstract text lost to extraction; recoverable figure captions: Figure 6: Basic architecture of the...; Figure 7: Basic architecture of post-processing techniques to recover an image dehazed from a raw image.]

  20. Geneious Basic: An integrated and extendable desktop software platform for the organization and analysis of sequence data

    PubMed Central

    Kearse, Matthew; Moir, Richard; Wilson, Amy; Stones-Havas, Steven; Cheung, Matthew; Sturrock, Shane; Buxton, Simon; Cooper, Alex; Markowitz, Sidney; Duran, Chris; Thierer, Tobias; Ashton, Bruce; Meintjes, Peter; Drummond, Alexei

    2012-01-01

    Summary: The two main functions of bioinformatics are the organization and analysis of biological data using computational resources. Geneious Basic has been designed to be an easy-to-use and flexible desktop software application framework for the organization and analysis of biological data, with a focus on molecular sequences and related data types. It integrates numerous industry-standard discovery analysis tools, with interactive visualizations to generate publication-ready images. One key contribution to researchers in the life sciences is the Geneious public application programming interface (API) that affords the ability to leverage the existing framework of the Geneious Basic software platform for virtually unlimited extension and customization. The result is an increase in the speed and quality of development of computation tools for the life sciences, due to the functionality and graphical user interface available to the developer through the public API. Geneious Basic represents an ideal platform for the bioinformatics community to leverage existing components and to integrate their own specific requirements for the discovery, analysis and visualization of biological data. Availability and implementation: Binaries and public API freely available for download at http://www.geneious.com/basic, implemented in Java and supported on Linux, Apple OSX and MS Windows. The software is also available from the Bio-Linux package repository at http://nebc.nerc.ac.uk/news/geneiousonbl. Contact: peter@biomatters.com PMID:22543367

  1. Geneious Basic: an integrated and extendable desktop software platform for the organization and analysis of sequence data.

    PubMed

    Kearse, Matthew; Moir, Richard; Wilson, Amy; Stones-Havas, Steven; Cheung, Matthew; Sturrock, Shane; Buxton, Simon; Cooper, Alex; Markowitz, Sidney; Duran, Chris; Thierer, Tobias; Ashton, Bruce; Meintjes, Peter; Drummond, Alexei

    2012-06-15

    The two main functions of bioinformatics are the organization and analysis of biological data using computational resources. Geneious Basic has been designed to be an easy-to-use and flexible desktop software application framework for the organization and analysis of biological data, with a focus on molecular sequences and related data types. It integrates numerous industry-standard discovery analysis tools, with interactive visualizations to generate publication-ready images. One key contribution to researchers in the life sciences is the Geneious public application programming interface (API) that affords the ability to leverage the existing framework of the Geneious Basic software platform for virtually unlimited extension and customization. The result is an increase in the speed and quality of development of computation tools for the life sciences, due to the functionality and graphical user interface available to the developer through the public API. Geneious Basic represents an ideal platform for the bioinformatics community to leverage existing components and to integrate their own specific requirements for the discovery, analysis and visualization of biological data. Binaries and public API freely available for download at http://www.geneious.com/basic, implemented in Java and supported on Linux, Apple OSX and MS Windows. The software is also available from the Bio-Linux package repository at http://nebc.nerc.ac.uk/news/geneiousonbl.

  2. Development of Land Analysis System display modules

    NASA Technical Reports Server (NTRS)

    Gordon, Douglas; Hollaren, Douglas; Huewe, Laurie

    1986-01-01

    The Land Analysis System (LAS) display modules were developed to allow a user to interactively display, manipulate, and store image and image related data. To help accomplish this task, these modules utilize the Transportable Applications Executive and the Display Management System software to interact with the user and the display device. The basic characteristics of a display are outlined and some of the major modifications and additions made to the display management software are discussed. Finally, all available LAS display modules are listed along with a short description of each.

  3. Preliminary analyses of SIR-B radar data for recent Hawaii lava flows

    NASA Technical Reports Server (NTRS)

    Kaupp, V. H.; Derryberry, B. A.; Macdonald, H. C.; Gaddis, L. R.; Mouginis-Mark, P. J.

    1986-01-01

    The Shuttle Imaging Radar (SIR-B) experiment acquired two L-band (23 cm wavelength) radar images (at about 28 and 48 deg incidence angles) over the Kilauea Volcano area of southeastern Hawaii. Geologic analysis of these data indicates that, although aa lava flows and pyroclastic deposits can be discriminated, pahoehoe lava flows are not readily distinguished from surrounding low return materials. Preliminary analysis of data extracted from isolated flows indicates that flow type (i.e., aa or pahoehoe) and relative age can be determined from their basic statistics and illumination angle.

  4. Perception of linear horizontal self-motion induced by peripheral vision /linearvection/ - Basic characteristics and visual-vestibular interactions

    NASA Technical Reports Server (NTRS)

    Berthoz, A.; Pavard, B.; Young, L. R.

    1975-01-01

    The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self-motion) by the visual system. Latencies of onset are around 1 sec, and short-term adaptation has been shown. The dynamic range of the visual analyzer, as judged by frequency analysis, is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential although not independent role of vision in self-motion perception.

  5. Imaging with the fluorogenic dye Basic Fuchsin reveals subcellular patterning and ecotype variation of lignification in Brachypodium distachyon.

    PubMed

    Kapp, Nikki; Barnes, William J; Richard, Tom L; Anderson, Charles T

    2015-07-01

    Lignin is a complex polyphenolic heteropolymer that is abundant in the secondary cell walls of plants and functions in growth and defence. It is also a major barrier to the deconstruction of plant biomass for bioenergy production, but the spatiotemporal details of how lignin is deposited in actively lignifying tissues and the precise relationships between wall lignification in different cell types and developmental events, such as flowering, are incompletely understood. Here, the lignin-detecting fluorogenic dye, Basic Fuchsin, was adapted to enable comparative fluorescence-based imaging of lignin in the basal internodes of three Brachypodium distachyon ecotypes that display divergent flowering times. It was found that the extent and intensity of Basic Fuchsin fluorescence increase over time in the Bd21-3 ecotype, that Basic Fuchsin staining is more widespread and intense in 4-week-old Bd21-3 and Adi-10 basal internodes than in Bd1-1 internodes, and that Basic Fuchsin staining reveals subcellular patterns of lignin in vascular and interfascicular fibre cell walls. Basic Fuchsin fluorescence did not correlate with lignin quantification by acetyl bromide analysis, indicating that whole-plant and subcellular lignin analyses provide distinct information about the extent and patterns of lignification in B. distachyon. Finally, it was found that flowering time correlated with a transient increase in total lignin, but did not correlate strongly with the patterning of stem lignification, suggesting that additional developmental pathways might regulate secondary wall formation in grasses. This study provides a new comparative tool for imaging lignin in plants and helps inform our views of how lignification proceeds in grasses. © The Author 2015. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  6. Imaging with the fluorogenic dye Basic Fuchsin reveals subcellular patterning and ecotype variation of lignification in Brachypodium distachyon

    PubMed Central

    Kapp, Nikki; Barnes, William J.; Richard, Tom L.; Anderson, Charles T.

    2015-01-01

    Lignin is a complex polyphenolic heteropolymer that is abundant in the secondary cell walls of plants and functions in growth and defence. It is also a major barrier to the deconstruction of plant biomass for bioenergy production, but the spatiotemporal details of how lignin is deposited in actively lignifying tissues and the precise relationships between wall lignification in different cell types and developmental events, such as flowering, are incompletely understood. Here, the lignin-detecting fluorogenic dye, Basic Fuchsin, was adapted to enable comparative fluorescence-based imaging of lignin in the basal internodes of three Brachypodium distachyon ecotypes that display divergent flowering times. It was found that the extent and intensity of Basic Fuchsin fluorescence increase over time in the Bd21-3 ecotype, that Basic Fuchsin staining is more widespread and intense in 4-week-old Bd21-3 and Adi-10 basal internodes than in Bd1-1 internodes, and that Basic Fuchsin staining reveals subcellular patterns of lignin in vascular and interfascicular fibre cell walls. Basic Fuchsin fluorescence did not correlate with lignin quantification by acetyl bromide analysis, indicating that whole-plant and subcellular lignin analyses provide distinct information about the extent and patterns of lignification in B. distachyon. Finally, it was found that flowering time correlated with a transient increase in total lignin, but did not correlate strongly with the patterning of stem lignification, suggesting that additional developmental pathways might regulate secondary wall formation in grasses. This study provides a new comparative tool for imaging lignin in plants and helps inform our views of how lignification proceeds in grasses. PMID:25922482

  7. Quantum dots in bio-imaging: Revolution by the small

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arya, Harinder; Kaul, Zeenia; Wadhwa, Renu

    2005-04-22

    Visual analysis of biomolecules is an integral avenue of basic and applied biological research. It has been widely carried out by tagging of nucleotides and proteins with traditional fluorophores that are limited in their application by features such as photobleaching, spectral overlaps, and operational difficulties. Quantum dots (QDs) are emerging as a superior alternative and are poised to change the world of bio-imaging and further its applications in basic and applied biology. The interdisciplinary field of nanobiotechnology is experiencing a revolution, and QDs as an enabling technology have become a harbinger of this hybrid field. Within a decade, research on QDs has evolved from being a pure science subject to one with high-end commercial applications.

  8. HyphArea: automated analysis of spatiotemporal fungal patterns.

    PubMed

    Baum, Tobias; Navarro-Quezada, Aura; Knogge, Wolfgang; Douchkov, Dimitar; Schweizer, Patrick; Seiffert, Udo

    2011-01-01

    In phytopathology, quantitative measurements are rarely used to assess crop plant disease symptoms. Instead, a qualitative valuation by eye is often the method of choice. In order to close the gap between subjective human inspection and objective quantitative results, an automated analysis system was developed that is capable of recognizing and characterizing the growth patterns of fungal hyphae in micrograph images. This system should enable the efficient screening of different host-pathogen combinations (e.g., barley-Blumeria graminis, barley-Rhynchosporium secalis) using different microscopy technologies (e.g., bright field, fluorescence). An image segmentation algorithm was developed for gray-scale image data that achieved good results with several microscope imaging protocols. Furthermore, adaptability towards different host-pathogen systems was obtained by using a classification that is based on a genetic algorithm. The developed software system was named HyphArea, since the quantification of the area covered by a hyphal colony is the basic task and prerequisite for all further morphological and statistical analyses in this context. By means of a typical use case, the utilization and basic properties of HyphArea could be demonstrated. It was possible to detect statistically significant differences between the growth of an R. secalis wild-type strain and a virulence mutant. Copyright © 2010 Elsevier GmbH. All rights reserved.
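
    HyphArea's basic task, quantifying the image area covered by a hyphal colony, can be approximated by a simple threshold-based sketch like the one below (illustrative only; the published system uses a more robust segmentation plus a genetic-algorithm-based classification):

```python
# Sketch: fraction of a grey-scale micrograph covered by a segmented colony.
import numpy as np
from skimage import filters, morphology

def colony_area_fraction(gray_img):
    thresh = filters.threshold_otsu(gray_img)     # global threshold
    mask = gray_img > thresh                      # candidate hyphal pixels
    mask = morphology.remove_small_objects(mask, min_size=64)  # drop speckle
    return mask.mean()                            # covered-area fraction
```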

  9. Dynamic chest radiography: flat-panel detector (FPD) based functional X-ray imaging.

    PubMed

    Tanaka, Rie

    2016-07-01

    Dynamic chest radiography is a flat-panel detector (FPD)-based functional X-ray imaging, which is performed as an additional examination in chest radiography. The large field of view (FOV) of FPDs permits real-time observation of the entire lungs and simultaneous right-and-left evaluation of diaphragm kinetics. Most importantly, dynamic chest radiography provides pulmonary ventilation and circulation findings as slight changes in pixel value even without the use of contrast media; the interpretation is challenging and crucial for a better understanding of pulmonary function. The basic concept was proposed in the 1980s; however, it was not realized until the 2010s because of technical limitations. Dynamic FPDs and advanced digital image processing played a key role for clinical application of dynamic chest radiography. Pulmonary ventilation and circulation can be quantified and visualized for the diagnosis of pulmonary diseases. Dynamic chest radiography can be deployed as a simple and rapid means of functional imaging in both routine and emergency medicine. Here, we focus on the evaluation of pulmonary ventilation and circulation. This review article describes the basic mechanism of imaging findings according to pulmonary/circulation physiology, followed by imaging procedures, analysis method, and diagnostic performance of dynamic chest radiography.

  10. Rotation covariant image processing for biomedical applications.

    PubMed

    Skibbe, Henrik; Reisert, Marco

    2013-01-01

    With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands an automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on mathematical concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied on a variety of different 3D data modalities stemming from medical and biological sciences.

  11. Basic physics of ultrasound imaging.

    PubMed

    Aldrich, John E

    2007-05-01

    The appearance of ultrasound images depends critically on the physical interactions of sound with the tissues in the body. The basic principles of ultrasound imaging and the physical reasons for many common artifacts are described.

  12. Comparing methods for analysis of biomedical hyperspectral image data

    NASA Astrophysics Data System (ADS)

    Leavesley, Silas J.; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter F.; Annamdevula, Naga S.; Rich, Thomas C.

    2017-02-01

    Over the past 2 decades, hyperspectral imaging technologies have been adapted to address the need for molecule-specific identification in the biomedical imaging field. Applications have ranged from single-cell microscopy to whole-animal in vivo imaging and from basic research to clinical systems. Enabling this growth has been the availability of faster, more effective hyperspectral filtering technologies and more sensitive detectors. Hence, the potential for growth of biomedical hyperspectral imaging is high, and many hyperspectral imaging options are already commercially available. However, despite the growth in hyperspectral technologies for biomedical imaging, little work has been done to aid users of hyperspectral imaging instruments in selecting appropriate analysis algorithms. Here, we present an approach for comparing the effectiveness of spectral analysis algorithms by combining experimental image data with a theoretical "what if" scenario. This approach allows us to quantify several key outcomes that characterize a hyperspectral imaging study: linearity of sensitivity, positive detection cut-off slope, dynamic range, and false positive events. We present results of using this approach for comparing the effectiveness of several common spectral analysis algorithms for detecting weak fluorescent protein emission in the midst of strong tissue autofluorescence. Results indicate that this approach should be applicable to a very wide range of applications, allowing a quantitative assessment of the effectiveness of the combined biology, hardware, and computational analysis for detecting a specific molecular signature.
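
    One family of spectral analysis algorithms such comparisons commonly cover is linear unmixing, which models each pixel spectrum as a non-negative combination of known end-member spectra. A minimal sketch, not drawn from the paper itself:

```python
# Sketch: non-negative least-squares linear unmixing of a hyperspectral cube.
import numpy as np
from scipy.optimize import nnls

def unmix(cube, endmembers):
    """cube: (h, w, n_bands) image; endmembers: (n_bands, n_sources) spectra.
    Returns per-pixel abundance maps of shape (h, w, n_sources)."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)
    abundances = np.array([nnls(endmembers, p)[0] for p in pixels])
    return abundances.reshape(h, w, endmembers.shape[1])
```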

  13. Image processing and machine learning in the morphological analysis of blood cells.

    PubMed

    Rodellar, J; Alférez, S; Acevedo, A; Molina, A; Merino, A

    2018-05-01

    This review focuses on how image processing and machine learning can be useful for the morphological characterization and automatic recognition of cell images captured from peripheral blood smears. The basics of the 3 core elements (segmentation, quantitative features, and classification) are outlined, and recent literature is discussed. Although red blood cells are a significant part of this context, this study focuses on malignant lymphoid cells and blast cells. There is no doubt that these technologies may help the cytologist to perform efficient, objective, and fast morphological analysis of blood cells. They may also help in the interpretation of some morphological features and may serve as learning and survey tools. Although research is still needed, it is important to define screening strategies to exploit the potential of image-based automatic recognition systems integrated in the daily routine of laboratories along with other analysis methodologies. © 2018 John Wiley & Sons Ltd.
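
    A toy illustration of the three core elements named above, assuming a simple grey-scale smear image (production systems use far richer colour, texture, and shape features):

```python
# Sketch: segmentation and quantitative features for cell images; the feature
# matrix would then be fed to a standard classifier (the third core element).
import numpy as np
from skimage import filters, measure

def cell_features(gray_img):
    mask = gray_img > filters.threshold_otsu(gray_img)   # 1. segmentation
    feats = []
    for region in measure.regionprops(measure.label(mask)):
        feats.append([region.area,                        # 2. quantitative
                      region.eccentricity,                #    features
                      region.perimeter])
    return np.array(feats)

# 3. classification: train e.g. scikit-learn's LogisticRegression on
# labelled feature rows returned by cell_features().
```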

  14. Affect school and script analysis versus basic body awareness therapy in the treatment of psychological symptoms in patients with diabetes and high HbA1c concentrations: two study protocols for two randomized controlled trials.

    PubMed

    Melin, Eva O; Svensson, Ralph; Gustavsson, Sven-Åke; Winberg, Agneta; Denward-Olah, Ewa; Landin-Olsson, Mona; Thulesius, Hans O

    2016-04-27

    Depression is linked with alexithymia, anxiety, high HbA1c concentrations, disturbances of cortisol secretion, increased prevalence of diabetes complications and all-cause mortality. The psycho-educational method 'affect school with script analysis' and the mind-body therapy 'basic body awareness treatment' will be trialled in patients with diabetes, high HbA1c concentrations and psychological symptoms. The primary outcome measure is change in symptoms of depression. Secondary outcome measures are changes in HbA1c concentrations, midnight salivary cortisol concentration, symptoms of alexithymia, anxiety, self-image measures, use of antidepressants, incidence of diabetes complications and mortality. Two studies will be performed. Study I is an open-labeled parallel-group study with a two-arm randomized controlled trial design. Patients are randomized to either affect school with script analysis or to basic body awareness treatment. According to power calculations, 64 persons are required in each intervention arm at the last follow-up session. Patients with type 1 or type 2 diabetes were recruited from one hospital diabetes outpatient clinic in 2009. The trial will be completed in 2016. Study II is a multicentre open-labeled parallel-group three-arm randomized controlled trial. Patients will be randomized to affect school with script analysis, to basic body awareness treatment, or to treatment as usual. Power calculations show that 70 persons are required in each arm at the last follow-up session. Patients with type 2 diabetes will be recruited from primary care. This study will start in 2016 and finish in 2023. For both studies, the inclusion criteria are: HbA1c concentration ≥62.5 mmol/mol; depression, alexithymia, anxiety or a negative self-image; age 18-59 years; and diabetes duration ≥1 year. The exclusion criteria are pregnancy, severe comorbidities, cognitive deficiencies or inadequate Swedish. Depression, anxiety, alexithymia and self-image are assessed using self-report instruments. HbA1c concentration, midnight salivary cortisol concentration, blood pressure, serum lipid concentrations and anthropometrics are measured. Data are collected from computerized medical records and the Swedish national diabetes and causes of death registers. Whether the "affect school with script analysis" will reduce psychological symptoms, increase emotional awareness and improve diabetes related factors will be tried, and compared to "basic body awareness treatment" and treatment as usual. ClinicalTrials.gov: NCT01714986.

  15. Frequency Domain Analysis of Multiwavelength Photoacoustic Signals for Differentiating Tissue Components

    NASA Astrophysics Data System (ADS)

    Jian, X. H.; Dong, F. L.; Xu, J.; Li, Z. J.; Jiao, Y.; Cui, Y. Y.

    2018-05-01

    The feasibility of differentiating tissue components by performing frequency domain analysis of photoacoustic images acquired at different wavelengths was studied in this paper. First, based on the basic theory of photoacoustic imaging, a brief theoretical model for frequency domain analysis of multiwavelength photoacoustic signals was derived. The experimental results showed that the frequency-domain behaviour of different targets differs considerably. In particular, the characteristic peaks of the acoustic spectrum are unique to each target: 2.93 MHz, 5.37 MHz, 6.83 MHz, and 8.78 MHz for the PDMS phantom, versus 13.20 MHz, 16.60 MHz, 26.86 MHz, and 29.30 MHz for pork fat. The results indicate that the acoustic spectrum of photoacoustic imaging signals could potentially be utilized for tissue composition characterization.
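
    The characteristic acoustic-spectrum peaks reported above could be located with standard peak detection. The sketch below assumes `signal` is a 1D photoacoustic time trace sampled at `fs` Hz (hypothetical inputs, not the authors' processing chain):

```python
# Sketch: locate characteristic peaks (in MHz) in a photoacoustic spectrum.
import numpy as np
from scipy.signal import find_peaks

def spectral_peaks_mhz(signal, fs, prominence=0.1):
    amp = np.abs(np.fft.rfft(signal))
    amp /= amp.max()                       # normalize for a relative threshold
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    idx, _ = find_peaks(amp, prominence=prominence)
    return freqs[idx] / 1e6                # peak frequencies in MHz
```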

  16. CognitionMaster: an object-based image analysis framework

    PubMed Central

    2013-01-01

    Background: Automated image analysis methods are becoming more and more important for extracting and quantifying image features in microscopy-based biomedical studies, and several commercial or open-source tools are available. However, most of the approaches rely on pixel-wise operations, a concept that has limitations when high-level object features and relationships between objects are studied and if user-interactivity on the object level is desired. Results: In this paper we present an open-source software that facilitates the analysis of content features and object relationships by using objects as the basic processing unit instead of individual pixels. Our approach enables even users without programming knowledge to compose "analysis pipelines" that exploit the object-level approach. We demonstrate the design and use of example pipelines for immunohistochemistry-based cell proliferation quantification in breast cancer and two-photon fluorescence microscopy data on bone-osteoclast interaction, which underline the advantages of the object-based concept. Conclusions: We introduce an open source software system that offers object-based image analysis. The object-based concept allows for the straightforward development of object-related interactive or fully automated image analysis solutions. The presented software may therefore serve as a basis for various applications in the field of digital image analysis. PMID:23445542

  17. Visually enhanced CCTV digital surveillance utilizing Intranet and Internet.

    PubMed

    Ozaki, Nobuyuki

    2002-07-01

    This paper describes a solution for integrated plant supervision utilizing closed circuit television (CCTV) digital surveillance. Three basic requirements for the system platform are first addressed, together with a discussion of suitable video compression. The system configuration is described in blocks. The system provides surveillance functionality (real-time monitoring) and process analysis functionality (a troubleshooting tool). This paper describes the formulation of a practical performance design for determining various encoder parameters. It also introduces image processing techniques for enhancing the original CCTV digital image to lessen the burden on operators. Some screenshots are shown for the surveillance functionality. For the process analysis, an image searching filter supported by image processing techniques is explained with screenshots. Multimedia surveillance, that is, the merger with process data surveillance or the SCADA system, is also explained.

  18. The Processing Speed of Scene Categorization at Multiple Levels of Description: The Superordinate Advantage Revisited.

    PubMed

    Banno, Hayaki; Saiki, Jun

    2015-03-01

    Recent studies have sought to determine which levels of categories are processed first in visual scene categorization and have shown that the natural and man-made superordinate-level categories are understood faster than basic-level categories. The current study examined the robustness of the superordinate-level advantage in a visual scene categorization task. A go/no-go categorization task was evaluated with response time distribution analysis using an ex-Gaussian template. A visual scene was categorized at either the superordinate or the basic level, and two basic-level categories forming a superordinate category were judged as either similar or dissimilar to each other. First, outdoor/indoor and natural/man-made groups were used as superordinate categories to investigate whether the advantage could be generalized beyond the natural/man-made boundary. Second, the set of images forming a superordinate category was manipulated. We predicted that decreasing image set similarity within the superordinate-level category would work against the speed advantage. We found that basic-level categorization was faster than outdoor/indoor categorization when the outdoor category comprised dissimilar basic-level categories. Our results indicate that the superordinate-level advantage in visual scene categorization is labile across different categories and category structures. © 2015 SAGE Publications.

  19. Development of a quantitative morphological assessment of toxicant-treated zebrafish larvae using brightfield imaging and high-content analysis.

    PubMed

    Deal, Samantha; Wambaugh, John; Judson, Richard; Mosher, Shad; Radio, Nick; Houck, Keith; Padilla, Stephanie

    2016-09-01

    One of the rate-limiting procedures in a developmental zebrafish screen is the morphological assessment of each larva. Most researchers opt for a time-consuming, structured visual assessment by trained human observer(s). The present studies were designed to develop a more objective, accurate and rapid method for screening zebrafish for dysmorphology. Instead of the very detailed human assessment, we have developed the computational malformation index, which combines the use of high-content imaging with a very brief human visual assessment. Each larva was quickly assessed by a human observer (basic visual assessment), killed, fixed and assessed for dysmorphology with the Zebratox V4 BioApplication using the Cellomics® ArrayScan® V(TI) high-content image analysis platform. The basic visual assessment adds in-life parameters, and the high-content analysis assesses each individual larva for various features (total area, width, spine length, head-tail length, length-width ratio, perimeter-area ratio). In developing the computational malformation index, a training set of hundreds of embryos treated with hundreds of chemicals were visually assessed using the basic or detailed method. In the second phase, we assessed both the stability of these high-content measurements and its performance using a test set of zebrafish treated with a dose range of two reference chemicals (trans-retinoic acid or cadmium). We found the measures were stable for at least 1 week and comparison of these automated measures to detailed visual inspection of the larvae showed excellent congruence. Our computational malformation index provides an objective manner for rapid phenotypic brightfield assessment of individual larva in a developmental zebrafish assay. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system, together with the adaptive priority controller, are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.

  1. Digital 3D Microstructure Analysis of Concrete using X-Ray Micro Computed Tomography SkyScan 1173: A Preliminary Study

    NASA Astrophysics Data System (ADS)

    Latief, F. D. E.; Mohammad, I. H.; Rarasati, A. D.

    2017-11-01

    Digital imaging of a concrete sample using high-resolution X-Ray Micro Computed Tomography (μ-CT) has been conducted to assess the characteristics of the sample's structure. A standard procedure of image acquisition, reconstruction, and image processing for the method using a particular scanning device, i.e., the Bruker SkyScan 1173 High Energy Micro-CT, is elaborated. A qualitative and a quantitative analysis were briefly performed on the sample to give a basic idea of the capabilities of the system and the bundled software package. Calculations of total VOI volume, object volume, percent object volume, total VOI surface, object surface, object surface/volume ratio, object surface density, structure thickness, structure separation, and total porosity were conducted and analysed. This paper should serve as a brief description of how the device can produce the preferred image quality, as well as of the ability of the bundled software packages to help in performing qualitative and quantitative analysis.
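
    Several of the quantities listed, such as percent object volume and total porosity, follow directly from a binarized volume of interest; a minimal sketch (the bundled SkyScan analysis software also computes the surface and thickness measures):

```python
# Sketch: basic volume statistics from a binarized micro-CT volume of interest.
import numpy as np

def basic_volume_stats(solid, voxel_size_mm):
    """solid: 3D boolean array (True = solid voxel) for the VOI."""
    voxel_vol = voxel_size_mm ** 3
    total_voi_volume = solid.size * voxel_vol          # mm^3
    object_volume = solid.sum() * voxel_vol            # mm^3
    percent_object_volume = 100.0 * object_volume / total_voi_volume
    total_porosity = 100.0 - percent_object_volume     # percent pore space
    return total_voi_volume, object_volume, percent_object_volume, total_porosity
```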

  2. Physics and engineering aspects of cell and tissue imaging systems: microscopic devices and computer assisted diagnosis.

    PubMed

    Chen, Xiaodong; Ren, Liqiang; Zheng, Bin; Liu, Hong

    2013-01-01

    The conventional optical microscopes have been used widely in scientific research and in clinical practice. The modern digital microscopic devices combine the power of optical imaging and computerized analysis, archiving and communication techniques. It has a great potential in pathological examinations for improving the efficiency and accuracy of clinical diagnosis. This chapter reviews the basic optical principles of conventional microscopes, fluorescence microscopes and electron microscopes. The recent developments and future clinical applications of advanced digital microscopic imaging methods and computer assisted diagnosis schemes are also discussed.

  3. Infrared spectroscopic imaging for noninvasive detection of latent fingerprints.

    PubMed

    Crane, Nicole J; Bartick, Edward G; Perlman, Rebecca Schwartz; Huffman, Scott

    2007-01-01

    The capability of Fourier transform infrared (FTIR) spectroscopic imaging to provide detailed images of unprocessed latent fingerprints while also preserving important trace evidence is demonstrated. Unprocessed fingerprints were developed on various porous and nonporous substrates. Data-processing methods used to extract the latent fingerprint ridge pattern from the background material included basic infrared spectroscopic band intensities, addition and subtraction of band intensity measurements, principal components analysis (PCA) and calculation of second derivative band intensities, as well as combinations of these various techniques. Additionally, trace evidence within the fingerprints was recovered and identified.

  4. Bundle Adjustment-Based Stability Analysis Method with a Case Study of a Dual Fluoroscopy Imaging System

    NASA Astrophysics Data System (ADS)

    Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.

    2018-05-01

    A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging-system's calibration parameters. This is essential to validate the repeatability of the parameters' estimation, to detect any behavioural changes in the camera/imaging system, and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each one has different methodological bases, and advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square-error (RMSE) for single-camera analysis, and 0.07 to 0.19 mm for dual-camera analysis. To the authors' best knowledge, this work is the first to address the topic of DF stability analysis.
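
    For reference, the basic collinearity model referred to above is commonly written as follows (a textbook form; the paper's exact parameterization, e.g. additional distortion terms, may differ). Here (x, y) is an image observation, (x_p, y_p) the principal point, c the principal distance, r_ij the elements of the rotation matrix, and (X_c, Y_c, Z_c) the perspective centre:

```latex
% Standard collinearity equations (textbook form).
\begin{align}
x - x_p &= -c\,\frac{r_{11}(X - X_c) + r_{12}(Y - Y_c) + r_{13}(Z - Z_c)}
                    {r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}\\
y - y_p &= -c\,\frac{r_{21}(X - X_c) + r_{22}(Y - Y_c) + r_{23}(Z - Z_c)}
                    {r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}
\end{align}
```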

  5. Chosen Aspects of the Production of the Basic Map Using Uav Imagery

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Fryskowska, A.; Wierzbicki, D.; Nerc, P.

    2016-06-01

    For several years there has been an increasing interest in the use of unmanned aerial vehicles for acquiring image data from a low altitude. Considering the cost-effectiveness of the flight time of UAVs vs. conventional airplanes, the use of the former is advantageous when generating large-scale, accurate orthophotos. Through the development of UAV imagery, we can update large-scale basic maps. These maps are cartographic products which are used for registration, economic, and strategic planning. On the basis of these maps, other cartographic products are derived, for example maps used in building planning. The article presents an assessment of the usefulness of orthophotos based on UAV imagery for updating the basic map. In the research, a compact, non-metric camera mounted on a fixed wing powered by an electric motor was used. The tested area covered flat, agricultural and woodland terrain. The processing and analysis of the orthorectification were carried out with the INPHO UASMaster programme. Due to the effect of UAV instability on low-altitude imagery, the use of non-metric digital cameras, and the low-accuracy GPS-INS sensors, the geometric quality of the images is visibly lower compared with conventional digital aerial photos (large values of the phi and kappa angles). Therefore, typically, low-altitude images require a large along- and across-track overlap - usually above 70%. As a result of the research, orthoimages were obtained with a resolution of 0.06 m and a horizontal accuracy of 0.10 m. Digitized basic maps were used as the reference data. The accuracy of the orthoimages vs. the basic maps was estimated based on the study and on the available reference sources. As a result, it was found that the geometric accuracy and interpretative advantages of the final orthoimages allow the updating of basic maps. It is estimated that such an update of basic maps based on UAV imagery reduces processing time by approx. 40%.

  6. A modeling analysis program for the JPL Table Mountain Io sodium cloud data

    NASA Technical Reports Server (NTRS)

    Smyth, William H.; Goldberg, Bruce A.

    1988-01-01

    Research in the third and final year of this project is divided into three main areas: (1) completion of data processing and calibration for 34 of the 1981 Region B/C images, selected from the massive JPL sodium cloud data set; (2) identification and examination of the basic features and observed changes in the morphological characteristics of the sodium cloud images; and (3) successful physical interpretation of these basic features and observed changes using the highly developed numerical sodium cloud model at AER. The modeling analysis has led to a number of definite conclusions regarding the local structure of Io's atmosphere, the gas escape mechanism at Io, and the presence of an east-west electric field and a System III longitudinal asymmetry in the plasma torus. Large scale stability, as well as some smaller scale time variability for both the sodium cloud and the structure of the plasma torus over a several year time period are also discussed.

  7. Optimality of the basic colour categories for classification

    PubMed Central

    Griffin, Lewis D

    2005-01-01

    Categorization of colour has been widely studied as a window into human language and cognition, and quite separately has been used pragmatically in image-database retrieval systems. This suggests the hypothesis that the best category system for pragmatic purposes coincides with human categories (i.e. the basic colours). We have tested this hypothesis by assessing the performance of different category systems in a machine-vision task. The task was the identification of the odd-one-out from triples of images obtained using a web-based image-search service. In each triple, two of the images had been retrieved using the same search term, the other a different term. The terms were simple concrete nouns. The results were as follows: (i) the odd-one-out task can be performed better than chance using colour alone; (ii) basic colour categorization performs better than random systems of categories; (iii) a category system that performs better than the basic colours could not be found; and (iv) it is not just the general layout of the basic colours that is important, but also the detail. We conclude that (i) the results support the plausibility of an explanation for the basic colours as a result of a pressure-to-optimality and (ii) the basic colours are good categories for machine vision image-retrieval systems. PMID:16849219

  8. Promise of new imaging technologies for assessing ovarian function.

    PubMed

    Singh, Jaswant; Adams, Gregg P; Pierson, Roger A

    2003-10-15

    Advancements in imaging technologies over the last two decades have ushered a quiet revolution in research approaches to the study of ovarian structure and function. The most significant changes in our understanding of the ovary have resulted from the use of ultrasonography which has enabled sequential analyses in live animals. Computer-assisted image analysis and mathematical modeling of the dynamic changes within the ovary has permitted exciting new avenues of research with readily quantifiable endpoints. Spectral, color-flow and power Doppler imaging now facilitate physiologic interpretations of vascular dynamics over time. Similarly, magnetic resonance imaging (MRI) is emerging as a research tool in ovarian imaging. New technologies, such as three-dimensional ultrasonography and MRI, ultrasound-based biomicroscopy and synchrotron-based techniques each have the potential to enhance our real-time picture of ovarian function to the near-cellular level. Collectively, information available in ultrasonography, MRI, computer-assisted image analysis and mathematical modeling heralds a new era in our understanding of the basic processes of female and male reproduction.

  9. Processing Satellite Images on Tertiary Storage: A Study of the Impact of Tile Size on Performance

    NASA Technical Reports Server (NTRS)

    Yu, JieBing; DeWitt, David J.

    1996-01-01

    Before raw data from a satellite can be used by an Earth scientist, it must first undergo a number of processing steps including basic processing, cleansing, and geo-registration. Processing actually expands the volume of data collected by a factor of 2 or 3, and the original data is never deleted. Thus, processing and storage requirements can exceed 2 terabytes/day. Once processed data is ready for analysis, a series of algorithms (typically developed by the Earth scientists) is applied to a large number of images in a data set. The focus of this paper is how best to handle such images stored on tape using the following assumptions: (1) all images of interest to a scientist are stored on a single tape, (2) images are accessed and processed in the order that they are stored on tape, and (3) the analysis requires access to only a portion of each image and not the entire image.

  10. Implementation of the Pan-STARRS Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Fang, Julia; Aspin, C.

    2007-12-01

    Pan-STARRS, or the Panoramic Survey Telescope and Rapid Response System, is a wide-field imaging facility that combines small mirrors with gigapixel cameras. It surveys the entire available sky several times a month, which ultimately requires large amounts of data to be processed and stored right away. Accordingly, the Image Processing Pipeline (IPP) is a collection of software tools that is responsible for the primary image analysis for Pan-STARRS. It includes data registration, basic image analysis such as obtaining master images and detrending the exposures, mosaic calibration when applicable, and lastly, image summing and differencing. In this paper I present my work on the installation of IPP 2.1 and 2.2 on a Linux machine, running the Simtest (simulated data used to verify the installation), and finally applying the IPP to two different sets of UH 2.2m Tek data. This work was conducted through a Research Experience for Undergraduates (REU) position at the University of Hawaii's Institute for Astronomy and funded by the NSF.

  11. Automatic analysis of microscopic images of red blood cell aggregates

    NASA Astrophysics Data System (ADS)

    Menichini, Pablo A.; Larese, Mónica G.; Riquelme, Bibiana D.

    2015-06-01

    Red blood cell aggregation is one of the most important factors in blood viscosity at stasis or at very low rates of flow. The basic structure of aggregates is a linear array of cells commonly termed rouleaux. Enhanced or abnormal aggregation is seen in clinical conditions such as diabetes and hypertension, producing alterations in the microcirculation, some of which can be analyzed through the characterization of aggregated cells. Frequently, image processing and analysis for the characterization of RBC aggregation have been done manually or semi-automatically using interactive tools. We propose a system that processes images of RBC aggregation and automatically obtains the characterization and quantification of the different types of RBC aggregates. The present technique could be attractive as a routine for hemorheological and clinical biochemistry laboratories, because this automatic method is rapid, efficient and economical, and at the same time independent of the user performing the analysis, ensuring repeatability.

  12. Rotation Covariant Image Processing for Biomedical Applications

    PubMed Central

    Reisert, Marco

    2013-01-01

    With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands an automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on mathematical concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied on a variety of different 3D data modalities stemming from medical and biological sciences. PMID:23710255

  13. Fixed-Cell Imaging of Schizosaccharomyces pombe.

    PubMed

    Hagan, Iain M; Bagley, Steven

    2016-07-01

    The acknowledged genetic malleability of fission yeast has been matched by impressive cytology to drive major advances in our understanding of basic molecular cell biological processes. In many of the more recent studies, traditional approaches of fixation followed by processing to accommodate classical staining procedures have been superseded by live-cell imaging approaches that monitor the distribution of fusion proteins between a molecule of interest and a fluorescent protein. Although such live-cell imaging is uniquely informative for many questions, fixed-cell imaging remains the better option for others and is an important, sometimes critical, complement to the analysis of fluorescent fusion proteins by live-cell imaging. Here, we discuss the merits of fixed- and live-cell imaging as well as specific issues for fluorescence microscopy imaging of fission yeast. © 2016 Cold Spring Harbor Laboratory Press.

  14. MRI Segmentation of the Human Brain: Challenges, Methods, and Applications

    PubMed Central

    Despotović, Ivana

    2015-01-01

    Image segmentation is one of the most important tasks in medical image analysis and is often the first and the most critical step in many clinical applications. In brain MRI analysis, image segmentation is commonly used for measuring and visualizing the brain's anatomical structures, for analyzing brain changes, for delineating pathological regions, and for surgical planning and image-guided interventions. In the last few decades, various segmentation techniques of different accuracy and degree of complexity have been developed and reported in the literature. In this paper we review the most popular methods commonly used for brain MRI segmentation. We highlight differences between them and discuss their capabilities, advantages, and limitations. To address the complexity and challenges of the brain MRI segmentation problem, we first introduce the basic concepts of image segmentation. Then, we explain different MRI preprocessing steps including image registration, bias field correction, and removal of nonbrain tissue. Finally, after reviewing different brain MRI segmentation methods, we discuss the validation problem in brain MRI segmentation. PMID:25945121

  15. Feasibility of digital image colorimetry: application for water calcium hardness determination.

    PubMed

    Lopez-Molinero, Angel; Tejedor Cubero, Valle; Domingo Irigoyen, Rosa; Sipiera Piazuelo, Daniel

    2013-01-15

    The interpretation and relevance of the basic RGB colors in digital image-based colorimetry are treated in this paper. The studies were carried out using the chromogenic model formed by the reaction between Ca(II) ions and glyoxal bis(2-hydroxyanil), which produces orange-red colored solutions in alkaline media. The individual basic color data (RGB) and the total intensity of colors, I(tot), were the original variables treated by factorial analysis. The evaluation showed that the highest variance of the system and the highest analytical sensitivity were associated with the G color. However, after Fourier transform analysis, the basic R color was recognized as an important feature of the information: it appeared as an intrinsic low-frequency characteristic in the Fourier transform. The principal components analysis study showed that the variance of the system could be mostly retained in the first principal component, but was dependent on all basic colors. The colored complex was also applied and validated as a digital image colorimetric method for the determination of Ca(II) ions. RGB intensities were linearly correlated with Ca(II) in the range 0.2-2.0 mg L(-1). Under the best conditions, using the green color, a simple and reliable method for Ca determination could be developed. Its detection limit was established (3s criterion) as 0.07 mg L(-1), and the reproducibility was lower than 6% for 1.0 mg L(-1) Ca. Other chromatic parameters were evaluated as dependent calibration variables; their representativeness, variance and sensitivity were discussed in order to select the best analytical variable. The potential of the procedure as a field-ready method, suitable for 'in situ' application with a minimum of experimental needs, was demonstrated. Analyses of Ca in different real water samples were carried out: municipal tap water, bottled mineral water, and natural river water were analyzed, and the results were compared and evaluated statistically. The validity was assessed by the alternative techniques of flame atomic absorption spectroscopy and titrimetry. Some differences were observed, but they were consistent with the applied methods. Copyright © 2012 Elsevier B.V. All rights reserved.
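
    The core digital image colorimetry workflow described above, averaging a colour channel over a region of interest and fitting a linear calibration against standards, can be sketched as follows; the concentration and intensity values below are hypothetical, not the paper's data:

```python
# Sketch: green-channel digital image colorimetry with a linear calibration.
import numpy as np

def mean_channel(rgb_img, channel):      # channel: 0 = R, 1 = G, 2 = B
    return rgb_img[..., channel].mean()

# Hypothetical standards: Ca(II) concentration (mg/L) vs. mean green intensity.
conc = np.array([0.2, 0.5, 1.0, 1.5, 2.0])
g = np.array([180.0, 160.0, 130.0, 102.0, 75.0])
slope, intercept = np.polyfit(conc, g, 1)     # linear calibration G = a*C + b

def predict_concentration(g_sample):
    return (g_sample - intercept) / slope     # invert the calibration line
```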

  16. The Basic Principles of FDG-PET/CT Imaging.

    PubMed

    Basu, Sandip; Hess, Søren; Nielsen Braad, Poul-Erik; Olsen, Birgitte Brinkmann; Inglev, Signe; Høilund-Carlsen, Poul Flemming

    2014-10-01

    Positron emission tomography (PET) imaging with 2-[(18)F]fluoro-2-deoxy-D-glucose (FDG) forms the basis of molecular imaging. FDG-PET imaging is a multidisciplinary undertaking that requires close interdisciplinary collaboration in a broad team comprising physicians, technologists, secretaries, radio-chemists, hospital physicists, molecular biologists, engineers, and cyclotron technicians. The aim of this review is to provide a brief overview of important basic issues and considerations pivotal to successful patient examinations, including basic physics, instrumentation, radiochemistry, molecular and cell biology, patient preparation, normal distribution of tracer, and potential interpretive pitfalls. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Non-destructive terahertz imaging of illicit drugs using spectral fingerprints

    NASA Astrophysics Data System (ADS)

    Kawase, Kodo; Ogawa, Yuichi; Watanabe, Yuuki; Inoue, Hiroyuki

    2003-10-01

    The absence of non-destructive inspection techniques for illicit drugs hidden in mail envelopes has resulted in such drugs being smuggled across international borders freely. We have developed a novel basic technology for terahertz imaging, which allows detection and identification of drugs concealed in envelopes, by introducing the component spatial pattern analysis. The spatial distributions of the targets are obtained from terahertz multispectral transillumination images, using absorption spectra measured with a tunable terahertz-wave source. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.

  18. Acoustic Waves in Medical Imaging and Diagnostics

    PubMed Central

    Sarvazyan, Armen P.; Urban, Matthew W.; Greenleaf, James F.

    2013-01-01

    Up until about two decades ago acoustic imaging and ultrasound imaging were synonymous. The term “ultrasonography,” or its abbreviated version “sonography” meant an imaging modality based on the use of ultrasonic compressional bulk waves. Since the 1990s numerous acoustic imaging modalities started to emerge based on the use of a different mode of acoustic wave: shear waves. It was demonstrated that imaging with these waves can provide very useful and very different information about the biological tissue being examined. We will discuss physical basis for the differences between these two basic modes of acoustic waves used in medical imaging and analyze the advantages associated with shear acoustic imaging. A comprehensive analysis of the range of acoustic wavelengths, velocities, and frequencies that have been used in different imaging applications will be presented. We will discuss the potential for future shear wave imaging applications. PMID:23643056

  19. A new version of Visual tool for estimating the fractal dimension of images

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Felea, D.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Stan, E.; Esanu, T.

    2010-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to bi-dimensional sets of points stored in bitmap files. The application was extended to work also with comma-separated-values files and three-dimensional images.
    New version program summary:
    Program title: Fractal Analysis v02
    Catalogue identifier: AEEG_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 9999
    No. of bytes in distributed program, including test data, etc.: 4 366 783
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 30 M
    Classification: 14
    Catalogue identifier of previous version: AEEG_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999
    Does the new version supersede the previous version?: Yes
    Nature of problem: Estimating the fractal dimension of 2D and 3D images.
    Solution method: Optimized implementation of the box-counting algorithm.
    Reasons for new version: The previous version was limited to bitmap image files. The new application was extended to work with objects stored in comma-separated-values (csv) files. The main advantages are: easier integration with other applications (csv is a widely used, simple text file format); fewer resources consumed and improved performance (only the information of interest, the "black points", is stored); higher resolution (the point coordinates are loaded into Visual Basic double variables [2]); and the possibility of storing three-dimensional objects (e.g. the 3D Sierpinski gasket). In this version the optimized box-counting algorithm [1] was extended to the three-dimensional case.
    Summary of revisions: The application interface was changed from SDI (single document interface) to MDI (multi-document interface). One form was added to provide a graphical user interface for the new functionality (fractal analysis of 2D and 3D images stored in csv files).
    Additional comments: User-friendly graphical interface; easy deployment mechanism.
    Running time: To a first approximation, the algorithm is linear.
    References: [1] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001. [2] F. Balena, Programming Microsoft Visual Basic 6.0, Microsoft Press, US, 1999.
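
    For readers unfamiliar with the underlying method, the box-counting estimate named under "Solution method" can be stated compactly: cover the image with boxes of side s, count the boxes containing any foreground point, and take the slope of log(count) versus log(1/s). A generic, unoptimized Python sketch of the 2D case (not the authors' Visual Basic code):

```python
import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a 2D binary array by box counting."""
    counts = []
    for s in sizes:
        # Trim so the image tiles evenly, then pool s x s blocks.
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    # Dimension = slope of log(count) vs. log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Quick sanity check on a filled square (expected dimension ~2).
square = np.ones((128, 128), dtype=bool)
print(f"estimated dimension: {box_counting_dimension(square):.2f}")
```

    The 3D extension mentioned in the summary replaces the s x s boxes with s x s x s cubes; the counting and the log-log fit are unchanged.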

  20. The Crew Earth Observations Experiment: Earth System Science from the ISS

    NASA Technical Reports Server (NTRS)

    Stefanov, William L.; Evans, Cynthia A.; Robinson, Julie A.; Wilkinson, M. Justin

    2007-01-01

    This viewgraph presentation reviews the use of Astronaut Photography (AP) taken from the International Space Station (ISS) in Earth System Science (ESS). Included are slides covering basic remote sensing theory, the data characteristics of astronaut photography, astronaut training and operations, the Crew Earth Observations group, site targeting and acquisition, cataloging and databasing, and analysis and applications for ESS, with image analysis of areas of particular interest: urban areas, megafans, deltas, and coral reefs. Examples of the photographs and the analysis are given.

  1. PET kinetic analysis --pitfalls and a solution for the Logan plot.

    PubMed

    Kimura, Yuichi; Naganawa, Mika; Shidahara, Miho; Ikoma, Yoko; Watabe, Hiroshi

    2007-01-01

    The Logan plot is a widely used algorithm for the quantitative analysis of neuroreceptors using PET because it is easy to use and simple to implement. The Logan plot is also suitable for receptor imaging because its algorithm is fast. However, use of the Logan plot and interpretation of the resulting receptor images call for caution, because noise in PET data causes bias in the Logan plot estimates. In this paper, we describe the basic concept of the Logan plot in detail and introduce three algorithms for the Logan plot. By comparing these algorithms, we demonstrate the pitfalls of the Logan plot and discuss a solution.
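
    In its reference-tissue form, the Logan plot regresses the running integral of the target-tissue curve against the running integral of the reference curve, both normalized by the instantaneous tissue value; after some time t*, the points fall on a line whose slope estimates the distribution volume ratio (DVR). A schematic sketch of the simplified form (omitting the k2' term) with synthetic curves, not real PET data:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Synthetic time-activity curves on a minute grid; illustrative only.
t = np.linspace(0.5, 90.0, 60)
c_ref = 50.0 * np.exp(-t / 30.0)      # reference region
c_tis = 70.0 * np.exp(-t / 40.0)      # receptor-rich target region

int_tis = cumulative_trapezoid(c_tis, t, initial=0.0)
int_ref = cumulative_trapezoid(c_ref, t, initial=0.0)

# Logan transform: y = int_tis / c_tis versus x = int_ref / c_tis;
# the late-time slope approximates DVR.
x, y = int_ref / c_tis, int_tis / c_tis
late = t > 30.0                        # points past the linear onset t*
dvr, _ = np.polyfit(x[late], y[late], 1)
print(f"estimated DVR: {dvr:.2f}")
```

    Because the noisy tissue curve appears in the denominator of both axes, the regression errors are correlated with the regressor, which is the source of the noise-induced bias discussed above.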

  2. Live Cell Imaging Confocal Microscopy Analysis of HBV Myr-PreS1 Peptide Binding and Uptake in NTCP-GFP Expressing HepG2 Cells.

    PubMed

    König, Alexander; Glebe, Dieter

    2017-01-01

    Basic knowledge of the specific molecular mechanisms involved in the entry of pathogens into cells is the basis for establishing pharmacologic substances that block initial viral binding, infection, and subsequent viral spread. Lack of information about key cellular factors involved in the initial steps of HBV infection hampered the characterization of HBV binding and entry for decades. Recently, however, the liver-specific sodium-dependent taurocholate cotransporting polypeptide (NTCP) was discovered as a functional receptor for HBV and HDV, opening the field for new concepts of basic binding and entry of HBV and HDV. Here, we describe practical issues of a basic in vitro assay system for examining the kinetics and mechanisms of receptor-dependent HBV binding, uptake, and intracellular trafficking by live-cell imaging confocal microscopy. The assay system comprises HepG2 cells expressing an NTCP-GFP fusion protein and a chemically synthesized, fluorophore-labeled part of the HBV surface protein, spanning the first N-terminal 48 amino acids of preS1 of the large hepatitis B virus surface protein.

  3. A 2D Fourier tool for the analysis of photo-elastic effect in large granular assemblies

    NASA Astrophysics Data System (ADS)

    Leśniewska, Danuta

    2017-06-01

    Fourier transforms are the basic tool for constructing different types of image filters, mainly those reducing optical noise. Some DIC or PIV software also uses frequency space to obtain displacement fields from a series of digital images of a deforming body. The paper presents a series of 2D Fourier transforms of photo-elastic transmission images representing a large pseudo-2D granular assembly deforming under varying boundary conditions. Images relating to different scales were acquired at the same image resolution but taken at different distances from the sample. Fourier transforms of images representing different stages of deformation reveal characteristic features at three ('macro-', 'meso-' and 'micro-') scales, which can serve as data for studying the internal order-disorder transition within granular materials.
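
    The basic operation referred to here, taking an image into frequency space so that periodic structure appears as localized peaks, is a one-liner with NumPy; a minimal sketch on a synthetic image (not the photo-elastic data):

```python
import numpy as np

# Synthetic image: a periodic pattern (period 16 px along x) plus noise.
yy, xx = np.mgrid[0:256, 0:256]
img = np.sin(2 * np.pi * xx / 16.0) + 0.5 * np.random.rand(256, 256)

# Centered 2D amplitude spectrum; periodic structure shows up as
# symmetric peaks away from the zero-frequency center.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))

# Suppress the DC term and locate the strongest peak.
spectrum[128, 128] = 0.0
peak = np.unravel_index(np.argmax(spectrum), spectrum.shape)
print("dominant spatial frequency at (row, col):", peak)  # ~ (128, 128 +/- 16)
```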

  4. Platform for Post-Processing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    2010-01-01

    Signal- and image-processing methods are commonly needed to extract information from waveforms, improve the resolution of an image, and highlight defects in it. Since all waveform-based nondestructive evaluation (NDE) methods share some similarity, a common software platform containing multiple signal- and image-processing techniques for processing the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data. It offers the user hundreds of basic and advanced signal- and image-processing capabilities, including esoteric 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included for both signals and images, so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data, such as that from Raman spectroscopy. An extensive joint time-frequency module is included for analysis of non-stationary or transient data such as acoustic emission, vibration, or earthquake data.

  5. A modeling analysis program for the JPL Table Mountain Io sodium cloud data

    NASA Technical Reports Server (NTRS)

    Smyth, W. H.; Goldberg, B. A.

    1986-01-01

    Progress and achievements in the second year are discussed in three main areas: (1) data quality review of the 1981 Region B/C images; (2) data processing activities; and (3) modeling activities. The data quality review revealed that almost all 1981 Region B/C images are of sufficient quality to be valuable in analyses of the JPL data set. In the second area, the major milestone was the successful development and application of the complex image-processing software required to render the original image data suitable for modeling analysis studies. In the third area, the lifetime description of sodium atoms in the planet's magnetosphere was improved in the model to include the offset-dipole nature of the magnetic field as well as an east-west electric field. These improvements are important for properly representing the basic morphology as well as the east-west asymmetries of the sodium cloud.

  6. Design on the x-ray oral digital image display card

    NASA Astrophysics Data System (ADS)

    Wang, Liping; Gu, Guohua; Chen, Qian

    2009-10-01

    Based on the main characteristics of X-ray imaging, an X-ray display card was successfully designed and debugged using the basic principle of correlated double sampling (CDS) combined with embedded computer technology. The CCD sensor drive circuit and the corresponding procedures were designed, along with the filtering and sample-and-hold circuits, and data exchange over the PC104 bus was implemented. Using a complex programmable logic device to provide gating and timing logic, the functions of counting, reading CPU control instructions, triggering the corresponding exposure, and controlling the sample-and-hold stage were completed. The circuit components were adjusted according to analysis of the image quality and noise, and high-quality images were obtained.

  7. The AAPM/RSNA physics tutorial for residents. Basic physics of MR imaging: an introduction.

    PubMed

    Hendrick, R E

    1994-07-01

    This article provides an introduction to the basic physical principles of magnetic resonance (MR) imaging. Essential basic concepts such as nuclear magnetism, tissue magnetization, precession, excitation, and tissue relaxation properties are presented. Hydrogen spin density and tissue relaxation times T1, T2, and T2* are explained. The basic elements of a planar MR pulse sequence are described: section selection during tissue excitation, phase encoding, and frequency encoding during signal measurement.

  8. Characterization of the basic charge variants of a human IgG1

    PubMed Central

    Lu, Franklin; Derfus, Gayle; Kluck, Brian; Nogal, Bartek; Emery, Craig; Summers, Christie; Zheng, Kai; Bayer, Robert; Amanullah, Ashraf

    2011-01-01

    We report a case study of an IgG1 with a unique basic charge variant profile caused by C-terminal proline amidation on either one or two heavy chains. The proline amidation was sensitive to copper ion concentration in the production media during cell culture: the higher the Cu2+ ion concentration, the higher the level of proline amidation detected. This conclusion was supported by the analysis of samples that revealed direct correlation between the proline amidation level observed from peptide maps and the level of basic peaks measured by imaged capillary isoelectric focusing and a pH gradient ion-exchange chromatography method. The importance of these observations to therapeutic antibody production is discussed. PMID:22123059

  9. Characterization of the basic charge variants of a human IgG1: effect of copper concentration in cell culture media.

    PubMed

    Kaschak, Timothy; Boyd, Daniel; Lu, Franklin; Derfus, Gayle; Kluck, Brian; Nogal, Bartek; Emery, Craig; Summers, Christie; Zheng, Kai; Bayer, Robert; Amanullah, Ashraf; Yan, Boxu

    2011-01-01

    We report a case study of an IgG1 with a unique basic charge variant profile caused by C-terminal proline amidation on either one or two heavy chains. The proline amidation was sensitive to copper ion concentration in the production media during cell culture: the higher the Cu(2+) ion concentration, the higher the level of proline amidation detected. This conclusion was supported by the analysis of samples that revealed direct correlation between the proline amidation level observed from peptide maps and the level of basic peaks measured by imaged capillary isoelectric focusing and a pH gradient ion-exchange chromatography method. The importance of these observations to therapeutic antibody production is discussed.

  10. The observation and coverage analysis of the moon-based ultraviolet telescope on CE-3 lander

    NASA Astrophysics Data System (ADS)

    wang, f.; wen, w.-b.; liu, d.-w.; geng, l.; zhang, x.-x.; zhao, s.

    2017-09-01

    Analysis of all the observed images from the MUVT found that, in the celestial coordinate system, the surveyed images are concentrated in a ring of 15 degrees width centered at latitude 65 degrees and longitude -90 degrees. The observational data analysis shows that the coverage of the northern area reaches 2263.8 square degrees, accounting for about 5.487% of the whole area, so the observation target of the task has been completed. For the first time, the MUVT has carried out long-duration astronomical observations, accumulating abundant observational data for basic research on the evolution of stars, compact stars, high-energy astrophysics, and so on.

  11. Comparative study of different approaches for multivariate image analysis in HPTLC fingerprinting of natural products such as plant resin.

    PubMed

    Ristivojević, Petar; Trifković, Jelena; Vovk, Irena; Milojković-Opsenica, Dušanka

    2017-01-01

    With the introduction of phytochemical fingerprint analysis as a method of screening complex natural products for the presence of the most bioactive compounds, the use of chemometric classification methods, the application of powerful scanning, image-capturing and processing devices and algorithms, and advances in the development of novel stationary phases and various separation modalities, high-performance thin-layer chromatography (HPTLC) fingerprinting is becoming an attractive and fruitful field of separation science. Multivariate image analysis is crucial for proper data acquisition. In the current study, different image processing procedures were studied and compared in detail on the example of HPTLC chromatograms of plant resins. Variables such as the gray intensities of pixels along the solvent front, peak areas, and mean peak values were used as input data and compared to obtain the best classification models. Important steps in image analysis, baseline removal, denoising, target peak alignment, and normalization, were pointed out. A numerical data set based on the mean values of selected bands and the intensities of pixels along the solvent front proved the most convenient for planar-chromatographic profiling, although it requires at least basic knowledge of image processing methodology, and can be proposed for further investigation in HPTLC fingerprinting. Copyright © 2016 Elsevier B.V. All rights reserved.
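
    As an illustration of the preprocessing steps listed above, a lane of a chromatogram image can be reduced to a 1D densitometric profile that is then denoised, baseline-corrected, and normalized. A generic sketch of those steps (synthetic profile and placeholder window sizes, not the authors' pipeline):

```python
import numpy as np
from scipy.ndimage import minimum_filter1d, uniform_filter1d

def preprocess(profile, smooth_win=5, baseline_win=51):
    """Denoise, subtract a rolling-minimum baseline, normalize to [0, 1]."""
    smoothed = uniform_filter1d(profile, smooth_win)     # denoising
    baseline = minimum_filter1d(smoothed, baseline_win)  # baseline removal
    corrected = smoothed - baseline
    return corrected / corrected.max()                   # normalization

# Synthetic lane profile (mean gray intensity along the development
# direction): two Gaussian "bands" on a sloping baseline plus noise.
pos = np.arange(500, dtype=float)
lane = (0.02 * pos
        + 40.0 * np.exp(-((pos - 150.0) / 12.0) ** 2)
        + 25.0 * np.exp(-((pos - 320.0) / 15.0) ** 2)
        + np.random.rand(500))
print("strongest band at position:", int(np.argmax(preprocess(lane))))
```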

  12. Integrated approach to multimodal media content analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-12-01

    In this work, we present a system for the automatic segmentation, indexing, and retrieval of audiovisual data based on the combination of audio, visual, and textual content analysis. The video stream is demultiplexed into audio, image, and caption components. A semantic segmentation of the audio signal based on audio content analysis is then conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed captions. Index tables are designed for both linear and non-linear access to the video. Experiments show that the proposed methods for multimodal media content analysis are effective and that the integrated framework achieves satisfactory results for video information filtering and retrieval.

  13. On the Usage of GPUs for Efficient Motion Estimation in Medical Image Sequences

    PubMed Central

    Thiyagalingam, Jeyarajan; Goodman, Daniel; Schnabel, Julia A.; Trefethen, Anne; Grau, Vicente

    2011-01-01

    Images are ubiquitous in biomedical applications from basic research to clinical practice. With the rapid increase in resolution and dimensionality of images and the need for real-time performance in many applications, computational requirements demand proper exploitation of multicore architectures. Towards this, GPU-specific implementations of image analysis algorithms are particularly promising. In this paper, we investigate the mapping of an enhanced motion estimation algorithm to novel GPU-specific architectures, and the resulting challenges and benefits therein. Using a database of three-dimensional image sequences, we show that the mapping leads to substantial performance gains, up to a factor of 60, and can provide near-real-time experience. We also show how architectural peculiarities of these devices can best be exploited to the benefit of the algorithms, most specifically for addressing the challenges related to their access patterns and different memory configurations. Finally, we evaluate the performance of the algorithm on three different GPU architectures and perform a comprehensive analysis of the results. PMID:21869880

  14. Comparing Core-Image-Based Basic Verb Learning in an EFL Junior High School: Learner-Centered and Teacher-Centered Approaches

    ERIC Educational Resources Information Center

    Yamagata, Satoshi

    2018-01-01

    The present study investigated the effects of two types of core-image-based basic verb learning approaches: the learner-centered and the teacher-centered approaches. The learner-centered approach was an activity in which participants found semantic relationships among several definitions of each basic target verb through a picture-elucidated card…

  15. "Anatomy and imaging": 10 years of experience with an interdisciplinary teaching project in preclinical medical education - from an elective to a curricular course.

    PubMed

    Schober, A; Pieper, C C; Schmidt, R; Wittkowski, W

    2014-05-01

    Presentation of an interdisciplinary, interactive, tutor-based preclinical teaching project called "Anatomy and Imaging". Experience report, analysis of evaluation results and selective literature review. From 2001 to 2012, 618 students took the basic course (4 periods per week throughout the semester) and 316 took the advanced course (2 periods per week). We reviewed 557 (return rate 90.1 %) and 292 (92.4 %) completed evaluation forms of the basic and the advanced course. Results showed overall high satisfaction with the courses (1.33 and 1.56, respectively, on a 5-point Likert scale). The recognizability of the relevance of the course content for medical training, the promotion of the interest in medicine and the quality of the student tutors were evaluated especially positively. The "Anatomy and Imaging" teaching project is a successful concept for integrating medical imaging into the preclinical stage of medical education. The course was offered as part of the curriculum in 2013 for the first time. "Anatomia in mortuis" and "Anatomia in vivo" are not regarded as rivaling entities in the delivery of knowledge, but as complementary methods. © Georg Thieme Verlag KG Stuttgart · New York.

  16. Low-cost data analysis systems for processing multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Whitely, S. L.

    1976-01-01

    The basic hardware and software requirements are described for four low cost analysis systems for computer generated land use maps. The data analysis systems consist of an image display system, a small digital computer, and an output recording device. Software is described together with some of the display and recording devices, and typical costs are cited. Computer requirements are given, and two approaches are described for converting black-white film and electrostatic printer output to inexpensive color output products. Examples of output products are shown.

  17. GIS Toolsets for Planetary Geomorphology and Landing-Site Analysis

    NASA Astrophysics Data System (ADS)

    Nass, Andrea; van Gasselt, Stephan

    2015-04-01

    Modern Geographic Information Systems (GIS) allow expert and lay users alike to load and position geographic data and perform simple to highly complex surface analyses. For many applications, dedicated ready-to-use GIS tools are available in standard software systems, while other applications require the modular combination of available basic tools to answer more specific questions. This also applies to analyses in modern planetary geomorphology, where many such basic tools can be combined to build complex analysis tools, e.g. in image and terrain-model analysis. Apart from the simple application of sets of different tools, many complex tasks require a more sophisticated design for storing and accessing data using databases (e.g. ArcHydro for hydrological data analysis). In planetary sciences, complex database-driven models are often required to efficiently analyse potential landing sites or store rover data; geologic mapping data can also be stored and accessed more efficiently using database models rather than stand-alone shapefiles. For landing-site analyses, relief and surface roughness estimates are two common concepts of particular interest, and for both, a number of different definitions co-exist. We here present an advanced toolset for the analysis of image and terrain-model data with an emphasis on the extraction of landing-site characteristics using established criteria. We provide working examples and particularly focus on the concept of terrain roughness as it is interpreted in geomorphology and engineering studies.
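
    As an example of one such modular building block, a common geomorphological roughness measure is the standard deviation of elevation inside a moving window, computable from two smoothed rasters. A minimal sketch under that particular definition (window size and terrain are placeholders; this is not the toolset's code):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def roughness(dem, window=5):
    """Moving-window standard deviation of elevation: sqrt(E[z^2] - E[z]^2)."""
    mean = uniform_filter(dem, window)
    mean_sq = uniform_filter(dem * dem, window)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

# Synthetic terrain: a gentle ramp with a rough patch in one corner.
dem = np.fromfunction(lambda r, c: 0.01 * r, (100, 100))
dem[60:, 60:] += np.random.rand(40, 40)

r = roughness(dem)
print(f"smooth area: {r[20, 20]:.3f}, rough area: {r[80, 80]:.3f}")
```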

  18. Deep Learning in Nuclear Medicine and Molecular Imaging: Current Perspectives and Future Directions.

    PubMed

    Choi, Hongyoon

    2018-04-01

    Recent advances in deep learning have impacted various scientific and industrial fields. Due to the rapid application of deep learning in biomedical data, molecular imaging has also started to adopt this technique. In this regard, it is expected that deep learning will potentially affect the roles of molecular imaging experts as well as clinical decision making. This review firstly offers a basic overview of deep learning particularly for image data analysis to give knowledge to nuclear medicine physicians and researchers. Because of the unique characteristics and distinctive aims of various types of molecular imaging, deep learning applications can be different from other fields. In this context, the review deals with current perspectives of deep learning in molecular imaging particularly in terms of development of biomarkers. Finally, future challenges of deep learning application for molecular imaging and future roles of experts in molecular imaging will be discussed.

  19. Computer-assisted image analysis of human cilia and Chlamydomonas flagella reveals both similarities and differences in axoneme structure.

    PubMed

    O'Toole, Eileen T; Giddings, Thomas H; Porter, Mary E; Ostrowski, Lawrence E

    2012-08-01

    In the past decade, investigations from several different fields have revealed the critical role of cilia in human health and disease. Because of the highly conserved nature of the basic axonemal structure, many different model systems have proven useful for the study of ciliopathies, especially the unicellular, biflagellate green alga Chlamydomonas reinhardtii. Although the basic axonemal structure of cilia and flagella is highly conserved, these organelles often perform specialized functions unique to the cell or tissue in which they are found. These differences in function are likely reflected in differences in structural organization. In this work, we directly compare the structure of isolated axonemes from human cilia and Chlamydomonas flagella to identify similarities and differences that potentially play key roles in determining their functionality. Using transmission electron microscopy and 2D image averaging techniques, our analysis has confirmed the overall structural similarity between these two species, but also revealed clear differences in the structure of the outer dynein arms, the central pair projections, and the radial spokes. We also show how the application of 2D image averaging can clarify the underlying structural defects associated with primary ciliary dyskinesia (PCD). Overall, our results document the remarkable similarity between these two structures separated evolutionarily by over a billion years, while highlighting several significant differences, and demonstrate the potential of 2D image averaging to improve the diagnosis and understanding of PCD. Copyright © 2012 Wiley Periodicals, Inc.

  20. A standardization model based on image recognition for performance evaluation of an oral scanner.

    PubMed

    Seo, Sang-Wan; Lee, Wan-Sun; Byun, Jae-Young; Lee, Kyu-Bok

    2017-12-01

    Accurate information is essential in dentistry, and image information about missing teeth is used by optically based medical equipment in prosthodontic treatment. To evaluate oral scanners, a standardized model was derived from cases of image recognition errors in linear discriminant analysis (LDA), and a model combining the variables was designed with reference to ISO 12836:2015. The basic model was fabricated by applying four factors to the tooth profile (chamfer, groove, curve, and square) and the bottom surface. Photo-type and video-type scanners were used to analyze 3D images after image capture. Scans were performed several times in the prescribed sequence to distinguish model designs that formed a 3D image from those that did not, and the results identified the best design. With the initial basic model, a 3D shape could not be obtained by scanning even when several shots were taken. Subsequently, the image recognition rate improved with each variable factor, with differences depending on the tooth profile and the pattern of the bottom surface. Based on the recognition error of the LDA, the recognition rate decreases when the model has a similar pattern. Therefore, to obtain accurate 3D data, the difference between each class needs to be provided when developing a standardized model.

  1. SIP: A Web-Based Astronomical Image Processing Program

    NASA Astrophysics Data System (ADS)

    Simonetti, J. H.

    1999-12-01

    I have written an astronomical image processing and analysis program designed to run over the internet in a Java-compatible web browser. The program, Sky Image Processor (SIP), is accessible at the SIP webpage (http://www.phys.vt.edu/SIP). Since nothing is installed on the user's machine, there is no need to download upgrades; the latest version of the program is always instantly available. Furthermore, the Java programming language is designed to work on any computer platform (any machine and operating system). The program could be used with students in web-based instruction or in a computer laboratory setting; it may also be of use in some research or outreach applications. While SIP is similar to other image processing programs, it is unique in some important respects. For example, SIP can load images from the user's machine or from the Web: an instructor can put images on a web server for students to load and analyze on their own personal computers, or can point the students to images on any other web server. Furthermore, since SIP was written with students in mind, the philosophy is to present the user with the most basic tools necessary to process and analyze astronomical images. Images can be combined (by addition, subtraction, multiplication, or division), multiplied by a constant, smoothed, cropped, flipped, rotated, and so on. Statistics can be gathered for pixels within a box drawn by the user, and basic tools are available for gathering data from an image for simple differential photometry or astrometry. Students can therefore learn how astronomical image processing works. Since SIP is not part of a commercial CCD camera package, the program is written to handle the lowest-common-denominator image file, the FITS format.
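
    The basic operations listed, image arithmetic and statistics inside a user-drawn box, map directly onto array operations. A small NumPy sketch of differential photometry in that spirit (all frames, positions, and numbers are made up; this is not SIP's code):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.poisson(100.0, size=(64, 64)).astype(float)  # synthetic sky frame
dark = np.full((64, 64), 5.0)                            # synthetic dark frame
calibrated = image - dark                                # image subtraction

# Inject two synthetic "stars" of known relative brightness.
calibrated[20, 20] += 800.0
calibrated[40, 40] += 1600.0

def box_sum(img, row, col, half=3):
    """Background-subtracted counts in a small box around (row, col)."""
    box = img[row - half:row + half + 1, col - half:col + half + 1]
    sky = np.median(img)              # crude background estimate
    return box.sum() - sky * box.size

# Differential photometry: flux ratio of target to comparison star.
ratio = box_sum(calibrated, 20, 20) / box_sum(calibrated, 40, 40)
print(f"target/comparison flux ratio: {ratio:.2f}")      # ~0.5
```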

  2. Remote sensing programs and courses in engineering and water resources

    NASA Technical Reports Server (NTRS)

    Kiefer, R. W.

    1981-01-01

    The content of typical basic and advanced remote sensing and image interpretation courses are described and typical remote sensing graduate programs of study in civil engineering and in interdisciplinary environmental remote sensing and water resources management programs are outlined. Ideally, graduate programs with an emphasis on remote sensing and image interpretation should be built around a core of five courses: (1) a basic course in fundamentals of remote sensing upon which the more specialized advanced remote sensing courses can build; (2) a course dealing with visual image interpretation; (3) a course dealing with quantitative (computer-based) image interpretation; (4) a basic photogrammetry course; and (5) a basic surveying course. These five courses comprise up to one-half of the course work required for the M.S. degree. The nature of other course work and thesis requirements vary greatly, depending on the department in which the degree is being awarded.

  3. Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis

    PubMed Central

    Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina

    2015-01-01

    AIM: To investigate and quantify changes in the branching patterns of the retina vascular network in diabetes using the fractal analysis method. METHODS: This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina, was examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. RESULTS: It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using fractal geometry. The average fractal dimension D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for mild non-proliferative DR (NPDR) images, and higher than the corresponding values for moderate NPDR images; the lowest values were found for severe NPDR images. CONCLUSION: The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantitatively characterized by means of the fractal dimension. Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from healthy individuals. PMID:26309878

  4. Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis.

    PubMed

    Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina

    2015-01-01

    To investigate and quantify changes in the branching patterns of the retina vascular network in diabetes using the fractal analysis method. This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina, was examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using fractal geometry. The average fractal dimension D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for mild non-proliferative DR (NPDR) images, and higher than the corresponding values for moderate NPDR images; the lowest values were found for severe NPDR images. The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantitatively characterized by means of the fractal dimension. Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from healthy individuals.

  5. Application of basic principles of physics to head and neck MR angiography: troubleshooting for artifacts.

    PubMed

    Pandey, Shilpa; Hakky, Michael; Kwak, Ellie; Jara, Hernan; Geyer, Carl A; Erbay, Sami H

    2013-05-01

    Neurovascular imaging studies are routinely used for the assessment of headaches and changes in mental status, stroke workup, and evaluation of the arteriovenous structures of the head and neck. These imaging studies are being performed with greater frequency as the aging population continues to increase. Magnetic resonance (MR) angiographic imaging techniques are helpful in this setting. However, mastering these techniques requires an in-depth understanding of the basic principles of physics, complex flow patterns, and the correlation of MR angiographic findings with conventional MR imaging findings. More than one imaging technique may be used to solve difficult cases, with each technique contributing unique information. Unfortunately, incorporating findings obtained with multiple imaging modalities may add to the diagnostic challenge. To ensure diagnostic accuracy, it is essential that the radiologist carefully evaluate the details provided by these modalities in light of basic physics principles, the fundamentals of various imaging techniques, and common neurovascular imaging pitfalls. ©RSNA, 2013.

  6. Feedback circuit design of an auto-gating power supply for low-light-level image intensifier

    NASA Astrophysics Data System (ADS)

    Yang, Ye; Yan, Bo; Zhi, Qiang; Ni, Xiao-bing; Li, Jun-guo; Wang, Yu; Yao, Ze

    2015-11-01

    This paper introduces the basic principle of an auto-gating power supply that uses a hybrid automatic brightness control scheme. By analyzing the special requirements the image intensifier places on the auto-gating power supply, its feedback circuit is examined, and the cause of screen flicker after the power supply is assembled with the image intensifier is identified. A feedback circuit is designed that shortens the response time of the auto-gating power supply and reduces the slight screen flicker that the human eye can perceive under high illumination.

  7. Optical threshold secret sharing scheme based on basic vector operations and coherence superposition

    NASA Astrophysics Data System (ADS)

    Deng, Xiaopeng; Wen, Wei; Mi, Xianwu; Long, Xuewen

    2015-04-01

    We propose, to our knowledge for the first time, a simple optical algorithm for secret image sharing with the (2,n) threshold scheme based on basic vector operations and coherence superposition. The secret image to be shared is first divided into n shadow images by use of basic vector operations. In the reconstruction stage, the secret image can be retrieved by recording the intensity of the coherence superposition of any two shadow images. Compared with published encryption techniques, which focus narrowly on information encryption, the proposed method can realize information encryption as well as secret sharing, which further ensures the safety and integrity of the secret information and prevents power from being kept centralized and abused. The feasibility and effectiveness of the proposed method are demonstrated by numerical results.

  8. Recent advances in parametric neuroreceptor mapping with dynamic PET: basic concepts and graphical analyses.

    PubMed

    Seo, Seongho; Kim, Su Jin; Lee, Dong Soo; Lee, Jae Sung

    2014-10-01

    Tracer kinetic modeling in dynamic positron emission tomography (PET) has been widely used to investigate the characteristic distribution patterns or dysfunctions of neuroreceptors in brain diseases. Its practical goal has progressed from regional data quantification to parametric mapping, which produces images of kinetic-model parameters by fully exploiting the spatiotemporal information in dynamic PET data. Graphical analysis (GA) is a major parametric mapping technique that is independent of any compartmental model configuration, robust to noise, and computationally efficient. In this paper, we provide an overview of recent advances in the parametric mapping of neuroreceptor binding based on GA methods. The associated basic concepts in tracer kinetic modeling are presented, including commonly used compartment models and the major parameters of interest. Technical details of GA approaches for reversible and irreversible radioligands are described, considering both plasma input and reference tissue input models. Their statistical properties are discussed in view of parametric imaging.

  9. Chemical and isotopic database of water and gas from hydrothermal systems with an emphasis for the western United States

    USGS Publications Warehouse

    Mariner, R.H.; Venezky, D.Y.; Hurwitz, S.

    2006-01-01

    Chemical and isotope data accumulated by two USGS projects (led by I. Barnes and R. Mariner) over a period of about 40 years can now be found using a basic web search or through an image search. The data are primarily chemical and isotopic analyses of waters (thermal, mineral, or fresh) and associated gases (free and/or dissolved) collected from hot springs, mineral springs, cold springs, geothermal wells, fumaroles, and gas seeps. Additional information is available about the collection methods and analysis procedures. The chemical and isotope data are stored in a MySQL database and accessed using PHP from a basic search form. Data can also be accessed using an open-source GIS called WorldKit; additional information is available about WorldKit, including the files used to set up the site.

  10. Digital Pathology: Data-Intensive Frontier in Medical Imaging

    PubMed Central

    Cooper, Lee A. D.; Carter, Alexis B.; Farris, Alton B.; Wang, Fusheng; Kong, Jun; Gutman, David A.; Widener, Patrick; Pan, Tony C.; Cholleti, Sharath R.; Sharma, Ashish; Kurc, Tahsin M.; Brat, Daniel J.; Saltz, Joel H.

    2013-01-01

    Pathology is a medical subspecialty that practices the diagnosis of disease. Microscopic examination of tissue reveals information enabling the pathologist to render accurate diagnoses and to guide therapy. The basic process by which anatomic pathologists render diagnoses has remained relatively unchanged over the last century, yet advances in information technology now offer significant opportunities in image-based diagnostic and research applications. Pathology has lagged behind other healthcare practices such as radiology where digital adoption is widespread. As devices that generate whole slide images become more practical and affordable, practices will increasingly adopt this technology and eventually produce an explosion of data that will quickly eclipse the already vast quantities of radiology imaging data. These advances are accompanied by significant challenges for data management and storage, but they also introduce new opportunities to improve patient care by streamlining and standardizing diagnostic approaches and uncovering disease mechanisms. Computer-based image analysis is already available in commercial diagnostic systems, but further advances in image analysis algorithms are warranted in order to fully realize the benefits of digital pathology in medical discovery and patient care. In coming decades, pathology image analysis will extend beyond the streamlining of diagnostic workflows and minimizing interobserver variability and will begin to provide diagnostic assistance, identify therapeutic targets, and predict patient outcomes and therapeutic responses. PMID:25328166

  11. Cascaded image analysis for dynamic crack detection in material testing

    NASA Astrophysics Data System (ADS)

    Hampel, U.; Maas, H.-G.

    Concrete specimens in civil engineering material testing often show fissures or hairline cracks. These cracks develop dynamically: starting at a width of a few microns, they usually cannot be detected visually or in an image from a camera imaging the whole specimen, and conventional image analysis techniques will detect fissures only once they reach a width on the order of one pixel. To detect and measure fissures with a width of a fraction of a pixel at an early stage of their development, a cascaded image analysis approach has been developed, implemented, and tested. The basic idea of the approach is to detect discontinuities in dense surface deformation vector fields. These deformation vector fields between consecutive stereo image pairs, generated by cross-correlation or least-squares matching, show a precision on the order of 1/50 pixel. Hairline cracks can be detected and measured by applying edge detection techniques such as a Sobel operator to the results of the image matching process: cracks show up as linear discontinuities in the deformation vector field and can be vectorized by edge chaining. In practical tests of the method, cracks with a width of 1/20 pixel could be detected, and their width could be determined with a precision of 1/50 pixel.
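
    A schematic version of the cascade, block-wise displacement estimation followed by edge detection on the displacement field, can be sketched with plain NumPy phase correlation and a Sobel filter; the least-squares matching and subpixel refinement of the original method are simplified away, and the data are synthetic:

```python
import numpy as np
from scipy.ndimage import sobel

def block_shift(a, b):
    """Relative integer shift between two equal-size patches (phase correlation)."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy = dy - a.shape[0] if dy > a.shape[0] // 2 else dy  # unwrap to signed
    dx = dx - a.shape[1] if dx > a.shape[1] // 2 else dx
    return dy, dx

rng = np.random.default_rng(1)
ref = rng.random((128, 128))            # synthetic speckle image, epoch 1
mov = ref.copy()                        # epoch 2: right half slips 1 px down,
mov[:, 64:] = np.roll(mov[:, 64:], 1, axis=0)  # a "crack" at the centerline

# Block-wise vertical displacement field from 32 x 32 windows.
win = 32
vy = np.array([[block_shift(ref[i:i + win, j:j + win],
                            mov[i:i + win, j:j + win])[0]
                for j in range(0, 128, win)]
               for i in range(0, 128, win)], dtype=float)

# The crack appears as a discontinuity of the displacement field; a
# Sobel response across the columns localizes it between block columns.
print(np.abs(sobel(vy, axis=1)))
```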

  12. Body Basics

    MedlinePlus

    ... learn more about how the body works, what basic human anatomy is, and what happens when parts of the body ...

  13. Quantitative analysis of image quality for acceptance and commissioning of an MRI simulator with a semiautomatic method.

    PubMed

    Chen, Xinyuan; Dai, Jianrong

    2018-05-01

    Magnetic Resonance Imaging (MRI) simulation differs from diagnostic MRI in purpose, technical requirements, and implementation. We propose a semiautomatic method of image acceptance and commissioning for the scanner, the radiofrequency (RF) coils, and the pulse sequences of an MRI simulator. The ACR MRI accreditation large phantom was used for image quality analysis with seven parameters. Standard ACR sequences with a split head coil were adopted to examine the scanner's basic performance. The performance of the simulation RF coils was measured and compared, using the standard sequence, against different clinical diagnostic coils. We used simulation sequences with simulation coils to test image quality and the advanced performance of the scanner. Codes and procedures were developed for semiautomatic image quality analysis. When using standard ACR sequences with a split head coil, image quality passed all ACR-recommended criteria. The image intensity uniformity with a simulation RF coil decreased by about 34% compared with the eight-channel diagnostic head coil, while the other six image quality parameters were acceptable; the affected parameters could be improved to more than 85% by built-in intensity calibration methods. In the simulation sequence tests, the contrast resolution was sensitive to the FOV and matrix settings. The geometric distortion of simulation sequences such as T1-weighted and T2-weighted images was well controlled, within a range of ±1% (2 mm), at the isocenter and 10 cm off-center. We developed a semiautomatic image quality analysis method for quantitative evaluation of images and commissioning of an MRI simulator. Baseline performances of the simulation RF coils and pulse sequences have been established for routine QA. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
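
    Of the seven parameters, image intensity uniformity is convenient to show concretely: the commonly used percent integral uniformity (PIU) compares the maximum and minimum locally averaged signal inside the phantom's uniform region. A minimal sketch of that single measurement on a synthetic slice (not the authors' code):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def percent_integral_uniformity(img, roi_mask, kernel=5):
    """PIU = 100 * (1 - (Smax - Smin) / (Smax + Smin)) over the ROI,
    with Smax/Smin taken from the locally averaged signal."""
    local_mean = uniform_filter(img.astype(float), kernel)
    vals = local_mean[roi_mask]
    s_max, s_min = vals.max(), vals.min()
    return 100.0 * (1.0 - (s_max - s_min) / (s_max + s_min))

# Synthetic uniform-phantom slice with mild shading and noise.
yy, xx = np.mgrid[0:256, 0:256]
img = 1000.0 + 0.2 * xx + np.random.normal(0.0, 5.0, (256, 256))
roi = (yy - 128) ** 2 + (xx - 128) ** 2 < 100 ** 2  # circular uniform region
print(f"PIU = {percent_integral_uniformity(img, roi):.1f}%")
```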

  14. Web-based platform for collaborative medical imaging research

    NASA Astrophysics Data System (ADS)

    Rittner, Leticia; Bento, Mariana P.; Costa, André L.; Souza, Roberto M.; Machado, Rubens C.; Lotufo, Roberto A.

    2015-03-01

    Medical imaging research depends basically on the availability of large image collections, image processing and analysis algorithms, hardware and a multidisciplinary research team. It has to be reproducible, free of errors, fast, accessible through a large variety of devices spread around research centers and conducted simultaneously by a multidisciplinary team. Therefore, we propose a collaborative research environment, named Adessowiki, where tools and datasets are integrated and readily available in the Internet through a web browser. Moreover, processing history and all intermediate results are stored and displayed in automatic generated web pages for each object in the research project or clinical study. It requires no installation or configuration from the client side and offers centralized tools and specialized hardware resources, since processing takes place in the cloud.

  15. Gaia: focus, straylight and basic angle

    NASA Astrophysics Data System (ADS)

    Mora, A.; Biermann, M.; Bombrun, A.; Boyadjian, J.; Chassat, F.; Corberand, P.; Davidson, M.; Doyle, D.; Escolar, D.; Gielesen, W. L. M.; Guilpain, T.; Hernandez, J.; Kirschner, V.; Klioner, S. A.; Koeck, C.; Laine, B.; Lindegren, L.; Serpell, E.; Tatry, P.; Thoral, P.

    2016-07-01

    The Gaia all-sky astrometric survey is challenged by several issues affecting the spacecraft stability, among them focus evolution, straylight, and basic angle variations. Contrary to pre-launch expectations, the image quality has been continuously evolving, during both commissioning and the nominal mission. Payload decontaminations and wavefront-sensor-assisted refocuses have been carried out to recover optimum performance. An ESA-Airbus DS working group analysed the straylight and basic angle issues and worked on a detailed root cause analysis. In parallel, the Gaia scientists have also analysed the data, most notably comparing the BAM signal to global astrometric solutions, with remarkable agreement. In this contribution, a status review of these issues is provided, with emphasis on the mitigation schemes and the lessons learned for future space missions where extreme stability is a key requirement.

  16. Mosaic of coded aperture arrays

    DOEpatents

    Fenimore, Edward E.; Cannon, Thomas M.

    1980-01-01

    The present invention pertains to a mosaic of coded aperture arrays which is capable of imaging off-axis sources with minimum detector size. Mosaics of the basic array pattern create a circular (periodic) correlation of the object on a section of the picture plane. This section consists of elements of the central basic pattern as well as elements from neighboring patterns and is a cyclic version of the basic pattern. Since all object points contribute a complete cyclic version of the basic pattern, a section of the picture, which is the size of the basic aperture pattern, contains all the information necessary to image the object with no artifacts.

  17. An overview of computer vision

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1982-01-01

    An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.

  18. Near-earth orbital guidance and remote sensing

    NASA Technical Reports Server (NTRS)

    Powers, W. F.

    1972-01-01

    The curriculum of a short course in remote sensing and parameter optimization is presented. The subjects discussed are: (1) basics of remote sensing and the user community, (2) multivariate spectral analysis, (3) advanced mathematics and physics of remote sensing, (4) the atmospheric environment, (5) imaging sensing, and (6) nonimaging sensing. Mathematical models of optimization techniques are developed.

  19. A microcomputer program for analysis of nucleic acid hybridization data

    PubMed Central

    Green, S.; Field, J.K.; Green, C.D.; Beynon, R.J.

    1982-01-01

    The study of nucleic acid hybridization is facilitated by computer-mediated fitting of theoretical models to experimental data. This paper describes a non-linear curve fitting program, using the 'Patternsearch' algorithm, written in BASIC for the Apple II microcomputer. The advantages and disadvantages of using a microcomputer for local data processing are discussed. PMID:7071017
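
    The 'Patternsearch' algorithm named here is a derivative-free minimizer: probe each parameter up and down by a step, keep any improving move, and halve the step when nothing improves. A compact modern sketch of a curve fit in that spirit (synthetic data and model; not a transcription of the original BASIC program):

```python
import numpy as np

def pattern_search(loss, x0, step=0.5, tol=1e-6):
    """Derivative-free coordinate pattern search (Hooke-Jeeves style)."""
    x, best = np.asarray(x0, dtype=float), loss(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                f = loss(trial)
                if f < best:
                    x, best, improved = trial, f, True
        if not improved:
            step *= 0.5          # shrink the pattern when no probe helps
    return x

# Fit y = a * (1 - exp(-b * t)) to synthetic saturation-type data.
t = np.linspace(0.1, 10.0, 40)
y = 2.0 * (1.0 - np.exp(-0.7 * t)) + np.random.normal(0.0, 0.02, t.size)
sse = lambda p: np.sum((y - p[0] * (1.0 - np.exp(-p[1] * t))) ** 2)
print("fitted (a, b):", pattern_search(sse, [1.0, 1.0]).round(2))
```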

  20. Image analysis of anatomical traits in stalk transections of maize and other grasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heckwolf, Sven; Heckwolf, Marlies; Kaeppler, Shawn M.

    Grass stalks architecturally support leaves and reproductive structures, functionally support the transport of water and nutrients, and are harvested for multiple agricultural uses. Research on these basic and applied aspects of grass stalks would benefit from improved capabilities for measuring internal anatomical features. In particular, methods suitable for phenotyping populations of plants are needed.

  1. Image analysis of anatomical traits in stalk transections of maize and other grasses

    DOE PAGES

    Heckwolf, Sven; Heckwolf, Marlies; Kaeppler, Shawn M.; ...

    2015-04-09

    Grass stalks architecturally support leaves and reproductive structures, functionally support the transport of water and nutrients, and are harvested for multiple agricultural uses. Research on these basic and applied aspects of grass stalks would benefit from improved capabilities for measuring internal anatomical features. In particular, methods suitable for phenotyping populations of plants are needed.

  2. Optical design of multi-multiple expander structure of laser gas analysis and measurement device

    NASA Astrophysics Data System (ADS)

    Fu, Xiang; Wei, Biao

    2018-03-01

    There are difficult key technical problems in the installation and debugging of the optical circuit structure of a distributed laser gas analysis and measurement device for carbon monoxide. Based on three-component beam expansion theory, a multi-magnification beam expander structure with expansion ratios of 4, 5, 6, and 7 is adopted in the absorption chamber to enhance the adaptability of the device to its installation environment. According to the basic theory of aberration, the design of the multi-magnification beam expander structure is optimized. Using image quality evaluation methods, the differences in image quality under different magnifications are analyzed. The results show that the optical quality of the system with the expanded beam structure is best when the expansion ratio is 5-7.

  3. Quantification of micro-CT images of textile reinforcements

    NASA Astrophysics Data System (ADS)

    Straumit, Ilya; Lomov, Stepan V.; Wevers, Martine

    2017-10-01

    VoxTex software (KU Leuven) employs 3D image processing that uses local directionality information retrieved by analysis of the local structure tensor. The processing results in a 3D voxel array, with each voxel carrying information on (1) the material type (matrix; yarn/ply, with identification of the yarn/ply in the reinforcement architecture; void) and (2) the fibre direction for fibrous yarns/plies. Knowledge of the material phase volume and the known characterisation of the textile structure allows (3) a fibre volume fraction to be assigned to each voxel. This basic voxel model can then be used for different types of material analysis: internal geometry and characterisation of defects; permeability; micromechanics; and meso-FE voxel models. Apart from voxel-based analysis, approaches to reconstructing the yarn paths are presented.
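
    The local structure tensor mentioned here is the Gaussian-smoothed outer product of the image gradient; in 2D its dominant orientation has a closed form, and the fibre direction lies perpendicular to it. A minimal 2D sketch (the 3D voxel case is analogous; this is not the VoxTex code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def fibre_orientation(img, sigma=3.0):
    """Per-pixel fibre orientation (radians) from the 2D structure tensor."""
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    # The dominant eigenvector of the tensor points along the gradient,
    # i.e. across the fibres; rotate by 90 deg for the fibre direction.
    grad_dir = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    return grad_dir + np.pi / 2.0

# Synthetic "yarn": stripes whose normal is at 30 deg, so the stripes
# themselves (the fibres) run at 120 deg.
yy, xx = np.mgrid[0:128, 0:128]
theta = np.deg2rad(30.0)
img = np.sin((xx * np.cos(theta) + yy * np.sin(theta)) / 3.0)
print(f"estimated orientation: "
      f"{np.rad2deg(fibre_orientation(img)[64, 64]):.1f} deg")
```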

  4. Contextual analysis of immunological response through whole-organ fluorescent imaging.

    PubMed

    Woodruff, Matthew C; Herndon, Caroline N; Heesters, B A; Carroll, Michael C

    2013-09-01

    As fluorescent microscopy has developed, significant insights have been gained into the establishment of immune response within secondary lymphoid organs, particularly in draining lymph nodes. While established techniques such as confocal imaging and intravital multi-photon microscopy have proven invaluable, they provide limited insight into the architectural and structural context in which these responses occur. To interrogate the role of the lymph node environment in immune response effectively, a new set of imaging tools that take the broader architectural context into account must be brought to bear on emerging immunological questions. Using two different methods of whole-organ imaging, optical clearing and three-dimensional reconstruction of serially sectioned lymph nodes, fluorescent representations of whole lymph nodes can be acquired at cellular resolution. Using freely available post-processing tools, images of unlimited size and depth can be assembled into cohesive, contextual snapshots of immunological response. Through the implementation of robust iterative analysis techniques, these highly complex three-dimensional images can be reduced to sortable object data sets, which can then be used to interrogate complex questions at the cellular level within the broader context of lymph node biology. By combining existing imaging technology with complex methods of sample preparation and capture, we have developed efficient systems for contextualizing immunological phenomena within lymphatic architecture. In combination with robust approaches to image analysis, these advances provide a path to integrating scientific understanding of basic lymphatic biology into the complex nature of immunological response.

  5. Using Microsoft PowerPoint as an Astronomical Image Analysis Tool

    NASA Astrophysics Data System (ADS)

    Beck-Winchatz, Bernhard

    2006-12-01

    Engaging students in the analysis of authentic scientific data is an effective way to teach them about the scientific process and to develop their problem solving, teamwork and communication skills. In astronomy several image processing and analysis software tools have been developed for use in school environments. However, the practical implementation in the classroom is often difficult because the teachers may not have the comfort level with computers necessary to install and use these tools, they may not have adequate computer privileges and/or support, and they may not have the time to learn how to use specialized astronomy software. To address this problem, we have developed a set of activities in which students analyze astronomical images using basic tools provided in PowerPoint. These include measuring sizes, distances, and angles, and blinking images. In contrast to specialized software, PowerPoint is broadly available on school computers. Many teachers are already familiar with PowerPoint, and the skills developed while learning how to analyze astronomical images are highly transferable. We will discuss several practical examples of measurements, including the following:
    - Variations in the distances to the sun and moon from their angular sizes
    - Magnetic declination from images of shadows
    - Diameter of the moon from lunar eclipse images
    - Sizes of lunar craters
    - Orbital radii of the Jovian moons and mass of Jupiter
    - Supernova and comet searches
    - Expansion rate of the universe from images of distant galaxies
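
    Several of these exercises rest on the small-angle relation between physical size D, distance d, and angular size theta (in radians): theta ≈ D/d. A worked sketch of two of the listed measurements with rounded textbook values (the numbers are illustrative, not from the paper):

```python
import math

# Moon distance from its angular size: theta (rad) ~ D / d.
MOON_DIAMETER_KM = 3474.0
moon_angular_deg = 0.52                    # typical apparent diameter
d_moon = MOON_DIAMETER_KM / math.radians(moon_angular_deg)
print(f"Moon distance from angular size: {d_moon:,.0f} km")  # ~384,000 km

# Sun-distance variation: the apparent solar diameter shrinks ~3.4%
# from perihelion to aphelion, so the inferred distance grows likewise.
theta_perihelion_deg, theta_aphelion_deg = 0.542, 0.524
print(f"distance ratio aphelion/perihelion: "
      f"{theta_perihelion_deg / theta_aphelion_deg:.3f}")
```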

  6. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    NASA Astrophysics Data System (ADS)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted their development and led to several commercially available ready-to-use cameras. Beyond the popular options of a posteriori image focusing and total-focus image generation, their basic ability to generate 3D information from single-camera imagery makes them a very beneficial option for certain applications. The paper first presents some fundamentals on the design and history of plenoptic cameras and describes depth determination from plenoptic camera image data. It then presents an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors on the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much larger than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for real-time robotics applications such as autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.
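
    Although the paper's own error model is not reproduced here, the roughly quadratic growth of depth error with range follows from standard disparity-based error propagation; a sketch with hypothetical camera parameters (baseline, focal length and disparity noise are assumptions, not values from the paper):

    ```python
    def depth_sigma(z_m, baseline_m, focal_px, disparity_sigma_px):
        # z = f * b / d, so first-order error propagation gives
        # sigma_z ~= z**2 * sigma_d / (f * b)
        return z_m ** 2 * disparity_sigma_px / (focal_px * baseline_m)

    # Assumed values: 5 cm effective baseline, 8000 px focal length,
    # 0.1 px disparity noise; error grows from ~0.2 m at 30 m to ~2.5 m at 100 m.
    for z in (30.0, 100.0):
        print(f"z = {z:5.1f} m  sigma_z = {depth_sigma(z, 0.05, 8000, 0.1):.2f} m")
    ```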

  7. An error analysis of tropical cyclone divergence and vorticity fields derived from satellite cloud winds on the Atmospheric and Oceanographic Information Processing System (AOIPS)

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Rodgers, E. B.

    1977-01-01

    An advanced Man-Interactive image and data processing system (AOIPS) was developed to extract basic meteorological parameters from satellite data and to perform further analyses. The errors in the satellite derived cloud wind fields for tropical cyclones are investigated. The propagation of these errors through the AOIPS system and their effects on the analysis of horizontal divergence and relative vorticity are evaluated.

  8. Roles of universal three-dimensional image analysis devices that assist surgical operations.

    PubMed

    Sakamoto, Tsuyoshi

    2014-04-01

    The circumstances surrounding medical image analysis have evolved rapidly, to the point where the "imaging" obtained through medical modalities and the "analysis" we apply to it have become amalgamated; for the imaging analysis of any organ system, the distance between "imaging" and "analysis" has grown ever closer, as if the two had become integrated. The history of medical image analysis started with the appearance of the computer. The invention of multi-planar reconstruction (MPR), used with helical scanning, had a significant impact and became the basis for recent image analysis. Subsequently, curved MPR (CPR) and other methods were developed, and 3D diagnostic imaging and image analysis of the human body began in earnest. Volume rendering: the development of a new rendering algorithm and significant improvements in memory and CPUs contributed to the development of "volume rendering," which allows 3D views with retained internal information. This development created new value: computed tomography (CT) images that had previously served only "diagnosis" became "applicable to treatment." Before the development of volume rendering, a clinician had to mentally reconstruct an image reconfigured for diagnosis into a 3D image; these developments allowed the 3D image to be depicted directly on a monitor. Current technology: currently, in Japan, estimation of the liver volume and of the perfusion areas of the portal and hepatic veins is being vigorously adopted during preoperative planning for hepatectomy. This circumstance has been brought about by substantial improvement of the basic techniques mentioned above and by upgraded user interfaces that allow doctors to perform the manipulations easily themselves; the article describes the specific techniques. Future of post-processing technology: in terms of the role of image analysis, it is expected, for better or worse, that computer-aided diagnosis (CAD) will develop to a highly advanced level in every diagnostic field. In the treatment field, techniques coordinating various devices will be strongly required for surgical navigation. Indeed, surgery using an image navigator is being widely studied, and coordination with hardware, including robots, will also be developed. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.

  9. An introduction to Na(18)F bone scintigraphy: basic principles, advanced imaging concepts, and case examples.

    PubMed

    Bridges, Robert L; Wiley, Chris R; Christian, John C; Strohm, Adam P

    2007-06-01

    Na(18)F, an early bone scintigraphy agent, is poised to reenter mainstream clinical imaging with the present generations of stand-alone PET and PET/CT hybrid scanners. (18)F PET scans promise improved imaging quality for both benign and malignant bone disease, with significantly improved sensitivity and specificity over conventional planar and SPECT bone scans. In this article, basic acquisition information will be presented along with examples of studies related to oncology, sports medicine, and general orthopedics. The use of image fusion of PET bone scans with CT and MRI will be demonstrated. The objectives of this article are to provide the reader with an understanding of the history of early bone scintigraphy in relation to Na(18)F scanning, a familiarity with basic imaging techniques for PET bone scanning, an appreciation of the extent of disease processes that can be imaged with PET bone scanning, an appreciation for the added value of multimodality image fusion with bone disease, and a recognition of the potential role PET bone scanning may play in clinical imaging.

  10. Image-based compound profiling reveals a dual inhibitor of tyrosine kinase and microtubule polymerization.

    PubMed

    Tanabe, Kenji

    2016-04-27

    Small-molecule compounds are widely used as biological research tools and therapeutic drugs. Therefore, uncovering novel targets of these compounds should provide insights that are valuable in both basic and clinical studies. I developed a method for image-based compound profiling by quantitating the effects of compounds on signal transduction and vesicle trafficking of epidermal growth factor receptor (EGFR). Using six signal transduction molecules and two markers of vesicle trafficking, 570 image features were obtained and subjected to multivariate analysis. Fourteen compounds that affected EGFR or its pathways were classified into four clusters, based on their phenotypic features. Surprisingly, one EGFR inhibitor (CAS 879127-07-8) was classified into the same cluster as nocodazole, a microtubule depolymerizer. In fact, this compound directly depolymerized microtubules. These results indicate that CAS 879127-07-8 could be used as a chemical probe to investigate both the EGFR pathway and microtubule dynamics. The image-based multivariate analysis developed herein has potential as a powerful tool for discovering unexpected drug properties.

  11. Automated Ontology Generation Using Spatial Reasoning

    NASA Astrophysics Data System (ADS)

    Coalter, Alton; Leopold, Jennifer L.

    Recently there has been much interest in using ontologies to facilitate knowledge representation, integration, and reasoning. Correspondingly, the extent of the information embodied by an ontology is increasing beyond the conventional is_a and part_of relationships. To address these requirements, a vast amount of digitally available information may need to be considered when building ontologies, prompting a desire for software tools to automate at least part of the process. The main efforts in this direction have involved textual information retrieval and extraction methods. For some domains, extension of the basic relationships could be enhanced further by the analysis of 2D and/or 3D images; for this type of media, image processing algorithms are more appropriate than textual analysis methods. Herein we present an algorithm that, given a collection of 3D image files, utilizes Qualitative Spatial Reasoning (QSR) to automate the creation of an ontology for the objects represented by the images, relating the objects in terms of is_a and part_of relationships and also through unambiguous Region Connection Calculus (RCC) relations.

  12. Imaging Intratumor Heterogeneity: Role in Therapy Response, Resistance, and Clinical Outcome

    PubMed Central

    O’Connor, James P.B.; Rose, Chris J.; Waterton, John C.; Carano, Richard A.D.; Parker, Geoff J.M.; Jackson, Alan

    2014-01-01

    Tumors exhibit genomic and phenotypic heterogeneity, which has prognostic significance and may influence response to therapy. Imaging can quantify the spatial variation in architecture and function of individual tumors through quantifying basic biophysical parameters such as density or MRI signal relaxation rate; through measurements of blood flow, hypoxia, metabolism, cell death and other phenotypic features; and through mapping the spatial distribution of biochemical pathways and cell signaling networks. These methods can establish whether one tumor is more or less heterogeneous than another and can identify sub-regions with differing biology. In this article we review the image analysis methods currently used to quantify spatial heterogeneity within tumors. We discuss how analysis of intratumor heterogeneity can provide benefit over simpler biomarkers such as tumor size and average function. We consider how imaging methods can be integrated with genomic and pathology data, rather than be developed in isolation. Finally, we identify the challenges that must be overcome before measurements of intratumoral heterogeneity can be used routinely to guide patient care. PMID:25421725

  13. Quantum realization of the bilinear interpolation method for NEQR.

    PubMed

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou

    2017-05-31

    In recent years, quantum image processing has been one of the most active fields in quantum computation and quantum information. Image scaling, a kind of geometric image transformation, has been widely studied and applied in classical image processing; however, a quantum version has been lacking. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). Firstly, the feasibility of bilinear interpolation for NEQR is proven. Then concrete quantum circuits for bilinear interpolation, including scaling up and scaling down for NEQR, are given using multiple controlled-NOT operations, the special add-one operation, the reverse parallel adder, the parallel subtractor, and multiplier and division operations. Finally, the complexity of the quantum network circuit is analyzed in terms of basic quantum gates. Simulation results show that an image scaled up using bilinear interpolation is clearer and less distorted than one scaled with nearest-neighbor interpolation.
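
    For reference, the classical operation that the NEQR circuits reproduce is ordinary bilinear interpolation; a plain NumPy sketch of that classical baseline (not the quantum circuit itself):

    ```python
    import numpy as np

    def bilinear_scale(img, new_h, new_w):
        """Scale a grayscale image by bilinear interpolation."""
        h, w = img.shape
        out = np.empty((new_h, new_w), dtype=float)
        for i in range(new_h):
            for j in range(new_w):
                # Map the output pixel back into source coordinates.
                y = i * (h - 1) / max(new_h - 1, 1)
                x = j * (w - 1) / max(new_w - 1, 1)
                y0, x0 = int(y), int(x)
                y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
                dy, dx = y - y0, x - x0
                # Weighted average of the four neighboring pixels.
                out[i, j] = ((1 - dy) * (1 - dx) * img[y0, x0]
                             + (1 - dy) * dx * img[y0, x1]
                             + dy * (1 - dx) * img[y1, x0]
                             + dy * dx * img[y1, x1])
        return out
    ```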

  14. Improved automatic adjustment of density and contrast in FCR system using neural network

    NASA Astrophysics Data System (ADS)

    Takeo, Hideya; Nakajima, Nobuyoshi; Ishida, Masamitsu; Kato, Hisatoyo

    1994-05-01

    The FCR system automatically adjusts image density and contrast by analyzing the histogram of the image data in the radiation field. The advanced image recognition methods proposed in this paper, which use neural network technology, can improve this automatic adjustment performance. There are two methods, both based on a 3-layer neural network trained with backpropagation. In one method the image data are input directly to the input layer; in the other, the histogram data are input. The former is effective for imaging menus such as the shoulder joint, in which the position of the region of interest within the histogram changes with positioning; the latter is effective for imaging menus such as the pediatric chest, in which the histogram shape changes with positioning. We experimentally confirmed the validity of these methods, in terms of automatic adjustment performance, by comparison with conventional histogram analysis methods.
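
    A minimal sketch of the histogram-input variant, assuming a 64-bin histogram and two outputs (density and contrast corrections); the layer sizes, activation and learning rate are illustrative assumptions, not the paper's configuration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # 64 histogram bins in, 2 adjustment values out, one hidden layer.
    W1 = rng.normal(0.0, 0.1, (64, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0.0, 0.1, (16, 2));  b2 = np.zeros(2)

    def forward(hist):
        h = np.tanh(hist @ W1 + b1)   # hidden layer
        return h, h @ W2 + b2         # linear output: density, contrast

    def train_step(hist, target, lr=0.01):
        """One backpropagation step on a squared-error loss."""
        global W1, b1, W2, b2
        h, out = forward(hist)
        err = out - target
        dh = (W2 @ err) * (1.0 - h ** 2)          # tanh derivative
        W2 -= lr * np.outer(h, err);   b2 -= lr * err
        W1 -= lr * np.outer(hist, dh); b1 -= lr * dh
    ```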

  15. dada - a web-based 2D detector analysis tool

    NASA Astrophysics Data System (ADS)

    Osterhoff, Markus

    2017-06-01

    The data daemon, dada, is a server backend for unified access to 2D pixel detector image data stored by different detectors, in different file formats, and saved with varying naming conventions and folder structures across instruments. Furthermore, dada implements basic pre-processing and analysis routines, ranging from pixel binning and azimuthal integration to raster-scan processing. Users commonly interact with dada through a web frontend, but all parameters of an analysis are encoded into a Uniform Resource Identifier (URI), which can also be written by hand or by scripts for batch processing.
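
    As one example of the pre-processing routines mentioned, azimuthal integration reduces a 2D detector frame to a radial profile around the beam center; a NumPy sketch of the idea (not dada's actual implementation):

    ```python
    import numpy as np

    def azimuthal_integration(frame, cx, cy, n_bins=200):
        """Average detector counts in rings around the beam center (cx, cy)."""
        y, x = np.indices(frame.shape)
        r = np.hypot(x - cx, y - cy)
        edges = np.linspace(0.0, r.max(), n_bins + 1)
        idx = np.clip(np.digitize(r.ravel(), edges) - 1, 0, n_bins - 1)
        sums = np.bincount(idx, weights=frame.ravel(), minlength=n_bins)
        counts = np.bincount(idx, minlength=n_bins)
        return edges[:-1], sums / np.maximum(counts, 1)
    ```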

  16. Research on spatial-variant property of bistatic ISAR imaging plane of space target

    NASA Astrophysics Data System (ADS)

    Guo, Bao-Feng; Wang, Jun-Ling; Gao, Mei-Guo

    2015-04-01

    The imaging plane of inverse synthetic aperture radar (ISAR) is the projection plane of the target. When an image is formed using range-Doppler theory, the imaging plane may have a spatial-variant property, which changes the scatterers' projection positions and results in migration through resolution cells. In this study, we focus on the spatial-variant property of the imaging plane of a three-axis-stabilized space target. The innovative contributions are as follows. 1) The target motion model in orbit is provided based on a two-body model. 2) The instantaneous imaging plane is determined by the method of vector analysis. 3) Three Euler angles are introduced to describe the spatial-variant property of the imaging plane, and the image quality is analyzed. The simulation results confirm the analysis of the spatial-variant property. The research in this study is significant for the selection of the imaging segment, and provides a basis for subsequent data processing and compensation algorithms. Project supported by the National Natural Science Foundation of China (Grant No. 61401024), the Shanghai Aerospace Science and Technology Innovation Foundation, China (Grant No. SAST201240), and the Basic Research Foundation of Beijing Institute of Technology (Grant No. 20140542001).
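
    The three Euler angles describe how the instantaneous imaging plane re-orients over the observation arc; a generic rotation-matrix sketch (the z-y-x convention here is an assumption, since the paper's exact convention is not given above):

    ```python
    import numpy as np

    def euler_zyx(a, b, c):
        """Rotation matrix from three Euler angles (z-y-x convention assumed)."""
        Rz = np.array([[np.cos(a), -np.sin(a), 0],
                       [np.sin(a),  np.cos(a), 0],
                       [0, 0, 1]])
        Ry = np.array([[np.cos(b), 0, np.sin(b)],
                       [0, 1, 0],
                       [-np.sin(b), 0, np.cos(b)]])
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(c), -np.sin(c)],
                       [0, np.sin(c),  np.cos(c)]])
        return Rz @ Ry @ Rx

    # Comparing the rotation between imaging planes at two epochs quantifies
    # the spatial variance that causes migration through resolution cells.
    ```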

  17. Magnetoencephalography - a noninvasive brain imaging method with 1 ms time resolution

    NASA Astrophysics Data System (ADS)

    DelGratta, Cosimo; Pizzella, Vittorio; Tecchio, Franca; Luca Romani, Gian

    2001-12-01

    The basics of magnetoencephalography (MEG), i.e. the measurement and the analysis of the tiny magnetic fields generated outside the scalp by the working human brain, are reviewed. Three main topics are discussed: (1) the relationship between the magnetic field and its generators, including on one hand the neurophysiological basis and the physical theory of magnetic field generation, and on the other hand the techniques for the estimation of the sources from the magnetic field measurements; (2) the instrumental techniques and the laboratory practice of neuromagnetic field measurement and (3) the main applications of MEG in basic neurophysiology as well as in clinical neurology.

  18. Geologic information from satellite images

    NASA Technical Reports Server (NTRS)

    Lee, K.; Knepper, D. H.; Sawatzky, D. L.

    1974-01-01

    Extracting geologic information from ERTS and Skylab/EREP images is best done by a geologist trained in photo-interpretation. The information is at a regional scale, and three basic types are available: rock and soil, geologic structures, and landforms. Discrimination between alluvium and sedimentary or crystalline bedrock, and between units in thick sedimentary sequences is best, primarily because of topographic expression and vegetation differences. Discrimination between crystalline rock types is poor. Folds and fractures are the best displayed geologic features. They are recognizable by topographic expression, drainage patterns, and rock or vegetation tonal patterns. Landforms are easily discriminated by their familiar shapes and patterns. Several examples demonstrate the applicability of satellite images to tectonic analysis and petroleum and mineral exploration.

  19. In Situ and In Vivo Molecular Analysis by Coherent Raman Scattering Microscopy

    PubMed Central

    Liao, Chien-Sheng; Cheng, Ji-Xin

    2017-01-01

    Coherent Raman scattering (CRS) microscopy is a high-speed vibrational imaging platform with the ability to visualize the chemical content of a living specimen by using molecular vibrational fingerprints. We review technical advances and biological applications of CRS microscopy. The basic theory of CRS and the state-of-the-art instrumentation of a CRS microscope are presented. We further summarize and compare the algorithms that are used to separate the Raman signal from the nonresonant background, to denoise a CRS image, and to decompose a hyperspectral CRS image into concentration maps of principal components. Important applications of single-frequency and hyperspectral CRS microscopy are highlighted. Potential directions of CRS microscopy are discussed. PMID:27306307

  20. A Simple Encryption Algorithm for Quantum Color Image

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Zhao, Ya

    2017-06-01

    In this paper, a simple encryption scheme for quantum color images is proposed. Firstly, a color image is transformed into a quantum superposition state by employing NEQR (novel enhanced quantum representation), where the R, G, B values of every pixel in a 24-bit RGB true color image are represented by 24 single-qubit basis states, with 8 qubits per value. Then, these 24 qubits are each transformed from a basis state into a balanced superposition state by controlled rotation gates. At this point the gray-scale values of R, G, B of every pixel are in a balanced superposition of 2^24 multi-qubit basis states, and after measurement the whole image is uniform white noise, which does not provide any information. Decryption is the reverse process of encryption. Experimental results on a classical computer show that the proposed encryption scheme has good security.
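
    The key step is moving each color qubit from a basis state into a balanced superposition; for a single qubit this is an Ry(pi/2) rotation, sketched here in plain NumPy rather than as a quantum circuit (the controlled version conditions this rotation on key qubits):

    ```python
    import numpy as np

    def ry(theta):
        """Single-qubit Ry rotation matrix."""
        c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
        return np.array([[c, -s], [s, c]])

    # |0> -> (|0> + |1>)/sqrt(2): measuring now gives 0 or 1 with equal
    # probability, which is why the measured image looks like white noise.
    print(ry(np.pi / 2) @ np.array([1.0, 0.0]))
    ```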

  1. Study of the ink-paper interaction by image analysis: surface and bulk inspection

    NASA Astrophysics Data System (ADS)

    Fiadeiro, Paulo T.; de O. Mendes, António; M. Ramos, Ana M.; L. de Sousa, Sónia C.

    2013-11-01

    In this work, two optical systems previously designed and implemented by our research team were used to enable surface and bulk inspection of the ink-paper interaction by image analysis. Basically, the first system works by ejecting micro-liter ink drops onto the paper's surface while monitoring the event over time from three different views. The second system is used to section the paper samples through their thickness while simultaneously acquiring images of the ink penetration at each section cut. In the experiments, three black inks of different brands and a common copy paper were chosen and tested with the two developed optical systems. Both qualitative and quantitative analyses were carried out at the surface level and in the bulk of the paper. In conclusion, it was shown that the three tested ink-paper combinations revealed very distinct characteristics.

  2. Fluorescence imaging of chromosomal DNA using click chemistry

    NASA Astrophysics Data System (ADS)

    Ishizuka, Takumi; Liu, Hong Shan; Ito, Kenichiro; Xu, Yan

    2016-09-01

    Chromosome visualization is essential for chromosome analysis and genetic diagnostics. Here, we developed a click chemistry approach for multicolor imaging of chromosomal DNA instead of the traditional dye method. We first demonstrated that commercially available reagents allow multicolor staining of chromosomes. We then prepared two pro-fluorophore moieties that served as light-up reporters to stain chromosomal DNA via the click reaction, visualizing clear chromosomes in multiple colors. We applied this strategy in fluorescence in situ hybridization (FISH) and identified, with high sensitivity and specificity, telomere DNA at the ends of chromosomes. We further extended this approach to observe several basic stages of cell division and found that the click reaction enables direct visualization of chromosome behavior during cell division. These results suggest that the technique can be broadly used for imaging chromosomes and may serve as a new approach for chromosome analysis and genetic diagnostics.

  3. Detection and clustering of features in aerial images by neural network-based algorithm

    NASA Astrophysics Data System (ADS)

    Vozenilek, Vit

    2015-12-01

    The paper presents an algorithm for the detection and clustering of features in aerial photographs based on artificial neural networks. The presented approach is not focused on the detection of specific topographic features, but on combining general feature analysis with clustering and the backward projection of clusters onto the aerial image. The basis of the algorithm is the calculation of the total error of the network and the adjustment of the network weights to minimize that error. A classic bipolar sigmoid was used as the activation function of the neurons, and basic backpropagation was used for learning. To verify that a set of features is able to represent the image content from the user's perspective, a web application was built (ASP.NET on the Microsoft .NET platform). The main findings include the observation that man-made objects in aerial images can be successfully identified by detecting shapes and anomalies. It was also found that an appropriate combination of comprehensive features describing the colors and selected shapes of individual areas can be useful for image analysis.
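
    For reference, the bipolar sigmoid and the derivative form used inside the backpropagation weight update can be written as a minimal sketch:

    ```python
    import numpy as np

    def bipolar_sigmoid(x):
        """Classic bipolar sigmoid, mapping activations into (-1, 1)."""
        return 2.0 / (1.0 + np.exp(-x)) - 1.0

    def bipolar_sigmoid_deriv(y):
        """Derivative written in terms of the output y = f(x):
        f'(x) = 0.5 * (1 + y) * (1 - y), the factor backpropagation
        multiplies into each layer's error signal."""
        return 0.5 * (1.0 + y) * (1.0 - y)
    ```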

  4. Soft computing approach to 3D lung nodule segmentation in CT.

    PubMed

    Badura, P; Pietka, E

    2014-10-01

    This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm, mask generation, whose main goal is to handle specific types of nodules connected to the pleura or vessels; it consists of basic image processing operations as well as dedicated routines for these specific cases. Evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC step, the remaining vessels are removed during a postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release, the LIDC-IDRI (Image Database Resource Initiative) database. Copyright © 2014 Elsevier Ltd. All rights reserved.
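
    Fuzzy connectedness assigns each voxel the strength of its best path to the seed points, where a path is only as strong as its weakest link; a sketch of one common intensity-based affinity function (the paper's exact affinity may differ):

    ```python
    import numpy as np

    def affinity(img, c, d, obj_mean, obj_sigma):
        """Affinity between adjacent voxels c and d (index tuples): close to 1
        when their mean intensity matches the expected object statistics."""
        avg = 0.5 * (float(img[c]) + float(img[d]))
        return float(np.exp(-0.5 * ((avg - obj_mean) / obj_sigma) ** 2))

    # A path's strength is the minimum affinity along it; the FC object is the
    # set of voxels whose best-path strength from the seeds exceeds a threshold.
    ```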

  5. Imaging intratumor heterogeneity: role in therapy response, resistance, and clinical outcome.

    PubMed

    O'Connor, James P B; Rose, Chris J; Waterton, John C; Carano, Richard A D; Parker, Geoff J M; Jackson, Alan

    2015-01-15

    Tumors exhibit genomic and phenotypic heterogeneity, which has prognostic significance and may influence response to therapy. Imaging can quantify the spatial variation in architecture and function of individual tumors through quantifying basic biophysical parameters such as CT density or MRI signal relaxation rate; through measurements of blood flow, hypoxia, metabolism, cell death, and other phenotypic features; and through mapping the spatial distribution of biochemical pathways and cell signaling networks using PET, MRI, and other emerging molecular imaging techniques. These methods can establish whether one tumor is more or less heterogeneous than another and can identify subregions with differing biology. In this article, we review the image analysis methods currently used to quantify spatial heterogeneity within tumors. We discuss how analysis of intratumor heterogeneity can provide benefit over more simple biomarkers such as tumor size and average function. We consider how imaging methods can be integrated with genomic and pathology data, instead of being developed in isolation. Finally, we identify the challenges that must be overcome before measurements of intratumoral heterogeneity can be used routinely to guide patient care. ©2014 American Association for Cancer Research.

  6. Designing Multimedia Learning Systems for Adult Learners: Basic Skills with a Workforce Emphasis. NCAL Working Paper.

    ERIC Educational Resources Information Center

    Sabatini, John P.

    An analysis was conducted of the results of a formative evaluation of the LiteracyLink "Workplace Essential Skills" (WES) learning system conducted in the fall of 1998. (The WES learning system is a multimedia learning system integrating text, sound, graphics, animation, video, and images in a computer system and includes a videotape series, a…

  7. Effects of photographic distance on tree crown attributes calculated using UrbanCrowns image analysis software

    Treesearch

    Mason F. Patterson; P. Eric Wiseman; Matthew F. Winn; Sang-mook Lee; Philip A. Araman

    2011-01-01

    UrbanCrowns is a software program developed by the USDA Forest Service that computes crown attributes using a side-view digital photograph and a few basic field measurements. From an operational standpoint, it is not known how well the software performs under varying photographic conditions for trees of diverse size, which could impact measurement reproducibility and...

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krumeich, F., E-mail: krumeich@inorg.chem.ethz.ch; Mueller, E.; Wepf, R.A.

    While HRTEM is the well-established method to characterize the structure of dodecagonal tantalum (vanadium) telluride quasicrystals and their periodic approximants, phase-contrast imaging performed on an aberration-corrected scanning transmission electron microscope (STEM) represents a favorable alternative. The (Ta,V)₁₅₁Te₇₄ clusters, the basic structural unit in all these phases, can be visualized with high resolution. A dependence of the image contrast on defocus and specimen thickness has been observed. In thin areas, the projected crystal potential is basically imaged with either dark or bright contrast at two defocus values close to Scherzer defocus, as confirmed by image simulations utilizing the principle of reciprocity. Models for square-triangle tilings describing the arrangement of the basic clusters can be derived from such images. Graphical abstract: PC-STEM image of a (Ta,V)₁₅₁Te₇₄ cluster. Highlights: • Cs-corrected STEM is applied for the characterization of dodecagonal quasicrystals. • The projected potential of the structure is mirrored in the images. • Phase-contrast STEM imaging depends on defocus and thickness. • For simulations of phase-contrast STEM images, the reciprocity theorem is applicable.

  9. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. Using a biometrics technology system with facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification, and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features for six expression classes: happy, sad, neutral, angry, fearful, and disgusted. A Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is then used for the classification of facial expressions. Evaluated on 185 expression images from 10 persons, the MELS-SVM model achieved a high accuracy of 99.998% using an RBF kernel.
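
    A minimal scikit-learn sketch of the same pipeline, with an ordinary RBF-kernel SVC standing in for the paper's MELS-SVM ensemble (the component count and variable names are hypothetical):

    ```python
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: flattened face images, y: expression labels
    # (happy, sad, neutral, angry, fearful, disgusted).
    clf = make_pipeline(StandardScaler(),
                        PCA(n_components=50),
                        SVC(kernel="rbf"))
    # clf.fit(X_train, y_train); clf.score(X_test, y_test)
    ```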

  10. A Tentative Application Of Morphological Filters To Time-Varying Images

    NASA Astrophysics Data System (ADS)

    Billard, D.; Poquillon, B.

    1989-03-01

    In this paper, morphological filters, which are commonly used to process either 2D or multidimensional static images, are generalized to the analysis of time-varying image sequences. The introduction of the time dimension induces interesting properties when designing such spatio-temporal morphological filters. In particular, the specification of spatio-temporal structuring elements (equivalent to time-varying spatial structuring elements) can be adjusted according to the temporal variations of the image sequences to be processed: this allows specific morphological transforms to be derived for noise filtering or the discrimination of moving objects in dynamic images viewed by a non-stationary sensor. First, a brief introduction to the basic principles underlying morphological filters is given. Then, a straightforward generalization of these principles to time-varying images is proposed. This leads us to define spatio-temporal opening and closing and to introduce some of their possible applications to processing dynamic images. Finally, preliminary results obtained using a natural forward-looking infrared (FLIR) image sequence are presented.
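
    A minimal sketch of such a spatio-temporal opening, treating the sequence as a 3D (time, y, x) volume with SciPy; the structuring-element sizes are illustrative assumptions:

    ```python
    import numpy as np
    from scipy import ndimage

    def spatiotemporal_opening(seq, t_len=3, s_len=3):
        """Grey-level opening with a structuring element extended along the
        time axis; bright transients shorter than t_len frames (or smaller
        than s_len x s_len pixels) are suppressed."""
        footprint = np.ones((t_len, s_len, s_len), dtype=bool)
        return ndimage.grey_opening(seq, footprint=footprint)

    # seq = np.stack(frames)  # (time, y, x) array, e.g., a FLIR sequence
    ```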

  11. Technical aspects of dental CBCT: state of the art

    PubMed Central

    Araki, K; Siewerdsen, J H; Thongvigitmanee, S S

    2015-01-01

    As CBCT is widely used in dental and maxillofacial imaging, it is important for users as well as referring practitioners to understand the basic concepts of this imaging modality. This review covers the technical aspects of each part of the CBCT imaging chain. First, an overview is given of the hardware of a CBCT device. The principles of cone beam image acquisition and image reconstruction are described. Optimization of imaging protocols in CBCT is briefly discussed. Finally, basic and advanced visualization methods are illustrated. Certain topics in this review are applicable to all types of radiographic imaging (e.g. the principle and properties of an X-ray tube), while others are specific to dental CBCT imaging (e.g. advanced visualization techniques). PMID:25263643

  12. Tse computers. [ultrahigh speed optical processing for two dimensional binary image

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.; Strong, J. P., III

    1977-01-01

    An ultra-high-speed computer that utilizes binary images as its basic computational entity is being developed. The basic logic components perform thousands of operations simultaneously. Technologies of the fiber optics, display, thin film, and semiconductor industries are being utilized in the building of the hardware.

  13. Looking back to inform the future: The role of cognition in forest disturbance characterization from remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Bianchetti, Raechel Anne

    Remotely sensed images have become a ubiquitous part of our daily lives. From novice users aiding in search and rescue missions with tools such as TomNod, to trained analysts synthesizing disparate data to address complex problems like climate change, imagery has become central to geospatial problem solving. Expert image analysts are continually faced with rapidly developing sensor technologies and software systems. In response to these cognitively demanding environments, expert analysts develop specialized knowledge and analytic skills to address increasingly complex problems. This study identifies the knowledge, skills, and analytic goals of expert image analysts tasked with the identification of land cover and land use change. The analysts participating in this research are currently working as part of a national-level analysis of land use change and are well versed in the use of TimeSync, forest science, and image analysis. The results of this study benefit current analysts, as they improve their awareness of the mental processes used during image interpretation. The study can also be generalized to understand the types of knowledge and visual cues that analysts use when reasoning with imagery for purposes beyond land use change studies. Here a Cognitive Task Analysis framework is used to organize evidence from qualitative knowledge elicitation methods for characterizing the cognitive aspects of the TimeSync image analysis process. Using a combination of content analysis, diagramming, semi-structured interviews, and observation, the study highlights the perceptual and cognitive elements of expert remote sensing interpretation. Results show that image analysts perform several standard cognitive processes but flexibly employ these processes in response to various contextual cues. Expert image analysts' ability to think flexibly during analysis was directly related to their amount of image analysis experience. Additionally, results show that the basic Image Interpretation Elements continue to be important despite technological augmentation of the interpretation process. These results are used to derive a set of design guidelines for developing geovisual analytic tools and training to support image analysis.

  14. A simple program to measure and analyse tree rings using Excel, R and SigmaScan

    PubMed Central

    Hietz, Peter

    2011-01-01

    I present new software that links a program for image analysis (SigmaScan), one for spreadsheets (Excel) and one for statistical analysis (R) for applications in tree-ring analysis. The first macro measures ring widths marked by the user on scanned images, stores raw and detrended data in Excel and calculates the distance to the pith and inter-series correlations. A second macro measures darkness along a defined path to identify the latewood-earlywood transition in conifers, and a third shows the potential for automatic detection of boundaries. Written in Visual Basic for Applications, the code makes use of the advantages of existing programs and is consequently very economical and relatively simple to adjust to the requirements of specific projects, or to expand using already available code. PMID:26109835

  15. A simple program to measure and analyse tree rings using Excel, R and SigmaScan.

    PubMed

    Hietz, Peter

    I present new software that links a program for image analysis (SigmaScan), one for spreadsheets (Excel) and one for statistical analysis (R) for applications in tree-ring analysis. The first macro measures ring widths marked by the user on scanned images, stores raw and detrended data in Excel and calculates the distance to the pith and inter-series correlations. A second macro measures darkness along a defined path to identify the latewood-earlywood transition in conifers, and a third shows the potential for automatic detection of boundaries. Written in Visual Basic for Applications, the code makes use of the advantages of existing programs and is consequently very economical and relatively simple to adjust to the requirements of specific projects, or to expand using already available code.

  16. A dedicated cone-beam CT system for musculoskeletal extremities imaging: design, optimization, and initial performance characterization.

    PubMed

    Zbijewski, W; De Jean, P; Prakash, P; Ding, Y; Stayman, J W; Packard, N; Senn, R; Yang, D; Yorkston, J; Machado, A; Carrino, J A; Siewerdsen, J H

    2011-08-01

    This paper reports on the design and initial imaging performance of a dedicated cone-beam CT (CBCT) system for musculoskeletal (MSK) extremities. The system complements conventional CT and MR and offers a variety of potential clinical and logistical advantages that are likely to be of benefit to diagnosis, treatment planning, and assessment of therapy response in MSK radiology, orthopaedic surgery, and rheumatology. The scanner design incorporated a host of clinical requirements (e.g., ability to scan the weight-bearing knee in a natural stance) and was guided by theoretical and experimental analysis of image quality and dose. Such criteria identified the following basic scanner components and system configuration: a flat-panel detector (FPD, Varian 3030+, 0.194 mm pixels); and a low-power, fixed anode x-ray source with 0.5 mm focal spot (SourceRay XRS-125-7K-P, 0.875 kW) mounted on a retractable C-arm allowing for two scanning orientations with the capability for side entry, viz. a standing configuration for imaging of weight-bearing lower extremities and a sitting configuration for imaging of tensioned upper extremity and unloaded lower extremity. Theoretical modeling employed cascaded systems analysis of modulation transfer function (MTF) and detective quantum efficiency (DQE) computed as a function of system geometry, kVp and filtration, dose, source power, etc. Physical experimentation utilized an imaging bench simulating the scanner geometry for verification of theoretical results and investigation of other factors, such as antiscatter grid selection and 3D image quality in phantom and cadaver, including qualitative comparison to conventional CT. Theoretical modeling and benchtop experimentation confirmed the basic suitability of the FPD and x-ray source mentioned above. Clinical requirements combined with analysis of MTF and DQE yielded the following system geometry: a ~55 cm source-to-detector distance; 1.3 magnification; a 20 cm diameter bore (20 × 20 × 20 cm³ field of view); total acquisition arc of ~240°. The system MTF declines to 50% at ~1.3 mm⁻¹ and to 10% at ~2.7 mm⁻¹, consistent with sub-millimeter spatial resolution. Analysis of DQE suggested a nominal technique of 90 kVp (+0.3 mm Cu added filtration) to provide high imaging performance from ~500 projections at less than ~0.5 kW power, implying ~6.4 mGy (0.064 mSv) for low-dose protocols and ~15 mGy (0.15 mSv) for high-quality protocols. The experimental studies show improved image uniformity and contrast-to-noise ratio (without increase in dose) through incorporation of a custom 10:1 GR antiscatter grid. Cadaver images demonstrate exquisite bone detail, visualization of articular morphology, and soft-tissue visibility comparable to diagnostic CT (10-20 HU contrast resolution). The results indicate that the proposed system will deliver volumetric images of the extremities with soft-tissue contrast resolution comparable to diagnostic CT and improved spatial resolution at potentially reduced dose. Cascaded systems analysis provided a useful basis for system design and optimization without costly repeated experimentation. A combined process of design specification, image quality analysis, clinical feedback, and revision yielded a prototype that is now awaiting clinical pilot studies. Potential advantages of the proposed system include reduced space and cost, imaging of load-bearing extremities, and combined volumetric imaging with real-time fluoroscopy and digital radiography.

  17. A dedicated cone-beam CT system for musculoskeletal extremities imaging: Design, optimization, and initial performance characterization

    PubMed Central

    Zbijewski, W.; De Jean, P.; Prakash, P.; Ding, Y.; Stayman, J. W.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Machado, A.; Carrino, J. A.; Siewerdsen, J. H.

    2011-01-01

    Purpose: This paper reports on the design and initial imaging performance of a dedicated cone-beam CT (CBCT) system for musculoskeletal (MSK) extremities. The system complements conventional CT and MR and offers a variety of potential clinical and logistical advantages that are likely to be of benefit to diagnosis, treatment planning, and assessment of therapy response in MSK radiology, orthopaedic surgery, and rheumatology. Methods: The scanner design incorporated a host of clinical requirements (e.g., ability to scan the weight-bearing knee in a natural stance) and was guided by theoretical and experimental analysis of image quality and dose. Such criteria identified the following basic scanner components and system configuration: a flat-panel detector (FPD, Varian 3030+, 0.194 mm pixels); and a low-power, fixed anode x-ray source with 0.5 mm focal spot (SourceRay XRS-125-7K-P, 0.875 kW) mounted on a retractable C-arm allowing for two scanning orientations with the capability for side entry, viz. a standing configuration for imaging of weight-bearing lower extremities and a sitting configuration for imaging of tensioned upper extremity and unloaded lower extremity. Theoretical modeling employed cascaded systems analysis of modulation transfer function (MTF) and detective quantum efficiency (DQE) computed as a function of system geometry, kVp and filtration, dose, source power, etc. Physical experimentation utilized an imaging bench simulating the scanner geometry for verification of theoretical results and investigation of other factors, such as antiscatter grid selection and 3D image quality in phantom and cadaver, including qualitative comparison to conventional CT. Results: Theoretical modeling and benchtop experimentation confirmed the basic suitability of the FPD and x-ray source mentioned above. Clinical requirements combined with analysis of MTF and DQE yielded the following system geometry: a ∼55 cm source-to-detector distance; 1.3 magnification; a 20 cm diameter bore (20 × 20 × 20 cm3 field of view); total acquisition arc of ∼240°. The system MTF declines to 50% at ∼1.3 mm−1 and to 10% at ∼2.7 mm−1, consistent with sub-millimeter spatial resolution. Analysis of DQE suggested a nominal technique of 90 kVp (+0.3 mm Cu added filtration) to provide high imaging performance from ∼500 projections at less than ∼0.5 kW power, implying ∼6.4 mGy (0.064 mSv) for low-dose protocols and ∼15 mGy (0.15 mSv) for high-quality protocols. The experimental studies show improved image uniformity and contrast-to-noise ratio (without increase in dose) through incorporation of a custom 10:1 GR antiscatter grid. Cadaver images demonstrate exquisite bone detail, visualization of articular morphology, and soft-tissue visibility comparable to diagnostic CT (10–20 HU contrast resolution). Conclusions: The results indicate that the proposed system will deliver volumetric images of the extremities with soft-tissue contrast resolution comparable to diagnostic CT and improved spatial resolution at potentially reduced dose. Cascaded systems analysis provided a useful basis for system design and optimization without costly repeated experimentation. A combined process of design specification, image quality analysis, clinical feedback, and revision yielded a prototype that is now awaiting clinical pilot studies. Potential advantages of the proposed system include reduced space and cost, imaging of load-bearing extremities, and combined volumetric imaging with real-time fluoroscopy and digital radiography. 
PMID:21928644

  18. A dedicated cone-beam CT system for musculoskeletal extremities imaging: Design, optimization, and initial performance characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zbijewski, W.; De Jean, P.; Prakash, P.

    2011-08-15

    Purpose: This paper reports on the design and initial imaging performance of a dedicated cone-beam CT (CBCT) system for musculoskeletal (MSK) extremities. The system complements conventional CT and MR and offers a variety of potential clinical and logistical advantages that are likely to be of benefit to diagnosis, treatment planning, and assessment of therapy response in MSK radiology, orthopaedic surgery, and rheumatology. Methods: The scanner design incorporated a host of clinical requirements (e.g., ability to scan the weight-bearing knee in a natural stance) and was guided by theoretical and experimental analysis of image quality and dose. Such criteria identified the following basic scanner components and system configuration: a flat-panel detector (FPD, Varian 3030+, 0.194 mm pixels); and a low-power, fixed anode x-ray source with 0.5 mm focal spot (SourceRay XRS-125-7K-P, 0.875 kW) mounted on a retractable C-arm allowing for two scanning orientations with the capability for side entry, viz. a standing configuration for imaging of weight-bearing lower extremities and a sitting configuration for imaging of tensioned upper extremity and unloaded lower extremity. Theoretical modeling employed cascaded systems analysis of modulation transfer function (MTF) and detective quantum efficiency (DQE) computed as a function of system geometry, kVp and filtration, dose, source power, etc. Physical experimentation utilized an imaging bench simulating the scanner geometry for verification of theoretical results and investigation of other factors, such as antiscatter grid selection and 3D image quality in phantom and cadaver, including qualitative comparison to conventional CT. Results: Theoretical modeling and benchtop experimentation confirmed the basic suitability of the FPD and x-ray source mentioned above. Clinical requirements combined with analysis of MTF and DQE yielded the following system geometry: a ~55 cm source-to-detector distance; 1.3 magnification; a 20 cm diameter bore (20 × 20 × 20 cm³ field of view); total acquisition arc of ~240°. The system MTF declines to 50% at ~1.3 mm⁻¹ and to 10% at ~2.7 mm⁻¹, consistent with sub-millimeter spatial resolution. Analysis of DQE suggested a nominal technique of 90 kVp (+0.3 mm Cu added filtration) to provide high imaging performance from ~500 projections at less than ~0.5 kW power, implying ~6.4 mGy (0.064 mSv) for low-dose protocols and ~15 mGy (0.15 mSv) for high-quality protocols. The experimental studies show improved image uniformity and contrast-to-noise ratio (without increase in dose) through incorporation of a custom 10:1 GR antiscatter grid. Cadaver images demonstrate exquisite bone detail, visualization of articular morphology, and soft-tissue visibility comparable to diagnostic CT (10-20 HU contrast resolution). Conclusions: The results indicate that the proposed system will deliver volumetric images of the extremities with soft-tissue contrast resolution comparable to diagnostic CT and improved spatial resolution at potentially reduced dose. Cascaded systems analysis provided a useful basis for system design and optimization without costly repeated experimentation. A combined process of design specification, image quality analysis, clinical feedback, and revision yielded a prototype that is now awaiting clinical pilot studies. Potential advantages of the proposed system include reduced space and cost, imaging of load-bearing extremities, and combined volumetric imaging with real-time fluoroscopy and digital radiography.

  19. Performance characteristics of a visual-search human-model observer with sparse PET image data

    NASA Astrophysics Data System (ADS)

    Gifford, Howard C.

    2012-02-01

    As predictors of human performance in detection-localization tasks, statistical model observers can have problems with tasks that are primarily limited by target contrast or structural noise. Model observers with a visual-search (VS) framework may provide a more reliable alternative. This framework provides for an initial holistic search that identifies suspicious locations for analysis by a statistical observer. A basic VS observer for emission tomography focuses on hot "blobs" in an image and uses a channelized nonprewhitening (CNPW) observer for analysis. In [1], we investigated this model for a contrast-limited task with SPECT images; herein, a statistical-noise-limited task involving PET images is considered. An LROC study used 2D image slices with liver, lung and soft-tissue tumors. Human and model observers read the images in coronal, sagittal and transverse display formats. The study thus measured the detectability of tumors in a given organ as a function of display format. The model observers were applied under several task variants that tested their response to structural noise both at the organ boundaries alone and over the organs as a whole. As measured by correlation with the human data, the VS observer outperformed the CNPW scanning observer.

  20. Basic research and data analysis for the earth and ocean physics applications program and for the National Geodetic Satellite program

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Data acquisition using single-image and seven-image data processing is used to provide a precise and accurate geometric description of the earth's surface. Transformation parameters and network distortions are determined. Sea slope along the continental boundaries of the U.S. and earth rotation are examined, along with a close-grid geodynamic satellite system. Data are derived for a mathematical description of the earth's gravitational field; time variations are determined for the geometry of the ocean surface, the solid earth, the gravity field, and other geophysical parameters.

  1. Improving cervical region of interest by eliminating vaginal walls and cotton-swabs for automated image analysis

    NASA Astrophysics Data System (ADS)

    Venkataraman, Sankar; Li, Wenjing

    2008-03-01

    Image analysis for automated diagnosis of cervical cancer has attained high prominence in the last decade. Automated image analysis at all levels requires a basic segmentation of the region of interest (ROI) within a given image. The precision of the diagnosis is often reflected by the precision in detecting the initial region of interest, especially when features outside the ROI mimic the ones within it. The work described here discusses algorithms used to improve the cervical region of interest as part of automated cervical image diagnosis. A vital visual aid in diagnosing cervical cancer is the aceto-whitening of the cervix after the application of acetic acid. Color and texture are used to segment acetowhite regions within the cervical ROI. Vaginal walls along with cotton swabs sometimes mimic these essential features, leading to several false positives. The work presented here focuses on detecting in-focus vaginal wall boundaries and then extrapolating them to exclude vaginal walls from the cervical ROI. In addition, a marker-controlled watershed segmentation is used to detect cotton swabs within the cervical ROI. A dataset comprising 50 high-resolution images of the cervix acquired after 60 seconds of acetic acid application was used to test the algorithm. Out of the 50 images, 27 benefited from a new cervical ROI. Significant improvement in overall diagnosis was observed in these images, as false positives caused by features outside the actual ROI mimicking acetowhite regions were eliminated.
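
    A marker-controlled watershed along these lines can be sketched with scikit-image; the seeding rules and thresholds below are illustrative assumptions, not the paper's actual parameters:

    ```python
    import numpy as np
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    def segment_swab_candidates(gray, bright=0.85, dark=0.25):
        """Flood the gradient image from explicit markers: very bright pixels
        seed cotton-swab candidates and dark pixels seed background."""
        gradient = sobel(gray)                    # edges act as watershed ridges
        markers = np.zeros(gray.shape, dtype=np.int32)
        markers[gray < dark] = 1                  # background seeds
        markers[gray > bright] = 2                # swab-like seeds
        return watershed(gradient, markers) == 2  # swab candidate mask
    ```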

  2. AAPM/RSNA physics tutorial for residents. Topics in US: B-mode US: basic concepts and new technology.

    PubMed

    Hangiandreou, Nicholas J

    2003-01-01

    Ultrasonography (US) has been used in medical imaging for over half a century. Current US scanners are based largely on the same basic principles used in the initial devices for human imaging. Modern equipment uses a pulse-echo approach with a brightness-mode (B-mode) display. Fundamental aspects of the B-mode imaging process include basic ultrasound physics, interactions of ultrasound with tissue, ultrasound pulse formation, scanning the ultrasound beam, and echo detection and signal processing. Recent technical innovations that have been developed to improve the performance of modern US equipment include the following: tissue harmonic imaging, spatial compound imaging, extended field of view imaging, coded pulse excitation, electronic section focusing, three-dimensional and four-dimensional imaging, and the general trend toward equipment miniaturization. US is a relatively inexpensive, portable, safe, and real-time modality, all of which make it one of the most widely used imaging modalities in medicine. Although B-mode US is sometimes referred to as a mature technology, this modality continues to experience a significant evolution in capability with even more exciting developments on the horizon. Copyright RSNA, 2003

  3. Developing Photoacoustic Tomography Devices for Translational Medicine and Basic Science Research

    NASA Astrophysics Data System (ADS)

    Wong, Terence Tsz Wai

    Photoacoustic (PA) tomography (PAT) provides volumetric images of biological tissue with scalable spatial resolutions and imaging depths, while preserving the same imaging contrast--optical absorption. Taking advantage of its 100% sensitivity to optical absorption, PAT has been widely applied in structural, functional, and molecular imaging, with both endogenous and exogenous contrasts, at greater depths than pure optical methods. Intuitively, hemoglobin has been the most commonly studied biomolecule in PAT due to its strong absorption in the visible wavelength regime. One of the main focuses of this dissertation is to investigate an underexplored wavelength regime--ultraviolet (UV), which allows us to image cell nuclei without labels and generate histology-like images directly from unprocessed biological tissue. These preparation-free and easy-to-interpret characteristics open up new possibilities for PAT to become readily applicable to other important biomedical problems (e.g., surgical margin analysis, Chapter 2) or basic science studies (e.g., whole-organ imaging, Chapter 3). For instance, we developed and optimized a PA microscopy system with UV laser illumination (UV-PAM) to achieve fast, label-free, multilayered, and histology-like imaging of human breast cancer in Chapter 2. These imaging abilities are essential to intraoperative surgical margin analysis, which enables promptly directed re-excision and reduces the number of repeat surgeries. We have incorporated the Grüneisen relaxation (GR) effect with UV-PAM to improve the performance of our UV-PAM system (e.g., the axial resolution), thus providing more accurate three-dimensional (3D) information (Chapter 4). The nonlinear PA signals caused by the GR effect enable optical sectioning capability, revealing important 3D cell nuclear distributions and internal structures for cancer diagnosis. In the final focus of this dissertation, we have implemented a low-cost PA computed tomography (PACT) system with a single xenon flash lamp as the illumination source (Chapter 5). Lasers have been commonly used as illumination sources in PACT; however, lasers are usually expensive and bulky, limiting their applicability in many clinical settings. Therefore, the use of a single xenon flash lamp as an alternative light source was explored. We found that PACT images acquired with flash lamp illumination were comparable to those acquired with laser illumination. This low-cost and portable PACT system opens up new potentials, such as low-cost skin melanoma imaging in developing countries.

  4. Feasibility of dynamic cardiac ultrasound transmission via mobile phone for basic emergency teleconsultation.

    PubMed

    Lim, Tae Ho; Choi, Hyuk Joong; Kang, Bo Seung

    2010-01-01

    We assessed the feasibility of using a camcorder mobile phone for teleconsultation on cardiac echocardiography. The diagnostic performance of evaluating left ventricular (LV) systolic function was measured by three emergency medicine physicians. A total of 138 short echocardiography video sequences (from 70 subjects) was selected from previous emergency room ultrasound examinations. Measurement of the LV ejection fraction based on the transmitted video displayed on a mobile phone was compared with the original video displayed on the LCD monitor of the ultrasound machine. Image quality was evaluated using the double stimulus impairment scale (DSIS). All observers showed high sensitivity, and specificity improved with the observer's increasing experience of cardiac ultrasound. Although the image quality of video on the mobile phone was lower than that of the original, a receiver operating characteristic (ROC) analysis indicated no significant difference in diagnostic performance. Immediate basic teleconsultation on echocardiography movies is possible using current commercially available mobile phone systems.

  5. A modified approach combining FNEA and watershed algorithms for segmenting remotely-sensed optical images

    NASA Astrophysics Data System (ADS)

    Liu, Likun

    2018-01-01

    In remote sensing image processing, segmentation is a preliminary step for subsequent analysis, including semi-automatic human interpretation and fully automatic machine recognition and learning. Since 2000, object-oriented remote sensing image processing has prevailed; the core of this approach is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on improving that algorithm: existing segmentation algorithms are analyzed, and the watershed algorithm is selected as the optimal initialization. The algorithm is then modified by adjusting an area parameter and further combining it with a heterogeneity parameter. Several experiments show that the modified FNEA algorithm yields better segmentation results than a traditional pixel-based method (an FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed.
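    A minimal sketch of the watershed-initialization step described above, written with scikit-image; `initial_segments`, the Sobel gradient, and the seeding parameters are illustrative assumptions rather than the paper's implementation, and the FNEA-style merging stage is omitted.

        # Watershed over-segmentation used as an initialization for
        # later object-based region merging (assumed parameters).
        import numpy as np
        from skimage import filters, feature, segmentation

        def initial_segments(image, min_distance=5):
            """Label image of watershed basins computed on the gradient magnitude."""
            gradient = filters.sobel(image)  # edge strength
            # Seed one basin at each local minimum of the gradient.
            coords = feature.peak_local_max(-gradient, min_distance=min_distance)
            markers = np.zeros(image.shape, dtype=int)
            markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
            return segmentation.watershed(gradient, markers)

    A merging stage in the spirit of FNEA would then iteratively fuse adjacent labels whose combined area/heterogeneity score stays below a scale parameter.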

  6. Structural neuroimaging in neuropsychology: History and contemporary applications.

    PubMed

    Bigler, Erin D

    2017-11-01

    Neuropsychology's origins began long before there were any in vivo methods to image the brain. That changed with the advent of computed tomography in the 1970s and magnetic resonance imaging in the early 1980s. Now computed tomography and magnetic resonance imaging are routinely a part of neuropsychological investigations, with an increasing number of sophisticated methods for image analysis. This review examines the history of neuroimaging utilization in neuropsychological investigations, highlighting the basic methods that go into image quantification and the various metrics that can be derived. Neuroimaging methods, and their limitations, for identifying what constitutes a lesion are discussed. Likewise, the demographic and developmental factors that influence quantification of brain structure are reviewed. Neuroimaging is an integral part of 21st Century neuropsychology. The importance of neuroimaging to advancing neuropsychology is emphasized. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. Image of Turkish Basic Schools: A Reflection from the Province of Ankara

    ERIC Educational Resources Information Center

    Eres, Figen

    2011-01-01

    The purpose of this study was to investigate the organizational image of basic schools in Turkey, a rapidly developing nation that has been investing significantly in education. Participants were 730 residents of Ankara province in the Golbasi district. The participants were selected using a cluster sampling methodology. Data were collected…

  8. Texture analysis based on the Hermite transform for image classification and segmentation

    NASA Astrophysics Data System (ADS)

    Estudillo-Romero, Alfonso; Escalante-Ramirez, Boris; Savage-Carmona, Jesus

    2012-06-01

    Texture analysis has become an important task in image processing because it is used as a preprocessing stage in different research areas, including medical image analysis, industrial inspection, segmentation of remotely sensed imagery, and multimedia indexing and retrieval. In order to extract visual texture features, a texture image analysis technique based on the Hermite transform is presented. Psychovisual evidence suggests that Gaussian derivatives fit the receptive field profiles of mammalian visual systems, and the Hermite transform describes locally basic texture features in terms of Gaussian derivatives. Multiresolution combined with several analysis orders provides detection of the patterns that characterize every texture class. Analysis of the local maximum energy direction and steering of the transformation coefficients increases the method's robustness against texture orientation. This presents an advantage over classical filter bank design, in which a fixed number of orientations for the analysis has to be selected. During the training stage, a subset of the Hermite analysis filters is chosen in order to improve inter-class separability and to reduce the dimensionality of the feature vectors and the computational cost during the classification stage. We exhaustively evaluated the correct classification rate on randomly selected training and testing texture subsets using several kinds of commonly used texture features. A comparison between different distance measurements is also presented. Results of unsupervised real texture segmentation using this approach, and comparison with previous approaches, showed the benefits of our proposal.
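    Because the Hermite transform describes local texture in terms of Gaussian derivatives, a small filter-bank sketch conveys the flavor of the feature extraction; the scales, derivative orders, and function name below are assumptions, and the paper's steering and energy-direction analysis are not reproduced.

        # Gaussian-derivative filter bank: the basic building block the
        # Hermite transform rests on (assumed scales and orders).
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def gaussian_derivative_features(image, sigmas=(1.0, 2.0, 4.0), max_order=2):
            """Stack Gaussian-derivative responses up to max_order at each scale."""
            feats = []
            for sigma in sigmas:
                for m in range(max_order + 1):          # derivative order along y
                    for n in range(max_order + 1 - m):  # derivative order along x
                        feats.append(gaussian_filter(image, sigma, order=(m, n)))
            return np.stack(feats, axis=-1)             # H x W x n_features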

  9. Meeting Report: Tissue-based Image Analysis.

    PubMed

    Saravanan, Chandra; Schumacher, Vanessa; Brown, Danielle; Dunstan, Robert; Galarneau, Jean-Rene; Odin, Marielle; Mishra, Sasmita

    2017-10-01

    Quantitative image analysis (IA) is a rapidly evolving area of digital pathology. Although not a new concept, the quantification of histological features on photomicrographs used to be cumbersome, resource-intensive, and limited to specialists and specialized laboratories. Recent technological advances like highly efficient automated whole slide digitizer (scanner) systems, innovative IA platforms, and the emergence of pathologist-friendly image annotation and analysis systems mean that quantification of features on histological digital images will become increasingly prominent in pathologists' daily professional lives. The added value of quantitative IA in pathology includes confirmation of equivocal findings noted by a pathologist, increasing the sensitivity of feature detection, quantification of signal intensity, and improving efficiency. There is no denying that quantitative IA is part of the future of pathology; however, there are also several potential pitfalls when trying to estimate volumetric features from limited 2-dimensional sections. This continuing education session on quantitative IA offered a broad overview of the field; a hands-on toxicologic pathologist experience with IA principles, tools, and workflows; a discussion on how to apply basic stereology principles in order to minimize bias in IA; and finally, a reflection on the future of IA in the toxicologic pathology field.

  10. A Novel Hyperspectral Microscopic Imaging System for Evaluating Fresh Degree of Pork.

    PubMed

    Xu, Yi; Chen, Quansheng; Liu, Yan; Sun, Xin; Huang, Qiping; Ouyang, Qin; Zhao, Jiewen

    2018-04-01

    This study proposed a rapid microscopic examination method for pork freshness evaluation using a self-assembled hyperspectral microscopic imaging (HMI) system together with feature extraction algorithms and pattern recognition methods. Pork samples were stored for 0 to 5 days, and sample freshness was divided into three levels determined by total volatile basic nitrogen (TVB-N) content. Hyperspectral microscopic images of the samples were acquired by the HMI system and processed as follows. Firstly, characteristic hyperspectral microscopic images were extracted using principal component analysis (PCA), and texture features were then selected based on the gray-level co-occurrence matrix (GLCM). Next, the dimensionality of the feature data was reduced by Fisher discriminant analysis (FDA) for building classification models. Finally, compared with a linear discriminant analysis (LDA) model and a support vector machine (SVM) model, a back-propagation artificial neural network (BP-ANN) model achieved the best freshness classification, with 100% accuracy on the extracted data. The results confirm that the fabricated HMI system combined with multivariate algorithms can accurately evaluate the freshness of pork at the microscopic level, which plays an important role in animal food quality control.
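    A hedged sketch of the GLCM texture step, with scikit-image and scikit-learn standing in for the paper's pipeline; the PCA band selection and the exact BP-ANN architecture are omitted, and all parameter values below are assumptions.

        # Texture features from a gray-level co-occurrence matrix (GLCM),
        # followed by a generic neural-network classifier.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.neural_network import MLPClassifier

        def glcm_features(gray_image, levels=64):
            """Contrast/correlation/energy/homogeneity from a GLCM."""
            img = (gray_image / gray_image.max() * (levels - 1)).astype(np.uint8)
            glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            return np.hstack([graycoprops(glcm, p).ravel()
                              for p in ("contrast", "correlation", "energy", "homogeneity")])

        # Hypothetical usage: X holds one feature row per sample,
        # y the freshness level (0/1/2) derived from TVB-N content.
        # clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)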

  11. A Novel Hyperspectral Microscopic Imaging System for Evaluating Fresh Degree of Pork

    PubMed Central

    Xu, Yi; Chen, Quansheng; Liu, Yan; Sun, Xin; Huang, Qiping; Ouyang, Qin; Zhao, Jiewen

    2018-01-01

    This study proposed a rapid microscopic examination method for pork freshness evaluation using a self-assembled hyperspectral microscopic imaging (HMI) system together with feature extraction algorithms and pattern recognition methods. Pork samples were stored for 0 to 5 days, and sample freshness was divided into three levels determined by total volatile basic nitrogen (TVB-N) content. Hyperspectral microscopic images of the samples were acquired by the HMI system and processed as follows. Firstly, characteristic hyperspectral microscopic images were extracted using principal component analysis (PCA), and texture features were then selected based on the gray-level co-occurrence matrix (GLCM). Next, the dimensionality of the feature data was reduced by Fisher discriminant analysis (FDA) for building classification models. Finally, compared with a linear discriminant analysis (LDA) model and a support vector machine (SVM) model, a back-propagation artificial neural network (BP-ANN) model achieved the best freshness classification, with 100% accuracy on the extracted data. The results confirm that the fabricated HMI system combined with multivariate algorithms can accurately evaluate the freshness of pork at the microscopic level, which plays an important role in animal food quality control. PMID:29805285

  12. Spatial/Spectral Identification of Endmembers from AVIRIS Data using Mathematical Morphology

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; Martinez, Pablo; Gualtieri, J. Anthony; Perez, Rosa M.

    2001-01-01

    During the last several years, a number of airborne and satellite hyperspectral sensors have been developed or improved for remote sensing applications. Imaging spectrometry allows the detection of materials, objects and regions in a particular scene with a high degree of accuracy. Hyperspectral data typically consist of hundreds of thousands of spectra, so the analysis of this information is a key issue. Mathematical morphology theory is a widely used nonlinear technique for image analysis and pattern recognition. Although it is especially well suited to segmenting binary or grayscale images with irregular and complex shapes, its application to the classification/segmentation of multispectral or hyperspectral images has been quite rare. In this paper, we discuss a new, completely automated methodology for finding endmembers in the hyperspectral data cube using mathematical morphology. The extension of classic morphology to the hyperspectral domain allows us to integrate spectral and spatial information in the analysis process. In Section 3, some basic concepts about mathematical morphology and the technical details of our algorithm are provided. In Section 4, the accuracy of the proposed method is tested by its application to real hyperspectral data obtained from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Some details about these data and reference results, obtained by well-known endmember extraction techniques, are provided in Section 2. Finally, in Section 5 we present our main conclusions.

  13. Ocular Screening System

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Used to detect eye problems in children through analysis of retinal reflexes, the system incorporates image processing techniques. VISISCREEN's photorefractor is basically a 35 millimeter camera with a telephoto lens and an electronic flash. By making a color photograph, the system can test the human eye for refractive error and obstruction in the cornea or lens. Ocular alignment problems are detected by imaging both eyes simultaneously. The electronic flash sends light into the eyes, and the light is reflected from the retina back to the camera lens. The photorefractor analyzes the retinal reflexes generated by the subject's response to the flash and produces an image of the subject's eyes in which the pupils are variously colored. The nature of a defect, where one exists, is identifiable by a trained observer's visual examination.

  14. Structure of a zinc oxide ultra-thin film on Rh(100)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuhara, J.; Kato, D.; Matsui, T.

    The structural parameters of ultra-thin zinc oxide films on Rh(100) are investigated using low-energy electron diffraction intensity (LEED I–V) curves, scanning tunneling microscopy (STM), and first-principles density functional theory (DFT) calculations. From the analysis of LEED I–V curves and DFT calculations, two optimized models A and B are determined. Their structures are basically similar to the planar h-BN ZnO(0001) structure, although some oxygen atoms protrude from the surface, associated with an in-plane shift of Zn atoms. A comparison of experimental and simulated STM images shows that the majority and minority structures observed in the STM images correspond to the two optimized models A and B, respectively.

  15. Quantitation of Indoleacetic Acid Conjugates in Bean Seeds by Direct Tissue Hydrolysis 1

    PubMed Central

    Bialek, Krystyna; Cohen, Jerry D.

    1989-01-01

    Gas chromatography-selected ion monitoring-mass spectral analysis using [13C6]indole-3-acetic acid (IAA) as an internal standard provides an effective means for quantitation of IAA liberated during direct strong basic hydrolysis of bean (Phaseolus vulgaris L.) seed powder, provided that extra precautions are undertaken to exclude oxygen from the reaction vial. Direct seed powder hydrolysis revealed that the major portion of amide IAA conjugates in bean seeds are not extractable by aqueous acetone, the solvent used commonly for IAA conjugate extraction from seeds and other plant tissues. Strong basic hydrolysis of plant tissue can be used to provide new information on IAA content. PMID:16666783

  16. A Study on the Basic Criteria for Selecting Heterogeneity Parameters of F18-FDG PET Images.

    PubMed

    Forgacs, Attila; Pall Jonsson, Hermann; Dahlbom, Magnus; Daver, Freddie; D DiFranco, Matthew; Opposits, Gabor; K Krizsan, Aron; Garai, Ildiko; Czernin, Johannes; Varga, Jozsef; Tron, Lajos; Balkay, Laszlo

    2016-01-01

    Textural analysis might give new insights into the quantitative characterization of metabolically active tumors. More than thirty textural parameters have been investigated in previous F18-FDG studies. The purpose of this paper is to declare basic requirements as a selection strategy to identify the most appropriate heterogeneity parameters for measuring textural features. Our predefined requirements were that a reliable heterogeneity parameter has to be volume independent, reproducible, and suitable for expressing quantitatively the degree of heterogeneity. Based on these criteria, we compared various suggested measures of homogeneity. A homogeneous cylindrical phantom was measured on three different PET/CT scanners using a commonly used protocol. In addition, a custom-made inhomogeneous tumor insert placed into the NEMA image quality phantom was imaged with a set of acquisition times and several different reconstruction protocols. PET data of 65 patients with proven lung lesions were retrospectively analyzed as well. Four heterogeneity parameters out of 27 were found to be the most attractive for characterizing the textural properties of metabolically active tumors in FDG PET images: Entropy, Contrast, Correlation, and Coefficient of Variation. These parameters were independent of the delineated tumor volume (larger than 25-30 ml), provided reproducible values (relative standard deviation < 10%), and showed high sensitivity to changes in heterogeneity. Phantom measurements are a viable way to test the reliability of heterogeneity parameters that would be of interest to nuclear imaging clinicians.
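    As an illustration of how the four retained parameters might be computed for a 2-D slice of a delineated tumor, here is a scikit-image sketch; the quantization level, GLCM offsets, and entropy definition are assumptions, not the study's exact protocol.

        # Entropy, Contrast, Correlation, and Coefficient of Variation
        # for one 2-D slice of a delineated ROI (assumed settings).
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def heterogeneity_params(roi_slice, levels=32):
            q = (roi_slice / roi_slice.max() * (levels - 1)).astype(np.uint8)
            glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                                symmetric=True, normed=True)
            p = glcm[:, :, 0, 0]
            entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # texture entropy
            contrast = graycoprops(glcm, "contrast")[0, 0]
            correlation = graycoprops(glcm, "correlation")[0, 0]
            cov = roi_slice.std() / roi_slice.mean()          # coefficient of variation
            return entropy, contrast, correlation, cov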

  17. A Study on the Basic Criteria for Selecting Heterogeneity Parameters of F18-FDG PET Images

    PubMed Central

    Forgacs, Attila; Pall Jonsson, Hermann; Dahlbom, Magnus; Daver, Freddie; D. DiFranco, Matthew; Opposits, Gabor; K. Krizsan, Aron; Garai, Ildiko; Czernin, Johannes; Varga, Jozsef; Tron, Lajos; Balkay, Laszlo

    2016-01-01

    Textural analysis might give new insights into the quantitative characterization of metabolically active tumors. More than thirty textural parameters have been investigated in previous F18-FDG studies. The purpose of this paper is to declare basic requirements as a selection strategy to identify the most appropriate heterogeneity parameters for measuring textural features. Our predefined requirements were that a reliable heterogeneity parameter has to be volume independent, reproducible, and suitable for expressing quantitatively the degree of heterogeneity. Based on these criteria, we compared various suggested measures of homogeneity. A homogeneous cylindrical phantom was measured on three different PET/CT scanners using a commonly used protocol. In addition, a custom-made inhomogeneous tumor insert placed into the NEMA image quality phantom was imaged with a set of acquisition times and several different reconstruction protocols. PET data of 65 patients with proven lung lesions were retrospectively analyzed as well. Four heterogeneity parameters out of 27 were found to be the most attractive for characterizing the textural properties of metabolically active tumors in FDG PET images: Entropy, Contrast, Correlation, and Coefficient of Variation. These parameters were independent of the delineated tumor volume (larger than 25–30 ml), provided reproducible values (relative standard deviation < 10%), and showed high sensitivity to changes in heterogeneity. Phantom measurements are a viable way to test the reliability of heterogeneity parameters that would be of interest to nuclear imaging clinicians. PMID:27736888

  18. Contrast imaging in mouse embryos using high-frequency ultrasound.

    PubMed

    Denbeigh, Janet M; Nixon, Brian A; Puri, Mira C; Foster, F Stuart

    2015-03-04

    Ultrasound contrast-enhanced imaging can convey essential quantitative information regarding tissue vascularity and perfusion and, in targeted applications, facilitate the detection and measure of vascular biomarkers at the molecular level. Within the mouse embryo, this noninvasive technique may be used to uncover basic mechanisms underlying vascular development in the early mouse circulatory system and in genetic models of cardiovascular disease. The mouse embryo also presents as an excellent model for studying the adhesion of microbubbles to angiogenic targets (including vascular endothelial growth factor receptor 2 (VEGFR2) or αvβ3) and for assessing the quantitative nature of molecular ultrasound. We therefore developed a method to introduce ultrasound contrast agents into the vasculature of living, isolated embryos. This allows freedom in terms of injection control and positioning, reproducibility of the imaging plane without obstruction and motion, and simplified image analysis and quantification. Late gestational stage (embryonic day (E)16.6 and E17.5) murine embryos were isolated from the uterus, gently exteriorized from the yolk sac and microbubble contrast agents were injected into veins accessible on the chorionic surface of the placental disc. Nonlinear contrast ultrasound imaging was then employed to collect a number of basic perfusion parameters (peak enhancement, wash-in rate and time to peak) and quantify targeted microbubble binding in an endoglin mouse model. We show the successful circulation of microbubbles within living embryos and the utility of this approach in characterizing embryonic vasculature and microbubble behavior.

  19. Adaptive thresholding image series from fluorescence confocal scanning laser microscope using orientation intensity profiles

    NASA Astrophysics Data System (ADS)

    Feng, Judy J.; Ip, Horace H.; Cheng, Shuk H.

    2004-05-01

    Many grey-level thresholding methods based on histograms or other statistical information about the image of interest, such as maximum entropy, have been proposed in the past. However, most methods based on statistical analysis take little account of the morphological characteristics of the objects of interest, which can provide important cues for finding the optimum threshold, especially for structures with distinctive morphologies such as vasculature and neural networks in medical imaging. In this paper, we propose a novel method for thresholding the fluorescent vasculature image series recorded from a Confocal Scanning Laser Microscope. After extracting the basic orientation of the vessels inside each sub-region partitioned from the images, we analyze the intensity profiles perpendicular to the vessel orientation to obtain a reasonable initial threshold for each region. The threshold values of neighboring regions, both in the x-y plane and along the optical axis, are then referenced to obtain the final threshold for each region, which makes the whole image stack more continuous. The resulting images suppress both noise and extraneous tissue adhering to vessels, while improving vessel connectivity and edge definition. The value of the method for thresholding fluorescence images of biological objects is demonstrated by comparing 3D vascular reconstruction results.

  20. Computational assessment of mammography accreditation phantom images and correlation with human observer analysis

    NASA Astrophysics Data System (ADS)

    Barufaldi, Bruno; Lau, Kristen C.; Schiabel, Homero; Maidment, D. A.

    2015-03-01

    Routine performance of basic test procedures and dose measurements is essential for assuring high quality of mammograms. International guidelines recommend that breast care providers ascertain that mammography systems produce consistently high quality images, using as low a radiation dose as is reasonably achievable. The main purpose of this research is to develop a framework to monitor radiation dose and image quality in a mixed breast screening and diagnostic imaging environment using an automated tracking system. This study presents a module of this framework, consisting of a computerized system to measure the image quality of the American College of Radiology mammography accreditation phantom. The methods developed combine correlation approaches, matched filters, and data mining techniques. These methods have been used to analyze radiological images of the accreditation phantom. The classification of structures of interest is based upon reports produced by four trained readers. As previously reported, human observers demonstrate great variation in their analysis due to the subjectivity of human visual inspection. The software tool was trained with three sets of 60 phantom images in order to generate decision trees using the software WEKA (Waikato Environment for Knowledge Analysis). When tested with 240 images during the classification step, the tool correctly classified 88%, 99%, and 98% of fibers, speck groups, and masses, respectively. The variation between the computer classification and human reading was comparable to the variation between human readers. This computerized system not only automates the quality control procedure in mammography, but also decreases the subjectivity in the expert evaluation of the phantom images.
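    A brief sketch of the correlation/matched-filter scoring such a system builds on, using scikit-image; `insert_score` is a hypothetical illustration, and the WEKA decision-tree stage of the study is not reproduced.

        # Score a known phantom insert (fiber, speck group, or mass)
        # by peak normalized cross-correlation with a template.
        import numpy as np
        from skimage.feature import match_template

        def insert_score(phantom_image, template):
            """Higher peak correlation suggests the structure is visible."""
            ncc = match_template(phantom_image, template, pad_input=True)
            return float(ncc.max())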

  1. Analysis of radar images of the active volcanic zone at Krafla, Iceland: The effects of look azimuth biasing

    NASA Technical Reports Server (NTRS)

    Garvin, J. B.; Williams, R. S., Jr.

    1989-01-01

    The geomorphic expression of Mid-Ocean-Ridge (MOR) volcanism in a subaerial setting occurs uniquely on Earth in Iceland, and the most recent MOR eruptive activity has been concentrated in the Northeastern Volcanic Zone in an area known as Krafla. Within the Krafla region are many of the key morphologic elements of MOR-related basaltic volcanism, as well as volcanic explosion craters, subglacial lava shields, tectonic fissure swarms known as gjar, and basaltic-andesite flows with well developed ogives (pressure ridges). The objective was to quantify the degree to which the basic volcanic and structural features can be mapped from directional SAR imagery as a function of look azimuth. To accomplish this, the current expression of volcanic and tectonic constructs was independently mapped within the Krafla region on the E-, W-, and N-looking SAR images, as well as from SPOT Panchromatic imagery acquired in 1987. Initial observations of the E, W, and N images indicate that fresh a'a lava surfaces are extremely radar bright (rough at 3 cm to meter scales) independent of look direction; this suggests that these flows do not have strong flow-direction-related structures at meter and cm scales, which is consistent with typical Icelandic a'a lava surfaces in general. The basic impression from a preliminary analysis of the effects of look azimuth biasing on interpretation of the geology of an active MOR volcanic zone is that up to 30 percent of the diagnostic features can be missed at any given look direction, but that having two orthogonal look-direction images is probably sufficient to prevent gross misinterpretation.

  2. Experimental research of digital holographic microscopic measuring

    NASA Astrophysics Data System (ADS)

    Zhu, Xueliang; Chen, Feifei; Li, Jicheng

    2013-06-01

    Digital holography is a new imaging technique developed on the basis of optical holography, digital processing, and computer techniques. It uses a CCD instead of conventional silver halide film to record the hologram, and then reproduces the 3D contour of the object by computer simulation. Compared with traditional optical holography, the whole process offers simple measurement, lower production cost, faster imaging speed, and the advantage of non-contact real-time measurement. At present, it can be used in fields such as morphology detection of tiny objects, micro-deformation analysis, and biological cell shape measurement, and it is an active research topic worldwide. This paper introduces the basic principles and relevant theories of optical holography and digital holography, and investigates the basic factors that influence the reconstructed images in the recording and reconstruction processes of digital holographic microscopy. In order to obtain a clear digital hologram, we analyzed the optical system structure and discussed the recording distance of the hologram. On the basis of these theoretical studies, we established a measurement setup, analyzed the experimental conditions, and adjusted the system accordingly. To achieve precise three-dimensional measurement of tiny objects, we measured a MEMS micro-device as an example, obtained the reproduced three-dimensional contour, and thus realized three-dimensional profile measurement of a tiny object. Based on the experimental results, we analyzed the factors affecting the measurement, including the zero-order term and the twin images, the choice of object and reference light, the recording and reconstruction distances, and the characteristics of the reconstruction light, and analyzed the measurement errors. The results show that the device has a certain reliability.

  3. Study of imaging fiber bundle coupling technique in IR system

    NASA Astrophysics Data System (ADS)

    Chen, Guoqing; Yang, Jianfeng; Yan, Xingtao; Song, Yansong

    2017-02-01

    Due to its advantageous imaging characteristics and bending flexibility, an imaging fiber bundle can be used for line-plane-switching push-broom infrared imaging. Precisely coupling the fiber bundle into the optical system is the key to obtaining an excellent transmitted image. After introducing the basic composition and structural characteristics of infrared systems coupled with an imaging fiber bundle, this article analyzes the coupling efficiency and the design requirements of the relay lenses, from the perspective of numerical aperture selection and cold-stop matching of the cooled infrared detector. For an actual application, a relay coupling system was designed with a magnification of -0.6, an object field height of 4 mm, and an objective numerical aperture of 0.15, giving excellent image quality and sufficient coupling efficiency. Finally, a push-broom imaging experiment was carried out. The results show that the design meets the requirements for light efficiency and image quality, and provides a useful reference for the design of infrared fiber optical systems.

  4. A New Improved Threshold Segmentation Method for Scanning Images of Reservoir Rocks Considering Pore Fractal Characteristics

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Li, Xizhe; Yang, Zhengming; Lin, Lijun; Xiong, Shengchun; Wang, Zhiyuan; Wang, Xiangyang; Xiao, Qianhua

    Based on the basic principle of the porosity method in image segmentation, and considering the relationship between the porosity of rocks and the fractal characteristics of pore structures, a new improved image segmentation method is proposed, which uses the calculated porosity of the core images as a constraint to obtain the best threshold. Comparative analysis shows that the porosity method is theoretically the best way to segment images, but its actual segmentation deviates from the real situation. Because of core heterogeneity and isolated pores, the porosity method, which takes the experimental porosity of the whole core as the criterion, cannot achieve the desired segmentation. In contrast, the new improved method overcomes these shortcomings and produces a more reasonable binary segmentation of the core grayscale images, segmenting each image based on its own calculated porosity. Moreover, segmenting images based on calculated rather than measured porosity also greatly saves manpower and material resources, especially for tight rocks.
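    The porosity constraint itself fits in a few lines: choose the threshold whose binary pore fraction best matches the porosity calculated for that image. A minimal sketch follows, assuming a grayscale array in which pores are dark; the paper's fractal-based porosity calculation is not shown.

        import numpy as np

        def porosity_threshold(gray, target_porosity):
            """Pick the gray level whose pore fraction best matches the target."""
            candidates = np.unique(gray)                      # gray levels present
            fracs = np.array([(gray < t).mean() for t in candidates])
            return candidates[np.argmin(np.abs(fracs - target_porosity))]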

  5. Gloss discrimination and eye movements

    NASA Astrophysics Data System (ADS)

    Phillips, Jonathan B.; Ferwerda, James A.; Nunziata, Ann

    2010-02-01

    Human observers are able to make fine discriminations of surface gloss. What cues are they using to perform this task? In previous studies, we identified two reflection-related cues: the contrast of the reflected image (c, contrast gloss) and the sharpness of the reflected image (d, distinctness-of-image gloss); however, these were measured for objects rendered in standard dynamic range (SDR) images with compressed highlights. In ongoing work, we are studying the effects of image dynamic range on perceived gloss, comparing high dynamic range (HDR) images with accurate reflections and SDR images with compressed reflections. In this paper, we first present the basic findings of this gloss discrimination study, then present an analysis of eye movement recordings that show where observers were looking during the gloss discrimination task. The results indicate that: 1) image dynamic range has a significant influence on perceived gloss, with surfaces presented in HDR images being seen as glossier and more discriminable than their SDR counterparts; 2) observers look at both light source highlights and environmental interreflections when judging gloss; and 3) both of these results are modulated by surface geometry and scene illumination.

  6. Polar ring galaxies in the Galaxy Zoo

    NASA Astrophysics Data System (ADS)

    Finkelman, Ido; Funes, José G.; Brosch, Noah

    2012-05-01

    We report observations of 16 candidate polar-ring galaxies (PRGs) identified by the Galaxy Zoo project in the Sloan Digital Sky Survey (SDSS) database. Deep images of five galaxies are available in the SDSS Stripe82 database, while to reach similar depth we observed the remaining galaxies with the 1.8-m Vatican Advanced Technology Telescope. We derive integrated magnitudes and u-r colours for the host and ring components and show continuum-subtracted Hα+[N II] images for seven objects. We present a basic morphological and environmental analysis of the galaxies and discuss their properties in comparison with other types of early-type galaxies. Follow-up photometric and spectroscopic observations will allow a kinematic confirmation of the nature of these systems and a more detailed analysis of their stellar populations.

  7. Development of Time-Distance Helioseismology Data Analysis Pipeline for SDO/HMI

    NASA Technical Reports Server (NTRS)

    DuVall, T. L., Jr.; Zhao, J.; Couvidat, S.; Parchevsky, K. V.; Beck, J.; Kosovichev, A. G.; Scherrer, P. H.

    2008-01-01

    The Helioseismic and Magnetic Imager of SDO will provide uninterrupted 4k x 4k-pixel Doppler-shift images of the Sun with approximately 40 sec cadence. These data will have a unique potential for advancing local helioseismic diagnostics of the Sun's interior structure and dynamics. They will help to understand the basic mechanisms of solar activity and develop predictive capabilities for NASA's Living with a Star program. Because of the tremendous amount of data, the HMI team is developing a data analysis pipeline, which will provide maps of subsurface flows and sound-speed distributions inferred from the Doppler data by the time-distance technique. We discuss the development plan, methods, and algorithms, and present the status of the pipeline, testing results, and examples of the data products.

  8. An automated form of video image analysis applied to classification of movement disorders.

    PubMed

    Chang, R; Guan, L; Burne, J A

    Video image analysis is able to provide quantitative data on postural and movement abnormalities and thus has an important application in neurological diagnosis and management. The conventional techniques require patients to be videotaped while wearing markers in a highly structured laboratory environment. This restricts the utility of video in routine clinical practice. We have begun development of intelligent software which aims to provide a more flexible system able to quantify human posture and movement directly from whole-body images, without markers and in an unstructured environment. The steps involved are to extract complete human profiles from video frames, to fit skeletal frameworks to the profiles, and to derive joint angles and swing distances. By this means a given posture is reduced to a set of basic parameters that can provide input to a neural network classifier. To test the system's performance, we videotaped patients with dopa-responsive Parkinsonism and age-matched normals during several gait cycles, to yield 61 patient and 49 normal postures. These postures were reduced to their basic parameters and fed to the neural network classifier in various combinations. The optimal parameter sets (consisting of both swing distances and joint angles) yielded successful classification of normals and patients with an accuracy above 90%. This result demonstrated the feasibility of the approach. The technique has the potential to guide clinicians on the relative sensitivity of specific postural/gait features in diagnosis. Future studies will aim to improve the robustness of the system in providing accurate parameter estimates from subjects wearing a range of clothing, and to further improve discrimination by incorporating more stages of the gait cycle into the analysis.

  9. Support vector machine as a binary classifier for automated object detection in remotely sensed data

    NASA Astrophysics Data System (ADS)

    Wardaya, P. D.

    2014-02-01

    In the present paper, the author proposes the application of the Support Vector Machine (SVM) to the analysis of satellite imagery. One of the advantages of the SVM is that, with limited training data, it may generate comparable or even better results than other methods. The SVM algorithm is used for automated object detection and characterization. Specifically, the SVM is applied in its basic form as a binary classifier distinguishing two classes: object and background. The algorithm aims at effectively detecting an object from its background with minimal training data. A synthetic image containing noise is used for algorithm testing. Furthermore, the method is applied to remote sensing image analysis tasks such as identification of island vegetation, water bodies, and oil spills from satellite imagery. The results indicate that the SVM provides fast and accurate analysis with acceptable results.
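    A compact sketch of the object/background formulation with scikit-learn; the RBF kernel and the pixel-wise band features are assumptions standing in for whatever features the study used.

        # Train an SVM on a few labelled pixels, then classify every pixel
        # of a multi-band image as object (1) or background (0).
        import numpy as np
        from sklearn.svm import SVC

        def classify_pixels(image_bands, train_pixels, train_labels):
            """image_bands: H x W x B; train_pixels: N x B; train_labels: N."""
            svm = SVC(kernel="rbf").fit(train_pixels, train_labels)
            h, w, b = image_bands.shape
            return svm.predict(image_bands.reshape(-1, b)).reshape(h, w)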

  10. Analysis of geologic terrain models for determination of optimum SAR sensor configuration and optimum information extraction for exploration of global non-renewable resources. Pilot study: Arkansas Remote Sensing Laboratory, part 1, part 2, and part 3

    NASA Technical Reports Server (NTRS)

    Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.; Stiles, J. A.; Frost, F. S.; Shanmugam, K. S.; Smith, S. A.; Narayanan, V.; Holtzman, J. C. (Principal Investigator)

    1982-01-01

    Computer-generated radar simulations and mathematical geologic terrain models were used to establish the optimum radar sensor operating parameters for geologic research. An initial set of mathematical geologic terrain models was created for three basic landforms and families of simulated radar images were prepared from these models for numerous interacting sensor, platform, and terrain variables. The tradeoffs between the various sensor parameters and the quantity and quality of the extractable geologic data were investigated as well as the development of automated techniques of digital SAR image analysis. Initial work on a texture analysis of SEASAT SAR imagery is reported. Computer-generated radar simulations are shown for combinations of two geologic models and three SAR angles of incidence.

  11. Predicting Cortical Dark/Bright Asymmetries from Natural Image Statistics and Early Visual Transforms

    PubMed Central

    Cooper, Emily A.; Norcia, Anthony M.

    2015-01-01

    The nervous system has evolved in an environment with structure and predictability. One of the ubiquitous principles of sensory systems is the creation of circuits that capitalize on this predictability. Previous work has identified predictable non-uniformities in the distributions of basic visual features in natural images that are relevant to the encoding tasks of the visual system. Here, we report that the well-established statistical distributions of visual features -- such as visual contrast, spatial scale, and depth -- differ between bright and dark image components. Following this analysis, we go on to trace how these differences in natural images translate into different patterns of cortical input that arise from the separate bright (ON) and dark (OFF) pathways originating in the retina. We use models of these early visual pathways to transform natural images into statistical patterns of cortical input. The models include the receptive fields and non-linear response properties of the magnocellular (M) and parvocellular (P) pathways, with their ON and OFF pathway divisions. The results indicate that there are regularities in visual cortical input beyond those that have previously been appreciated from the direct analysis of natural images. In particular, several dark/bright asymmetries provide a potential account for recently discovered asymmetries in how the brain processes visual features, such as violations of classic energy-type models. On the basis of our analysis, we expect that the dark/bright dichotomy in natural images plays a key role in the generation of both cortical and perceptual asymmetries. PMID:26020624

  12. Cardiac multidetector computed tomography: basic physics of image acquisition and clinical applications.

    PubMed

    Bardo, Dianna M E; Brown, Paul

    2008-08-01

    Cardiac MDCT is here to stay. And it is more than just imaging coronary arteries. Understanding the differences between CT scanners, and the benefits of one over another, will help you optimize the capabilities of the scanner, but requires a basic understanding of MDCT imaging physics. This review provides the key information needed to understand the differences among types of MDCT scanners (from 64 to 320 detectors, flat panels, single- and dual-source configurations, step-and-shoot prospective and retrospective gating) and how each factor influences radiation dose, spatial and temporal resolution, and image noise.

  13. Theoretical foundations of spatially-variant mathematical morphology Part II: gray-level images.

    PubMed

    Bouaynaya, Nidhal; Schonfeld, Dan

    2008-05-01

    In this paper, we develop a spatially-variant (SV) mathematical morphology theory for gray-level signals and images in the Euclidean space. The proposed theory preserves the geometrical concept of the structuring function, which provides the foundation of classical morphology and is essential in signal and image processing applications. We define the basic SV gray-level morphological operators (i.e., SV gray-level erosion, dilation, opening, and closing) and investigate their properties. We demonstrate the ubiquity of SV gray-level morphological systems by deriving a kernel representation for a large class of systems, called V-systems, in terms of the basic SV gray-level morphological operators. A V-system is defined to be a gray-level operator, which is invariant under gray-level (vertical) translations. Particular attention is focused on the class of SV flat gray-level operators. The kernel representation for increasing V-systems is a generalization of Maragos' kernel representation for increasing and translation-invariant function-processing systems. A representation of V-systems in terms of their kernel elements is established for increasing and upper-semi-continuous V-systems. This representation unifies a large class of spatially-variant linear and non-linear systems under the same mathematical framework. Finally, simulation results show the potential power of the general theory of gray-level spatially-variant mathematical morphology in several image analysis and computer vision applications.
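    To make the spatially-variant idea concrete, here is a toy flat SV erosion in which the structuring window varies per pixel; this illustrates the general concept only, not the paper's kernel-representation theory, and the radius map is hypothetical.

        # Flat spatially-variant gray-level erosion: the window half-width
        # comes from a per-pixel radius map. A constant map recovers the
        # classical translation-invariant erosion; SV dilation is the dual
        # (a per-pixel maximum).
        import numpy as np

        def sv_flat_erosion(f, radius_map):
            h, w = f.shape
            out = np.empty_like(f)
            for i in range(h):
                for j in range(w):
                    r = int(radius_map[i, j])
                    window = f[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
                    out[i, j] = window.min()   # infimum over the local structuring set
            return out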

  14. The analysis of optical-electro collimated light tube measurement system

    NASA Astrophysics Data System (ADS)

    Li, Zhenhui; Jiang, Tao; Cao, Guohua; Wang, Yanfei

    2005-12-01

    A new type of collimated light tube (CLT) is presented in this paper. The analysis and structure of the CLT are described in detail. The reticle and discrimination board are replaced by an optical-electro graphics generator, a DLP (Digital Light Processor). The DLP produces various computer-controlled graphics, and its illuminated surface lies at the focus of the CLT. Rays of light pass through the CLT and the product under test, and the image of the target is received by a variable-focus objective CCD camera. The image can be processed by computer, and basic optical parameters such as optical aberration and image slope can then be obtained. At the same time, a motorized translation stage carries the DLP to simulate a finite object distance, and a grating ruler records the displacement of the DLP. The key technique is optical-electro auto-focus: the best imaging quality is obtained by moving a 6-D motorized positioning stage. This device solves several principal problems, such as target generation, the structure of the receiving system, and optical matching.

  15. A new hyperchaotic map and its application for image encryption

    NASA Astrophysics Data System (ADS)

    Natiq, Hayder; Al-Saidi, N. M. G.; Said, M. R. M.; Kilicman, Adem

    2018-01-01

    Based on the one-dimensional Sine map and the two-dimensional Hénon map, a new two-dimensional Sine-Hénon alteration model (2D-SHAM) is hereby proposed. Basic dynamic characteristics of 2D-SHAM are studied through the following aspects: equilibria, Jacobian eigenvalues, trajectory, bifurcation diagram, Lyapunov exponents, and a sensitivity dependence test. The complexity of 2D-SHAM is investigated using the Sample Entropy algorithm. Simulation results show that 2D-SHAM is overall hyperchaotic, with high complexity and high sensitivity to its initial values and control parameters. To investigate its performance in terms of security, a new 2D-SHAM-based image encryption algorithm (SHAM-IEA) is also proposed. In this algorithm, the essential requirements of confusion and diffusion are accomplished, and the stochastic 2D-SHAM is used to enhance the security of the encrypted image. The stochastic 2D-SHAM generates random values, hence SHAM-IEA can produce different encrypted images even with the same secret key. Experimental results and security analysis show that SHAM-IEA has a strong capability to withstand statistical analysis, differential attack, and chosen-plaintext and chosen-ciphertext attacks.
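    The exact 2D-SHAM equations are given in the paper and are not reproduced here; as a hedged illustration, the sketch below iterates its two classical ingredients, the 1-D Sine map and the 2-D Hénon map.

        import numpy as np

        def sine_map(x, r=0.99):
            """Classic 1-D Sine map; r is a control parameter."""
            return r * np.sin(np.pi * x)

        def henon_map(x, y, a=1.4, b=0.3):
            """Classic 2-D Henon map with its standard chaotic parameters."""
            return 1.0 - a * x * x + y, b * x

        def henon_trajectory(n=10000, x=0.1, y=0.1):
            pts = np.empty((n, 2))
            for i in range(n):
                x, y = henon_map(x, y)
                pts[i] = x, y
            return pts   # suitable for trajectory or bifurcation plots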

  16. ImageX: new and improved image explorer for astronomical images and beyond

    NASA Astrophysics Data System (ADS)

    Hayashi, Soichi; Gopu, Arvind; Kotulla, Ralf; Young, Michael D.

    2016-08-01

    The One Degree Imager - Portal, Pipeline, and Archive (ODI-PPA) has included the Image Explorer interactive image visualization tool since it went operational. Portal users were able to quickly open up several ODI images within any HTML5 capable web browser, adjust the scaling, apply color maps, and perform other basic image visualization steps typically done on a desktop client like DS9. However, the original design of the Image Explorer required lossless PNG tiles to be generated and stored for all raw and reduced ODI images, thereby taking up tens of TB of spinning disk space even though only a small fraction of those images were being accessed by portal users at any given time. It also caused significant overhead on the portal web application and the Apache webserver used by ODI-PPA, and we found it hard to merge in improvements made to a similar deployment in another project's portal. To address these concerns, we re-architected Image Explorer from scratch and came up with ImageX, a set of microservices that are part of the IU Trident project software suite, with rapid interactive visualization capabilities useful for ODI data and beyond. We generate a full-resolution JPEG image for each raw and reduced ODI FITS image before producing a JPG tileset that can be rendered using the ImageX frontend code at various locations within a web portal (for example, on tabular image listings, or in views allowing quick perusal of a set of thumbnails or other image sifting activities). The new design has decreased spinning-disk requirements; uses AngularJS for the client-side Model/View code (instead of depending on the backend PHP Model/View/Controller code previously used); uses OpenSeaDragon to render the tile images; and uses nginx and a lightweight NodeJS application to serve tile images, thereby decreasing the Time To First Byte latency by a few orders of magnitude. We plan to extend ImageX to non-FITS images, including electron microscopy and radiology scan images, and to extend its feature set to include basic functions like image overlay and colormaps. Users needing more advanced visualization and analysis capabilities could use a desktop tool like DS9+IRAF on another IU Trident project called StarDock, without having to download gigabytes of FITS image data.

  17. TU-G-303-03: Machine Learning to Improve Human Learning From Longitudinal Image Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veeraraghavan, H.

    ‘Radiomics’ refers to studies that extract a large amount of quantitative information from medical imaging studies as a basis for characterizing a specific aspect of patient health. Radiomics models can be built to address a wide range of outcome predictions, clinical decisions, basic cancer biology, etc. For example, radiomics models can be built to predict the aggressiveness of an imaged cancer, cancer gene expression characteristics (radiogenomics), radiation therapy treatment response, etc. Technically, radiomics brings together quantitative imaging, computer vision/image processing, and machine learning. In this symposium, speakers will discuss approaches to radiomics investigations, including: longitudinal radiomics, radiomics combined with other biomarkers (‘pan-omics’), radiomics for various imaging modalities (CT, MRI, and PET), and the use of registered multi-modality imaging datasets as a basis for radiomics. There are many challenges to the eventual use of radiomics-derived methods in clinical practice, including: standardization and robustness of selected metrics, accruing the data required, building and validating the resulting models, registering longitudinal data that often involve significant patient changes, reliable automated cancer segmentation tools, etc. Despite the hurdles, results achieved so far indicate the tremendous potential of this general approach to quantifying and using data from medical images. Specific applications of radiomics to be presented in this symposium will include: the longitudinal analysis of patients with low-grade gliomas; automatic detection and assessment of patients with metastatic bone lesions; image-based monitoring of patients with growing lymph nodes; predicting radiotherapy outcomes using multi-modality radiomics; and studies relating radiomics with genomics in lung cancer and glioblastoma. Learning Objectives: Understanding the basic image features that are often used in radiomic models. Understanding requirements for reliable radiomic models, including robustness of metrics, adequate predictive accuracy, and generalizability. Understanding the methodology behind radiomic-genomic (’radiogenomics’) correlations. Research supported by NIH (US), CIHR (Canada), and NSERC (Canada)

  18. Trends in radiology and experimental research.

    PubMed

    Sardanelli, Francesco

    2017-01-01

    European Radiology Experimental, the new journal launched by the European Society of Radiology, is placed in the context of three general and seven radiology-specific trends. After describing the impact of population aging, personalized/precision medicine, and information technology development, the article considers the following trends: the tension between subspecialties and the unity of the discipline; attention to patient safety; the challenge of reproducibility for quantitative imaging; standardized and structured reporting; the search for higher levels of evidence in radiology (from diagnostic performance to patient outcome); the increasing relevance of interventional radiology; and continuous technological evolution. The new journal will publish not only studies on phantoms, cells, or animal models but also those describing development steps of imaging biomarkers or those exploring secondary end-points of large clinical trials. Moreover, consideration will be given to studies regarding: computer modelling and computer aided detection and diagnosis; contrast materials, tracers, and theranostics; advanced image analysis; optical, molecular, hybrid and fusion imaging; radiomics and radiogenomics; three-dimensional printing, information technology, image reconstruction and post-processing, big data analysis, teleradiology, clinical decision support systems; radiobiology; radioprotection; and physics in radiology. The journal aims to establish a forum for basic science, computer and information technology, radiology, and other medical subspecialties.

  19. NEFI: Network Extraction From Images

    PubMed Central

    Dirnberger, M.; Kehl, T.; Neumann, A.

    2015-01-01

    Networks are amongst the central building blocks of many systems. Given a graph of a network, methods from graph theory enable a precise investigation of its properties. Software for the analysis of graphs is widely available and has been applied to study various types of networks. In some applications, graph acquisition is relatively simple. However, for many networks data collection relies on images where graph extraction requires domain-specific solutions. Here we introduce NEFI, a tool that extracts graphs from images of networks originating in various domains. Regarding previous work on graph extraction, theoretical results are fully accessible only to an expert audience and ready-to-use implementations for non-experts are rarely available or insufficiently documented. NEFI provides a novel platform allowing practitioners to easily extract graphs from images by combining basic tools from image processing, computer vision and graph theory. Thus, NEFI constitutes an alternative to tedious manual graph extraction and special purpose tools. We anticipate NEFI to enable time-efficient collection of large datasets. The analysis of these novel datasets may open up the possibility to gain new insights into the structure and function of various networks. NEFI is open source and available at http://nefi.mpi-inf.mpg.de. PMID:26521675
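    A generic, minimal version of the image-to-graph pipeline that tools like NEFI automate (binarize, skeletonize, connect adjacent skeleton pixels); this sketch is not NEFI's algorithm, and a real extraction would further contract degree-2 pixel chains into weighted edges.

        import numpy as np
        import networkx as nx
        from skimage.filters import threshold_otsu
        from skimage.morphology import skeletonize

        def image_to_graph(gray):
            """Pixel-adjacency graph of the skeleton of a thresholded image."""
            skel = skeletonize(gray > threshold_otsu(gray))
            ys, xs = np.nonzero(skel)
            pixels = set(zip(ys.tolist(), xs.tolist()))
            G = nx.Graph()
            for (y, x) in pixels:          # 8-connect neighbouring skeleton pixels
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if (dy, dx) != (0, 0) and (y + dy, x + dx) in pixels:
                            G.add_edge((y, x), (y + dy, x + dx))
            return G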

  20. Image based performance analysis of thermal imagers

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras, in order to enhance the display presentation of the captured scene or specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming/expensive task, e.g. requiring the execution of a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability is such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around the IR-scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be, e.g., the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection and on how to assemble an imaging suite (a set of image sequences) for the analysis of thermal imaging systems containing such black boxes in the image-forming path are discussed.

  1. An Analysis of Full Scale Measurements on M/V Stewart J. Cort during the 1979 and 1980 Trial Programs. Parts I and II.

    DTIC Science & Technology

    1982-02-01

    [OCR fragments from the scanned report] Table of contents: APPENDIX D: BASIC PROCESSING; APPENDIX E: SIMULATION OF DATA. ...equipment previously developed, and an on-board data processing system. These full scale ship trials were the first in history with the objective of directly

  2. An investigation and conceptual design of a holographic starfield and landmark tracker

    NASA Technical Reports Server (NTRS)

    Welch, J. D.

    1973-01-01

    The analysis, experiments, and design effort of this study have supported the feasibility of the basic holographic tracker concept. Image intensifiers and photoplastic recording materials were examined, along with a Polaroid rapid-process silver halide material. A two-reference-beam coherent optical matched filter technique was used for multiplexing spatial frequency filters for starfields. A 1 watt HeNe laser and an electro-optical readout are also considered.

  3. What Is an Image?

    ERIC Educational Resources Information Center

    Gerber, Andrew J.; Peterson, Bradley S.

    2008-01-01

    The article aids the understanding and interpretation of an image by setting out what constitutes an image. A feature common to all images is a basic physical structure that can be described with a common set of terms.

  4. A maximally stable extremal region based scene text localization method

    NASA Astrophysics Data System (ADS)

    Xiao, Chengqiu; Ji, Lixin; Gao, Chao; Li, Shaomei

    2015-07-01

    Text localization in natural scene images is an important prerequisite for many content-based image analysis tasks. This paper proposes a novel text localization algorithm. Firstly, a fast pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSER) as basic character candidates. Secondly, these candidates are filtered by using the properties of the fitted ellipse and the distribution properties of characters to exclude most non-characters. Finally, a new extremal region projection merging algorithm is designed to group character candidates into words. Experimental results show that the proposed method has an advantage in speed and achieves higher precision and recall rates than the latest published algorithms.
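
    The paper's pruning rules are not given in this abstract; the following is a hedged sketch of the first two stages only (MSER extraction and a fitted-ellipse filter), with OpenCV as an assumed toolchain and the elongation threshold as a placeholder.

    ```python
    import cv2

    gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)

    candidates = []
    for pts in regions:
        if len(pts) < 5:              # cv2.fitEllipse needs at least 5 points
            continue
        (cx, cy), (w, h), angle = cv2.fitEllipse(pts)
        aspect = max(w, h) / max(min(w, h), 1e-6)
        if aspect < 8:                # illustrative elongation threshold
            candidates.append(pts)
    print(len(candidates), "character candidates kept")
    ```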

  5. Computer Modeling of Basic Physico-Chemical Processes for DSEC Composites of the System LaB6-MeB2 (Me = Ti, Zr, Hf) at Macro-, Meso- and Microstructure Scales

    DTIC Science & Technology

    2010-07-15

    [Scanned-document text largely unrecoverable. Legible fragments indicate that operations of mathematical morphology are applied to extract information from images, following an approach developed earlier [52]; tabulated lattice data for VB2 and CrB2 follow, and VB2 is noted to have the minimum value of the property in question compared with other composite materials based on LaB6 and diborides of transition metals [3].]

  6. Analysis of random point images with the use of symbolic computation codes and generalized Catalan numbers

    NASA Astrophysics Data System (ADS)

    Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.

    2016-11-01

    Original codes and combinatorial-geometrical computational schemes are presented, which are developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study and associated with the new generalization of Catalan numbers are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage of solving the basic research problem and, on the other hand, an independent self-consistent problem.
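
    The generalization studied in the paper is not defined in this abstract; purely as a reference point, here is the classical recurrence it generalizes, C_0 = 1 and C_{n+1} = sum_{i=0}^{n} C_i * C_{n-i}, in a short sketch.

    ```python
    def catalan(n: int) -> int:
        """Classical Catalan numbers via the convolution recurrence."""
        c = [1] + [0] * n
        for m in range(1, n + 1):
            c[m] = sum(c[i] * c[m - 1 - i] for i in range(m))
        return c[n]

    print([catalan(n) for n in range(8)])  # [1, 1, 2, 5, 14, 42, 132, 429]
    ```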

  7. SU-F-T-463: Light-Field Based Dynalog Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atwal, P; Ramaseshan, R

    2016-06-15

    Purpose: To independently verify leaf positions in so-called dynalog files for a Varian iX linac with a Millennium 120 MLC. This verification provides a measure of confidence that the files can be used directly as part of a more extensive intensity modulated radiation therapy / volumetric modulated arc therapy QA program. Methods: Initial testing used white paper placed at the collimator plane and a standard hand-held digital camera to image the light and shadow of a static MLC field through the paper. Known markings on the paper allow for image calibration. Noise reduction was attempted with removal of ‘inherent noise’ from an open-field light image through the paper, but the method was found to be inconsequential. This is likely because the environment could not be controlled to the precision required for the sort of reproducible characterization of the quantum noise needed in order to meaningfully characterize and account for it. A multi-scale iterative edge detection algorithm was used for localizing the leaf ends. These were compared with the planned locations from the treatment console. Results: With a very basic setup, the image of the central bank A leaves 15–45, which are arguably the most important for beam modulation, differed from the planned location by [0.38±0.28] mm. Similarly, bank B leaves 15–45 differed by [0.42±0.28] mm. Conclusion: It should be possible to determine leaf position accurately with not much more than a modern hand-held camera and some software. This means we can have a periodic and independent verification of the dynalog file information. This is indicated by the precision already achieved using a basic setup and analysis methodology. Currently, work is being done to reduce imaging and setup errors, which will bring the leaf position error down further, and allow meaningful analysis over the full range of leaves.

  8. Mina Shaughnessy in the 1990s: Some Changing Answers in Basic Writing.

    ERIC Educational Resources Information Center

    McAlexander, Patricia J.

    Although Mina Shaughnessy remains influential in the basic writing field, her answers to the vital questions of who basic writers are and why they underachieve as writers are changing. Whether she intended to or not, Shaughnessy's book "Errors and Expectations" (published in 1977) was a major force in forming an image of basic writers as…

  9. Pharmacokinetics Application in Biophysics Experiments

    NASA Astrophysics Data System (ADS)

    Millet, Philippe; Lemoigne, Yves

    Among the available computerised tomography devices, Positron Emission Tomography (PET) has the advantage of being sensitive to picomolar concentrations of radiotracers inside living matter. Devices adapted to small animal imaging are now commercially available and allow us to study the function rather than the structure of living tissues by in vivo analysis. PET methodology, from the physics of electron-positron annihilation to the biophysics involved in tracers, is treated by other authors in this book. The basics of coincidence detection, image reconstruction, spatial resolution and sensitivity are discussed in the paper by R. Ott. The use of compartment analysis combined with pharmacokinetics is described here to illustrate an application to neuroimaging and to show how parametric imaging can bring insight into the in vivo bio-distribution of a radioactive tracer with small animal PET scanners. After reporting on the use of an intracerebral β+ radiosensitive probe (βP), we describe a small animal PET experiment used to measure the density of 5-HT1A receptors in rat brain.

  10. Accurate Analysis of Target Characteristic in Bistatic SAR Images: A Dihedral Corner Reflectors Case.

    PubMed

    Ao, Dongyang; Li, Yuanhao; Hu, Cheng; Tian, Weiming

    2017-12-22

    The dihedral corner reflectors are the basic geometric structure of many targets and are the main contributors to radar cross section (RCS) in synthetic aperture radar (SAR) images. In stealth technologies, the elaborate design of dihedral corners with different opening angles is a useful approach to reduce the high RCS generated by multiple reflections. As bistatic synthetic aperture sensors have flexible geometric configurations and are sensitive to dihedral corners with different opening angles, they are especially suited to stealth target detection. In this paper, the scattering characteristic of dihedral corner reflectors is accurately analyzed in bistatic synthetic aperture images. The variation of RCS with the changing opening angle is formulated, and a method to design a proper bistatic radar for maximizing the detection capability is provided. Both the theoretical analysis and the experiments show that bistatic SAR can detect the dihedral corners under a certain bistatic angle, which is related to the geometry of the target structures.

  12. A comparative interregional analysis of selected data from LANDSAT-1 and EREP for the inventory and monitoring of natural ecosystems

    NASA Technical Reports Server (NTRS)

    Poulton, C. E.

    1975-01-01

    Comparative statistics were presented on the capability of LANDSAT-1 and three of the Skylab remote sensing systems (S-190A, S-190B, S-192) for the recognition and inventory of analogous natural vegetation and landscape features important in resource allocation and management. Two analogous regions presenting vegetational zonation from salt desert to alpine conditions above the timberline were observed, emphasizing the visual interpretation mode in the investigation. A hierarchical legend system was used as the basic classification of all land surface features. Comparative tests were run on image identifiability with the different sensor systems, and mapping and interpretation tests were made in both monocular and stereo interpretation with all systems except the S-192. Significant advantage was found in the use of stereo from space when image analysis is by visual or visual-machine-aided interactive systems. Some cost factors in mapping from space are identified. The various image types are compared and an operational system is postulated.

  13. Melanoma Diagnosis

    NASA Astrophysics Data System (ADS)

    Horsch, Alexander

    The chapter deals with the diagnosis of the malignant melanoma of the skin. This aggressive type of cancer, whose incidence is growing steadily in white populations, can be cured completely if it is detected at an early stage. Imaging techniques, in particular dermoscopy, have contributed significantly to the improvement of diagnostic accuracy in clinical settings, achieving sensitivities for melanoma experts beyond 95% at specificities of 90% and more. Automatic computer analysis of dermoscopy images has, in preliminary studies, achieved classification rates comparable to those of experts. However, the diagnosis of melanoma requires a lot of training and experience, and at present an average of around 30 lesions is excised per histology-proven melanoma, a number that is clearly too high. Further improvements in computer dermoscopy systems and their competent use in clinical settings certainly have the potential to support efforts to improve this situation. In the chapter, the medical basics, the current state of melanoma diagnosis, image analysis methods, commercial dermoscopy systems, the evaluation of systems and methods, and future directions are presented.

  14. [Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].

    PubMed

    Chen, Hao; Yu, Haizhong

    2014-04-01

    Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long calculation time still limits its application in the field of image interpolation. To overcome this problem, a method of two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. According to the single instruction multiple threads (SIMT) execution model of CUDA, various optimizing measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in overlapping regions while adopting a data space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the operative efficiency of image GRBF interpolation on the CUDA platform was obviously improved compared with CPU calculation. The present method is of considerable reference value in the application field of image interpolation.
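
    As a CPU reference for the operation being accelerated, a minimal dense Gaussian RBF interpolation sketch in NumPy; the CUDA kernels, coalesced access, shared-memory tiling and the natural-suture blending described above are omitted, and the shape parameter eps is an assumption.

    ```python
    import numpy as np

    def grbf_interpolate(points, values, queries, eps=1.0):
        """Interpolate scattered samples with phi(r) = exp(-(eps * r)**2)."""
        d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        A = np.exp(-(eps ** 2) * d2)       # interpolation matrix
        w = np.linalg.solve(A, values)     # RBF weights
        q2 = ((queries[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        return np.exp(-(eps ** 2) * q2) @ w

    pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
    vals = np.array([0., 1., 1., 2.])
    print(grbf_interpolate(pts, vals, np.array([[0.5, 0.5]])))  # ~[1.0]
    ```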

  15. Basic science curriculums in nuclear cardiology and cardiovascular imaging: evolving and emerging concepts.

    PubMed

    Van Decker, William A; Villafana, Theodore

    2008-01-01

    The teaching of basic science with regard to physics, instrumentation, and radiation safety has been part of nuclear cardiology training since its inception. Although there is a clear educational and quality rationale for such instruction, regulations associated with the Nuclear Regulatory Commission's Subpart J of the old 10 CFR section 35 (Title 10, Code of Federal Regulations, Part 35) from the 1960s mandated such prescriptive instruction. Cardiovascular fellowship training programs now have a new opportunity to rethink their basic science imaging curriculums in the era of the "revised 10 CFR section 35" and the growing implementation of multimodality imaging training and expertise. This review focuses on the history and the why, what, and how of such a curriculum arising in one city and suggests examples of future implementation in other locations.

  16. How do scientists respond to anomalies? Different strategies used in basic and applied science.

    PubMed

    Trickett, Susan Bell; Trafton, J Gregory; Schunn, Christian D

    2009-10-01

    We conducted two in vivo studies to explore how scientists respond to anomalies. Based on prior research, we identify three candidate strategies: mental simulation, mental manipulation of an image, and comparison between images. In Study 1, we compared experts in basic and applied domains (physics and meteorology). We found that the basic scientists used mental simulation to resolve an anomaly, whereas applied science practitioners mentally manipulated the image. In Study 2, we compared novice and expert meteorologists. We found that unlike experts, novices used comparison to address anomalies. We discuss the nature of expertise in the two kinds of science, the relationship between the type of science and the task performed, and the relationship of the strategies investigated to scientific creativity. Copyright © 2009 Cognitive Science Society, Inc.

  17. A method for the automated processing and analysis of images of ULVWF-platelet strings.

    PubMed

    Reeve, Scott R; Abbitt, Katherine B; Cruise, Thomas D; Hose, D Rodney; Lawford, Patricia V

    2013-01-01

    We present a method for identifying and analysing unusually large von Willebrand factor (ULVWF)-platelet strings in noisy low-quality images. The method requires relatively inexpensive, non-specialist equipment and allows multiple users to be employed in the capture of images. Images are subsequently enhanced and analysed, using custom-written software to perform the processing tasks. The formation and properties of ULVWF-platelet strings released in in vitro flow-based assays have recently become a popular research area. Endothelial cells are incorporated into a flow chamber, chemically stimulated to induce ULVWF release and perfused with isolated platelets which are able to bind to the ULVWF to form strings. The numbers and lengths of the strings released are related to characteristics of the flow. ULVWF-platelet strings are routinely identified by eye from video recordings captured during experiments and analysed manually using basic NIH image software to determine the number of strings and their lengths. This is a laborious, time-consuming task and a single experiment, often consisting of data from four to six dishes of endothelial cells, can take 2 or more days to analyse. The method described here allows analysis of the strings to provide data such as the number and length of strings, number of platelets per string and the distance between each platelet to be found. The software reduces analysis time, and more importantly removes user subjectivity, producing highly reproducible results with an error of less than 2% when compared with detailed manual analysis.
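
    The custom software is not reproduced here; a minimal sketch of the counting step it automates, assuming an already enhanced frame in which strings are bright elongated objects (the file name and threshold are placeholders).

    ```python
    import numpy as np
    from scipy import ndimage

    frame = np.load("frame.npy")                       # hypothetical enhanced image
    binary = frame > frame.mean() + 2 * frame.std()    # crude intensity threshold

    labels, n = ndimage.label(binary)                  # connected components
    print(n, "candidate strings")
    for slc in ndimage.find_objects(labels):
        h = slc[0].stop - slc[0].start
        w = slc[1].stop - slc[1].start
        print("approx. string length (px):", max(h, w))
    ```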

  18. In vivo optical imaging and dynamic contrast methods for biomedical research

    PubMed Central

    Hillman, Elizabeth M. C.; Amoozegar, Cyrus B.; Wang, Tracy; McCaslin, Addason F. H.; Bouchard, Matthew B.; Mansfield, James; Levenson, Richard M.

    2011-01-01

    This paper provides an overview of optical imaging methods commonly applied to basic research applications. Optical imaging is well suited for non-clinical use, since it can exploit an enormous range of endogenous and exogenous forms of contrast that provide information about the structure and function of tissues ranging from single cells to entire organisms. An additional benefit of optical imaging that is often under-exploited is its ability to acquire data at high speeds; a feature that enables it to not only observe static distributions of contrast, but to probe and characterize dynamic events related to physiology, disease progression and acute interventions in real time. The benefits and limitations of in vivo optical imaging for biomedical research applications are described, followed by a perspective on future applications of optical imaging for basic research centred on a recently introduced real-time imaging technique called dynamic contrast-enhanced small animal molecular imaging (DyCE). PMID:22006910

  19. [Functional magnetic resonance imaging in psychiatry and psychotherapy].

    PubMed

    Derntl, B; Habel, U; Schneider, F

    2010-01-01

    Owing to technical improvements, functional magnetic resonance imaging (fMRI) has become the most popular and versatile imaging method in psychiatric research. The scope of this manuscript is to briefly introduce the basics of MR physics, the blood oxygenation level-dependent (BOLD) contrast as well as the principles of MR study design and functional data analysis. The presentation of exemplary studies on emotion recognition and empathy in schizophrenia patients will highlight the importance of MR methods in psychiatry. Finally, we will demonstrate insights into new developments that will further boost MR techniques in clinical research and will help to gain more insight into the dysfunctional neural networks underlying cognitive and emotional deficits in psychiatric patients. Moreover, some techniques such as neurofeedback seem promising for the evaluation of therapy effects on a behavioral and neural level.

  20. TU-G-303-04: Radiomics and the Coming Pan-Omics Revolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El Naqa, I.

    ‘Radiomics’ refers to studies that extract a large amount of quantitative information from medical imaging studies as a basis for characterizing a specific aspect of patient health. Radiomics models can be built to address a wide range of outcome predictions, clinical decisions, basic cancer biology, etc. For example, radiomics models can be built to predict the aggressiveness of an imaged cancer, cancer gene expression characteristics (radiogenomics), radiation therapy treatment response, etc. Technically, radiomics brings together quantitative imaging, computer vision/image processing, and machine learning. In this symposium, speakers will discuss approaches to radiomics investigations, including: longitudinal radiomics, radiomics combined with other biomarkers (‘pan-omics’), radiomics for various imaging modalities (CT, MRI, and PET), and the use of registered multi-modality imaging datasets as a basis for radiomics. There are many challenges to the eventual use of radiomics-derived methods in clinical practice, including: standardization and robustness of selected metrics, accruing the data required, building and validating the resulting models, registering longitudinal data that often involve significant patient changes, reliable automated cancer segmentation tools, etc. Despite the hurdles, results achieved so far indicate the tremendous potential of this general approach to quantifying and using data from medical images. Specific applications of radiomics to be presented in this symposium will include: the longitudinal analysis of patients with low-grade gliomas; automatic detection and assessment of patients with metastatic bone lesions; image-based monitoring of patients with growing lymph nodes; predicting radiotherapy outcomes using multi-modality radiomics; and studies relating radiomics with genomics in lung cancer and glioblastoma. Learning Objectives: Understanding the basic image features that are often used in radiomic models. Understanding requirements for reliable radiomic models, including robustness of metrics, adequate predictive accuracy, and generalizability. Understanding the methodology behind radiomic-genomic (‘radiogenomics’) correlations. Research supported by NIH (US), CIHR (Canada), and NSERC (Canada)

  1. DQE analysis for CCD imaging arrays

    NASA Astrophysics Data System (ADS)

    Shaw, Rodney

    1997-05-01

    By consideration of the statistical interaction between exposure quanta and the mechanisms of image detection, the signal-to-noise limitations of a variety of image acquisition technologies are now well understood. However, in spite of the growing fields of application for CCD imaging arrays and the obvious advantages of their multi-level mode of quantum detection, only limited and largely empirical approaches have been made to quantify these advantages on an absolute basis. Here an extension is made of a previous model for noise-free sequential photon-counting to the more general case involving both count-noise and arbitrary separation functions between count levels. This allows a basic model to be developed for the DQE associated with devices which approximate to the CCD mode of operation, and conclusions to be made concerning the roles of the separation-function and count-noise in defining the departure from the ideal photon counter.
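
    A numerical sketch of the underlying definition, DQE = SNR_out^2 / SNR_in^2, for the simplest case of Poisson input quanta degraded by additive Gaussian count noise; the paper's separation-function model is more general and is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    q_mean = 100.0                                   # mean photons per pixel
    photons = rng.poisson(q_mean, 100_000)           # input quanta
    snr_in2 = photons.mean() ** 2 / photons.var()

    read_noise = 5.0                                 # assumed rms count noise
    signal = photons + rng.normal(0, read_noise, photons.shape)
    snr_out2 = signal.mean() ** 2 / signal.var()

    print("DQE ~", snr_out2 / snr_in2)               # < 1; approaches 1 as noise -> 0
    ```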

  2. CADx Mammography

    NASA Astrophysics Data System (ADS)

    Costaridou, Lena

    Although a wide variety of Computer-Aided Diagnosis (CADx) schemes have been proposed across breast imaging modalities, and especially in mammography, research is still ongoing to meet the high performance CADx requirements. In this chapter, methodological contributions to CADx in mammography and adjunct breast imaging modalities are reviewed, as they play a major role in early detection, diagnosis and clinical management of breast cancer. At first, basic terms and definitions are provided. Then, emphasis is given to lesion content derivation, both anatomical and functional, considering only quantitative image features of micro-calcification clusters and masses across modalities. Additionally, two CADx application examples are provided. The first example investigates the effect of segmentation accuracy on micro-calcification cluster morphology derivation in X-ray mammography. The second one demonstrates the efficiency of texture analysis in quantification of enhancement kinetics, related to vascular heterogeneity, for mass classification in dynamic contrast-enhanced magnetic resonance imaging.

  3. A new blood vessel extraction technique using edge enhancement and object classification.

    PubMed

    Badsha, Shahriar; Reza, Ahmed Wasif; Tan, Kim Geok; Dimyati, Kaharudin

    2013-12-01

    Diabetic retinopathy (DR) is increasing progressively, pushing the demand for automatic extraction of retinal blood vessels and classification of disease severity. Blood vessel extraction from the fundus image is a vital and challenging task. Therefore, this paper presents a new, computationally simple, and automatic method to extract the retinal blood vessels. The proposed method comprises several basic image processing techniques, namely edge enhancement by a standard template, noise removal, thresholding, morphological operations, and object classification. The proposed method has been tested on a set of retinal images collected from the DRIVE database, and a robust performance analysis was employed to evaluate the accuracy. The results obtained from this study reveal that the proposed method offers an average accuracy of about 97 %, sensitivity of 99 %, specificity of 86 %, and predictive value of 98 %, which is superior to various well-known techniques.
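
    A hedged sketch of the kind of pipeline listed above (enhancement, noise removal, thresholding, morphology); the kernel sizes are assumptions, the authors' exact "standard template" is not public in this abstract, and the object-classification stage is omitted.

    ```python
    import cv2
    import numpy as np

    green = cv2.imread("fundus.png")[:, :, 1]   # vessels contrast best in green
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    enhanced = cv2.morphologyEx(green, cv2.MORPH_BLACKHAT, kernel)  # dark vessels
    denoised = cv2.medianBlur(enhanced, 3)
    _, vessels = cv2.threshold(denoised, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    vessels = cv2.morphologyEx(vessels, cv2.MORPH_OPEN,
                               np.ones((2, 2), np.uint8))           # de-speckle
    cv2.imwrite("vessels.png", vessels)
    ```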

  4. Joint Estimation of Effective Brain Wave Activation Modes Using EEG/MEG Sensor Arrays and Multimodal MRI Volumes.

    PubMed

    Galinsky, Vitaly L; Martinez, Antigona; Paulus, Martin P; Frank, Lawrence R

    2018-04-13

    In this letter, we present a new method for integration of sensor-based multifrequency bands of electroencephalography and magnetoencephalography data sets into a voxel-based structural-temporal magnetic resonance imaging analysis by utilizing the general joint estimation using entropy regularization (JESTER) framework. This allows enhancement of the spatial-temporal localization of brain function and the ability to relate it to morphological features and structural connectivity. This method has broad implications for both basic neuroscience research and clinical neuroscience focused on identifying disease-relevant biomarkers by enhancing the spatial-temporal resolution of the estimates derived from current neuroimaging modalities, thereby providing a better picture of the normal human brain in basic neuroimaging experiments and variations associated with disease states.

  5. Tools for evaluating Veterinary Services: an external auditing model for the quality assurance process.

    PubMed

    Melo, E Correa

    2003-08-01

    The author describes the reasons why evaluation processes should be applied to the Veterinary Services of Member Countries, either for trade in animals and animal products and by-products between two countries, or for establishing essential measures to improve the Veterinary Service concerned. The author also describes the basic elements involved in conducting an evaluation process, including the instruments for doing so. These basic elements centre on the following: designing a model, or desirable image, against which a comparison can be made; establishing a list of processes to be analysed and defining the qualitative and quantitative mechanisms for this analysis; and establishing a multidisciplinary evaluation team and developing a process for standardising the evaluation criteria.

  6. Quantum dots versus organic fluorophores in fluorescent deep-tissue imaging--merits and demerits.

    PubMed

    Bakalova, Rumiana; Zhelev, Zhivko; Gadjeva, Veselina

    2008-12-01

    The use of fluorescence in deep-tissue imaging has been expanding rapidly over the last several years. The progress in fluorescent molecular probes and fluorescent imaging techniques gives an opportunity to detect single cells and even molecular targets in live organisms. Highly sensitive, high-speed fluorescent molecular sensors and detection devices allow the application of fluorescence in functional imaging. With the development of novel bright fluorophores based on nanotechnologies and 3D fluorescence scanners with high spatial and temporal resolution, fluorescent imaging has the potential to become an alternative to other non-invasive imaging techniques such as magnetic resonance imaging, positron emission tomography, X-ray, and computed tomography. Fluorescent imaging also has the potential to give a real map of human anatomy and physiology. The current review outlines the advantages of fluorescent nanoparticles over conventional organic dyes in deep-tissue imaging in vivo and defines the major requirements for the "perfect fluorophore". The analysis proceeds from the basic principles of fluorescence and the major characteristics of fluorophores, light-tissue interactions, and the major limitations of fluorescent deep-tissue imaging. The article is addressed to a broad readership - from specialists in this field to university students.

  7. The Relationship between Immediate Relevant Basic Science Knowledge and Clinical Knowledge: Physiology Knowledge and Transthoracic Echocardiography Image Interpretation

    ERIC Educational Resources Information Center

    Nielsen, Dorte Guldbrand; Gotzsche, Ole; Sonne, Ole; Eika, Berit

    2012-01-01

    Two major views on the relationship between basic science knowledge and clinical knowledge stand out: the two-world view, seeing basic science and clinical science as two separate knowledge bases, and the encapsulated knowledge view, stating that basic science knowledge plays an overt role by being encapsulated in the clinical knowledge. However, recent…

  8. Automatic SAR/optical cross-matching for GCP monograph generation

    NASA Astrophysics Data System (ADS)

    Nutricato, Raffaele; Morea, Alberto; Nitti, Davide Oscar; La Mantia, Claudio; Agrimano, Luigi; Samarelli, Sergio; Chiaradia, Maria Teresa

    2016-10-01

    Ground Control Points (GCP), automatically extracted from Synthetic Aperture Radar (SAR) images through 3D stereo analysis, can be effectively exploited for an automatic orthorectification of optical imagery if they can be robustly located in the basic optical images. The present study outlines a SAR/optical cross-matching procedure that allows a robust alignment of radar and optical images, and consequently derives automatically the corresponding sub-pixel position of the GCPs in the input optical image, expressed as fractional pixel/line image coordinates. The cross-matching is performed in two subsequent steps, in order to gradually achieve better precision. The first step is based on Mutual Information (MI) maximization between optical and SAR chips, while the second uses the Normalized Cross-Correlation as a similarity metric. This work outlines the designed algorithmic solution and discusses the results derived over the urban area of Pisa (Italy), where more than ten COSMO-SkyMed Enhanced Spotlight stereo images with different beams and passes are available. The experimental analysis involves different satellite images, in order to evaluate the performance of the algorithm w.r.t. the optical spatial resolution. An assessment of the performance of the algorithm has been carried out, and errors are computed by measuring the distance between the GCP pixel/line position in the optical image, automatically estimated by the tool, and the "true" position of the GCP, visually identified by an expert user in the optical images.
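
    A sketch of the first matching step under its usual formulation: score candidate offsets of an optical chip against a SAR chip by the mutual information of their joint gray-level histogram. The bin count and search radius are assumptions; the NCC refinement step is omitted.

    ```python
    import numpy as np

    def mutual_information(a, b, bins=32):
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px, py = pxy.sum(1), pxy.sum(0)
        nz = pxy > 0
        return float((pxy[nz] *
                      np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

    def best_offset(sar, opt, search=8):
        """Exhaustive search for the (dy, dx) shift of opt that maximizes MI."""
        h, w = sar.shape
        core = sar[search:h - search, search:w - search]
        scores = {(dy, dx): mutual_information(
                      core, opt[search + dy:h - search + dy,
                                search + dx:w - search + dx])
                  for dy in range(-search, search + 1)
                  for dx in range(-search, search + 1)}
        return max(scores, key=scores.get)
    ```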

  9. Visual analytics for semantic queries of TerraSAR-X image content

    NASA Astrophysics Data System (ADS)

    Espinoza-Molina, Daniela; Alonso, Kevin; Datcu, Mihai

    2015-10-01

    With the continuous image product acquisition of satellite missions, the size of the image archives is increasing considerably every day, as are the variety and complexity of their content, surpassing the end-user capacity to analyse and exploit them. Advances in the image retrieval field have contributed to the development of tools for interactive exploration and extraction of images from huge archives using different parameters like metadata, keywords, and basic image descriptors. Even though we count on more powerful tools for automated image retrieval and data analysis, we still face the problem of understanding and analyzing the results. Thus, a systematic computational analysis of these results is required in order to provide the end-user with a summary of the archive content in comprehensible terms. In this context, visual analytics combines automated analysis with interactive visualization techniques for effective understanding, reasoning and decision making on the basis of very large and complex datasets. Moreover, several research efforts are currently focused on associating the content of images with semantic definitions for describing the data in a format that is easily understood by the end-user. In this paper, we present our approach for computing visual analytics and semantically querying the TerraSAR-X archive. Our approach is mainly composed of four steps: 1) the generation of a data model that explains the information contained in a TerraSAR-X product, formed by primitive descriptors and metadata entries; 2) the storage of this model in a database system; 3) the semantic definition of the image content based on machine learning algorithms and relevance feedback; and 4) querying the image archive using semantic descriptors as query parameters and computing the statistical analysis of the query results. The experimental results show that, with the help of visual analytics and semantic definitions, we are able to explain the image content using semantic terms and the relations between them, answering questions such as "What is the percentage of urban area in a region?" or "What is the distribution of water bodies in a city?"

  10. Co-Registration Between Multisource Remote-Sensing Images

    NASA Astrophysics Data System (ADS)

    Wu, J.; Chang, C.; Tsai, H.-Y.; Liu, M.-C.

    2012-07-01

    Image registration is essential for geospatial information systems analysis, which usually involves integrating multitemporal and multispectral datasets from remote optical and radar sensors. An algorithm that deals with feature extraction, keypoint matching, outlier detection and image warping is experimented with in this study. The methods currently available in the literature rely on techniques such as the scale-invariant feature transform, between-edge cost minimization, normalized cross-correlation, least-squares image matching, random sample consensus, iterated data snooping and thin-plate splines. Their basics are highlighted and encoded into a computer program. The test images are excerpts from digital files created by the multispectral SPOT-5 and Formosat-2 sensors, and by the panchromatic IKONOS and QuickBird sensors. Suburban areas, housing rooftops, the countryside and hilly plantations are studied. The co-registered images are displayed with block subimages in a criss-cross pattern. Besides the imagery, the registration accuracy is expressed by the root mean square error. Toward the end, this paper also includes a few opinions on issues that are believed to hinder a correct correspondence between diverse images.
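
    A sketch of one feasible chain from the list above: keypoint matching followed by RANSAC outlier rejection and an RMSE report. ORB features are substituted for SIFT purely to keep the example self-contained in stock OpenCV; the file names are placeholders.

    ```python
    import cv2
    import numpy as np

    img1 = cv2.imread("spot5.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("formosat2.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # RANSAC rejection

    inl = mask.ravel() == 1
    proj = cv2.perspectiveTransform(src[inl], H)
    rmse = np.sqrt(((proj - dst[inl]) ** 2).sum(-1).mean())
    print("registration RMSE (px):", rmse)
    ```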

  11. Computer-based classification of bacteria species by analysis of their colonies Fresnel diffraction patterns

    NASA Astrophysics Data System (ADS)

    Suchwalko, Agnieszka; Buzalewicz, Igor; Podbielska, Halina

    2012-01-01

    In the presented paper an optical system with converging spherical wave illumination for the classification of bacteria species is proposed. It allows for compression of the observation space, observation of Fresnel patterns, diffraction pattern scaling and a low level of optical aberrations, properties not possessed by other optical configurations. The experimental results obtained have shown that colonies of specific bacteria species generate unique diffraction signatures. Analysis of the Fresnel diffraction patterns of bacteria colonies can be a fast and reliable method for the classification and recognition of bacteria species. To determine the unique features of the diffraction patterns of bacteria colonies, an image processing analysis was proposed. Classification can be performed by analyzing the spatial structure of the diffraction patterns, which can be characterized by a set of concentric rings whose characteristics depend on the bacteria species. In the paper, the influence of basic features and the ring partitioning number on the bacteria classification is analyzed. It is demonstrated that Fresnel patterns can be used for the classification of the following species: Salmonella enteritidis, Staphylococcus aureus, Proteus mirabilis and Citrobacter freundii. Image processing is performed with the free ImageJ software, for which a special macro with human interaction was written. LDA classification, the CV method, ANOVA and PCA visualizations, preceded by image data extraction, were conducted using the free software R.

  12. Medical Ultrasound Imaging.

    ERIC Educational Resources Information Center

    Hughes, Stephen

    2001-01-01

    Explains the basic principles of ultrasound using everyday physics. Topics include the generation of ultrasound, basic interactions with material, and the measurement of blood flow using the Doppler effect. (Author/MM)

  13. Image editing with Adobe Photoshop 6.0.

    PubMed

    Caruso, Ronald D; Postel, Gregory C

    2002-01-01

    The authors introduce Photoshop 6.0 for radiologists and demonstrate basic techniques of editing gray-scale cross-sectional images intended for publication and for incorporation into computerized presentations. For basic editing of gray-scale cross-sectional images, the Tools palette and the History/Actions palette pair should be displayed. The History palette may be used to undo a step or series of steps. The Actions palette is a menu of user-defined macros that save time by automating an action or series of actions. Converting an image to 8-bit gray scale is the first editing function. Cropping is the next action. Both decrease file size. Use of the smallest file size necessary for the purpose at hand is recommended. Final file size for gray-scale cross-sectional neuroradiologic images (8-bit, single-layer TIFF [tagged image file format] at 300 pixels per inch) intended for publication varies from about 700 Kbytes to 3 Mbytes. Final file size for incorporation into computerized presentations is about 10-100 Kbytes (8-bit, single-layer, gray-scale, high-quality JPEG [Joint Photographic Experts Group]), depending on source and intended use. Editing and annotating images before they are inserted into presentation software is highly recommended, both for convenience and flexibility. Radiologists should find that image editing can be carried out very rapidly once the basic steps are learned and automated. Copyright RSNA, 2002
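
    The workflow above is interactive; for comparison, a hedged sketch of the same two output targets (publication TIFF, presentation JPEG) scripted with Pillow. The file names, crop box and quality setting are placeholders.

    ```python
    from PIL import Image

    im = Image.open("ct_slice.png").convert("L")  # 8-bit grayscale conversion
    im = im.crop((50, 50, 562, 562))              # crop to the region of interest

    im.save("figure.tif", dpi=(300, 300))         # publication: lossless TIFF
    im.save("slide.jpg", quality=85)              # presentation: compact JPEG
    ```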

  14. WalkThrough Example Procedures for MAMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruggiero, Christy E.; Gaschen, Brian Keith; Bloch, Jeffrey Joseph

    This documentation is a growing set of walk-through examples of analyses using the MAMA V2.0 software. It does not cover all the features or possibilities of the MAMA software, but will address the use of many of the basic analysis tools to quantify particle size and shape in an image. This document will continue to evolve as additional procedures and examples are added. The starting assumption is that the MAMA software has been successfully installed.

  15. Focused Impedance Method (FIM) and Pigeon Hole Imaging (PHI) for localized measurements - a review

    NASA Astrophysics Data System (ADS)

    Siddique-e Rabbani, K.

    2010-04-01

    This paper summarises up-to-date developments in the Focused Impedance Method (FIM) initiated by us. It basically involves taking the sum of two orthogonal tetra-polar impedance measurements around a common central region, giving localized enhanced sensitivity. Although the basic idea requires 8 electrodes, versions with 6 and 4 electrodes were subsequently conceived and developed. The focusing effect has been verified in 2D and 3D phantoms and through numerical analysis. Dynamic stomach emptying and ventilation of localized lung regions have been studied successfully, suggesting further applications in the monitoring of gastric acid secretion, artificial respiration, bladder emptying, etc. Multi-frequency FIM may help identify some diseases and disorders including certain cancers. FIM, being much simpler and requiring fewer electrodes, appears to have the potential to replace EIT for applications involving large and shallow organs. An enhancement of 6-electrode FIM led to Pigeon Hole Imaging (PHI) in a square matrix through backprojection in two orthogonal directions, well suited to localising one or two well-separated objects.

  16. Colour thresholding and objective quantification in bioimaging

    NASA Technical Reports Server (NTRS)

    Fermin, C. D.; Gerber, M. A.; Torre-Bueno, J. R.

    1992-01-01

    Computer imaging is rapidly becoming an indispensable tool for the quantification of variables in research and medicine. Whilst its use in medicine has largely been limited to qualitative observations, imaging in applied basic sciences, medical research and biotechnology demands objective quantification of the variables in question. In black-and-white densitometry (256 levels of intensity) the separation of subtle differences between closely related hues from stains is sometimes very difficult. True-colour and real-time video microscopy analysis offer choices not previously available with monochrome systems. In this paper we demonstrate the usefulness of colour thresholding, which has so far proven indispensable for proper objective quantification of the products of histochemical reactions and/or subtle differences in tissue and cells. In addition, we provide interested but untrained readers with basic information that may assist decisions regarding the most suitable set-up for a project under consideration. Data from projects in progress at Tulane are shown to illustrate the advantage of colour thresholding over monochrome densitometry for the objective quantification of subtle colour differences between experimental and control samples.
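
    A minimal sketch of colour thresholding as contrasted with gray-level densitometry: select a hue band in HSV space and quantify the selected area. OpenCV is an assumed tool, and the hue range is a placeholder to be tuned per stain.

    ```python
    import cv2
    import numpy as np

    bgr = cv2.imread("section.png")                  # hypothetical stained section
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

    lower = np.array([0, 60, 40], dtype=np.uint8)    # e.g. a reddish-brown product
    upper = np.array([20, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)

    fraction = mask.mean() / 255.0                   # stained area fraction
    print(f"stained area fraction: {fraction:.3f}")
    ```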

  17. pH tunability and influence of alkali metal basicity on the plasmonic resonance of silver nanoparticles

    NASA Astrophysics Data System (ADS)

    Yadav, Vijay D.; Akhil Krishnan, R.; Borade, Lalit; Shirolikar, Seema; Jain, Ratnesh; Dandekar, Prajakta

    2017-07-01

    Localized surface plasmon resonance has been a unique and intriguing feature of silver nanoparticles (AgNPs) that has attracted immense attention. This has led to an array of applications for AgNPs in optics, sensors, plasmonic imaging, etc. Although numerous applications have been reported consistently, the importance of buffer and reaction parameters during the synthesis of AgNPs is still unclear. In the present study, we have demonstrated the influence of parameters like pH, temperature and buffer conditions (0.1 M citrate buffer) on the plasmonic resonance of AgNPs. We found that neutral and basic pH (from an alkali metal base) provide optimum interaction conditions for the nucleation of plasmon-resonant AgNPs. Interestingly, this was not observed with a non-alkali-metal base (ammonia). Also, when the nanoparticles synthesized from the alkali metal base were incorporated into different buffers, it was observed that the nanoparticles dissolved in the acidic buffer and had reduced plasmonic resonance intensity. This, however, was resolved in the basic buffer, which increased the plasmonic resonance intensity and confirmed that the nucleation of nanoparticles required basic conditions. The above inference is supported by characterization of the AgNPs using UV-Vis spectrophotometry, fluorimetry, infrared spectrometry and TEM analysis. The study concluded that the plasmonic resonance of AgNPs arises from the interaction of an alkali metal (Na) and a transition metal (Ag) salt under basic/neutral conditions, within a specific temperature range, in the presence of a capping agent (citric acid), providing pH tunability to the overall system.

  18. [MUC4 research progress in tumor molecular markers].

    PubMed

    Zhu, Hua; You, Jinhui

    2014-02-01

    Mucin antigen 4 (MUC4) is a molecular marker for some malignant tumors, used for early tumor diagnosis, prognosis and targeted therapy. It provides a new research direction in tumor diagnosis and treatment that will have wide application prospects. In recent years there have been a large number of research reports on basic and clinical studies of MUC4, but molecular imaging studies of MUC4 are seldom reported. In this paper the recent basic and clinical research on MUC4 is briefly reviewed, with the expectation of promoting the development of tumor molecular imaging.

  19. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    PubMed Central

    Cao, Jianfang; Chen, Lichao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance. PMID:25838818
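
    For illustration only, a toy fuzzy membership function of the kind fuzzy set theory provides; the paper derives its membership degrees from image features via Adaboost and a BP network, which is not reproduced here.

    ```python
    def triangular_membership(x, a, b, c):
        """Fuzzy degree in [0, 1], peaking at b and zero outside (a, c)."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # e.g. membership of a warm-colour score in a hypothetical emotion class
    print(triangular_membership(0.7, a=0.4, b=0.8, c=1.0))  # 0.75
    ```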

  20. Measurement of Separated Flow Structures Using a Multiple-Camera DPIV System. [conducted in the Langley Subsonic Basic Research Tunnel

    NASA Technical Reports Server (NTRS)

    Humphreys, William M., Jr.; Bartram, Scott M.

    2001-01-01

    A novel multiple-camera system for the recording of digital particle image velocimetry (DPIV) images acquired in a two-dimensional separating/reattaching flow is described. The measurements were performed in the NASA Langley Subsonic Basic Research Tunnel as part of an overall series of experiments involving the simultaneous acquisition of dynamic surface pressures and off-body velocities. The DPIV system utilized two frequency-doubled Nd:YAG lasers to generate two coplanar, orthogonally polarized light sheets directed upstream along the horizontal centerline of the test model. A recording system containing two pairs of matched high resolution, 8-bit cameras was used to separate and capture images of illuminated tracer particles embedded in the flow field. Background image subtraction was used to reduce undesirable flare light emanating from the surface of the model, and custom pixel alignment algorithms were employed to provide accurate registration among the various cameras. Spatial cross correlation analysis with median filter validation was used to determine the instantaneous velocity structure in the separating/reattaching flow region illuminated by the laser light sheets. In operation the DPIV system exhibited a good ability to resolve large-scale separated flow structures with acceptable accuracy over the extended field of view of the cameras. The recording system design provided enhanced performance versus traditional DPIV systems by allowing a variety of standard and non-standard cameras to be easily incorporated into the system.
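
    A sketch of the spatial cross-correlation step at the heart of such analysis: estimate the displacement of one interrogation window between exposures from the FFT correlation peak. Window extraction and the median-filter validation are omitted.

    ```python
    import numpy as np

    def window_displacement(win_a, win_b):
        """Return the (dy, dx) shift that maps win_a onto win_b."""
        fa = np.fft.fft2(win_a - win_a.mean())
        fb = np.fft.fft2(win_b - win_b.mean())
        corr = np.fft.fftshift(np.fft.ifft2(fa * np.conj(fb)).real)
        peak = np.unravel_index(corr.argmax(), corr.shape)
        center = np.array(corr.shape) // 2
        return center - np.array(peak)

    a = np.random.rand(32, 32)
    b = np.roll(a, (3, -2), axis=(0, 1))   # synthetic shift of 3 down, 2 left
    print(window_displacement(a, b))       # [ 3 -2 ]
    ```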

  1. Revealing representational content with pattern-information fMRI--an introductory guide.

    PubMed

    Mur, Marieke; Bandettini, Peter A; Kriegeskorte, Nikolaus

    2009-03-01

    Conventional statistical analysis methods for functional magnetic resonance imaging (fMRI) data are very successful at detecting brain regions that are activated as a whole during specific mental activities. The overall activation of a region is usually taken to indicate involvement of the region in the task. However, such activation analysis does not consider the multivoxel patterns of activity within a brain region. These patterns of activity, which are thought to reflect neuronal population codes, can be investigated by pattern-information analysis. In this framework, a region's multivariate pattern information is taken to indicate representational content. This tutorial introduction motivates pattern-information analysis, explains its underlying assumptions, introduces the most widespread methods in an intuitive way, and outlines the basic sequence of analysis steps.

  2. Image retrieval and processing system version 2.0 development work

    NASA Technical Reports Server (NTRS)

    Slavney, Susan H.; Guinness, Edward A.

    1991-01-01

    The Image Retrieval and Processing System (IRPS) is a software package developed at Washington University and used by the NASA Regional Planetary Image Facilities (RPIF's). The IRPS combines data base management and image processing components to allow the user to examine catalogs of image data, locate the data of interest, and perform radiometric and geometric calibration of the data in preparation for analysis. Version 1.0 of IRPS was completed in Aug. 1989 and was installed at several RPIF's. Other RPIF's use remote logins via NASA Science Internet to access IRPS at Washington University. Work was begun on designing and populating a catalog of Magellan image products that will be part of IRPS Version 2.0, planned for release by the end of calendar year 1991. With this catalog, a user will be able to search by orbit and by location for Magellan Basic Image Data Records (BIDR's), Mosaicked Image Data Records (MIDR's), and Altimetry-Radiometry Composite Data Records (ARCDR's). The catalog will include the Magellan CD-ROM volume, directory, and file name for each data product. The image processing component of IRPS is based on the Planetary Image Cartography Software (PICS) developed by the U.S. Geological Survey, Flagstaff, Arizona. To augment PICS capabilities, a set of image processing programs was developed that is compatible with PICS-format images. This software includes general-purpose functions that PICS does not have, analysis and utility programs for specific data sets, and programs from other sources that were modified to work with PICS images. Some of the software will be integrated into the Version 2.0 release of IRPS. A table is presented that lists the programs with a brief functional description of each.

  3. Image reconstruction: an overview for clinicians.

    PubMed

    Hansen, Michael S; Kellman, Peter

    2015-03-01

    Image reconstruction plays a critical role in the clinical use of magnetic resonance imaging (MRI). The MRI raw data is not acquired in image space and the role of the image reconstruction process is to transform the acquired raw data into images that can be interpreted clinically. This process involves multiple signal processing steps that each have an impact on the image quality. This review explains the basic terminology used for describing and quantifying image quality in terms of signal-to-noise ratio and point spread function. In this context, several commonly used image reconstruction components are discussed. The image reconstruction components covered include noise prewhitening for phased array data acquisition, interpolation needed to reconstruct square pixels, raw data filtering for reducing Gibbs ringing artifacts, Fourier transforms connecting the raw data with image space, and phased array coil combination. The treatment of phased array coils includes a general explanation of parallel imaging as a coil combination technique. The review is aimed at readers with no signal processing experience and should enable them to understand what role basic image reconstruction steps play in the formation of clinical images and how the resulting image quality is described. © 2014 Wiley Periodicals, Inc.
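
    A toy sketch of the central step described above for single-coil data: raw-data (k-space) apodization to tame Gibbs ringing, then an inverse Fourier transform into image space. Prewhitening and coil combination are omitted; the file name is a placeholder.

    ```python
    import numpy as np

    kspace = np.load("kspace.npy")      # hypothetical (ny, nx) complex raw data
    ny, nx = kspace.shape

    # Mild Hamming apodization suppresses ringing at the cost of resolution
    window = np.hamming(ny)[:, None] * np.hamming(nx)[None, :]
    filtered = kspace * window

    image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(filtered)))
    magnitude = np.abs(image)           # clinical display uses the magnitude
    ```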

  4. Semi-Automated Identification of Rocks in Images

    NASA Technical Reports Server (NTRS)

    Bornstein, Benjamin; Castano, Andres; Anderson, Robert

    2006-01-01

    Rock Identification Toolkit Suite is a computer program that assists users in identifying and characterizing rocks shown in images returned by the Mars Exploration Rover mission. Included in the program are components for the automated finding of rocks, interactive adjustment of the outlines of rocks, active contouring of rocks, and automated analysis of shapes in two dimensions. The program assists users in evaluating the surface properties of rocks and soil and reports basic properties of rocks. The program requires either the Mac OS X operating system running on a G4 (or more capable) processor or a Linux operating system running on a Pentium (or more capable) processor, plus at least 128MB of random-access memory.

  5. Remote Sensing Image Analysis Without Expert Knowledge - A Web-Based Classification Tool On Top of Taverna Workflow Management System

    NASA Astrophysics Data System (ADS)

    Selsam, Peter; Schwartze, Christian

    2016-10-01

    Providing software solutions via the internet has been known for quite some time and is now an increasing trend marketed as "software as a service". A lot of business units accept the new methods and streamlined IT strategies by offering web-based infrastructures for external software usage - but geospatial applications featuring very specialized services or functionalities on demand are still rare. Originally applied in desktop environments, the ILMSimage tool for remote sensing image analysis and classification was modified in its communication structures and enabled to run on a high-power server, benefiting from the Taverna workflow software. On top, a GIS-like, web-based user interface guides the user through the different steps in ILMSimage. ILMSimage combines object-oriented image segmentation with pattern recognition features. Basic image elements form a construction set for modeling large image objects with diverse and complex appearance. There is no need for the user to set up detailed object definitions. Training is done by delineating one or more typical examples (templates) of the desired object using a simple vector polygon. The template can be large and does not need to be homogeneous. The template is completely independent of the segmentation. The object definition is done completely by the software.

  6. Methods and potentials for using satellite image classification in school lessons

    NASA Astrophysics Data System (ADS)

    Voss, Kerstin; Goetzke, Roland; Hodam, Henryk

    2011-11-01

    The FIS project - FIS stands for Fernerkundung in Schulen (Remote Sensing in Schools) - aims at a better integration of the topic "satellite remote sensing" in school lessons. According to this, the overarching objective is to teach pupils basic knowledge and fields of application of remote sensing. Despite the growing significance of digital geomedia, the topic "remote sensing" is not broadly supported in schools. Often, the topic is reduced to a short reflection on satellite images and used only for additional illustration of issues relevant for the curriculum. Without addressing the issue of image data, this can hardly contribute to the improvement of the pupils' methodical competences. Because remote sensing covers more than simple, visual interpretation of satellite images, it is necessary to integrate remote sensing methods like preprocessing, classification and change detection. Dealing with these topics often fails because of confusing background information and the lack of easy-to-use software. Based on these insights, the FIS project created different simple analysis tools for remote sensing in school lessons, which enable teachers as well as pupils to be introduced to the topic in a structured way. This functionality as well as the fields of application of these analysis tools will be presented in detail with the help of three different classification tools for satellite image classification.

  7. Applications of HCMM satellite data to the study of urban heating patterns

    NASA Technical Reports Server (NTRS)

    Carlson, T. N. (Principal Investigator)

    1980-01-01

    A research summary is presented and is divided into two major areas, one developmental and the other basic science. In the first, three sub-categories are discussed: image processing techniques, especially the method whereby surface temperature images are converted to images of surface energy budget, moisture availability and thermal inertia; model development; and model verification. Basic science includes the use of a method to further the understanding of the urban heat island and anthropogenic modification of surface heating, evaporation over vegetated surfaces, and the effect of surface heat flux on plume spread.

  8. Bone histomorphometry using free and commonly available software.

    PubMed

    Egan, Kevin P; Brennan, Tracy A; Pignolo, Robert J

    2012-12-01

    Histomorphometric analysis is a widely used technique to assess changes in tissue structure and function. Commercially available programs that measure histomorphometric parameters can be cost-prohibitive. In this study, we compared an inexpensive method of histomorphometry to a current proprietary software program. Image J and Adobe Photoshop® were used to measure static and kinetic bone histomorphometric parameters. Photomicrographs of Goldner's trichrome-stained femurs were used to generate black-and-white image masks, representing bone and non-bone tissue, respectively, in Adobe Photoshop®. The masks were used to quantify histomorphometric parameters (bone volume, tissue volume, osteoid volume, mineralizing surface and interlabel width) in Image J. The resultant values obtained using Image J and the proprietary software were compared and the differences were found to be statistically non-significant. The wide-ranging use of histomorphometric analysis for assessing the basic morphology of tissue components makes it important to have affordable and accurate measurement options available for a diverse range of applications. Here we have developed and validated an approach to histomorphometry using commonly and freely available software that is comparable to a much more costly, commercially available software program. © 2012 Blackwell Publishing Limited.
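
    This mask-based measurement reduces to a few lines of code. Below is a minimal sketch, assuming an 8-bit mask in which white pixels represent bone and black pixels non-bone; the function name, the threshold of 127 and the pixel size are illustrative assumptions rather than the authors' implementation, and the whole field of view is treated as the tissue region for simplicity.

    ```python
    import numpy as np
    from PIL import Image

    def bone_area_fraction(mask_path, pixel_size_um=1.0):
        """Compute the bone area fraction (2D analogue of BV/TV) from a mask."""
        mask = np.asarray(Image.open(mask_path).convert("L"))
        bone = mask > 127                              # white pixels = bone
        bone_area = bone.sum() * pixel_size_um ** 2
        tissue_area = mask.size * pixel_size_um ** 2   # whole field as tissue proxy
        return bone_area / tissue_area
    ```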

  9. Bone histomorphometry using free and commonly available software

    PubMed Central

    Egan, Kevin P.; Brennan, Tracy A.; Pignolo, Robert J.

    2012-01-01

    Aims Histomorphometric analysis is a widely used technique to assess changes in tissue structure and function. Commercially-available programs that measure histomorphometric parameters can be cost-prohibitive. In this study, we compared an inexpensive method of histomorphometry to a current proprietary software program. Methods and results Image J and Adobe Photoshop® were used to measure static and kinetic bone histomorphometric parameters. Photomicrographs of Goldner’s Trichrome stained femurs were used to generate black and white image masks, representing bone and non-bone tissue, respectively, in Adobe Photoshop®. The masks were used to quantify histomorphometric parameters (bone volume, tissue volume, osteoid volume, mineralizing surface, and interlabel width) in Image J. The resultant values obtained using Image J and the proprietary software were compared, and the differences were found to be statistically non-significant. Conclusions The wide-ranging use of histomorphometric analysis for assessing the basic morphology of tissue components makes it important to have affordable and accurate measurement options that are available for a diverse range of applications. Here we have developed and validated an approach to histomorphometry using commonly and freely available software that is comparable to a much more costly, commercially-available software program. PMID:22882309

  10. The Scientific Image in Behavior Analysis.

    PubMed

    Keenan, Mickey

    2016-05-01

    Throughout the history of science, the scientific image has played a significant role in communication. With recent developments in computing technology, there has been an increase in the kinds of opportunities now available for scientists to communicate in more sophisticated ways. Within behavior analysis, though, we are only just beginning to appreciate the importance of going beyond the printing press to elucidate basic principles of behavior. The aim of this manuscript is to stimulate appreciation of both the role of the scientific image and the opportunities provided by a quick response code (QR code) for enhancing the functionality of the printed page. I discuss the limitations of imagery in behavior analysis ("Introduction"), and I show examples of what can be done with animations and multimedia for teaching philosophical issues that arise when teaching about private events ("Private Events 1 and 2"). Animations are also useful for bypassing ethical issues when showing examples of challenging behavior ("Challenging Behavior"). Each of these topics can be accessed only by scanning the QR code provided. This contingency has been arranged to help the reader embrace this new technology. In so doing, I hope to show its potential for going beyond the limitations of the printing press.

  11. THz-wave parametric source and its imaging applications

    NASA Astrophysics Data System (ADS)

    Kawase, Kodo

    2004-08-01

    Widely tunable coherent terahertz (THz) wave generation has been demonstrated based on parametric oscillation using an MgO-doped LiNbO3 crystal pumped by a Q-switched Nd:YAG laser. This method offers multiple advantages such as wide tunability, coherence, and system compactness. We have developed a novel basic technology for THz imaging, which allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Furthermore, we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.

  12. Quantitative image analysis of immunohistochemical stains using a CMYK color model

    PubMed Central

    Pham, Nhu-An; Morrison, Andrew; Schwock, Joerg; Aviel-Ronen, Sarit; Iakovlev, Vladimir; Tsao, Ming-Sound; Ho, James; Hedley, David W

    2007-01-01

    Background Computer image analysis techniques have decreased the effects of observer bias and increased the sensitivity and throughput of immunohistochemistry (IHC) as a tissue-based procedure for the evaluation of diseases. Methods We adapted a Cyan/Magenta/Yellow/Key (CMYK) model for automated computer image analysis to quantify IHC stains in hematoxylin-counterstained histological sections. Results The spectral characteristics of the chromogens AEC, DAB and NovaRed as well as the counterstain hematoxylin were first determined using CMYK, Red/Green/Blue (RGB), normalized RGB and Hue/Saturation/Lightness (HSL) color models. The contrast of chromogen intensities on a 0–255 scale (24-bit image file), as well as against the hematoxylin counterstain, was greatest using the Yellow channel of a CMYK color model, suggesting an improved sensitivity for IHC evaluation compared to other color models. An increase in activated STAT3 levels due to growth factor stimulation, quantified using the Yellow channel image analysis, was associated with an increase detected by Western blotting. Two clinical image data sets were used to compare the Yellow channel automated method with observer-dependent methods. First, quantification of the DAB-labeled carbonic anhydrase IX hypoxia marker in 414 sections obtained from 138 biopsies of cervical carcinoma showed strong association between Yellow channel and positive color selection results. Second, a linear relationship was also demonstrated between Yellow intensity and visual scoring for NovaRed-labeled epidermal growth factor receptor in 256 non-small cell lung cancer biopsies. Conclusion The Yellow channel image analysis method based on a CMYK color model is independent of observer biases for threshold and positive color selection, applicable to different chromogens, tolerant of hematoxylin, sensitive to small changes in IHC intensity and applicable to simple automation procedures. These characteristics are advantageous for both basic and clinical research in an unbiased, reproducible and high-throughput evaluation of IHC intensity. PMID:17326824
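
    The central idea - scoring stain intensity in the Yellow channel of a CMYK conversion - can be sketched briefly. The naive RGB-to-CMYK formula below is the standard device-independent one and is an assumption, since the exact transform used by the authors' software is not specified in the abstract.

    ```python
    import numpy as np

    def yellow_channel(rgb):
        """rgb: float array in [0, 1], shape (H, W, 3). Returns CMYK Yellow in [0, 1]."""
        k = 1.0 - rgb.max(axis=2)                 # Key (black) component
        denom = np.clip(1.0 - k, 1e-6, None)      # guard against division by zero
        y = (1.0 - rgb[..., 2] - k) / denom       # Yellow derives from the blue channel
        return np.clip(y, 0.0, 1.0)

    # Mean Yellow over a region of interest can serve as the stain intensity score:
    # score = yellow_channel(img)[roi_mask].mean()
    ```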

  13. A Methodology and Implementation for Annotating Digital Images for Context-appropriate Use in an Academic Health Care Environment

    PubMed Central

    Goede, Patricia A.; Lauman, Jason R.; Cochella, Christopher; Katzman, Gregory L.; Morton, David A.; Albertine, Kurt H.

    2004-01-01

    Use of digital medical images has become common over the last several years, coincident with the release of inexpensive, mega-pixel quality digital cameras and the transition to digital radiology operation by hospitals. One problem that clinicians, medical educators, and basic scientists encounter when handling images is the difficulty of using business and graphic arts commercial-off-the-shelf (COTS) software in multicontext authoring and interactive teaching environments. The authors investigated and developed software-supported methodologies to help clinicians, medical educators, and basic scientists become more efficient and effective in their digital imaging environments. The software that the authors developed provides the ability to annotate images based on a multispecialty methodology for annotation and visual knowledge representation. This annotation methodology is designed by consensus, with contributions from the authors and physicians, medical educators, and basic scientists in the Departments of Radiology, Neurobiology and Anatomy, Dermatology, and Ophthalmology at the University of Utah. The annotation methodology functions as a foundation for creating, using, reusing, and extending dynamic annotations in a context-appropriate, interactive digital environment. The annotation methodology supports the authoring process as well as output and presentation mechanisms. The annotation methodology is the foundation for a Windows implementation that allows annotated elements to be represented as structured eXtensible Markup Language and stored separate from the image(s). PMID:14527971

  14. Onboard spectral imager data processor

    NASA Astrophysics Data System (ADS)

    Otten, Leonard J.; Meigs, Andrew D.; Franklin, Abraham J.; Sears, Robert D.; Robison, Mark W.; Rafert, J. Bruce; Fronterhouse, Donald C.; Grotbeck, Ronald L.

    1999-10-01

    Previous papers have described the concept behind the MightySat II.1 program, the satellite's Fourier-transform imaging spectrometer's optical design, the design of the spectral imaging payload, and its initial qualification testing. This paper discusses the onboard data processing designed to reduce the amount of downloaded data by an order of magnitude and to demonstrate a smart spaceborne spectral imaging sensor. Two custom components, a spectral imager interface 6U VME card that moves data at over 30 MByte/sec, and four TI C-40 processors mounted on a second 6U VME and daughter card, are used to adapt the sensor to the spacecraft and provide the necessary high-speed processing. A system architecture that offers both onboard real-time image processing and high-speed post-collection analysis of the spectral data has been developed. In addition to the onboard processing of the raw data into a usable spectral data volume, one feature extraction technique has been incorporated. This algorithm operates on the basic interferometric data. The algorithm is integrated within the data compression process to search for uploadable feature descriptions.

  15. Combining fluorescence imaging with Hi-C to study 3D genome architecture of the same single cell.

    PubMed

    Lando, David; Basu, Srinjan; Stevens, Tim J; Riddell, Andy; Wohlfahrt, Kai J; Cao, Yang; Boucher, Wayne; Leeb, Martin; Atkinson, Liam P; Lee, Steven F; Hendrich, Brian; Klenerman, Dave; Laue, Ernest D

    2018-05-01

    Fluorescence imaging and chromosome conformation capture assays such as Hi-C are key tools for studying genome organization. However, traditionally, they have been carried out independently, making integration of the two types of data difficult to perform. By trapping individual cell nuclei inside a well of a 384-well glass-bottom plate with an agarose pad, we have established a protocol that allows both fluorescence imaging and Hi-C processing to be carried out on the same single cell. The protocol identifies 30,000-100,000 chromosome contacts per single haploid genome in parallel with fluorescence images. Contacts can be used to calculate intact genome structures to better than 100-kb resolution, which can then be directly compared with the images. Preparation of 20 single-cell Hi-C libraries using this protocol takes 5 d of bench work by researchers experienced in molecular biology techniques. Image acquisition and analysis require basic understanding of fluorescence microscopy, and some bioinformatics knowledge is required to run the sequence-processing tools described here.

  16. Efficiency analysis for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2016-10-01

    Modern remote sensing systems generally acquire multichannel images (dual- or multi-polarization, multi- and hyperspectral) in which noise, usually with different characteristics, is present in all components. If the noise is intensive, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches - component-wise or vectorial (3D) filtering. The second approach has proven more efficient when there is substantial correlation between multichannel image components, as often happens for multichannel remote sensing data of different origins. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images united into a 3D data array influences filtering efficiency, and whether the observed tendencies can be exploited in the processing of images with a rather large number of channels.
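
    As a minimal sketch of the vectorial approach, the following denoises a multichannel block with a 3D DCT and hard thresholding. The 2.7*sigma threshold is a common choice in the DCT denoising literature and an assumption here, not necessarily the authors' setting; practical implementations operate on small overlapping blocks, while a single block is shown for brevity.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct3_denoise(block, sigma):
        """block: (H, W, C) noisy array; sigma: noise std. Returns denoised block."""
        coeffs = dctn(block, norm="ortho")          # 3D DCT over rows, columns, channels
        coeffs[np.abs(coeffs) < 2.7 * sigma] = 0.0  # discard noise-dominated coefficients
        return idctn(coeffs, norm="ortho")
    ```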

  17. Image-Based Predictive Modeling of Heart Mechanics.

    PubMed

    Wang, V Y; Nielsen, P M F; Nash, M P

    2015-01-01

    Personalized biophysical modeling of the heart is a useful approach for noninvasively analyzing and predicting in vivo cardiac mechanics. Three main developments support this style of analysis: state-of-the-art cardiac imaging technologies, modern computational infrastructure, and advanced mathematical modeling techniques. In vivo measurements of cardiac structure and function can be integrated using sophisticated computational methods to investigate mechanisms of myocardial function and dysfunction, and can aid in clinical diagnosis and developing personalized treatment. In this article, we review the state-of-the-art in cardiac imaging modalities, model-based interpretation of 3D images of cardiac structure and function, and recent advances in modeling that allow personalized predictions of heart mechanics. We discuss how using such image-based modeling frameworks can increase the understanding of the fundamental biophysics behind cardiac mechanics, and assist with diagnosis, surgical guidance, and treatment planning. Addressing the challenges in this field will require a coordinated effort from both the clinical-imaging and modeling communities. We also discuss future directions that can be taken to bridge the gap between basic science and clinical translation.

  18. Magnetic resonance microscopy of prostate tissue: How basic science can inform clinical imaging development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bourne, Roger

    2013-03-15

    This commentary outlines how magnetic resonance imaging (MRI) microscopy studies of prostate tissue samples and whole organs have shed light on a number of clinical imaging mysteries and may enable more effective development of new clinical imaging methods.

  19. Colour flow and motion imaging.

    PubMed

    Evans, D H

    2010-01-01

    Colour flow imaging (CFI) is an ultrasound imaging technique whereby colour-coded maps of tissue velocity are superimposed on grey-scale pulse-echo images of tissue anatomy. The most widespread use of the method is to image the movement of blood through arteries and veins, but it may also be used to image the motion of solid tissue. The production of velocity information is technically more demanding than the production of the anatomical information, partly because the target of interest is often blood, which backscatters significantly less power than solid tissues, and partly because several transmit-receive cycles are necessary for each velocity estimate. This review first describes the various components of basic CFI systems necessary to generate the velocity information and to combine it with anatomical information. It then describes a number of variations on the basic autocorrelation technique, including cross-correlation-based techniques, power Doppler, Doppler tissue imaging, and three-dimensional (3D) Doppler imaging. Finally, a number of limitations of current techniques and some potential solutions are reviewed.
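
    The basic autocorrelation technique referred to above can be sketched compactly: the lag-one autocorrelation of the slow-time ensemble gives the mean Doppler phase shift, which maps to axial velocity (the Kasai estimator). This is a minimal illustration under assumed acquisition parameters (centre frequency f0, pulse repetition frequency prf, sound speed c), with no wall filtering or power thresholding.

    ```python
    import numpy as np

    def kasai_velocity(iq, f0, prf, c=1540.0):
        """iq: complex baseband ensemble, shape (N_pulses, ...). Returns velocity in m/s."""
        r1 = np.sum(np.conj(iq[:-1]) * iq[1:], axis=0)   # lag-one autocorrelation
        phase = np.angle(r1)                             # mean Doppler phase shift per pulse
        return c * prf * phase / (4.0 * np.pi * f0)      # axial velocity estimate
    ```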

  20. Effect of Metakaolin on Strength and Efflorescence Quantity of Cement-Based Composites

    PubMed Central

    Weng, Tsai-Lung; Lin, Wei-Ting; Cheng, An

    2013-01-01

    This study investigated the basic mechanical and microscopic properties of cement produced with metakaolin and quantified the production of residual white efflorescence. Cement mortar was produced at various replacement ratios of metakaolin (0, 5, 10, 15, 20, and 25% by weight of cement) and exposed to various environments. Compressive strength, efflorescence quantity (measured using MATLAB image analysis and the curettage method), scanning electron microscopy, and X-ray diffraction analysis were reported in this study. Specimens with metakaolin as a replacement for Portland cement present higher compressive strength and greater resistance to efflorescence; however, the addition of more than 20% metakaolin has a detrimental effect on strength and efflorescence. This may be explained by the microstructure and hydration products. The quantity of efflorescence determined using MATLAB image analysis is close to the result obtained using the curettage method. The results demonstrate that replacing Portland cement with metakaolin is most effective at a 15% replacement ratio by weight. PMID:23737719
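
    A minimal sketch of the image-based quantification step, assuming efflorescence appears as bright deposits against a darker mortar surface; Otsu thresholding is used as an illustrative stand-in, since the exact MATLAB procedure is not detailed in the abstract.

    ```python
    from skimage.filters import threshold_otsu

    def efflorescence_fraction(gray):
        """gray: 2D grayscale image of the specimen face. Returns fraction in [0, 1]."""
        t = threshold_otsu(gray)      # automatic global threshold
        return (gray > t).mean()      # share of bright (efflorescence) pixels
    ```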

  1. Neuroimaging in aphasia treatment research: Consensus and practical guidelines for data analysis

    PubMed Central

    Meinzer, Marcus; Beeson, Pélagie M.; Cappa, Stefano; Crinion, Jenny; Kiran, Swathi; Saur, Dorothee; Parrish, Todd; Crosson, Bruce; Thompson, Cynthia K.

    2012-01-01

    Functional magnetic resonance imaging is the most widely used imaging technique to study treatment-induced recovery in post-stroke aphasia. The longitudinal design of such studies adds to the challenges researchers face when studying patient populations with brain damage in cross-sectional settings. The present review focuses on issues specifically relevant to neuroimaging data analysis in aphasia treatment research identified in discussions among international researchers at the Neuroimaging in Aphasia Treatment Research Workshop held at Northwestern University (Evanston, Illinois, USA). In particular, we aim to provide the reader with a critical review of unique problems related to the pre-processing, statistical modeling and interpretation of such data sets. Despite the fact that data analysis procedures critically depend on specific design features of a given study, we aim to discuss and communicate a basic set of practical guidelines that should be applicable to a wide range of studies and useful as a reference for researchers pursuing this line of research. PMID:22387474

  2. Dual-Energy CT: New Horizon in Medical Imaging

    PubMed Central

    Goo, Jin Mo

    2017-01-01

    Dual-energy CT has remained underutilized over the past decade probably due to a cumbersome workflow issue and current technical limitations. Clinical radiologists should be made aware of the potential clinical benefits of dual-energy CT over single-energy CT. To accomplish this aim, the basic principle, current acquisition methods with advantages and disadvantages, and various material-specific imaging methods as clinical applications of dual-energy CT should be addressed in detail. Current dual-energy CT acquisition methods include dual tubes with or without beam filtration, rapid voltage switching, dual-layer detector, split filter technique, and sequential scanning. Dual-energy material-specific imaging methods include virtual monoenergetic or monochromatic imaging, effective atomic number map, virtual non-contrast or unenhanced imaging, virtual non-calcium imaging, iodine map, inhaled xenon map, uric acid imaging, automatic bone removal, and lung vessels analysis. In this review, we focus on dual-energy CT imaging including related issues of radiation exposure to patients, scanning and post-processing options, and potential clinical benefits mainly to improve the understanding of clinical radiologists and thus, expand the clinical use of dual-energy CT; in addition, we briefly describe the current technical limitations of dual-energy CT and the current developments of photon-counting detector. PMID:28670151

  3. Imaging of DNA and Protein by SFM and Combined SFM-TIRF Microscopy.

    PubMed

    Grosbart, Małgorzata; Ristić, Dejan; Sánchez, Humberto; Wyman, Claire

    2018-01-01

    Direct imaging is invaluable for understanding the mechanism of complex genome transactions where proteins work together to organize, transcribe, replicate and repair DNA. Scanning (or atomic) force microscopy is an ideal tool for this, providing 3D information on molecular structure at nm resolution from defined components. This is a convenient and practical addition to in vitro studies as readily obtainable amounts of purified proteins and DNA are required. The images reveal structural details on the size and location of DNA bound proteins as well as protein-induced arrangement of the DNA, which are directly correlated in the same complexes. In addition, even from static images, the different forms observed and their relative distributions can be used to deduce the variety and stability of different complexes that are necessarily involved in dynamic processes. Recently available instruments that combine fluorescence with topographic imaging allow the identification of specific molecular components in complex assemblies, which broadens the applications and increases the information obtained from direct imaging of molecular complexes. We describe here basic methods for preparing samples of proteins, DNA and complexes of the two for topographic imaging and quantitative analysis. We also describe special considerations for combined fluorescence and topographic imaging of molecular complexes.

  4. Sample preparation for SFM imaging of DNA, proteins, and DNA-protein complexes.

    PubMed

    Ristic, Dejan; Sanchez, Humberto; Wyman, Claire

    2011-01-01

    Direct imaging is invaluable for understanding the mechanism of complex genome transactions where proteins work together to organize, transcribe, replicate, and repair DNA. Scanning (or atomic) force microscopy is an ideal tool for this, providing 3D information on molecular structure at nanometer resolution from defined components. This is a convenient and practical addition to in vitro studies as readily obtainable amounts of purified proteins and DNA are required. The images reveal structural details on the size and location of DNA-bound proteins as well as protein-induced arrangement of the DNA, which are directly correlated in the same complexes. In addition, even from static images, the different forms observed and their relative distributions can be used to deduce the variety and stability of different complexes that are necessarily involved in dynamic processes. Recently available instruments that combine fluorescence with topographic imaging allow the identification of specific molecular components in complex assemblies, which broadens the applications and increases the information obtained from direct imaging of molecular complexes. We describe here basic methods for preparing samples of proteins, DNA, and complexes of the two for topographic imaging and quantitative analysis. We also describe special considerations for combined fluorescence and topographic imaging of molecular complexes.

  5. The Decoding Toolbox (TDT): a versatile software package for multivariate analyses of functional imaging data

    PubMed Central

    Hebart, Martin N.; Görgen, Kai; Haynes, John-Dylan

    2015-01-01

    The multivariate analysis of brain signals has recently sparked a great amount of interest, yet accessible and versatile tools to carry out decoding analyses are scarce. Here we introduce The Decoding Toolbox (TDT) which represents a user-friendly, powerful and flexible package for multivariate analysis of functional brain imaging data. TDT is written in Matlab and equipped with an interface to the widely used brain data analysis package SPM. The toolbox allows running fast whole-brain analyses, region-of-interest analyses and searchlight analyses, using machine learning classifiers, pattern correlation analysis, or representational similarity analysis. It offers automatic creation and visualization of diverse cross-validation schemes, feature scaling, nested parameter selection, a variety of feature selection methods, multiclass capabilities, and pattern reconstruction from classifier weights. While basic users can implement a generic analysis in one line of code, advanced users can extend the toolbox to their needs or exploit the structure to combine it with external high-performance classification toolboxes. The toolbox comes with an example data set which can be used to try out the various analysis methods. Taken together, TDT offers a promising option for researchers who want to employ multivariate analyses of brain activity patterns. PMID:25610393

  6. GUI for Coordinate Measurement of an Image for the Estimation of Geometric Distortion of an Opto-electronic Display System

    NASA Astrophysics Data System (ADS)

    Saini, Surender Singh; Sardana, Harish Kumar; Pattnaik, Shyam Sundar

    2017-06-01

    Conventional image editing software, in combination with other techniques, is not only difficult to apply to an image but also permits a user to perform only basic functions one at a time. However, image processing algorithms and photogrammetric systems have been developed in recent years for real-time pattern recognition applications. A graphical user interface (GUI) was developed which can perform multiple functions simultaneously for the analysis and estimation of geometric distortion in an image with reference to the corresponding distorted image. The GUI measures, records, and visualizes the performance metrics of the X/Y coordinates of one image over the other. The various keys and icons provided in the utility extract the coordinates of the distortion-free reference image and of the image with geometric distortion. The error between these corresponding points gives the measure of distortion and is also used to evaluate the correction parameters for image distortion. As the GUI minimizes human intervention in the process of geometric correction, its execution merely requires the use of the icons and keys provided in the utility; this technique gives swift and accurate results compared to other conventional methods for measuring the X/Y coordinates of an image.
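
    The distortion measure implied above - the residual between matched X/Y coordinates of the reference and distorted images - reduces to a few lines; the point arrays and their one-to-one pairing are assumed inputs.

    ```python
    import numpy as np

    def distortion_error(ref_pts, dist_pts):
        """ref_pts, dist_pts: (N, 2) arrays of matched (x, y) coordinates."""
        residuals = np.linalg.norm(dist_pts - ref_pts, axis=1)      # per-point offsets
        return residuals.mean(), np.sqrt((residuals ** 2).mean())   # mean and RMS error
    ```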

  7. High frequency ultrasound with color Doppler in dermatology*

    PubMed Central

    Barcaui, Elisa de Oliveira; Carvalho, Antonio Carlos Pires; Lopes, Flavia Paiva Proença Lobo; Piñeiro-Maceira, Juan; Barcaui, Carlos Baptista

    2016-01-01

    Ultrasonography is an imaging method that is classically used in dermatology to study changes in the hypodermis, such as nodules and infectious and inflammatory processes. The introduction of high-frequency, high-resolution equipment has enabled the observation of superficial structures, allowing differentiation between skin layers and providing details for the analysis of the skin and its appendages. This paper aims to review the basic principles of high frequency ultrasound and its applications in different areas of dermatology. PMID:27438191

  8. MAMA User Guide v2.0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaschen, Brian Keith; Bloch, Jeffrey Joseph; Porter, Reid

    Morphological signatures of bulk SNM materials have significant promise, but these potential signatures are not fully utilized. This document describes software tools, collectively called the MAMA (Morphological Analysis for Material Attribution) software, that can help provide robust and accurate quantification of morphological features in bulk material microscopy images (optical, SEM). Although many of the specific tools are not unique to MAMA, the software package has been designed specifically for nuclear material morphological analysis, and is at a point where it can be easily adapted (by Los Alamos or by collaborators) in response to new, different, or changing forensics needs. The current release of the MAMA software only includes the image quantification, description, and annotation functionality. Only limited information on a sample, its pedigree, and its chemistry is recorded inside this part of the software. This decision was based on initial feedback and the fact that there are several analytical chemistry databases being developed within the community. Currently MAMA is a standalone program that can export quantification results in a basic text format that can be imported into other programs such as Excel and Access. There is also a basic report-generating feature that produces HTML-formatted pages of the same information. We will be working with collaborators to provide better integration of MAMA into their particular systems, databases and workflows.

  9. Computed tomography, magnetic resonance, and ultrasound imaging: basic principles, glossary of terms, and patient safety.

    PubMed

    Cogbill, Thomas H; Ziegelbein, Kurt J

    2011-02-01

    The basic principles underlying computed tomography, magnetic resonance, and ultrasound are reviewed to promote better understanding of the properties and appropriate applications of these 3 common imaging modalities. A glossary of frequently used terms for each technique is appended for convenience. Risks to patient safety including contrast-induced nephropathy, radiation-induced malignancy, and nephrogenic systemic fibrosis are discussed. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. Imaging Genetics

    ERIC Educational Resources Information Center

    Munoz, Karen E.; Hyde, Luke W.; Hariri, Ahmad R.

    2009-01-01

    Imaging genetics is an experimental strategy that integrates molecular genetics and neuroimaging technology to examine biological mechanisms that mediate differences in behavior and the risks for psychiatric disorder. The basic principles in imaging genetics and the development of the field are discussed.

  11. Error of the slanted edge method for measuring the modulation transfer function of imaging systems.

    PubMed

    Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu

    2018-03-01

    The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. Theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory. Furthermore, an error model combining noise and edge angles is proposed. We verify the analyses and model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results verify the theoretical findings. This research can serve as a reference for applications of the slanted edge MTF measurement method.
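
    For context, the core slanted-edge computation whose error sources the paper analyzes differentiates an edge spread function (ESF) into a line spread function (LSF) and takes the normalized FFT magnitude as the MTF. The sketch below assumes a pre-estimated, supersampled ESF; the Hanning window is a common but not universal choice.

    ```python
    import numpy as np

    def mtf_from_esf(esf, sample_pitch=1.0):
        """esf: 1D edge spread function. Returns (spatial frequencies, MTF)."""
        lsf = np.gradient(esf)               # LSF = d(ESF)/dx
        lsf = lsf * np.hanning(lsf.size)     # taper to suppress noise leakage
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]                        # normalize to unity at DC
        freqs = np.fft.rfftfreq(lsf.size, d=sample_pitch)
        return freqs, mtf
    ```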

  12. The application of the geography census data in seismic hazard assessment

    NASA Astrophysics Data System (ADS)

    Yuan, Shen; Ying, Zhang

    2017-04-01

    Limited by the timeliness of the basic data in the Sichuan province earthquake emergency database, a gap exists between post-earthquake disaster assessment results and the actual damage. In 2015, Sichuan completed its first provincial geography census, covering topography, traffic networks, vegetation coverage, water areas, desert and bare ground, residents and facilities, geographical units, and geological hazards, as well as town planning, construction, and ecological environment restoration in the Lushan earthquake-stricken area. On this basis, combining existing basic geographic information data and high-resolution imagery, supplemented by remote sensing image interpretation and geological survey, we carried out statistical analysis and information extraction of the distribution and change of earthquake hazard-affected elements, such as surface coverage, roads, and infrastructure, in Lushan county before 2013 and after 2015. At the same time, we achieved the transformation and updating of geographical conditions census data into earthquake emergency basic data by studying their data types, structures, and relationships. Finally, based on multi-source disaster information, including the changed hazard-affected data and the coseismic displacement field of the Lushan magnitude-7.0 earthquake from the CORS network, intensity control points were obtained through information fusion. The seismic influence field was then corrected and the earthquake disaster reassessed through the Sichuan earthquake relief headquarters technology platform. Comparison of the new assessment result, the original assessment result, and the actual earthquake disaster loss shows that the revised evaluation result is closer to the actual loss. In the future, geographical conditions census data can provide normalized updates of earthquake emergency basic data, ensuring the timeliness of the earthquake emergency database while continually improving the accuracy of earthquake disaster assessment.

  13. The effect of multispectral image fusion enhancement on human efficiency.

    PubMed

    Bittner, Jennifer L; Schill, M Trent; Mohd-Zaid, Fairul; Blaha, Leslie M

    2017-01-01

    The visual system can be highly influenced by changes to visual presentation. Thus, numerous techniques have been developed to augment imagery in an attempt to improve human perception. The current paper examines the potential impact of one such enhancement, multispectral image fusion, where imagery captured in varying spectral bands (e.g., visible, thermal, night vision) is algorithmically combined to produce an output to strengthen visual perception. We employ ideal observer analysis over a series of experimental conditions to (1) establish a framework for testing the impact of image fusion over the varying aspects surrounding its implementation (e.g., stimulus content, task) and (2) examine the effectiveness of fusion on human information processing efficiency in a basic application. We used a set of rotated Landolt C images captured with a number of individual sensor cameras and combined across seven traditional fusion algorithms (e.g., Laplacian pyramid, principal component analysis, averaging) in a 1-of-8 orientation task. We found that, contrary to the idea of fused imagery always producing a greater impact on perception, single-band imagery can be just as influential. Additionally, efficiency data were shown to fluctuate based on sensor combination instead of fusion algorithm, suggesting the need for examining multiple factors to determine the success of image fusion. Our use of ideal observer analysis, a popular technique from the vision sciences, provides not only a standard for testing fusion in direct relation to the visual system but also allows for comparable examination of fusion across its associated problem space of application.
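
    Two of the traditional fusion rules named above are easy to sketch. Pixel averaging is the baseline; the PCA variant shown weights each co-registered band by the leading eigenvector of the inter-band covariance, which is one common formulation of PCA fusion (the paper's exact implementations may differ).

    ```python
    import numpy as np

    def fuse_average(bands):
        """bands: list of co-registered (H, W) arrays from different sensors."""
        return np.mean(bands, axis=0)

    def fuse_pca(bands):
        """Weight each band by the first principal component of the band covariance."""
        x = np.stack([b.ravel() for b in bands])   # (n_bands, n_pixels)
        w = np.linalg.eigh(np.cov(x))[1][:, -1]    # leading eigenvector
        w = np.abs(w) / np.abs(w).sum()            # normalized, non-negative weights
        return sum(wi * bi for wi, bi in zip(w, bands))
    ```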

  14. Dental Imaging - A basic guide for the radiologist.

    PubMed

    Masthoff, Max; Gerwing, Mirjam; Masthoff, Malte; Timme, Maximilian; Kleinheinz, Johannes; Berninger, Markus; Heindel, Walter; Wildgruber, Moritz; Schülke, Christoph

    2018-06-18

     As dental imaging accounts for approximately 40 % of all X-ray examinations in Germany, profound knowledge of this topic is essential not only for the dentist but also for the clinical radiologist. This review focuses on basic imaging findings regarding the teeth. Tooth structure, currently available imaging techniques and common findings in conserving dentistry, including endodontology, periodontology, implantology and dental trauma, are presented. Literature research on the current state of dental radiology was performed using PubMed. Currently, the most frequent imaging techniques are the orthopantomogram (OPG) and the single-tooth radiograph, as well as computed tomography (CT) and cone beam CT, mainly for implantology (planning or postoperative control) or trauma indications. Early diagnosis and correct classification of dental trauma in particular, such as dental pulp involvement, prevents treatment delays or worsening of therapy options and prognosis. Furthermore, teeth are commonly a hidden focus of infection. Since radiologists are frequently confronted with dental imaging, either concerning a particular question such as a trauma patient or regarding incidental findings throughout head and neck imaging, further training in this field is more than worthwhile to facilitate early and sufficient dental treatment. · This review focuses on dental imaging techniques and the most important pathologies. · Dental pathologies may not only be locally but also systemically relevant. · Reporting of dental findings is important for best patient care. · Masthoff M, Gerwing M, Masthoff M et al. Dental Imaging - A basic guide for the radiologist. Fortschr Röntgenstr 2018; DOI: 10.1055/a-0636-4129. © Georg Thieme Verlag KG Stuttgart · New York.

  15. Field Geology/Processes

    NASA Technical Reports Server (NTRS)

    Allen, Carlton; Jakes, Petr; Jaumann, Ralf; Marshall, John; Moses, Stewart; Ryder, Graham; Saunders, Stephen; Singer, Robert

    1996-01-01

    The field geology/process group examined the basic operations of a terrestrial field geologist and the manner in which these operations could be transferred to a planetary lander. Four basic requirements for robotic field geology were determined: geologic content; surface vision; mobility; and manipulation. Geologic content requires a combination of orbital and descent imaging. Surface vision requirements include range, resolution, stereo, and multispectral imaging. The minimum mobility for useful field geology depends on the scale of orbital imagery. Manipulation requirements include exposing unweathered surfaces, screening samples, and bringing samples in contact with analytical instruments. To support these requirements, several advanced capabilities for future development are recommended. Capabilities include near-infrared reflectance spectroscopy, hyper-spectral imaging, multispectral microscopy, artificial intelligence in support of imaging, x ray diffraction, x ray fluorescence, and rock chipping.

  16. Spectral-element simulations of wave propagation in complex exploration-industry models: Imaging and adjoint tomography

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Nissen-Meyer, T.; Morency, C.; Tromp, J.

    2008-12-01

    Seismic imaging in the exploration industry is often based upon ray-theoretical migration techniques (e.g., Kirchhoff) or other ideas which neglect some fraction of the seismic wavefield (e.g., wavefield continuation for acoustic-wave first arrivals) in the inversion process. In a companion paper we discuss the possibility of solving the full physical forward problem (i.e., including visco- and poroelastic, anisotropic media) using the spectral-element method. With such a tool at hand, we can readily apply the adjoint method to tomographic inversions, i.e., iteratively improving an initial 3D background model to fit the data. In the context of this inversion process, we draw connections between kernels in adjoint tomography and basic imaging principles in migration. We show that the images obtained by migration are nothing but particular kinds of adjoint kernels (mainly density kernels). Migration is basically a first step in the iterative inversion process of adjoint tomography. We apply the approach to basic 2D problems involving layered structures, overthrusting faults, topography, salt domes, and poroelastic regions.

  17. Perspective distortion in craniofacial superimposition: Logarithmic decay curves mapped mathematically and by practical experiment.

    PubMed

    Stephan, Carl N

    2015-12-01

    The superimposition of a face photograph with that of a skull for identification purposes necessitates the use of comparable photographic parameters between the two image acquisition sessions, so that differences in optics and consequent recording of images do not thwart the morphological analysis. Widely divergent, but published, speculations about the thresholds at which perspective distortion becomes negligible (0.5 to >13.5 m) must be resolved, and perspective distortion (PD) relationships quantified across their full range, to judge tolerance levels and the suitability of commonly employed contemporary equipment (e.g., 1 m photographic copy-stands). Herein, basic trigonometry is employed to map PD for two same-sized 179 mm linear lengths - separated anteroposteriorly by 127 mm - as a function of subject-to-camera distance (SCD; 0.2-20 m). These lengths approximate basic craniofacial heights (e.g., tr-n) and widths (e.g., zy-zy), while the latter separation approximates facial depth (e.g., n-t). As anticipated, PD decayed in a logarithmic and continuous manner with increasing SCD. At an SCD of 12 m, the within-image PD was negligible (<1%). At <2.5 m SCD, it exceeded 5% and increased sharply as SCD decreased. Since life-size images of skulls and faces are commonly employed for superimposition, a relative 1% perspective distortion difference is recommended as the ceiling standard for craniofacial comparison (this translates into a ≤2 mm difference in physiognomical face height). Since superimposition depends on relative comparisons of a photographic pair (not one photograph), there is practically no scenario in superimposition casework where SCDs should be ignored and no single distance at which PD should be considered negligible (even if one image holds >12 m SCD). Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
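
    The decay relationship follows from similar triangles: a plane lying a depth d behind the frontal plane is imaged with relative magnification SCD/(SCD + d), so the within-image size difference is d/(SCD + d). The numerical sketch below uses the paper's 127 mm anteroposterior separation; it reproduces the trend rather than the paper's exact figures, which depend on the full measurement geometry.

    ```python
    def perspective_distortion(scd_m, depth_m=0.127):
        """Percent size difference between front and rear planes at a given SCD (m)."""
        return 100.0 * depth_m / (scd_m + depth_m)

    for scd in (0.5, 1.0, 2.5, 12.0, 20.0):
        print(f"SCD {scd:5.1f} m -> PD {perspective_distortion(scd):5.2f}%")
    ```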

  18. A Review of Mid-Infrared and Near-Infrared Imaging: Principles, Concepts and Applications in Plant Tissue Analysis.

    PubMed

    Türker-Kaya, Sevgi; Huck, Christian W

    2017-01-20

    Plant cells, tissues and organs are composed of various biomolecules arranged as structurally diverse units, which represent heterogeneity at microscopic levels. Molecular knowledge of those constituents and their localization within such complexity is crucial for both basic and applied plant sciences. In this context, infrared imaging techniques have advantages over conventional methods for investigating heterogeneous plant structures, providing quantitative and qualitative analyses together with the spatial distribution of the components. Thus, particularly with the use of proper analytical approaches and sampling methods, these technologies offer significant information for studies on plant classification, physiology, ecology, genetics, pathology and other related disciplines. This review aims to present a general perspective on near-infrared and mid-infrared imaging/microspectroscopy in plant research. It compares the potential of these methodologies, with their advantages and limitations. With regard to the organization of the document, the first section introduces the respective underlying principles, followed by instrumentation, sampling techniques, sample preparations, measurement, and an overview of spectral pre-processing and multivariate analysis. The last section reviews selected applications in the literature.

  19. Medical Image Analysis Facility

    NASA Technical Reports Server (NTRS)

    1978-01-01

    To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is the study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  20. Crowdsourcing for translational research: analysis of biomarker expression using cancer microarrays

    PubMed Central

    Lawson, Jonathan; Robinson-Vyas, Rupesh J; McQuillan, Janette P; Paterson, Andy; Christie, Sarah; Kidza-Griffiths, Matthew; McDuffus, Leigh-Anne; Moutasim, Karwan A; Shaw, Emily C; Kiltie, Anne E; Howat, William J; Hanby, Andrew M; Thomas, Gareth J; Smittenaar, Peter

    2017-01-01

    Background: Academic pathology suffers from an acute and growing lack of workforce resource. This especially impacts on translational elements of clinical trials, which can require detailed analysis of thousands of tissue samples. We tested whether crowdsourcing – enlisting help from the public – is a sufficiently accurate method to score such samples. Methods: We developed a novel online interface to train and test lay participants on cancer detection and immunohistochemistry scoring in tissue microarrays. Lay participants initially performed cancer detection on lung cancer images stained for CD8, and we measured how extending a basic tutorial by annotated example images and feedback-based training affected cancer detection accuracy. We then applied this tutorial to additional cancer types and immunohistochemistry markers – bladder/ki67, lung/EGFR, and oesophageal/CD8 – to establish accuracy compared with experts. Using this optimised tutorial, we then tested lay participants' accuracy on immunohistochemistry scoring of lung/EGFR and bladder/p53 samples. Results: We observed that for cancer detection, annotated example images and feedback-based training both improved accuracy compared with a basic tutorial only. Using this optimised tutorial, we demonstrate highly accurate (>0.90 area under curve) detection of cancer in samples stained with nuclear, cytoplasmic and membrane cell markers. We also observed high Spearman correlations between lay participants and experts for immunohistochemistry scoring (0.91 (0.78, 0.96) and 0.97 (0.91, 0.99) for lung/EGFR and bladder/p53 samples, respectively). Conclusions: These results establish crowdsourcing as a promising method to screen large data sets for biomarkers in cancer pathology research across a range of cancers and immunohistochemical stains. PMID:27959886

  1. In vivo confocal microscopy of the cornea: New developments in image acquisition, reconstruction and analysis using the HRT-Rostock Corneal Module

    PubMed Central

    Petroll, W. Matthew; Robertson, Danielle M.

    2015-01-01

    The optical sectioning ability of confocal microscopy allows high magnification images to be obtained from different depths within a thick tissue specimen, and is thus ideally suited to the study of intact tissue in living subjects. In vivo confocal microscopy has been used in a variety of corneal research and clinical applications since its development over 25 years ago. In this article we review the latest developments in quantitative corneal imaging with the Heidelberg Retinal Tomograph with Rostock Corneal Module (HRT-RCM). We provide an overview of the unique strengths and weaknesses of the HRT-RCM. We discuss techniques for performing 3-D imaging with the HRT-RCM, including hardware and software modifications that allow full thickness confocal microscopy through focusing (CMTF) of the cornea, which can provide quantitative measurements of corneal sublayer thicknesses, stromal cell and extracellular matrix backscatter, and depth dependent changes in corneal keratocyte density. We also review current approaches for quantitative imaging of the subbasal nerve plexus, which require a combination of advanced image acquisition and analysis procedures, including wide field mapping and 3-D reconstruction of nerve structures. The development of new hardware, software, and acquisition techniques continues to expand the number of applications of the HRT-RCM for quantitative in vivo corneal imaging at the cellular level. Knowledge of these rapidly evolving strategies should benefit corneal clinicians and basic scientists alike. PMID:25998608

  2. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: 1) basic image statistics, 2) the gray-level co-occurrence matrix (GLCM), 3) the gray-level run-length matrix (GLRLM), and 4) Tamura texture features. To obtain the ranking of discrimination in these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.
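
    As an illustration of the second feature category, the sketch below extracts a few GLCM texture attributes with scikit-image; the distances and angles are illustrative defaults, not the paper's 83-attribute configuration.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(img_u8):
        """img_u8: 2D uint8 image. Returns a dict of GLCM texture features."""
        glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        return {prop: graycoprops(glcm, prop).mean()
                for prop in ("contrast", "homogeneity", "energy", "correlation")}
    ```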

  3. Setting up a proper power spectral density (PSD) and autocorrelation analysis for material and process characterization

    NASA Astrophysics Data System (ADS)

    Rutigliani, Vito; Lorusso, Gian Francesco; De Simone, Danilo; Lazzarino, Frederic; Rispens, Gijsbert; Papavieros, George; Gogolides, Evangelos; Constantoudis, Vassilios; Mack, Chris A.

    2018-03-01

    Power spectral density (PSD) analysis plays an increasingly critical role in the understanding of line-edge roughness (LER) and linewidth roughness (LWR) in a variety of applications across the industry. It is an essential step in obtaining an unbiased LWR estimate, as well as an extremely useful tool for process and material characterization. However, the PSD estimate can be affected by both random and systematic artifacts caused by image acquisition and measurement settings, which could irremediably alter its information content. In this paper, we report on the impact of various setting parameters (smoothing image processing filters, pixel size, and SEM noise levels) on the PSD estimate. We also discuss the use of the PSD analysis tool in a variety of cases. Looking beyond the basic roughness estimate, we use PSD and autocorrelation analysis to characterize resist blur[1], as well as low- and high-frequency roughness content, and we apply this technique to guide the EUV material stack selection. Our results clearly indicate that, if properly used, the PSD methodology is a very sensitive tool for investigating material and process variations.
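
    A minimal sketch of the underlying estimate: detrend each measured edge, average the periodograms over lines, and scale by the pixel size. Windowing, noise-floor subtraction and the paper's specific acquisition settings are omitted.

    ```python
    import numpy as np

    def edge_psd(edges, pixel_size_nm):
        """edges: (n_lines, n_points) edge positions in nm. Returns (freq, PSD)."""
        x = edges - edges.mean(axis=1, keepdims=True)          # remove per-line offset
        spec = np.abs(np.fft.rfft(x, axis=1)) ** 2
        psd = spec.mean(axis=0) * pixel_size_nm / x.shape[1]   # averaged periodogram
        freq = np.fft.rfftfreq(x.shape[1], d=pixel_size_nm)
        return freq, psd
    ```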

  4. Application of spectrometer cropscan MSR 16R and Landsat imagery for identification the spectral characteristics of land cover

    NASA Astrophysics Data System (ADS)

    Tampubolon, Togi; Abdullah, Khiruddin bin; San, Lim Hwee

    2013-09-01

    The spectral characteristics of land cover are basic references for classifying satellite images in geophysical analysis. They can be obtained from measurements with a spectrometer and from satellite image processing. The aim of this study is to investigate the spectral characteristics of land cover based on measurements made with the Spectrometer Cropscan MSR 16R and on Landsat satellite imagery. The study area of this research is Medan (Deli Serdang, North Sumatra), Indonesia. The scope of this study is a basic survey of spectral land cover measurements covering several types of land cover, such as cultivated and managed terrestrial areas, natural and semi-natural areas, cultivated aquatic or regularly flooded areas, natural and semi-natural aquatic areas, artificial surfaces and associated areas, bare areas, artificial waterbodies and natural waterbodies. Measurement and verification were conducted using the spectrometer, which provided the spectral characteristics, and Landsat imagery, respectively. The results show that each type of land cover has a unique spectral characteristic. The correlation between the spectral land cover measured with the spectrometer Cropscan MSR 16R and the Landsat satellite image is above 90%. However, artificial waterbodies show a correlation under 40%. This is because the spectrometer measurements and the Landsat satellite image acquisition took place at different times.

  5. Time-lapse microscopy and image analysis in basic and clinical embryo development research.

    PubMed

    Wong, C; Chen, A A; Behr, B; Shen, S

    2013-02-01

    Mammalian preimplantation embryo development is a complex process in which the exact timing and sequence of events are as essential as the accurate execution of the events themselves. Time-lapse microscopy (TLM) is an ideal tool to study this process, since the ability to capture images over time provides a combination of morphological, dynamic and quantitative information about developmental events. Here, we systematically review the application of TLM in basic and clinical embryo research. We identified all relevant preimplantation embryo TLM studies published in English up to May 2012 using PubMed and Google Scholar. We then analysed the technical challenges involved in embryo TLM studies and how these challenges may be overcome with technological innovations. Finally, we reviewed the different types of TLM embryo studies, with a special focus on how TLM can benefit clinical assisted reproduction. Although new parameters predictive of embryo development potential may be discovered and used clinically to increase the success rate of IVF, adopting TLM in routine clinical practice will require innovations in both optics and image analysis. Combined with such innovations, TLM may provide embryologists and clinicians with an important tool for making critical decisions in assisted reproduction. In this review, we perform a literature search of all published early embryo development studies that used TLM. From the literature, we discuss the benefits of TLM over traditional time-point analysis, as well as the technical difficulties and solutions involved in implementing TLM for embryo studies. We further discuss research that has successfully derived non-invasive markers that may increase the success rate of assisted reproductive technologies, primarily IVF. Most notably, we extend our discussion to highlight important considerations for the practical use of TLM in research and clinical settings. Copyright © 2012 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  6. Electrocardiography: A Technologist's Guide to Interpretation.

    PubMed

    Tso, Colin; Currie, Geoffrey M; Gilmore, David; Kiat, Hosen

    2015-12-01

    The nuclear medicine technologist works with electrocardiography when performing cardiac stress testing and gated cardiac imaging and when monitoring critical patients. To enhance patient care, basic electrocardiogram interpretation skills and recognition of key arrhythmias are essential for the nuclear medicine technologist. This article provides insight into the anatomy of an electrocardiogram trace, covers basic electrocardiogram interpretation methods, and describes an example case typical in the nuclear medicine environment. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  7. Analysis of the Image of Scientists Portrayed in the Lebanese National Science Textbooks

    NASA Astrophysics Data System (ADS)

    Yacoubian, Hagop A.; Al-Khatib, Layan; Mardirossian, Taline

    2017-07-01

    This article presents an analysis of how scientists are portrayed in the Lebanese national science textbooks. The purpose of this study was twofold. First, to develop a comprehensive analytical framework that can serve as a tool to analyze the image of scientists portrayed in educational resources. Second, to analyze the image of scientists portrayed in the Lebanese national science textbooks that are used in Basic Education. An analytical framework, based on an extensive review of the relevant literature, was constructed that served as a tool for analyzing the textbooks. Based on evidence-based stereotypes, the framework focused on the individual and work-related characteristics of scientists. Fifteen science textbooks were analyzed using both quantitative and qualitative measures. Our analysis of the textbooks showed the presence of a number of stereotypical images. The scientists are predominantly white males of European descent. Non-Western scientists, including Lebanese and/or Arab scientists are mostly absent in the textbooks. In addition, the scientists are portrayed as rational individuals who work alone, who conduct experiments in their labs by following the scientific method, and by operating within Eurocentric paradigms. External factors do not influence their work. They are engaged in an enterprise which is objective, which aims for discovering the truth out there, and which involves dealing with direct evidence. Implications for science education are discussed.

  8. Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.

    PubMed

    Ganasala, Padma; Kumar, Vinod

    2016-02-01

    Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information from the source images that is required for better localization and definition of different organs and lesions. In state-of-the-art image fusion methods based on the nonsubsampled shearlet transform (NSST) and the pulse-coupled neural network (PCNN), the normalized coefficient value is used to motivate the PCNN processing of both the low-frequency (LF) and high-frequency (HF) sub-bands. This blurs the fused image and decreases its contrast. The main objective of this work is to design an image fusion method that gives a fused image with better contrast and more detailed information, and that is suitable for clinical use. We propose a novel image fusion method utilizing a feature-motivated adaptive PCNN in the NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and an adaptive linking strength is used. Different features are used to motivate the PCNN processing of the LF and HF sub-bands. The proposed method is extended to the fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of the experimental results proved that the proposed method provides a satisfactory fusion outcome compared with other image fusion methods.
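
    The sketch below shows a simplified PCNN iteration of the kind used to decide, per coefficient, which source image "fires" more strongly. The decay constants, linking weights, and fixed linking strength `beta` are illustrative stand-ins; the paper adapts the linking strength and drives the network with different features per sub-band.

        # Sketch only: a simplified pulse-coupled neural network (PCNN).
        import numpy as np
        from scipy.ndimage import convolve

        def pcnn_firing_map(S, beta=0.5, iterations=30):
            """Return how often each pixel fires; S is a normalized sub-band."""
            W = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
            L = np.zeros_like(S)        # linking input
            Y = np.zeros_like(S)        # binary pulses
            theta = np.ones_like(S)     # dynamic threshold
            fired = np.zeros_like(S)
            for _ in range(iterations):
                L = 0.7 * L + convolve(Y, W, mode="constant")
                U = S * (1.0 + beta * L)        # internal activity (feeding = S)
                Y = (U > theta).astype(float)
                theta = 0.8 * theta + 20.0 * Y  # raise threshold after a pulse
                fired += Y
            return fired

        # Illustrative fusion rule: keep the coefficient firing more often.
        # fused = np.where(pcnn_firing_map(A) >= pcnn_firing_map(B), coefA, coefB)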

  9. [MR tomography of the heart].

    PubMed

    Hahn, D; Beer, M; Sandstede, J

    2000-10-01

    The introduction of magnetic resonance (MR) tomography has fundamentally changed radiological diagnosis for many diseases. Invasive digital subtraction angiography has already been widely replaced by noninvasive MR angiography for most vascular diseases. The rapid technical development of MR imaging in recent years has opened up new functional imaging techniques. MR imaging of the heart allows simultaneous measurement of morphological and functional parameters in a single noninvasive examination without any radiation exposure. Because of its high spatial resolution and reproducibility, cine MR imaging is now the gold standard for functional analysis. With the improvement of myocardial perfusion and viability studies, many diseases of the heart can be diagnosed in a single examination. MR spectroscopy is the only method which allows a view of the metabolism of the heart. New examinations for vascular imaging and flow quantification complete the goal of "one-stop-shop" imaging of the heart. MR imaging is the only diagnostic modality which allows a complete evaluation of many diseases of the heart with one technique, covering the basic examination as well as follow-up studies. The very rapid improvement in MRI will overcome most of the current limitations in the near future, especially concerning MR coronary angiography.

  10. Image analysis of pulmonary nodules using micro CT

    NASA Astrophysics Data System (ADS)

    Niki, Noboru; Kawata, Yoshiki; Fujii, Masashi; Kakinuma, Ryutaro; Moriyama, Noriyuki; Tateno, Yukio; Matsui, Eisuke

    2001-07-01

    We are developing a micro-computed tomography (micro CT) system for imaging pulmonary nodules. The purpose is to enhance physician performance in assessing the micro-architecture of a nodule for classification between malignant and benign nodules. The basic components of the micro CT system are a microfocus X-ray source, a specimen manipulator, and an image intensifier detector coupled to a charge-coupled device (CCD) camera. 3D image reconstruction was performed slice by slice. A standard fan-beam convolution and backprojection algorithm was used to reconstruct the center plane intersecting the X-ray source. The preprocessing for the 3D image reconstruction included correction of the geometrical distortions and of the shading artifact introduced by the image intensifier. The main advantage of the system is its high spatial resolution, which ranges between b micrometers and 25 micrometers. In this work we report on preliminary studies performed with the micro CT for imaging resected tissues of normal and abnormal lung. Experimental results reveal the micro-architecture of lung tissues, such as the alveolar wall, the septal wall of the pulmonary lobule, and the bronchiole. From these results, the micro CT system is expected to have interesting potential for highly confident differential diagnosis.
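
    For orientation, the block below reconstructs one slice by filtered backprojection using scikit-image. Note that skimage's radon/iradon implement the parallel-beam case; the authors' system uses a fan-beam convolution-backprojection variant of the same idea, and `phantom_slice` is a placeholder test image.

        # Sketch only: filtered backprojection of one slice (parallel-beam
        # stand-in for the paper's fan-beam convolution-backprojection).
        import numpy as np
        from skimage.transform import radon, iradon

        theta = np.linspace(0.0, 180.0, 360, endpoint=False)  # projection angles
        sinogram = radon(phantom_slice, theta=theta)          # simulate projections
        recon = iradon(sinogram, theta=theta, filter_name="ramp")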

  11. Client-Side Image Maps: Achieving Accessibility and Section 508 Compliance

    ERIC Educational Resources Information Center

    Beasley, William; Jarvis, Moana

    2004-01-01

    Image maps are a means of making a picture "clickable", so that different portions of the image can be hyperlinked to different URLS. There are two basic types of image maps: server-side and client-side. Besides requiring access to a CGI on the server, server-side image maps are undesirable from the standpoint of accessibility--creating…

  12. Galaxy of Images

    Science.gov Websites

    Website record: the Galaxy of Images site has moved to a new Image Gallery site. The original page offered a basic image keyword search and a taxonomic (scientific) keyword search over the image collection.

  13. Image processing in forensic pathology.

    PubMed

    Oliver, W R

    1998-03-01

    Image processing applications in forensic pathology are becoming increasingly important. This article introduces basic concepts in image processing as applied to problems in forensic pathology in a non-mathematical context. Discussions of contrast enhancement, digital encoding, compression, deblurring, and other topics are presented.

  14. Optical Biopsy: A New Frontier in Endoscopic Detection and Diagnosis

    PubMed Central

    WANG, THOMAS D.; VAN DAM, JACQUES

    2007-01-01

    Endoscopic diagnosis currently relies on the ability of the operator to visualize abnormal patterns in the image created by light reflected from the mucosal surface of the gastrointestinal tract. Advances in fiber optics, light sources, detectors, and molecular biology have led to the development of several novel methods for tissue evaluation in situ. The term “optical biopsy” refers to methods that use the properties of light to enable the operator to make an instant diagnosis at endoscopy, previously possible only by using histological or cytological analysis. Promising imaging techniques include fluorescence endoscopy, optical coherence tomography, confocal microendoscopy, and molecular imaging. Point detection schemes under development include light scattering and Raman spectroscopy. Such advanced diagnostic methods go beyond standard endoscopic techniques by offering improved image resolution, contrast, and tissue penetration and providing biochemical and molecular information about mucosal disease. This review describes the basic biophysics of light-tissue interactions, assesses the strengths and weaknesses of each method, and examines clinical and preclinical evidence for each approach. PMID:15354274

  15. The Novel Object and Unusual Name (NOUN) Database: A collection of novel images for use in experimental research.

    PubMed

    Horst, Jessica S; Hout, Michael C

    2016-12-01

    Many experimental research designs require images of novel objects. Here we introduce the Novel Object and Unusual Name (NOUN) Database. This database contains 64 primary novel object images and additional novel exemplars for ten basic- and nine global-level object categories. The objects' novelty was confirmed by both self-report and a lack of consensus on questions that required participants to name and identify the objects. We also found that object novelty correlated with qualifying naming responses pertaining to the objects' colors. The results from a similarity sorting task (and a subsequent multidimensional scaling analysis on the similarity ratings) demonstrated that the objects are complex and distinct entities that vary along several featural dimensions beyond simply shape and color. A final experiment confirmed that additional item exemplars comprised both sub- and superordinate categories. These images may be useful in a variety of settings, particularly for developmental psychology and other research in the language, categorization, perception, visual memory, and related domains.

  16. The feasibility study for electronic imaging system with the photoheliograph

    NASA Technical Reports Server (NTRS)

    Svensson, E. L.; Schaff, F. L.

    1972-01-01

    The development of the electronic subsystems used for the photoheliograph and its application for a high resolution study of the sun are discussed. Basic considerations are as follows: (1) determination of characteristics of solar activity within the spectral response of the photoheliograph, (2) determination of the space vehicles capable of carrying the photoheliograph, (3) analysis of the capability of the ground based data gathering network to assimilate the generated information, and (4) the characteristics of the photoheliograph and the associated spectral filters.

  17. Digital signal processing and control and estimation theory -- Points of tangency, area of intersection, and parallel directions

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1976-01-01

    A number of current research directions in the fields of digital signal processing and modern control and estimation theory were studied. Topics such as stability theory, linear prediction and parameter identification, system analysis and implementation, two-dimensional filtering, decentralized control and estimation, image processing, and nonlinear system theory were examined in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines. An extensive bibliography is included.

  18. A comparison of basic deinterlacing approaches for a computer assisted diagnosis approach of videoscope images

    NASA Astrophysics Data System (ADS)

    Kage, Andreas; Canto, Marcia; Gorospe, Emmanuel; Almario, Antonio; Münzenmayer, Christian

    2010-03-01

    In the near future, Computer Assisted Diagnosis (CAD), which is well known in the area of mammography, might be used to support clinical experts in the diagnosis of images derived from imaging modalities such as endoscopy. A few initial approaches to computer-assisted endoscopy have already been presented. These systems use as input a video signal provided by the endoscope's video processor. Despite the advent of high-definition systems, most standard endoscopy systems today still provide only analog video signals. These signals consist of interlaced images that cannot be used in a CAD approach without deinterlacing. Many different deinterlacing approaches are known today, but most of them are specializations of a few basic approaches. In this paper we present four basic deinterlacing approaches. We used a database of non-interlaced images which were degraded by artificial interlacing and afterwards processed by these approaches. The database contains regions of interest (ROI) of clinical relevance for the diagnosis of abnormalities in the esophagus. We compared the classification rates on these ROIs for the original images and after deinterlacing. The results show that deinterlacing has an impact on the classification rates. The Bobbing approach and the Motion Compensation approach achieved the best classification results in most cases.
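
    As an illustration of the simplest of these basic approaches, the sketch below implements "bob" deinterlacing of one field by vertical interpolation; it is a generic textbook version, not necessarily the exact variant evaluated in the paper.

        # Sketch only: "bob" deinterlacing of a single field.
        import numpy as np

        def bob_deinterlace(frame, top_field=True):
            """Rebuild a full frame from one field by vertical interpolation."""
            out = frame.astype(float)
            start = 0 if top_field else 1
            h = frame.shape[0]
            for r in range(1 - start, h, 2):   # rows not covered by the field
                above = out[r - 1] if r - 1 >= 0 else out[r + 1]
                below = out[r + 1] if r + 1 < h else out[r - 1]
                out[r] = 0.5 * (above + below)  # average the neighboring lines
            return out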

  19. Topographic profiling and refractive-index analysis by use of differential interference contrast with bright-field intensity and atomic force imaging.

    PubMed

    Axelrod, Noel; Radko, Anna; Lewis, Aaron; Ben-Yosef, Nissim

    2004-04-10

    A methodology is described for phase restoration of an object function from differential interference contrast (DIC) images. The methodology involves collecting a set of DIC images in the same plane with different bias retardation between the two illuminating light components produced by a Wollaston prism. These images, together with one conventional bright-field image, allow the phase-restoration problem to be reduced from a highly complex nonlinear mathematical formulation to a set of linear equations that can be applied to resolve the phase for images with a relatively large number of pixels. Additionally, under certain conditions, an on-line atomic force imaging system that does not interfere with the standard DIC illumination modes resolves uncertainties in large topographical variations that generally lead to a basic problem in DIC imaging, i.e., phase unwrapping. Furthermore, the availability of confocal detection allows for a three-dimensional reconstruction with high accuracy of the refractive-index measurement of the object to be imaged. This has been applied to reconstruction of the refractive index of an arrayed waveguide in a region in which a defect in the sample is present. The results of this paper highlight the synergism of far-field microscopies integrated with scanned-probe microscopies and restoration algorithms for phase reconstruction.

  20. Design of point-of-care (POC) microfluidic medical diagnostic devices

    NASA Astrophysics Data System (ADS)

    Leary, James F.

    2018-02-01

    Inexpensive, portable, hand-held microfluidic flow/image cytometry devices for initial medical diagnostics at the point of first patient contact by emergency medical personnel in the field require careful design in terms of power and weight to allow realistic portability as a hand-held, point-of-care medical diagnostics device. True portability also requires small micro-pumps for high-throughput capability. Weight and power requirements dictate the use of super-bright LEDs and very small silicon photodiodes or nanophotonic sensors that can be powered by batteries. Signal-to-noise characteristics can be greatly improved by appropriately pulsing the LED excitation sources and by sampling and subtracting noise between excitation pulses. The requirements for basic computing, imaging, GPS and basic telecommunications can be met simultaneously by using smartphone technologies, which become part of the overall device. Software for the user-interface system, limited real-time computing, real-time imaging, and offline data analysis can be built with multi-platform development systems that are well suited to currently available cellphone technologies, which already contain all of these capabilities. Microfluidic cytometry requires judicious use of small sample volumes and appropriate statistical sampling by microfluidic cytometry or imaging to reach the statistical significance needed for real-time (typically < 15 minutes) medical decisions at the physician's office or in the field. One or two drops of blood obtained by pin-prick should provide statistically meaningful results for real-time medical decisions without the need for blood fractionation, which is not realistic in the field.
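
    The pulsed-excitation idea can be sketched in a few lines: sample the photodiode during LED pulses and between them, then subtract. All signal levels and array names below are invented for illustration.

        # Sketch only: background subtraction with a pulsed LED source.
        import numpy as np

        rng = np.random.default_rng(0)
        true_signal, background = 5.0, 20.0
        samples_on  = true_signal + background + rng.normal(0, 1, 1000)  # LED on
        samples_off = background + rng.normal(0, 1, 1000)                # LED off

        estimate = samples_on.mean() - samples_off.mean()  # background-free signal
        print(f"recovered signal: {estimate:.2f} (true {true_signal})")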

  1. [Research of dual-photoelastic-modulator-based beat frequency modulation and Fourier-Bessel transform imaging spectrometer].

    PubMed

    Wang, Zhi-Bin; Zhang, Rui; Wang, Yao-Li; Huang, Yan-Fei; Chen, You-Hua; Wang, Li-Fu; Yang, Qiang

    2014-02-01

    Existing photoelastic modulators (PEMs) operate at frequencies from tens to hundreds of kHz, so the modulated interference signal is too fast for an ordinary array detector to capture effectively. A new beat-frequency modulation method based on a dual photoelastic modulator (dual-PEM) and the Fourier-Bessel transform is proposed as a key component of a dual-PEM-based imaging spectrometer (Dual-PEM-IS) combined with a charge-coupled device (CCD). The dual PEMs are operated as an electro-optic circular retardance modulator; operating them at slightly different resonant frequencies w1 and w2 generates a differential signal at a much lower heterodyne frequency that modulates the incident light. This method not only retains the advantages of existing PEMs, but also lowers the frequency of the modulated photocurrent by 2-3 orders of magnitude (to 10-500 Hz), so that it can be detected by a common array detector, and the incident light spectra can be obtained by a Fourier-Bessel transform of the low-frequency component of the modulated signal. The method gives the PEM the dual capability of imaging and spectral measurement. The basic principle is introduced, the basic equations are derived, and the feasibility is verified through numerical simulation and experiment. The effect of deviations of the optical path difference is also analyzed. This method has potential applications in imaging spectrometer technology, and this work provides the necessary theoretical basis for remote sensing with the new Dual-PEM-IS and for the engineering implementation of spectral inversion.
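
    A numerical sketch of the beat-frequency principle, with made-up resonance frequencies and retardation amplitude: the detected intensity depends on the retardation difference of the two PEMs, which evolves at the much lower heterodyne frequency w1 - w2.

        # Sketch only: two PEMs at slightly different resonances produce a
        # low-frequency beat in the detected intensity.
        import numpy as np

        f1, f2 = 50_000.0, 50_100.0          # PEM resonances (Hz); beat at 100 Hz
        t = np.linspace(0.0, 0.05, 200_000)  # 50 ms of signal
        delta = 2.405 * (np.sin(2 * np.pi * f1 * t) - np.sin(2 * np.pi * f2 * t))
        intensity = 0.5 * (1.0 + np.cos(delta))  # idealized detector signal
        # A slow array detector integrates away the ~50 kHz carrier and sees
        # the 100 Hz beat, from which Fourier-Bessel analysis recovers spectra.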

  2. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harlod

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of image processing filters and parameters; they are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed image, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, the CLAHE block size, and the CLAHE clip limit. The goal of the optimization is to maximize the entropy of the processed image. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and significantly outperforms the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
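
    A sketch of the enhancement chain with entropy as the objective, using OpenCV. A coarse grid search stands in for the interior-point optimizer, the parameter ranges are invented, and `image` is a placeholder uint8 setup image.

        # Sketch only: high-pass + CLAHE enhancement, parameters chosen by
        # maximizing the entropy of the result.
        import cv2
        import numpy as np

        def entropy(img_u8):
            hist = np.bincount(img_u8.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            p = p[p > 0]
            return -(p * np.log2(p)).sum()

        def enhance(img_u8, weight, clip, tiles):
            blur = cv2.GaussianBlur(img_u8, (0, 0), sigmaX=5)
            hipass = cv2.addWeighted(img_u8, 1.0 + weight, blur, -weight, 0)
            clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tiles, tiles))
            return clahe.apply(hipass)

        best = max(
            ((w, c, t) for w in (0.3, 0.6, 1.0)
                       for c in (1.0, 2.0, 4.0)
                       for t in (4, 8, 16)),
            key=lambda p: entropy(enhance(image, *p)),
        )
        print("selected (weight, clip limit, tile size):", best)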

  3. On-line object feature extraction for multispectral scene representation

    NASA Technical Reports Server (NTRS)

    Ghassemian, Hassan; Landgrebe, David

    1988-01-01

    A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of multispectral image data and with data transmission, storage, archival and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies that exist among adjacent pixels are intelligently incorporated into the decision-making process. A unity relation that must exist among the pixels of an object was defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class-separability information within the data. For on-line object extraction, the path hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features are evaluated.

  4. An evaluation of EREP (Skylab) and ERTS imagery for integrated natural resources survey

    NASA Technical Reports Server (NTRS)

    Vangenderen, J. L. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. An experimental procedure has been devised and is being tested for natural resource surveys to cope with the problems of interpreting and processing the large quantities of data provided by Skylab and ERTS. Some basic aspects of orbital imagery such as scale, the role of repetitive coverage, and types of sensors are being examined in relation to integrated surveys of natural resources and regional development planning. Extrapolation away from known ground conditions, a fundamental technique for mapping resources, becomes very effective when used on orbital imagery supported by field mapping. Meaningful boundary delimitations can be made on orbital images using various image enhancement techniques. To meet the needs of many developing countries, this investigation into the use of satellite imagery for integrated resource surveys involves the analysis of the images by means of standard visual photointerpretation methods.

  5. THz-wave parametric sources and imaging applications

    NASA Astrophysics Data System (ADS)

    Kawase, Kodo

    2004-12-01

    We have studied the generation of terahertz (THz) waves by optical parametric processes based on laser light scattering from the polariton mode of nonlinear crystals. Using parametric oscillation of a MgO-doped LiNbO3 crystal pumped by a nanosecond Q-switched Nd:YAG laser, we have realized a widely tunable coherent THz-wave source with a simple configuration. We have also developed a novel basic technology for THz imaging which allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Further, we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
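
    Component spatial pattern analysis can be sketched as per-pixel non-negative unmixing against previously measured absorption spectra. `spectra` (bands x chemicals) and `cube` (bands x rows x cols) are placeholder arrays, not the authors' data format.

        # Sketch only: per-pixel non-negative unmixing of a multispectral
        # transillumination cube into known chemical components.
        import numpy as np
        from scipy.optimize import nnls

        n_bands, rows, cols = cube.shape
        abundance = np.zeros((spectra.shape[1], rows, cols))
        for i in range(rows):
            for j in range(cols):
                abundance[:, i, j], _ = nnls(spectra, cube[:, i, j])
        # abundance[k] now maps where chemical k (e.g. MDMA) is concentrated.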

  6. Luciferase Protein Complementation Assays for Bioluminescence Imaging of Cells and Mice

    PubMed Central

    Luker, Gary D.; Luker, Kathryn E.

    2015-01-01

    Summary Protein fragment complementation assays (PCAs) with luciferase reporters currently are the preferred method for detecting and quantifying protein-protein interactions in living animals. At the most basic level, PCAs involve fusion of two proteins of interest to enzymatically inactive fragments of luciferase. Upon association of the proteins of interest, the luciferase fragments are capable of reconstituting enzymatic activity to generate luminescence in vivo. In addition to bi-molecular luciferase PCAs, unimolecular biosensors for hormones, kinases, and proteases also have been developed using target peptides inserted between inactive luciferase fragments. Luciferase PCAs offer unprecedented opportunities to quantify dynamics of protein-protein interactions in intact cells and living animals, but successful use of luciferase PCAs in cells and mice involves careful consideration of many technical factors. This chapter discusses the design of luciferase PCAs appropriate for animal imaging, including construction of reporters, incorporation of reporters into cells and mice, imaging techniques, and data analysis. PMID:21153371

  7. Basic concepts of MR imaging, diffusion MR imaging, and diffusion tensor imaging.

    PubMed

    de Figueiredo, Eduardo H M S G; Borgonovi, Arthur F N G; Doring, Thomas M

    2011-02-01

    MR image contrast is based on intrinsic tissue properties and on specific pulse sequences and parameter adjustments. A growing number of MR imaging applications are based on the diffusion properties of water. To better understand diffusion-weighted imaging, a brief overview of MR physics is presented in this article, followed by the physics of the evolving techniques of diffusion MR imaging and diffusion tensor imaging. Copyright © 2011. Published by Elsevier Inc.
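
    As a worked example of the diffusion-weighted signal model behind these techniques, the block below computes an apparent diffusion coefficient (ADC) from two signal values using the monoexponential relation Sb = S0 * exp(-b * ADC); the numbers are illustrative.

        # Sketch only: ADC from one unweighted and one diffusion-weighted signal.
        import numpy as np

        b = 1000.0             # diffusion weighting, s/mm^2
        S0, Sb = 480.0, 220.0  # signal without / with diffusion weighting
        adc = np.log(S0 / Sb) / b
        print(f"ADC = {adc:.2e} mm^2/s")  # ~7.8e-4 mm^2/s, tissue-like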

  8. Co-registered Topographical, Band Excitation Nanomechanical, and Mass Spectral Imaging Using a Combined Atomic Force Microscopy/Mass Spectrometry Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ovchinnikova, Olga S.; Tai, Tamin; Bocharova, Vera

    The advancement of a hybrid atomic force microscopy/mass spectrometry imaging platform demonstrating for the first time co-registered topographical, band excitation nanomechanical, and mass spectral imaging of a surface using a single instrument is reported. The mass spectrometry-based chemical imaging component of the system utilized nanothermal analysis probes for pyrolytic surface sampling, followed by atmospheric pressure chemical ionization of the gas-phase species produced, with subsequent mass analysis. The basic instrumental setup and operation are discussed, and the multimodal imaging capability and utility are demonstrated using a phase-separated polystyrene/poly(2-vinylpyridine) polymer blend thin film. The topography and band excitation images showed that the valley and plateau regions of the thin-film surface were comprised primarily of one of the two polymers in the blend, with the mass spectral chemical image used to definitively identify the polymers at the different locations. Data point pixel size for the topography (390 nm x 390 nm), band excitation (781 nm x 781 nm), and mass spectrometry (690 nm x 500 nm) images was comparable and submicrometer in all three cases, but the data voxel size for each of the three images was dramatically different. The topography image was uniquely a surface measurement, whereas the band excitation image included information from an estimated 10 nm deep into the sample and the mass spectral image from 110-140 nm in depth. Moreover, because of this dramatic sampling-depth variance, some differences in the band excitation and mass spectrometry chemical images were observed and were interpreted to indicate the presence of a buried interface in the sample. The spatial resolution of the mass spectral image was estimated to be between 1.5 μm and 2.6 μm, based on the ability to distinguish surface features in that image that were also observed in the other images.

  10. Synthetic Foveal Imaging Technology

    NASA Technical Reports Server (NTRS)

    Hoenk, Michael; Monacos, Steve; Nikzad, Shouleh

    2009-01-01

    Synthetic Foveal Imaging Technology (SyFT) is an emerging discipline of image capture and image-data processing that offers the prospect of greatly increased capabilities for real-time processing of large, high-resolution images (including mosaic images) for such purposes as automated recognition and tracking of moving objects of interest. SyFT offers a solution to the image-data-processing problem arising from the proposed development of gigapixel mosaic focal-plane image-detector assemblies for very wide field-of-view imaging with high resolution for detecting and tracking sparse objects or events within narrow subfields of view. Without the dynamic adaptation afforded by SyFT, identifying and tracking such objects or events would require post-processing an image-data space consisting of terabytes of data. Such post-processing would be time-consuming; as a consequence, significant events could be missed entirely because of their time evolution, or could not be observed at the required fidelity without real-time adaptations such as adjusting focal-plane operating conditions or aiming the focal plane in different directions to track the events. The basic concept of foveal imaging is straightforward: in imitation of a natural eye, a foveal-vision image sensor is designed to offer higher resolution in a small region of interest (ROI) within its field of view. Foveal vision reduces the amount of unwanted information that must be transferred from the image sensor to external image-data-processing circuitry. This basic concept is not new in itself: indeed, image sensors based on these concepts have been described in several previous NASA Tech Briefs articles. Active-pixel integrated-circuit image sensors that can be programmed in real time to effect foveal artificial vision on demand are one such example. What is new in SyFT is a synergistic combination of recent advances in foveal imaging, computing, and related fields, along with a generalization of the basic foveal-vision concept to admit a synthetic fovea that is not restricted to one contiguous region of an image.

  11. Symmetric nonnegative matrix factorization: algorithms and applications to probabilistic clustering.

    PubMed

    He, Zhaoshui; Xie, Shengli; Zdunek, Rafal; Zhou, Guoxu; Cichocki, Andrzej

    2011-12-01

    Nonnegative matrix factorization (NMF) is an unsupervised learning method useful in various applications including image processing and semantic analysis of documents. This paper focuses on symmetric NMF (SNMF), which is a special case of NMF decomposition. Three parallel multiplicative update algorithms that directly use level-3 basic linear algebra subprograms (BLAS) are developed for this problem. First, by minimizing the Euclidean distance, a multiplicative update algorithm is proposed, and its convergence under mild conditions is proved. Based on it, we further propose two fast parallel methods: the α-SNMF and β-SNMF algorithms. All of them are easy to implement. These algorithms are applied to probabilistic clustering. We demonstrate their effectiveness for facial image clustering, document categorization, and pattern clustering in gene expression.
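
    A minimal sketch of a damped multiplicative update for SNMF minimizing ||A - HH^T||_F^2 in plain NumPy. The damping beta = 1/2 and the clustering read-out are illustrative choices in the spirit of such algorithms, not the paper's exact α-/β-SNMF variants.

        # Sketch only: symmetric NMF by a damped multiplicative update.
        import numpy as np

        def snmf(A, rank, iters=500, beta=0.5, eps=1e-12):
            rng = np.random.default_rng(0)
            H = rng.random((A.shape[0], rank))
            for _ in range(iters):
                numer = A @ H
                denom = H @ (H.T @ H) + eps
                H *= (1.0 - beta) + beta * (numer / denom)  # damped step
            return H

        # Probabilistic clustering read-out: interpret rows of H as soft labels.
        # H = snmf(similarity_matrix, rank=3); labels = H.argmax(axis=1)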

  12. Securing quality of camera-based biomedical optics

    NASA Astrophysics Data System (ADS)

    Guse, Frank; Kasper, Axel; Zinter, Bob

    2009-02-01

    As sophisticated optical imaging technologies move into clinical applications, manufacturers need to guarantee that their products meet the required performance criteria over long lifetimes and in very different environmental conditions. Consistent quality management marks critical component features derived from end-user requirements in a top-down approach. Careful risk analysis in the design phase defines the sample sizes for production tests, whereas first-article inspection assures the reliability of the production processes. We demonstrate the application of these basic quality principles to camera-based biomedical optics for a variety of examples including molecular diagnostics, dental imaging, ophthalmology and digital radiography, covering a wide range of CCD/CMOS chip sizes and resolutions. Novel concepts in fluorescence detection and structured illumination are also highlighted.

  13. Smart Cameras for Remote Science Survey

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Abbey, William; Allwood, Abigail; Bekker, Dmitriy; Bornstein, Benjamin; Cabrol, Nathalie A.; Castano, Rebecca; Estlin, Tara; Fuchs, Thomas; Wagstaff, Kiri L.

    2012-01-01

    Communication with remote exploration spacecraft is often intermittent and bandwidth is highly constrained. Future missions could use onboard science data understanding to prioritize downlink of critical features [1], draft summary maps of visited terrain [2], or identify targets of opportunity for follow-up measurements [3]. We describe a generic approach to classify geologic surfaces for autonomous science operations, suitable for parallelized implementations in FPGA hardware. We map these surfaces with texture channels - distinctive numerical signatures that differentiate properties such as roughness, pavement coatings, regolith characteristics, sedimentary fabrics and differential outcrop weathering. This work describes our basic image analysis approach and reports an initial performance evaluation using surface images from the Mars Exploration Rovers. Future work will incorporate these methods into camera hardware for real-time processing.

  14. Low-rate image coding using vector quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makur, A.

    1990-01-01

    This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low-rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
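
    For reference, the baseline coder that the thesis modifies can be sketched as block extraction plus a k-means codebook; the block size and codebook size here are illustrative.

        # Sketch only: baseline vector quantization with a k-means codebook.
        import numpy as np
        from sklearn.cluster import KMeans

        def vq_code(img, block=4, codebook_size=256):
            h, w = (d - d % block for d in img.shape)   # crop to whole blocks
            vecs = (img[:h, :w]
                    .reshape(h // block, block, w // block, block)
                    .transpose(0, 2, 1, 3)
                    .reshape(-1, block * block))
            km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(vecs)
            indices = km.predict(vecs)                  # transmit these indices
            rate = np.log2(codebook_size) / block**2
            print(f"rate: {rate:.3f} bits/pixel")       # 0.5 bpp for this setup
            return indices, km.cluster_centers_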

  15. Segmentation and quantification of subcellular structures in fluorescence microscopy images using Squassh.

    PubMed

    Rizk, Aurélien; Paul, Grégory; Incardona, Pietro; Bugarski, Milica; Mansouri, Maysam; Niemann, Axel; Ziegler, Urs; Berger, Philipp; Sbalzarini, Ivo F

    2014-03-01

    Detection and quantification of fluorescently labeled molecules in subcellular compartments is a key step in the analysis of many cell biological processes. Pixel-wise colocalization analyses, however, are not always suitable, because they do not provide object-specific information, and they are vulnerable to noise and background fluorescence. Here we present a versatile protocol for a method named 'Squassh' (segmentation and quantification of subcellular shapes), which is used for detecting, delineating and quantifying subcellular structures in fluorescence microscopy images. The workflow is implemented in freely available, user-friendly software. It works on both 2D and 3D images, accounts for the microscope optics and for uneven image background, computes cell masks and provides subpixel accuracy. The Squassh software enables both colocalization and shape analyses. The protocol can be applied in batch, on desktop computers or computer clusters, and it usually requires <1 min and <5 min for 2D and 3D images, respectively. Basic computer-user skills and some experience with fluorescence microscopy are recommended to successfully use the protocol.

  16. Clinical utility of resting-state functional connectivity magnetic resonance imaging for mood and cognitive disorders.

    PubMed

    Takamura, T; Hanakawa, T

    2017-07-01

    Although functional magnetic resonance imaging (fMRI) has long been used to assess task-related brain activity in neuropsychiatric disorders, it has not yet become a widely available clinical tool. Resting-state fMRI (rs-fMRI) has been the subject of recent attention in the fields of basic and clinical neuroimaging research. This method enables investigation of the functional organization of the brain and of alterations of resting-state networks (RSNs) in patients with neuropsychiatric disorders. Rs-fMRI does not require participants to perform a demanding task, in contrast to task fMRI, which often requires participants to follow complex instructions. This gives rs-fMRI a number of advantages over task fMRI for application with neuropsychiatric patients: while task fMRI is straightforward to apply with healthy participants, it is difficult to apply with patients with psychiatric and neurological disorders, who may have difficulty performing demanding cognitive tasks. Here, we review the basic methodology and analysis techniques relevant to clinical studies, and the clinical applications of the technique for examining neuropsychiatric disorders, focusing on mood disorders (major depressive disorder and bipolar disorder) and dementia (Alzheimer's disease and mild cognitive impairment).

  17. What Is an Image?

    ERIC Educational Resources Information Center

    Zetie, K. P.

    2017-01-01

    In basic physics, often in their first year of study of the subject, students meet the concept of an image, for example when using pinhole cameras and finding the position of an image in a mirror. They are also familiar with the term in photography and design, through software which allows image manipulation, even "in-camera" on most…

  18. Dual-Energy CT: Basic Principles, Technical Approaches, and Applications in Musculoskeletal Imaging (Part 1).

    PubMed

    Omoumi, Patrick; Becce, Fabio; Racine, Damien; Ott, Julien G; Andreisek, Gustav; Verdun, Francis R

    2015-12-01

    In recent years, technological advances have allowed manufacturers to implement dual-energy computed tomography (DECT) on clinical scanners. With its unique ability to differentiate basis materials by their atomic number, DECT has opened new perspectives in imaging. DECT has been used successfully in musculoskeletal imaging with applications ranging from detection, characterization, and quantification of crystal and iron deposits; to simulation of noncalcium (improving the visualization of bone marrow lesions) or noniodine images. Furthermore, the data acquired with DECT can be postprocessed to generate monoenergetic images of varying kiloelectron volts, providing new methods for image contrast optimization as well as metal artifact reduction. The first part of this article reviews the basic principles and technical aspects of DECT including radiation dose considerations. The second part focuses on applications of DECT to musculoskeletal imaging including gout and other crystal-induced arthropathies, virtual noncalcium images for the study of bone marrow lesions, the study of collagenous structures, applications in computed tomography arthrography, as well as the detection of hemosiderin and metal particles. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  19. Dual-Energy CT: Basic Principles, Technical Approaches, and Applications in Musculoskeletal Imaging (Part 2).

    PubMed

    Omoumi, Patrick; Verdun, Francis R; Guggenberger, Roman; Andreisek, Gustav; Becce, Fabio

    2015-12-01

    In recent years, technological advances have allowed manufacturers to implement dual-energy computed tomography (DECT) on clinical scanners. With its unique ability to differentiate basis materials by their atomic number, DECT has opened new perspectives in imaging. DECT has been successfully used in musculoskeletal imaging with applications ranging from detection, characterization, and quantification of crystal and iron deposits, to simulation of noncalcium (improving the visualization of bone marrow lesions) or noniodine images. Furthermore, the data acquired with DECT can be postprocessed to generate monoenergetic images of varying kiloelectron volts, providing new methods for image contrast optimization as well as metal artifact reduction. The first part of this article reviews the basic principles and technical aspects of DECT including radiation dose considerations. The second part focuses on applications of DECT to musculoskeletal imaging including gout and other crystal-induced arthropathies, virtual noncalcium images for the study of bone marrow lesions, the study of collagenous structures, applications in computed tomography arthrography, as well as the detection of hemosiderin and metal particles. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  20. A framework for farmland parcels extraction based on image classification

    NASA Astrophysics Data System (ADS)

    Liu, Guoying; Ge, Wenying; Song, Xu; Zhao, Hongdan

    2018-03-01

    It is very important for the government to build an accurate national basic cultivated land database, and farmland parcel extraction is one of the basic steps in this work. However, in past years people had to spend much time determining whether an area was a farmland parcel, since they were limited to interpreting remote sensing images visually. To overcome this problem, this study proposes a method to extract farmland parcels by means of image classification. In the proposed method, farmland areas and ridge areas of the classification map are semantically processed independently and the results are fused to form the final farmland parcels. Experiments on high-spatial-resolution remote sensing images have shown the effectiveness of the proposed method.

  1. CME Velocity and Acceleration Error Estimates Using the Bootstrap Method

    NASA Technical Reports Server (NTRS)

    Michalek, Grzegorz; Gopalswamy, Nat; Yashiro, Seiji

    2017-01-01

    The bootstrap method is used to determine errors of the basic attributes of coronal mass ejections (CMEs) visually identified in images obtained by the Solar and Heliospheric Observatory (SOHO) mission's Large Angle and Spectrometric Coronagraph (LASCO) instruments. The basic parameters of CMEs are stored, among others, in a database known as the SOHO/LASCO CME catalog and are widely employed in many research studies. The basic attributes of CMEs (e.g. velocity and acceleration) are obtained from manually generated height-time plots. The subjective nature of manual measurements introduces random errors that are difficult to quantify, and in many studies the impact of such measurement errors is overlooked. In this study we present a new way to estimate measurement errors in the basic attributes of CMEs. This approach is computer-intensive because it requires repeating the original data analysis procedure several times using replicate datasets; it is commonly called the bootstrap method in the literature. We show that the bootstrap approach can be used to estimate the errors of the basic attributes of CMEs having moderately large numbers of height-time measurements. The velocity errors are small in the vast majority of cases and depend mostly on the number of height-time points measured for a particular event. In the case of acceleration, the errors are significant, and for more than half of all CMEs they are larger than the acceleration itself.
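
    A sketch of a residual-resampling bootstrap for one CME, with invented height-time points: refit a quadratic to each replicate and read the scatter of the fitted terms as the velocity and acceleration errors.

        # Sketch only: bootstrap errors for velocity and acceleration from a
        # manual height-time profile (values below are illustrative).
        import numpy as np

        t = np.array([0.0, 0.4, 0.8, 1.2, 1.6, 2.0, 2.4])   # hours
        h = np.array([3.1, 4.0, 5.2, 6.1, 7.4, 8.8, 10.1])  # solar radii

        coef = np.polyfit(t, h, 2)                 # h(t) = a*t^2 + b*t + c
        resid = h - np.polyval(coef, t)

        rng = np.random.default_rng(1)
        boot = np.array([
            np.polyfit(t, np.polyval(coef, t) + rng.choice(resid, resid.size), 2)
            for _ in range(2000)                   # replicate datasets
        ])
        v_err = boot[:, 1].std()                   # error of linear term (velocity)
        a_err = (2 * boot[:, 0]).std()             # error of acceleration = 2a
        print(f"velocity error ~ {v_err:.3f} Rsun/h, "
              f"acceleration error ~ {a_err:.3f} Rsun/h^2")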

  2. MALDI imaging mass spectrometry analysis-A new approach for protein mapping in multiple sclerosis brain lesions.

    PubMed

    Maccarrone, Giuseppina; Nischwitz, Sandra; Deininger, Sören-Oliver; Hornung, Joachim; König, Fatima Barbara; Stadelmann, Christine; Turck, Christoph W; Weber, Frank

    2017-03-15

    Multiple sclerosis is a disease of the central nervous system characterized by recurrent inflammatory demyelinating lesions in the early disease stage. Lesion formation and mechanisms leading to lesion remyelination are not fully understood. Matrix Assisted Laser Desorption Ionisation Mass Spectrometry imaging (MALDI-IMS) is a technology which analyses proteins and peptides in tissue, preserves their spatial localization, and generates molecular maps within the tissue section. In a pilot study we employed MALDI imaging mass spectrometry to profile and identify peptides and proteins expressed in normal-appearing white matter, grey matter and multiple sclerosis brain lesions with different extents of remyelination. The unsupervised clustering analysis of the mass spectra generated images which reflected the tissue section morphology in luxol fast blue stain and in myelin basic protein immunohistochemistry. Lesions with low remyelination extent were defined by compounds with molecular weight smaller than 5300Da, while more completely remyelinated lesions showed compounds with molecular weights greater than 15,200Da. An in-depth analysis of the mass spectra enabled the detection of cortical lesions which were not seen by routine luxol fast blue histology. An ion mass, mainly distributed at the rim of multiple sclerosis lesions, was identified by liquid chromatography and tandem mass spectrometry as thymosin beta-4, a protein known to be involved in cell migration and in restorative processes. The ion mass of thymosin beta-4 was profiled by MALDI imaging mass spectrometry in brain slides of 12 multiple sclerosis patients and validated by immunohistochemical analysis. In summary, our results demonstrate the ability of the MALDI-IMS technology to map proteins within the brain parenchyma and multiple sclerosis lesions and to identify potential markers involved in multiple sclerosis pathogenesis and/or remyelination. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Learning Photogrammetry with Interactive Software Tool PhoX

    NASA Astrophysics Data System (ADS)

    Luhmann, T.

    2016-06-01

    Photogrammetry is a complex topic in high-level university teaching, especially in the fields of geodesy, geoinformatics and metrology, where high-quality results are demanded. In addition, more and more black-box solutions for 3D image processing and point cloud generation are available that generate nice results easily, e.g. by structure-from-motion approaches. Within this context, the classical approach to teaching photogrammetry (e.g. focusing on aerial stereophotogrammetry) has to be reformed in order to educate students and professionals in new topics and provide them with more information behind the scenes. For around 20 years, photogrammetry courses at the Jade University of Applied Sciences in Oldenburg, Germany, have included the use of digital photogrammetry software that provides individual exercises, deep analysis of calculation results and a wide range of visualization tools for almost all standard tasks in photogrammetry. In recent years, the software package PhoX has been developed as part of a new didactic concept in photogrammetry and related subjects. It also serves as an analysis tool in recent research projects. PhoX consists of a project-oriented data structure for images, image data, measured points and features, and 3D objects. It provides almost all basic photogrammetric measurement tools, image processing, calculation methods, graphical analysis functions, simulations and much more. Students use the program to conduct predefined exercises where they have the opportunity to analyse results at a high level of detail. This includes the analysis of statistical quality parameters but also the meaning of transformation parameters, rotation matrices, calibration and orientation data. As one specific advantage, PhoX allows for the interactive modification of single parameters and a direct view of the resulting effect in image or object space.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apte, A; Veeraraghavan, H; Oh, J

    Purpose: To present a free, open-source platform to facilitate radiomics research: the “Radiomics toolbox” in CERR. Method: There is a scarcity of open-source tools that support end-to-end modeling of image features to predict patient outcomes. The “Radiomics toolbox” strives to fill the need for such a software platform. The platform supports (1) import of various image modalities such as CT, PET, MR, SPECT and US; (2) contouring tools to delineate structures of interest; (3) extraction and storage of image-based features such as first-order statistics, gray-level co-occurrence and zone-size matrix-based texture features, and shape features; and (4) statistical analysis. Statistical analysis of the extracted features is supported with basic functionality that includes univariate correlations and Kaplan-Meier curves, and advanced functionality that includes feature reduction and multivariate modeling. The graphical user interface and the data management are implemented in Matlab for ease of development and code readability for a wide audience. Open-source software developed in other programming languages is integrated to enhance various components of this toolbox, for example, the Java-based DCM4CHE for DICOM import and R for statistical analysis. Results: The Radiomics toolbox will be distributed as open-source software under a GNU license. The toolbox was prototyped for modeling an oropharyngeal PET dataset at MSKCC; that analysis will be presented in a separate paper. Conclusion: The Radiomics Toolbox provides an extensible platform for extracting and modeling image features. To emphasize new uses of CERR for radiomics and image-based research, we have changed the name from the “Computational Environment for Radiotherapy Research” to the “Computational Environment for Radiological Research”.

  5. Novel non-contact retina camera for the rat and its application to dynamic retinal vessel analysis

    PubMed Central

    Link, Dietmar; Strohmaier, Clemens; Seifert, Bernd U.; Riemer, Thomas; Reitsamer, Herbert A.; Haueisen, Jens; Vilser, Walthard

    2011-01-01

    We present a novel non-invasive and non-contact system for reflex-free retinal imaging and dynamic retinal vessel analysis in the rat. Theoretical analysis was performed prior to development of the new optical design, taking into account the optical properties of the rat eye and its specific illumination and imaging requirements. A novel optical model of the rat eye was developed for use with standard optical design software, facilitating both sequential and non-sequential modes. A retinal camera for the rat was constructed using standard optical and mechanical components. The addition of a customized illumination unit and existing standard software enabled dynamic vessel analysis. Seven-minute in-vivo vessel diameter recordings performed on 9 Brown-Norway rats showed stable readings. On average, the coefficient of variation was (1.1 ± 0.19) % for the arteries and (0.6 ± 0.08) % for the veins. The slope of the linear regression analysis was (0.56 ± 0.26) % for the arteries and (0.15 ± 0.27) % for the veins. In conclusion, the device can be used in basic studies of retinal vessel behavior. PMID:22076270
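    For readers unfamiliar with the two stability metrics quoted, a toy Python calculation on synthetic data (hypothetical sampling rate and units) shows how a coefficient of variation and a regression slope for such a recording might be obtained.

```python
import numpy as np

# Hypothetical 7-minute diameter recording sampled at 1 Hz (arbitrary units)
t = np.arange(420)
d = 100 + 0.5 * np.random.default_rng(1).standard_normal(420)

cv_percent = 100 * d.std(ddof=1) / d.mean()     # coefficient of variation
slope = np.polyfit(t, d, 1)[0]                   # linear trend per second
trend_percent = 100 * slope * 420 / d.mean()     # trend over the full recording
print(f"CV = {cv_percent:.2f}%, trend = {trend_percent:.2f}% per 7 min")
```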

  6. Visualizing excipient composition and homogeneity of Compound Liquorice Tablets by near-infrared chemical imaging

    NASA Astrophysics Data System (ADS)

    Wu, Zhisheng; Tao, Ou; Cheng, Wei; Yu, Lu; Shi, Xinyuan; Qiao, Yanjiang

    2012-02-01

    This study demonstrated that near-infrared chemical imaging (NIR-CI) is a promising technology for visualizing the spatial distribution and homogeneity of Compound Liquorice Tablets. The starch distribution (and, indirectly, the plant extract) could be spatially determined using the basic analysis of correlation between analytes (BACRA) method. The correlation coefficients between the starch spectrum and the spectrum of each sample were greater than 0.95. Building on the accurate determination of the starch distribution, a histogram-based method was proposed to assess the homogeneity of the distribution. The result demonstrated that the starch distribution in sample 3 was relatively heterogeneous according to four statistical parameters. Furthermore, agglomerate domains in each tablet were detected using the score image layers of a principal component analysis (PCA). Finally, a novel method named Standard Deviation of Macropixel Texture (SDMT) was introduced to detect agglomerates and heterogeneity based on binary images. Each binary image was divided into macropixels of different side lengths, and the number of zero values in each macropixel was counted to calculate a standard deviation. A curve was then fitted to the relationship between the standard deviation and the macropixel side length. The results demonstrated inter-tablet heterogeneity of both the starch and the total-compound distributions; at the same time, the slope and intercept parameters of the fitted curves indicated the similarity of the starch distribution and the inconsistency of the total-compound distribution within tablets.
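    A minimal Python sketch of the SDMT computation as described above (the published method's exact normalization may differ):

```python
import numpy as np

def sdmt(binary_img, sizes=(2, 4, 8, 16)):
    """Standard Deviation of Macropixel Texture: tile the binary image with
    square macropixels of a given side length, count zero-valued pixels per
    macropixel, and take the standard deviation of those counts per size."""
    out = {}
    for s in sizes:
        h = (binary_img.shape[0] // s) * s
        w = (binary_img.shape[1] // s) * s
        tiles = binary_img[:h, :w].reshape(h // s, s, w // s, s)
        zero_counts = (tiles == 0).sum(axis=(1, 3))
        out[s] = zero_counts.std(ddof=1)
    return out

# A heterogeneous image should give larger deviations than a uniform one:
rng = np.random.default_rng(0)
img = (rng.random((128, 128)) > 0.5).astype(int)
print(sdmt(img))
```

    The resulting standard deviation versus side length curve can then be fitted, e.g. with np.polyfit, to obtain the slope and intercept parameters used to compare tablets.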

  7. Focused ion beam (FIB)/scanning electron microscopy (SEM) in tissue structural research.

    PubMed

    Leser, Vladka; Milani, Marziale; Tatti, Francesco; Tkalec, Ziva Pipan; Strus, Jasna; Drobne, Damjana

    2010-10-01

    The focused ion beam (FIB) and scanning electron microscope (SEM) are commonly used in material sciences for imaging and analysis of materials. Over the last decade, the combined FIB/SEM system has proven to be also applicable in the life sciences. We have examined the potential of the focused ion beam/scanning electron microscope system for the investigation of biological tissues of the model organism Porcellio scaber (Crustacea: Isopoda). Tissue from digestive glands was prepared as for conventional SEM or as for transmission electron microscopy (TEM). The samples were transferred into FIB/SEM for FIB milling and an imaging operation. FIB-milled regions were secondary electron imaged, back-scattered electron imaged, or energy dispersive X-ray (EDX) analyzed. Our results demonstrated that FIB/SEM enables simultaneous investigation of sample gross morphology, cell surface characteristics, and subsurface structures. The same FIB-exposed regions were analyzed by EDX to provide basic compositional data. When samples were prepared as for TEM, the information obtained with FIB/SEM is comparable, though at limited magnification, to that obtained from TEM. A combination of imaging, micro-manipulation, and compositional analysis appears of particular interest in the investigation of epithelial tissues, which are subjected to various endogenous and exogenous conditions affecting their structure and function. The FIB/SEM is a promising tool for an overall examination of epithelial tissue under normal, stressed, or pathological conditions.

  8. Image fusion

    NASA Technical Reports Server (NTRS)

    Pavel, M.

    1993-01-01

    The topics covered include the following: a system overview of the basic components of a system designed to improve the ability of a pilot to fly through low-visibility conditions such as fog; the role of visual sciences; fusion issues; sensor characterization; sources of information; image processing; and image fusion.

  9. [Advantages and Application Prospects of Deep Learning in Image Recognition and Bone Age Assessment].

    PubMed

    Hu, T H; Wan, L; Liu, T A; Wang, M W; Chen, T; Wang, Y H

    2017-12-01

    Deep learning and neural network models have been new research directions and hot issues in the fields of machine learning and artificial intelligence in recent years. Deep learning has made breakthroughs in image and speech recognition, and has also been used extensively in face recognition and information retrieval because of its particular strengths. Bone X-ray images show variations in black-white-gray gradation, with characteristic contrast and intensity-level features. Based on these advantages of deep learning in image recognition, we combine it with research on bone age assessment to provide a basis for constructing an automatic forensic system for bone age assessment. This paper reviews the basic concepts and network architectures of deep learning, describes its recent research progress in image recognition in different research fields in China and abroad, and explores its advantages and application prospects in bone age assessment. Copyright© by the Editorial Department of Journal of Forensic Medicine.

  10. Lung parenchymal analysis on dynamic MRI in thoracic insufficiency syndrome to assess changes following surgical intervention

    NASA Astrophysics Data System (ADS)

    Jagadale, Basavaraj N.; Udupa, Jayaram K.; Tong, Yubing; Wu, Caiyun; McDonough, Joseph; Torigian, Drew A.; Campbell, Robert M.

    2018-02-01

    General surgeons, orthopedists, and pulmonologists individually treat patients with thoracic insufficiency syndrome (TIS). The benefits of growth-sparing procedures such as Vertical Expandable Prosthetic Titanium Rib (VEPTR) insertion for treating patients with TIS have been demonstrated. However, at present there is no objective assessment metric to examine different thoracic structural components individually as to their roles in the syndrome, in contributing to dynamics and function, and in influencing treatment outcome. Using thoracic dynamic MRI (dMRI), we have been developing a methodology to overcome this problem. In this paper, we extend this methodology from our previous structural analysis approaches to examining lung tissue properties. We process the T2-weighted dMRI images through a series of steps involving 4D image construction from the acquired dMRI images, intensity non-uniformity correction and standardization of the 4D image, lung segmentation, and estimation of the parameters describing lung tissue intensity distributions in the 4D image. Based on pre- and post-operative dMRI data sets from 25 TIS patients (predominantly neuromuscular and congenital conditions), we demonstrate how lung tissue can be characterized by the estimated distribution parameters. Our results show that standardized T2-weighted image intensity values decrease from the pre- to the post-operative condition, likely reflecting improved lung aeration post-operatively. In both pre- and post-operative conditions, the intensity values also decrease from end-expiration to end-inspiration, supporting the basic premise of our results.
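    As a rough sketch of the final estimation step, assuming a 4D array ordered as (respiratory phase, slice, row, column) and a matching lung mask (both hypothetical, not the authors' pipeline), per-phase intensity distribution parameters could be summarized as follows:

```python
import numpy as np

def lung_intensity_stats(volume_4d, mask_4d):
    """Summarize standardized T2-weighted intensities inside the lung mask
    for each respiratory phase of a 4D (t, z, y, x) image."""
    stats = []
    for t in range(volume_4d.shape[0]):
        vals = volume_4d[t][mask_4d[t] > 0]
        stats.append({"mean": vals.mean(),
                      "std": vals.std(ddof=1),
                      "median": np.median(vals)})
    return stats

# Hypothetical example: intensities should drop from end-expiration (t=0)
# to end-inspiration (t=1) as the lung aerates.
rng = np.random.default_rng(0)
vol = np.stack([rng.normal(300, 40, (8, 32, 32)),
                rng.normal(250, 40, (8, 32, 32))])
mask = np.ones_like(vol, dtype=bool)
for s in lung_intensity_stats(vol, mask):
    print(s)
```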

  11. Basic MRI for the liver oncologists and surgeons.

    PubMed

    Vu, Lan N; Morelli, John N; Szklaruk, Janio

    2017-01-01

    Magnetic resonance imaging (MRI) is the modality of choice for liver imaging due to its superior contrast resolution in comparison with computed tomography and its ability to provide both morphologic and physiologic information. The physics of MR is complex, but a detailed understanding is not required to appreciate findings on an MRI exam. Here, we introduce the basic principles of MRI with respect to hepatic imaging, focusing on various commonly encountered hepatic diseases. The purpose is to facilitate an appreciation of the various diagnostic capabilities of MR among hepatic oncologists and surgeons and to foster an understanding of when MR studies may be appropriate in the care of their patients.

  12. Discrete Neural Signatures of Basic Emotions.

    PubMed

    Saarimäki, Heini; Gotsopoulos, Athanasios; Jääskeläinen, Iiro P; Lampinen, Jouko; Vuilleumier, Patrik; Hari, Riitta; Sams, Mikko; Nummenmaa, Lauri

    2016-06-01

    Categorical models of emotions posit neurally and physiologically distinct human basic emotions. We tested this assumption by using multivariate pattern analysis (MVPA) to classify brain activity patterns of 6 basic emotions (disgust, fear, happiness, sadness, anger, and surprise) in 3 experiments. Emotions were induced with short movies or mental imagery during functional magnetic resonance imaging. MVPA accurately classified emotions induced by both methods, and the classification generalized from one induction condition to another and across individuals. Brain regions contributing most to the classification accuracy included medial and inferior lateral prefrontal cortices, frontal pole, precentral and postcentral gyri, precuneus, and posterior cingulate cortex. Thus, specific neural signatures across these regions hold representations of different emotional states in multimodal fashion, independently of how the emotions are induced. Similarity of subjective experiences between emotions was associated with similarity of neural patterns for the same emotions, suggesting a direct link between activity in these brain regions and the subjective emotional experience. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
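    As an illustration of the across-subject classification scheme (not the authors' code), a leave-one-subject-out MVPA baseline can be written in a few lines with scikit-learn; the data here are random placeholders:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Hypothetical data: one voxel-activation vector per trial, a label in 0..5
# for the six basic emotions, and a subject id per trial.
rng = np.random.default_rng(0)
X = rng.standard_normal((600, 2000))      # 600 trials x 2000 voxels
y = rng.integers(0, 6, 600)               # emotion labels
subjects = np.repeat(np.arange(10), 60)   # 10 subjects, 60 trials each

clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
scores = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
print("across-subject accuracy: %.3f (chance ~ 0.167)" % scores.mean())
```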

  13. Basic test framework for the evaluation of text line segmentation and text parameter extraction.

    PubMed

    Brodić, Darko; Milivojević, Dragan R; Milivojević, Zoran

    2010-01-01

    Text line segmentation is an essential stage in off-line optical character recognition (OCR) systems. It is a key because inaccurately segmented text lines will lead to OCR failure. Text line segmentation of handwritten documents is a complex and diverse problem, complicated by the nature of handwriting. Hence, text line segmentation is a leading challenge in handwritten document image processing. Due to inconsistencies in measurement and evaluation of text segmentation algorithm quality, some basic set of measurement methods is required. Currently, there is no commonly accepted one and all algorithm evaluation is custom oriented. In this paper, a basic test framework for the evaluation of text feature extraction algorithms is proposed. This test framework consists of a few experiments primarily linked to text line segmentation, skew rate and reference text line evaluation. Although they are mutually independent, the results obtained are strongly cross linked. In the end, its suitability for different types of letters and languages as well as its adaptability are its main advantages. Thus, the paper presents an efficient evaluation method for text analysis algorithms.

  15. Interpolation on the manifold of K component GMMs.

    PubMed

    Kim, Hyunwoo J; Adluru, Nagesh; Banerjee, Monami; Vemuri, Baba C; Singh, Vikas

    2015-12-01

    Probability density functions (PDFs) are fundamental objects in mathematics with numerous applications in computer vision, machine learning and medical imaging. The feasibility of basic operations such as computing the distance between two PDFs and estimating a mean of a set of PDFs is a direct function of the representation we choose to work with. In this paper, we study the Gaussian mixture model (GMM) representation of PDFs, motivated by its numerous attractive features: (1) GMMs are arguably more interpretable than, say, square-root parameterizations; (2) the model complexity can be explicitly controlled by the number of components; and (3) they are already widely used in many applications. The main contributions of this paper are numerical algorithms to enable basic operations on such objects that strictly respect their underlying geometry. For instance, when operating with a set of K component GMMs, a first order expectation is that the result of simple operations like interpolation and averaging should provide an object that is also a K component GMM. The literature provides very little guidance on enforcing such requirements systematically. It turns out that these tasks are important internal modules for analysis and processing of a field of ensemble average propagators (EAPs), common in diffusion weighted magnetic resonance imaging. We provide proof of principle experiments showing how the proposed algorithms for interpolation can facilitate statistical analysis of such data, essential to many neuroimaging studies. Separately, we also derive interesting connections of our algorithm with functional spaces of Gaussians, which may be of independent interest.
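    To make the closure requirement concrete: a naive Euclidean, component-wise interpolation already returns a K-component GMM when the components are pre-matched, as in the sketch below; the paper's contribution is to do this while respecting the underlying geometry, which this sketch deliberately ignores.

```python
import numpy as np

def interpolate_gmms(w1, mu1, cov1, w2, mu2, cov2, t):
    """Naive component-wise interpolation of two K-component GMMs with
    pre-matched components; returns another K-component GMM. (The paper's
    algorithms instead respect the manifold geometry; this only
    illustrates the closure requirement.)"""
    w = (1 - t) * w1 + t * w2
    w /= w.sum()                          # keep weights on the simplex
    mu = (1 - t) * mu1 + t * mu2
    cov = (1 - t) * cov1 + t * cov2       # convex combination stays positive definite
    return w, mu, cov

K, d = 3, 2
rng = np.random.default_rng(0)
w1 = np.full(K, 1 / K); w2 = np.array([0.5, 0.3, 0.2])
mu1, mu2 = rng.standard_normal((K, d)), rng.standard_normal((K, d))
eye = np.tile(np.eye(d), (K, 1, 1))
print(interpolate_gmms(w1, mu1, eye, w2, mu2, 2 * eye, 0.5))
```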

  16. A Versatile Image Processor For Digital Diagnostic Imaging And Its Application In Computed Radiography

    NASA Astrophysics Data System (ADS)

    Blume, H.; Alexandru, R.; Applegate, R.; Giordano, T.; Kamiya, K.; Kresina, R.

    1986-06-01

    In a digital diagnostic imaging department, the majority of operations for handling and processing of images can be grouped into a small set of basic operations, such as image data buffering and storage, image processing and analysis, image display, image data transmission and image data compression. These operations occur in almost all nodes of the diagnostic imaging communications network of the department. An image processor architecture was developed in which each of these functions has been mapped into hardware and software modules. The modular approach has advantages in terms of economics, service, expandability and upgradeability. The architectural design is based on the principles of hierarchical functionality, distributed and parallel processing and aims at real time response. Parallel processing and real time response are facilitated in part by a dual bus system: a VME control bus and a high speed image data bus consisting of 8 independent parallel 16-bit busses, capable of a combined throughput of up to 144 MBytes/sec. The presented image processor is versatile enough to meet the video rate processing needs of digital subtraction angiography, the large pixel matrix processing requirements of static projection radiography, or the broad range of manipulation and display needs of a multi-modality diagnostic work station. Several hardware modules are described in detail. To illustrate the capabilities of the image processor, processed 2000 x 2000 pixel computed radiographs are shown and estimated computation times for executing the processing operations are presented.

  17. Cardiovascular magnetic resonance physics for clinicians: part II

    PubMed Central

    2012-01-01

    This is the second of two reviews intended to cover the essential aspects of cardiovascular magnetic resonance (CMR) physics in a way that is understandable and relevant to clinicians using CMR in their daily practice. Starting with the basic pulse sequences and contrast mechanisms described in part I, it briefly discusses further approaches to accelerate image acquisition. It then continues by showing in detail how the contrast behaviour of black blood fast spin echo and bright blood cine gradient echo techniques can be modified by adding rf preparation pulses to derive a number of more specialised pulse sequences. The simplest examples described include T2-weighted oedema imaging, fat suppression and myocardial tagging cine pulse sequences. Two further important derivatives of the gradient echo pulse sequence, obtained by adding preparation pulses, are used in combination with the administration of a gadolinium-based contrast agent for myocardial perfusion imaging and the assessment of myocardial tissue viability using a late gadolinium enhancement (LGE) technique. These two imaging techniques are discussed in more detail, outlining the basic principles of each pulse sequence, the practical steps required to achieve the best results in a clinical setting and, in the case of perfusion, explaining some of the factors that influence current approaches to perfusion image analysis. The key principles of contrast-enhanced magnetic resonance angiography (CE-MRA) are also explained in detail, especially focusing on timing of the acquisition following contrast agent bolus administration, and current approaches to achieving time resolved MRA. Alternative MRA techniques that do not require the use of an exogenous contrast agent are summarised, and the specialised pulse sequence used to image the coronary arteries, using respiratory navigator gating, is described in detail. The article concludes by explaining the principle behind phase contrast imaging techniques which create images that represent the phase of the MR signal rather than the magnitude. It is shown how this principle can be used to generate velocity maps by designing gradient waveforms that give rise to a relative phase change that is proportional to velocity. Choice of velocity encoding range and key pitfalls in the use of this technique are discussed. PMID:22995744
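    The velocity-to-phase relationship referred to in the final paragraph is conventionally written in terms of gradient moments (standard MR physics, not specific to this review):

```latex
\phi = \gamma \int G(t)\,x(t)\,dt
     = \gamma x_0 \underbrace{\textstyle\int G(t)\,dt}_{M_0}
     + \gamma v \underbrace{\textstyle\int G(t)\,t\,dt}_{M_1},
\qquad x(t) = x_0 + v\,t ,
```

    so a bipolar gradient designed with M_0 = 0 yields a phase shift proportional to velocity, and the velocity-encoding value VENC = pi / (gamma M_1) is the velocity that maps to a phase of +/- pi.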

  18. GF-7 Imaging Simulation and Dsm Accuracy Estimate

    NASA Astrophysics Data System (ADS)

    Yue, Q.; Tang, X.; Gao, X.

    2017-05-01

    GF-7 is a two-line-array stereo imaging satellite for surveying and mapping, to be launched in 2018. Its resolution is about 0.8 m at the subastral point, corresponding to a swath width of about 20 km, and the viewing angles of its forward and backward cameras are 5 and 26 degrees. This paper proposes an imaging simulation method for GF-7 stereo images. WorldView-2 stereo images were used as the basic data for the simulation. That is, we did not use a DSM and DOM as basic data (the "ortho-to-stereo" method) but used a "stereo-to-stereo" method, which better reflects the differences in geometry and radiometry at different viewing angles. The drawback is that geometric error is introduced by two factors: the different viewing angles of the basic and simulated images, and inaccurate or missing ground reference data. We generated a DSM from the WorldView-2 stereo images. This WorldView-2 DSM was used both as the reference DSM for estimating the accuracy of the DSM generated from the simulated GF-7 stereo images, and as "ground truth" for establishing the relationship between WorldView-2 image points and simulated image points. Static MTF was simulated on the instantaneous focal-plane "image" by filtering. SNR was simulated at the electron level: the digital value of each WorldView-2 image point was converted to radiance and used as the radiance seen by the simulated GF-7 camera; this radiance was converted to an electron count n according to the physical parameters of the GF-7 camera, and a noise electron count n1 was drawn as a random number between -√n and √n. The total electron count accumulated by the TDI CCD was then converted to the digital value of the simulated GF-7 image. Sinusoidal curves with different amplitudes, frequencies and initial phases were used as attitude curves. Geometric installation errors of the CCD tiles were also simulated, considering rotation and translation factors. Finally, an accuracy estimate was made for the DSM generated from the simulated images.
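    The electron-level noise model reads directly as code; a minimal Python sketch with hypothetical calibration constants (the real GF-7 camera parameters are not given in this abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

# DN -> radiance -> electron count n, plus a uniform noise term in [-sqrt(n), sqrt(n)]
dn_wv2 = rng.integers(50, 2000, size=(512, 512)).astype(float)  # WorldView-2 DNs
gain_wv2 = 0.05                      # hypothetical radiance per DN
radiance = dn_wv2 * gain_wv2

electrons_per_radiance = 400.0       # hypothetical GF-7 camera response
n = radiance * electrons_per_radiance
n1 = rng.uniform(-np.sqrt(n), np.sqrt(n))   # noise electron count, as described

gain_gf7 = 0.002                     # hypothetical electrons -> DN conversion
dn_gf7 = np.clip(np.round((n + n1) * gain_gf7), 0, 4095)  # 12-bit quantization
```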

  19. Environmental analysis using integrated GIS and remotely sensed data - Some research needs and priorities

    NASA Technical Reports Server (NTRS)

    Davis, Frank W.; Quattrochi, Dale A.; Ridd, Merrill K.; Lam, Nina S.-N.; Walsh, Stephen J.

    1991-01-01

    This paper discusses some basic scientific issues and research needs in the joint processing of remotely sensed and GIS data for environmental analysis. Two general topics are treated in detail: (1) scale dependence of geographic data and the analysis of multiscale remotely sensed and GIS data, and (2) data transformations and information flow during data processing. The discussion of scale dependence focuses on the theory and applications of spatial autocorrelation, geostatistics, and fractals for characterizing and modeling spatial variation. Data transformations during processing are described within the larger framework of geographical analysis, encompassing sampling, cartography, remote sensing, and GIS. Development of better user interfaces between image processing, GIS, database management, and statistical software is needed to expedite research on these and other impediments to integrated analysis of remotely sensed and GIS data.
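    Spatial autocorrelation, the first tool mentioned, is typically quantified with Moran's I; a compact Python implementation for gridded data with rook (4-neighbour) contiguity weights (the standard formula, simplified here):

```python
import numpy as np

def morans_I(grid):
    """Moran's I spatial autocorrelation for a 2D grid with rook contiguity."""
    x = grid - grid.mean()
    num = 0.0
    W = 0.0
    # sum w_ij * x_i * x_j over horizontal and vertical neighbour pairs
    for a, b in ((x[:, :-1], x[:, 1:]), (x[:-1, :], x[1:, :])):
        num += 2 * (a * b).sum()      # each pair counted in both directions
        W += 2 * a.size               # number of nonzero weights
    return (x.size / W) * num / (x ** 2).sum()

rng = np.random.default_rng(0)
print(morans_I(rng.random((50, 50))))                      # ~0 for spatial noise
smooth = np.cumsum(np.cumsum(rng.random((50, 50)), 0), 1)
print(morans_I(smooth))                                    # strongly positive
```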

  20. Metadata requirements for results of diagnostic imaging procedures: a BIIF profile to support user applications

    NASA Astrophysics Data System (ADS)

    Brown, Nicholas J.; Lloyd, David S.; Reynolds, Melvin I.; Plummer, David L.

    2002-05-01

    A visible digital image is rendered from a set of digital image data. Medical digital image data can be stored in either (a) a pre-rendered format, corresponding to a photographic print, or (b) an un-rendered format, corresponding to a photographic negative. The appropriate image data storage format and associated header data (metadata) required by a user of the results of a diagnostic procedure recorded electronically depend on the task(s) to be performed. The DICOM standard provides a rich set of metadata that supports the needs of complex applications. Many end-user applications, such as simple report text viewing and display of a selected image, are not so demanding, and generic image formats such as JPEG are sometimes used. However, these lack some basic identification metadata. In this paper we make specific proposals for minimal extensions to generic image metadata, of value in various domains, which enable safe use in two simple healthcare end-user scenarios: (a) viewing of text and a selected JPEG image activated by a hyperlink and (b) viewing of one or more JPEG images together with superimposed text and graphics annotation using a file specified by a profile of the ISO/IEC Basic Image Interchange Format (BIIF).

  1. Effective and basic business strategic tools to overcome the DRA impact in outpatient imaging centers.

    PubMed

    Cerdena, Ernesto A; Corigliano, Barbara A

    2007-01-01

    The implementation of the Deficit Reduction Act (DRA) of 2005 has had adverse impacts on freestanding imaging centers and independent diagnostic testing facilities (IDTFs) throughout the nation, limiting patients' access to quality imaging and crippling organizations' bottom lines. Basic but effective strategic business tools should be formulated and executed to overcome the negative impact of the DRA. These should include creative and innovative process-improvement initiatives that reduce operational costs and optimize staffing, thus improving profitability. Radiology administrators should act as facilitators who articulate and instill the mission, core values, and vision of the organization in the staff. Equally important, leaders in the imaging industry need to show a strong commitment to bringing the center into a new paradigm of excellence and effective business operations.

  2. Improving Echo-Guided Procedures Using an Ultrasound-CT Image Fusion System.

    PubMed

    Diana, Michele; Halvax, Peter; Mertz, Damien; Legner, Andras; Brulé, Jean-Marcel; Robinet, Eric; Mutter, Didier; Pessaux, Patrick; Marescaux, Jacques

    2015-06-01

    Image fusion between ultrasound (US) and computed tomography (CT) or magnetic resonance can increase operator accuracy in targeting liver lesions, particularly when those are undetectable with US alone. We have developed a modular gel to simulate hepatic solid lesions for educational purposes in imaging and minimally invasive ablation techniques. We aimed to assess the impact of image fusion in targeting artificial hepatic lesions during the hands-on part of 2 courses (basic and advanced) in hepatobiliary surgery. Under US guidance, 10 fake tumors of various sizes were created in the livers of 2 pigs, by percutaneous injection of a biocompatible gel engineered to be hyperdense on CT scanning and barely detectable on US. A CT scan was obtained and a CT-US image fusion was performed using the ACUSON S3000 US system (Siemens Healthcare, Germany). A total of 12 blinded course attendants were asked in turn to perform a 10-minute liver scan with US alone followed by a 10-minute scan using image fusion. Using US alone, the expert managed to identify all lesions successfully. The true positive rate for course attendants with US alone was 14/36 and 2/24 in the advanced and basic courses, respectively. The total number of false positives identified was 26. With image fusion, the rate of true positives significantly increased to 31/36 (P < .001) in the advanced group and 16/24 in the basic group (P < .001). The total number of false positives, considering all participants, decreased to 4 (P < .001). Image fusion significantly increases accuracy in targeting hepatic lesions and might improve echo-guided procedures. © The Author(s) 2015.

  3. Characterization of an Extremely Basic Protein Derived from Granulosis Virus Nucleocapsids †

    PubMed Central

    Tweeten, Kathleen A.; Bulla, Lee A.; Consigli, Richard A.

    1980-01-01

    Nucleocapsids were isolated from purified enveloped nucleocapsids of Plodia interpunctella granulosis virus by treatment with Nonidet P-40. When analyzed on sodium dodecyl sulfate-polyacrylamide gels, the nucleocapsids consisted of eight polypeptides. One of these, a major component with a molecular weight of 12,500 (VP12), was selectively extracted from the nucleocapsids with 0.25 M sulfuric acid. Its electrophoretic mobility on acetic acid-urea gels was intermediate to that of cellular histones and protamine. Amino acid analysis showed that 39% of the amino acid residues of VP12 were basic: 27% were arginine and 12% were histidine. The remaining residues consisted primarily of serine, valine, and isoleucine. Proteins of similar arginine content also were extracted from the granulosis virus of Pieris rapae and from the nuclear polyhedrosis viruses of Spodoptera frugiperda and Autographa californica. The basic polypeptide appeared to be virus specific because it was found in nucleocapsids and virus-infected cells but not in uninfected cells. VP12 was not present in polypeptide profiles of granulosis virus capsids, indicating that it was an internal or core protein of the nucleocapsids. Electron microscopic observations suggested that the basic protein was associated with the viral DNA in the form of a DNA-protein complex. PMID:16789190

  4. Multiparametric Analysis of the Tumor Microenvironment: Hypoxia Markers and Beyond.

    PubMed

    Mayer, Arnulf; Vaupel, Peter

    2017-01-01

    We have established a novel in situ protein analysis pipeline, which is built upon highly sensitive, multichannel immunofluorescent staining of paraffin sections of human and xenografted tumor tissue. Specimens are digitized using slide scanners equipped with suitable light sources and fluorescence filter combinations. Resulting digital images are subsequently subjected to quantitative image analysis using a primarily object-based approach, which comprises segmentation of single cells or higher-order structures (e.g., blood vessels), cell shape approximation, measurement of signal intensities in individual fluorescent channels and correlation of these data with positional information for each object. Our approach could be particularly useful for the study of the hypoxic tumor microenvironment as it can be utilized to systematically explore the influence of spatial factors on cell phenotypes, e.g., the distance of a given cell type from the nearest blood vessel on the cellular expression of hypoxia-associated biomarkers and other proteins reflecting their specific state of activation or function. In this report, we outline the basic methodology and provide an outlook on possible use cases.
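    The distance-of-cell-to-vessel measurement described here maps naturally onto a distance transform; a minimal Python sketch (hypothetical inputs, not the authors' pipeline):

```python
import numpy as np
from scipy import ndimage

def distance_to_nearest_vessel(vessel_mask, cell_centroids):
    """For each segmented cell centroid (row, col), look up the Euclidean
    distance to the nearest blood-vessel pixel via a distance transform."""
    dist = ndimage.distance_transform_edt(~vessel_mask)  # distance to vessel pixels
    rows, cols = np.asarray(cell_centroids, dtype=int).T
    return dist[rows, cols]

# Hypothetical example: one vessel pixel, two cells.
mask = np.zeros((100, 100), dtype=bool)
mask[50, 50] = True
print(distance_to_nearest_vessel(mask, [(50, 60), (10, 10)]))  # [10.0, ~56.6]
```

    These per-cell distances can then be correlated with the per-channel biomarker intensities measured for the same objects.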

  5. Techniques for automatic large scale change analysis of temporal multispectral imagery

    NASA Astrophysics Data System (ADS)

    Mercovich, Ryan A.

    Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery, and require analyst input to provide anything more than a simple pixel level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm based on a set of metrics that performs a large area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large area images and provide useful information to an analyst about small regions that have undergone specific types of change while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys among others. By utilizing a feature based approach founded on applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring in large area and high resolution image sequences. The change detection and analysis algorithm developed could be adapted to many potential image change scenarios to perform automatic large scale analysis of change.
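    The pixel-level change-magnitude map that this work treats as a baseline can be produced by simple change vector analysis; a minimal Python sketch:

```python
import numpy as np

def change_vector_magnitude(img_t1, img_t2):
    """Pixel-wise magnitude of the spectral change vector between two
    co-registered multispectral images of shape (bands, rows, cols)."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    return np.sqrt((diff ** 2).sum(axis=0))

rng = np.random.default_rng(0)
t1 = rng.random((4, 256, 256))
t2 = t1.copy()
t2[:, 100:120, 100:120] += 0.5               # simulate a changed region
mag = change_vector_magnitude(t1, t2)
changed = mag > mag.mean() + 3 * mag.std()   # crude threshold into a change map
print(changed.sum(), "pixels flagged")
```

    The dissertation's point is precisely that such a map alone is not useful at high resolution; its metrics classify and contextualize the change rather than just thresholding its magnitude.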

  6. Effective and efficient analysis of spatio-temporal data

    NASA Astrophysics Data System (ADS)

    Zhang, Zhongnan

    Spatio-temporal data mining, i.e., mining knowledge from large amounts of spatio-temporal data, is a highly demanding field because huge amounts of spatio-temporal data have been collected in various applications, ranging from remote sensing to geographical information systems (GIS), computer cartography, environmental assessment and planning. The collected data far exceed humans' ability to analyze them, which makes it crucial to develop analysis tools. Recent studies on data mining have extended its scope from relational and transactional datasets to spatial and temporal datasets. Among the various forms of spatio-temporal data, remote sensing images play an important role, owing to the increasingly widespread deployment of Earth-observing satellites. In this dissertation, we propose two approaches to analyzing remote sensing data. The first applies association rule mining to image processing. Each image was divided into a number of image blocks, and a spatial relationship was built for these blocks during the dividing process. Because each image was shot in a time series, this turned a large number of images into a spatio-temporal dataset. The second implements the discovery of co-occurrence patterns from these images; the generated patterns represent subsets of spatial features that are located together in space and time. A weather analysis is composed of individual analyses of several meteorological variables, including temperature, pressure, dew point, wind, clouds and visibility. Local-scale models provide detailed analysis and forecasts of meteorological phenomena ranging from a few kilometers to about 100 kilometers in size. When some of these meteorological variables show particular change tendencies, severe weather follows in most cases. Using association rule discovery, we found that changes in certain meteorological variables are tightly related to severe weather that occurs soon afterwards. This dissertation is composed of three parts: an introduction, basic knowledge and related work, and my own three contributions to the development of approaches for spatio-temporal data mining: the DYSTAL algorithm, the STARSI algorithm, and the COSTCOP+ algorithm.

  7. Edge detection and localization with edge pattern analysis and inflection characterization

    NASA Astrophysics Data System (ADS)

    Jiang, Bo

    2012-05-01

    In general, edges are considered to be abrupt changes or discontinuities in the intensity distribution of a two-dimensional image signal. The accuracy of front-end edge detection methods in image processing affects the eventual success of higher-level pattern analysis downstream. To generalize edge detectors designed from a simple ideal step-function model to the real distortions found in natural images, this research on one-dimensional edge pattern analysis proposes an edge detection algorithm built from three basic edge patterns: ramp, impulse, and step. Following mathematical analysis, general rules for edge representation based upon the classification of edge types into these three categories (ramp, impulse, and step, or RIS) are developed to reduce detection and localization errors, especially the "double edge" effect that is an important drawback of derivative methods. However, when applying one-dimensional edge patterns to two-dimensional image processing, a new issue naturally arises: the edge detector should correctly mark inflections or junctions of edges. Research on human visual perception of objects and on information theory has pointed out that a pattern lexicon of "inflection micro-patterns" carries more information than a straight line. Research on scene perception likewise suggests that contours carrying more information are a more important factor in determining the success of scene categorization. Inflections and junctions are therefore extremely useful features, whose accurate description and reconstruction are significant in solving correspondence problems in computer vision. Hence, aside from the adoption of edge pattern analysis, inflection and junction characterization is also used to extend the traditional derivative edge detection algorithm. Experiments were conducted to test these propositions about edge detection and localization accuracy. The results support the idea that these improvements are effective in enhancing the accuracy of edge detection and localization.
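    A toy one-dimensional illustration of the RIS idea (not the dissertation's algorithm): the gradient signature of a profile can already separate the three patterns.

```python
import numpy as np

def classify_edge(profile, grad_thresh=0.1):
    """Crude classifier of a 1D edge profile into the three RIS patterns:
    a step has a single significant gradient sample, a ramp an extended
    same-sign gradient run, an impulse gradients of both signs."""
    g = np.diff(profile.astype(float))
    sig = np.abs(g) > grad_thresh * np.abs(g).max()
    if (g[sig] > 0).any() and (g[sig] < 0).any():
        return "impulse"
    return "step" if sig.sum() == 1 else "ramp"

x = np.arange(40)
print(classify_edge(np.where(x < 20, 0.0, 1.0)))           # step
print(classify_edge(np.clip((x - 10) / 15.0, 0, 1)))       # ramp
print(classify_edge(np.where(abs(x - 20) < 2, 1.0, 0.0)))  # impulse
```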

  8. Improved egg crack detection algorithm for modified pressure imaging system

    USDA-ARS?s Scientific Manuscript database

    Shell eggs with microcracks are often undetected during egg grading processes. In the past, a modified pressure imaging system was developed to detect eggs with microcracks without adversely affecting the quality of normal intact eggs. The basic idea of the modified pressure imaging system was to ap...

  9. Understanding MRI: basic MR physics for physicians.

    PubMed

    Currie, Stuart; Hoggard, Nigel; Craven, Ian J; Hadjivassiliou, Marios; Wilkinson, Iain D

    2013-04-01

    More frequently hospital clinicians are reviewing images from MR studies of their patients before seeking formal radiological opinion. This practice is driven by a multitude of factors, including an increased demand placed on hospital services, the wide availability of the picture archiving and communication system, time pressures for patient treatment (eg, in the management of acute stroke) and an inherent desire for the clinician to learn. Knowledge of the basic physical principles behind MRI is essential for correct image interpretation. This article, written for the general hospital physician, describes the basic physics of MRI taking into account the machinery, contrast weighting, spin- and gradient-echo techniques and pertinent safety issues. Examples provided are primarily referenced to neuroradiology reflecting the subspecialty for which MR currently has the greatest clinical application.

  10. A 3D Reconstruction Strategy of Vehicle Outline Based on Single-Pass Single-Polarization CSAR Data.

    PubMed

    Chen, Leping; An, Daoxiang; Huang, Xiaotao; Zhou, Zhimin

    2017-11-01

    In the last few years, interest in circular synthetic aperture radar (CSAR) acquisitions has arisen as a consequence of the potential for 3D reconstruction over a 360° azimuth angle variation. In real-world scenarios, full 3D reconstruction of arbitrary targets needs multi-pass data, which makes the processing complex, costly, and time-consuming. In this paper, we propose a processing strategy for the 3D reconstruction of vehicles that avoids multi-pass data by introducing a priori information about the vehicle's shape. Moreover, the proposed strategy needs only single-pass, single-polarization CSAR data to perform the 3D reconstruction, which makes the processing much more economical and efficient. First, an analysis of the distribution of attributed scattering centers from a vehicle facet model is presented; the results show that a smooth and continuous basic outline of the vehicle can be extracted from the peak curve of a noncoherently processed image. Second, the 3D location of the vehicle roofline is inferred from layover, using empirical insets of the basic outline. Finally, the basic outline and roofline of the vehicle are used to estimate the vehicle's 3D information and to constitute the vehicle's 3D outline. Results on simulated and measured data prove the correctness and effectiveness of the proposed strategy.

  11. Mapping of Polar Areas Based on High-Resolution Satellite Images: The Example of the Henryk Arctowski Polish Antarctic Station

    NASA Astrophysics Data System (ADS)

    Kurczyński, Zdzisław; Różycki, Sebastian; Bylina, Paweł

    2017-12-01

    To produce orthophotomaps or digital elevation models, the most commonly used method is photogrammetric measurement. However, the use of aerial images is not easy in polar regions for logistical reasons, and remote sensing data acquired from satellite systems are much more useful there. This paper presents the basic technical requirements for different products that can be obtained from Very-High-Resolution Satellite (VHRS) images, in particular orthoimages and digital elevation models (DEMs). The study area was situated in the vicinity of the Henryk Arctowski Polish Antarctic Station on the Western Shore of Admiralty Bay, King George Island, Western Antarctic. Image processing was applied to two triplets of images acquired by Pléiades 1A and 1B in March 2013. The results of orthoimage generation from the Pléiades systems without control points showed that the proposed method can achieve a Root Mean Square Error (RMSE) of 3-9 m. The Pléiades images presented are useful for thematic remote sensing analysis and measurement processing. Using satellite images to produce remote sensing products for polar regions is highly beneficial and reliable and compares well with more expensive airborne photography or field surveys.

  12. Segmentation of epidermal tissue with histopathological damage in images of haematoxylin and eosin stained human skin

    PubMed Central

    2014-01-01

    Background Digital image analysis has the potential to address issues surrounding traditional histological techniques including a lack of objectivity and high variability, through the application of quantitative analysis. A key initial step in image analysis is the identification of regions of interest. A widely applied methodology is that of segmentation. This paper proposes the application of image analysis techniques to segment skin tissue with varying degrees of histopathological damage. The segmentation of human tissue is challenging as a consequence of the complexity of the tissue structures and inconsistencies in tissue preparation, hence there is a need for a new robust method with the capability to handle the additional challenges materialising from histopathological damage. Methods A new algorithm has been developed which combines enhanced colour information, created following a transformation to the L*a*b* colourspace, with general image intensity information. A colour normalisation step is included to enhance the algorithm’s robustness to variations in the lighting and staining of the input images. The resulting optimised image is subjected to thresholding and the segmentation is fine-tuned using a combination of morphological processing and object classification rules. The segmentation algorithm was tested on 40 digital images of haematoxylin & eosin (H&E) stained skin biopsies. Accuracy, sensitivity and specificity of the algorithmic procedure were assessed through the comparison of the proposed methodology against manual methods. Results Experimental results show the proposed fully automated methodology segments the epidermis with a mean specificity of 97.7%, a mean sensitivity of 89.4% and a mean accuracy of 96.5%. When a simple user interaction step is included, the specificity increases to 98.0%, the sensitivity to 91.0% and the accuracy to 96.8%. The algorithm segments effectively for different severities of tissue damage. Conclusions Epidermal segmentation is a crucial first step in a range of applications including melanoma detection and the assessment of histopathological damage in skin. The proposed methodology is able to segment the epidermis with different levels of histological damage. The basic method framework could be applied to segmentation of other epithelial tissues. PMID:24521154
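    The described pipeline maps onto standard scikit-image building blocks; the sketch below shows the skeleton only, with the channel mix chosen hypothetically and the colour normalisation and object classification rules of the published method omitted.

```python
import numpy as np
from skimage import color, filters, morphology

def segment_epidermis(rgb, min_size=500):
    """Skeleton of the described pipeline: L*a*b* colour transform,
    thresholding of an enhanced channel, then morphological clean-up."""
    lab = color.rgb2lab(rgb)
    feature = lab[..., 1] + 0.5 * lab[..., 0]        # hypothetical channel mix
    mask = feature > filters.threshold_otsu(feature)  # global threshold
    mask = morphology.remove_small_objects(mask, min_size)
    mask = morphology.binary_closing(mask, morphology.disk(3))
    return mask

# Usage on a float RGB image in [0, 1]:
mask = segment_epidermis(np.random.default_rng(0).random((64, 64, 3)), min_size=10)
print(mask.sum(), "epidermis-candidate pixels")
```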

  13. Basic imaging in congenital heart disease. 3rd Ed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swischuk, L.E.; Sapire, D.W.

    1986-01-01

    The book retains its previous format with chapters on embryology, plain film interpretation, classification of pulmonary vascular patterns, cardiac malpositions and vascular anomalies, and illustrative cases. The book is organized with an abundance of illustrative figures, diagrams, and image reproductions. These include plain chest radiographs, angiograms, echocardiograms, and MR images. The authors present the pathophysiology and imaging of congenital heart lesions.

  14. Transition year labeling error characterization study. [Kansas, Minnesota, Montana, North Dakota, South Dakota, and Oklahoma

    NASA Technical Reports Server (NTRS)

    Clinton, N. J. (Principal Investigator)

    1980-01-01

    Labeling errors made in the large area crop inventory experiment transition year estimates by Earth Observation Division image analysts are identified and quantified. The analysis was made from a subset of blind sites in six U.S. Great Plains states (Oklahoma, Kansas, Montana, Minnesota, North and South Dakota). The image interpretation basically was well done, resulting in a total omission error rate of 24 percent and a commission error rate of 4 percent. The largest amount of error was caused by factors beyond the control of the analysts who were following the interpretation procedures. The odd signatures, the largest error cause group, occurred mostly in areas of moisture abnormality. Multicrop labeling was tabulated showing the distribution of labeling for all crops.

  15. The Art of Astronomy: A New General Education Course for Non-Science Majors

    NASA Astrophysics Data System (ADS)

    Pilachowski, Catherine A.; van Zee, Liese

    2017-01-01

    The Art of Astronomy is a new general education course developed at Indiana University. The topic appeals to a broad range of undergraduates and the course gives students the tools to understand and appreciate astronomical images in a new way. The course explores the science of imaging the universe and the technology that makes the images possible. Topics include the night sky, telescopes and cameras, light and color, and the science behind the images. "Coloring the Universe: An Insider's Look at Making Spectacular Images of Space" by T. A. Rector, K. Arcand, and M. Watzke serves as the basic text for the course, supplemented by readings from the web. Through the course, students participate in exploration activities designed to help them first to understand astronomy images, and then to create them. Learning goals include an understanding of scientific inquiry, an understanding of the basics of imaging science as applied in astronomy, a knowledge of the electromagnetic spectrum and how observations at different wavelengths inform us about different environments in the universe, and an ability to interpret astronomical images to learn about the universe and to model and understand the physical world.

  16. Spinal fusion-hardware construct: Basic concepts and imaging review

    PubMed Central

    Nouh, Mohamed Ragab

    2012-01-01

    The interpretation of spinal images fixed with metallic hardware forms an increasing bulk of daily practice in a busy imaging department. Radiologists are required to be familiar with the instrumentation and operative options used in spinal fixation and fusion procedures, especially in his or her institute. This is critical in evaluating the position of implants and potential complications associated with the operative approaches and spinal fixation devices used. Thus, the radiologist can play an important role in patient care and outcome. This review outlines the advantages and disadvantages of commonly used imaging methods and reports on the best yield for each modality and how to overcome the problematic issues associated with the presence of metallic hardware during imaging. Baseline radiographs are essential as they are the baseline point for evaluation of future studies should patients develop symptoms suggesting possible complications. They may justify further imaging workup with computed tomography, magnetic resonance and/or nuclear medicine studies as the evaluation of a patient with a spinal implant involves a multi-modality approach. This review describes imaging features of potential complications associated with spinal fusion surgery as well as the instrumentation used. This basic knowledge aims to help radiologists approach everyday practice in clinical imaging. PMID:22761979

  17. Technical design and system implementation of region-line primitive association framework

    NASA Astrophysics Data System (ADS)

    Wang, Min; Xing, Jinjin; Wang, Jie; Lv, Guonian

    2017-08-01

    Apart from regions, image edge lines are an important information source, and they deserve more attention in object-based image analysis (OBIA) than they currently receive. In the region-line primitive association framework (RLPAF), we promote straight edge lines to line primitives to achieve more powerful OBIA. Along with regions, straight lines become basic units for the subsequent extraction and analysis of OBIA features. This study develops a new software system called Remote-Sensing Knowledge Finder (RSFinder) to implement RLPAF for engineering application purposes. This paper introduces the extended technical framework, a comprehensively designed feature set, the key technology, and the software implementation. To our knowledge, RSFinder is the world's first OBIA system based on two types of primitives, namely regions and lines. It is fundamentally different from other well-known region-only-based OBIA systems such as eCognition and the ENVI feature extraction module. This paper provides an important reference for the development of similarly structured OBIA systems and line-based algorithms for remote sensing information extraction.

  18. Investigation of a novel approach to scoring Giemsa-stained malaria-infected thin blood films.

    PubMed

    Proudfoot, Owen; Drew, Nathan; Scholzen, Anja; Xiang, Sue; Plebanski, Magdalena

    2008-04-21

    Daily assessment of the percentage of erythrocytes that are infected ('percent-parasitaemia') across a time-course is a necessary step in many experimental studies of malaria, but represents a time-consuming and unpopular task among researchers. The most common method is extensive microscopic examination of Giemsa-stained thin blood-films. This study explored a method for the assessment of percent-parasitaemia that does not require extended periods of microscopy and results in a descriptive and permanent record of parasitaemia data that is highly amenable to subsequent 'data-mining'. Digital photography was utilized in conjunction with a basic purpose-written computer programme to test the viability of the concept. Partial automation of the determination of percent parasitaemia was then explored, resulting in the successful customization of commercially available broad-spectrum image analysis software towards this aim. Lastly, automated discrimination between infected and uninfected RBCs based on analysis of digital parameters of individual cell images was explored in an effort to completely automate the calculation of an accurate percent-parasitaemia.
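    The core computation, once cells and parasites are segmented, is a labelled-overlap count; a minimal Python sketch with hypothetical binary masks (not the study's purpose-written programme):

```python
import numpy as np
from scipy import ndimage

def percent_parasitaemia(cell_mask, parasite_mask):
    """Label individual erythrocytes in a binary cell mask and report the
    percentage that overlap the (stain-derived) parasite mask."""
    labels, n_cells = ndimage.label(cell_mask)
    if n_cells == 0:
        return 0.0
    infected = np.unique(labels[parasite_mask & (labels > 0)])
    return 100.0 * infected.size / n_cells

# Toy example: two cells, one infected.
cells = np.zeros((20, 20), dtype=bool)
cells[2:6, 2:6] = True; cells[10:14, 10:14] = True
parasites = np.zeros_like(cells); parasites[3, 3] = True
print(percent_parasitaemia(cells, parasites))   # 50.0
```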

  19. Autonomous Onboard Science Data Analysis for Comet Missions

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Tran, Daniel Q.; McLaren, David; Chien, Steve A.; Bergman, Larry; Castano, Rebecca; Doyle, Richard; Estlin, Tara; Lenda, Matthew

    2012-01-01

    Coming years will bring several comet rendezvous missions. The Rosetta spacecraft arrives at Comet 67P/Churyumov-Gerasimenko in 2014. Subsequent rendezvous might include a mission such as the proposed Comet Hopper with multiple surface landings, as well as Comet Nucleus Sample Return (CNSR) and Coma Rendezvous and Sample Return (CRSR). These encounters will begin to shed light on a population that, despite several previous flybys, remains mysterious and poorly understood. Scientists still have little direct knowledge of interactions between the nucleus and coma, their variation across different comets, or their evolution over time. Activity may change on short timescales, so it is challenging to characterize with scripted data acquisition. Here we investigate automatic onboard image analysis that could act faster than round-trip light time to capture unexpected outbursts and plume activity. We describe an edge-based method for detecting comet nuclei and plumes and test the approach on an existing catalog of comet images. Finally, we quantify the benefits to specific measurement objectives by simulating a basic plume-monitoring campaign.

  20. High-content screening for the discovery of pharmacological compounds: advantages, challenges and potential benefits of recent technological developments.

    PubMed

    Soleilhac, Emmanuelle; Nadon, Robert; Lafanechere, Laurence

    2010-02-01

    Screening compounds with cell-based assays and microscopy image-based analysis is an approach currently favored for drug discovery. Because of its high information yield, the strategy is called high-content screening (HCS). This review covers the application of HCS in drug discovery and also in basic research of potential new pathways that can be targeted for treatment of pathophysiological diseases. HCS faces several challenges, however, including the extraction of pertinent information from the massive amount of data generated from images. Several proposed approaches to HCS data acquisition and analysis are reviewed. Different solutions from the fields of mathematics, bioinformatics and biotechnology are presented. Potential applications and limits of these recent technical developments are also discussed. HCS is a multidisciplinary and multistep approach for understanding the effects of compounds on biological processes at the cellular level. Reliable results depend on the quality of the overall process and require strong interdisciplinary collaborations.

  1. Data reduction of digitized images processed from calibrated photographic and spectroscopic films obtained from terrestrial, rocket and space shuttle telescopic instruments

    NASA Technical Reports Server (NTRS)

    Hammond, Ernest C., Jr.

    1990-01-01

    The Microvax 2 computer, the basic software in VMS, and the Mitsubishi high-speed disk were received and installed. The digital scanning tunneling microscope is fully installed and operational. A new technique was developed for pseudocolor analysis of the line-plot images of a scanning tunneling microscope. Computer studies and mathematical modeling of the empirical data associated with many of the film calibration studies were presented. A gas-can follow-up experiment, to be launched in September on the Space Shuttle STS-50, was prepared and loaded. Papers were presented on the structure of the human hair strand using scanning electron microscopy and x-ray analysis, and on updated research on the annual rings produced by the surf clam of the ocean estuaries of Maryland. Scanning electron microscopy work was conducted by the research team in support of the Mossbauer and magnetic susceptibility studies of NmNi(4.25)Fe(.85) and its hydride.

  2. On the SAR derived alert in the detection of oil spills according to the analysis of the EGEMP.

    PubMed

    Ferraro, Guido; Baschek, Björn; de Montpellier, Geraldine; Njoten, Ove; Perkovic, Marko; Vespe, Michele

    2010-01-01

    Satellite services that deliver information about possible oil spills at sea currently use different labels of "confidence" to describe the detections based on radar image processing. A common approach is to use a classification differentiating between low, medium and high levels of confidence. There is an ongoing discussion on the suitability of the existing classification systems of possible oil spills detected by radar satellite images with regard to the relevant significance and correspondence to user requirements. This paper contains a basic analysis of user requirements, current technical possibilities of satellite services as well as proposals for a redesign of the classification system as an evolution towards a more structured alert system. This research work offers a first review of implemented methodologies for the categorisation of detected oil spills, together with the proposal of explorative ideas evaluated by the European Group of Experts on satellite Monitoring of sea-based oil Pollution (EGEMP). Copyright 2009 Elsevier Ltd. All rights reserved.

  3. Tissue polarimetry: concepts, challenges, applications, and outlook.

    PubMed

    Ghosh, Nirmalya; Vitkin, I Alex

    2011-11-01

    Polarimetry has a long and successful history in various forms of clear media. Driven by their biomedical potential, polarimetric approaches for biological tissue assessment have also recently received considerable attention. Specifically, polarization can be used as an effective tool to discriminate against multiply scattered light (acting as a gating mechanism) in order to enhance contrast and to improve tissue imaging resolution. Moreover, the intrinsic tissue polarimetry characteristics contain a wealth of morphological and functional information of potential biomedical importance. However, in a complex random medium like tissue, numerous complexities due to multiple scattering and the simultaneous occurrence of many scattering and polarization events present formidable challenges, both in terms of accurate measurement and in terms of analysis of the tissue polarimetry signal. In order to realize the potential of polarimetric approaches for tissue imaging and characterization/diagnosis, a number of researchers are pursuing innovative solutions to these challenges. In this review paper, we summarize these and other issues pertinent to polarized light methodologies in tissues. Specifically, we discuss polarized light basics, the Stokes-Mueller formalism, methods of polarization measurement, polarized light modeling in turbid media, applications to tissue imaging, inverse analysis for quantification of polarimetric results, and applications to quantitative tissue assessment.
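
    To make the Stokes-Mueller bookkeeping mentioned above concrete, the following is a minimal numpy sketch using textbook formulas: the degree of polarization computed from a Stokes vector, and the Mueller matrix of an ideal horizontal linear polarizer applied to it. The input Stokes vector is an illustrative assumption, not data from the paper.

        import numpy as np

        # Hypothetical measured Stokes vector [I, Q, U, V]
        S = np.array([1.0, 0.3, 0.1, 0.05])

        # Degree of polarization: DOP = sqrt(Q^2 + U^2 + V^2) / I
        dop = np.sqrt(S[1]**2 + S[2]**2 + S[3]**2) / S[0]

        # Mueller matrix of an ideal horizontal linear polarizer
        M = 0.5 * np.array([[1, 1, 0, 0],
                            [1, 1, 0, 0],
                            [0, 0, 0, 0],
                            [0, 0, 0, 0]])

        S_out = M @ S                      # Stokes vector after the element
        print(f"DOP = {dop:.3f}, S_out = {S_out}")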

  4. Quantitative analysis of cell columns in the cerebral cortex.

    PubMed

    Buxhoeveden, D P; Switala, A E; Roy, E; Casanova, M F

    2000-04-01

    We present a quantified imaging method that describes the cell column in mammalian cortex. The minicolumn is an ideal template with which to examine cortical organization because it is a basic unit of function, complete in itself, which interacts with adjacent and distant columns to form more complex levels of organization. The subtle details of columnar anatomy should reflect physiological changes that have occurred in evolution as well as those that might be caused by pathologies in the brain. In this semiautomatic method, images of Nissl-stained tissue are digitized or scanned into a computer imaging system. The software detects the presence of cell columns and describes details of their morphology and of the surrounding space. Columns are detected automatically on the basis of cell-poor and cell-rich areas using a Gaussian distribution. A line is fit to the cell centers by least squares analysis. The line becomes the center of the column from which the precise location of every cell can be measured. On this basis several algorithms describe the distribution of cells from the center line and in relation to the available surrounding space. Other algorithms use cluster analyses to determine the spatial orientation of every column.
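
    As a sketch of the center-line step described above, the following fits a least-squares line to a set of cell centers and measures each cell's signed offset from it. The coordinates are invented for illustration; this is not the authors' implementation.

        import numpy as np

        # Hypothetical (x, y) centers of cells in one detected column
        centers = np.array([[10.2, 5.0], [11.1, 15.0], [9.8, 25.0], [10.5, 35.0]])
        x, y = centers[:, 0], centers[:, 1]

        # Cortical columns run roughly vertically, so regress x on y (least squares)
        slope, intercept = np.polyfit(y, x, 1)

        # Signed perpendicular distance of each cell center from the fitted line
        # (line: x = slope*y + intercept)
        offsets = (x - slope * y - intercept) / np.sqrt(1 + slope**2)
        print(offsets)   # offsets of cells from the column center line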

  5. [New opportunities of magnetic-resonance imaging: an algorithm of CSD-HARDI tractography in reconstruction of the brainstem reticular formation fibers].

    PubMed

    Aleksandrova, E V; Batalov, A I; Pogosbekyan, E L; Zakharova, N E; Fadeeva, L M; Kravchuk, A D; Pronin, I N; Potapov, A A

    2018-01-01

    The study purpose was to develop a technique for intravital visualization of the brainstem reticular formation fibers in healthy volunteers using magnetic resonance imaging (MRI). The study included 21 subjects (13 males and 8 females) aged 21 to 62 years. The study was performed on a magnetic resonance imaging scanner with a magnetic field strength of 3 T in T1, T2, T2-FLAIR, DWI, and SWI modes. A CSD-HARDI algorithm was used to identify thin intersecting fibers of the reticular formation. We developed a technique for reconstructing the reticular formation pathways, tested it in healthy volunteers, and obtained standard quantitative indicators (fractional anisotropy (FA), apparent diffusion coefficient (ADC), fiber length and density, and axial and radial diffusion). We performed a comparative analysis of these indicators in males and females. There was no difference between these groups and between indicators for the right and left brainstem. Our findings will enable comparative analysis of examination results in patients with brain pathology accompanied by brainstem injury, which may help predict the outcome. This work was supported by a grant of the Russian Foundation for Basic Research (#16-04-01472).

  6. [The application of X-ray imaging in forensic medicine].

    PubMed

    Kučerová, Stěpánka; Safr, Miroslav; Ublová, Michaela; Urbanová, Petra; Hejna, Petr

    2014-07-01

    X-ray is the most common, basic and essential imaging method used in forensic medicine. It serves to display and localize foreign objects in the body and helps to detect various traumatic and pathological changes. X-ray imaging is valuable in the anthropological assessment of an individual. X-ray allows non-invasive evaluation of important findings before the autopsy and thus selection of the optimal strategy for dissection. Basic indications for postmortem X-ray imaging in forensic medicine include gunshot and explosive fatalities (identification and localization of projectiles or other components of ammunition, visualization of secondary missiles), sharp force injuries (air embolism, identification of the weapon) and motor vehicle related deaths. The method is also helpful for complex injury evaluation in abused victims or in persons where abuse is suspected. Finally, X-ray imaging still remains the gold standard method for identification of unknown deceased. Over time, modern imaging methods, especially computed tomography and magnetic resonance imaging, have been increasingly applied in forensic medicine. Their application extends visualization beyond the bony structures toward more detailed imaging of soft tissues and internal organs. The application of modern imaging methods to postmortem body investigation is known as digital or virtual autopsy. At present, digital postmortem imaging is considered a bloodless alternative to the conventional autopsy.

  7. Image feature detection and extraction techniques performance evaluation for development of panorama under different light conditions

    NASA Astrophysics Data System (ADS)

    Patil, Venkat P.; Gohatre, Umakant B.

    2018-04-01

    The technique of obtaining a wider field-of-view to produce a high-resolution integrated image is normally required for developing a panorama of photographic images or a scene from a sequence of multiple partial views. Various image stitching methods have been developed recently. Image stitching typically follows five basic steps: feature detection and extraction, image registration, homography computation, image warping, and blending. This paper reviews some of the existing image feature detection and extraction techniques and image stitching algorithms by categorizing them into several methods. For each category, the basic concepts are first described, and the modifications made to the fundamental concepts by different researchers are then elaborated. The paper also highlights some of the fundamental techniques for photographic image feature detection and extraction under various illumination conditions. Image stitching is applicable in various fields such as medical imaging, astrophotography and computer vision. To compare the performance of the feature detection techniques, three methods are considered, i.e., ORB, SURF and HESSIAN, and the time required for feature detection on the input images is measured. The results conclude that for daylight conditions the ORB algorithm performs better, since it extracts more features in less time, whereas for images under night-light conditions the SURF detector performs better than the ORB/HESSIAN detectors.
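
    A minimal sketch of the kind of timing comparison described above, using OpenCV's ORB detector; the input file name is an assumption, and SURF is omitted because it is only available in opencv-contrib builds. This illustrates the measurement idea, not the authors' exact protocol.

        import time
        import cv2

        # Assumes a local grayscale test image 'scene.jpg' exists
        img = cv2.imread('scene.jpg', cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=2000)
        t0 = time.perf_counter()
        keypoints, descriptors = orb.detectAndCompute(img, None)
        elapsed = time.perf_counter() - t0

        print(f"ORB: {len(keypoints)} keypoints in {elapsed * 1000:.1f} ms")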

  8. Spinal Cystic Echinococcosis – A Systematic Analysis and Review of the Literature: Part 1. Epidemiology and Anatomy

    PubMed Central

    Neumayr, Andreas; Tamarozzi, Francesca; Goblirsch, Sam; Blum, Johannes; Brunetti, Enrico

    2013-01-01

    Bone involvement in human cystic echinococcosis (CE) is rare, but affects the spine in approximately 50% of cases. Despite significant advances in diagnostic imaging techniques as well as surgical and medical treatment of spinal CE, our basic understanding of the parasite's predilection for the spine remains incomplete. To fill this gap, we systematically reviewed the published literature of the last five decades to summarize and analyze the currently existing data on epidemiological and anatomical aspects of spinal CE. PMID:24086783

  9. Physical properties of the natural satellites. [excluding the Moon and including Saturnian Rings

    NASA Technical Reports Server (NTRS)

    Morrison, D.; Cruikshank, D. P.

    1974-01-01

    Review of the physical nature of all of the known satellites except the moon. Following a summary of the basic data regarding the size, mass, and density of satellite systems and a description of models that have been proposed for the composition and structure of these systems, a detailed analysis is made of the satellites of Mars, the Galilean satellites, Titan, the other satellites of Saturn, the rings of Saturn, and the remaining objects, with emphasis on studies of their surfaces by imaging, photometry, spectrophotometry, polarimetry, and radiometry.

  10. LANDSAT-4 TM image data quality analysis for energy-related applications

    NASA Technical Reports Server (NTRS)

    Wukelic, G. E.; Foote, H. P.

    1983-01-01

    LANDSAT-4 Thematic Mapper (TM) data performance and utility characteristics are evaluated from an energy research and technology perspective. The program focuses on evaluating the implications of using such data, in combination with other digital data, for current and future energy research and technology activities. Prime interest is in using TM data for siting, developing and operating federal energy facilities. Secondary interests involve the use of such data for resource exploration, environmental monitoring and basic scientific initiatives such as in support of the Continental Scientific Drilling Program.

  11. Geocoded data structures and their applications to Earth science investigations

    NASA Technical Reports Server (NTRS)

    Goldberg, M.

    1984-01-01

    A geocoded data structure is a means for digitally representing a geographically referenced map or image. The characteristics of representative cellular, linked, and hybrid geocoded data structures are reviewed. The data processing requirements of Earth science projects at the Goddard Space Flight Center and the basic tools of geographic data processing are described. Specific ways that new geocoded data structures can be used to adapt these tools to scientists' needs are presented. These include: expanding analysis and modeling capabilities; simplifying the merging of data sets from diverse sources; and saving computer storage space.

  12. Differential roles of low and high spatial frequency content in abnormal facial emotion perception in schizophrenia.

    PubMed

    McBain, Ryan; Norton, Daniel; Chen, Yue

    2010-09-01

    While schizophrenia patients are impaired at facial emotion perception, the role of basic visual processing in this deficit remains relatively unclear. We examined emotion perception when spatial frequency content of facial images was manipulated via high-pass and low-pass filtering. Unlike controls (n=29), patients (n=30) perceived images with low spatial frequencies as more fearful than those without this information, across emotional salience levels. Patients also perceived images with high spatial frequencies as happier. In controls, this effect was found only at low emotional salience. These results indicate that basic visual processing has an amplified modulatory effect on emotion perception in schizophrenia. (c) 2010 Elsevier B.V. All rights reserved.
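
    A minimal sketch of the low-/high-pass manipulation described above, using Gaussian filtering as the spatial-frequency split; the cutoff (sigma) and the random stand-in image are illustrative assumptions, not the authors' stimuli.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)
        face = rng.random((128, 128))              # stand-in for a grayscale face image

        low_pass = gaussian_filter(face, sigma=4)  # keeps coarse structure only
        high_pass = face - low_pass                # keeps fine detail (edges)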

  13. [Basic examination of an image characteristic in Multivane].

    PubMed

    Ohshita, Tsuyoshi

    2011-01-01

    Image deterioration caused by patient motion is a persistent problem in MRI examinations. To address this problem, an imaging procedure named Multivane was developed; its principle is similar to the periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) method. Multivane provides strong body motion correction, but it fills k-space differently from the conventional Cartesian method. A basic examination of the image characteristics of Multivane and Cartesian acquisitions was performed using a stationary phantom. The items examined were SNR, CNR, and spatial resolution. Multivane achieved higher SNR, whereas Cartesian achieved higher contrast and spatial resolution. It is important to recognize these characteristics when using Multivane.

  14. TH-E-202-02: The Use of Hypoxia PET Imaging for Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humm, J.

    2016-06-15

    PET/CT is a very important imaging tool in the management of oncology patients. PET/CT has been applied for treatment planning and response evaluation in radiation therapy. This educational session will discuss: pitfalls and remedies in PET/CT imaging for RT planning; the use of hypoxia PET imaging for radiotherapy; and PET for tumor response evaluation. The first presentation will address the issue of mis-registration between the CT and PET images in the thorax and the abdomen. We will discuss the challenges of respiratory gating and introduce an average CT technique to improve the registration for dose calculation and image-guidance in radiation therapy. The second presentation will discuss the use of hypoxia PET imaging for radiation therapy. We will discuss various hypoxia radiotracers, the choice of clinical acquisition protocol (in particular a single late static acquisition versus a dynamic acquisition), and the compartmental modeling with different transfer rate constants explained. We will demonstrate applications of hypoxia imaging for dose escalation/de-escalation in clinical trials. The last presentation will discuss the use of PET/CT for tumor response evaluation. We will discuss anatomic response assessment vs. metabolic response assessment, visual evaluation and semi-quantitative evaluation, and limitations of current PET/CT assessment. We will summarize clinical trials using PET response in guiding adaptive radiotherapy. Finally, we will summarize recent advancements in PET/CT radiomics and non-FDG PET tracers for response assessment. Learning Objectives: Identify the causes of mis-registration of CT and PET images in PET/CT, and review the strategies to remedy the issue. Understand the basics of PET imaging of tumor hypoxia (radiotracers, how PET measures the hypoxia selective uptake, imaging protocols, applications in chemo-radiation therapy). Understand the basics of dynamic PET imaging, compartmental modeling and parametric images. Understand the basics of using FDG PET/CT for tumor response evaluation. Learn about recent advancement in PET/CT radiomics and non-FDG PET tracers for response assessment. This work was supported in part by National Cancer Institute Grant R01CA172638.

  15. Quality Improvement With Discrete Event Simulation: A Primer for Radiologists.

    PubMed

    Booker, Michael T; O'Connell, Ryan J; Desai, Bhushan; Duddalwar, Vinay A

    2016-04-01

    The application of simulation software in health care has transformed quality and process improvement. Specifically, software based on discrete-event simulation (DES) has shown the ability to improve radiology workflows and systems. Nevertheless, despite the successful application of DES in the medical literature, the power and value of simulation remain underutilized. For this reason, the basics of DES modeling are introduced, with specific attention to medical imaging. In an effort to provide readers with the tools necessary to begin their own DES analyses, the practical steps of choosing a software package and building a basic radiology model are discussed. In addition, three radiology system examples are presented, with accompanying DES models that assist in analysis and decision making. Through these simulations, we provide readers with an understanding of the theory, requirements, and benefits of implementing DES in their own radiology practices. Copyright © 2016 American College of Radiology. All rights reserved.
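
    A minimal discrete-event simulation sketch in plain Python, in the spirit of the models described above: patients arrive at a single hypothetical MRI scanner and queue first-come, first-served. Arrival and scan-time distributions are illustrative assumptions, not figures from the article.

        import heapq
        import random

        random.seed(1)
        events = []                                 # priority queue of (time, kind, id)
        t = 0.0
        for i in range(20):                         # schedule 20 Poisson arrivals
            t += random.expovariate(1 / 15.0)       # mean 15 min between arrivals
            heapq.heappush(events, (t, 'arrive', i))

        scanner_free_at = 0.0
        waits = []
        while events:
            now, kind, pid = heapq.heappop(events)
            if kind == 'arrive':
                start = max(now, scanner_free_at)               # wait if scanner busy
                scanner_free_at = start + random.uniform(20, 40)  # scan duration, min
                waits.append(start - now)

        print(f"mean wait: {sum(waits) / len(waits):.1f} min")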

  16. A data model and database for high-resolution pathology analytical image informatics.

    PubMed

    Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel

    2011-01-01

    The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming increasingly feasible for basic, clinical, and translational research studies to produce thousands of whole-slide images. Systematic analysis of these large datasets requires efficient data management support for representing and indexing results from hundreds of interrelated analyses generating very large volumes of quantifications such as shape and texture and of classifications of the quantified features. We have designed a data model and a database to address the data management requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines. The data model represents virtual slide related image, annotation, markup and feature information. The database supports a wide range of metadata and spatial queries on images, annotations, markups, and features. We currently have three databases running on a Dell PowerEdge T410 server with CentOS 5.5 Linux operating system. The database server is IBM DB2 Enterprise Edition 9.7.2. The set of databases consists of 1) a TMA database containing image analysis results from 4740 cases of breast cancer, with 641 MB storage size; 2) an algorithm validation database, which stores markups and annotations from two segmentation algorithms and two parameter sets on 18 selected slides, with 66 GB storage size; and 3) an in silico brain tumor study database comprising results from 307 TCGA slides, with 365 GB storage size. The latter two databases also contain human-generated annotations and markups for regions and nuclei.
Modeling and managing pathology image analysis results in a database provide immediate benefits on the value and usability of data in a research study. The database provides powerful query capabilities, which are otherwise difficult or cumbersome to support by other approaches such as programming languages. Standardized, semantic annotated data representation and interfaces also make it possible to more efficiently share image data and analysis results.
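
    A minimal sketch of the kind of metadata query such a database supports, using a hypothetical two-table schema in SQLite; the actual PAIS schema is far richer, and the table and column names here are invented purely for illustration.

        import sqlite3

        con = sqlite3.connect(':memory:')
        con.executescript("""
            CREATE TABLE analysis (id INTEGER PRIMARY KEY, algorithm TEXT, slide_id TEXT);
            CREATE TABLE feature  (analysis_id INTEGER, nucleus_id INTEGER,
                                   area REAL, roundness REAL);
        """)
        con.execute("INSERT INTO analysis VALUES (1, 'seg_v1', 'TCGA-001')")
        con.executemany("INSERT INTO feature VALUES (1, ?, ?, ?)",
                        [(1, 52.0, 0.91), (2, 80.5, 0.62)])

        # Compare mean nucleus area per algorithm on one slide
        rows = con.execute("""
            SELECT a.algorithm, AVG(f.area)
            FROM analysis a JOIN feature f ON f.analysis_id = a.id
            WHERE a.slide_id = 'TCGA-001'
            GROUP BY a.algorithm
        """).fetchall()
        print(rows)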

  17. Sensitivity of an eight-element phased array coil in 3 Tesla MR imaging: a basic analysis.

    PubMed

    Hiratsuka, Yoshiyasu; Miki, Hitoshi; Kikuchi, Keiichi; Kiriyama, Ikuko; Mochizuki, Teruhito; Takahashi, Shizue; Sadamoto, Kazuhiko

    2007-01-01

    To evaluate the performance advantages of an 8-element phased array head coil (8 ch coil) over a conventional quadrature-type birdcage head coil (QD coil) with regard to the signal-to-noise ratio (SNR) and image uniformity in 3 Tesla magnetic resonance (MR) imaging. We scanned a phantom filled with silicon oil using an 8 ch coil and a QD coil in a 3T MR imaging system and compared the SNR and image uniformity obtained from T(1)-weighted spin echo (SE) images and T(2)-weighted fast SE images between the 2 coils. We also visually evaluated images from 4 healthy volunteers. The SNR with the 8 ch coil was approximately twice that with the QD coil in the region of interest (ROI), which was set as 75% of the area in the center of the phantom images. With regard to the spatial variation of sensitivity, the SNR with the 8 ch coil was lower at the center of the images than at the periphery, whereas the SNR with the QD coil exhibited the inverse pattern; the SNR distribution at the center of the 8 ch coil images, though somewhat lower, was relatively flat compared with that in the periphery. Image uniformity varied less with the 8 ch coil than with the QD coil on both imaging sequences. The 8 ch phased array coil was useful for obtaining high-quality 3T images because of its higher SNR and better image uniformity compared with the conventional quadrature-type birdcage head coil.
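
    A minimal sketch of a phantom SNR measurement of the type used above: mean signal in a central ROI divided by the standard deviation of a background ROI. The ROI positions and the synthetic image are illustrative assumptions, not the paper's protocol.

        import numpy as np

        # Stand-in phantom image (mean signal 100, noise sigma 5)
        image = np.random.default_rng(0).normal(100, 5, (256, 256))

        signal_roi = image[96:160, 96:160]   # central ROI
        noise_roi = image[:32, :32]          # background corner ROI

        snr = signal_roi.mean() / noise_roi.std()
        print(f"SNR = {snr:.1f}")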

  18. When the f/# Is Not the f/#.

    ERIC Educational Resources Information Center

    Biermann, Mark L.; Biermann, Lois A. A.

    1996-01-01

    Discusses descriptions of the way in which an optical system controls the quantity of light that reaches a point on the image plane, a basic feature of optical imaging systems such as cameras, telescopes, and microscopes. (JRH)

  19. Final Report 2007: DOE-FG02-87ER60561

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kilbourn, Michael R

    2007-04-26

    This project involved a multi-faceted approach to the improvement of techniques used in Positron Emission Tomography (PET), from radiochemistry to image processing and data analysis. New methods for radiochemical syntheses were examined, new radiochemicals prepared for evaluation and eventual use in human PET studies, and new pre-clinical methods examined for validation of biochemical parameters in animal studies. The value of small animal PET imaging in measuring small changes of in vivo biochemistry was examined and directly compared to traditional tissue sampling techniques. In human imaging studies, the ability to perform single experimental sessions utilizing two overlapping injections of radiopharmaceuticals was tested, and it was shown that valid biochemical measures for both radiotracers can be obtained through careful pharmacokinetic modeling of the PET emission data. Finally, improvements in reconstruction algorithms for PET data from small animal PET scanners were realized and have been implemented in commercial releases. Together, the project represented an integrated effort to improve and extend all basic science aspects of PET imaging at both the animal and human level.

  20. Single-photon imaging in complementary metal oxide semiconductor processes

    PubMed Central

    Charbon, E.

    2014-01-01

    This paper describes the basics of single-photon counting in complementary metal oxide semiconductors, through single-photon avalanche diodes (SPADs), and the making of miniaturized pixels with photon-counting capability based on SPADs. Some applications, which may take advantage of SPAD image sensors, are outlined, such as fluorescence-based microscopy, three-dimensional time-of-flight imaging and biomedical imaging, to name just a few. The paper focuses on architectures that are best suited to those applications and the trade-offs they generate. In this context, architectures are described that efficiently collect the output of single pixels when designed in large arrays. Off-chip readout circuit requirements are described for a variety of applications in physics, medicine and the life sciences. Owing to the dynamic nature of SPADs, designs featuring a large number of SPADs require careful analysis of the target application for an optimal use of silicon real estate and of limited readout bandwidth. The paper also describes the main trade-offs involved in architecting such chips and the solutions adopted with focus on scalability and miniaturization. PMID:24567470

  1. Interpreting Underwater Acoustic Images of the Upper Ocean Boundary Layer

    ERIC Educational Resources Information Center

    Ulloa, Marco J.

    2007-01-01

    A challenging task in physical studies of the upper ocean using underwater sound is the interpretation of high-resolution acoustic images. This paper covers a number of basic concepts necessary for undergraduate and postgraduate students to identify the most distinctive features of the images, providing a link with the acoustic signatures of…

  2. Motion compensated image processing and optimal parameters for egg crack detection using modified pressure

    USDA-ARS?s Scientific Manuscript database

    Shell eggs with microcracks are often undetected during egg grading processes. In the past, a modified pressure imaging system was developed to detect eggs with microcracks without adversely affecting the quality of normal intact eggs. The basic idea of the modified pressure imaging system was to ap...

  3. Bridging the Gap between Basic and Clinical Sciences: A Description of a Radiological Anatomy Course

    ERIC Educational Resources Information Center

    Torres, Anna; Staskiewicz, Grzegorz J.; Lisiecka, Justyna; Pietrzyk, Lukasz; Czekajlo, Michael; Arancibia, Carlos U.; Maciejewski, Ryszard; Torres, Kamil

    2016-01-01

    A wide variety of medical imaging techniques pervade modern medicine, and the changing portability and performance of tools like ultrasound imaging have brought these medical imaging techniques into the everyday practice of many specialties outside of radiology. However, proper interpretation of ultrasonographic and computed tomographic images…

  4. Visual Literacy and Visual Thinking.

    ERIC Educational Resources Information Center

    Hortin, John A.

    It is proposed that visual literacy be defined as the ability to understand (read) and use (write) images and to think and learn in terms of images. This definition includes three basic principles: (1) visuals are a language and thus analogous to verbal language; (2) a visually literate person should be able to understand (read) images and use…

  5. infoRAD: computers for clinical practice and education in radiology. Teleradiology, information transfer, and PACS: implications for diagnostic imaging in the 1990s.

    PubMed

    Schilling, R B

    1993-05-01

    Picture archiving and communication systems (PACS) provide image viewing at diagnostic, reporting, consultation, and remote workstations; archival on magnetic or optical media by means of short- or long-term storage devices; communications by means of local or wide area networks or public communication services; and integrated systems with modality interfaces and gateways to health care facilities and departmental information systems. Research indicates three basic needs for image and report management: (a) improved communication and turnaround time between radiologists and other imaging specialists and referring physicians, (b) fast reliable access to both current and previously obtained images and reports, and (c) space-efficient archival support. Although PACS considerations are much more complex than those associated with single modalities, the same basic purchase criteria apply. These criteria include technical leadership, image quality, throughput, life cost (e.g., initial cost, maintenance, upgrades, and depreciation), and total service. Because a PACS takes much longer to implement than a single modality, the customer and manufacturer must develop a closer working relationship than has been necessary in the past.

  6. Evaluation of Scanners for C-Scan Imaging for Nondestructive Inspection of Aircraft

    DTIC Science & Technology

    1994-09-01

    The scanners evaluated cover both mechanized and nonmechanized designs. The basic scanner designs were divided, for the purposes of this report, into eight different types, ranging from manual scanning to electronic switching through the transducer elements of an array. The purpose of this project was to evaluate all the basic scanner types that are appropriate for aircraft NDI examinations. A number of vendors sell very similar designs.

  7. [Research on Spectral Polarization Imaging System Based on Static Modulation].

    PubMed

    Zhao, Hai-bo; Li, Huan; Lin, Xu-ling; Wang, Zheng

    2015-04-01

    The main disadvantages of traditional spectral polarization imaging systems are their complex structure, moving parts, and low throughput. A novel spectral polarization imaging system is discussed, based on static polarization intensity modulation combined with Savart polariscope interference imaging. The imaging system can obtain real-time spectral information together with the four Stokes polarization components. Compared with conventional methods, the advantages of the imaging system are compactness, low mass, no moving parts, no electrical control, no slit, and high throughput. The system structure and the basic theory are introduced. The experimental system was established in the laboratory; it consists of reimaging optics, a polarization intensity module, an interference imaging module, and a CCD data collecting and processing module. The spectral range is visible and near-infrared (480-950 nm). A white board and a toy plane were imaged using the experimental system, verifying its ability to obtain spectral polarization imaging information. A calibration system for the static polarization modulation was set up; the statistical error of polarization degree detection is less than 5%. The validity and feasibility of the basic principle are demonstrated by the experimental results. The spectral polarization data captured by the system can be applied to object identification, object classification and remote sensing detection.

  8. Creation of virtual patients from CT images of cadavers to enhance integration of clinical and basic science student learning in anatomy.

    PubMed

    Jacobson, Stanley; Epstein, Scott K; Albright, Susan; Ochieng, Joseph; Griffiths, Jeffrey; Coppersmith, Veronica; Polak, Joseph F

    2009-08-01

    The goal of this study was to determine whether computerized tomographic (CT) images of cadavers could be used, in addition to images from patients, to develop virtual patients (VPs) that enhance integrated learning of basic and clinical science. We imaged 13 cadavers on a Siemens CT system. The DICOM images from the CT were noted to be of high quality by a radiologist, who systematically identified all abnormal and pathological findings. The pathological findings from the CT images and the cause of death were used to develop plausible clinical cases and study questions. Each case was designed to highlight and explain the abnormal anatomic findings encountered during the cadaveric dissection. A 3D reconstruction was produced using OsiriX and then formatted into a QuickTime movie, which was stored on the Tufts University Sciences Knowledgebase (TUSK) as a VP. We conclude that CT scanning of cadavers produces high-quality images that can be used to develop VPs. Although the use of the VPs was optional and fewer than half of the students had an imaged cadaver for dissection, 59 of the 172 students (34%) accessed the cases, reviewed the images positively, and encouraged us to continue.

  9. Detection of high-grade atypia nuclei in breast cancer imaging

    NASA Astrophysics Data System (ADS)

    Noël, Henri; Roux, Ludovic; Lu, Shijian; Boudier, Thomas

    2015-03-01

    Along with mitotic count, nuclear pleomorphism or nuclear atypia is an important criterion for the grading of breast cancer in histopathology. Although some work has been done on mitosis detection (ICPR 2012,1 MICCAI 2013,2 and ICPR 2014), little work has been dedicated to automated nuclear atypia grading, especially the most difficult task of detecting grade 3 nuclei. We propose the use of Convolutional Neural Networks for the automated detection of cell nuclei, using images from the three grades of breast cancer for training. The images were obtained from ICPR contests. Additional manual annotation was performed to classify pixels into five classes: stroma, nuclei, lymphocytes, mitosis and fat. A total of 3,000 thumbnail images of 101 × 101 pixels were used for training. By dividing this training set in an 80/20 ratio we obtained good training results (around 90%). We tested our CNN on images of the three grades which were not in the training set. High-grade nuclei were correctly classified. We then thresholded the classification map and performed basic analysis to keep only rounded objects. Our results show that nearly all atypical nuclei were correctly detected.
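
    A minimal sketch of the post-processing step described above: threshold a per-pixel classification map, then keep only rounded objects using the circularity measure 4*pi*area / perimeter^2. The random stand-in map and the thresholds are illustrative assumptions, not the authors' values.

        import numpy as np
        from skimage import measure

        rng = np.random.default_rng(0)
        prob_map = rng.random((256, 256))       # stand-in nucleus probability map
        mask = prob_map > 0.995                 # threshold -> binary mask

        labels = measure.label(mask)
        keep = np.zeros_like(mask)
        for region in measure.regionprops(labels):
            if region.perimeter == 0:           # skip single-pixel specks
                continue
            circularity = 4 * np.pi * region.area / region.perimeter ** 2
            if circularity > 0.7:               # keep rounded objects only
                keep[labels == region.label] = True

        print(f"kept {int(keep.sum())} pixels in rounded objects")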

  10. Infrared Thermal Imaging System on a Mobile Phone

    PubMed Central

    Lee, Fu-Feng; Chen, Feng; Liu, Jing

    2015-01-01

    A novel concept for a pervasively available, low-cost infrared thermal imaging system on a mobile phone (MTIS) was proposed and demonstrated in this article. A review of the evolution of milestone technologies in the area suggests that portable, low-cost designs will become the mainstream of thermal imagers for civilian purposes. As a representative trial toward this important goal, an MTIS consisting of a thermal infrared module (TIM) and a mobile phone with embedded exclusive software (IRAPP) was presented. The basic strategy for the TIM construction is illustrated, including sensor adoption and optical specification. The user-oriented software was developed in the Android environment in view of its popularity and expandability. Computational algorithms with non-uniformity correction and scene-change detection are established to optimize the imaging quality and efficiency of the TIM. The performance experiments and analysis indicated that the currently available detection distance for the MTIS is about 29 m. Furthermore, some family-targeted uses enabled by the MTIS are outlined, such as sudden infant death syndrome (SIDS) prevention. This work suggests a ubiquitous way of significantly extending thermal infrared imaging into wide-ranging areas, especially health care, in the coming years. PMID:25942639

  11. Very-large-area CCD image sensors: concept and cost-effective research

    NASA Astrophysics Data System (ADS)

    Bogaart, E. W.; Peters, I. M.; Kleimann, A. C.; Manoury, E. J. P.; Klaassens, W.; de Laat, W. T. F. M.; Draijer, C.; Frost, R.; Bosiers, J. T.

    2009-01-01

    A new-generation full-frame 36x48 mm2 48Mp CCD image sensor with vertical anti-blooming for professional digital still camera applications is developed by means of the so-called building block concept. The 48Mp devices are formed by stitching 1kx1k building blocks with 6.0 µm pixel pitch in 6x8 (hxv) format. This concept allows us to design four large-area (48Mp) and sixty-two basic (1Mp) devices per 6" wafer. The basic image sensor is relatively small in order to obtain data from many devices: evaluation of basic parameters such as the image pixel and on-chip amplifier provides statistical data from a limited number of wafers, whereas the large-area devices are evaluated for aspects typical of large-sensor operation and performance, such as charge transport efficiency. Combined with the usability of multi-layer reticles, this makes the sensor development cost effective for prototyping. Optimisation of the sensor design and technology has resulted in a pixel charge capacity of 58 ke- and significantly reduced readout noise (12 electrons at 25 MHz pixel rate, after CDS). Hence, a dynamic range of 73 dB is obtained. Microlens and stack optimisation resulted in an excellent angular response that meets wide-angle photography demands.
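
    A quick check of the dynamic-range figure quoted above, using the standard relation DR(dB) = 20*log10(full-well capacity / read noise):

        import math

        full_well = 58_000   # electrons (pixel charge capacity)
        read_noise = 12      # electrons rms, after CDS

        dr_db = 20 * math.log10(full_well / read_noise)
        print(f"dynamic range = {dr_db:.1f} dB")   # ~73.7 dB, matching the text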

  12. Original and creative stereoscopic film making

    NASA Astrophysics Data System (ADS)

    Criado, Enrique

    2008-02-01

    Stereoscopic cinema has once again become a hot topic in film production. For filmmakers to be successful in this field, a technical background in the principles of binocular perception and in how our brain interprets the incoming data from our eyes is fundamental. It is also paramount for a stereoscopic production to adhere to certain rules for comfort and safety. There is an immense variety of options in the art of standard "flat" photography, and the possibilities only multiply with stereo. Stereoscopic imaging has its own unique areas for subjective, original and creative control that allow an incredible range of possible combinations by working inside the standards, and in some cases on the boundaries, of the basic stereo rules. Stereoscopic imaging can be approached in a "flat" manner, like channeling sound through an audio equalizer with all the bands at the same level. This can provide a realistic perception, which in many cases is sufficient thanks to the rock-solid viewing inherent to the stereoscopic image, but there are many more possibilities. This document describes some of the basic operating parameters and concepts for stereoscopic imaging, and it also offers ideas for a creative process based on the variation and combination of these basic parameters, which can lead to a truly innovative and original viewing experience.

  13. Fundamentals of Structural Geology

    NASA Astrophysics Data System (ADS)

    Pollard, David D.; Fletcher, Raymond C.

    2005-09-01

    Fundamentals of Structural Geology provides a new framework for the investigation of geological structures by integrating field mapping and mechanical analysis. Assuming a basic knowledge of physical geology, introductory calculus and physics, it emphasizes the observational data, modern mapping technology, principles of continuum mechanics, and the mathematical and computational skills necessary to quantitatively map, describe, model, and explain deformation in Earth's lithosphere. By starting from the fundamental conservation laws of mass and momentum, the constitutive laws of material behavior, and the kinematic relationships for strain and rate of deformation, the authors demonstrate the relevance of solid and fluid mechanics to structural geology. This book offers a modern quantitative approach to structural geology for advanced students and researchers in structural geology and tectonics. It is supported by a website hosting images from the book, additional colour images, student exercises and MATLAB scripts; solutions to the exercises are available to instructors. The book integrates field mapping using modern technology with the analysis of structures based on a complete mechanics. MATLAB is used to visualize physical fields and analytical results, and MATLAB scripts can be downloaded from the website to recreate textbook graphics and let students explore their choice of parameters and boundary conditions. The supplementary website hosts color images of outcrop photographs used in the text, supplementary color images, and images of textbook figures for classroom presentations; it also includes student exercises designed to instill the fundamental relationships and to encourage visualization of the evolution of geological structures.

  14. Digital document imaging systems: An overview and guide

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This is an aid to NASA managers in planning the selection of a Digital Document Imaging System (DDIS) as a possible solution for document information processing and storage. Intended to serve as a manager's guide, this document contains basic information on digital imaging systems, technology, equipment standards, issues of interoperability and interconnectivity, and issues related to selecting appropriate imaging equipment based upon well defined needs.

  15. TH-E-202-00: PET for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    PET/CT is a very important imaging tool in the management of oncology patients. PET/CT has been applied for treatment planning and response evaluation in radiation therapy. This educational session will discuss: pitfalls and remedies in PET/CT imaging for RT planning; the use of hypoxia PET imaging for radiotherapy; and PET for tumor response evaluation. The first presentation will address the issue of mis-registration between the CT and PET images in the thorax and the abdomen. We will discuss the challenges of respiratory gating and introduce an average CT technique to improve the registration for dose calculation and image-guidance in radiation therapy. The second presentation will discuss the use of hypoxia PET imaging for radiation therapy. We will discuss various hypoxia radiotracers, the choice of clinical acquisition protocol (in particular a single late static acquisition versus a dynamic acquisition), and the compartmental modeling with different transfer rate constants explained. We will demonstrate applications of hypoxia imaging for dose escalation/de-escalation in clinical trials. The last presentation will discuss the use of PET/CT for tumor response evaluation. We will discuss anatomic response assessment vs. metabolic response assessment, visual evaluation and semi-quantitative evaluation, and limitations of current PET/CT assessment. We will summarize clinical trials using PET response in guiding adaptive radiotherapy. Finally, we will summarize recent advancements in PET/CT radiomics and non-FDG PET tracers for response assessment. Learning Objectives: Identify the causes of mis-registration of CT and PET images in PET/CT, and review the strategies to remedy the issue. Understand the basics of PET imaging of tumor hypoxia (radiotracers, how PET measures the hypoxia selective uptake, imaging protocols, applications in chemo-radiation therapy). Understand the basics of dynamic PET imaging, compartmental modeling and parametric images. Understand the basics of using FDG PET/CT for tumor response evaluation. Learn about recent advancement in PET/CT radiomics and non-FDG PET tracers for response assessment. This work was supported in part by National Cancer Institute Grant R01CA172638.

  16. TH-E-202-03: PET for Tumor Response Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, W.

    PET/CT is a very important imaging tool in the management of oncology patients. PET/CT has been applied for treatment planning and response evaluation in radiation therapy. This educational session will discuss: pitfalls and remedies in PET/CT imaging for RT planning; the use of hypoxia PET imaging for radiotherapy; and PET for tumor response evaluation. The first presentation will address the issue of mis-registration between the CT and PET images in the thorax and the abdomen. We will discuss the challenges of respiratory gating and introduce an average CT technique to improve the registration for dose calculation and image-guidance in radiation therapy. The second presentation will discuss the use of hypoxia PET imaging for radiation therapy. We will discuss various hypoxia radiotracers, the choice of clinical acquisition protocol (in particular a single late static acquisition versus a dynamic acquisition), and the compartmental modeling with different transfer rate constants explained. We will demonstrate applications of hypoxia imaging for dose escalation/de-escalation in clinical trials. The last presentation will discuss the use of PET/CT for tumor response evaluation. We will discuss anatomic response assessment vs. metabolic response assessment, visual evaluation and semi-quantitative evaluation, and limitations of current PET/CT assessment. We will summarize clinical trials using PET response in guiding adaptive radiotherapy. Finally, we will summarize recent advancements in PET/CT radiomics and non-FDG PET tracers for response assessment. Learning Objectives: Identify the causes of mis-registration of CT and PET images in PET/CT, and review the strategies to remedy the issue. Understand the basics of PET imaging of tumor hypoxia (radiotracers, how PET measures the hypoxia selective uptake, imaging protocols, applications in chemo-radiation therapy). Understand the basics of dynamic PET imaging, compartmental modeling and parametric images. Understand the basics of using FDG PET/CT for tumor response evaluation. Learn about recent advancement in PET/CT radiomics and non-FDG PET tracers for response assessment. This work was supported in part by National Cancer Institute Grant R01CA172638.

  17. Monitoring the growth or decline of vegetation on mine dumps

    NASA Technical Reports Server (NTRS)

    Gilbertson, B. P. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. It was established that particular mine dumps throughout the entire test area can be detected and identified. It was also established that patterns of vegetative growth on the mine dumps can be recognized from a simple visual analysis of photographic images. Because vegetation tends to occur in patches on many mine dumps, it is unsatisfactory to classify complete dumps into categories of percentage vegetative cover. A more desirable approach is to classify the patches of vegetation themselves. The coarse resolution of conventional densitometers restricts the accuracy of this procedure, and consequently a direct analysis of ERTS CCTs is preferred. A set of computer programs was written to perform the data reading and manipulating functions required for basic CCT analysis.

  18. X-ray imaging physics for nuclear medicine technologists. Part 1: Basic principles of x-ray production.

    PubMed

    Seibert, J Anthony

    2004-09-01

    The purpose is to review in a 4-part series: (i) the basic principles of x-ray production, (ii) x-ray interactions and data capture/conversion, (iii) acquisition/creation of the CT image, and (iv) operational details of a modern multislice CT scanner integrated with a PET scanner. Advances in PET technology have led to widespread applications in diagnostic imaging and oncologic staging of disease. Combined PET/CT scanners provide the high-resolution anatomic imaging capability of CT with the metabolic and physiologic information by PET, to offer a significant increase in information content useful for the diagnostician and radiation oncologist, neurosurgeon, or other physician needing both anatomic detail and knowledge of disease extent. Nuclear medicine technologists at the forefront of PET should therefore have a good understanding of x-ray imaging physics and basic CT scanner operation, as covered by this 4-part series. After reading the first article on x-ray production, the nuclear medicine technologist will be familiar with (a) the physical characteristics of x-rays relative to other electromagnetic radiations, including gamma-rays in terms of energy, wavelength, and frequency; (b) methods of x-ray production and the characteristics of the output x-ray spectrum; (c) components necessary to produce x-rays, including the x-ray tube/x-ray generator and the parameters that control x-ray quality (energy) and quantity; (d) x-ray production limitations caused by heating and the impact on image acquisition and clinical throughput; and (e) a glossary of terms to assist in the understanding of this information.
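
    A small worked example of the energy/wavelength/frequency relations mentioned in point (a), using the textbook formulas E = h*f and lambda = c/f for a 60 keV x-ray photon (the photon energy is chosen purely for illustration):

        PLANCK_EV = 4.135667e-15      # Planck constant, eV*s
        C = 2.998e8                   # speed of light, m/s

        energy_ev = 60_000            # 60 keV photon
        freq = energy_ev / PLANCK_EV            # ~1.45e19 Hz
        wavelength_m = C / freq                 # ~2.07e-11 m (0.0207 nm)
        print(f"f = {freq:.3e} Hz, lambda = {wavelength_m:.3e} m")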

  19. Basic Media in Education.

    ERIC Educational Resources Information Center

    Harrell, John

    Intended as a guide to the use of different media for use in the classroom, this document demonstrates alternative approaches that may be taken to depicting and communicating images and concepts to others. Some basic tools and materials--including a ruler, matte knife, rubber cement, stapler, felt-tip pens, paint brushes, and lettering pens--are…

  20. Basics of Videodisc and Optical Disk Technology.

    ERIC Educational Resources Information Center

    Paris, Judith

    1983-01-01

    Outlines basic videodisc and optical disk technology describing both optical and capacitance videodisc technology. Optical disk technology is defined as a mass digital image and data storage device and briefly compared with other information storage media including magnetic tape and microforms. The future of videodisc and optical disk is…

  1. Single Particle Analysis by Combined Chemical Imaging to Study Episodic Air Pollution Events in Vienna

    NASA Astrophysics Data System (ADS)

    Ofner, Johannes; Eitenberger, Elisabeth; Friedbacher, Gernot; Brenner, Florian; Hutter, Herbert; Schauer, Gerhard; Kistler, Magdalena; Greilinger, Marion; Lohninger, Hans; Lendl, Bernhard; Kasper-Giebl, Anne

    2017-04-01

    The aerosol composition of a city like Vienna is characterized by a complex interaction of local emissions and atmospheric input on a regional and continental scale. The identification of major aerosol constituents for basic source apportionment and air quality issues requires a high analytical effort. Exceptional episodic air pollution events strongly change the typical aerosol composition of a city like Vienna on a time-scale of a few hours to several days. Analyzing the chemistry of particulate matter from these events is often hampered by the sampling time and related sample amount necessary to apply the full range of bulk analytical methods needed for chemical characterization. Additionally, morphological and single-particle features are hardly accessible. Chemical imaging has evolved into a powerful tool for image-based chemical analysis of complex samples. As a complementary technique to bulk analytical methods, chemical imaging offers a new way to study air pollution events by obtaining major aerosol constituents with single-particle features at high temporal resolution and small sample volumes. The analysis of the chemical imaging datasets is assisted by multivariate statistics, with the benefit of image-based chemical structure determination for direct aerosol source apportionment. A novel approach in chemical imaging is combined chemical imaging, or so-called multisensor hyperspectral imaging, involving elemental imaging (electron microscopy-based energy dispersive X-ray imaging), vibrational imaging (Raman micro-spectroscopy) and mass spectrometric imaging (Time-of-Flight Secondary Ion Mass Spectrometry) with subsequent combined multivariate analytics. Combined chemical imaging of precipitated aerosol particles will be demonstrated with the following examples of air pollution events in Vienna: exceptional episodic events like the transformation of Saharan dust by the impact of the city of Vienna will be discussed and compared to samples obtained at a high alpine background site (Sonnblick Observatory, Saharan dust event from April 2016). Further, chemical imaging of biological aerosol constituents of an autumnal pollen outbreak in Vienna, with background samples from nearby locations from November 2016, will demonstrate the advantages of the chemical imaging approach. Additionally, the chemical fingerprint of an exceptional air pollution event from a local emission source, caused by the demolition of a building in Vienna, will illustrate the need for multisensor imaging, especially the combined approach. The obtained chemical images will be correlated with bulk analytical results, and the benefits of combining bulk analytics with combined chemical imaging of exceptional episodic air pollution events will be discussed.

  2. Open source software in a practical approach for post processing of radiologic images.

    PubMed

    Valeri, Gianluca; Mazza, Francesco Antonino; Maggi, Stefania; Aramini, Daniele; La Riccia, Luigi; Mazzoni, Giovanni; Giovagnoni, Andrea

    2015-03-01

    The purpose of this paper is to evaluate the use of open source software (OSS) to process DICOM images. We selected 23 programs for Windows and 20 programs for Mac from 150 possible OSS programs, including DICOM viewers and various tools (converters, DICOM header editors, etc.). The programs selected all meet basic requirements such as free availability, stand-alone operation, a graphical user interface, ease of installation, and advanced features beyond simple image display. The capabilities of data import, data export, metadata handling, 2D viewing, 3D viewing, supported platforms and usability of each selected program were evaluated on a scale ranging from 1 to 10 points. Twelve programs received a score of eight or higher. Among them, five obtained a score of 9 (3D Slicer, MedINRIA, MITK 3M3, VolView, VR Render), while OsiriX received 10. OsiriX appears to be the only program able to perform all the operations taken into consideration, similar to a workstation equipped with proprietary software, allowing the analysis and interpretation of images in a simple and intuitive way. OsiriX is a DICOM PACS workstation for medical imaging and software for image processing for medical research, functional imaging, 3D imaging, confocal microscopy and molecular imaging. This application is also a good tool for teaching activities because it facilitates the attainment of learning objectives among students and other specialists.

  3. Quantum image median filtering in the spatial domain

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Liu, Xiande; Xiao, Hong

    2018-03-01

    Spatial filtering is a principal tool used in image processing for a broad spectrum of applications. Median filtering has become a prominent form of spatial filtering because of its excellent performance in noise reduction. Although filtering of quantum images in the frequency domain has been described in the literature, and there is a one-to-one correspondence between linear spatial filters and filters in the frequency domain, median filtering is a nonlinear process that cannot be achieved in the frequency domain. We therefore investigated the spatial filtering of quantum images, focusing on the design of a quantum median filter and applications in image de-noising. To this end, we first present the quantum circuits for three basic modules (Cycle Shift, Comparator, and Swap), and then design two composite modules (Sort and Median Calculation). We then construct a complete quantum circuit that implements the median filtering task and present the results of several simulation experiments on grayscale images with different noise patterns. Although the experimental results show that the proposed scheme has almost the same noise-suppression capacity as its classical counterpart, the complexity analysis shows that the proposed scheme reduces the computational complexity of the classical median filter from an exponential function of the image size n to a second-order polynomial function of n, so that the classical method can be sped up.
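
    For reference, a minimal classical median filter of the kind the quantum circuit emulates, applied with SciPy to a synthetic image with salt-and-pepper noise; the scene, noise rate, and 3x3 window are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import median_filter

        rng = np.random.default_rng(0)
        img = np.zeros((64, 64))
        img[16:48, 16:48] = 1.0                       # simple square scene
        noisy = img.copy()
        salt = rng.random(img.shape) < 0.05           # salt-and-pepper noise
        pepper = rng.random(img.shape) < 0.05
        noisy[salt], noisy[pepper] = 1.0, 0.0

        denoised = median_filter(noisy, size=3)       # 3x3 median window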

  4. Automatic detection of the breast border and nipple position on digital mammograms using genetic algorithm for asymmetry approach to detection of microcalcifications.

    PubMed

    Karnan, M; Thangavel, K

    2007-07-01

    The presence of microcalcifications in breast tissue is one of the most important signs considered by radiologists for an early diagnosis of breast cancer, which is one of the most common forms of cancer among women. In this paper, a Genetic Algorithm (GA) is proposed to automatically detect the breast border and nipple position and thereby discover suspicious regions on digital mammograms, based on asymmetries between the left and right breast images. The basic idea of the asymmetry approach is that the left and right images are aligned and subtracted to extract the suspicious regions. The proposed system consists of two steps. First, the mammogram images are enhanced using a median filter and normalized, the pectoral muscle region is excluded, and the borders of the left and right images are extracted from the binary image; the GA is then applied to refine the detected border. A figure of merit is calculated to evaluate whether the detected border is accurate, and the nipple position is likewise identified using the GA. Second, using the border points and nipple position as references, the mammogram images are aligned and subtracted to extract the suspicious regions. The algorithms were tested on 114 abnormal digitized mammograms from the Mammographic Image Analysis Society database.
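
    The heart of the asymmetry step is mirror-and-subtract. A minimal sketch, assuming the left and right images have already been aligned using the detected border and nipple landmarks; the threshold and toy data are assumptions:

    ```python
    import numpy as np

    def asymmetry_map(left, right, thresh=30.0):
        """Mirror the right image and subtract it from the left; large
        absolute differences flag candidate suspicious regions."""
        mirrored = np.fliplr(right)   # bring both breasts into the same orientation
        diff = np.abs(left.astype(float) - mirrored.astype(float))
        return diff > thresh          # boolean mask of candidate pixels

    # Toy usage with random stand-ins for aligned mammogram halves.
    left = np.random.rand(256, 256) * 255
    right = np.fliplr(left).copy()
    right[100:110, 120:130] += 80     # synthetic "asymmetric" spot
    mask = asymmetry_map(left, right)
    print(mask.sum(), "candidate pixels")
    ```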

  5. Comparative analysis of images of comet 1P/Halley in their perihelion passages in 1910 and 1986

    NASA Astrophysics Data System (ADS)

    Voelzke, Marcos Rincon

    This work is based on a systematic analysis of images of comet 1P/Halley collected during its penultimate and most recent approaches, i.e., in 1910 and in 1986. The research characterised, identified, classified, measured and compared some of the tail structures of comet 1P/Halley, such as disconnection events (DEs), wavy structures and solitons. The images illustrated in the Atlas of Comet Halley 1910 II (Donn et al., 1986), which shows the comet in its 1910 passage, were compared with the images illustrated in The International Halley Watch Atlas of Large-Scale Phenomena (Brandt et al., 1992), which shows the comet in its 1986 passage. Two onsets of DEs were identified after the perihelion passage in 1910, with an average corrected cometocentric velocity Vc of (57 ± 15) km s^-1; ten were identified after the perihelion passage in 1986, with an average corrected velocity of (130 ± 37) km s^-1. The mean corrected wavelength of the wavy structures is (1.7 ± 0.1) x 10^6 km in 1910 and (2.2 ± 0.2) x 10^6 km in 1986. The mean amplitude A of the waves is (1.4 ± 0.1) x 10^5 km in 1910 and (2.8 ± 0.5) x 10^5 km in 1986. The goals of this research are to report the results obtained from the analysis of 1P/Halley's 1910 and 1986 images, to provide empirical data for comparison, and to form the input for future physical/theoretical work.

  6. Segmentation and Quantitative Analysis of Apoptosis of Chinese Hamster Ovary Cells from Fluorescence Microscopy Images.

    PubMed

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2017-06-01

    Accurate and fast quantitative analysis of living cells from fluorescence microscopy images is useful for evaluating experimental outcomes and cell culture protocols. An algorithm is developed in this work to automatically segment and distinguish apoptotic cells from normal cells. The algorithm involves three steps consisting of two segmentation steps and a classification step. The segmentation steps are: (i) a coarse segmentation, combining a range filter with a marching square method, is used as a prefiltering step to provide the approximate positions of cells within a two-dimensional matrix used to store cells' images and the count of the number of cells for a given image; and (ii) a fine segmentation step using the Active Contours Without Edges method is applied to the boundaries of cells identified in the coarse segmentation step. Although this basic two-step approach provides accurate edges when the cells in a given image are sparsely distributed, the occurrence of clusters of cells in high cell density samples requires further processing. Hence, a novel algorithm for clusters is developed to identify the edges of cells within clusters and to approximate their morphological features. Based on the segmentation results, a support vector machine classifier that uses three morphological features: the mean value of pixel intensities in the cellular regions, the variance of pixel intensities in the vicinity of cell boundaries, and the lengths of the boundaries, is developed for distinguishing apoptotic cells from normal cells. The algorithm is shown to be efficient in terms of computational time, quantitative analysis, and differentiation accuracy, as compared with the use of the active contours method without the proposed preliminary coarse segmentation step.
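
    A minimal sketch of the coarse-to-fine idea with scikit-image and SciPy; the range-filter size, thresholds, iteration count and toy image are assumptions, and the paper's cluster-splitting and SVM steps are omitted:

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage import measure, segmentation

    # Toy fluorescence image: bright blobs on a dark background (assumption).
    img = np.zeros((128, 128))
    img[30:50, 30:50] = 1.0
    img[80:100, 60:85] = 0.8
    img += 0.05 * np.random.rand(*img.shape)

    # (i) Coarse step: a range filter (local max - local min) highlights
    # regions with intensity variation, i.e. cells; marching squares
    # (find_contours) then yields approximate positions and a cell count.
    rng = ndi.maximum_filter(img, size=5) - ndi.minimum_filter(img, size=5)
    coarse = rng > 0.3
    contours = measure.find_contours(coarse.astype(float), 0.5)
    print("approximate cell count:", len(contours))

    # (ii) Fine step: Active Contours Without Edges (Chan-Vese) refines
    # the boundaries found in the coarse step.
    fine = segmentation.morphological_chan_vese(img, 50, init_level_set=coarse)
    ```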

  7. 75 FR 77885 - Government-Owned Inventions; Availability for Licensing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-14

    ... of federally-funded research and development. Foreign patent applications are filed on selected... applications. Software System for Quantitative Assessment of Vasculature in Three Dimensional Images... three dimensional vascular networks from medical and basic research images. Deregulation of angiogenesis...

  8. Mapping on complex neutrosophic soft expert sets

    NASA Astrophysics Data System (ADS)

    Al-Quran, Ashraf; Hassan, Nasruddin

    2018-04-01

    We introduce mappings on complex neutrosophic soft expert sets. Further, we investigate the basic operations and other related properties of the complex neutrosophic soft expert image and the complex neutrosophic soft expert inverse image of complex neutrosophic soft expert sets.

  9. Using Digital Imaging in Classroom and Outdoor Activities.

    ERIC Educational Resources Information Center

    Thomasson, Joseph R.

    2002-01-01

    Explains how to use digital cameras and related basic equipment during indoor and outdoor activities. Uses digital imaging in general botany class to identify unknown fungus samples. Explains how to select a digital camera and other necessary equipment. (YDS)

  10. The relationship between immediate relevant basic science knowledge and clinical knowledge: physiology knowledge and transthoracic echocardiography image interpretation.

    PubMed

    Nielsen, Dorte Guldbrand; Gotzsche, Ole; Sonne, Ole; Eika, Berit

    2012-10-01

    Two major views on the relationship between basic science knowledge and clinical knowledge stand out: the two-world view, which sees basic science and clinical science as two separate knowledge bases, and the encapsulated knowledge view, which holds that basic science knowledge plays its role encapsulated within clinical knowledge. However, recent research has implied that a more complex relationship between the two knowledge bases exists. In this study, we explore the relationship between immediately relevant basic science (physiology) knowledge and clinical knowledge within a specific domain of medicine (echocardiography). Twenty-eight medical students in their third year and 45 physicians (15 interns, 15 cardiology residents and 15 cardiology consultants) took a multiple-choice test of physiology knowledge. The physicians also viewed images of a transthoracic echocardiography (TTE) examination and completed a checklist of possible pathologies found. A total score was calculated for each participant on the physiology test and, for the physicians, on the TTE checklist as well. Consultants scored significantly higher on the physiology test than did medical students and interns. A significant correlation between physiology test scores and TTE checklist scores was found for the cardiology residents only. Basic science knowledge of immediate relevance for daily clinical work expands with increased work experience within a specific domain. Consultants showed no relationship between physiology knowledge and TTE interpretation, indicating that experts do not use basic science knowledge in routine daily practice, although knowledge of immediate relevance remains ready for use.

  11. Hybrid ANN optimized artificial fish swarm algorithm based classifier for classification of suspicious lesions in breast DCE-MRI

    NASA Astrophysics Data System (ADS)

    Janaki Sathya, D.; Geetha, K.

    2017-12-01

    Automatic mass or lesion classification systems are developed to aid in distinguishing between malignant and benign lesions present in breast DCE-MR images; to be successful for clinical use, such systems need to improve both the sensitivity and the specificity of DCE-MR image interpretation. A new classifier (a set of features together with a classification method) based on artificial neural networks trained using the artificial fish swarm optimization (AFSO) algorithm is proposed in this paper. The basic idea behind the proposed classifier is to use the AFSO algorithm to search for the best combination of synaptic weights for the neural network. An optimal set of features based on statistical textural features is presented. The experimental results confirm that the resulting classifier performs better than comparable classifiers reported in the literature, demonstrating that improvements in both sensitivity and specificity are possible through automated image analysis.
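
    A minimal sketch of the underlying idea, searching weight space with a population-based move rule; the tiny network, fitness function and simplified swarm step (a generic move toward the best individual, not the full AFSO operators) are assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 2 features -> binary label (stand-in for textural features).
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)

    def forward(w, X):
        """Tiny 2-4-1 network; w packs all 17 synaptic weights."""
        W1, b1 = w[:8].reshape(2, 4), w[8:12]
        W2, b2 = w[12:16], w[16]
        h = np.tanh(X @ W1 + b1)
        return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

    def fitness(w):
        p = forward(w, X)
        return -np.mean((p - y) ** 2)   # higher is better

    # Population-based search over weight vectors: each "fish" moves
    # toward the current best, plus exploratory noise, with elitism.
    pop = rng.normal(size=(30, 17))
    for _ in range(200):
        fit = np.array([fitness(w) for w in pop])
        best = pop[fit.argmax()]
        pop = pop + 0.2 * (best - pop) + 0.1 * rng.normal(size=pop.shape)
        pop[0] = best

    print("train accuracy:", np.mean((forward(best, X) > 0.5) == (y > 0.5)))
    ```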

  12. A cloud-based system for automatic glaucoma screening.

    PubMed

    Fengshou Yin; Damon Wing Kee Wong; Ying Quan; Ai Ping Yow; Ngan Meng Tan; Gopalakrishnan, Kavitha; Beng Hai Lee; Yanwu Xu; Zhuo Zhang; Jun Cheng; Jiang Liu

    2015-08-01

    In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases, including glaucoma. However, these systems are usually standalone software with basic functions only, which limits their use on a large scale. In this paper, we introduce an online cloud-based system for automatic glaucoma screening based on medical image-based pattern classification technologies. It is designed in a hybrid cloud pattern to offer both accessibility and enhanced security. Raw data, including the patient's medical condition and fundus image, and the resulting medical reports are collected and distributed through the public cloud tier. In the private cloud tier, automatic analysis and assessment of colour retinal fundus images are performed. The ubiquitous, anywhere-access nature of the system through the cloud platform facilitates a more efficient and cost-effective means of glaucoma screening, allowing the disease to be detected earlier and enabling early intervention and more efficient disease management.

  13. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

    The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capability to produce the images themselves. This is an ironic paradox: on the one hand, the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before; on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigation and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

  14. The challenge of on-tissue digestion for MALDI MSI- a comparison of different protocols to improve imaging experiments.

    PubMed

    Diehl, Hanna C; Beine, Birte; Elm, Julian; Trede, Dennis; Ahrens, Maike; Eisenacher, Martin; Marcus, Katrin; Meyer, Helmut E; Henkel, Corinna

    2015-03-01

    Mass spectrometry imaging (MSI) has become a powerful and successful tool in the context of biomarker detection, especially in recent years. This emerging technique is based on the combination of histological information from a tissue and its corresponding spatially resolved mass spectrometric information. The identification of differentially expressed protein peaks between samples is still the method's bottleneck. Peptide MSI is therefore closer to the final goal of identification than protein MSI, since peptides are easier to measure than proteins. Nevertheless, the processing of peptide imaging samples is challenging due to experimental complexity. To address this issue, a method development study for peptide MSI using cryoconserved and formalin-fixed paraffin-embedded (FFPE) rat brain tissue is provided. Different digestion times, matrices, and proteases were tested to define an optimal workflow for peptide MSI. All practical experiments were done in triplicate and analyzed with the SCiLS Lab software, using structures derived from myelin basic protein (MBP) peaks, principal component analysis (PCA) and probabilistic latent semantic analysis (pLSA) to rate the quality of the experiments. Blinded evaluation of countable structures in the datasets was performed by three individuals. Such an extensive method development for peptide matrix-assisted laser desorption/ionization (MALDI) imaging experiments has not been performed so far, and the resulting problems and consequences are analyzed and discussed.

  15. An open-source, FireWire camera-based, Labview-controlled image acquisition system for automated, dynamic pupillometry and blink detection.

    PubMed

    de Souza, John Kennedy Schettino; Pinto, Marcos Antonio da Silva; Vieira, Pedro Gabrielle; Baron, Jerome; Tierra-Criollo, Carlos Julio

    2013-12-01

    The dynamic, accurate measurement of pupil size is extremely valuable for studying a large number of neuronal functions and dysfunctions. Despite tremendous and well-documented progress in image processing techniques for estimating pupil parameters, comparatively little work has been reported on the practical hardware issues involved in designing image acquisition systems for pupil analysis. Here, we describe and validate the basic features of such a system, which is based on a relatively compact, off-the-shelf, low-cost FireWire digital camera. We successfully implemented two configurable video recording modes: a continuous mode and an event-triggered mode. The interoperability of the whole system is guaranteed by a set of modular software components hosted on a personal computer and written in Labview. An offline analysis suite of image processing algorithms for automatically estimating pupillary and eyelid parameters was assessed using data obtained in human subjects. Our benchmark results show that such measurements can be done in a temporally precise way at a sampling frequency of up to 120 Hz and with an estimated maximum spatial resolution of 0.03 mm. Our software is made available free of charge to the scientific community, allowing end users to either use the software as is or modify it to suit their own needs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  16. On the role of spatial phase and phase correlation in vision, illusion, and cognition

    PubMed Central

    Gladilin, Evgeny; Eils, Roland

    2015-01-01

    Numerous findings indicate that spatial phase carries important cognitive information. Distortion of phase affects the topology of edge structures and makes images unrecognizable. In turn, appropriately phase-structured patterns give rise to various illusions of virtual image content and apparent motion. Despite a large body of phenomenological evidence, not much is known yet about the role of phase information in the neural mechanisms of visual perception and cognition. Here, we are concerned with analysis of the role of spatial phase in computational and biological vision, the emergence of visual illusions, and pattern recognition. We hypothesize that the fundamental importance of phase information for invariant retrieval of structural image features and motion detection promoted the development of phase-based mechanisms of neural image processing in the course of the evolution of biological vision. Using an extension of the Fourier phase correlation technique, we show that core functions of the visual system such as motion detection and pattern recognition can be facilitated by the same basic mechanism. Our analysis suggests that the emergence of visual illusions can be attributed to the presence of coherently phase-shifted repetitive patterns as well as to the effects of acuity compensation by saccadic eye movements. We speculate that biological vision relies on perceptual mechanisms effectively similar to phase correlation, and predict neural features of visual pattern (dis)similarity that can be used for experimental validation of our hypothesis of “cognition by phase correlation.” PMID:25954190

  17. Gold nanoparticle contrast agents in advanced X-ray imaging technologies.

    PubMed

    Ahn, Sungsook; Jung, Sung Yong; Lee, Sang Joon

    2013-05-17

    Recently, there has been significant progress in the field of soft- and hard-X-ray imaging for a wide range of applications, both technically and scientifically, via developments in sources, optics and imaging methodologies. While one community is pursuing extensive applications of available X-ray tools, others are investigating improvements in techniques, including new optics, higher spatial resolutions and brighter compact sources. For increased image quality and more exquisite investigation of characteristic biological phenomena, contrast agents have been employed extensively in imaging technologies. Heavy metal nanoparticles are excellent absorbers of X-rays and can offer excellent improvements in medical diagnosis and X-ray imaging. In this context, the role of gold (Au) is important for advanced X-ray imaging applications. Au has a long history in a wide range of medical applications and exhibits characteristic interactions with X-rays. Therefore, Au can offer a particular advantage as a tracer and a contrast enhancer in X-ray imaging technologies by sensing the variation in X-ray attenuation in a given sample volume. This review summarizes the basics of X-ray imaging, from device set-up to imaging technologies. It then covers recent studies in the development of X-ray imaging techniques utilizing gold nanoparticles (AuNPs) and their relevant applications, including two- and three-dimensional biological imaging, dynamical processes in living systems, single-cell imaging, quantitative analysis of circulatory systems, and so on. In addition to conventional medical applications, various novel research areas have been developed and are expected to be further developed through AuNP-based X-ray imaging technologies.

  18. Digital storage and analysis of color Doppler echocardiograms

    NASA Technical Reports Server (NTRS)

    Chandra, S.; Thomas, J. D.

    1997-01-01

    Color Doppler flow mapping has played an important role in clinical echocardiography. Most of the clinical work, however, has been primarily qualitative. Although qualitative information is very valuable, there is considerable quantitative information stored within the velocity map that has not been extensively exploited so far. Recently, many researchers have shown interest in using the encoded velocities to address clinical problems such as quantification of valvular regurgitation, calculation of cardiac output, and characterization of ventricular filling. In this article, we review some basic physics and engineering aspects of color Doppler echocardiography, as well as the drawbacks of trying to retrieve velocities from videotape data. Digital storage, which plays a critical role in performing quantitative analysis, is discussed in some detail, with special attention to velocity encoding in DICOM 3.0 (the medical image storage standard) and the use of digital compression. Lossy compression can considerably reduce file size with minimal loss of (mostly redundant) information; this is critical for digital storage because of the enormous amount of data generated (a 10-minute study could require 18 gigabytes of storage capacity). Lossy JPEG compression and its impact on quantitative analysis have been studied, showing that images compressed at 27:1 using the JPEG algorithm compare favorably with directly digitized video images, the current gold standard. Some potential applications of these velocities in analyzing proximal convergence zones and mitral inflow, and some areas of future development, are also discussed in the article.

  19. Advanced forensic validation for human spermatozoa identification using SPERM HY-LITER™ Express with quantitative image analysis.

    PubMed

    Takamura, Ayari; Watanabe, Ken; Akutsu, Tomoko

    2017-07-01

    Identification of human semen is indispensable for the investigation of sexual assaults. Fluorescence staining methods using commercial kits, such as the SPERM HY-LITER™ series, have been useful for detecting human sperm via strong fluorescence. These kits have been examined from various forensic aspects. However, for lack of evaluation methods, these studies provided neither objective, quantitative descriptions of the results nor clear criteria for the decisions reached. In addition, the variety of validations was considerably limited. In this study, we conducted more advanced validations of SPERM HY-LITER™ Express using our established image analysis method. This method enabled objective and specific identification of fluorescent sperm spots and quantitative comparison of sperm detection performance under complex experimental conditions. For body fluid mixtures, we examined interference with the fluorescence staining from other body fluid components. Effects of sample decomposition were simulated under high-humidity and high-temperature conditions. Semen with quite low sperm concentrations, such as azoospermia and oligospermia samples, represented the most challenging cases for application of the kit. Finally, the tolerance of the kit against various acidic and basic environments was analyzed. The validations herein provide practical information on the application of the SPERM HY-LITER™ Express kit that was previously unobtainable. Moreover, the versatility of our image analysis method in various complex cases was demonstrated.

  1. Nondestructive imaging of fiber structure in articular cartilage using optical polarization tractography

    NASA Astrophysics Data System (ADS)

    Yao, Xuan; Wang, Yuanbo; Ravanfar, Mohammadreza; Pfeiffer, Ferris M.; Duan, Dongsheng; Yao, Gang

    2016-11-01

    Collagen fiber orientation plays an important role in determining the structure and function of articular cartilage. However, there is currently a lack of nondestructive means to image the fiber orientation from the cartilage surface. The purpose of this study was to investigate whether the newly developed optical polarization tractography (OPT) can image fiber structure in articular cartilage. OPT was applied to obtain the depth-dependent fiber orientation in fresh articular cartilage samples obtained from porcine phalanges. For comparison, we also obtained the collagen fiber orientation in the superficial zone of the cartilage using the established split-line method, quantifying the direction of each split-line with image processing. The orientations measured with OPT agreed well with those obtained from the split-line method. A correlation analysis of a total of 112 split-lines showed a coefficient of determination (R2) greater than 0.9 between the split-line results and OPT measurements obtained at depths between 40 and 108 μm. In addition, the thickness of the superficial layer can be assessed from the birefringence images obtained with OPT. These results indicate that OPT provides a nondestructive way to image the collagen fiber structure in articular cartilage. This technology may be valuable for both basic cartilage research and clinical orthopedic applications.

  2. Image characterization metrics for muon tomography

    NASA Astrophysics Data System (ADS)

    Luo, Weidong; Lehovich, Andre; Anashkin, Edward; Bai, Chuanyong; Kindem, Joel; Sossong, Michael; Steiger, Matt

    2014-05-01

    Muon tomography uses naturally occurring cosmic rays to detect nuclear threats in containers. Currently there are no systematic image characterization metrics for muon tomography. We propose a set of image characterization methods to quantify the imaging performance of muon tomography, including tests of spatial resolution, uniformity, contrast, signal-to-noise ratio (SNR) and vertical smearing. Simulated phantom data and analysis methods were developed to evaluate the applicability of the metrics. Spatial resolution was determined as the FWHM of the point spread functions along the X, Y and Z axes for 2.5 cm tungsten cubes. Uniformity was measured by drawing a volume of interest (VOI) within a large water phantom and was defined as the standard deviation of voxel values divided by the mean voxel value. Contrast was defined as the peak signals of a set of tungsten cubes divided by the mean voxel value of the water background. SNR was defined as the peak signals of the cubes divided by the standard deviation (noise) of the water background. Vertical smearing, i.e. vertical thickness blurring along the zenith axis for a set of 2 cm thick tungsten plates, was defined as the FWHM of the vertical spread function of a plate. These image metrics provide a useful tool for quantifying the basic imaging properties of muon tomography.
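
    A minimal sketch of how these metric definitions might be computed from a reconstructed volume; the synthetic volume, VOI placement and cube position are assumptions:

    ```python
    import numpy as np

    # Stand-in reconstructed volume: water background plus a tungsten cube.
    vol = np.random.normal(1.0, 0.1, size=(64, 64, 64))
    vol[20:25, 20:25, 20:25] = 8.0          # synthetic tungsten cube

    # Background VOI drawn inside the water region.
    bg = vol[40:60, 40:60, 40:60]

    uniformity = bg.std() / bg.mean()       # std of voxels / mean voxel value
    peak = vol[20:25, 20:25, 20:25].max()   # peak signal of the cube
    contrast = peak / bg.mean()             # peak / mean background
    snr = peak / bg.std()                   # peak / background noise

    print(f"uniformity={uniformity:.3f}, contrast={contrast:.2f}, snr={snr:.1f}")
    ```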

  3. PCANet: A Simple Deep Learning Baseline for Image Classification?

    PubMed

    Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi

    2015-12-01

    In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
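
    A minimal sketch of the first PCANet stage, learning convolution filters as the leading principal components of mean-removed image patches; the patch size, filter count and random stand-in images are assumptions:

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    k, L1 = 7, 8                       # patch size and number of filters
    imgs = [np.random.rand(32, 32) for _ in range(10)]  # stand-in images

    # Collect all k x k patches and remove each patch's mean (as in PCANet).
    patches = []
    for im in imgs:
        for i in range(im.shape[0] - k + 1):
            for j in range(im.shape[1] - k + 1):
                p = im[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())
    P = np.array(patches)              # (num_patches, k*k)

    # PCA filters = leading right singular vectors of the patch matrix.
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    filters = Vt[:L1].reshape(L1, k, k)

    # Stage-1 feature maps: convolve an image with each learned filter.
    maps = [convolve2d(imgs[0], f, mode="same") for f in filters]
    print(len(maps), maps[0].shape)
    ```

    The later hashing and histogram stages of PCANet operate on these maps; only the filter-learning step is sketched here.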

  4. The VLA Sky Survey

    NASA Astrophysics Data System (ADS)

    Lacy, Mark; VLASS Survey Team, VLASS Survey Science Group

    2018-01-01

    The VLA Sky Survey (VLASS), which began in September 2017, is a seven-year project to image the entire sky north of declination -40 degrees in three epochs. The survey is being carried out in I, Q and U polarization at a frequency of 2-4 GHz and a resolution of 2.5 arcseconds, with the epochs separated by 32 months. Raw data from the survey, along with basic "quicklook" images, are made freely available shortly after observation. Within a few months, NRAO will begin making available further basic data products, including refined images and source lists. In this talk I shall describe the science goals and methodology of the survey, the current survey status, and some early results, along with plans for collaborations with external groups to produce enhanced, high-level data products.

  5. PCIPS 2.0: Powerful multiprofile image processing implemented on PCs

    NASA Technical Reports Server (NTRS)

    Smirnov, O. M.; Piskunov, N. E.

    1992-01-01

    Over the years, the processing power of personal computers has steadily increased. Now, 386- and 486-based PCs are fast enough for many image processing applications, and inexpensive enough even for amateur astronomers. PCIPS is an image processing system based on these platforms that was designed to satisfy a broad range of data analysis needs, while requiring minimum hardware and providing maximum expandability. It will run (albeit at a slow pace) even on an 80286 with 640K memory, but will take full advantage of bigger memory and faster CPUs. Because the actual image processing is performed by external modules, the system can be easily upgraded by the user for all sorts of scientific data analysis. PCIPS supports large-format 1D and 2D images in any numeric type from 8-bit integer to 64-bit floating point. The images can be displayed, overlaid, printed, and any part of the data examined via an intuitive graphical user interface that employs buttons, pop-up menus, and a mouse. PCIPS automatically converts images between different types and sizes to satisfy the requirements of various applications. PCIPS features an API that lets users develop custom applications in C or FORTRAN. While doing so, a programmer can concentrate on the actual data processing, because PCIPS assumes responsibility for accessing images and interacting with the user. This also ensures that all applications, even custom ones, have a consistent and user-friendly interface. The API is compatible with factory programming, a metaphor for constructing image processing procedures that will be implemented in future versions of the system. Several application packages were created under PCIPS. The basic package includes elementary arithmetic and statistics, geometric transformations, and import/export in various formats (FITS, binary, ASCII, and GIF). The CCD processing package and the spectral analysis package were successfully used to reduce spectra from the Nordic Telescope at La Palma. A photometry package is also available, and other packages are being developed. A multitasking version of PCIPS that utilizes the factory programming concept is currently under development. This version will remain compatible (on the source code level) with existing application packages and custom applications.

  6. Visualization: A Tool for Enhancing Students' Concept Images of Basic Object-Oriented Concepts

    ERIC Educational Resources Information Center

    Cetin, Ibrahim

    2013-01-01

    The purpose of this study was twofold: to investigate students' concept images about class, object, and their relationship and to help them enhance their learning of these notions with a visualization tool. Fifty-six second-year university students participated in the study. To investigate his/her concept images, the researcher developed a survey…

  7. Basic Research on Three-Dimensional (3D) Electromagnetic (EM) Methods for Imaging the Flow of Organic Fluids in the Subsurface.

    DTIC Science & Technology

    1997-04-30

    Currently there are no systems available which allow for economical and accurate subsurface imaging of remediation sites. In some cases, high...system to address this need. This project has been very successful in showing a promising new direction for high resolution subsurface imaging. Our

  8. Photogrammetry Toolbox Reference Manual

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Burner, Alpheus W.

    2014-01-01

    Specialized photogrammetric and image processing MATLAB functions useful for wind tunnel and other ground-based testing of aerospace structures are described. These functions include single view and multi-view photogrammetric solutions, basic image processing to determine image coordinates, 2D and 3D coordinate transformations and least squares solutions, spatial and radiometric camera calibration, epipolar relations, and various supporting utility functions.
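
    As one example of the least-squares solutions such a toolbox provides, a 2D similarity-transform fit between two image coordinate sets, sketched here in Python rather than the toolbox's MATLAB; the point sets are made up:

    ```python
    import numpy as np

    def fit_similarity(src, dst):
        """Least-squares 2D similarity transform (scale, rotation,
        translation) mapping src points onto dst points."""
        # Model: x' = a*x - b*y + tx, y' = b*x + a*y + ty, which is
        # linear in (a, b, tx, ty) with a = s*cos(t), b = s*sin(t).
        n = len(src)
        A = np.zeros((2 * n, 4))
        A[0::2] = np.column_stack([src[:, 0], -src[:, 1],
                                   np.ones(n), np.zeros(n)])
        A[1::2] = np.column_stack([src[:, 1], src[:, 0],
                                   np.zeros(n), np.ones(n)])
        x, *_ = np.linalg.lstsq(A, dst.ravel(), rcond=None)
        a, b, tx, ty = x
        return np.array([[a, -b], [b, a]]), np.array([tx, ty])

    # Made-up points and their rotated (90 deg), scaled (2x), shifted images.
    src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    dst = 2.0 * src @ np.array([[0.0, -1.0], [1.0, 0.0]]).T + 5.0
    R, t = fit_similarity(src, dst)
    print(R, t)
    ```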

  9. Tse computers. [Chinese pictograph character binary image processor design for high speed applications

    NASA Technical Reports Server (NTRS)

    Strong, J. P., III

    1973-01-01

    Tse computers have the potential of operating four or five orders of magnitude faster than present digital computers. The computers of the new design use binary images as their basic computational entity. The word 'tse' is the transliteration of the Chinese word for 'pictograph character.' Tse computers are large collections of devices that perform logical operations on binary images. The operations on binary images are to be performed over the entire image simultaneously.

  10. Spin echo SPI methods for quantitative analysis of fluids in porous media.

    PubMed

    Li, Linqing; Han, Hui; Balcom, Bruce J

    2009-06-01

    Fluid density imaging is highly desirable in a wide variety of porous media measurements. The SPRITE class of MRI methods has proven to be robust and general in its ability to generate density images in porous media; however, the short encoding times required, with correspondingly high magnetic field gradient strengths and filter widths, and low flip angle RF pulses, yield sub-optimal S/N images, especially at low static field strength. This paper explores two implementations of pure phase encode spin echo 1D imaging, with application to a proposed new petroleum reservoir core analysis measurement. In the first implementation of the pulse sequence, we modify the spin echo single point imaging (SE-SPI) technique to acquire the k-space origin data point, with a near-zero evolution time, from the free induction decay (FID) following a 90 degree excitation pulse. Subsequent k-space data points are acquired by separately phase encoding individual echoes in a multi-echo acquisition. T2 attenuation of the echo train yields an image convolution which causes blurring. The T2 blur effect is moderate for porous media with T2 lifetime distributions longer than 5 ms. As a robust, high S/N, and fast 1D imaging method, this technique is highly complementary to SPRITE techniques for the quantitative analysis of fluid content in porous media. In the second implementation of the SE-SPI pulse sequence, modification of the basic measurement permits fast determination of spatially resolved T2 distributions in porous media by separately phase encoding each echo in a multi-echo CPMG pulse train. An individual T2-weighted image may be acquired from each echo, and the echo time (TE) of each T2-weighted image may be reduced to 500 μs or less. These profiles can be fit to extract a T2 distribution from each pixel employing a variety of standard inverse Laplace transform methods. Fluid content 1D images are produced as an essential by-product of determining the spatially resolved T2 distribution, and these 1D images do not suffer from T2-related blurring. The above SE-SPI measurements are combined to generate 1D images of the local saturation and the T2 distribution as a function of saturation, upon centrifugation of petroleum reservoir core samples. The logarithmic mean T2 is observed to shift linearly with water saturation. This new reservoir core analysis measurement may provide a valuable calibration of the Coates equation for irreducible water saturation, which has been widely implemented in NMR well logging measurements.
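
    The per-pixel T2 inversion referred to above is typically posed as a non-negative least-squares fit of a multi-exponential kernel. A minimal sketch, where the echo times, T2 grid and synthetic decay are assumptions, and real inversions usually add Tikhonov regularization for stability:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Echo train times and a logarithmic grid of candidate T2 values.
    te = np.arange(1, 65) * 0.5e-3            # 64 echoes, TE = 0.5 ms
    t2_grid = np.logspace(-3.5, 0, 100)       # ~0.3 ms ... 1 s

    # Kernel matrix: K[i, j] = exp(-te_i / T2_j).
    K = np.exp(-te[:, None] / t2_grid[None, :])

    # Synthetic pixel decay: two T2 populations plus noise.
    signal = 0.7 * np.exp(-te / 0.01) + 0.3 * np.exp(-te / 0.1)
    signal += 0.01 * np.random.randn(te.size)

    # Non-negative least squares yields a discrete T2 distribution.
    dist, _ = nnls(K, signal)
    t2_lm = np.exp(np.sum(dist * np.log(t2_grid)) / dist.sum())
    print("log-mean T2 ~ %.1f ms" % (1e3 * t2_lm))
    ```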

  11. 75 FR 77882 - Government-Owned Inventions; Availability for Licensing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-14

    ... of federally-funded research and development. Foreign patent applications are filed on selected... applications. Software System for Quantitative Assessment of Vasculature in Three Dimensional Images... vascular networks from medical and basic research images. Deregulation of angiogenesis plays a major role...

  12. Optical coherence tomography: A guide to interpretation of common macular diseases

    PubMed Central

    Bhende, Muna; Shetty, Sharan; Parthasarathy, Mohana Kuppuswamy; Ramya, S

    2018-01-01

    Optical coherence tomography is a quick, noninvasive and reproducible imaging tool for macular lesions and has become an essential part of retina practice. This review addresses the common protocols for imaging the macula, the basics of image interpretation, the features of common macular disorders with clues to differentiate mimickers, and an introduction to choroidal imaging. It includes case examples as well as a practical algorithm for interpretation. PMID:29283118

  13. Tutorial on photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Zhou, Yong; Yao, Junjie; Wang, Lihong V.

    2016-06-01

    Photoacoustic tomography (PAT) has become one of the fastest growing fields in biomedical optics. Unlike pure optical imaging, such as confocal microscopy and two-photon microscopy, PAT employs acoustic detection to image optical absorption contrast with high-resolution deep into scattering tissue. So far, PAT has been widely used for multiscale anatomical, functional, and molecular imaging of biological tissues. We focus on PAT's basic principles, major implementations, imaging contrasts, and recent applications.

  14. T1ρ magnetic resonance: basic physics principles and applications in knee and intervertebral disc imaging.

    PubMed

    Wáng, Yì-Xiáng J; Zhang, Qinwei; Li, Xiaojuan; Chen, Weitian; Ahuja, Anil; Yuan, Jing

    2015-12-01

    T1ρ relaxation time provides a new contrast mechanism that differs from T1- and T2-weighted contrast, and is useful for studying low-frequency motional processes and chemical exchange in biological tissues. T1ρ imaging can be performed in the forms of T1ρ-weighted imaging, T1ρ mapping and T1ρ dispersion. T1ρ imaging, particularly at low spin-lock frequency, is sensitive to B0 and B1 inhomogeneity. Various composite spin-lock pulses have been proposed to alleviate the influence of field inhomogeneity and so reduce banding-like spin-lock artifacts. T1ρ imaging can be specific absorption rate (SAR) intensive and time consuming. Efforts to address these issues and speed up data acquisition are being explored to facilitate wider clinical application. This paper reviews the basic physics principles of T1ρ imaging, as well as its application to cartilage and intervertebral disc imaging. Compared with the more established T2 relaxation time, T1ρ has been shown to provide more sensitive detection of proteoglycan (PG) loss at early stages of cartilage degeneration. T1ρ has also been shown to provide more sensitive evaluation of annulus fibrosus (AF) degeneration of the discs.

  15. Analysis of regional radiotherapy dosimetry audit data and recommendations for future audits

    PubMed Central

    Palmer, A; Mzenda, B; Kearton, J; Wills, R

    2011-01-01

    Objectives: Regional interdepartmental dosimetry audits within the UK provide basic assurance of the dosimetric accuracy of radiotherapy treatments. Methods: This work reviews several years of audit results from the South East Central audit group, including megavoltage (MV) and kilovoltage (kV) photons, electrons and iodine-125 seeds. Results: Apart from some minor systematic errors that were resolved, the results of all audits have been within protocol tolerances, confirming the long-term stability and agreement of basic radiation dosimetric parameters between centres in the audit region. There is some evidence of improvement in radiation dosimetry with the adoption of newer codes of practice. Conclusion: The value of current audit methods and the limitations of peer-to-peer auditing are discussed, particularly the influence of the audit schedule on the results obtained where no "gold standard" exists. Recommendations are made for future audits, including an essential requirement to maintain the monitoring of fundamental dosimetry, such as MV photon and electron output; audits must also be developed to include new treatment technologies such as image-guided radiotherapy and to address the most common sources of error in radiotherapy. PMID:21159805

  16. Accuracy Analysis on Large Blocks of High Resolution Images

    NASA Technical Reports Server (NTRS)

    Passini, Richardo M.

    2007-01-01

    Although high-frequency attitude effects are removed at the time of basic image generation, low-frequency attitude (yaw) effects are still present in the form of affinity/angular affinity. These are effectively removed by additional parameters. Bundle block adjustment based on properly weighted ephemeris/attitude quaternions (BBABEQ) is not enough to remove the systematic effects. Moreover, due to the narrow FOV of HRSI, position and attitude are highly correlated, making it almost impossible to separate and remove their systematic effects without extending the geometric model (self-calibration). The systematic effects become evident as an apparent increase in accuracy (in terms of RMSE at GCPs) for looser and more relaxed ground control, at the expense of large and strong block deformation with large residuals at check points: the systematic errors are then more freely distributed and their effects propagate all over the block.

  17. A Simple Method Based on the Application of a CCD Camera as a Sensor to Detect Low Concentrations of Barium Sulfate in Suspension

    PubMed Central

    de Sena, Rodrigo Caciano; Soares, Matheus; Pereira, Maria Luiza Oliveira; da Silva, Rogério Cruz Domingues; do Rosário, Francisca Ferreira; da Silva, Joao Francisco Cajaiba

    2011-01-01

    The development of a simple, rapid and low-cost method based on video image analysis, aimed at the detection of low concentrations of precipitated barium sulfate, is described. The proposed system is basically composed of a webcam with a CCD sensor and a conventional dichroic lamp. For this purpose, software for processing and analyzing the digital images based on the RGB (red, green and blue) color system was developed. The proposed method showed very good repeatability and linearity and also presented higher sensitivity than the standard turbidimetric method. The developed method is presented as a simple alternative for future applications in the study of precipitation of inorganic salts and also for detecting the crystallization of organic compounds. PMID:22346607
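
    A minimal sketch of the kind of RGB analysis such a system might perform, reading one webcam frame with OpenCV and averaging channel intensities in a region of interest; the camera index, ROI coordinates and the calibration step are assumptions:

    ```python
    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)        # webcam with CCD sensor (index assumed)
    ret, frame = cap.read()          # one BGR frame
    cap.release()
    if not ret:
        raise RuntimeError("no frame captured")

    # Region of interest over the illuminated cell (coordinates assumed).
    roi = frame[100:200, 150:250]

    # Mean intensity per channel; scattering from suspended BaSO4
    # particles raises the measured brightness relative to a blank.
    b, g, r = [roi[:, :, c].astype(float).mean() for c in range(3)]
    print(f"mean RGB = ({r:.1f}, {g:.1f}, {b:.1f})")

    # A calibration curve (mean intensity vs. known concentration) would
    # then convert this reading into a BaSO4 concentration estimate.
    ```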

  18. The application of dam break monitoring based on BJ-2 images

    NASA Astrophysics Data System (ADS)

    Cui, Yan; Li, Suju; Wu, Wei; Liu, Ming

    2018-03-01

    Flood is one of the major disasters in China. During the flood season, heavy and widespread rainstorms occur in the eastern part of China, and where the flood control capacity of rivers is low the resulting flood disasters are abrupt and cause large direct economic losses. In this paper, based on BJ-2 high spatio-temporal resolution remote sensing data, reference images, the 30-meter Global Land Cover Dataset (GlobeLand30) and basic geographic data, a dam break monitoring model is formed, comprising a BJ-2 data processing sub-model, a flood inundation range monitoring sub-model, a dam break change monitoring sub-model and a crop inundation monitoring sub-model. A case analysis for Poyang County, Jiangxi Province, on 20 June 2016 shows that the model has high precision and can monitor the flood inundation range, the crop inundation range and the breach.

  19. Research and implementation of SATA protocol link layer based on FPGA

    NASA Astrophysics Data System (ADS)

    Liu, Wen-long; Liu, Xue-bin; Qiang, Si-miao; Yan, Peng; Wen, Zhi-gang; Kong, Liang; Liu, Yong-zheng

    2018-02-01

    This work addresses the problem of high-performance, real-time, high-speed storage of the image data generated by a detector. A portable image storage hard disk with a SATA interface was chosen; relative to existing storage media, it offers large capacity, a high transfer rate, low cost, data retention on power-down, and many other advantages. This paper focuses on the link layer of the protocol: it analyses the implementation process of the SATA 2.0 protocol and builds the corresponding state machines. It then analyses the resources of the Kintex-7 FPGA family, builds the state machines according to the protocol, writes Verilog modules implementing the link layer, and runs simulation tests. Finally, the design is tested on a Kintex-7 development board platform and essentially meets the requirements of the SATA 2.0 protocol.

  20. Tibial stress changes in new combat recruits for special forces: patterns and timing at MR imaging.

    PubMed

    Hadid, Amir; Moran, Daniel S; Evans, Rachel K; Fuks, Yael; Schweitzer, Mark E; Shabshin, Nogah

    2014-11-01

    To characterize the incidence, location, grade, and patterns of magnetic resonance (MR) imaging findings in the tibiae of asymptomatic recruits before and after 4 months of basic training, and to investigate whether MR imaging parameters correlated with pretraining activity levels or with future symptomatic injury. This study was approved by three institutional review boards and was conducted in compliance with HIPAA requirements. Volunteers were included in the study after they signed informed consent forms. MR imaging of the tibia was performed in 55 men entering the Israeli Special Forces on recruitment day and after basic training. Ten recruits who did not perform vigorous self-training prior to and during service served as control subjects. The MR imaging studies of all recruits were evaluated for the presence, type, length, and location of bone stress changes in the tibia. Anthropometric measurements and activity history data were collected. Relationships between bone stress changes, physical activity, and clinical findings, and between lesion size and progression, were analyzed. Bone stress changes were seen in 35 of 55 recruits (in 26 recruits at time 0 and in nine recruits after basic training). Most bone stress changes consisted of endosteal marrow edema. Approximately 50% of bone stress changes occurred between the middle and distal thirds of the tibia. Lesion size at time 0 correlated significantly with progression: all endosteal findings smaller than 100 mm resolved or did not change, while most findings larger than 100 mm progressed. Of the 10 control subjects, one had bone stress changes at time 0 and one had bone stress changes at 4 months. Most tibial bone stress changes occurred before basic training, were usually endosteal, occurred between the middle and distal thirds of the tibia, were smaller than 100 mm, and did not progress. These findings are presumed to represent normal bone remodeling.

  1. The Indispensable Teachers' Guide to Computer Skills. Second Edition.

    ERIC Educational Resources Information Center

    Johnson, Doug

    This book provides a framework of technology skills that can be used for staff development. Part One presents critical components of effective staff development. Part Two describes the basic CODE 77 skills, including basic computer operation, file management, time management, word processing, network and Internet use, graphics and digital images,…

  2. Basic as well as detailed neurosonograms can be performed by offline analysis of three-dimensional fetal brain volumes.

    PubMed

    Bornstein, E; Monteagudo, A; Santos, R; Strock, I; Tsymbal, T; Lenchner, E; Timor-Tritsch, I E

    2010-07-01

    To evaluate the feasibility and the processing time of offline analysis of three-dimensional (3D) brain volumes to perform a basic, as well as a detailed, targeted, fetal neurosonogram. 3D fetal brain volumes were obtained in 103 consecutive healthy fetuses that underwent routine anatomical survey at 20-23 postmenstrual weeks. Transabdominal gray-scale and power Doppler volumes of the fetal brain were acquired by one of three experienced sonographers (an average of seven volumes per fetus). Acquisition was first attempted in the sagittal and coronal planes. When the fetal position did not enable easy and rapid access to these planes, axial acquisition at the level of the biparietal diameter was performed. Offline analysis of each volume was performed by two of the authors in a blinded manner. A systematic technique of 'volume manipulation' was used to identify a list of 25 brain dimensions/structures comprising a complete basic evaluation, intracranial biometry and a detailed targeted fetal neurosonogram. The feasibility and reproducibility of obtaining diagnostic-quality images of the different structures was evaluated, and processing times were recorded, by the two examiners. Diagnostic-quality visualization was feasible in all of the 25 structures, with an excellent visualization rate (85-100%) reported in 18 structures, a good visualization rate (69-97%) reported in five structures and a low visualization rate (38-54%) reported in two structures, by the two examiners. An average of 4.3 and 5.4 volumes were used to complete the examination by the two examiners, with a mean processing time of 7.2 and 8.8 minutes, respectively. The overall agreement rate for diagnostic visualization of the different brain structures between the two examiners was 89.9%, with a kappa coefficient of 0.5 (P < 0.001). In experienced hands, offline analysis of 3D brain volumes is a reproducible modality that can identify all structures necessary to complete both a basic and a detailed second-trimester fetal neurosonogram. Copyright 2010 ISUOG. Published by John Wiley & Sons, Ltd.

  3. Visual tool for estimating the fractal dimension of images

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Besliu, C.; Rusu, M. V.; Jipa, Al.; Bordeianu, C. C.; Felea, D.

    2009-10-01

    This work presents a new Visual Basic 6.0 application for estimating the fractal dimension of images, based on an optimized version of the box-counting algorithm. In an attempt to separate the real information from "noise", we also considered the family of all band-pass filters with the same band-width (specified as a parameter). The fractal dimension can thus be represented as a function of the pixel color code. The program was used for the study of cracks in paintings, as an additional tool to help a critic decide whether an artistic work is original or not.
    Program summary:
    Program title: Fractal Analysis v01
    Catalogue identifier: AEEG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 29 690
    No. of bytes in distributed program, including test data, etc.: 4 967 319
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 30M
    Classification: 14
    Nature of problem: Estimating the fractal dimension of images.
    Solution method: Optimized implementation of the box-counting algorithm. Use of a band-pass filter for separating the real information from "noise". User-friendly graphical interface.
    Restrictions: Although various file types can be used, the application was mainly conceived for the 8-bit grayscale Windows bitmap file format.
    Running time: To a first approximation, the algorithm is linear.
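
    A minimal Python sketch of the box-counting estimate that the Visual Basic application implements; the box sizes and toy image are assumptions, and the band-pass filtering and GUI of the original are omitted:

    ```python
    import numpy as np

    def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
        """Estimate fractal dimension: count occupied s x s boxes for each
        box size s, then fit log N(s) against log(1/s)."""
        counts = []
        for s in sizes:
            h = (mask.shape[0] // s) * s
            w = (mask.shape[1] // s) * s
            blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(blocks.any(axis=(1, 3)).sum())
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                              np.log(counts), 1)
        return slope

    # Toy test: a filled square should give a dimension close to 2.
    img = np.zeros((128, 128), dtype=bool)
    img[32:96, 32:96] = True
    print("estimated dimension:", round(box_counting_dimension(img), 2))
    ```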

  4. A novel method of the image processing on irregular triangular meshes

    NASA Astrophysics Data System (ADS)

    Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta

    2018-04-01

    The paper describes a novel method of image processing based on irregular triangular meshes. The triangular mesh is adaptive to the image content, and least-mean-square linear approximation is proposed for the basic interpolation within each triangle. It is proposed to use triangular numbers to simplify the use of local (barycentric) coordinates for further analysis: a triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g. "for" or "while" loops for access to the pixels. Moreover, the representation proposed allows the use of a discrete cosine transform of the simple "rectangular" symmetric form without additional pixel reordering (as is used for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on a triangular mesh. The results of the method's application are presented. It is shown that the advantage of the proposed method is its combination of the flexibility of image-adaptive irregular meshes with simple pixel indexing in local triangular coordinates and the use of common forms of discrete transforms for triangular meshes. The method is proposed for image compression, pattern recognition, image quality improvement, and image search and indexing. It may also be used as a part of video coding (intra-frame or inter-frame coding, motion detection).
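
    A minimal sketch of the barycentric-coordinate computation underlying such per-triangle pixel indexing and linear interpolation; the triangle, query point and vertex values are assumptions:

    ```python
    import numpy as np

    def barycentric(p, a, b, c):
        """Barycentric coordinates (u, v, w) of point p in triangle abc,
        with u + v + w = 1; all non-negative iff p lies inside."""
        m = np.array([[b[0] - a[0], c[0] - a[0]],
                      [b[1] - a[1], c[1] - a[1]]], dtype=float)
        v, w = np.linalg.solve(m, np.asarray(p, dtype=float) - a)
        return 1.0 - v - w, v, w

    a, b, c = np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([0.0, 10.0])
    u, v, w = barycentric((3.0, 4.0), a, b, c)
    print(u, v, w)                        # (0.3, 0.3, 0.4); inside, all >= 0

    # Linear interpolation of pixel values given at the vertices:
    vals = np.array([1.0, 5.0, 9.0])      # values at a, b, c (assumed)
    print(u * vals[0] + v * vals[1] + w * vals[2])
    ```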

  5. Advances in basic science methodologies for clinical diagnosis in female stress urinary incontinence.

    PubMed

    Abdulaziz, Marwa; Deegan, Emily G; Kavanagh, Alex; Stothers, Lynn; Pugash, Denise; Macnab, Andrew

    2017-06-01

    We provide an overview of advanced imaging techniques currently being explored to gain greater understanding of the complexity of stress urinary incontinence (SUI) through better definition of structural anatomic data. Two methods of imaging and analysis are detailed for SUI with or without prolapse: 1) open magnetic resonance imaging (MRI) with or without the use of reference lines; and 2) 3D reconstruction of the pelvis using MRI. An additional innovative method of assessment includes the use of near infrared spectroscopy (NIRS), which uses non-invasive photonics in a vaginal speculum to objectively evaluate pelvic floor muscle (PFM) function as it relates to SUI pathology. Advantages and disadvantages of these techniques are described. The recent innovation of open-configuration magnetic resonance imaging (MRO) allows images to be captured in sitting and standing positions, which better simulates states that correlate with urinary leakage and can be further enhanced with 3D reconstruction. By detecting direct changes in oxygenated muscle tissue, the NIRS vaginal speculum is able to provide insight into how the oxidative capacity of the PFM influences SUI. The small number of units able to provide patient evaluation using these techniques and their cost and relative complexity are major considerations, but if such imaging can optimize diagnosis, treatment allocation, and selection for surgery enhanced imaging techniques may prove to be a worthwhile and cost-effective strategy for assessing and treating SUI.

  6. Positron Emission Tomography: Human Brain Function and Biochemistry.

    ERIC Educational Resources Information Center

    Phelps, Michael E.; Mazziotta, John C.

    1985-01-01

    Describes the method, present status, and applications of positron emission tomography (PET), an analytical imaging technique for "in vivo" measurements of the anatomical distribution and rates of specific biochemical reactions. Measurements and images of dynamic biochemistry link the basic and clinical neurosciences with clinical findings…

  7. Medical Imaging with Ultrasound: Some Basic Physics.

    ERIC Educational Resources Information Center

    Gosling, R.

    1989-01-01

    Medical applications of ultrasound are discussed. The physics of the wave nature of ultrasound is described, including its propagation and production, its return by the body, spatial and contrast resolution, attenuation, image formation using pulsed-echo techniques, velocity measurement, and duplex scanning. (YP)

  8. Research on fast Fourier transforms algorithm of huge remote sensing image technology with GPU and partitioning technology.

    PubMed

    Yang, Xue; Li, Xue-You; Li, Jia-Guo; Ma, Jun; Zhang, Li; Yang, Jan; Du, Quan-Ye

    2014-02-01

    Fast Fourier transforms (FFT) are a basic approach to remote sensing image processing. With the growing capacity of remote sensing image capture, featuring hyperspectral, high spatial resolution and high temporal resolution data, using FFT technology to efficiently process huge remote sensing images has become a critical step and a research hot spot in current image processing technology. The FFT, one of the basic algorithms of image processing, can be used for stripe noise removal, image compression, image registration, etc. in remote sensing image processing. The CUFFT library is a GPU-based FFT implementation, while FFTW is an FFT library developed for the CPU on the PC platform and is currently the fastest CPU-based FFT library. Both share a common problem: once the available memory is smaller than the image, an out-of-memory error or memory overflow occurs when computing the image FFT. To address this problem, a GPU and partitioning technology based Huge Remote sensing image Fast Fourier Transform (HRFFT) algorithm is proposed in this paper. By improving the FFT algorithm in the CUFFT library, the out-of-memory and memory overflow problems are solved. The method is validated by experiments on CCD images from the HJ-1A satellite. When applied to practical image processing, it improves the processing results and speeds up the computation, saving time while achieving sound results.
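
    The partitioning idea can be sketched on the CPU with NumPy's separable FFT: transform rows, then columns, touching only a block of lines at a time, with both the input image and the spectrum memory-mapped on disk. This is only an out-of-core analogue of the GPU-based HRFFT, under assumed file names, dtypes and block size.

    ```python
    import numpy as np

    def blocked_fft2(path, shape, block=1024):
        """2-D FFT of a huge image in two passes (rows, then columns),
        processing `block` lines at a time so the full complex spectrum
        never has to fit in RAM at once. `path` is assumed to hold raw
        float32 pixels of the given (rows, cols) shape."""
        rows, cols = shape
        img = np.memmap(path, dtype=np.float32, mode='r', shape=shape)
        spec = np.lib.format.open_memmap('spectrum.npy', mode='w+',
                                         dtype=np.complex64, shape=shape)
        for r in range(0, rows, block):          # pass 1: FFT along rows
            spec[r:r + block] = np.fft.fft(img[r:r + block], axis=1)
        for c in range(0, cols, block):          # pass 2: FFT along columns
            spec[:, c:c + block] = np.fft.fft(spec[:, c:c + block], axis=0)
        return spec
    ```

    The two-pass structure works because the 2-D FFT is separable: a 1-D FFT over every row followed by a 1-D FFT over every column equals the full 2-D transform, and each pass can be partitioned freely.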

  9. The ERTS-1 investigation (ER-600): A compendium of analysis results of the utility of ERTS-1 data for land resources management

    NASA Technical Reports Server (NTRS)

    Erb, R. B.

    1974-01-01

    The results of the ERTS-1 investigations conducted by the Earth Observations Division at the NASA Lyndon B. Johnson Space Center are summarized in this report, which is an overview of documents detailing individual investigations. Conventional image interpretation and computer-aided classification procedures were the two basic techniques used in analyzing the data for detecting, identifying, locating, and measuring surface features related to earth resources. Data from the ERTS-1 multispectral scanner system were useful for all applications studied, which included agriculture, coastal and estuarine analysis, forestry, range, land use and urban land use, and signature extension. Percentage classification accuracies are cited for the conventional and computer-aided techniques.

  10. Advances in fMRI Real-Time Neurofeedback.

    PubMed

    Watanabe, Takeo; Sasaki, Yuka; Shibata, Kazuhisa; Kawato, Mitsuo

    2017-12-01

    Functional magnetic resonance imaging (fMRI) neurofeedback is a type of biofeedback in which real-time online fMRI signals are used to self-regulate brain function. Since its advent in 2003, significant progress has been made in fMRI neurofeedback techniques. Specifically, the use of implicit protocols, external rewards, multivariate analysis, and connectivity analysis has allowed neuroscientists to explore a possible causal involvement of modified brain activity in modified behavior. These techniques have also been integrated into groundbreaking new neurofeedback technologies, specifically decoded neurofeedback (DecNef) and functional connectivity-based neurofeedback (FCNef). By modulating neural activity and behavior, DecNef and FCNef have substantially advanced both basic and clinical research. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Wavelet analysis of birefringence images of myocardium tissue

    NASA Astrophysics Data System (ADS)

    Sakhnovskiy, M. Yu.; Ushenko, Yu. O.; Kushnerik, L.; Soltys, I. V.; Pavlyukovich, N.; Pavlyukovich, O.

    2018-01-01

    The paper consists of two parts. The first part presents short theoretical basics of the method of azimuthally invariant Mueller-matrix description of optical anisotropy of biological tissues. Experimentally measured coordinate distributions of the Mueller-matrix invariants (MMI) of linear and circular birefringence of skeletal muscle tissue are provided, and the values of the statistical moments that characterize the distributions of the amplitudes of the wavelet coefficients of the MMI at different scanning scales are defined. The second part presents a statistical analysis of the distributions of the amplitudes of the wavelet coefficients of the linear-birefringence distributions of myocardium tissue from subjects who died of infarction and of ischemic heart disease. Objective criteria for differentiating the cause of death are defined.
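
    A hedged sketch of the scale-by-scale moment analysis follows, using PyWavelets on a generic 2-D map (standing in for a measured Mueller-matrix invariant distribution); the wavelet, level count and function name are illustrative choices, not the authors'.

    ```python
    import numpy as np
    import pywt

    def wavelet_moment_profile(image, wavelet='db4', levels=4):
        """First four statistical moments of the wavelet-coefficient
        amplitudes at each decomposition scale of a 2-D map."""
        coeffs = pywt.wavedec2(image, wavelet, level=levels)
        profile = []
        for detail in coeffs[1:]:                    # skip the approximation
            amp = np.abs(np.concatenate([d.ravel() for d in detail]))
            m1 = amp.mean()
            m2 = amp.std()
            m3 = ((amp - m1) ** 3).mean() / m2 ** 3  # skewness
            m4 = ((amp - m1) ** 4).mean() / m2 ** 4  # kurtosis
            profile.append((m1, m2, m3, m4))
        return profile
    ```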

  12. Imagery analysis and the need for standards

    NASA Astrophysics Data System (ADS)

    Grant, Barbara G.

    2014-09-01

    While efforts within the optics community focus on the development of high-quality systems and data products, comparatively little attention is paid to their use. Our standards for verification and validation are high, but in some user domains standards are either lax or do not exist at all. In forensic imagery analysis, for example, standards exist to judge image quality but not the quality of an analysis. In litigation, a high-quality analysis is by default the one performed by the victorious attorney's expert. This paper argues for the need to extend quality standards into the domain of imagery analysis, which is expected to grow in national visibility and significance with the increasing deployment of unmanned aerial vehicle (UAV, or "drone") sensors in the continental U.S. It argues that, like a good radiometric calibration, made as independent of the calibrated instrument as possible, a good analysis should be subject to standards, the most basic of which is the separation of issues of scientific fact from analysis results.

  13. Cognitive load imposed by knobology may adversely affect learners' perception of utility in using ultrasonography to learn physical examination skills, but not anatomy.

    PubMed

    Jamniczky, Heather A; McLaughlin, Kevin; Kaminska, Malgorzata E; Raman, Maitreyi; Somayaji, Ranjani; Wright, Bruce; Ma, Irene W Y

    2015-01-01

    Ultrasonography is increasingly used for teaching anatomy and physical examination skills, but its effect on cognitive load is unknown. This study aimed to determine ultrasound's perceived utility for learning and to investigate the effect of cognitive load on that perceived utility. Consenting first-year medical students (n = 137) completed ultrasound training that includes a didactic component and four ultrasound-guided anatomy and physical examination teaching sessions. Learners then completed a survey on comfort with physical examination techniques (three items; alpha = 0.77), perceived utility of ultrasound in learning (two items; alpha = 0.89), and cognitive load of ultrasound use [measured with a validated nine-point scale (10 items; alpha = 0.88)]. Learners found ultrasound useful for learning both anatomy and physical examination (mean 4.2 ± 0.9 and 4.4 ± 0.8, respectively; where 1 = very useless and 5 = very useful). Principal components analysis on the cognitive load survey revealed two factors, "image interpretation" and "basic knobology," which accounted for 60.3% of total variance. Weighted factor scores were not associated with perceived utility in learning anatomy (beta = 0.01, P = 0.62 for "image interpretation" and beta = -0.04, P = 0.33 for "basic knobology"). However, the factor score on "knobology" was inversely associated with perceived utility for learning physical examination (beta = -0.06; P = 0.03). While a basic introduction to ultrasound may suffice for teaching anatomy, more training may be required for teaching physical examination. Prior to teaching physical examination skills with ultrasonography, we recommend ensuring that learners have sufficient knobology skills. © 2014 American Association of Anatomists.

  14. Spec Tool; an online education and research resource

    NASA Astrophysics Data System (ADS)

    Maman, S.; Shenfeld, A.; Isaacson, S.; Blumberg, D. G.

    2016-06-01

    Education and public outreach (EPO) activities related to remote sensing, space, planetary and geo-physics sciences have been developed widely in the Earth and Planetary Image Facility (EPIF) at Ben-Gurion University of the Negev, Israel. These programs aim to motivate the learning of geo-scientific and technological disciplines. For over a decade, the facility has hosted research and outreach activities for researchers, the local community, school pupils, students and educators. As suitable software and data are often neither available nor affordable, the EPIF Spec tool was created as a web-based resource to assist researchers and students with initial spectral analysis. The tool is used both in academic courses and in outreach education programs and enables a better understanding of the theory of spectroscopy and imaging spectroscopy through 'hands-on' activity. The tool is available online and provides spectra visualization tools and basic analysis algorithms, including spectral plotting, spectral angle mapping and linear unmixing. It enables visualization of spectral signatures from the USGS spectral library as well as additional spectra collected in the EPIF, such as those of dunes in southern Israel and in Turkmenistan. For researchers and educators, the tool allows loading locally collected samples for further analysis.
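
    The two analysis algorithms named above reduce to a few lines each; the sketch below shows generic NumPy versions (the Spec Tool's own source is not public in this record, so these are stand-ins).

    ```python
    import numpy as np

    def spectral_angle(spectrum, reference):
        """Spectral angle (radians) between a measured spectrum and a
        library reference; 0 means identical shape, larger means less alike."""
        s = np.asarray(spectrum, dtype=float)
        r = np.asarray(reference, dtype=float)
        cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    def linear_unmix(pixel, endmembers):
        """Least-squares abundances for `pixel` as a mixture of endmember
        spectra (columns of `endmembers`); no sum-to-one or non-negativity
        constraint is applied in this simple sketch."""
        a, *_ = np.linalg.lstsq(np.asarray(endmembers, float),
                                np.asarray(pixel, float), rcond=None)
        return a
    ```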

  15. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    PubMed

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed; the method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank-order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.
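
    The motivation for tensor methods, preserving spatial structure that vectorization discards, can be seen in the standard mode-n unfolding, sketched below. This illustrates the tensor representation only, not the TRPDA optimization itself, and the toy data are assumptions.

    ```python
    import numpy as np

    def unfold(tensor, mode):
        """Mode-n unfolding: bring axis `mode` to the front, flatten the rest.
        Each unfolding keeps one mode's neighbourhood structure intact,
        unlike full vectorization, which mixes all modes together."""
        return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

    faces = np.random.rand(100, 64, 64)   # samples x rows x cols (toy data)
    print(unfold(faces, 0).shape)         # (100, 4096): one row per sample
    print(unfold(faces, 1).shape)         # (64, 6400): one row per image row
    print(faces.reshape(-1).shape)        # (409600,): all structure lost
    ```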

  16. A Web simulation of medical image reconstruction and processing as an educational tool.

    PubMed

    Papamichail, Dimitrios; Pantelis, Evaggelos; Papagiannis, Panagiotis; Karaiskos, Pantelis; Georgiou, Evangelos

    2015-02-01

    Web educational resources integrating interactive simulation tools provide students with an in-depth understanding of the medical imaging process. The aim of this work was the development of a purely Web-based, open access, interactive application, as an ancillary learning tool in graduate and postgraduate medical imaging education, including a systematic evaluation of learning effectiveness. The pedagogic content of the educational Web portal was designed to cover the basic concepts of medical imaging reconstruction and processing, through the use of active learning and motivation, including learning simulations that closely resemble actual tomographic imaging systems. The user can implement image reconstruction and processing algorithms under a single user interface and manipulate various factors to understand the impact on image appearance. A questionnaire for pre- and post-training self-assessment was developed and integrated in the online application. The developed Web-based educational application introduces the trainee in the basic concepts of imaging through textual and graphical information and proceeds with a learning-by-doing approach. Trainees are encouraged to participate in a pre- and post-training questionnaire to assess their knowledge gain. An initial feedback from a group of graduate medical students showed that the developed course was considered as effective and well structured. An e-learning application on medical imaging integrating interactive simulation tools was developed and assessed in our institution.
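
    The portal's reconstruction exercises can be mimicked offline with scikit-image; the sketch below simulates projection data for a phantom and reconstructs it with filtered back-projection. It is an assumed stand-in for the web application's own simulator, not its code.

    ```python
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon

    # Simulate the forward projection of a test object, then reconstruct
    # it with filtered back-projection and inspect the residual error.
    phantom = shepp_logan_phantom()                      # 400 x 400 test image
    angles = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(phantom, theta=angles)              # "acquisition" step
    recon = iradon(sinogram, theta=angles, filter_name='ramp')
    print('RMS error:', np.sqrt(np.mean((recon - phantom) ** 2)))
    ```

    Varying the number of angles or the reconstruction filter and watching the artifacts change is exactly the kind of manipulation the portal encourages.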

  17. Image Montaging for Creating a Virtual Pathology Slide: An Innovative and Economical Tool to Obtain a Whole Slide Image

    PubMed Central

    Pandurangappa, Rohit; Annavajjula, Saileela; Rajashekaraiah, Premalatha Bidadi

    2016-01-01

    Background. Microscopes are omnipresent throughout the field of biological research. With microscopes one can see in detail what is going on at the cellular level in tissues. Though the microscope is a ubiquitous tool, its limitation is that high magnification comes with a small field of view, while it is often advantageous to see an entire sample at high magnification. Over the years, technological advancements in optics have addressed this limitation through dedicated "slide scanners" which provide a "whole slide digital image": a seamless, large-field-of-view, high-resolution image of an entire tissue section. The only disadvantage of such whole slide imaging systems is their prohibitive cost, which hinders their practical use in most laboratories, especially in developing and low-resource countries. Methods. In a quest for a substitute, we tried the commonly used image-editing software Adobe Photoshop along with a basic image-capturing device attached to a trinocular microscope to create a digital pathology slide. Results. The seamless image created using Adobe Photoshop maintained its diagnostic quality. Conclusion. With time and effort, photomicrographs obtained from a basic camera-microscope setup can be combined and merged in Adobe Photoshop to create a whole slide digital image of practically usable quality at negligible cost. PMID:27747147

  18. Image Montaging for Creating a Virtual Pathology Slide: An Innovative and Economical Tool to Obtain a Whole Slide Image.

    PubMed

    Banavar, Spoorthi Ravi; Chippagiri, Prashanthi; Pandurangappa, Rohit; Annavajjula, Saileela; Rajashekaraiah, Premalatha Bidadi

    2016-01-01

    Background. Microscopes are omnipresent throughout the field of biological research. With microscopes one can see in detail what is going on at the cellular level in tissues. Though the microscope is a ubiquitous tool, its limitation is that high magnification comes with a small field of view, while it is often advantageous to see an entire sample at high magnification. Over the years, technological advancements in optics have addressed this limitation through dedicated "slide scanners" which provide a "whole slide digital image": a seamless, large-field-of-view, high-resolution image of an entire tissue section. The only disadvantage of such whole slide imaging systems is their prohibitive cost, which hinders their practical use in most laboratories, especially in developing and low-resource countries. Methods. In a quest for a substitute, we tried the commonly used image-editing software Adobe Photoshop along with a basic image-capturing device attached to a trinocular microscope to create a digital pathology slide. Results. The seamless image created using Adobe Photoshop maintained its diagnostic quality. Conclusion. With time and effort, photomicrographs obtained from a basic camera-microscope setup can be combined and merged in Adobe Photoshop to create a whole slide digital image of practically usable quality at negligible cost.
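
    An open-source alternative to the Photoshop workflow described in both copies of this record is a hedged sketch with OpenCV's stitcher in planar-scan mode; the tile filenames below are placeholders for the captured fields of view, and the tile count is an assumption.

    ```python
    import cv2

    # Overlapping photomicrographs taken while stepping the slide stage;
    # the filenames are placeholders for the captured fields of view.
    tiles = [cv2.imread(f'field_{i:02d}.png') for i in range(12)]

    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # planar "scan" mode
    status, mosaic = stitcher.stitch(tiles)
    if status == cv2.Stitcher_OK:
        cv2.imwrite('virtual_slide.png', mosaic)        # the montaged slide
    else:
        print('stitching failed, status', status)
    ```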

  19. Initial On-Orbit Spatial Resolution Characterization of OrbView-3 Panchromatic Images

    NASA Technical Reports Server (NTRS)

    Blonski, Slawomir

    2006-01-01

    Characterization was conducted under the Memorandum of Understanding among Orbital Sciences Corp., ORBIMAGE, Inc., and the NASA Applied Sciences Directorate. Five OrbView-3 panchromatic images were acquired of the permanent Stennis Space Center edge targets painted on a concrete surface. Each image is available at two processing levels: Georaw and Basic. Georaw is an intermediate image in which individual pixels are aligned by a nominal shift in the along-scan direction to adjust for the staggered layout of the panchromatic detectors along the focal plane array. Georaw images are engineering data and are not delivered to customers. The Basic product includes a cubic interpolation to align the pixels better along the focal plane and to correct for sensor artifacts, such as smile and attitude smoothing. This product retains satellite geometry; no rectification is performed. Processing of the characterized images did not include image sharpening, which is applied by default to OrbView-3 image products delivered by ORBIMAGE to customers. Edge responses were extracted from images of tilted edges in two directions: along-scan and cross-scan. Each edge response was approximated by a superposition of three sigmoidal functions through nonlinear least-squares curve fitting. Line spread functions (LSF) were derived by differentiating the analytical approximation. Modulation transfer functions (MTF) were obtained by applying the discrete Fourier transform to the LSF.
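
    A simplified version of this edge-response analysis is sketched below: fit a sigmoid to the oversampled edge, differentiate it to get the LSF, then Fourier-transform to the MTF. The report fitted a superposition of three sigmoids; a single sigmoid is used here for brevity, so this is an illustration of the chain, not a reproduction of the study's fit.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mtf_from_edge(x, edge):
        """Edge-target analysis chain: ESF fit -> LSF -> MTF.

        x: positions (e.g. along-scan pixel coordinates); edge: measured
        edge response at those positions."""
        def sigmoid(x, lo, hi, x0, w):
            return lo + (hi - lo) / (1.0 + np.exp(-(x - x0) / w))

        p, _ = curve_fit(sigmoid, x, edge,
                         p0=[edge.min(), edge.max(), x.mean(), 1.0])
        xf = np.linspace(x.min(), x.max(), 1024)   # oversampled fit
        lsf = np.gradient(sigmoid(xf, *p), xf)     # LSF = d(ESF)/dx
        mtf = np.abs(np.fft.rfft(lsf))
        return xf, lsf, mtf / mtf[0]               # normalize so MTF(0) = 1
    ```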

  20. Comprehensive machine learning analysis of Hydra behavior reveals a stable basal behavioral repertoire

    PubMed Central

    Taralova, Ekaterina; Dupre, Christophe; Yuste, Rafael

    2018-01-01

    Animal behavior has been studied for centuries, but few efficient methods are available to automatically identify and classify it. Quantitative behavioral studies have been hindered by the subjective and imprecise nature of human observation and the slow speed of annotating behavioral data. Here, we developed an automatic behavior analysis pipeline for the cnidarian Hydra vulgaris using machine learning. We imaged freely behaving Hydra, extracted motion and shape features from the videos, and constructed a dictionary of visual features to classify pre-defined behaviors. We also identified unannotated behaviors with unsupervised methods. Using this analysis pipeline, we quantified 6 basic behaviors and found surprisingly similar behavior statistics across animals within the same species, regardless of experimental conditions. Our analysis indicates that the fundamental behavioral repertoire of Hydra is stable. This robustness could reflect homeostatic neural control of "housekeeping" behaviors, which may already have been present in the earliest nervous systems. PMID:29589829
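
    A generic bag-of-features classifier of the kind this pipeline describes can be sketched as follows: learn a "dictionary" by clustering per-frame feature vectors, then represent each clip as a word histogram for a supervised classifier. This is an assumed simplification, not the authors' code; the feature extraction itself is left abstract.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    def bag_of_features(per_frame_features, n_words=50):
        """Quantize per-frame feature vectors against a learned dictionary
        and return one normalized word histogram per video clip.

        per_frame_features: list of (n_frames_i, n_dims) arrays, one per clip.
        """
        kmeans = KMeans(n_clusters=n_words, n_init=10)
        kmeans.fit(np.vstack(per_frame_features))
        hists = []
        for clip in per_frame_features:
            words = kmeans.predict(clip)
            h = np.bincount(words, minlength=n_words).astype(float)
            hists.append(h / h.sum())
        return np.array(hists), kmeans

    # Usage with labelled clips (clips and labels are assumed inputs):
    # X, km = bag_of_features(clips)
    # clf = LinearSVC().fit(X, labels)
    ```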

  1. TH-E-202-01: Pitfalls and Remedies in PET/CT Imaging for RT Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, T.

    2016-06-15

    PET/CT is a very important imaging tool in the management of oncology patients. PET/CT has been applied for treatment planning and response evaluation in radiation therapy. This educational session will discuss: pitfalls and remedies in PET/CT imaging for RT planning; the use of hypoxia PET imaging for radiotherapy; and PET for tumor response evaluation. The first presentation will address the issue of mis-registration between the CT and PET images in the thorax and the abdomen. We will discuss the challenges of respiratory gating and introduce an average CT technique to improve the registration for dose calculation and image guidance in radiation therapy. The second presentation will discuss the use of hypoxia PET imaging for radiation therapy. We will discuss various hypoxia radiotracers, the choice of clinical acquisition protocol (in particular a single late static acquisition versus a dynamic acquisition), and compartmental modeling, with the different transfer rate constants explained. We will demonstrate applications of hypoxia imaging for dose escalation/de-escalation in clinical trials. The last presentation will discuss the use of PET/CT for tumor response evaluation. We will discuss anatomic versus metabolic response assessment, visual and semi-quantitative evaluation, and the limitations of current PET/CT assessment. We will summarize clinical trials using PET response to guide adaptive radiotherapy. Finally, we will summarize recent advancements in PET/CT radiomics and non-FDG PET tracers for response assessment. Learning Objectives: Identify the causes of mis-registration of CT and PET images in PET/CT, and review the strategies to remedy the issue. Understand the basics of PET imaging of tumor hypoxia (radiotracers, how PET measures the hypoxia-selective uptake, imaging protocols, applications in chemo-radiation therapy). Understand the basics of dynamic PET imaging, compartmental modeling and parametric images. Understand the basics of using FDG PET/CT for tumor response evaluation. Learn about recent advancements in PET/CT radiomics and non-FDG PET tracers for response assessment. This work was supported in part by National Cancer Institute Grant R01CA172638.

  2. Contact Angle Measurements Using a Simplified Experimental Setup

    ERIC Educational Resources Information Center

    Lamour, Guillaume; Hamraoui, Ahmed; Buvailo, Andrii; Xing, Yangjun; Keuleyan, Sean; Prakash, Vivek; Eftekhari-Bafrooei, Ali; Borguet, Eric

    2010-01-01

    A basic and affordable experimental apparatus is described that measures the static contact angle of a liquid drop in contact with a solid. The image of the drop is made with a simple digital camera by taking a picture that is magnified by an optical lens. The profile of the drop is then processed with ImageJ free software. The ImageJ contact…
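
    Assuming the drop edge has already been extracted (e.g., with ImageJ as described above), the contact angle of a circular-cap profile can be recovered with a least-squares circle fit, as in the hedged NumPy sketch below. The substrate is assumed to lie along y = 0 with the drop above it; the Kasa fit and function name are illustrative choices.

    ```python
    import numpy as np

    def contact_angle_deg(xs, ys):
        """Static contact angle of a sessile drop from edge points (xs, ys)
        of its profile, substrate along y = 0. Fits a circle by linear
        least squares (Kasa fit) and evaluates the tangent at the
        contact line."""
        xs = np.asarray(xs, float)
        ys = np.asarray(ys, float)
        # Circle as x^2 + y^2 = d*x + e*y + f, linear in (d, e, f).
        A = np.column_stack([xs, ys, np.ones_like(xs)])
        b = xs ** 2 + ys ** 2
        (d, e, f), *_ = np.linalg.lstsq(A, b, rcond=None)
        cx, cy = d / 2.0, e / 2.0                 # circle centre
        R = np.sqrt(f + cx ** 2 + cy ** 2)        # circle radius
        # For a circular cap meeting y = 0: theta = 90 + asin(cy / R)
        # (centre below the surface -> theta < 90, above -> theta > 90).
        return 90.0 + np.degrees(np.arcsin(np.clip(cy / R, -1.0, 1.0)))
    ```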

  3. Longitudinal in vivo two-photon fluorescence imaging

    PubMed Central

    Crowe, Sarah E.; Ellis-Davies, Graham C.R.

    2014-01-01

    Fluorescence microscopy is an essential technique for the basic sciences, especially biomedical research. Since the invention of laser scanning confocal microscopy in the 1980s, which enabled imaging of both fixed and living biological tissue with three-dimensional precision, high-resolution fluorescence imaging has revolutionized biological research. Confocal microscopy, by its very nature, has one fundamental limitation: due to the confocal pinhole, deep-tissue fluorescence imaging is not practical. In contrast (no pun intended), two-photon fluorescence microscopy allows, in principle, the collection of all emitted photons from fluorophores in the imaged voxel, dramatically extending our ability to see deep into living tissue. Since the development in 2000 of transgenic mice with genetically encoded fluorescent protein in neocortical cells, two-photon imaging has enabled the dynamics of individual synapses to be followed for up to two years. Since the initial landmark contributions to this field in 2002, the technique has been used to understand how neuronal structures are changed by experience, learning and memory, and various diseases. Here we provide a basic summary of the crucial elements that are required for such studies and discuss many applications of longitudinal two-photon fluorescence microscopy that have appeared since 2002. PMID:24214350

  4. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
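
    A minimal version of the thresholding/centroid step for pupil images is sketched below; the threshold value is illustrative, and the off-line refinements discussed above (filtering, correlation) are omitted.

    ```python
    import numpy as np

    def pupil_center(frame, threshold=40):
        """Sub-pixel pupil position from one grayscale video frame:
        segment the dark pupil by thresholding, then take the
        intensity-weighted centroid of the segmented pixels."""
        mask = frame < threshold                       # dark-pupil segmentation
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None                                # pupil not found (blink?)
        w = (threshold - frame[ys, xs]).astype(float)  # darker pixels weigh more
        return (np.average(xs, weights=w), np.average(ys, weights=w))
    ```

    Because the centroid averages over many pixels, its precision can be a small fraction of a pixel, which is one route to the order-of-magnitude resolution gain mentioned above.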

  5. The Study of Residential Areas Extraction Based on GF-3 Texture Image Segmentation

    NASA Astrophysics Data System (ADS)

    Shao, G.; Luo, H.; Tao, X.; Ling, Z.; Huang, Y.

    2018-04-01

    The study chooses standard-stripe, dual-polarization SAR images from GF-3 as the basic data. Residential-area extraction processes and methods based on texture segmentation of GF-3 images are compared and analyzed. GF-3 image processing includes radiometric calibration, complex-data conversion, multi-look processing and image filtering; a suitability analysis of different filtering methods shows that the Kuan filter is efficient for extracting residential areas. We then calculated and analyzed texture feature vectors using the GLCM (Gray Level Co-occurrence Matrix), considering the moving-window size, step size and angle; a window size of 11*11, a step of 1 and an angle of 0° proved effective and optimal for residential-area extraction. With the FNEA (Fractal Net Evolution Approach), we segmented the GLCM texture images and extracted the residential areas by threshold setting. The extraction result was verified and assessed by confusion matrix: overall accuracy is 0.897 and kappa is 0.881. For comparison, extracting residential areas by SVM classification on the GF-3 images gave an overall accuracy 0.09 lower than the texture-segmentation method. We conclude that residential-area extraction based on multi-scale segmentation of GF-3 SAR texture images is simple and highly accurate. Since multi-spectral remote sensing images are difficult to obtain in southern China, which is cloudy and rainy throughout the year, this approach has practical reference value.
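
    With scikit-image, a GLCM texture map under the parameters found optimal above (11*11 window, step 1, angle 0°) can be sketched as follows. Plain loops are used for clarity; a production version would vectorize or reduce the number of gray levels, and the feature choice ('contrast') is an assumption.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_texture_map(img, win=11, prop='contrast'):
        """Per-pixel GLCM texture feature with distance 1 (step) and
        angle 0 over a sliding win x win window. `img` must be uint8."""
        half = win // 2
        out = np.zeros(img.shape, dtype=float)
        padded = np.pad(img, half, mode='reflect')
        for y in range(img.shape[0]):
            for x in range(img.shape[1]):
                patch = padded[y:y + win, x:x + win]
                glcm = graycomatrix(patch, distances=[1], angles=[0],
                                    levels=256, symmetric=True, normed=True)
                out[y, x] = graycoprops(glcm, prop)[0, 0]
        return out
    ```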

  6. How Imaging Can Impact Clinical Trial Design: Molecular Imaging as a Biomarker for Targeted Cancer Therapy.

    PubMed

    Mankoff, David A; Farwell, Michael D; Clark, Amy S; Pryma, Daniel A

    2015-01-01

    The ability to measure biochemical and molecular processes to guide cancer treatment represents a potentially powerful tool for trials of targeted cancer therapy. These assays have traditionally been performed by analysis of tissue samples. However, more recently, functional and molecular imaging has been developed that is capable of in vivo assays of cancer biochemistry and molecular biology and is highly complementary to tissue-based assays. Cancer imaging biomarkers can play a key role in increasing the efficacy and efficiency of therapeutic clinical trials and also provide insight into the biologic mechanisms that bring about a therapeutic response. Future progress will depend on close collaboration between imaging scientists and cancer physicians and on public and commercial sponsors, to take full advantage of what imaging has to offer for clinical trials of targeted cancer therapy. This review will provide examples of how molecular imaging can inform targeted cancer clinical trials and clinical decision making by (1) measuring regional expression of the therapeutic target, (2) assessing early (pharmacodynamic) response to treatment, and (3) predicting therapeutic outcome. The review includes a discussion of basic principles of molecular imaging biomarkers in cancer, with an emphasis on those methods that have been tested in patients. We then review clinical trials designed to evaluate imaging tests as integrated markers embedded in a therapeutic clinical trial with the goal of validating the imaging tests as integral markers that can aid patient selection and direct response-adapted treatment strategies. Examples of recently completed multicenter trials using imaging biomarkers are highlighted.

  7. Multisource image fusion method using support value transform.

    PubMed

    Zheng, Sheng; Shi, Wen-Zhong; Liu, Jian; Zhu, Guang-Xi; Tian, Jin-Wen

    2007-07-01

    With the development of numerous imaging sensors, many images can be captured simultaneously by various sensors. However, there are many scenarios where no single sensor can give the complete picture. Image fusion is an important approach to solving this problem; it produces a single image that preserves all relevant information from a set of different sensors. In this paper, we propose a new image fusion method using the support value transform, which uses the support value to represent the salient features of an image. This is based on the fact that, in support vector machines (SVMs), data with larger support values have a physical meaning in the sense that they reveal the relative importance of the data points in contributing to the SVM model. The mapped least squares SVM (mapped LS-SVM) is used to efficiently compute the support values of the image. The support value analysis is developed by using a series of multiscale support value filters, obtained by filling zeros in the basic support value filter deduced from the mapped LS-SVM to match the resolution of the desired level. Compared with widely used image fusion methods, such as the Laplacian pyramid and discrete wavelet transform methods, the proposed method is an undecimated transform-based approach. Fusion experiments are undertaken on multisource images. The results demonstrate that the proposed approach is effective and superior to conventional image fusion methods in terms of quantitative fusion evaluation indexes, such as the quality of visual information (Q(AB/F)), mutual information, etc.
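
    For orientation, the sketch below implements the choose-max rule in a discrete wavelet domain, i.e., the conventional transform-based baseline that the support value transform is compared against; it is not the SVT method itself, and the wavelet and level count are assumptions.

    ```python
    import numpy as np
    import pywt

    def fuse_choose_max(img_a, img_b, wavelet='db2', levels=3):
        """Transform-domain fusion baseline for two registered images of the
        same scene: keep the detail coefficient with the larger magnitude
        (the more 'salient' response) at every position, and average the
        coarse approximations."""
        ca = pywt.wavedec2(img_a, wavelet, level=levels)
        cb = pywt.wavedec2(img_b, wavelet, level=levels)
        fused = [(ca[0] + cb[0]) / 2.0]                  # coarse: average
        for da, db in zip(ca[1:], cb[1:]):               # details: choose max
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(da, db)))
        return pywt.waverec2(fused, wavelet)
    ```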

  8. Digital image processing: a primer for JVIR authors and readers: Part 3: Digital image editing.

    PubMed

    LaBerge, Jeanne M; Andriole, Katherine P

    2003-12-01

    This is the final installment of a three-part series on digital image processing intended to prepare authors for online submission of manuscripts. In the first two articles of the series, the fundamentals of digital image architecture were reviewed and methods of importing images to the computer desktop were described. In this article, techniques are presented for editing images in preparation for online submission. A step-by-step guide to basic editing with use of Adobe Photoshop is provided and the ethical implications of this activity are explored.

  9. Multiflash X ray with Image Detanglement for Single Image Isolation

    DTIC Science & Technology

    2017-08-31

    …known and separated into individual images… For decades, that basic concept dominated the color television market… A proof-of-principle study was performed using 4 X-ray flashes and copper masks with sub-millimeter holes that allowed development of the required image…

  10. Biological imaging with coherent Raman scattering microscopy: a tutorial

    PubMed Central

    Alfonso-García, Alba; Mittal, Richa; Lee, Eun Seong; Potma, Eric O.

    2014-01-01

    Coherent Raman scattering (CRS) microscopy is gaining acceptance as a valuable addition to the imaging toolset of biological researchers. Optimal use of this label-free imaging technique benefits from a basic understanding of the physical principles and technical merits of the CRS microscope. This tutorial offers qualitative explanations of the principles behind CRS microscopy and provides information about the applicability of this nonlinear optical imaging approach for biological research. PMID:24615671

  11. Tutorial on photoacoustic tomography

    PubMed Central

    Zhou, Yong; Yao, Junjie; Wang, Lihong V.

    2016-01-01

    Photoacoustic tomography (PAT) has become one of the fastest growing fields in biomedical optics. Unlike pure optical imaging, such as confocal microscopy and two-photon microscopy, PAT employs acoustic detection to image optical absorption contrast with high resolution deep into scattering tissue. So far, PAT has been widely used for multiscale anatomical, functional, and molecular imaging of biological tissues. We focus on PAT's basic principles, major implementations, imaging contrasts, and recent applications. PMID:27086868

  12. SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects

    PubMed Central

    2014-01-01

    Background: Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. Results: The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Conclusions: Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and corrections. Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems. PMID:24964954

  13. SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects.

    PubMed

    Kloster, Michael; Kauer, Gerhard; Beszteri, Bánk

    2014-06-25

    Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and corrections. Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems.
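
    The outline-extraction and descriptor step that both SHERPA records describe can be approximated with OpenCV, as in the hedged sketch below; SHERPA's multi-method segmentation, template matching and quality ranking are not reproduced, and Hu moments stand in for its wider descriptor set.

    ```python
    import cv2
    import numpy as np

    def outline_descriptors(binary_mask):
        """Largest-object outline plus rotation/scale-invariant Hu moments,
        usable for matching shapes against a template library."""
        contours, _ = cv2.findContours(binary_mask.astype(np.uint8),
                                       cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        outline = max(contours, key=cv2.contourArea)
        hu = cv2.HuMoments(cv2.moments(outline)).ravel()
        # Log-scale the Hu moments so they compare across object sizes.
        return outline, -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
    ```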

  14. Intraocular lens based on double-liquid variable-focus lens.

    PubMed

    Peng, Runling; Li, Yifan; Hu, Shuilan; Wei, Maowei; Chen, Jiabi

    2014-01-10

    In this work, the crystalline lens in the Gullstrand-Le Grand human eye model is replaced by a double-liquid variable-focus lens, the structural data of which are based on theoretical analysis and experimental results. When the pseudophakic eye is built in Zemax, aspherical surfaces are introduced to the double-liquid variable-focus lens to reduce the axial spherical aberration present in the system. After optimization, the zoom range of the pseudophakic eye greatly exceeds that of normal human eyes, and the spot size on the image plane essentially reaches the normal human eye's limit of resolution.

  15. Electron Tomography: A Three-Dimensional Analytic Tool for Hard and Soft Materials Research.

    PubMed

    Ercius, Peter; Alaidi, Osama; Rames, Matthew J; Ren, Gang

    2015-10-14

    Three-dimensional (3D) structural analysis is essential to understand the relationship between the structure and function of an object. Many analytical techniques, such as X-ray diffraction, neutron spectroscopy, and electron microscopy imaging, are used to provide structural information. Transmission electron microscopy (TEM), one of the most popular analytic tools, has been widely used for structural analysis in both physical and biological sciences for many decades, in which 3D objects are projected into two-dimensional (2D) images. In many cases, 2D-projection images are insufficient to understand the relationship between the 3D structure and the function of nanoscale objects. Electron tomography (ET) is a technique that retrieves 3D structural information from a tilt series of 2D projections, and is gradually becoming a mature technology with sub-nanometer resolution. Distinct methods to overcome sample-based limitations have been separately developed in both physical and biological science, although they share some basic concepts of ET. This review discusses the common basis for 3D characterization, and specifies difficulties and solutions regarding both hard and soft materials research. It is hoped that novel solutions based on current state-of-the-art techniques for advanced applications in hybrid matter systems can be motivated. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Medical applications of infrared thermography: A review

    NASA Astrophysics Data System (ADS)

    Lahiri, B. B.; Bagavathiappan, S.; Jayakumar, T.; Philip, John

    2012-07-01

    Abnormal body temperature is a natural indicator of illness. Infrared thermography (IRT) is a fast, passive, non-contact and non-invasive alternative to conventional clinical thermometers for monitoring body temperature. Besides, IRT can also map body surface temperature remotely. The last five decades have witnessed a steady increase in the use of thermal imaging cameras to obtain correlations between thermal physiology and skin temperature. IRT has been successfully used in the diagnosis of breast cancer, diabetic neuropathy and peripheral vascular disorders. It has also been used to detect problems associated with gynecology, kidney transplantation, dermatology, the heart, neonatal physiology, fever screening and brain imaging. With the advent of modern infrared cameras and data acquisition and processing techniques, it is now possible to obtain real-time high-resolution thermographic images, which is likely to spur further research in this field. Present efforts are focused on automatic analysis of the temperature distribution of regions of interest and its statistical analysis for the detection of abnormalities. This critical review focuses on advances in the area of medical IRT. The basics of IRT, the essential theoretical background, the procedures adopted for various measurements and the applications of IRT in various medical fields are discussed. In addition, background information is provided to help beginners better understand the subject.

  17. Stress-Induced Fracturing of Reservoir Rocks: Acoustic Monitoring and μCT Image Analysis

    NASA Astrophysics Data System (ADS)

    Pradhan, Srutarshi; Stroisz, Anna M.; Fjær, Erling; Stenebråten, Jørn F.; Lund, Hans K.; Sønstebø, Eyvind F.

    2015-11-01

    Stress-induced fracturing in reservoir rocks is an important issue for the petroleum industry. While productivity can be enhanced by a controlled fracturing operation, it can trigger borehole instability problems by reactivating existing fractures/faults in a reservoir. However, safe fracturing can improve the quality of operations during CO2 storage, geothermal installation and gas production at and from the reservoir rocks. Therefore, understanding the fracturing behavior of different types of reservoir rocks is a basic need for planning field operations toward these activities. In our study, stress-induced fracturing of rock samples has been monitored by acoustic emission (AE) and post-experiment computer tomography (CT) scans. We have used hollow cylinder cores of sandstones and chalks, which are representatives of reservoir rocks. The fracture-triggering stress has been measured for different rocks and compared with theoretical estimates. The population of AE events shows the location of main fracture arms which is in a good agreement with post-test CT image analysis, and the fracture patterns inside the samples are visualized through 3D image reconstructions. The amplitudes and energies of acoustic events clearly indicate initiation and propagation of the main fractures. Time evolution of the radial strain measured in the fracturing tests will later be compared to model predictions of fracture size.

  18. Electron Tomography: A Three-Dimensional Analytic Tool for Hard and Soft Materials Research

    PubMed Central

    Alaidi, Osama; Rames, Matthew J.

    2016-01-01

    Three-dimensional (3D) structural analysis is essential to understand the relationship between the structure and function of an object. Many analytical techniques, such as X-ray diffraction, neutron spectroscopy, and electron microscopy imaging, are used to provide structural information. Transmission electron microscopy (TEM), one of the most popular analytic tools, has been widely used for structural analysis in both physical and biological sciences for many decades, in which 3D objects are projected into two-dimensional (2D) images. In many cases, 2D-projection images are insufficient to understand the relationship between the 3D structure and the function of nanoscale objects. Electron tomography (ET) is a technique that retrieves 3D structural information from a tilt series of 2D projections, and is gradually becoming a mature technology with sub-nanometer resolution. Distinct methods to overcome sample-based limitations have been separately developed in both physical and biological science, although they share some basic concepts of ET. This review discusses the common basis for 3D characterization, and specifies difficulties and solutions regarding both hard and soft materials research. It is hoped that novel solutions based on current state-of-the-art techniques for advanced applications in hybrid matter systems can be motivated. PMID:26087941

  19. Digital techniques for processing Landsat imagery

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1978-01-01

    An overview is presented of the basic techniques used to process Landsat images with a digital computer, together with the VICAR image processing software developed at JPL and available to users through the NASA-sponsored COSMIC computer program distribution center. Examples are given of subjective processing performed to improve the information display for the human observer, such as contrast enhancement, pseudocolor display and band ratioing, and of quantitative processing using mathematical models, such as classification based on multispectral signatures of different areas within a given scene and geometric transformation of imagery into standard mapping projections. The examples are illustrated by Landsat scenes of the Andes mountains and the Altyn-Tagh fault zone in China before and after contrast enhancement, and by a classification of land use in Portland, Oregon. The VICAR image processing software system is described; it consists of a language translator that simplifies execution of image processing programs and provides a general-purpose format so that imagery from a variety of sources can be processed by the same basic set of general applications programs.
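
    Two of the enhancements named above are easy to sketch; the functions below are generic NumPy stand-ins, not VICAR code, and the percentile limits are an assumed convention.

    ```python
    import numpy as np

    def stretch(band, lo_pct=2, hi_pct=98):
        """Linear contrast enhancement: map the 2nd-98th percentile of the
        input band onto the full 0-255 display range."""
        lo, hi = np.percentile(band, [lo_pct, hi_pct])
        out = (band.astype(float) - lo) / (hi - lo)
        return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

    def band_ratio(band_a, band_b):
        """Band ratioing suppresses topographic shading that scales both
        bands equally, highlighting spectral differences instead."""
        return band_a.astype(float) / (band_b.astype(float) + 1e-6)
    ```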

  20. Physics of MRI: a primer.

    PubMed

    Plewes, Donald B; Kucharczyk, Walter

    2012-05-01

    This article is based on an introductory lecture given for the past many years during the "MR Physics and Techniques for Clinicians" course at the Annual Meeting of the ISMRM. This introduction is not intended to be a comprehensive overview of the field, as the subject of magnetic resonance imaging (MRI) physics is large and complex. Rather, it is intended to lay a conceptual foundation by which magnetic resonance image formation can be understood from an intuitive perspective. The presentation is nonmathematical, relying on simple models that take the reader progressively from the basic spin physics of nuclei, through descriptions of how the magnetic resonance signal is generated and detected in an MRI scanner, the foundations of nuclear magnetic resonance (NMR) relaxation, and a discussion of the Fourier transform and its relation to MR image formation. The article continues with a discussion of how magnetic field gradients are used to facilitate spatial encoding and concludes with a development of basic pulse sequences and the factors defining image contrast. Copyright © 2012 Wiley Periodicals, Inc.
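
    The Fourier relationship at the heart of this primer can be stated in two lines of code: the acquired k-space matrix is, to a good approximation, the 2-D Fourier transform of the object, so an inverse FFT recovers the image. The file name below is a hypothetical placeholder for raw k-space data.

    ```python
    import numpy as np

    # Load a fully sampled 2-D k-space matrix (hypothetical raw data file).
    kspace = np.load('kspace.npy')

    # Shift DC to the corner, inverse-transform, shift back, take magnitude.
    image = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace))))
    ```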
