Sample records for standard image processing

  1. The effect of image processing on the detection of cancers in digital mammography.

    PubMed

    Warren, Lucy M; Given-Wilson, Rosalind M; Wallis, Matthew G; Cooke, Julie; Halling-Brown, Mark D; Mackenzie, Alistair; Chakraborty, Dev P; Bosmans, Hilde; Dance, David R; Young, Kenneth C

    2014-08-01

    OBJECTIVE. The objective of our study was to investigate the effect of image processing on the detection of cancers in digital mammography images. MATERIALS AND METHODS. Two hundred seventy pairs of breast images (both breasts, one view) were collected from eight systems using Hologic amorphous selenium detectors: 80 image pairs showed breasts containing subtle malignant masses; 30 image pairs, biopsy-proven benign lesions; 80 image pairs, simulated calcification clusters; and 80 image pairs, no cancer (normal). The 270 image pairs were processed with three types of image processing: standard (full enhancement), low contrast (intermediate enhancement), and pseudo-film-screen (no enhancement). Seven experienced observers inspected the images, locating and rating regions they suspected to be cancer for likelihood of malignancy. The results were analyzed using a jackknife-alternative free-response receiver operating characteristic (JAFROC) analysis. RESULTS. The detection of calcification clusters was significantly affected by the type of image processing: The JAFROC figure of merit (FOM) decreased from 0.65 with standard image processing to 0.63 with low-contrast image processing (p = 0.04) and from 0.65 with standard image processing to 0.61 with film-screen image processing (p = 0.0005). The detection of noncalcification cancers was not significantly different among the image-processing types investigated (p > 0.40). CONCLUSION. These results suggest that image processing has a significant impact on the detection of calcification clusters in digital mammography. For the three image-processing versions and the system investigated, standard image processing was optimal for the detection of calcification clusters. The effect on cancer detection should be considered when selecting the type of image processing in the future.
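
    As a rough illustration of the analysis used above, the JAFROC figure of merit can be read as the probability that a rated lesion on an abnormal image outscores the highest-rated suspicious region on a normal image. A minimal sketch follows; the function names, toy data, and tie-handling convention are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal sketch of the JAFROC figure of merit (FOM), assuming the
    # usual definition: the probability that a lesion rating on an abnormal
    # image exceeds the highest false-positive rating on a normal image,
    # with ties counted as 0.5. Names and data are illustrative.

    def jafroc_fom(lesion_ratings, max_fp_per_normal_case):
        """lesion_ratings: one rating per lesion, pooled over abnormal cases.
        max_fp_per_normal_case: highest false-positive rating on each normal
        case; use float('-inf') for normal cases carrying no marks."""
        wins = 0.0
        for l in lesion_ratings:
            for f in max_fp_per_normal_case:
                wins += 1.0 if l > f else (0.5 if l == f else 0.0)
        return wins / (len(lesion_ratings) * len(max_fp_per_normal_case))

    # Toy example: four lesions, four normal cases (one carrying no marks).
    print(jafroc_fom([3, 4, 4, 5], [2, 3, 4, float("-inf")]))  # -> 0.84375
    ```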

  2. Developing an ANSI standard for image quality tools for the testing of active millimeter wave imaging systems

    NASA Astrophysics Data System (ADS)

    Barber, Jeffrey; Greca, Joseph; Yam, Kevin; Weatherall, James C.; Smith, Peter R.; Smith, Barry T.

    2017-05-01

    In 2016, the millimeter wave (MMW) imaging community initiated the formation of a standard for millimeter wave image quality metrics. This new standard, American National Standards Institute (ANSI) N42.59, will apply to active MMW systems for security screening of humans. The Electromagnetic Signatures of Explosives Laboratory at the Transportation Security Laboratory is supporting the ANSI standards process via the creation of initial prototypes for round-robin testing with MMW imaging system manufacturers and experts. Results obtained for these prototypes will be used to inform the community and lead to consensus objective standards amongst stakeholders. Images collected with laboratory systems are presented along with results of preliminary image analysis. Future directions for object design, data collection and image processing are discussed.

  3. Amplitude image processing by diffractive optics.

    PubMed

    Cagigal, Manuel P; Valle, Pedro J; Canales, V F

    2016-02-22

    In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, such as low-pass or high-pass filtering, is carried out using diffractive optical elements (DOEs), since these allow operating on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we analyze the performance of amplitude image processing. In particular, a DOE Laplacian filter is applied to simulated astronomical images for detecting two stars one Airy ring apart. We also check by numerical simulations that the use of a Laplacian amplitude filter produces less noisy images than standard digital image processing.
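
    To make the distinction concrete, the following sketch contrasts amplitude-domain filtering (filter the complex field, then detect) with standard intensity-domain filtering (detect, then filter). The toy field, sampling, and filter normalization are assumptions for illustration only.

    ```python
    # Sketch contrasting amplitude-domain and intensity-domain Laplacian
    # filtering, in the spirit of the abstract above. The toy complex field
    # and the filter normalization are illustrative assumptions.
    import numpy as np

    def laplacian_transfer(shape):
        fy = np.fft.fftfreq(shape[0])[:, None]
        fx = np.fft.fftfreq(shape[1])[None, :]
        return -4.0 * np.pi**2 * (fx**2 + fy**2)   # Fourier transform of the Laplacian

    field = np.exp(1j * np.random.rand(256, 256))   # toy complex amplitude

    # Amplitude processing: filter the complex field (what the DOE does),
    # then detect the intensity.
    amp_out = np.abs(np.fft.ifft2(laplacian_transfer(field.shape) *
                                  np.fft.fft2(field)))**2

    # Standard digital processing: detect the intensity first, then filter it.
    intensity = np.abs(field)**2
    dig_out = np.real(np.fft.ifft2(laplacian_transfer(field.shape) *
                                   np.fft.fft2(intensity)))
    ```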

  4. Multiscale image processing and antiscatter grids in digital radiography.

    PubMed

    Lo, Winnie Y; Hornof, William J; Zwingenberger, Allison L; Robertson, Ian D

    2009-01-01

    Scatter radiation is a source of noise and results in a decreased signal-to-noise ratio and thus decreased image quality in digital radiography. We determined subjectively whether a digitally processed image made without a grid would be of similar quality to an image made with a grid but without image processing. Additionally, the effects of exposure dose and of using a grid with digital radiography on overall image quality were studied. Thoracic and abdominal radiographs of five dogs of various sizes were made. Four acquisition techniques were included: (1) with a grid, standard exposure dose, digital image processing; (2) without a grid, standard exposure dose, digital image processing; (3) without a grid, half the exposure dose, digital image processing; and (4) with a grid, standard exposure dose, no digital image processing (to mimic a film-screen radiograph). Full-size radiographs as well as magnified images of specific anatomic regions were generated. Nine reviewers rated the overall image quality subjectively using a five-point scale. All digitally processed radiographs had higher overall scores than nondigitally processed radiographs regardless of patient size, exposure dose, or use of a grid. The images made at half the exposure dose had slightly lower quality than those made at full dose, but this was only statistically significant in magnified images. Using a grid with digital image processing led to a slight but statistically significant increase in overall quality when compared with digitally processed images made without a grid, but whether this increase in quality is clinically significant is unknown.

  5. CR softcopy display presets based on optimum visualization of specific findings

    NASA Astrophysics Data System (ADS)

    Andriole, Katherine P.; Gould, Robert G.; Webb, W. R.

    1999-07-01

    The purpose of this research is to assess the utility of providing presets for computed radiography (CR) softcopy display, based not on window/level settings but on image processing applied to the image, optimized for visualization of specific findings, pathologies, etc. Clinical chest images are acquired using an Agfa ADC 70 CR scanner and transferred over the PACS network to an image processing station which has the capability to perform multiscale contrast equalization. The optimal image processing settings per finding are developed in conjunction with a thoracic radiologist by manipulating the multiscale image contrast amplification algorithm parameters. Softcopy display of images processed with finding-specific settings is compared with the standard default image presentation for fifty cases of each category. Comparison is scored using a five-point scale, with positive one and two denoting that the standard presentation is preferred over the finding-specific presets, negative one and two denoting that the finding-specific preset is preferred over the standard presentation, and zero denoting no difference. Presets have been developed for pneumothorax, and clinical cases are currently being collected in preparation for formal clinical trials. Subjective assessments indicate a preference for the optimized-preset presentation of images over the standard default, particularly by inexperienced radiology residents and referring clinicians.

  6. Finding-specific display presets for computed radiography soft-copy reading.

    PubMed

    Andriole, K P; Gould, R G; Webb, W R

    1999-05-01

    Much work has been done to optimize the display of cross-sectional modality imaging examinations for soft-copy reading (i.e., window/level tissue presets, and format presentations such as tile and stack modes, four-on-one, nine-on-one, etc). Less attention has been paid to the display of digital forms of the conventional projection x-ray. The purpose of this study is to assess the utility of providing presets for computed radiography (CR) soft-copy display, based not on window/level settings but on processing applied to the image, optimized for visualization of specific findings, pathologies, etc. (e.g., pneumothorax, tumor, tube location). It is felt that digital display of CR images based on finding-specific processing presets has the potential to: speed reading of digital projection x-ray examinations on soft copy; improve diagnostic efficacy; standardize display across examination type, clinical scenario, important key findings, and significant negatives; facilitate image comparison; and improve confidence in and acceptance of soft-copy reading. Clinical chest images are acquired using an Agfa-Gevaert (Mortsel, Belgium) ADC 70 CR scanner and Fuji (Stamford, CT) 9000 and AC2 CR scanners. Those demonstrating pertinent findings are transferred over the clinical picture archiving and communications system (PACS) network to a research image processing station (Agfa PS5000), where the optimal image-processing settings per finding, pathologic category, etc, are developed in conjunction with a thoracic radiologist by manipulating the multiscale image contrast amplification (Agfa MUSICA) algorithm parameters. Soft-copy display of images processed with finding-specific settings is compared with the standard default image presentation for 50 cases of each category. Comparison is scored using a 5-point scale, with the positive scale denoting that the standard presentation is preferred over the finding-specific processing, the negative scale denoting that the finding-specific processing is preferred over the standard presentation, and zero denoting no difference. Processing settings have been developed for several findings including pneumothorax and lung nodules, and clinical cases are currently being collected in preparation for formal clinical trials. Preliminary results indicate a preference for the optimized-processing presentation of images over the standard default, particularly by inexperienced radiology residents and referring clinicians.

  7. Photometric Calibrations of Gemini Images of NGC 6253

    NASA Astrophysics Data System (ADS)

    Pearce, Sean; Jeffery, Elizabeth

    2017-01-01

    We present preliminary results of our analysis of the metal-rich open cluster NGC 6253 using imaging data from GMOS at the Gemini South Observatory. These data are part of a larger project to observe the effects of high metallicity on white dwarf cooling processes, especially the white dwarf cooling age, which have important implications for the processes of stellar evolution. To standardize the Gemini photometry, we have also secured imaging data of both the cluster and standard star fields using the 0.6-m SARA Observatory at CTIO. By analyzing and comparing the standard star fields of both the SARA data and the published Gemini zero-points of the standard star fields, we will calibrate the data obtained for the cluster. These calibrations are an important part of the project to obtain a standardized deep color-magnitude diagram (CMD) with which to analyze the cluster. We describe the process of verifying our standardization. With a standardized CMD, we also present an analysis of the cluster's main-sequence turnoff age.
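
    For readers unfamiliar with the calibration step, a minimal sketch of standard-star zero-point calibration follows. The function names and numbers are illustrative, and color and extinction terms are deliberately neglected; this is not the authors' pipeline.

    ```python
    # Sketch of standard-star zero-point photometric calibration:
    # m_inst = -2.5*log10(counts/s), and the zero point is the median
    # offset between catalog and instrumental magnitudes of the standards.
    # Color and extinction terms are omitted for simplicity.
    import numpy as np

    def zero_point(counts_per_sec, catalog_mags):
        m_inst = -2.5 * np.log10(np.asarray(counts_per_sec))
        return np.median(np.asarray(catalog_mags) - m_inst)

    def calibrate(counts_per_sec, zp):
        return -2.5 * np.log10(np.asarray(counts_per_sec)) + zp

    zp = zero_point([1500.0, 820.0, 4100.0], [14.2, 14.9, 13.1])
    print(calibrate([2300.0], zp))   # calibrated magnitude of a cluster star
    ```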

  8. Standardizing Quality Assessment of Fused Remotely Sensed Images

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Moellmann, J.; Fries, K.

    2017-09-01

    The multitude of available operational remote sensing satellites led to the development of many image fusion techniques to provide high spatial, spectral, and temporal resolution images. The comparison of different techniques is necessary to obtain an optimized image for the different applications of remote sensing. There are two approaches to assessing image quality: (1) qualitatively, by visual interpretation, and (2) quantitatively, using image quality indices. However, an objective comparison is difficult because a visual assessment is always subjective and quantitative assessments use different criteria. Depending on the criteria and indices, the result varies. Therefore, it is necessary to standardize both processes (qualitative and quantitative assessment) to allow an objective image fusion quality evaluation. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process for objectively comparing fused image quality. First, established image fusion quality assessment protocols, i.e., Quality with No Reference (QNR) and Khan's protocol, were compared across various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports on the results of the comparison and provides recommendations for future research.
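
    As one example of the quantitative indices such comparisons rely on, the sketch below computes ERGAS (relative dimensionless global error in synthesis), a widely used fused-image quality measure; the array layout and resolution-ratio argument are illustrative assumptions, and it is not necessarily one of the indices in the cited protocols.

    ```python
    # Minimal sketch of ERGAS, a common quantitative fusion-quality index:
    # 100 * (h/l) * sqrt(mean over bands of (RMSE_k / mean_k)^2),
    # where h/l is the high-to-low resolution ratio of the fusion.
    import numpy as np

    def ergas(fused, reference, ratio):
        """fused, reference: arrays of shape (bands, H, W); ratio = h/l."""
        band_terms = []
        for f, r in zip(fused, reference):
            rmse = np.sqrt(np.mean((f - r) ** 2))
            band_terms.append((rmse / r.mean()) ** 2)
        return 100.0 * ratio * np.sqrt(np.mean(band_terms))
    ```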

  9. The COST Action IC0604 "Telepathology Network in Europe" (EURO-TELEPATH).

    PubMed

    García-Rojo, Marcial; Gonçalves, Luís; Blobel, Bernd

    2012-01-01

    The COST Action IC0604 "Telepathology Network in Europe" (EURO-TELEPATH) is a European COST Action that has been running from 2007 to 2011. COST Actions are funded by the COST (European Cooperation in the field of Scientific and Technical Research) Agency, supported by the Seventh Framework Programme for Research and Technological Development (FP7) of the European Union. EURO-TELEPATH's main objectives were evaluating and validating the common technological framework and communication standards required to access, transmit and manage digital medical records by pathologists and other medical professionals in a networked environment. The project was organized in four working groups. Working Group 1 "Business modeling in pathology" has designed main pathology processes - Frozen Study, Formalin Fixed Specimen Study, Telepathology, Cytology, and Autopsy - using Business Process Modeling Notation (BPMN). Working Group 2 "Informatics standards in pathology" has been dedicated to promoting the development and application of informatics standards in pathology, collaborating with Integrating the Healthcare Enterprise (IHE), Digital Imaging and Communications in Medicine (DICOM), Health Level Seven (HL7), and other standardization bodies. Working Group 3 "Images: Analysis, Processing, Retrieval and Management" worked on the use of virtual or digital slides that are fostering the use of image processing and analysis in pathology, not only for research purposes but also in daily practice. Working Group 4 "Technology and Automation in Pathology" was focused on studying the adequacy of currently existing technical solutions, including, e.g., the quality of images obtained by slide scanners, or the efficiency of image analysis applications. Major outcomes of this Action are the collaboration with international health informatics standardization bodies to foster the development of standards for digital pathology, and a new approach to workflow analysis based on business process modeling. Health terminology standardization research has become a topic of high interest. Future research work should focus on standardization of automatic image analysis and tissue microarrays imaging.

  10. 76 FR 51993 - Draft Guidance for Industry on Standards for Clinical Trial Imaging Endpoints; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-19

    ... assist the office in processing your requests. See the SUPPLEMENTARY INFORMATION section for electronic... considerations for standardization of image acquisition, image interpretation methods, and other procedures to help ensure imaging data quality. The draft guidance describes two categories of image acquisition and...

  11. funcLAB/G-service-oriented architecture for standards-based analysis of functional magnetic resonance imaging in HealthGrids.

    PubMed

    Erberich, Stephan G; Bhandekar, Manasee; Chervenak, Ann; Kesselman, Carl; Nelson, Marvin D

    2007-01-01

    Functional MRI is successfully being used in clinical and research applications including preoperative planning, language mapping, and outcome monitoring. However, clinical use of fMRI is less widespread due to the complexity of imaging, image workflow, and post-processing, and a lack of algorithmic standards hindering result comparability. As a consequence, widespread adoption of fMRI as a clinical tool is low, contributing to community physicians' uncertainty about how to integrate fMRI into practice. In addition, training of physicians in fMRI is in its infancy and requires clinical and technical understanding. Therefore, many institutions that perform fMRI have a team of basic researchers and physicians to operate fMRI as a routine imaging tool. In order to provide fMRI as an advanced diagnostic tool to the benefit of a larger patient population, image acquisition and image post-processing must be streamlined, standardized, and available even at institutions that lack these resources. Here we describe a software architecture, the functional imaging laboratory (funcLAB/G), which addresses (i) standardized image processing using Statistical Parametric Mapping and (ii) its extension to secure sharing and availability for the community using standards-based Grid technology (Globus Toolkit). funcLAB/G carries the potential to overcome the limitations of fMRI in clinical use and thus make standardized fMRI available to the broader healthcare enterprise utilizing the Internet and HealthGrid Web Services technology.

  12. A low-cost vector processor boosting compute-intensive image processing operations

    NASA Technical Reports Server (NTRS)

    Adorf, Hans-Martin

    1992-01-01

    Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation of the standard Tarasko-Richardson-Lucy restoration algorithm on an Intel i860-based VP board, seamlessly interfaced to a commercial, interactive image processing system, is presented. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.
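
    For reference, the Richardson-Lucy iteration named above can be sketched in a few lines using FFT convolutions, exactly the kind of kernel a vector-processing board accelerates. The circular-boundary handling, iteration count, and flat initialization below are illustrative simplifications; the original ran inside an interactive image processing system, not Python.

    ```python
    # Sketch of the (Tarasko-)Richardson-Lucy iteration:
    #   u_{k+1} = u_k * H^T( d / (H u_k) )
    # where H is convolution with the PSF and d the observed image.
    import numpy as np

    def richardson_lucy(observed, psf, n_iter=50):
        observed = np.asarray(observed, dtype=float)
        psf = psf / psf.sum()
        otf = np.fft.rfft2(psf, observed.shape)           # transfer function
        conv = lambda x, k: np.fft.irfft2(np.fft.rfft2(x) * k, x.shape)
        estimate = np.full(observed.shape, observed.mean())
        for _ in range(n_iter):
            blurred = conv(estimate, otf)                 # H u_k
            ratio = observed / (blurred + 1e-12)
            estimate *= conv(ratio, np.conj(otf))         # u_k * H^T(ratio)
        return estimate
    ```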

  13. Electronic Photography at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Holm, Jack; Judge, Nancianne

    1995-01-01

    An electronic photography facility has been established in the Imaging & Photographic Technology Section, Visual Imaging Branch, at the NASA Langley Research Center (LaRC). The purpose of this facility is to provide the LaRC community with access to digital imaging technology. In particular, capabilities have been established for image scanning, direct image capture, optimized image processing for storage, image enhancement, and optimized device dependent image processing for output. Unique approaches include: evaluation and extraction of the entire film information content through scanning; standardization of image file tone reproduction characteristics for optimal bit utilization and viewing; education of digital imaging personnel on the effects of sampling and quantization to minimize image processing related information loss; investigation of the use of small kernel optimal filters for image restoration; characterization of a large array of output devices and development of image processing protocols for standardized output. Currently, the laboratory has a large collection of digital image files which contain essentially all the information present on the original films. These files are stored at 8-bits per color, but the initial image processing was done at higher bit depths and/or resolutions so that the full 8-bits are used in the stored files. The tone reproduction of these files has also been optimized so the available levels are distributed according to visual perceptibility. Look up tables are available which modify these files for standardized output on various devices, although color reproduction has been allowed to float to some extent to allow for full utilization of output device gamut.

  14. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1990-01-01

    A process is disclosed for x ray registration and differencing which results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.

  15. Digital Data Registration and Differencing Compression System

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1996-01-01

    A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.

  16. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1992-01-01

    A process for x-ray registration and differencing that results in more efficient compression is discussed. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x-ray digital images.
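
    Across the three patent records above, the core pipeline is: register the subject image to a (modeled) reference, subtract, compress the residual, and later decompress and add the reference back. A minimal sketch follows, with cross-correlation standing in for the patents' model-based registration and zlib standing in for the "conventional compression algorithms"; all names are illustrative.

    ```python
    # Sketch of register-difference-compress. Translational registration
    # via FFT cross-correlation is a stand-in for the patents' 3D-model
    # step; zlib is a stand-in for any conventional compressor.
    import numpy as np
    import zlib

    def register_translation(subject, reference):
        corr = np.fft.ifft2(np.fft.fft2(reference) *
                            np.conj(np.fft.fft2(subject)))
        dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
        return np.roll(subject, (dy, dx), axis=(0, 1))

    def compress_differenced(subject, reference):
        diff = (register_translation(subject, reference).astype(np.int16)
                - reference)                       # residual is cheap to code
        return zlib.compress(diff.tobytes())

    def reconstruct(blob, reference):
        diff = np.frombuffer(zlib.decompress(blob), dtype=np.int16)
        return reference + diff.reshape(reference.shape)   # approximated subject
    ```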

  17. From plastic to gold: a unified classification scheme for reference standards in medical image processing

    NASA Astrophysics Data System (ADS)

    Lehmann, Thomas M.

    2002-05-01

    Reliable evaluation of medical image processing is of major importance for routine applications. Nonetheless, evaluation is often omitted or methodically defective when novel approaches or algorithms are introduced. Adopted from medical diagnosis, we define the following criteria to classify reference standards: 1. Reliance, if the generation or capturing of test images for evaluation follows an exactly determined and reproducible protocol. 2. Equivalence, if the image material or relationships considered within an algorithmic reference standard equal real-life data with respect to structure, noise, or other parameters of importance. 3. Independence, if any reference standard relies on a different procedure than that to be evaluated, or on other images or image modalities than those used routinely. This criterion bans the simultaneous use of one image for both the training and the test phase. 4. Relevance, if the algorithm to be evaluated is self-reproducible. If random parameters or optimization strategies are applied, reliability of the algorithm must be shown before the reference standard is applied for evaluation. 5. Significance, if the number of reference standard images used for evaluation is sufficiently large to enable statistically founded analysis. We demand that a true gold standard satisfy Criteria 1 to 3. Any standard satisfying only two criteria, i.e., Criterion 1 and Criterion 2 or Criterion 1 and Criterion 3, is referred to as a silver standard. Other standards are termed plastic. Before exhaustive evaluation based on gold or silver standards is performed, its relevance must be shown (Criterion 4) and sufficient tests must be carried out to support a statistically founded analysis (Criterion 5). In this paper, examples are given for each class of reference standards.
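
    The classification rule stated in the abstract can be written out directly. The sketch below encodes gold (Criteria 1-3), silver (1 and 2, or 1 and 3), and plastic; Criteria 4 and 5 are preconditions for using a standard rather than part of its class.

    ```python
    # The paper's classification rule, written out directly. Criteria are
    # numbered as in the abstract: 1 reliance, 2 equivalence, 3 independence.
    def classify_reference_standard(criteria):
        """criteria: set of satisfied criterion numbers, e.g. {1, 2, 3}."""
        if {1, 2, 3} <= criteria:
            return "gold"
        if {1, 2} <= criteria or {1, 3} <= criteria:
            return "silver"
        return "plastic"

    assert classify_reference_standard({1, 2, 3}) == "gold"
    assert classify_reference_standard({1, 3}) == "silver"
    assert classify_reference_standard({2, 3}) == "plastic"
    ```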

  18. Content standards for medical image metadata

    NASA Astrophysics Data System (ADS)

    d'Ornellas, Marcos C.; da Rocha, Rafael P.

    2003-12-01

    Medical images are at the heart of healthcare diagnostic procedures. They have provided not only a noninvasive means to view anatomical cross-sections of internal organs but also a means for physicians to evaluate the patient's diagnosis and monitor the effects of treatment. For a medical center, the emphasis may shift from the generation of images to post-processing and data management, since the medical staff may generate even more processed images and other data from the original image after various analyses and post-processing. A medical image data repository for health care information systems is becoming a critical need. This data repository would contain comprehensive patient records, including information such as clinical data, related diagnostic images, and post-processed images. Due to the large volume and complexity of the data as well as the diversified user access requirements, the implementation of a medical image archive system will be a complex and challenging task. This paper discusses content standards for medical image metadata. In addition, it also focuses on the evaluation of image metadata content and on metadata quality management.

  19. Device and methods for "gold standard" registration of clinical 3D and 2D cerebral angiograms

    NASA Astrophysics Data System (ADS)

    Madan, Hennadii; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga

    2015-03-01

    Translation of any novel or existing 3D-2D image registration method into clinical image-guidance systems is limited by the lack of objective validation on clinical image datasets. The main reason is that, besides the calibration of the 2D imaging system, a reference or "gold standard" registration is very difficult to obtain on clinical image datasets. In the context of cerebral endovascular image-guided interventions (EIGIs), we present a calibration device in the form of a headband with integrated fiducial markers and, secondly, propose an automated pipeline comprising 3D and 2D image processing, analysis, and annotation steps, the result of which is a retrospective calibration of the 2D imaging system and an optimal, i.e., "gold standard" registration of the 3D and 2D images. The device and methods were used to create the "gold standard" on 15 datasets of 3D and 2D cerebral angiograms, where each dataset was acquired on a patient undergoing EIGI for either aneurysm coiling or embolization of an arteriovenous malformation. The use of the device integrated seamlessly into the clinical workflow of EIGI, while the automated pipeline eliminated all manual input and interactive image processing, analysis, or annotation. In this way, the time to obtain the "gold standard" was reduced from 30 minutes to less than one, and the "gold standard" of 3D-2D registration on all 15 datasets of cerebral angiograms was obtained with sub-0.1 mm accuracy.
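
    A common way to score such a fiducial-based "gold standard" is the reprojection error of the 3D markers through the calibrated projection. The sketch below assumes a 3x4 projection matrix and illustrative variable names; it is not the authors' pipeline.

    ```python
    # Sketch of fiducial reprojection error: project calibrated 3D marker
    # positions into the 2D image and compare with their detected positions.
    import numpy as np

    def reprojection_error(P, pts3d, pts2d):
        """P: 3x4 projection matrix; pts3d: Nx3; pts2d: Nx2 (detector units)."""
        homog = np.hstack([pts3d, np.ones((len(pts3d), 1))])
        proj = (P @ homog.T).T
        proj = proj[:, :2] / proj[:, 2:3]          # perspective divide
        return np.sqrt(np.mean(np.sum((proj - pts2d) ** 2, axis=1)))
    ```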

  20. Standardization efforts of digital pathology in Europe.

    PubMed

    Rojo, Marcial García; Daniel, Christel; Schrader, Thomas

    2012-01-01

    EURO-TELEPATH is a European COST Action, IC0604. It started in 2007 and will end in November 2011. Its main objectives are evaluating and validating the common technological framework and communication standards required to access, transmit, and manage digital medical records by pathologists and other medical specialties in a networked environment. Working Group 1, "Business Modelling in Pathology," has designed main pathology processes - Frozen Study, Formalin Fixed Specimen Study, Telepathology, Cytology, and Autopsy - using Business Process Modelling Notation (BPMN). Working Group 2 has been dedicated to promoting the application of informatics standards in pathology, collaborating with Integrating the Healthcare Enterprise (IHE), Digital Imaging and Communications in Medicine (DICOM), Health Level Seven (HL7), and other standardization bodies. Health terminology standardization research has become a topic of great interest. Future research work should focus on standardizing automatic image analysis and tissue microarrays imaging.

  1. Spatial Standard Observer

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    2010-01-01

    The present invention relates to devices and methods for the measurement and/or for the specification of the perceptual intensity of a visual image, or the perceptual distance between a pair of images. Grayscale test and reference images are processed to produce test and reference luminance images. A luminance filter function is convolved with the reference luminance image to produce a local mean luminance reference image. Test and reference contrast images are produced from the local mean luminance reference image and the test and reference luminance images respectively, followed by application of a contrast sensitivity filter. The resulting images are combined according to mathematical prescriptions to produce a Just Noticeable Difference, JND value, indicative of a Spatial Standard Observer, SSO. Some embodiments include masking functions, window functions, special treatment for images lying on or near borders, and pre-processing of test images.

  2. Spatial Standard Observer

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    2012-01-01

    The present invention relates to devices and methods for the measurement and/or for the specification of the perceptual intensity of a visual image, or the perceptual distance between a pair of images. Grayscale test and reference images are processed to produce test and reference luminance images. A luminance filter function is convolved with the reference luminance image to produce a local mean luminance reference image. Test and reference contrast images are produced from the local mean luminance reference image and the test and reference luminance images respectively, followed by application of a contrast sensitivity filter. The resulting images are combined according to mathematical prescriptions to produce a Just Noticeable Difference, JND value, indicative of a Spatial Standard Observer, SSO. Some embodiments include masking functions, window functions, special treatment for images lying on or near borders and pre-processing of test images.
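
    Condensing the pipeline the two patent records describe: luminance images, a local-mean reference via a luminance filter, contrast images, a contrast-sensitivity filter, and pooling to a JND value. The Gaussian filters and Minkowski pooling exponent below are illustrative stand-ins, not the patented filters.

    ```python
    # Rough sketch of the Spatial Standard Observer pipeline. Gaussian
    # filters stand in for the luminance and contrast-sensitivity filters,
    # and the pooling exponent is an illustrative choice.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def sso_jnd(test_lum, ref_lum, mean_sigma=8.0, csf_sigma=1.0, beta=2.9):
        local_mean = gaussian_filter(ref_lum, mean_sigma)   # luminance filter
        c_test = (test_lum - local_mean) / local_mean       # contrast images
        c_ref = (ref_lum - local_mean) / local_mean
        d = gaussian_filter(c_test - c_ref, csf_sigma)      # stand-in CSF filter
        return np.sum(np.abs(d) ** beta) ** (1.0 / beta)    # Minkowski pooling
    ```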

  3. Automatic Feature Extraction System.

    DTIC Science & Technology

    1982-12-01

    exploitation. It was used for processing of black and white and multispectral reconnaissance photography, side-looking synthetic aperture radar imagery... the image data and different software modules for image queuing and formatting, the result of the input process will be images in standard AFES file... timely manner. The FFS configuration provides the environment necessary for integrated testing of image processing functions and design and

  4. Electrophoresis gel image processing and analysis using the KODAK 1D software.

    PubMed

    Pizzonia, J

    2001-06-01

    The present article reports on the performance of the KODAK 1D Image Analysis Software for the acquisition of information from electrophoresis experiments and highlights the utility of several mathematical functions for subsequent image processing, analysis, and presentation. Digital images of Coomassie-stained polyacrylamide protein gels containing molecular weight standards and ethidium bromide stained agarose gels containing DNA mass standards are acquired using the KODAK Electrophoresis Documentation and Analysis System 290 (EDAS 290). The KODAK 1D software is used to optimize lane and band identification using features such as isomolecular weight lines. Mathematical functions for mass standard representation are presented, and two methods for estimation of unknown band mass are compared. Given the progressive transition of electrophoresis data acquisition and daily reporting in peer-reviewed journals to digital formats ranging from 8-bit systems such as EDAS 290 to more expensive 16-bit systems, the utility of algorithms such as Gaussian modeling, which can correct geometric aberrations such as clipping due to signal saturation common at lower bit depth levels, is discussed. Finally, image-processing tools that can facilitate image preparation for presentation are demonstrated.
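
    The Gaussian-modeling correction mentioned above for clipped (saturated) bands can be sketched as fitting a Gaussian to a band's intensity profile using only unsaturated samples and extrapolating the peak. The saturation level and initial-guess heuristic below are assumptions, not the KODAK 1D implementation.

    ```python
    # Sketch of Gaussian modeling for a saturation-clipped band profile:
    # fit only the unclipped samples, then the fitted peak recovers the
    # amplitude lost to clipping.
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(x, a, mu, sigma):
        return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    def fit_clipped_band(x, profile, saturation=255):
        ok = profile < saturation                 # ignore clipped samples
        p0 = (profile[ok].max(), x[ok][np.argmax(profile[ok])], 5.0)
        params, _ = curve_fit(gauss, x[ok], profile[ok], p0=p0)
        return params                             # (amplitude, center, width)
    ```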

  5. Electronic photography at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Holm, Jack M.

    1994-01-01

    The field of photography began a metamorphosis several years ago which promises to fundamentally change how images are captured, transmitted, and output. At this time the metamorphosis is still in the early stages, but already new processes, hardware, and software are allowing many individuals and organizations to explore the entry of imaging into the information revolution. Exploration at this time is prerequisite to leading expertise in the future, and a number of branches at LaRC have ventured into electronic and digital imaging. Their progress until recently has been limited by two factors: the lack of an integrated approach and the lack of an electronic photographic capability. The purpose of the research conducted was to address these two items. In some respects, the lack of electronic photographs has prevented application of an integrated imaging approach. Since everything could not be electronic, the tendency was to work with hard copy. Over the summer, the Photographics Section has set up an Electronic Photography Laboratory. This laboratory now has the capability to scan film images, process the images, and output the images in a variety of forms. Future plans also include electronic capture capability. The current forms of image processing available include sharpening, noise reduction, dust removal, tone correction, color balancing, image editing, cropping, electronic separations, and halftoning. Output choices include customer specified electronic file formats which can be output on magnetic or optical disks or over the network, 4400 line photographic quality prints and transparencies to 8.5 by 11 inches, and 8000 line film negatives and transparencies to 4 by 5 inches. The problem of integrated imaging involves a number of branches at LaRC including Visual Imaging, Research Printing and Publishing, Data Visualization and Animation, Advanced Computing, and various research groups. These units must work together to develop common approaches to image processing and archiving. The ultimate goal is to be able to search for images using an on-line database and image catalog. These images could then be retrieved over the network as needed, along with information on the acquisition and processing prior to storage. For this goal to be realized, a number of standard processing protocols must be developed to allow the classification of images into categories. Standard series of processing algorithms can then be applied to each category (although many of these may be adaptive between images). Since the archived image files would be standardized, it should also be possible to develop standard output processing protocols for a number of output devices. If LaRC continues the research effort begun this summer, it may be one of the first organizations to develop an integrated approach to imaging. As such, it could serve as a model for other organizations in government and the private sector.

  6. Performance of a Method to Standardize Breast Ultrasound Interpretation Using Image Processing and Case-Based Reasoning

    NASA Astrophysics Data System (ADS)

    André, M. P.; Galperin, M.; Berry, A.; Ojeda-Fournier, H.; O'Boyle, M.; Olson, L.; Comstock, C.; Taylor, A.; Ledgerwood, M.

    Our computer-aided diagnostic (CADx) tool uses advanced image processing and artificial intelligence to analyze findings on breast sonography images. The goal is to standardize the reporting of such findings using well-defined descriptors and to improve the accuracy and reproducibility of interpretation of breast ultrasound by radiologists. This study examined several factors that may impact the accuracy and reproducibility of the CADx software, which proved to be highly accurate and stable over several operating conditions.

  7. Overlay metrology for double patterning processes

    NASA Astrophysics Data System (ADS)

    Leray, Philippe; Cheng, Shaunee; Laidler, David; Kandel, Daniel; Adel, Mike; Dinu, Berta; Polli, Marco; Vasconi, Mauro; Salski, Bartlomiej

    2009-03-01

    The double patterning (DPT) process is foreseen by the industry to be the main solution for the 32 nm technology node and even beyond. Meanwhile, process compatibility has to be maintained and the performance of overlay metrology has to improve. To achieve this for Image Based Overlay (IBO), usually the optics of overlay tools are improved. It was also demonstrated that these requirements are achievable with a Diffraction Based Overlay (DBO) technique named SCOL™ [1]. In addition, we believe that overlay measurements with respect to a reference grid are required to achieve the required overlay control [2]. This induces at least a three-fold increase in the number of measurements (2 for double-patterned layers to the reference grid and 1 between the double-patterned layers). The requirements of process compatibility, enhanced performance, and a large number of measurements make the choice of overlay metrology for DPT very challenging. In this work we use different flavors of the standard overlay metrology technique (IBO) as well as the new technique (SCOL) to address these three requirements. The compatibility of the corresponding overlay targets with double patterning processes (Litho-Etch-Litho-Etch (LELE), Litho-Freeze-Litho-Etch (LFLE), spacer-defined) is tested. The process impact on different target types is discussed (CD bias for LELE, contrast for LFLE). We compare the standard imaging overlay metrology with non-standard imaging techniques dedicated to double patterning processes (multilayer imaging targets allowing one overlay target instead of three, very small imaging targets). In addition to standard designs already discussed [1], we investigate SCOL target designs specific to double patterning processes. The feedback to the scanner is determined using the different techniques. The final overlay results obtained are compared accordingly. We conclude with the pros and cons of each technique and suggest the optimal metrology strategy for overlay control in double patterning processes.

  8. The Vector, Signal, and Image Processing Library (VSIPL): an Open Standard for Astronomical Data Processing

    NASA Astrophysics Data System (ADS)

    Kepner, J. V.; Janka, R. S.; Lebak, J.; Richards, M. A.

    1999-12-01

    The Vector/Signal/Image Processing Library (VSIPL) is a DARPA-initiated effort made up of industry, government, and academic representatives who have defined an industry-standard API for vector, signal, and image processing primitives for real-time signal processing on high performance systems. VSIPL supports a wide range of data types (int, float, complex, ...) and layouts (vectors, matrices, and tensors) and is ideal for astronomical data processing. The VSIPL API is intended to serve as an open, vendor-neutral, industry-standard interface. The object-based VSIPL API abstracts the memory architecture of the underlying machine by using the concept of memory blocks and views. Early experiments with VSIPL code conversions have been carried out by the High Performance Computing Program team at UCSD. Commercially, several major vendors of signal processors are actively developing implementations. VSIPL has also been explicitly required as part of a recent Rome Labs teraflop procurement. This poster presents the VSIPL API, its functionality, and the status of various implementations.
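
    The block/view memory abstraction at the core of the API can be illustrated with a toy model; this is deliberately not actual VSIPL calls, only the concept: a block owns flat storage, and vector or matrix views bind shape and stride to it without copying.

    ```python
    # Toy illustration of the block/view abstraction (NOT the VSIPL API):
    # a block owns opaque flat memory; views bind shape/stride without copies.
    import numpy as np

    class Block:
        def __init__(self, length, dtype=np.float32):
            self.data = np.zeros(length, dtype=dtype)   # opaque flat storage

    def vector_view(block, offset, stride, length):
        return block.data[offset:offset + stride * length:stride]

    def matrix_view(block, offset, rows, cols):
        return block.data[offset:offset + rows * cols].reshape(rows, cols)

    blk = Block(1024)
    v = vector_view(blk, 0, 2, 100)   # strided vector over the same memory
    m = matrix_view(blk, 0, 16, 64)   # matrix view, no copy
    ```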

  9. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.

  10. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines

    PubMed Central

    Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213
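
    A minimal sketch of the feedback idea described in the two records above: a segmentation parameter is tuned by a search driven by a feedback signal computed from the result, here deviation from an expected object count as a stand-in for abstract ground truth. The quality measure and parameter grid are illustrative, not the authors' framework.

    ```python
    # Sketch of feedback-based parameter adaptation: evaluate a feedback
    # signal on each segmentation result and keep the parameter that
    # minimizes it. The expected object count plays the role of abstract
    # ground truth.
    import numpy as np
    from scipy import ndimage

    def segment(image, threshold):
        labels, n = ndimage.label(image > threshold)
        return labels, n

    def adapt_threshold(image, expected_count,
                        thresholds=np.linspace(0.1, 0.9, 33)):
        best_t, best_err = None, np.inf
        for t in thresholds:                      # loop over the parameter
            _, n = segment(image, t)
            err = abs(n - expected_count)         # feedback signal
            if err < best_err:
                best_t, best_err = t, err
        return best_t
    ```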

  11. Effects of image processing on the detective quantum efficiency

    NASA Astrophysics Data System (ADS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate how the image processing algorithm affects the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). Image performance parameters such as MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic technique. Computed radiography (CR) images of a hand in the posterior-anterior (PA) view for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a uniform (white) image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. In the results, all of the modifications had a considerable influence on the evaluated SNR, MTF, NPS, and DQE. Images modified by the post-processing had a higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, have an effect on the image when evaluating image quality. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in the same way. The results of this study could serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
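
    For orientation, the IEC 62220-1-style relation connecting the three measured quantities is the following, where NNPS is the NPS normalized by the squared large-area signal and q is the incident photon fluence for the RQA5 beam quality:

    ```latex
    % NNPS: NPS normalized by the squared large-area signal \bar{d};
    % q: incident photon fluence (photons per unit area) of the beam.
    \mathrm{NNPS}(f) = \frac{\mathrm{NPS}(f)}{\bar{d}^{\,2}},
    \qquad
    \mathrm{DQE}(f) = \frac{\mathrm{MTF}^{2}(f)}{q \, \mathrm{NNPS}(f)}
    ```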

  12. Dependency of image quality on acquisition protocol and image processing in chest tomosynthesis-a visual grading study based on clinical data.

    PubMed

    Jadidi, Masoud; Båth, Magnus; Nyrén, Sven

    2018-04-09

    To compare the quality of images obtained with two protocols with different acquisition times, and the influence of image post-processing, in a chest digital tomosynthesis (DTS) system. 20 patients with suspected lung cancer were imaged with chest X-ray equipment with a tomosynthesis option. Two examination protocols with different acquisition times (6.3 and 12 s) were performed on each patient. Both protocols were presented with two different kinds of image post-processing (standard DTS processing and more advanced processing optimised for chest radiography). Thus, 4 series from each patient, altogether 80 series, were presented anonymously and in random order. Five observers rated the quality of the reconstructed section images according to predefined quality criteria in three different classes. Visual grading characteristics (VGC) was used to analyse the data, and the area under the VGC curve (AUC_VGC) was used as the figure-of-merit. The 12 s protocol and the standard DTS processing were used as references in the analyses. The protocol with 6.3 s acquisition time had a statistically significant advantage over the vendor-recommended protocol with 12 s acquisition time for the classes of criteria Demarcation (AUC_VGC = 0.56, p = 0.009) and Disturbance (AUC_VGC = 0.58, p < 0.001). A similar value of AUC_VGC was found for the class Structure (definition of bone structures in the spine) (0.56), but it could not be statistically separated from 0.5 (p = 0.21). For the image processing, the VGC analysis showed a small but statistically significant advantage for the standard DTS processing over the more advanced processing for the classes of criteria Demarcation (AUC_VGC = 0.45, p = 0.017) and Disturbance (AUC_VGC = 0.43, p = 0.005). A similar value of AUC_VGC was found for the class Structure (0.46), but it could not be statistically separated from 0.5 (p = 0.31). The study indicates that the protocol with 6.3 s acquisition time yields slightly better image quality than the vendor-recommended protocol with 12 s acquisition time for several anatomical structures. Furthermore, the standard gradation processing (the vendor-recommended post-processing for DTS) yields some advantage over the gradation processing/multiobjective frequency processing/flexible noise control processing in terms of image quality for all classes of criteria. Advances in knowledge: The study shows that image quality may be strongly affected by the selection of DTS protocol and that the vendor-recommended protocol may not always be the optimal choice.
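
    For readers unfamiliar with VGC analysis, the figure-of-merit is the area under a curve that plots the cumulative rating distributions of the two conditions against each other, with 0.5 meaning no difference. A minimal sketch follows, with a simplified non-parametric treatment of the rating scale as an assumption.

    ```python
    # Sketch of the VGC figure-of-merit: pair the cumulative rating
    # distributions of two conditions at each rating threshold and take
    # the trapezoidal area under the resulting curve.
    import numpy as np

    def auc_vgc(ratings_a, ratings_b, scale=(1, 2, 3, 4, 5)):
        thresholds = list(scale) + [max(scale) + 1]
        fa = [np.mean(np.asarray(ratings_a) < t) for t in thresholds]
        fb = [np.mean(np.asarray(ratings_b) < t) for t in thresholds]
        return np.trapz(fb, fa)   # area under the VGC curve (0.5 = no difference)
    ```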

  13. Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products

    NASA Astrophysics Data System (ADS)

    Williams, Don; Burns, Peter D.

    2007-01-01

    There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that suggest many existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, and image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance. These are driven by physical and economic constraints, and by image-capture conditions. Several ISO resolution standards, well established for consumer digital still cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity, and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.

  14. A simple 2D composite image analysis technique for the crystal growth study of L-ascorbic acid.

    PubMed

    Kumar, Krishan; Kumar, Virender; Lal, Jatin; Kaur, Harmeet; Singh, Jasbir

    2017-06-01

    This work was devoted to 2D crystal growth studies of L-ascorbic acid using the composite image analysis technique. Growth experiments on the L-ascorbic acid crystals were carried out by standard (optical) microscopy, laser diffraction analysis, and composite image analysis. For image analysis, the growth of L-ascorbic acid crystals was captured as digital 2D RGB images, which were then processed into composite images. After processing, the crystal boundaries emerged as white lines against the black (cancelled) background. The crystal boundaries were well differentiated by peaks in the intensity graphs generated for the composite images. The lengths of crystal boundaries measured from the intensity graphs of composite images were in good agreement (correlation coefficient r = 0.99) with the lengths measured by standard microscopy. On the contrary, the lengths measured by laser diffraction were poorly correlated with both techniques. Therefore, composite image analysis can replace the standard microscopy technique for crystal growth studies of L-ascorbic acid. © 2017 Wiley Periodicals, Inc.
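
    The boundary-length measurement from intensity graphs might be sketched as peak-picking on a profile across the composite image, where boundaries appear as bright white lines. The peak-height threshold and pixel scale below are assumptions, not the authors' procedure.

    ```python
    # Sketch of measuring a boundary length from a composite-image
    # intensity profile: boundaries show up as bright peaks, and the
    # distance between the outermost peaks gives the length.
    import numpy as np
    from scipy.signal import find_peaks

    def boundary_length(profile, min_height=200, pixel_size_um=1.0):
        """profile: 1-D intensity profile across a composite image."""
        peaks, _ = find_peaks(np.asarray(profile), height=min_height)
        if len(peaks) < 2:
            return None                      # fewer than two boundaries found
        return (peaks[-1] - peaks[0]) * pixel_size_um
    ```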

  15. Technology and Technique Standards for Camera-Acquired Digital Dermatologic Images: A Systematic Review.

    PubMed

    Quigley, Elizabeth A; Tokay, Barbara A; Jewell, Sarah T; Marchetti, Michael A; Halpern, Allan C

    2015-08-01

    Photographs are invaluable dermatologic diagnostic, management, research, teaching, and documentation tools. Digital Imaging and Communications in Medicine (DICOM) standards exist for many types of digital medical images, but there are no DICOM standards for camera-acquired dermatologic images to date. To identify and describe existing or proposed technology and technique standards for camera-acquired dermatologic images in the scientific literature. Systematic searches of the PubMed, EMBASE, and Cochrane databases were performed in January 2013 using photography and digital imaging, standardization, and medical specialty and medical illustration search terms and augmented by a gray literature search of 14 websites using Google. Two reviewers independently screened titles of 7371 unique publications, followed by 3 sequential full-text reviews, leading to the selection of 49 publications with the most recent (1985-2013) or detailed description of technology or technique standards related to the acquisition or use of images of skin disease (or related conditions). No universally accepted existing technology or technique standards for camera-based digital images in dermatology were identified. Recommendations are summarized for technology imaging standards, including spatial resolution, color resolution, reproduction (magnification) ratios, postacquisition image processing, color calibration, compression, output, archiving and storage, and security during storage and transmission. Recommendations are also summarized for technique imaging standards, including environmental conditions (lighting, background, and camera position), patient pose and standard view sets, and patient consent, privacy, and confidentiality. Proposed standards for specific-use cases in total body photography, teledermatology, and dermoscopy are described. The literature is replete with descriptions of obtaining photographs of skin disease, but universal imaging standards have not been developed, validated, and adopted to date. Dermatologic imaging is evolving without defined standards for camera-acquired images, leading to variable image quality and limited exchangeability. The development and adoption of universal technology and technique standards may first emerge in scenarios when image use is most associated with a defined clinical benefit.

  16. Enhanced coronary calcium visualization and detection from dual energy chest x-rays with sliding organ registration.

    PubMed

    Wen, Di; Nye, Katelyn; Zhou, Bo; Gilkeson, Robert C; Gupta, Amit; Ranim, Shiraz; Couturier, Spencer; Wilson, David L

    2018-03-01

    We have developed a technique to image coronary calcium, an excellent biomarker for atherosclerotic disease, using low cost, low radiation dual energy (DE) chest radiography, with potential for widespread screening from an already ordered exam. Our dual energy coronary calcium (DECC) processing method included automatic heart silhouette segmentation, sliding organ registration and scatter removal to create a bone-image-like, coronary calcium image with significant reduction in motion artifacts and improved calcium conspicuity compared to standard, clinically available DE processing. Experiments with a physical dynamic cardiac phantom showed that DECC processing reduced 73% of misregistration error caused by cardiac motion over a wide range of heart rates and x-ray radiation exposures. Using the functional measurement test (FMT), we determined significant image quality improvement in clinical images with DECC processing (p < 0.0001), where DECC images were chosen best in 94% of human readings. Comparing DECC images to registered and projected CT calcium images, we found good correspondence between the size and location of calcification signals. In a very preliminary coronary calcium ROC study, we used CT Agatston calcium score >50 as the gold standard for an actual positive test result. AUC performance was significantly improved from 0.73 ± 0.14 with standard DE to 0.87 ± 0.10 with DECC (p = 0.0095) for this limited set of surgical patient data biased towards heavy calcifications. The proposed DECC processing shows good potential for coronary calcium detection in DE chest radiography, giving impetus for a larger clinical evaluation. Copyright © 2018. Published by Elsevier Ltd.
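
    DECC processing builds on standard dual-energy log subtraction, in which a weighted difference of the log-transformed low- and high-kVp exposures cancels soft tissue and leaves a bone-image-like calcium signal; the segmentation, sliding-organ registration, and scatter removal described above are refinements on top of it. A minimal sketch, with the cancellation weight w as an illustrative assumption:

    ```python
    # Sketch of dual-energy weighted log subtraction, the starting point
    # for bone/calcium imaging. Inputs are assumed registered; w is a
    # tissue-cancellation weight chosen so soft tissue cancels.
    import numpy as np

    def de_bone_image(low_kvp, high_kvp, w=0.5, eps=1e-6):
        """low_kvp, high_kvp: registered raw exposures (same shape);
        returns a bone-image-like log-subtraction image."""
        return np.log(high_kvp + eps) - w * np.log(low_kvp + eps)
    ```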

  17. Normative Databases for Imaging Instrumentation.

    PubMed

    Realini, Tony; Zangwill, Linda M; Flanagan, John G; Garway-Heath, David; Patella, Vincent M; Johnson, Chris A; Artes, Paul H; Gaddie, Ian B; Fingeret, Murray

    2015-08-01

    To describe the process by which imaging devices undergo reference database development and regulatory clearance. The limitations and potential improvements of reference (normative) data sets for ophthalmic imaging devices will be discussed. A symposium was held in July 2013 in which a series of speakers discussed issues related to the development of reference databases for imaging devices. Automated imaging has become widely accepted and used in glaucoma management. The ability of such instruments to discriminate healthy from glaucomatous optic nerves, and to detect glaucomatous progression over time is limited by the quality of reference databases associated with the available commercial devices. In the absence of standardized rules governing the development of reference databases, each manufacturer's database differs in size, eligibility criteria, and ethnic make-up, among other key features. The process for development of imaging reference databases may be improved by standardizing eligibility requirements and data collection protocols. Such standardization may also improve the degree to which results may be compared between commercial instruments.

  18. Normative Databases for Imaging Instrumentation

    PubMed Central

    Realini, Tony; Zangwill, Linda; Flanagan, John; Garway-Heath, David; Patella, Vincent Michael; Johnson, Chris; Artes, Paul; Ben Gaddie, I.; Fingeret, Murray

    2015-01-01

    Purpose To describe the process by which imaging devices undergo reference database development and regulatory clearance. The limitations and potential improvements of reference (normative) data sets for ophthalmic imaging devices will be discussed. Methods A symposium was held in July 2013 in which a series of speakers discussed issues related to the development of reference databases for imaging devices. Results Automated imaging has become widely accepted and used in glaucoma management. The ability of such instruments to discriminate healthy from glaucomatous optic nerves, and to detect glaucomatous progression over time is limited by the quality of reference databases associated with the available commercial devices. In the absence of standardized rules governing the development of reference databases, each manufacturer’s database differs in size, eligibility criteria, and ethnic make-up, among other key features. Conclusions The process for development of imaging reference databases may be improved by standardizing eligibility requirements and data collection protocols. Such standardization may also improve the degree to which results may be compared between commercial instruments. PMID:25265003

  19. Neural Substrates for Processing Task-Irrelevant Sad Images in Adolescents

    ERIC Educational Resources Information Center

    Wang, Lihong; Huettel, Scott; De Bellis, Michael D.

    2008-01-01

    Neural systems related to cognitive and emotional processing were examined in adolescents using event-related functional magnetic resonance imaging (fMRI). Ten healthy adolescents performed an emotional oddball task. Subjects detected infrequent circles (targets) within a continual stream of phase-scrambled images (standards). Sad and neutral…

  20. The FBI compression standard for digitized fingerprint images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  1. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  2. Quantitative imaging biomarker ontology (QIBO) for knowledge representation of biomedical imaging biomarkers.

    PubMed

    Buckler, Andrew J; Liu, Tiffany Ting; Savig, Erica; Suzek, Baris E; Ouellette, M; Danagoulian, J; Wernsing, G; Rubin, Daniel L; Paik, David

    2013-08-01

    A widening array of novel imaging biomarkers is being developed using ever more powerful clinical and preclinical imaging modalities. These biomarkers have demonstrated effectiveness in quantifying biological processes as they occur in vivo and in the early prediction of therapeutic outcomes. However, quantitative imaging biomarker data and knowledge are not standardized, representing a critical barrier to accumulating medical knowledge based on quantitative imaging data. We use an ontology to represent, integrate, and harmonize heterogeneous knowledge across the domain of imaging biomarkers. This advances the goal of developing applications to (1) improve precision and recall of storage and retrieval of quantitative imaging-related data using standardized terminology; (2) streamline the discovery and development of novel imaging biomarkers by normalizing knowledge across heterogeneous resources; (3) effectively annotate imaging experiments thus aiding comprehension, re-use, and reproducibility; and (4) provide validation frameworks through rigorous specification as a basis for testable hypotheses and compliance tests. We have developed the Quantitative Imaging Biomarker Ontology (QIBO), which currently consists of 488 terms spanning the following upper classes: experimental subject, biological intervention, imaging agent, imaging instrument, image post-processing algorithm, biological target, indicated biology, and biomarker application. We have demonstrated that QIBO can be used to annotate imaging experiments with standardized terms in the ontology and to generate hypotheses for novel imaging biomarker-disease associations. Our results established the utility of QIBO in enabling integrated analysis of quantitative imaging data.

  3. Integration of image capture and processing: beyond single-chip digital camera

    NASA Astrophysics Data System (ADS)

    Lim, SukHwan; El Gamal, Abbas

    2001-05-01

    An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications such as multiple capture for enhancing dynamic range and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated to perform not only the functions of a conventional camera system but also applications such as real-time optical flow estimation.
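    The multiple-capture idea mentioned above can be illustrated with a minimal exposure-merging step: where the long exposure saturates, fall back on the short exposure scaled by the exposure ratio. The sketch below is a simplified illustration under assumed normalized inputs and an assumed saturation level; it is not the on-chip algorithm described by the authors.

      import numpy as np

      def merge_dual_exposure(short_img, long_img, exposure_ratio, sat_level=0.95):
          # Inputs are floats normalized to [0, 1]; sat_level is an assumed
          # saturation threshold. Keep the less noisy long exposure where it
          # is valid, and the radiometrically scaled short exposure elsewhere.
          saturated = long_img >= sat_level
          return np.where(saturated, short_img * exposure_ratio, long_img)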

  4. Optimization of dual-energy subtraction chest radiography by use of a direct-conversion flat-panel detector system.

    PubMed

    Fukao, Mari; Kawamoto, Kiyosumi; Matsuzawa, Hiroaki; Honda, Osamu; Iwaki, Takeshi; Doi, Tsukasa

    2015-01-01

    We aimed to optimize the exposure conditions in the acquisition of soft-tissue images using dual-energy subtraction chest radiography with a direct-conversion flat-panel detector system. Two separate chest images were acquired at high- and low-energy exposures with standard or thick chest phantoms. The high-energy exposure was fixed at 120 kVp with the use of an auto-exposure control technique. For the low-energy exposure, the tube voltage ranged from 40 to 80 kVp and the entrance surface dose from 20% to 100% of the dose required for the high-energy exposure. Further, a repetitive processing algorithm was used for reduction of the image noise generated by the subtraction process. Seven radiology technicians ranked the soft-tissue images, and these results were analyzed using the normalized-rank method. Images acquired at 60 kVp were of acceptable quality regardless of the entrance surface dose and phantom size. Using the repetitive processing algorithm, the minimum acceptable dose was reduced from 75% to 40% for the standard phantom and to 50% for the thick phantom. We determined that the optimum low-energy exposure was 60 kVp at 50% of the dose required for the high-energy exposure. This allowed the simultaneous acquisition of standard radiographs and soft-tissue images at 1.5 times the dose required for a standard radiograph, which is significantly lower than the values reported previously.

  5. Radiation dose reduction in digital breast tomosynthesis (DBT) by means of deep-learning-based supervised image processing

    NASA Astrophysics Data System (ADS)

    Liu, Junchi; Zarshenas, Amin; Qadir, Ammar; Wei, Zheng; Yang, Limin; Fajardo, Laurie; Suzuki, Kenji

    2018-03-01

    To reduce cumulative radiation exposure and lifetime risks for radiation-induced cancer from breast cancer screening, we developed a deep-learning-based supervised image-processing technique called neural network convolution (NNC) for radiation dose reduction in DBT. NNC employed patch-based neural network regression in a convolutional manner to convert lower-dose (LD) to higher-dose (HD) tomosynthesis images. We trained our NNC with quarter-dose (25% of the standard dose: 12 mAs at 32 kVp) raw projection images and corresponding "teaching" higher-dose (HD) images (200% of the standard dose: 99 mAs at 32 kVp) of a breast cadaver phantom acquired with a DBT system (Selenia Dimensions, Hologic, CA). Once trained, NNC no longer requires HD images. It converts new LD images to images that look like HD images; thus the term "virtual" HD (VHD) images. We reconstructed tomosynthesis slices on a research DBT system. To determine a dose reduction rate, we acquired 4 studies of another test phantom at 4 different radiation doses (1.35, 2.7, 4.04, and 5.39 mGy entrance dose). The Structural SIMilarity (SSIM) index was used to evaluate the image quality. For testing, we collected half-dose (50% of the standard dose: 32+/-14 mAs at 33+/-5 kVp) and full-dose (standard dose: 68+/-23 mAs at 33+/-5 kVp) images of 10 clinical cases with the DBT system at University of Iowa Hospitals and Clinics. NNC converted half-dose DBT images of the 10 clinical cases to VHD DBT images that were equivalent to full-dose DBT images. Our cadaver phantom experiment demonstrated a 79% dose reduction.
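    SSIM, the metric used above to assess VHD image quality, combines luminance, contrast, and structure terms. A minimal single-window (global) version with the standard constants K1 = 0.01 and K2 = 0.03 is sketched below; practical evaluations usually apply it over a sliding local window instead.

      import numpy as np

      def global_ssim(x, y, data_range=1.0):
          # Single-window SSIM over the whole image (no local windowing).
          x = x.astype(np.float64)
          y = y.astype(np.float64)
          c1 = (0.01 * data_range) ** 2
          c2 = (0.03 * data_range) ** 2
          mu_x, mu_y = x.mean(), y.mean()
          var_x, var_y = x.var(), y.var()
          cov_xy = ((x - mu_x) * (y - mu_y)).mean()
          return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
                 (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))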

  6. WE-G-BRD-07: Automated MR Image Standardization and Auto-Contouring Strategy for MRI-Based Adaptive Brachytherapy for Cervix Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleh, H Al; Erickson, B; Paulson, E

    Purpose: MRI-based adaptive brachytherapy (ABT) is an emerging treatment modality for patients with gynecological tumors. However, MR image intensity non-uniformities (IINU) can vary from fraction to fraction, complicating image interpretation and auto-contouring accuracy. We demonstrate here an automated MR image standardization and auto-contouring strategy for MRI-based ABT of cervix cancer. Methods: MR image standardization consisted of: 1) IINU correction using the MNI N3 algorithm, 2) noise filtering using anisotropic diffusion, and 3) signal intensity normalization using the volumetric median. This post-processing chain was implemented as a series of custom Matlab and Java extensions in MIM (v6.4.5, MIM Software) and was applied to 3D T2 SPACE images of six patients undergoing MRI-based ABT at 3T. Coefficients of variation (CV=σ/µ) were calculated for both original and standardized images and compared using Mann-Whitney tests. Patient-specific cumulative MR atlases of bladder, rectum, and sigmoid contours were constructed throughout ABT, using original and standardized MR images from all previous ABT fractions. Auto-contouring was performed in MIM two ways: 1) best-match of one atlas image to the daily MR image, 2) multi-match of all previous fraction atlas images to the daily MR image. Dice’s Similarity Coefficients (DSCs) were calculated for auto-generated contours relative to reference contours for both original and standardized MR images and compared using Mann-Whitney tests. Results: Significant improvements in CV were detected following MR image standardization (p=0.0043), demonstrating an improvement in MR image uniformity. DSCs consistently increased for auto-contoured bladder, rectum, and sigmoid following MR image standardization, with the highest DSCs detected when the combination of MR image standardization and multi-match cumulative atlas-based auto-contouring was utilized. Conclusion: MR image standardization significantly improves MR image uniformity. The combination of MR image standardization and multi-match cumulative atlas-based auto-contouring produced the highest DSCs and is a promising strategy for MRI-based ABT for cervix cancer.
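    The two quantities reported above, the coefficient of variation (CV = σ/µ) used to assess intensity uniformity and Dice's Similarity Coefficient used to score the auto-contours, are straightforward to compute; a minimal NumPy sketch with assumed array inputs follows.

      import numpy as np

      def coefficient_of_variation(image, mask=None):
          # CV = sigma / mu over the (optionally masked) image volume.
          vals = image[mask] if mask is not None else image.ravel()
          return vals.std() / vals.mean()

      def dice_similarity(auto_contour, reference_contour):
          # DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks.
          a = auto_contour.astype(bool)
          b = reference_contour.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())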

  7. Image acquisition context: procedure description attributes for clinically relevant indexing and selective retrieval of biomedical images.

    PubMed

    Bidgood, W D; Bray, B; Brown, N; Mori, A R; Spackman, K A; Golichowski, A; Jones, R H; Korman, L; Dove, B; Hildebrand, L; Berg, M

    1999-01-01

    To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. The authors introduce the notion of "image acquisition context," the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries.

  8. Using normalization 3D model for automatic clinical brain quantitative analysis and evaluation

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping

    2003-05-01

    Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain, and has been broadly used in diagnosing brain disorders by clinically quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important in improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied for realigning functional medical images to standard structural medical images. Then, the standard 3D brain model, which shows well-defined brain regions, was used to replace the manual ROIs in the objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method are in agreement with the clinical diagnosis evaluation score, with less than 3% error on average. In summary, the method obtains precise VOI information automatically from the well-defined standard 3D brain model, sparing the manual, slice-by-slice drawing of ROIs on structural medical images required by the traditional procedure. The method therefore not only provides precise analysis results, but also improves throughput for the large volumes of images encountered in clinical practice.
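    Mutual information registration, the alignment step named above, maximizes the statistical dependence between the intensities of the two images. A minimal histogram-based estimate of mutual information for two already-sampled image arrays is sketched below; a registration routine would evaluate it repeatedly while optimizing the transform, which is not shown.

      import numpy as np

      def mutual_information(img_a, img_b, bins=64):
          # Histogram-based mutual information between two aligned images.
          hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          pxy = hist_2d / hist_2d.sum()
          px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
          py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
          nz = pxy > 0
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))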

  9. Translational Imaging Spectroscopy for Proximal Sensing

    PubMed Central

    Rogass, Christian; Koerting, Friederike M.; Mielke, Christian; Brell, Maximilian; Boesche, Nina K.; Bade, Maria; Hohmann, Christian

    2017-01-01

    Proximal sensing as the near field counterpart of remote sensing offers a broad variety of applications. Imaging spectroscopy in general and translational laboratory imaging spectroscopy in particular can be utilized for a variety of different research topics. Geoscientific applications require a precise pre-processing of hyperspectral data cubes to retrieve at-surface reflectance in order to conduct spectral feature-based comparison of unknown sample spectra to known library spectra. A new pre-processing chain called GeoMAP-Trans for at-surface reflectance retrieval is proposed here as an analogue to other algorithms published by the team of authors. It consists of a radiometric, a geometric and a spectral module. Each module consists of several processing steps that are described in detail. The processing chain was adapted to the broadly used HySPEX VNIR/SWIR imaging spectrometer system and tested using geological mineral samples. The performance was subjectively and objectively evaluated using standard artificial image quality metrics and comparative measurements of mineral and Lambertian diffuser standards with standard field and laboratory spectrometers. The proposed algorithm provides highly qualitative results, offers broad applicability through its generic design and might be the first one of its kind to be published. A high radiometric accuracy is achieved by the incorporation of the Reduction of Miscalibration Effects (ROME) framework. The geometric accuracy is higher than 1 μpixel. The critical spectral accuracy was relatively estimated by comparing spectra of standard field spectrometers to those from HySPEX for a Lambertian diffuser. The achieved spectral accuracy is better than 0.02% for the full spectrum and better than 98% for the absorption features. It was empirically shown that point and imaging spectrometers provide different results for non-Lambertian samples due to their different sensing principles, adjacency scattering impacts on the signal and anisotropic surface reflection properties. PMID:28800111

  10. Symmetric Phase Only Filtering for Improved DPIV Data Processing

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    2006-01-01

    The standard approach in Digital Particle Image Velocimetry (DPIV) data processing is to use Fast Fourier Transforms to obtain the cross-correlation of two single exposure subregions, where the location of the cross-correlation peak is representative of the most probable particle displacement across the subregion. This standard DPIV processing technique is analogous to Matched Spatial Filtering, a technique commonly used in optical correlators to perform the cross-correlation operation. Phase only filtering is a well known variation of Matched Spatial Filtering, which when used to process DPIV image data yields correlation peaks which are narrower and up to an order of magnitude larger than those obtained using traditional DPIV processing. In addition to possessing desirable correlation plane features, phase only filters also provide superior performance in the presence of DC noise in the correlation subregion. When DPIV image subregions contaminated with surface flare light or high background noise levels are processed using phase only filters, the correlation peak pertaining only to the particle displacement is readily detected above any signal stemming from the DC objects. Tedious image masking or background image subtraction are not required. Both theoretical and experimental analyses of the signal-to-noise ratio performance of the filter functions are presented. In addition, a new Symmetric Phase Only Filtering (SPOF) technique, which is a variation on the traditional phase only filtering technique, is described and demonstrated. The SPOF technique exceeds the performance of the traditionally accepted phase only filtering techniques and is easily implemented in standard DPIV FFT based correlation processing with no significant computational performance penalty. An "Automatic" SPOF algorithm is presented which determines when the SPOF is able to provide better signal to noise results than traditional PIV processing. The SPOF-based optical correlation processing approach is presented as a new paradigm for more robust cross-correlation processing of low signal-to-noise ratio DPIV image data.
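    The difference between standard, phase-only, and symmetric phase-only correlation lies entirely in how the cross spectrum is normalized before the inverse FFT. The NumPy sketch below shows plain and phase-only correlation; the "spof" branch, which divides by the square root of the product of the spectral magnitudes, is an assumption about the symmetric filter's form rather than a transcription of the paper.

      import numpy as np

      def correlation_plane(sub_a, sub_b, mode="phase"):
          # Cross-correlate two DPIV subregions via the FFT.
          fa = np.fft.fft2(sub_a)
          fb = np.fft.fft2(sub_b)
          cross = fa * np.conj(fb)
          eps = 1e-12
          if mode == "phase":                      # phase-only filtering
              cross = cross / (np.abs(cross) + eps)
          elif mode == "spof":                     # assumed symmetric variant
              cross = cross / (np.sqrt(np.abs(fa) * np.abs(fb)) + eps)
          plane = np.real(np.fft.ifft2(cross))
          return np.fft.fftshift(plane)

      # the most probable particle displacement is the offset of the correlation
      # peak from the subregion center, e.g. np.unravel_index(plane.argmax(), plane.shape)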

  11. The Classification of Tongue Colors with Standardized Acquisition and ICC Profile Correction in Traditional Chinese Medicine

    PubMed Central

    Tu, Li-ping; Chen, Jing-bo; Hu, Xiao-juan; Zhang, Zhi-feng

    2016-01-01

    Background and Goal. The application of digital image processing techniques and machine learning methods in tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied nowadays. However, it is difficult for the outcomes to generalize because of lack of color reproducibility and image standardization. Our study aims at the exploration of tongue colors classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures taken by the TDA-1 tongue imaging device in TIFF format through ICC profile correction. Then we compare the mean value of L*a*b* of different tongue colors and evaluate the effect of the tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. Random forest method has a better performance than SVM in classification. SMOTE algorithm can increase classification accuracy by solving the imbalance of the varied color samples. Conclusions. At the premise of standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible. PMID:28050555

  12. The Classification of Tongue Colors with Standardized Acquisition and ICC Profile Correction in Traditional Chinese Medicine.

    PubMed

    Qi, Zhen; Tu, Li-Ping; Chen, Jing-Bo; Hu, Xiao-Juan; Xu, Jia-Tuo; Zhang, Zhi-Feng

    2016-01-01

    Background and Goal. The application of digital image processing techniques and machine learning methods in tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied nowadays. However, it is difficult for the outcomes to generalize because of lack of color reproducibility and image standardization. Our study aims at the exploration of tongue colors classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures taken by the TDA-1 tongue imaging device in TIFF format through ICC profile correction. Then we compare the mean value of L*a*b* of different tongue colors and evaluate the effect of the tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. Random forest method has a better performance than SVM in classification. SMOTE algorithm can increase classification accuracy by solving the imbalance of the varied color samples. Conclusions. At the premise of standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible.
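    A minimal version of the classification step, training a random forest on per-image mean L*a*b* values, is sketched below with synthetic data standing in for the expert-labeled tongue images; the SMOTE resampling mentioned above is omitted (it is provided by the separate imbalanced-learn package).

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      # synthetic stand-in data: mean L*, a*, b* per tongue image plus an
      # expert-assigned color class (5 classes), purely for illustration
      rng = np.random.default_rng(1)
      X = rng.normal(loc=[60.0, 20.0, 10.0], scale=5.0, size=(100, 3))
      y = rng.integers(0, 5, size=100)

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())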

  13. Image Acquisition Context

    PubMed Central

    Bidgood, W. Dean; Bray, Bruce; Brown, Nicolas; Mori, Angelo Rossi; Spackman, Kent A.; Golichowski, Alan; Jones, Robert H.; Korman, Louis; Dove, Brent; Hildebrand, Lloyd; Berg, Michael

    1999-01-01

    Objective: To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. Design: The authors introduce the notion of “image acquisition context,” the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. Methods: The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. Results: The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries. PMID:9925229

  14. Analyses of requirements for computer control and data processing experiment subsystems. Volume 2: ATM experiment S-056 image data processing system software development

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The IDAPS (Image Data Processing System) is a user-oriented, computer-based, language and control system, which provides a framework or standard for implementing image data processing applications, simplifies set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation, streamlines operation of the image processing facility, and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.

  15. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We improved standard bat algorithm, where our modifications add some elements from the differential evolution and from the artificial bee colony algorithm. Our new proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving quality of results in all cases and significantly improving convergence speed. PMID:25165733
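    Metaheuristics such as the bat algorithm are used here to maximize a multilevel thresholding objective without the exponential cost of exhaustive search. The sketch below shows the between-class-variance objective and, as a small-scale reference, an exhaustive search over two thresholds; the bat algorithm itself is not reproduced here.

      import numpy as np
      from itertools import combinations

      def between_class_variance(hist, thresholds):
          # Otsu-style objective that multilevel thresholding searches maximize.
          p = hist / hist.sum()
          levels = np.arange(len(hist))
          edges = [0, *sorted(thresholds), len(hist)]
          total_mean = (p * levels).sum()
          var = 0.0
          for lo, hi in zip(edges[:-1], edges[1:]):
              w = p[lo:hi].sum()
              if w > 0:
                  mu = (p[lo:hi] * levels[lo:hi]).sum() / w
                  var += w * (mu - total_mean) ** 2
          return var

      def exhaustive_two_thresholds(gray_image):
          # Brute-force reference for 2 thresholds; swarm methods replace this
          # search when more thresholds make enumeration impractical.
          hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
          return max(combinations(range(1, 256), 2),
                     key=lambda t: between_class_variance(hist, t))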

  16. Low level image processing techniques using the pipeline image processing engine in the flight telerobotic servicer

    NASA Technical Reports Server (NTRS)

    Nashman, Marilyn; Chaconas, Karen J.

    1988-01-01

    The sensory processing system for the NASA/NBS Standard Reference Model (NASREM) for telerobotic control is described. This control system architecture was adopted by NASA for the Flight Telerobotic Servicer. The control system is hierarchically designed and consists of three parallel systems: task decomposition, world modeling, and sensory processing. The Sensory Processing System is examined; in particular, the image processing hardware and software used to extract features at low levels of sensory processing are described for tasks representative of those envisioned for the Space Station, such as assembly and maintenance.

  17. Improving the scalability of hyperspectral imaging applications on heterogeneous platforms using adaptive run-time data compression

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Plaza, Javier; Paz, Abel

    2010-10-01

    Latest generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we have reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve the scalability and parallel performance. In this paper, we propose a new framework based on intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.

  18. Effect of image quality on calcification detection in digital mammography

    PubMed Central

    Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Chakraborty, Dev P.; Dance, David R.; Bosmans, Hilde; Young, Kenneth C.

    2012-01-01

    Purpose: This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including: different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. Methods: One hundred and sixty two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at quarter this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. Results: There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC (AFROC) area decreased from 0.84 to 0.63 and the ROC area decreased from 0.91 to 0.79 (p < 0.0001). This corresponded to a 30% drop in lesion sensitivity at a NLF equal to 0.1. Detection was also sensitive to the dose used. There was no significant difference in detection between the two image processing algorithms used (p > 0.05). It was additionally found that lower threshold gold thickness from CDMAM analysis implied better cluster detection. The measured threshold gold thickness passed the acceptable limit set in the EU standards for all image qualities except half dose CR. However, calcification detection varied significantly between image qualities. This suggests that the current EU guidelines may need revising. Conclusions: Microcalcification detection was found to be sensitive to detector and dose used. Standard measurements of image quality were a good predictor of microcalcification cluster detection. PMID:22755704

  19. Effect of image quality on calcification detection in digital mammography.

    PubMed

    Warren, Lucy M; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M; Wallis, Matthew G; Chakraborty, Dev P; Dance, David R; Bosmans, Hilde; Young, Kenneth C

    2012-06-01

    This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including: different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. One hundred and sixty two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at quarter this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC (AFROC) area decreased from 0.84 to 0.63 and the ROC area decreased from 0.91 to 0.79 (p < 0.0001). This corresponded to a 30% drop in lesion sensitivity at a NLF equal to 0.1. Detection was also sensitive to the dose used. There was no significant difference in detection between the two image processing algorithms used (p > 0.05). It was additionally found that lower threshold gold thickness from CDMAM analysis implied better cluster detection. The measured threshold gold thickness passed the acceptable limit set in the EU standards for all image qualities except half dose CR. However, calcification detection varied significantly between image qualities. This suggests that the current EU guidelines may need revising. Microcalcification detection was found to be sensitive to detector and dose used. Standard measurements of image quality were a good predictor of microcalcification cluster detection. © 2012 American Association of Physicists in Medicine.
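    The final analysis step above fits a power law relating CDMAM threshold gold thickness to calcification detection. A minimal curve-fitting sketch with SciPy is shown below; the numbers are illustrative placeholders, not the study's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def power_law(t, a, b):
          # detection metric modeled as a * t**b of threshold gold thickness t
          return a * np.power(t, b)

      # illustrative placeholder data (threshold thickness in um vs. AFROC area)
      thickness = np.array([0.08, 0.10, 0.13, 0.16, 0.21])
      afroc_area = np.array([0.86, 0.82, 0.76, 0.70, 0.63])

      (a_fit, b_fit), _ = curve_fit(power_law, thickness, afroc_area, p0=(0.3, -0.4))
      print("fitted power law: a = %.3f, b = %.3f" % (a_fit, b_fit))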

  20. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform.

    PubMed

    Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N

    2017-03-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.

  1. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform

    PubMed Central

    Moutsatsos, Ioannis K.; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J.; Jenkins, Jeremy L.; Holway, Nicholas; Tallarico, John; Parker, Christian N.

    2016-01-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an “off-the-shelf,” open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community. PMID:27899692

  2. Automating PACS quality control with the Vanderbilt image processing enterprise resource

    NASA Astrophysics Data System (ADS)

    Esparza, Michael L.; Welch, E. Brian; Landman, Bennett A.

    2012-02-01

    Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption; for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet, substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads the scans or an automated algorithm fails. Given scores of slices in typical three-dimensional scans and the wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies; these systems are not readily compatible with research systems due to security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines across an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high-performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months.

  3. Effect of image quality on calcification detection in digital mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie

    Purpose: This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including: different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. Methods: One hundred and sixty two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at quarter this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. Results: There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC (AFROC) area decreased from 0.84 to 0.63 and the ROC area decreased from 0.91 to 0.79 (p < 0.0001). This corresponded to a 30% drop in lesion sensitivity at a NLF equal to 0.1. Detection was also sensitive to the dose used. There was no significant difference in detection between the two image processing algorithms used (p > 0.05). It was additionally found that lower threshold gold thickness from CDMAM analysis implied better cluster detection. The measured threshold gold thickness passed the acceptable limit set in the EU standards for all image qualities except half dose CR. However, calcification detection varied significantly between image qualities. This suggests that the current EU guidelines may need revising. Conclusions: Microcalcification detection was found to be sensitive to detector and dose used. Standard measurements of image quality were a good predictor of microcalcification cluster detection.

  4. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential process that separates an image into regions with similar characteristics or features, transforming the image for better analysis and evaluation. An important benefit of segmentation is the identification of regions of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport, and medical images. It scans all pixels in the image and clusters each pixel according to its upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm merges a pixel with similar neighbors based on a fixed threshold, an approach that leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function for the clustering process of the Fast Scanning algorithm. The function is derived from the gray values of the image's pixels and their variance; pixel values above the threshold are converted into intensity values between 0 and 1, and the remaining values are converted to zero. The proposed enhanced Fast Scanning algorithm is applied to images of public and private transportation in Iraq, and evaluation is made by comparing the images produced by the proposed algorithm and by the standard Fast Scanning algorithm. The results showed that the proposed algorithm is faster than standard Fast Scanning.
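    The clustering idea behind Fast Scanning, assigning each pixel to the upper or left cluster whose running mean is within a threshold and starting a new cluster otherwise, can be sketched in a few lines. The version below is a simplified illustration (it omits the merging of the upper and left clusters and uses a fixed rather than adaptive threshold), not the authors' enhanced algorithm.

      import numpy as np

      def fast_scanning_segment(gray, threshold=20.0):
          h, w = gray.shape
          labels = np.zeros((h, w), dtype=int)
          sums, counts = {}, {}
          next_label = 1
          for i in range(h):
              for j in range(w):
                  v = float(gray[i, j])
                  assigned = 0
                  for ni, nj in ((i - 1, j), (i, j - 1)):   # upper, left neighbors
                      if ni >= 0 and nj >= 0:
                          lab = labels[ni, nj]
                          if abs(v - sums[lab] / counts[lab]) <= threshold:
                              assigned = lab
                              break
                  if assigned == 0:                          # start a new cluster
                      assigned = next_label
                      next_label += 1
                      sums[assigned], counts[assigned] = 0.0, 0
                  labels[i, j] = assigned
                  sums[assigned] += v
                  counts[assigned] += 1
          return labels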

  5. Study Methods to Standardize Thermography NDE

    NASA Technical Reports Server (NTRS)

    Walker, James L.; Workman, Gary L.

    1998-01-01

    The purpose of this work is to develop thermographic inspection methods and standards for use in evaluating structural composites and aerospace hardware. Qualification techniques and calibration methods are investigated to standardize the thermographic method for use in the field. Along with the inspections of test standards structural hardware, support hardware is designed and fabricated to aid in the thermographic process. Also, a standard operating procedure is developed for performing inspections with the Bales Thermal Image Processor (TIP). Inspections are performed on a broad range of structural composites. These materials include various graphite/epoxies, graphite/cyanide-ester, graphite/silicon-carbide, graphite phenolic and Kevlar/epoxy. Also metal honeycomb (titanium and aluminum faceplates over an aluminum honeycomb core) structures are investigated. Various structural shapes are investigated and the thickness of the structures vary from as few as 3 plies to as many as 80 plies. Special emphasis is placed on characterizing defects in attachment holes and bondlines, in addition to those resulting from impact damage and the inclusion of foreign matter. Image processing through statistical analysis and digital filtering is investigated to enhance the quality and quantify the NDE thermal images when necessary.

  6. Study Methods to Standardize Thermography NDE

    NASA Technical Reports Server (NTRS)

    Walker, James L.; Workman, Gary L.

    1998-01-01

    The purpose of this work is to develop thermographic inspection methods and standards for use in evaluating structural composites and aerospace hardware. Qualification techniques and calibration methods are investigated to standardize the thermographic method for use in the field. Along with the inspections of test standards structural hardware, support hardware is designed and fabricated to aid in the thermographic process. Also, a standard operating procedure is developed for performing inspections with the Bales Thermal Image Processor (TIP). Inspections are performed on a broad range of structural composites. These materials include graphite/epoxies, graphite/cyanide-ester, graphite/silicon-carbide, graphite phenolic and Kevlar/epoxy. Also metal honeycomb (titanium and aluminum faceplates over an aluminum honeycomb core) structures are investigated. Various structural shapes are investigated and the thickness of the structures vary from as few as 3 plies to as many as 80 plies. Special emphasis is placed on characterizing defects in attachment holes and bondlines, in addition to those resulting from impact damage and the inclusion of foreign matter. Image processing through statistical analysis and digital filtering is investigated to enhance the quality and quantify the NDE thermal images when necessary.

  7. Research on oral test modeling based on multi-feature fusion

    NASA Astrophysics Data System (ADS)

    Shi, Yuliang; Tao, Yiyue; Lei, Jun

    2018-04-01

    In this paper, the spectrogram of the speech signal is taken as the input for feature extraction. The strength of PCNN in image segmentation and related processing is exploited to process the speech spectrogram and extract features, so that a new method combining speech signal processing with image processing is explored. In addition to the spectrogram features, MFCC-based spectral features are established and fused with the spectrogram features to further improve the accuracy of spoken-language recognition. Because the fused input features are more complex and more discriminative, a Support Vector Machine (SVM) is used to construct the classifier, and the extracted features of the test speech are then compared with the standard speech features to assess how standard the spoken test is. Experiments show that extracting features from spectrograms using PCNN is feasible, and that the fusion of image features and spectral features can improve the detection accuracy.

  8. Targeting Cell Surface Proteins in Molecular Photoacoustic Imaging to Detect Ovarian Cancer Early

    DTIC Science & Technology

    2013-07-01

    biology, nanotechnology, and imaging technology, molecular imaging utilizes specific probes as contrast agents to visualize cellular processes at the...This reagent was covalently coupled to the oligosaccharides attached to polypeptide side-chains of extracellular membrane proteins on living cells...website. The normal tissue gene expression profile dataset was modified and processed as described by Fang (8) and mean intensities and standard

  9. Process for guidance, containment, treatment, and imaging in a subsurface environment utilizing ferro-fluids

    DOEpatents

    Moridis, George J.; Oldenburg, Curtis M.

    2001-01-01

    Disclosed are processes for monitoring and control of underground contamination, which involve the application of ferrofluids. Two broad uses of ferrofluids are described: (1) to control liquid movement by the application of strong external magnetic fields; and (2) to image liquids by standard geophysical methods.

  10. DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research.

    PubMed

    Fedorov, Andriy; Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R

    2016-01-01

    Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM(®)) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.
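    One of the processing steps named above, Standardized Uptake Value (SUV) normalization, rescales PET voxel activity by the injected dose per unit body weight. A minimal body-weight SUV sketch is given below; decay correction and the DICOM attribute handling (radiopharmaceutical dose, patient weight, acquisition times) are assumed to have been done upstream.

      import numpy as np

      def suv_body_weight(activity_bq_per_ml, injected_dose_bq, body_weight_kg):
          # Body-weight SUV: tissue activity concentration divided by the
          # injected dose per gram of body weight (units work out to g/ml).
          dose_per_gram = injected_dose_bq / (body_weight_kg * 1000.0)
          return activity_bq_per_ml / dose_per_gram

      # e.g. a 20 kBq/ml voxel, 370 MBq injected, 70 kg patient -> SUV ~ 3.8
      suv_map = suv_body_weight(np.array([20e3]), 370e6, 70.0)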

  11. A real time quality control application for animal production by image processing.

    PubMed

    Sungur, Cemil; Özkan, Halil

    2015-11-01

    Standards of hygiene and health are of major importance in food production, and quality control has become obligatory in this field. Rapidly developing technologies now make automatic and safe quality control of food production possible. For this purpose, image-processing-based quality control systems are employed in industrial applications to analyze the quality of food products. In this study, quality control of chicken (Gallus domesticus) eggs was achieved using a real-time image-processing technique. In order to execute the quality control processes, a conveying mechanism was used. Eggs passing on a conveyor belt were continuously photographed in real time by cameras located above the belt. The images obtained were processed by various methods and techniques. Using digital instrumentation, the volume of the eggs was measured, broken/cracked eggs were separated and dirty eggs were identified. In accordance with international standards for classifying the quality of eggs, the class of separated eggs was determined through a fuzzy implication model. According to tests carried out on thousands of eggs, a quality control process with an accuracy of 98% was achieved. © 2014 Society of Chemical Industry.
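    As a rough illustration of the kind of per-frame check such a system performs, the sketch below flags dark blotches (dirt) and dense thin edges (cracks) on the shell. All thresholds and names are hypothetical placeholders; the paper's actual classification uses a fuzzy implication model rather than hard cut-offs.

```python
import cv2
import numpy as np

def inspect_egg(image_bgr: np.ndarray) -> str:
    """Toy single-frame egg check: flag dirt (dark blotches) and
    cracks (strong thin edges) on an otherwise bright shell."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    shell = gray > 60                        # crude egg-vs-belt mask
    dirt_ratio = np.mean(gray[shell] < 120)  # dark pixels on the shell
    edges = cv2.Canny(gray, 80, 160)
    crack_ratio = edges[shell].mean() / 255.0  # fraction of edge pixels
    if crack_ratio > 0.02:
        return "cracked"
    if dirt_ratio > 0.05:
        return "dirty"
    return "clean"
```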

  12. MO-PIS-Exhibit Hall-01: Tools for TG-142 Linac Imaging QA I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clements, M; Wiesmeyer, M

    2014-06-15

    Partners in Solutions is an exciting new program in which AAPM partners with our vendors to present practical “hands-on” information about the equipment and software systems that we use in our clinics. The therapy topic this year is solutions for TG-142 recommendations for linear accelerator imaging QA. Note that the sessions are being held in a special purpose room built on the Exhibit Hall Floor, to encourage further interaction with the vendors. Automated Imaging QA for TG-142 with RIT Presentation Time: 2:45 – 3:15 PM This presentation will discuss software tools for automated imaging QA and phantom analysis for TG-142. All modalities used in radiation oncology will be discussed, including CBCT, planar kV imaging, planar MV imaging, and imaging and treatment coordinate coincidence. Vendor supplied phantoms as well as a variety of third-party phantoms will be shown, along with appropriate analyses, proper phantom setup procedures and scanning settings, and a discussion of image quality metrics. Tools for process automation will be discussed which include: RIT Cognition (machine learning for phantom image identification), RIT Cerberus (automated file system monitoring and searching), and RunQueueC (batch processing of multiple images). In addition to phantom analysis, tools for statistical tracking, trending, and reporting will be discussed. This discussion will include an introduction to statistical process control, a valuable tool in analyzing data and determining appropriate tolerances. An Introduction to TG-142 Imaging QA Using Standard Imaging Products Presentation Time: 3:15 – 3:45 PM Medical Physicists want to understand the logic behind TG-142 Imaging QA. What is often missing is a firm understanding of the connections between the EPID and OBI phantom imaging, the software “algorithms” that calculate the QA metrics, the establishment of baselines, and the analysis and interpretation of the results. The goal of our brief presentation will be to establish and solidify these connections. Our talk will be motivated by the Standard Imaging, Inc. phantom and software solutions. We will present and explain each of the image quality metrics in TG-142 in terms of the theory, mathematics, and algorithms used to implement them in the Standard Imaging PIPSpro software. In the process, we will identify the regions of phantom images that are analyzed by each algorithm. We then will discuss the process of the creation of baselines and typical ranges of acceptable values for each imaging quality metric.
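    The statistical process control mentioned in the first talk boils down to deriving tolerances from a metric's own history. A minimal sketch, assuming an individuals (I-MR) control chart with the conventional d2 = 1.128 constant for subgroups of size two; the sample values are made up:

```python
import numpy as np

def imr_control_limits(values: np.ndarray):
    """Individuals (I-MR) control chart limits for a QA metric trend.

    Sigma is estimated from the average moving range divided by the
    d2 constant for subgroups of size 2 (1.128), the usual SPC choice.
    Returns (lower limit, center line, upper limit)."""
    values = np.asarray(values, dtype=float)
    center = values.mean()
    moving_range = np.abs(np.diff(values))
    sigma = moving_range.mean() / 1.128
    return center - 3 * sigma, center, center + 3 * sigma

# e.g. weekly MV-imaging contrast-to-noise measurements (made-up numbers)
lcl, cl, ucl = imr_control_limits(np.array([21.3, 20.8, 21.1, 20.5, 21.6, 20.9]))
```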

  13. Updated standards and processes for accreditation of echocardiographic laboratories from The European Association of Cardiovascular Imaging.

    PubMed

    Popescu, Bogdan A; Stefanidis, Alexandros; Nihoyannopoulos, Petros; Fox, Kevin F; Ray, Simon; Cardim, Nuno; Rigo, Fausto; Badano, Luigi P; Fraser, Alan G; Pinto, Fausto; Zamorano, Jose Luis; Habib, Gilbert; Maurer, Gerald; Lancellotti, Patrizio; Andrade, Maria Joao; Donal, Erwan; Edvardsen, Thor; Varga, Albert

    2014-07-01

    Standards for echocardiographic laboratories were proposed by the European Association of Echocardiography (now the European Association of Cardiovascular Imaging) 7 years ago in order to raise standards of practice and improve the quality of care. Criteria and requirements were published at that time for transthoracic, transoesophageal, and stress echocardiography. This paper reassesses and updates the quality standards to take account of experience and the technical developments of modern echocardiographic practice. It also discusses quality control, the incentives for laboratories to apply for accreditation, the reaccreditation criteria, and the current status and future prospects of the laboratory accreditation process. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2014. For permissions please email: journals.permissions@oup.com.

  14. Error-proofing test system of industrial components based on image processing

    NASA Astrophysics Data System (ADS)

    Huang, Ying; Huang, Tao

    2018-05-01

    With the improvement of modern industrial standards and accuracy requirements, conventional manual testing fails to satisfy enterprise test standards, so digital image processing techniques can be used to gather and analyze information on the surface of industrial components to achieve the purpose of testing. To test the installation parts of an automotive engine, this paper employs a camera to capture images of the components. After the images are preprocessed, including denoising, an image processing algorithm relying on flood fill is used to test the installation of the components. The results show that this system has very high test accuracy.
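    Flood fill itself is simple: starting from a seed pixel, collect every connected pixel that satisfies a similarity predicate. A minimal BFS sketch on a binary mask (4-connectivity assumed); comparing the filled region's area and extent against the expected part geometry would then be a hypothetical pass/fail test, not the paper's exact criterion.

```python
from collections import deque
import numpy as np

def flood_fill(mask: np.ndarray, seed: tuple[int, int]) -> np.ndarray:
    """Return the 4-connected region of True pixels containing `seed`."""
    h, w = mask.shape
    region = np.zeros_like(mask, dtype=bool)
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if 0 <= y < h and 0 <= x < w and mask[y, x] and not region[y, x]:
            region[y, x] = True
            # enqueue the four axis-aligned neighbours
            queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return region
```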

  15. ARTIP: Automated Radio Telescope Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh

    2018-02-01

    The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging radio-interferometric data. ARTIP starts with raw data, i.e., a measurement set, and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging, to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard Python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.
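    A sketch of the stage-runner structure such a pipeline implies is shown below. The function and stage names are hypothetical, not ARTIP's actual API; the comments point at the real CASA tasks (flagdata, setjy, gaincal, bandpass, tclean) an actual implementation would wrap. The point is only that each stage is independently invocable, as the abstract states.

```python
# Minimal sketch of an ARTIP-like stage runner (all names hypothetical).
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline-sketch")

def flag(ms): log.info("flagging %s", ms)          # would wrap casatasks.flagdata
def flux_cal(ms): log.info("flux cal %s", ms)      # would wrap setjy + gaincal
def bandpass_cal(ms): log.info("bandpass %s", ms)  # would wrap casatasks.bandpass
def phase_cal(ms): log.info("phase cal %s", ms)    # would wrap casatasks.gaincal
def image(ms): log.info("imaging %s", ms)          # would wrap casatasks.tclean

STAGES = {"flag": flag, "flux": flux_cal, "bandpass": bandpass_cal,
          "phase": phase_cal, "image": image}

def run(ms: str, stages=("flag", "flux", "bandpass", "phase", "image")):
    for name in stages:
        STAGES[name](ms)  # continuous feedback via the logger

run("target.ms", stages=("bandpass", "image"))  # any stage subset can be run
```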

  16. A natural-color mapping for single-band night-time image based on FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yilun; Qian, Yunsheng

    2018-01-01

    A natural-color mapping method for single-band night-time images based on FPGA can transfer the color of a reference image to a single-band night-time image, which is consistent with human visual habits and can help observers identify targets. This paper introduces the processing of the natural-color mapping algorithm based on FPGA. Firstly, the image is transformed by histogram equalization, and the intensity features and standard deviation features of the reference image are stored in SRAM. Then, the intensity features and standard deviation features of the real-time digital images are calculated by the FPGA. Finally, the FPGA completes the color mapping by matching pixels between images using the features in the luminance channel.
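    The luminance-feature matching amounts to classic first-order statistics transfer: shift and scale the input so its mean and standard deviation equal the stored reference features. A software sketch of that single step, as a stand-in for the FPGA fixed-point datapath, with images assumed to be floats in [0, 1]:

```python
import numpy as np

def match_luminance_stats(night: np.ndarray, ref_mean: float,
                          ref_std: float) -> np.ndarray:
    """Shift/scale a single-band image so its intensity mean and
    standard deviation match those stored for the reference image."""
    mean, std = night.mean(), night.std()
    out = (night - mean) * (ref_std / (std + 1e-6)) + ref_mean
    return np.clip(out, 0.0, 1.0)
```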

  17. Quantitative Evaluation of Surface Color of Tomato Fruits Cultivated in Remote Farm Using Digital Camera Images

    NASA Astrophysics Data System (ADS)

    Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu

    To measure the quantitative surface color information of agricultural products together with the ambient information during cultivation, a color calibration method for digital camera images and a remote monitoring system of color imaging using the Web were developed. Single-lens reflex and web digital cameras were used for the image acquisitions. The tomato images through the post-ripening process were taken by the digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with the standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of the sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of the tomato on the tree in a greenhouse was remotely monitored during maturation using the digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using the color parameter calculated from the obtained and calibrated color images along with the ambient atmospheric record. This study is an important step in developing surface color analysis for the simple and rapid evaluation of crop vigor in the field and in constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
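    Chart-based color calibration of this kind is commonly posed as a least-squares fit from the photographed patch colors to their specified values. A minimal sketch, assuming an affine RGB model and patch means already extracted; the study's actual calibration details may differ.

```python
import numpy as np

def fit_color_correction(measured: np.ndarray, reference: np.ndarray):
    """Fit an affine RGB correction from color-chart patches.

    measured, reference: (n_patches, 3) mean RGB values of the chart
    as photographed and as specified. Returns a 3x4 matrix M so that
    corrected = M @ [r, g, b, 1]."""
    ones = np.ones((measured.shape[0], 1))
    A = np.hstack([measured, ones])             # (n, 4) design matrix
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return M.T                                  # (3, 4)

def apply_correction(image: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Apply the fitted affine correction to an (H, W, 3) float image."""
    flat = image.reshape(-1, 3)
    flat = np.hstack([flat, np.ones((flat.shape[0], 1))]) @ M.T
    return np.clip(flat.reshape(image.shape), 0, 1)
```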

  18. Spatial Statistics for Tumor Cell Counting and Classification

    NASA Astrophysics Data System (ADS)

    Wirjadi, Oliver; Kim, Yoo-Jin; Breuel, Thomas

    To count and classify cells in histological sections is a standard task in histology. One example is the grading of meningiomas, benign tumors of the meninges, which requires assessing the fraction of proliferating cells in an image. As this process is very time consuming when performed manually, automation is required. To address such problems, we propose a novel application of Markov point process methods in computer vision, leading to algorithms for computing the locations of circular objects in images. In contrast to previous algorithms using such spatial statistics methods in image analysis, the present one is fully trainable. This is achieved by combining point process methods with statistical classifiers. Using simulated data, the method proposed in this paper is shown to be more accurate and more robust to noise than standard image processing methods. On the publicly available SIMCEP benchmark for cell image analysis algorithms, the cell count performance of the present method is significantly more accurate than results published elsewhere, especially when cells form dense clusters. Furthermore, the proposed system performs as well as a state-of-the-art algorithm for the computer-aided histological grading of meningiomas when combined with a simple k-nearest neighbor classifier for identifying proliferating cells.

  19. STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.

    PubMed

    Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X

    2009-08-01

    This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.

  20. The influence of the microscope lamp filament colour temperature on the process of digital images of histological slides acquisition standardization.

    PubMed

    Korzynska, Anna; Roszkowiak, Lukasz; Pijanowska, Dorota; Kozlowski, Wojciech; Markiewicz, Tomasz

    2014-01-01

    The aim of this study is to compare digital images of tissue biopsies captured with an optical microscope using the bright-field technique under various light conditions. The range of colour variation in tissue samples immunohistochemically stained with 3,3'-diaminobenzidine and haematoxylin is immense and arises from various sources. One of them is an inadequate setting of the camera's white balance relative to the colour temperature of the microscope's light. Although this type of error can easily be handled at the image acquisition stage, it can also be eliminated afterwards with colour adjustment algorithms. The examination of the dependence of colour variation on the microscope's light temperature and the camera settings was carried out as introductory research for the process of automatic colour standardization. Six fields of view with empty space among the tissue samples were selected for analysis. Each field of view was acquired 225 times with various microscope light temperatures and camera white balance settings. Fourteen randomly chosen images were corrected and compared with the reference image by the following methods: Mean Square Error, Structural SIMilarity and visual assessment. For two types of backgrounds and two types of objects, the statistical image descriptors (range, median, mean and its standard deviation of chromaticity on the a and b channels of the CIELab colour space and of luminance L, plus local colour variability for the objects' specific area) were calculated. The results were averaged over the 6 images acquired under the same light conditions and camera settings for each sample. The analysis of the results leads to the following conclusions: (1) images collected with the white balance setting adjusted to the light colour temperature cluster in a certain area of the chromatic space; (2) the process of white balance correction for images collected with white balance camera settings not matched to the light temperature moves the image descriptors into the proper chromatic space but simultaneously changes the value of luminance. Thus the process of image unification in the sense of colour fidelity can be handled in a separate introductory stage before automatic image analysis.
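    The descriptors themselves are straightforward to compute once the image is in CIELab. A sketch using scikit-image (version 0.19 or later for the channel_axis argument to SSIM); the descriptor set mirrors the abstract, but the function and variable names are my own:

```python
import numpy as np
from skimage import color
from skimage.metrics import mean_squared_error, structural_similarity

def chromaticity_descriptors(rgb: np.ndarray) -> dict:
    """Range/median/mean/std of CIELab L*, a*, b*, the per-image
    descriptors compared across light temperatures."""
    lab = color.rgb2lab(rgb)  # rgb as float in [0, 1]
    stats = {}
    for name, chan in zip("Lab", np.moveaxis(lab, -1, 0)):
        stats[name] = dict(range=float(np.ptp(chan)),
                           median=float(np.median(chan)),
                           mean=float(chan.mean()),
                           std=float(chan.std()))
    return stats

def compare_to_reference(img: np.ndarray, ref: np.ndarray):
    """MSE and SSIM of a corrected image against the reference."""
    return (mean_squared_error(img, ref),
            structural_similarity(img, ref, channel_axis=-1,
                                  data_range=1.0))
```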

  1. Super-resolution for everybody: An image processing workflow to obtain high-resolution images with a standard confocal microscope.

    PubMed

    Lam, France; Cladière, Damien; Guillaume, Cyndélia; Wassmann, Katja; Bolte, Susanne

    2017-02-15

    In the presented work we aimed at improving confocal imaging to obtain the highest possible resolution in thick biological samples, such as the mouse oocyte. We therefore developed an image processing workflow that improves the lateral and axial resolution of a standard confocal microscope. Our workflow comprises refractive index matching, the optimization of microscope hardware parameters and image restoration by deconvolution. We compare two different deconvolution algorithms, evaluate the necessity of denoising and establish the optimal image restoration procedure. We validate our workflow by imaging sub-resolution fluorescent beads and measuring the maximum lateral and axial resolution of the confocal system. Subsequently, we apply the parameters to the imaging and data restoration of fluorescently labelled meiotic spindles of mouse oocytes. We measure a resolution increase of approximately 2-fold in the lateral and 3-fold in the axial direction throughout a depth of 60 μm. This demonstrates that with our optimized workflow we reach a resolution comparable to 3D-SIM imaging, but with better depth penetration, for confocal images of beads and the biological sample. Copyright © 2016 Elsevier Inc. All rights reserved.
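    The abstract does not name its two deconvolution algorithms; Richardson-Lucy is one widely used option and makes a convenient stand-in for the restoration step. A minimal sketch with scikit-image (older releases name the iteration argument iterations rather than num_iter), assuming a PSF measured from sub-resolution beads:

```python
import numpy as np
from skimage import restoration

def restore_confocal_stack(stack: np.ndarray, psf: np.ndarray,
                           iterations: int = 30) -> np.ndarray:
    """Richardson-Lucy deconvolution of a confocal z-stack.

    A generic stand-in for the restoration step of such a workflow;
    `psf` would be measured from sub-resolution beads."""
    stack = stack.astype(float) / stack.max()  # normalize to [0, 1]
    return restoration.richardson_lucy(stack, psf, num_iter=iterations)
```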

  2. Development of a viability standard curve for microencapsulated probiotic bacteria using confocal microscopy and image analysis software.

    PubMed

    Moore, Sarah; Kailasapathy, Kasipathy; Phillips, Michael; Jones, Mark R

    2015-07-01

    Microencapsulation is proposed to protect probiotic strains from food processing procedures and to maintain probiotic viability. Little research has described the in situ viability of microencapsulated probiotics. This study successfully developed a real-time viability standard curve for microencapsulated bacteria using confocal microscopy, fluorescent dyes and image analysis software. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Correlation pattern recognition: optimal parameters for quality standards control of chocolate marshmallow candy

    NASA Astrophysics Data System (ADS)

    Flores, Jorge L.; García-Torales, G.; Ponce Ávila, Cristina

    2006-08-01

    This paper describes an in situ image recognition system designed to inspect the quality standards of chocolate pops during their production. The essence of the recognition system is the localization of events (i.e., defects) in the input images that affect the quality standards of the pops. To this end, processing modules based on correlation filtering and image segmentation are employed with the objective of measuring the quality standards. We therefore designed the correlation filter and defined a set of features from the correlation plane. The desired values for these parameters are obtained by exploiting information about objects to be rejected in order to find the optimal discrimination capability of the system. Based on this set of features, the pop can be correctly classified. The efficacy of the system has been tested thoroughly under laboratory conditions using at least 50 images containing 3 different types of possible defects.
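    Correlation pattern recognition of this kind reduces to cross-correlating the scene with a filter and thresholding features of the correlation plane. The sketch below uses a plain FFT-based matched filter and a peak-to-correlation-energy feature, which are common generic choices; the paper's specific filter design and feature set are not reproduced here.

```python
import numpy as np

def correlate(scene: np.ndarray, template: np.ndarray) -> np.ndarray:
    """FFT-based cross-correlation of a scene with a reference
    template, zero-padded to the full linear-correlation size."""
    shape = (scene.shape[0] + template.shape[0] - 1,
             scene.shape[1] + template.shape[1] - 1)
    F = np.fft.rfft2(scene, shape)
    H = np.fft.rfft2(template, shape)
    return np.fft.irfft2(F * np.conj(H), shape)

def plane_features(corr: np.ndarray) -> dict:
    """Simple correlation-plane features: peak value and a
    peak-to-correlation-energy ratio used for accept/reject."""
    peak = corr.max()
    return {"peak": float(peak),
            "pce": float(peak ** 2 / np.mean(corr ** 2))}
```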

  4. Are Disposable and Standard Gonioscopy Lenses Comparable?

    PubMed

    Lee, Bonny; Szirth, Bernard C; Fechtner, Robert D; Khouri, Albert S

    2017-04-01

    Gonioscopy is important in the evaluation and treatment of glaucoma. With increased scrutiny of acceptable sterilization processes for health care instruments, disposable gonioscopy lenses have recently been introduced. Single-use lenses are theorized to decrease infection risk and eliminate the issue of wear and tear seen on standard, reusable lenses. However, patient care would be compromised if the quality of images produced by the disposable lens were inferior to those produced by the reusable lens. The purpose of this study was to compare the quality of images produced by disposable versus standard gonioscopy lenses. A disposable single mirror lens (Sensor Medical Technology) and a standard Volk G-1 gonioscopy lens were used to image 21 volunteers who were prospectively recruited for the study. Images of the inferior and temporal angles of each subject's left eye were acquired using a slit-lamp camera through the disposable and standard gonioscopy lens. In total, 74 images were graded using the Spaeth gonioscopic system and for clarity and quality. Clarity was scored as 1 or 2 and defined as either (1) all structures perceived or (2) all structures not perceived. Quality was scored as 1, 2, or 3, and defined as (1) all angle landmarks clear and well focused, (2) some angle landmarks clear, others blurred, or (3) angle landmarks could not be ascertained. The 74 images were divided into images taken with the disposable single mirror lens and images taken with the standard Volk G-1 gonioscopy lens. The clarity and quality scores for each of these 2 image groups were averaged and P-values were calculated. Average quality of images produced with the standard lens was 1.46±0.56 compared with 1.54±0.61 for those produced with the disposable lens (P=0.55). Average clarity of images produced with the standard lens was 1.47±0.51 compared with 1.49±0.51 (P=0.90) with the disposable lens. We conclude that there is no significant difference in quality of images produced with standard versus disposable gonioscopy lenses. Disposable gonioscopy lenses may be an acceptable alternative to standard reusable lenses, especially in conditions where sterilization is difficult.

  5. Geologic analyses of LANDSAT-1 multispectral imagery of a possible power plant site employing digital and analog image processing. [in Pennsylvania

    NASA Technical Reports Server (NTRS)

    Lovegreen, J. R.; Prosser, W. J.; Millet, R. A.

    1975-01-01

    A site in the Great Valley subsection of the Valley and Ridge physiographic province in eastern Pennsylvania was studied to evaluate the use of digital and analog image processing for geologic investigations. Ground truth at the site was obtained by a field mapping program, a subsurface exploration investigation and a review of available published and unpublished literature. Remote sensing data were analyzed using standard manual techniques. LANDSAT-1 imagery was analyzed using digital image processing employing the multispectral Image 100 system and using analog color processing employing the VP-8 image analyzer. This study deals primarily with linears identified by image processing, the correlation of these linears with known structural features and with linears identified by manual interpretation, and the identification of rock outcrops in areas of extensive vegetative cover employing image processing. The results of this study indicate that image processing can be a cost-effective tool for evaluating geologic and linear features for regional studies encompassing large areas, such as for power plant siting. Digital image processing can be an effective tool for identifying rock outcrops in areas of heavy vegetative cover.

  6. A new algorithm to reduce noise in microscopy images implemented with a simple program in python.

    PubMed

    Papini, Alessio

    2012-03-01

    All microscopical images contain noise, which increases as the instrument (e.g., a transmission electron microscope or light microscope) approaches its resolution limit. Many methods are available to reduce noise. One of the most commonly used is image averaging. We propose here to use the mode of pixel values instead. Simple Python programs process a given number of images recorded consecutively from the same subject. The programs calculate the mode of the pixel values in a given position (a, b). The result is a new image containing in (a, b) the mode of the values. Therefore, the final pixel value corresponds to one observed in at least two of the pixels in position (a, b). The application of the program to a set of images degraded with salt-and-pepper noise and GIMP hurl noise with 10-90% standard deviation showed that the mode performs better than averaging with three to eight images. The data suggest that the mode would be more efficient (in the sense of a lower number of recorded images needed to reduce noise below a given limit) for a lower number of total noisy pixels and high standard deviation (as with impulse noise and salt-and-pepper noise), while averaging would be more efficient when the number of varying pixels is high and the standard deviation is low, as in many cases of images affected by Gaussian noise. The two methods may be used serially. Copyright © 2011 Wiley Periodicals, Inc.
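    Following the paper's description, the per-pixel mode of a stack of 8-bit frames can be written compactly as below. This direct implementation is mine and favors clarity over speed (apply_along_axis loops in Python); the paper's own programs are not reproduced.

```python
import numpy as np

def mode_stack(images: list[np.ndarray]) -> np.ndarray:
    """Per-pixel mode of consecutively recorded 8-bit frames.

    The final value at (a, b) is the one occurring most often across
    the stack, so it matches at least two frames whenever any value
    repeats, as the paper describes."""
    stack = np.stack(images).astype(np.uint8)        # (n, h, w)
    return np.apply_along_axis(
        lambda v: np.bincount(v, minlength=256).argmax(), 0, stack
    ).astype(np.uint8)
```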

  7. A web service system supporting three-dimensional post-processing of medical images based on WADO protocol.

    PubMed

    He, Longjun; Xu, Lang; Ming, Xing; Liu, Qian

    2015-02-01

    Three-dimensional post-processing operations on the volume data generated by a series of CT or MR images are of great significance for image reading and diagnosis. As a part of the DICOM standard, the WADO service defines how to access DICOM objects on the Web, but it does not cover three-dimensional post-processing operations on image series. This paper analyzes the technical features of three-dimensional post-processing operations on volume data, and then describes the design and implementation of a web service system for three-dimensional post-processing of medical images based on the WADO protocol. In order to improve the scalability of the proposed system, the business tasks and calculation operations were separated into two modules. The results showed that the proposed system can support three-dimensional post-processing services of medical images for multiple clients at the same time, meeting the demand for accessing three-dimensional post-processing operations on volume data over the web.
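    For context, retrieving the source objects over WADO-URI uses the standard query parameters from DICOM PS3.18. A sketch with the requests library; the endpoint and UIDs below are hypothetical placeholders, and the paper's own post-processing extension is a separate, custom service on top of this.

```python
import requests

WADO_URL = "https://pacs.example.org/wado"  # hypothetical endpoint

params = {
    "requestType": "WADO",                 # standard WADO-URI parameters
    "studyUID": "1.2.840.113619.2.55.1",   # placeholder UIDs
    "seriesUID": "1.2.840.113619.2.55.2",
    "objectUID": "1.2.840.113619.2.55.2.1",
    "contentType": "application/dicom",    # or image/jpeg for rendered output
}
response = requests.get(WADO_URL, params=params, timeout=30)
response.raise_for_status()
dicom_bytes = response.content  # input to the 3D post-processing module
```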

  8. A new image representation for compact and secure communication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, Lakshman; Skourikhine, A. N.

    In many areas of nuclear materials management there is a need for communication, archival, and retrieval of annotated image data between heterogeneous platforms and devices to effectively implement safety, security, and safeguards of nuclear materials. Current image formats such as JPEG are not ideally suited in such scenarios as they are not scalable to different viewing formats, and do not provide a high-level representation of images that facilitates automatic object/change detection or annotation. The new Scalable Vector Graphics (SVG) open standard for representing graphical information, recommended by the World Wide Web Consortium (W3C), is designed to address issues of image scalability, portability, and annotation. However, until now there has been no viable technology to efficiently field images of high visual quality under this standard. Recently, LANL has developed a vectorized image representation that is compatible with the SVG standard and preserves visual quality. This is based on a new geometric framework for characterizing complex features in real-world imagery that incorporates perceptual principles of processing visual information known from cognitive psychology and vision science, to obtain a polygonal image representation of high fidelity. This representation can take advantage of all the textual compression and encryption routines unavailable to other image formats. Moreover, this vectorized image representation can be exploited to facilitate automated object recognition that can reduce the time required for data review. The objects/features of interest in these vectorized images can be annotated via animated graphics to facilitate quick and easy display and comprehension of processed image content.

  9. DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research

    PubMed Central

    Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R.

    2016-01-01

    Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard. PMID:27257542

  10. Geometric accuracy of Landsat-4 and Landsat-5 Thematic Mapper images.

    USGS Publications Warehouse

    Borgeson, W.T.; Batson, R.M.; Kieffer, H.H.

    1985-01-01

    The geometric accuracy of the Landsat Thematic Mappers was assessed by a linear least-square comparison of the positions of conspicuous ground features in digital images with their geographic locations as determined from 1:24 000-scale maps. For a Landsat-5 image, the single-dimension standard deviations of the standard digital product, and of this image with additional linear corrections, are 11.2 and 10.3 m, respectively (0.4 pixel). An F-test showed that skew and affine distortion corrections are not significant. At this level of accuracy, the granularity of the digital image and the probable inaccuracy of the 1:24 000 maps began to affect the precision of the comparison. The tested image, even with a moderate accuracy loss in the digital-to-graphic conversion, meets National Horizontal Map Accuracy standards for scales of 1:100 000 and smaller. Two Landsat-4 images, obtained with the Multispectral Scanner on and off, and processed by an interim software system, contain significant skew and affine distortions. -Authors

  11. Simulating patient-specific heart shape and motion using SPECT perfusion images with the MCAT phantom

    NASA Astrophysics Data System (ADS)

    Faber, Tracy L.; Garcia, Ernest V.; Lalush, David S.; Segars, W. Paul; Tsui, Benjamin M.

    2001-05-01

    The spline-based Mathematical Cardiac Torso (MCAT) phantom is a realistic software simulation designed to simulate single photon emission computed tomographic (SPECT) data. It incorporates a heart model of known size and shape; thus, it is invaluable for measuring the accuracy of acquisition, reconstruction, and post-processing routines. New functionality has been added by replacing the standard heart model with left ventricular (LV) epicardial and endocardial surface points detected from actual patient SPECT perfusion studies. LV surfaces detected by standard post-processing quantitation programs are converted through interpolation in space and time into new B-spline models. Perfusion abnormalities are added to the model based on results of standard perfusion quantification. The new LV is translated and rotated to fit within existing atrial and right ventricular models, which are scaled based on the size of the LV. Simulations were created for five different patients with myocardial infarctions who had undergone SPECT perfusion imaging. Shape, size, and motion of the resulting activity map were compared visually to the original SPECT images. In all cases, size, shape and motion of the simulated LVs matched well with the original images. Thus, realistic simulations with known physiologic and functional parameters can be created for evaluating the efficacy of processing algorithms.

  12. Profiling and sorting Mangifera Indica morphology for quality attributes and grade standards using integrated image processing algorithms

    NASA Astrophysics Data System (ADS)

    Balbin, Jessie R.; Fausto, Janette C.; Janabajab, John Michael M.; Malicdem, Daryl James L.; Marcelo, Reginald N.; Santos, Jan Jeffrey Z.

    2017-06-01

    Mango production is highly vital in the Philippines. It is very essential to the food industry, as mangoes are used in markets and restaurants daily. The quality of mangoes can affect the income of a mango farmer; thus, harvesting at the incorrect time results in the loss of quality mangoes and income. Scientific farming, together with new instrumentation, is much needed nowadays because wastage of mangoes increases annually due to poor quality. This research paper focuses on profiling and sorting of Mangifera indica using image processing techniques and pattern recognition. The image of a mango is captured on a weekly basis from its early stage. In this study, the researchers monitor the growth and color transition of a mango for profiling purposes. Actual dimensions of the mango are determined through image conversion and the determination of pixel and RGB values in MATLAB. A program is developed to determine the range of the maximum size of a standard ripe mango. Hue, saturation, lightness (HSL) correction is used in the filtering process to ensure the exactness of the RGB values of a mango subject. By pattern recognition techniques, the program can determine whether a mango is standard and ready to be exported.

  13. Small lung cancers: improved detection by use of bone suppression imaging--comparison with dual-energy subtraction chest radiography.

    PubMed

    Li, Feng; Engelmann, Roger; Pesce, Lorenzo L; Doi, Kunio; Metz, Charles E; Macmahon, Heber

    2011-12-01

    To determine whether use of bone suppression (BS) imaging, used together with a standard radiograph, could improve radiologists' performance for detection of small lung cancers compared with use of standard chest radiographs alone and whether BS imaging would provide accuracy equivalent to that of dual-energy subtraction (DES) radiography. Institutional review board approval was obtained. The requirement for informed consent was waived. The study was HIPAA compliant. Standard and DES chest radiographs of 50 patients with 55 confirmed primary nodular cancers (mean diameter, 20 mm) as well as 30 patients without cancers were included in the observer study. A new BS imaging processing system that can suppress the conspicuity of bones was applied to the standard radiographs to create corresponding BS images. Ten observers, including six experienced radiologists and four radiology residents, indicated their confidence levels regarding the presence or absence of a lung cancer for each lung, first by using a standard image, then a BS image, and finally DES soft-tissue and bone images. Receiver operating characteristic (ROC) analysis was used to evaluate observer performance. The average area under the ROC curve (AUC) for all observers was significantly improved from 0.807 to 0.867 with BS imaging and to 0.916 with DES (both P < .001). The average AUC for the six experienced radiologists was significantly improved from 0.846 with standard images to 0.894 with BS images (P < .001) and from 0.894 to 0.945 with DES images (P = .001). Use of BS imaging together with a standard radiograph can improve radiologists' accuracy for detection of small lung cancers on chest radiographs. Further improvements can be achieved by use of DES radiography but with the requirement for special equipment and a potential small increase in radiation dose. © RSNA, 2011.

  14. Quality Assurance By Laser Scanning And Imaging Techniques

    NASA Astrophysics Data System (ADS)

    Schmalfuß, Harald J.; Schinner, Karl Ludwig

    1989-03-01

    Laser scanning systems are well established in the world of fast industrial in-process quality inspection. The materials inspected by laser scanning systems are, e.g., "endless" sheets of steel, paper, textile, film or foil. The web width varies from 50 mm up to 5000 mm or more. The web speed depends strongly on the production process and can reach several hundred meters per minute. The continuous data flow in one of the different channels of the optical receiving system exceeds ten megapixels per second. It is therefore clear that the electronic evaluation system has to process these data streams in real time, and no image storage is possible. But sometimes (e.g., at the first installation of the system or on a change of the defect classification) it would be very helpful to be able to view the original, i.e., unprocessed sensor data. First we show the principal setup of a standard laser scanning system. Then we introduce a large image memory especially designed for the needs of high-speed inspection sensors. This image memory cooperates with the standard on-line evaluation electronics and therefore provides an easy comparison between processed and non-processed data. We discuss the basic system structure and show the first industrial results.

  15. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments.

    PubMed

    Gorgolewski, Krzysztof J; Auer, Tibor; Calhoun, Vince D; Craddock, R Cameron; Das, Samir; Duff, Eugene P; Flandin, Guillaume; Ghosh, Satrajit S; Glatard, Tristan; Halchenko, Yaroslav O; Handwerker, Daniel A; Hanke, Michael; Keator, David; Li, Xiangrui; Michael, Zachary; Maumet, Camille; Nichols, B Nolan; Nichols, Thomas E; Pellman, John; Poline, Jean-Baptiste; Rokem, Ariel; Schaefer, Gunnar; Sochat, Vanessa; Triplett, William; Turner, Jessica A; Varoquaux, Gaël; Poldrack, Russell A

    2016-06-21

    The development of magnetic resonance imaging (MRI) techniques has defined modern neuroimaging. Since its inception, tens of thousands of studies using techniques such as functional MRI and diffusion weighted imaging have allowed for the non-invasive study of the brain. Despite the fact that MRI is routinely used to obtain data for neuroscience research, there has been no widely adopted standard for organizing and describing the data collected in an imaging experiment. This renders sharing and reusing data (within or between labs) difficult if not impossible and unnecessarily complicates the application of automatic pipelines and quality assurance protocols. To solve this problem, we have developed the Brain Imaging Data Structure (BIDS), a standard for organizing and describing MRI datasets. The BIDS standard uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations.
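    To make the structure concrete, here is a minimal example of the directory layout the BIDS standard prescribes (subject and task names are placeholders); key/value file naming and JSON sidecars carry the metadata mentioned above:

```
bids-dataset/
├── dataset_description.json
├── participants.tsv
└── sub-01/
    ├── anat/
    │   └── sub-01_T1w.nii.gz
    └── func/
        ├── sub-01_task-rest_bold.nii.gz
        └── sub-01_task-rest_bold.json
```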

  16. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments

    PubMed Central

    Gorgolewski, Krzysztof J.; Auer, Tibor; Calhoun, Vince D.; Craddock, R. Cameron; Das, Samir; Duff, Eugene P.; Flandin, Guillaume; Ghosh, Satrajit S.; Glatard, Tristan; Halchenko, Yaroslav O.; Handwerker, Daniel A.; Hanke, Michael; Keator, David; Li, Xiangrui; Michael, Zachary; Maumet, Camille; Nichols, B. Nolan; Nichols, Thomas E.; Pellman, John; Poline, Jean-Baptiste; Rokem, Ariel; Schaefer, Gunnar; Sochat, Vanessa; Triplett, William; Turner, Jessica A.; Varoquaux, Gaël; Poldrack, Russell A.

    2016-01-01

    The development of magnetic resonance imaging (MRI) techniques has defined modern neuroimaging. Since its inception, tens of thousands of studies using techniques such as functional MRI and diffusion weighted imaging have allowed for the non-invasive study of the brain. Despite the fact that MRI is routinely used to obtain data for neuroscience research, there has been no widely adopted standard for organizing and describing the data collected in an imaging experiment. This renders sharing and reusing data (within or between labs) difficult if not impossible and unnecessarily complicates the application of automatic pipelines and quality assurance protocols. To solve this problem, we have developed the Brain Imaging Data Structure (BIDS), a standard for organizing and describing MRI datasets. The BIDS standard uses file formats compatible with existing software, unifies the majority of practices already common in the field, and captures the metadata necessary for most common data processing operations. PMID:27326542

  17. Application of two-dimensional crystallography and image processing to atomic resolution Z-contrast images.

    PubMed

    Morgan, David G; Ramasse, Quentin M; Browning, Nigel D

    2009-06-01

    Zone axis images recorded using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM or Z-contrast imaging) reveal the atomic structure with a resolution that is defined by the probe size of the microscope. In most cases, the full images contain many sub-images of the crystal unit cell and/or interface structure. Thanks to the repetitive nature of these images, it is possible to apply standard image processing techniques that have been developed for the electron crystallography of biological macromolecules and have been used widely in other fields of electron microscopy for both organic and inorganic materials. These methods can be used to enhance the signal-to-noise present in the original images, to remove distortions in the images that arise from either the instrumentation or the specimen itself and to quantify properties of the material in ways that are difficult without such data processing. In this paper, we describe briefly the theory behind these image processing techniques and demonstrate them for aberration-corrected, high-resolution HAADF-STEM images of Si(46) clathrates developed for hydrogen storage.
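    One of the standard electron-crystallography operations alluded to is Fourier filtering, which exploits the lattice periodicity: the repeating signal concentrates in sharp reciprocal-lattice peaks, and masking everything else averages over the unit cells and suppresses noise. A generic sketch of that idea (peak selection by a simple power quantile, a simplification of practical peak picking):

```python
import numpy as np

def fourier_filter(image: np.ndarray, keep_fraction: float = 1e-3):
    """Lattice averaging by Fourier filtering: keep only the strongest
    reciprocal-space peaks and transform back."""
    F = np.fft.fft2(image - image.mean())
    power = np.abs(F)
    threshold = np.quantile(power, 1.0 - keep_fraction)
    return np.real(np.fft.ifft2(np.where(power >= threshold, F, 0)))
```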

  18. Phenopix: a R package to process digital images of a vegetation cover

    NASA Astrophysics Data System (ADS)

    Filippa, Gianluca; Cremonese, Edoardo; Migliavacca, Mirco; Galvagno, Marta; Morra di Cella, Umberto; Richardson, Andrew

    2015-04-01

    Plant phenology is a globally recognized indicator of the effects of climate change on the terrestrial biosphere. Accordingly, new tools to automatically track the seasonal development of a vegetation cover are becoming available and are more and more widely deployed. Among them, near-continuous digital images are being collected in several networks in the US, Europe, Asia and Australia in a range of different ecosystems, including agricultural lands, deciduous and evergreen forests, and grasslands. The growing scientific interest in vegetation image analysis highlights the need for easy-to-use, flexible and standardized processing techniques. In this contribution we illustrate a new open-source package called "phenopix", written in the R language, that allows processing images of a vegetation cover. The main features include: (i) definition of one or more areas of interest on an image and processing of the pixel information within them; (ii) computation of vegetation indexes based on the red, green and blue channels; (iii) fitting a curve to the seasonal trajectory of the vegetation indexes and extracting relevant dates (aka thresholds) from the seasonal trajectory; (iv) analyzing image pixels separately to extract spatially explicit phenological information. The utilities of the package are illustrated in detail for two subalpine sites, a grassland and a larch stand at about 2000 m in the Italian Western Alps. The phenopix package is a cost-free and easy-to-use tool that allows processing digital images of a vegetation cover in a standardized, flexible and reproducible way. The software is available for download at the R-Forge web site (r-forge.r-project.org/projects/phenopix/).
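    The RGB-channel vegetation indexes in (ii) are typically chromatic coordinates; the green chromatic coordinate, gcc = G / (R + G + B), is the workhorse in phenocam work. A sketch of the per-ROI computation, written in Python rather than the package's R purely for consistency with the other examples here:

```python
import numpy as np

def green_chromatic_coordinate(rgb_roi: np.ndarray) -> float:
    """Mean green chromatic coordinate, gcc = G / (R + G + B), over an
    area of interest given as an (H, W, 3) array."""
    r, g, b = (rgb_roi[..., i].astype(float) for i in range(3))
    denom = r + g + b
    return float(np.mean(g[denom > 0] / denom[denom > 0]))
```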

  19. AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves

    NASA Astrophysics Data System (ADS)

    Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.

    2017-02-01

    ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
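    At the heart of the time-series differential photometry AIJ streamlines is the ratio of a target star's aperture flux to that of one or more comparison stars. A sketch using photutils as a stand-in (AIJ itself is Java/ImageJ); star positions and the aperture radius are assumed known, and sky-background subtraction is omitted for brevity.

```python
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

def differential_light_curve(frames, target_xy, comparison_xy, r=8.0):
    """Aperture photometry of a target relative to a comparison star
    across a series of calibrated 2D frames."""
    ratios = []
    for frame in frames:
        apertures = CircularAperture([target_xy, comparison_xy], r=r)
        table = aperture_photometry(frame, apertures)
        flux = np.asarray(table["aperture_sum"])
        ratios.append(flux[0] / flux[1])  # target / comparison
    return np.array(ratios)
```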

  20. Single-image hard-copy display of the spine utilizing digital radiography

    NASA Astrophysics Data System (ADS)

    Artz, Dorothy S.; Janchar, Timothy; Milzman, David; Freedman, Matthew T.; Mun, Seong K.

    1997-04-01

    Regions of the entire spine contain a wide latitude of tissue densities within the imaged field of view, presenting a problem for adequate radiological evaluation. With screen/film technology, the optimal technique for one area of the radiograph is sub-optimal for another area. Computed radiography (CR), with its inherent wide dynamic range, has been shown to be better than screen/film for lateral cervical spine imaging, but limitations are still present with standard image processing. By utilizing a dynamic range control (DRC) algorithm based on unsharp masking and signal transformation prior to gradation and frequency processing within the CR system, more vertebral bodies can be seen on a single hard-copy display of the lateral cervical, thoracic, and thoracolumbar examinations. Examinations of the trauma cross-table lateral cervical spine, lateral thoracic spine, and lateral thoracolumbar spine were collected on live patients using photostimulable storage phosphor plates, the Fuji FCR 9000 reader, and the Fuji AC-3 computed radiography reader. Two images were produced from a single exposure; one with standard image processing and the second with the standard process and the additional DRC algorithm. Both sets were printed from a Fuji LP 414 laser printer. Two different DRC algorithms were applied depending on which portion of the spine was not well visualized. One algorithm increased optical density and the second algorithm decreased optical density. The resultant image pairs were then reviewed by a panel of radiologists. Images produced with the additional DRC algorithm demonstrated improved visualization of previously 'under exposed' and 'over exposed' regions within the same image. Where lung field had previously obscured bony detail of the lateral thoracolumbar spine due to 'over exposure,' the image with the DRC applied to decrease the optical density allowed for easy visualization of the entire area of interest. For areas of the lateral cervical spine and lateral thoracic spine that typically have a low optical density value, the DRC algorithm increased the optical density over that region, improving visualization of the C7-T2 and T11-L2 vertebral bodies, which is critical in trauma radiography. Emergency medicine physicians also reviewing the lateral cervical spine images were able to clear 37% of the DRC images compared to 30% of the non-DRC images for removal of the cervical collar. The DRC-processed images reviewed by the physicians do not have a typical screen/film appearance; however, these different images were preferred for the three examinations in this study. This method of image processing, after being tested and accepted, is in use clinically at Georgetown University Medical Center Department of Radiology for the following examinations: cervical spine, lateral thoracic spine, lateral thoracolumbar examinations, facial bones, shoulder, sternum, feet and portable chest. Computed radiography imaging of the spine is improved with the addition of histogram equalization known as dynamic range control (DRC). More anatomical structures are visualized on a single hard-copy display.
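    The DRC idea (unsharp masking plus signal transformation applied before gradation and frequency processing) can be caricatured as splitting the image into a blurred low-frequency layer and a detail layer, then compressing only the former so dense and lucent regions both fit the printable range. A generic sketch of that decomposition, not Fuji's actual algorithm; sigma and gain are hypothetical:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def compress_dynamic_range(image: np.ndarray, sigma: float = 30.0,
                           gain: float = 0.5) -> np.ndarray:
    """Unsharp-masking-style dynamic range compression: attenuate the
    low-frequency layer around its mean while keeping the detail layer
    untouched (gain < 1 compresses the overall density range)."""
    low = gaussian_filter(image.astype(float), sigma)
    detail = image - low
    return gain * (low - low.mean()) + low.mean() + detail
```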

  1. Objective and expert-independent validation of retinal image registration algorithms by a projective imaging distortion model.

    PubMed

    Lee, Sangyeol; Reinhardt, Joseph M; Cattin, Philippe C; Abràmoff, Michael D

    2010-08-01

    Fundus camera imaging of the retina is widely used to diagnose and manage ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, and multiple images can be joined together using an image registration technique to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating simulated retinal images by modeling the geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation process that can be used for any retinal image registration method by tracing through the distortion path and assessing the geometric misalignment in the coordinate system of the reference standard. The proposed method can be used to perform an accuracy evaluation over the whole image, so that distortion in the non-overlapping regions of the montage components can be easily assessed. We demonstrate the technique by generating test image sets with a variety of overlap conditions and compare the accuracy of several retinal image registration models. Copyright 2010 Elsevier B.V. All rights reserved.

  2. Blind retrospective motion correction of MR images.

    PubMed

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2013-12-01

    Subject motion can severely degrade MR images. A retrospective motion correction algorithm, gradient-based motion correction, is proposed that significantly reduces ghosting and blurring artifacts due to subject motion. The technique uses the raw data of standard imaging sequences; no sequence modifications or additional equipment such as tracking devices are required. Rigid motion is assumed. The approach iteratively searches for the motion trajectory yielding the sharpest image as measured by the entropy of spatial gradients. The vast space of motion parameters is efficiently explored by gradient-based optimization with a convergence guarantee. The method has been evaluated on both synthetic and real data in two and three dimensions using standard imaging techniques. MR images are consistently improved over different kinds of motion trajectories. Using a graphics processing unit implementation, computation times are on the order of a few minutes for a full three-dimensional volume. The presented technique can be an alternative or a complement to prospective motion correction methods and is able to improve images with strong motion artifacts from standard imaging sequences without requiring additional data. Copyright © 2013 Wiley Periodicals, Inc., a Wiley company.
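    The sharpness objective is easy to state concretely: compute the image's spatial gradients, normalize their magnitudes into a distribution, and take its entropy; motion blur spreads gradient energy and raises the entropy, so the search minimizes it. A minimal sketch of the metric only (the optimizer around it is the hard part and is omitted):

```python
import numpy as np

def gradient_entropy(image: np.ndarray) -> float:
    """Entropy of normalized spatial-gradient magnitudes: sharper
    images have sparser gradients and hence lower entropy."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.sqrt(gx ** 2 + gy ** 2)
    p = mag / (mag.sum() + 1e-12)  # normalize to a distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```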

  3. A report on the ST ScI optical disk workstation

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The STScI optical disk project was designed to explore the options, opportunities and problems presented by optical disk technology, and to see if optical disks are a viable and inexpensive means of storing the large amounts of data found in astronomical digital imagery. A separate workstation was purchased on which the development could be done; it serves as an astronomical image processing computer, incorporating the optical disks into the solution of standard image processing tasks. The project indicates that small workstations can be powerful tools for image processing, and that astronomical image processing may be more conveniently and cost-effectively performed on microcomputers than on mainframe and super-minicomputers. The optical disks provide unique capabilities in data storage.

  4. Microscopic validation of whole mouse micro-metastatic tumor imaging agents using cryo-imaging and sliding organ image registration.

    PubMed

    Liu, Yiqiao; Zhou, Bo; Qutaish, Mohammed; Wilson, David L

    2016-01-01

    We created a metastasis imaging and analysis platform consisting of software and a multi-spectral cryo-imaging system suitable for evaluating emerging imaging agents targeting micro-metastatic tumors. We analyzed CREKA-Gd in MRI, followed by cryo-imaging which repeatedly sectioned and tiled microscope images of the tissue block face, providing anatomical bright field and molecular fluorescence, enabling 3D microscopic imaging of the entire mouse with single metastatic cell sensitivity. To register MRI volumes to the cryo bright field reference, we used our standard mutual information, non-rigid registration which proceeded: preprocess → affine → B-spline non-rigid 3D registration. In this report, we created two modified approaches: mask, where we registered locally over a smaller rectangular solid, and sliding organ. Briefly, in sliding organ, we segmented the organ, registered the organ and body volumes separately, and combined the results. Though sliding organ required manual annotation, it provided the best result and served as a standard against which to measure the other registration methods. Regularization parameters for the standard and mask methods were optimized in a grid search. Evaluations consisted of DICE and visual scoring of a checkerboard display. Standard had an accuracy of 2 voxels in all regions except near the kidney, where there were 5 voxels of sliding. After mask and sliding organ correction, kidney sliding was within 2 voxels, and Dice overlap increased 4%-10% in mask compared to standard. Mask generated results comparable with sliding organ and allowed a semi-automatic process.
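    For readers who want to reproduce the general preprocess → affine → B-spline mutual-information scheme, the sketch below uses SimpleITK as an illustrative stand-in; the study used its own pipeline, and the file names, mesh size, and optimizer settings here are assumptions.

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("cryo_brightfield.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("mri_volume.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)

# Stage 1: affine alignment
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.AffineTransform(3)))
affine = reg.Execute(fixed, moving)

# Stage 2: B-spline non-rigid refinement of the affine result
moving_affine = sitk.Resample(moving, fixed, affine)
reg.SetInitialTransform(sitk.BSplineTransformInitializer(
    fixed, transformDomainMeshSize=[8, 8, 8]))
bspline = reg.Execute(fixed, moving_affine)
```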

  5. Microscopic validation of whole mouse micro-metastatic tumor imaging agents using cryo-imaging and sliding organ image registration

    NASA Astrophysics Data System (ADS)

    Liu, Yiqiao; Zhou, Bo; Qutaish, Mohammed; Wilson, David L.

    2016-03-01

    We created a metastasis imaging and analysis platform consisting of software and a multi-spectral cryo-imaging system suitable for evaluating emerging imaging agents targeting micro-metastatic tumors. We analyzed CREKA-Gd in MRI, followed by cryo-imaging which repeatedly sectioned and tiled microscope images of the tissue block face, providing anatomical bright field and molecular fluorescence, enabling 3D microscopic imaging of the entire mouse with single metastatic cell sensitivity. To register MRI volumes to the cryo bright field reference, we used our standard mutual information, non-rigid registration which proceeded: preprocess --> affine --> B-spline non-rigid 3D registration. In this report, we created two modified approaches: mask, where we registered locally over a smaller rectangular solid, and sliding organ. Briefly, in sliding organ, we segmented the organ, registered the organ and body volumes separately, and combined the results. Though sliding organ required manual annotation, it provided the best result and served as a standard against which to measure the other registration methods. Regularization parameters for the standard and mask methods were optimized in a grid search. Evaluations consisted of DICE and visual scoring of a checkerboard display. Standard had an accuracy of 2 voxels in all regions except near the kidney, where there were 5 voxels of sliding. After mask and sliding organ correction, kidney sliding was within 2 voxels, and Dice overlap increased 4%-10% in mask compared to standard. Mask generated results comparable with sliding organ and allowed a semi-automatic process.

  6. Application of AIS Technology to Forest Mapping

    NASA Technical Reports Server (NTRS)

    Yool, S. R.; Star, J. L.

    1985-01-01

    Concerns about the environmental effects of large scale deforestation have prompted efforts to map forests over large areas using various remote sensing data and image processing techniques. Basic research on the spectral characteristics of forest vegetation is required to form a basis for the development of new techniques and for image interpretation. Examination of LANDSAT data and image processing algorithms over a portion of boreal forest has demonstrated the complexity of the relations between the various expressions of forest canopies, environmental variability, and the relative capacities of different image processing algorithms to achieve high classification accuracies under these conditions. Airborne Imaging Spectrometer (AIS) data may in part provide the means to interpret the responses of standard data and techniques to the vegetation, owing to its relatively high spectral resolution.

  7. Spatial recurrence analysis: A sensitive and fast detection tool in digital mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prado, T. L.; Galuzio, P. P.; Lopes, S. R.

    Efficient diagnosis of breast cancer requires fast digital mammographic image processing. Many breast lesions, both benign and malignant, are barely visible to the untrained eye and require accurate and reliable methods of image processing. We propose a new method of digital mammographic image analysis that meets both needs. It uses the concept of spatial recurrence as the basis of a spatial recurrence quantification analysis, which is the spatial extension of the well-known time recurrence analysis. The recurrence-based quantifiers are able to highlight breast lesions as well as the best standard image processing methods available, but with better control over spurious fragments in the image.
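    The abstract does not give the exact quantifiers, so the following is only a minimal illustration of the idea of spatial recurrence: for each pixel, the fraction of neighbours in a local window whose intensity recurs within a tolerance eps, a spatial analogue of the recurrence rate. The window size and tolerance are assumed values, not the authors' parameterization.

```python
import numpy as np

def spatial_recurrence_rate(img, w=5, eps=0.05):
    """Fraction of neighbours in a (2w+1)x(2w+1) window whose intensity lies
    within eps of the centre pixel (illustrative spatial recurrence rate)."""
    img = img.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)      # normalise to [0, 1]
    padded = np.pad(img, w, mode="reflect")
    rate = np.zeros_like(img)
    count = 0
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[w + dy:w + dy + img.shape[0],
                             w + dx:w + dx + img.shape[1]]
            rate += (np.abs(shifted - img) < eps)
            count += 1
    return rate / count

# Hypothetical usage: lesions and their borders perturb an otherwise locally
# homogeneous recurrence-rate map.
# rr_map = spatial_recurrence_rate(mammogram, w=7, eps=0.03)
```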

  8. MMX-I: data-processing software for multimodal X-ray imaging and tomography.

    PubMed

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-05-01

    A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline and even on a standard PC. To the authors' knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments.

  9. Automatized image processing of bovine blastocysts produced in vitro for quantitative variable determination

    NASA Astrophysics Data System (ADS)

    Rocha, José Celso; Passalia, Felipe José; Matos, Felipe Delestro; Takahashi, Maria Beatriz; Maserati, Marc Peter, Jr.; Alves, Mayra Fernanda; de Almeida, Tamie Guibu; Cardoso, Bruna Lopes; Basso, Andrea Cristina; Nogueira, Marcelo Fábio Gouveia

    2017-12-01

    There is currently no objective, real-time and non-invasive method for evaluating the quality of mammalian embryos. In this study, we processed images of in vitro produced bovine blastocysts to obtain a deeper comprehension of the embryonic morphological aspects that are related to the standard evaluation of blastocysts. Information was extracted from 482 digital images of blastocysts. The resulting imaging data were individually evaluated by three experienced embryologists who graded their quality. To avoid evaluation bias, each image was related to the modal value of the evaluations. Automated image processing produced 36 quantitative variables for each image. The images, the modal and individual quality grades, and the variables extracted could potentially be used in the development of artificial intelligence techniques (e.g., evolutionary algorithms and artificial neural networks), multivariate modelling and the study of defined structures of the whole blastocyst.

  10. Traumatic Brain Injury Diffusion Magnetic Resonance Imaging Research Roadmap Development Project

    DTIC Science & Technology

    2011-10-01

    A promising technology on the horizon is Diffusion Tensor Imaging (DTI). Diffusion tensor imaging (DTI) is a magnetic resonance imaging (MRI)-based ... in the brain. The potential for DTI to improve our understanding of TBI has not been fully explored and challenges associated with non-existent ... processing tools, quality control standards, and a shared image repository. The recommendations will be disseminated and pilot tested. A DTI of TBI

  11. Quality control and assurance for validation of DOS/I measurements

    NASA Astrophysics Data System (ADS)

    Cerussi, Albert; Durkin, Amanda; Kwong, Richard; Quang, Timothy; Hill, Brian; Tromberg, Bruce J.; MacKinnon, Nick; Mantulin, William W.

    2010-02-01

    Ongoing multi-center clinical trials are crucial for Biophotonics to gain acceptance in medical imaging. In these trials, quality control (QC) and assurance (QA) are key to success and provide "data insurance". Quality control and assurance deal with standardization, validation, and compliance of procedures, materials and instrumentation. Specifically, QC/QA involves systematic assessment of testing materials, instrumentation performance, standard operating procedures, data logging, analysis, and reporting. QC and QA are important for FDA accreditation and acceptance by the clinical community. Our Biophotonics research in the Network for Translational Research in Optical Imaging (NTROI) program for breast cancer characterization focuses on QA/QC issues primarily related to the broadband Diffuse Optical Spectroscopy and Imaging (DOS/I) instrumentation, because this is an emerging technology with limited standardized QC/QA in place. In the multi-center trial environment, we implement QA/QC procedures: 1. Standardize and validate calibration standards and procedures. (DOS/I technology requires both frequency domain and spectral calibration procedures using tissue simulating phantoms and reflectance standards, respectively.) 2. Standardize and validate data acquisition, processing and visualization (optimize instrument software-EZDOS; centralize data processing) 3. Monitor, catalog and maintain instrument performance (document performance; modularize maintenance; integrate new technology) 4. Standardize and coordinate trial data entry (from individual sites) into centralized database 5. Monitor, audit and communicate all research procedures (database, teleconferences, training sessions) between participants ensuring "calibration". This manuscript describes our ongoing efforts, successes and challenges implementing these strategies.

  12. Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.

    PubMed

    Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha

    2017-04-01

    Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan-, subject-, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NIfTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach, even for relatively small file sets. Moreover, file access latency is lower than that of network attached storage.
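    A minimal sketch of a hierarchical row-key in the spirit of the design described above; the field order, separator and zero-padded widths are illustrative assumptions rather than the paper's exact specification. Because keys sharing a project/subject/session prefix sort together lexicographically, hierarchically related images stay collocated in the store.

```python
# Illustrative hierarchical row-key builder (assumed layout, not the paper's spec).
def make_row_key(project, subject, session, scan, slice_idx):
    # Fixed-width, zero-padded numeric fields keep lexicographic order equal to
    # numeric order, so all slices of a scan (and all scans of a session) sort together.
    return "{:s}|{:s}|{:s}|{:04d}|{:05d}".format(
        project, subject, session, int(scan), int(slice_idx))

key = make_row_key("proj01", "subj0042", "sess01", 3, 128)
# -> "proj01|subj0042|sess01|0003|00128"
# With an HBase client, this string would serve as the row key under which the
# slice payload is written into a column family.
```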

  13. Split-screen display system and standardized methods for ultrasound image acquisition and multi-frame data processing

    NASA Technical Reports Server (NTRS)

    Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)

    2011-01-01

    A standardized acquisition methodology assists operators to accurately replicate high resolution B-mode ultrasound images obtained over several spaced-apart examinations utilizing a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time "live" ultrasound image from a current examination is displayed next to the earlier image on the opposite side of the screen. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Utilizing this methodology, dynamic material properties of arterial structures, such as IMT and diameter, are measured in a standard region over successive image frames. Each frame of the sequence has its echo edge boundaries automatically determined by using the immediately prior frame's true echo edge coordinates as initial boundary conditions. Computerized echo edge recognition and tracking over multiple successive image frames enhances measurement of arterial diameter and IMT and allows for improved vascular dimension measurements, including vascular stiffness and IMT determinations.

  14. All-CMOS night vision viewer with integrated microdisplay

    NASA Astrophysics Data System (ADS)

    Goosen, Marius E.; Venter, Petrus J.; du Plessis, Monuko; Faure, Nicolaas M.; Janse van Rensburg, Christo; Rademeyer, Pieter

    2014-02-01

    The unrivalled integration potential of CMOS has made it the dominant technology for digital integrated circuits. With the advent of visible light emission from silicon through hot carrier electroluminescence, several applications arose, all of which rely upon the advantages of mature CMOS technologies for a competitive edge in a very active and attractive market. In this paper we present a low-cost night vision viewer which employs only standard CMOS technologies. A commercial CMOS imager is utilized for near infrared image capturing with a 128x96 pixel all-CMOS microdisplay implemented to convey the image to the user. The display is implemented in a standard 0.35 μm CMOS process, with no process alterations or post processing. The display features a 25 μm pixel pitch and a 3.2 mm x 2.4 mm active area, which through magnification presents the virtual image to the user equivalent of a 19-inch display viewed from a distance of 3 meters. This work represents the first application of a CMOS microdisplay in a low-cost consumer product.

  15. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard of quality demanded by consumers has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), which is a familiar tool for quality control agents.

  16. Image Processing for Bioluminescence Resonance Energy Transfer Measurement-BRET-Analyzer.

    PubMed

    Chastagnier, Yan; Moutin, Enora; Hemonnot, Anne-Laure; Perroy, Julie

    2017-01-01

    A growing number of tools now allow live recording of various signaling pathways and protein-protein interaction dynamics in time and space by ratiometric measurements, such as Bioluminescence Resonance Energy Transfer (BRET) imaging. Accurate and reproducible analysis of ratiometric measurements has thus become mandatory to interpret quantitative imaging. To fulfill this need, we have developed an open source toolset for Fiji, BRET-Analyzer, allowing a systematic analysis from image processing to ratio quantification. We share this open source solution and a step-by-step tutorial at https://github.com/ychastagnier/BRET-Analyzer. This toolset proposes (1) image background subtraction, (2) image alignment over time, (3) a composite thresholding method of the image used as the denominator of the ratio to refine the precise limits of the sample, (4) pixel by pixel division of the images and efficient distribution of the ratio intensity on a pseudocolor scale, and (5) quantification of the ratio mean intensity and standard variation among pixels in chosen areas. In addition to systematizing the analysis process, we show that BRET-Analyzer allows proper reconstitution and quantification of the ratiometric image in time and space, even from heterogeneous subcellular volumes. Indeed, analyzing the same images twice, we demonstrate that, compared to standard analysis, BRET-Analyzer precisely defines the limits of the luminescent specimen, capturing signals from both small and large ensembles over time. For example, we followed and quantified, live, scaffold protein interaction dynamics in neuronal subcellular compartments, including dendritic spines, for half an hour. In conclusion, BRET-Analyzer provides a complete, versatile and efficient toolset for automated, reproducible and meaningful image ratio analysis.
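    A minimal numpy sketch of steps (1), (3), (4) and (5) above (background subtraction, thresholding of the denominator image to delimit the specimen, pixel-by-pixel division, and mean/standard deviation within the thresholded area). It is not the Fiji toolset itself; the alignment step (2) and the pseudocolor display are omitted, and the threshold fraction is an assumed value.

```python
import numpy as np

def ratiometric_analysis(num_img, den_img, background=None, thresh_frac=0.1):
    """Illustrative ratiometric analysis: background subtraction, denominator
    thresholding, pixel-by-pixel ratio, and summary statistics within the mask."""
    num = num_img.astype(float)
    den = den_img.astype(float)
    if background is not None:                      # (1) background subtraction
        num -= background[0]                        # background for the numerator channel
        den -= background[1]                        # background for the denominator channel
    mask = den > thresh_frac * den.max()            # (3) specimen limits from the denominator
    ratio = np.zeros_like(num)
    ratio[mask] = num[mask] / den[mask]             # (4) pixel-by-pixel ratio
    return ratio, ratio[mask].mean(), ratio[mask].std()   # (5) mean and std in the mask
```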

  17. Developing tools for digital radar image data evaluation

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.; Raggam, J.

    1986-01-01

    The refinement of radar image analysis methods has led to a need for a systems approach to radar image processing software. Developments stimulated by satellite radar are combined with standard image processing techniques to create a user environment for manipulating and analyzing airborne and satellite radar images. One aim is to create radar products from the original data that make the image contents easier for the user to understand. The results are called secondary image products and derive from the original digital images. Another aim is to support interactive SAR image analysis. Software methods permit use of a digital height model to create ortho images, synthetic images, stereo-ortho images, radar maps or color combinations of different component products. Efforts are ongoing to integrate individual tools into a combined hardware/software environment for interactive radar image analysis.

  18. Integration of CBIR in radiological routine in accordance with IHE

    NASA Astrophysics Data System (ADS)

    Welter, Petra; Deserno, Thomas M.; Fischer, Benedikt; Wein, Berthold B.; Ott, Bastian; Günther, Rolf W.

    2009-02-01

    Increasing use of digital image processing leads to an enormous amount of imaging data. Access to picture archiving and communication systems (PACS), however, is solely textual, leading to sparse retrieval results because of ambiguous or missing image descriptions. Content-based image retrieval (CBIR) systems can improve the clinical diagnostic outcome significantly. However, current CBIR systems are not able to integrate their results with the clinical workflow and PACS. Existing communication standards like DICOM and HL7 leave many options for implementation and do not ensure full interoperability. We present a concept for the standardized integration of a CBIR system into the radiology workflow in accordance with the Integrating the Healthcare Enterprise (IHE) framework. This is based on the IHE integration profile 'Post-Processing Workflow' (PPW), which defines responsibilities as well as standardized communication, and utilizes the DICOM Structured Report (DICOM SR). Because most PACS and RIS systems are not yet fully IHE compliant with PPW, we also suggest an intermediate approach based on the concepts of the CAD-PACS Toolkit. The integration is independent of the particular PACS and RIS system. Therefore, it supports the widespread application of CBIR in radiological routine. As a result, the approach is exemplarily applied to the Image Retrieval in Medical Applications (IRMA) framework.

  19. A method of camera calibration in the measurement process with reference mark for approaching observation space target

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Zeng, Luan

    2017-11-01

    Binocular stereoscopic vision can be used for close-range, space-based observation of space targets. To address the problem that a traditional binocular vision system cannot work normally after being disturbed, an online calibration method for a binocular stereo measuring camera with a self-reference is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object at the edge of the main optical path so that it is imaged on the same focal plane as the target, which is equivalent to placing a standard reference in the binocular imaging optical system. When the position of the system or the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the imaging plane, while the position of the standard reference object itself does not change. The camera's external parameters can then be re-calibrated from the visual relationship of the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in elevation. This method enables online calibration of a binocular stereoscopic vision measurement system and can effectively improve the anti-jamming ability of the system.

  20. Comparison of DP3 Signals Evoked by Comfortable 3D Images and 2D Images — an Event-Related Potential Study using an Oddball Task

    NASA Astrophysics Data System (ADS)

    Ye, Peng; Wu, Xiang; Gao, Dingguo; Liang, Haowen; Wang, Jiahui; Deng, Shaozhi; Xu, Ningsheng; She, Juncong; Chen, Jun

    2017-02-01

    The horizontal binocular disparity is a critical factor in the visual fatigue induced by watching stereoscopic TVs. Stereoscopic images whose disparity lies within the ‘comfort zone’ and that remain still in the depth direction are considered as comfortable to viewers as 2D images. However, the difference in brain activity between processing such comfortable stereoscopic images and processing 2D images remains little studied. The DP3 (differential P3) signal refers to an event-related potential (ERP) component indicating attentional processes, which is typically evoked by odd target stimuli among standard stimuli in an oddball task. The present study found that the DP3 signal elicited by the comfortable 3D images exhibits a delayed peak latency and enhanced peak amplitude over the anterior and central scalp regions compared to the 2D images. The finding suggests that compared to the processing of 2D images, more attentional resources are involved in the processing of the stereoscopic images even though they are subjectively comfortable.

  1. Retinal Information Processing for Minimum Laser Lesion Detection and Cumulative Damage

    DTIC Science & Technology

    1992-09-17

    MYRON L. WOLBARSHT ... possible beneficial visual function of the small retinal image movements. B. Visual System Models. Prior models of visual system information processing have ... against standard secondary sources whose calibrations can be traced to the National Bureau of Standards. B. Electrophysiological Techniques. Extracellular

  2. IR CMOS: near infrared enhanced digital imaging (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    Pralle, Martin U.; Carey, James E.; Joy, Thomas; Vineis, Chris J.; Palsule, Chintamani

    2015-08-01

    SiOnyx has demonstrated imaging at light levels below 1 mLux (moonless starlight) at video frame rates with a 720P CMOS image sensor in a compact, low latency camera. Low light imaging is enabled by the combination of enhanced quantum efficiency in the near infrared together with state of the art low noise image sensor design. The quantum efficiency enhancements are achieved by applying Black Silicon, SiOnyx's proprietary ultrafast laser semiconductor processing technology. In the near infrared, silicon's native indirect bandgap results in low absorption coefficients and long absorption lengths. The Black Silicon nanostructured layer fundamentally disrupts this paradigm by enhancing the absorption of light within a thin pixel layer, making 5 microns of silicon equivalent to over 300 microns of standard silicon. This results in a demonstrated 10-fold improvement in near infrared sensitivity over incumbent imaging technology while maintaining complete compatibility with standard CMOS image sensor process flows. Applications include surveillance, night vision, and 1064 nm laser see-spot. Imaging performance metrics will be discussed. Demonstrated performance characteristics: pixel size, 5.6 and 10 μm; array size, 720P/1.3 Mpix; frame rate, 60 Hz; read noise, 2 electrons/pixel; spectral sensitivity, 400 to 1200 nm (with 10x QE at 1064 nm); daytime imaging, color (Bayer pattern); nighttime imaging, moonless starlight conditions; 1064 nm laser imaging, daytime imaging out to 2 km.

  3. Digital document imaging systems: An overview and guide

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This is an aid to NASA managers in planning the selection of a Digital Document Imaging System (DDIS) as a possible solution for document information processing and storage. Intended to serve as a manager's guide, this document contains basic information on digital imaging systems, technology, equipment standards, issues of interoperability and interconnectivity, and issues related to selecting appropriate imaging equipment based upon well defined needs.

  4. Concepts for on-board satellite image registration. Volume 2: IAS prototype performance evaluation standard definition

    NASA Astrophysics Data System (ADS)

    Daluge, D. R.; Ruedger, W. H.

    1981-06-01

    Problems encountered in testing onboard signal processing hardware designed to achieve radiometric and geometric correction of satellite imaging data are considered. These include obtaining representative image and ancillary data for simulation and the transfer and storage of a large quantity of image data at very high speed. The high resolution, high speed preprocessing of LANDSAT-D imagery is considered.

  5. Potential Bone to Implant Contact Area of Short Versus Standard Implants: An In Vitro Micro-Computed Tomography Analysis.

    PubMed

    Quaranta, Alessandro; DʼIsidoro, Orlando; Bambini, Fabrizio; Putignano, Angelo

    2016-02-01

    To compare the available potential bone-implant contact (PBIC) area of standard and short dental implants by micro-computed tomography (μCT) assessment. Three short implants with different diameters (4.5 × 6 mm, 4.1 × 7 mm, and 4.1 × 6 mm) and 2 standard implants (3.5 × 10 mm and 3.3 × 9 mm) with diverse design and surface features were scanned with μCT. Cross-sectional images were obtained. Image data were manually processed to find the plane that corresponds to the most coronal contact point between the crestal bone and the implant. The available PBIC was calculated for each sample. Subsequently, the cross-sectional slices were processed with 3-dimensional (3D) software, and 3D images of each sample were used for descriptive analysis and to display the microtopography and macrotopography. The wide-diameter short implant (4.5 × 6 mm) showed the highest PBIC value (210.89 mm) followed by the standard implants (178.07 mm and 185.37 mm) and the other short implants (130.70 mm and 110.70 mm). Wide-diameter short implants show a surface area comparable with that of standard implants. Micro-CT analysis is a promising technique to evaluate surface area in dental implants with different macrodesigns, microdesigns, and surface features.

  6. PICASSO: an end-to-end image simulation tool for space and airborne imaging systems

    NASA Astrophysics Data System (ADS)

    Cota, Steve A.; Bell, Jabin T.; Boucher, Richard H.; Dutton, Tracy E.; Florio, Chris J.; Franz, Geoffrey A.; Grycewicz, Thomas J.; Kalman, Linda S.; Keller, Robert A.; Lomheim, Terrence S.; Paulson, Diane B.; Willkinson, Timothy S.

    2008-08-01

    The design of any modern imaging system is the end result of many trade studies, each seeking to optimize image quality within real world constraints such as cost, schedule and overall risk. Image chain analysis - the prediction of image quality from fundamental design parameters - is an important part of this design process. At The Aerospace Corporation we have been using a variety of image chain analysis tools for many years, the Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) among them. In this paper we describe our PICASSO tool, showing how, starting with a high quality input image and hypothetical design descriptions representative of the current state of the art in commercial imaging satellites, PICASSO can generate standard metrics of image quality in support of the decision processes of designers and program managers alike.

  7. PICASSO: an end-to-end image simulation tool for space and airborne imaging systems

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Bell, Jabin T.; Boucher, Richard H.; Dutton, Tracy E.; Florio, Christopher J.; Franz, Geoffrey A.; Grycewicz, Thomas J.; Kalman, Linda S.; Keller, Robert A.; Lomheim, Terrence S.; Paulson, Diane B.; Wilkinson, Timothy S.

    2010-06-01

    The design of any modern imaging system is the end result of many trade studies, each seeking to optimize image quality within real world constraints such as cost, schedule and overall risk. Image chain analysis - the prediction of image quality from fundamental design parameters - is an important part of this design process. At The Aerospace Corporation we have been using a variety of image chain analysis tools for many years, the Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) among them. In this paper we describe our PICASSO tool, showing how, starting with a high quality input image and hypothetical design descriptions representative of the current state of the art in commercial imaging satellites, PICASSO can generate standard metrics of image quality in support of the decision processes of designers and program managers alike.

  8. Color standardization and optimization in whole slide imaging.

    PubMed

    Yagi, Yukako

    2011-03-30

    Standardization and validation of the color displayed by digital slides is an important aspect of digital pathology implementation. While the most common reason for color variation is variance in the protocols and practices of the histology lab, the color displayed can also be affected by variation in capture parameters (for example, illumination and filters), image processing and display factors in the digital systems themselves. We have been developing techniques for color validation and optimization along two paths. The first is based on two standard slides that are scanned and displayed by the imaging system in question. In this approach, one slide is embedded with nine filters with colors selected especially for H&E stained slides (resembling a tiny Macbeth color chart); the specific colors of the nine filters were determined in our previous study and modified for whole slide imaging (WSI). The other slide is an H&E stained mouse embryo. Both of these slides were scanned and the displayed images were compared to a standard. The second approach is based on our previous multispectral imaging research. As a first step, the two-slide method (above) was used to identify inaccurate display of color and its cause, and to understand the importance of accurate color in digital pathology. We have also improved the multispectral-based algorithm for more consistent results in stain standardization. In the near future, the results of the two-slide and multispectral techniques can be combined and will be widely available. We have been conducting a series of research and development projects to improve image quality and to establish image quality standardization. This paper discusses one of the most important aspects of image quality: color.

  9. Improving Spectral Image Classification through Band-Ratio Optimization and Pixel Clustering

    NASA Astrophysics Data System (ADS)

    O'Neill, M.; Burt, C.; McKenna, I.; Kimblin, C.

    2017-12-01

    The Underground Nuclear Explosion Signatures Experiment (UNESE) seeks to characterize non-prompt observables from underground nuclear explosions (UNE). As part of this effort, we evaluated the ability of DigitalGlobe's WorldView-3 (WV3) to detect and map UNE signatures. WV3 is the current state-of-the-art, commercial, multispectral imaging satellite; however, it has relatively limited spectral and spatial resolutions. These limitations impede image classifiers from detecting targets that are spatially small and lack distinct spectral features. In order to improve classification results, we developed custom algorithms to reduce false positive rates while increasing true positive rates via a band-ratio optimization and pixel clustering front-end. The clusters resulting from these algorithms were processed with standard spectral image classifiers such as Mixture-Tuned Matched Filter (MTMF) and Adaptive Coherence Estimator (ACE). WV3 and AVIRIS data of Cuprite, Nevada, were used as a validation data set. These data were processed with a standard classification approach using MTMF and ACE algorithms. They were also processed using the custom front-end prior to the standard approach. A comparison of the results shows that the custom front-end significantly increases the true positive rate and decreases the false positive rate. This work was done by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy. DOE/NV/25946-3283.
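    A minimal sketch of the front-end idea described above: form a band ratio from two chosen bands and cluster pixels on it before handing the clusters to a classifier. The ratio-optimization search over band pairs and the MTMF/ACE back-end are not shown, and the band indices and cluster count are assumed values.

```python
import numpy as np
from sklearn.cluster import KMeans

def band_ratio_clusters(cube, band_a, band_b, n_clusters=8):
    """Illustrative band-ratio + clustering front-end for an (H x W x B) cube.
    The optimization over band pairs and the MTMF/ACE classification stage are omitted."""
    a = cube[..., band_a].astype(float)
    b = cube[..., band_b].astype(float)
    ratio = a / (b + 1e-9)                      # simple two-band ratio image
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
        ratio.reshape(-1, 1))                   # cluster pixels on the ratio value
    return ratio, labels.reshape(ratio.shape)
```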

  10. Optimizing MR imaging-guided navigation for focused ultrasound interventions in the brain

    NASA Astrophysics Data System (ADS)

    Werner, B.; Martin, E.; Bauer, R.; O'Gorman, R.

    2017-03-01

    MR imaging during transcranial MR imaging-guided Focused Ultrasound surgery (tcMRIgFUS) is challenging due to the complex ultrasound transducer setup and the water bolus used for acoustic coupling. Achievable image quality in the tcMRIgFUS setup using the standard body coil is significantly inferior to current neuroradiologic standards. As a consequence, MR image guidance for precise navigation in functional neurosurgical interventions using tcMRIgFUS is basically limited to the acquisition of MR coordinates of salient landmarks such as the anterior and posterior commissure for aligning a stereotactic atlas. Here, we show how improved MR image quality provided by a custom built MR coil and optimized MR imaging sequences can support imaging-guided navigation for functional tcMRIgFUS neurosurgery by visualizing anatomical landmarks that can be integrated into the navigation process to accommodate for patient specific anatomy.

  11. Retinal imaging analysis based on vessel detection.

    PubMed

    Jamal, Arshad; Hazim Alkawaz, Mohammed; Rehman, Amjad; Saba, Tanzila

    2017-07-01

    With the advancement of digital imaging and computing power, computationally intelligent technologies are in high demand for use in ophthalmic care and treatment. In the current research, Retina Image Analysis (RIA) is developed for optometrists at the Eye Care Center in Management and Science University. This research aims to analyze the retina through vessel detection. The RIA assists in the analysis of retinal images, and specialists are provided with various options for saving, processing and analyzing retinal images through its advanced interface layout. Additionally, RIA assists in selecting vessel segments, processing these vessels by calculating their diameter, standard deviation and length, and displaying the detected vessels on the retina. The Agile Unified Process is adopted as the methodology in developing this research. To conclude, Retina Image Analysis might help the optometrist to gain a better understanding when analyzing the patient's retina. Finally, the Retina Image Analysis procedure is developed using MATLAB (R2011b). Promising results are attained that are comparable to the state of the art. © 2017 Wiley Periodicals, Inc.

  12. Visually representing reality: aesthetics and accessibility aspects

    NASA Astrophysics Data System (ADS)

    van Nes, Floris L.

    2009-02-01

    This paper gives an overview of the visual representation of reality with three imaging technologies: painting, photography and electronic imaging. The contribution of the important image aspects, called dimensions hereafter, such as color, fine detail and total image size, to the degree of reality and aesthetic value of the rendered image are described for each of these technologies. Whereas quite a few of these dimensions - or approximations, or even only suggestions thereof - were already present in prehistoric paintings, apparent motion and true stereoscopic vision only recently were added - unfortunately also introducing accessibility and image safety issues. Efforts are made to reduce the incidence of undesirable biomedical effects such as photosensitive seizures (PSS), visually induced motion sickness (VIMS), and visual fatigue from stereoscopic images (VFSI) by international standardization of the image parameters to be avoided by image providers and display manufacturers. The history of this type of standardization, from an International Workshop Agreement to a strategy for accomplishing effective international standardization by ISO, is treated at some length. One of the difficulties to be mastered in this process is the reconciliation of the, sometimes opposing, interests of vulnerable persons, thrill-seeking viewers, creative video designers and the game industry.

  13. Installing and Executing Information Object Analysis, Intent, Dissemination, and Enhancement (IOAIDE) and Its Dependencies

    DTIC Science & Technology

    2017-02-01

    Image Processing Web Server Administration ... Microsoft ASP.NET MVC 4 installation ... algorithms are made into client applications that can be accessed from an image processing web service developed following Representational State Transfer (REST) standards by a mobile app, laptop PC, and other devices. Similarly, weather tweets can be accessed via the Weather Digest Web Service

  14. A new iterative triclass thresholding technique in image segmentation.

    PubMed

    Cai, Hongmin; Yang, Zhong; Cao, Xinhua; Xia, Weiming; Xu, Xiaoyin

    2014-03-01

    We present a new method in image segmentation that is based on Otsu's method but iteratively searches for subregions of the image for segmentation, instead of treating the full image as a whole region for processing. The iterative method starts with Otsu's threshold and computes the mean values of the two classes as separated by the threshold. Based on Otsu's threshold and the two mean values, the method separates the image into three classes instead of two as the standard Otsu's method does. The first two classes are determined as the foreground and background and they will not be processed further. The third class is denoted as a to-be-determined (TBD) region that is processed at the next iteration. At the succeeding iteration, Otsu's method is applied on the TBD region to calculate a new threshold and two class means, and the TBD region is again separated into three classes, namely, foreground, background, and a new TBD region, which by definition is smaller than the previous TBD regions. Then, the new TBD region is processed in a similar manner. The process stops when the difference between the Otsu thresholds calculated in two successive iterations is less than a preset value. Then, all the intermediate foreground and background regions are, respectively, combined to create the final segmentation result. Tests on synthetic and real images showed that the new iterative method can achieve better performance than the standard Otsu's method in many challenging cases, such as identifying weak objects and revealing fine structures of complex objects, while the added computational cost is minimal.
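    A minimal sketch of the iterative triclass scheme as described above (an illustrative implementation, not the authors' code), using scikit-image's Otsu threshold; the stopping tolerance and iteration limit are assumed values.

```python
import numpy as np
from skimage.filters import threshold_otsu

def iterative_triclass(img, tol=1e-3, max_iter=50):
    """Iteratively split a shrinking TBD region into foreground, background and a
    new TBD region using Otsu's threshold and the two class means."""
    img = img.astype(float)
    fg = np.zeros(img.shape, dtype=bool)         # accumulated foreground
    tbd = np.ones(img.shape, dtype=bool)         # current to-be-determined region
    prev_t = None
    for _ in range(max_iter):
        vals = img[tbd]
        if vals.size == 0:
            break
        t = threshold_otsu(vals)
        if prev_t is not None and abs(t - prev_t) < tol:
            break                                # threshold change below tolerance
        prev_t = t
        mu_low = vals[vals <= t].mean()          # mean of the lower class
        mu_high = vals[vals > t].mean()          # mean of the upper class
        fg |= tbd & (img > mu_high)              # confidently foreground
        # pixels below mu_low are confidently background and simply leave the TBD region
        tbd = tbd & (img >= mu_low) & (img <= mu_high)   # new, smaller TBD region
    if prev_t is not None:
        fg |= tbd & (img > prev_t)               # split whatever TBD remains
    return fg
```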

  15. The Multimission Image Processing Laboratory's virtual frame buffer interface

    NASA Technical Reports Server (NTRS)

    Wolfe, T.

    1984-01-01

    Large image processing systems use multiple frame buffers with differing architectures and vendor supplied interfaces. This variety of architectures and interfaces creates software development, maintenance and portability problems for application programs. Several machine-dependent graphics standards such as ANSI Core and GKS are available, but none of them are adequate for image processing. Therefore, the Multimission Image Processing Laboratory project has implemented a programmer-level virtual frame buffer interface. This interface makes all frame buffers appear as a generic frame buffer with a specified set of characteristics. This document defines the virtual frame buffer interface and provides information such as FORTRAN subroutine definitions, frame buffer characteristics, sample programs, etc. It is intended to be used by application programmers and system programmers who are adding new frame buffers to a system.

  16. MMX-I: data-processing software for multimodal X-ray imaging and tomography

    PubMed Central

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-01-01

    A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline and even on a standard PC. To the authors’ knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments. PMID:27140159

  17. a Novel 3d Intelligent Fuzzy Algorithm Based on Minkowski-Clustering

    NASA Astrophysics Data System (ADS)

    Toori, S.; Esmaeily, A.

    2017-09-01

    Assessing and monitoring the state of the earth's surface is a key requirement for global change research. In this paper, we propose a new consensus fuzzy clustering algorithm that is based on the Minkowski distance. This research concentrates on Tehran's vegetation mass and its changes over 29 years using remote sensing technology. The main purpose of this research is to evaluate the changes in vegetation mass using a new process that combines intelligent NDVI fuzzy clustering with a Minkowski distance operation. The dataset includes Landsat 8 and Landsat TM images from 1989 to 2016. For each year, three images from three consecutive days were used to identify vegetation impact and recovery. The result was a 3D NDVI image, with one dimension for each day's NDVI. The next step was the classification procedure, a process of categorizing pixels into a finite number of separate classes based on their data values: if a pixel satisfies a certain set of criteria, it is allocated to the class that corresponds to those criteria. This method is less sensitive to noise and can integrate solutions from multiple samples of data or attributes. The result was a one-dimensional fuzzy image, which was also computed for each of the following 28 years. The classification was done in both specified urban and natural park areas of Tehran. Experiments showed that our method classified image pixels better than the standard classification methods.
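    A minimal sketch of the two ingredients named above, NDVI computation and fuzzy clustering with a Minkowski distance of order p. The fuzzy c-means update shown is a standard textbook formulation and only an assumed stand-in for the authors' consensus algorithm; the cluster count and p are illustrative values.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index per pixel."""
    return (nir - red) / (nir + red + 1e-9)

def fuzzy_cmeans_minkowski(X, n_clusters=3, p=3, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means with a Minkowski distance of order p.
    X is an (n_samples, n_features) array, e.g. per-pixel 3-day NDVI vectors."""
    rng = np.random.default_rng(seed)
    u = rng.random((X.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                        # random initial memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]       # weighted cluster centres
        d = (np.abs(X[:, None, :] - centers[None, :, :]) ** p).sum(axis=2) ** (1.0 / p)
        d = np.maximum(d, 1e-12)                             # Minkowski distances
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)             # membership update
    return u, centers

# Hypothetical usage: stack three consecutive-day NDVI maps into per-pixel features.
# features = np.stack([ndvi(nir_d, red_d) for nir_d, red_d in days], axis=-1).reshape(-1, 3)
# memberships, centres = fuzzy_cmeans_minkowski(features, n_clusters=3)
```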

  18. Strain analysis in CRT candidates using the novel segment length in cine (SLICE) post-processing technique on standard CMR cine images.

    PubMed

    Zweerink, Alwin; Allaart, Cornelis P; Kuijer, Joost P A; Wu, LiNa; Beek, Aernout M; van de Ven, Peter M; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick; van Rossum, Albert C; Nijveldt, Robin

    2017-12-01

    Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. • Myocardial strain analysis could potentially improve patient selection for CRT. • Currently a well validated clinical approach to derive segmental strains is lacking. • The novel SLICE technique derives segmental strains from standard CMR cine images. • SLICE-derived strain markers of CRT response showed close agreement with CMR-TAG. • Future studies will focus on the prognostic value of SLICE in CRT candidates.
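    The core computation behind SLICE, as described above, is a frame-to-frame segment length change expressed relative to a reference length. A minimal sketch, assuming two landmark positions per cine phase and the first frame as the end-diastolic reference (these assumptions are illustrative, not the authors' exact protocol):

```python
import numpy as np

def slice_strain(landmark_xy):
    """Segmental strain per frame from two landmark positions bounding a segment:
    e(t) = (L(t) - L(0)) / L(0), with frame 0 taken as the end-diastolic reference.
    landmark_xy has shape (n_frames, 2, 2): per frame, two (x, y) landmark points."""
    landmark_xy = np.asarray(landmark_xy, dtype=float)
    lengths = np.linalg.norm(landmark_xy[:, 1] - landmark_xy[:, 0], axis=1)
    return (lengths - lengths[0]) / lengths[0]
```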

  19. A back-illuminated megapixel CMOS image sensor

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata; Cunningham, Thomas; Nikzad, Shouleh; Hoenk, Michael; Jones, Todd; Wrigley, Chris; Hancock, Bruce

    2005-01-01

    In this paper, we present the test and characterization results for a back-illuminated megapixel CMOS imager. The imager pixel consists of a standard junction photodiode coupled to a three-transistor-per-pixel switched source-follower readout [1]. The imager also includes integrated timing, control, and bias generation circuits, and provides analog output. The analog column-scan circuits were implemented in such a way that the imager could be configured to run in off-chip correlated double-sampling (CDS) mode. The imager was originally designed for normal front-illuminated operation, and was fabricated in a commercially available 0.5 μm triple-metal CMOS-imager compatible process. For backside illumination, the imager was thinned by etching away the substrate in a post-fabrication processing step.

  20. Go With the Flow, on Jupiter and Snow. Coherence from Model-Free Video Data Without Trajectories

    NASA Astrophysics Data System (ADS)

    AlMomani, Abd AlRahman R.; Bollt, Erik

    2018-06-01

    Viewing a data set such as the clouds of Jupiter, coherence is readily apparent to human observers, especially the Great Red Spot, but also other great storms and persistent structures. There are now many different definitions and perspectives mathematically describing coherent structures, but we will take an image processing perspective here. We describe the inference of coherent sets of a fluidic system directly from image data, without attempting to first model underlying flow fields, related to a concept in image processing called motion tracking. In contrast to standard spectral methods for image processing, which are generally related to a symmetric affinity matrix and hence to standard spectral graph theory, we need a non-symmetric affinity, which arises naturally from the underlying arrow of time. We develop an anisotropic, directed diffusion operator corresponding to flow on a directed graph, built from a directed affinity matrix designed with coherence in mind, and the corresponding spectral graph theory from the graph Laplacian. Our methodology is not offered as more accurate than other traditional methods of finding coherent sets; rather, our approach works with alternative kinds of data sets, in the absence of a vector field. Our examples include partitioning the weather and cloud structures of Jupiter, and a lake effect snow event local to Potsdam, NY, on Earth, as well as the benchmark test double-gyre system.
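    A toy sketch of a time-directed (hence non-symmetric) affinity between patches of two consecutive frames, and the corresponding directed graph Laplacian. The Gaussian patch-similarity kernel, patch size and bandwidth are illustrative assumptions, not the paper's operator.

```python
import numpy as np

def directed_affinity(frame_t, frame_t1, patch=8, sigma=10.0):
    """Affinity A[i, j] between patch i at time t and patch j at time t+1.
    The matrix is deliberately non-symmetric because it follows the arrow of time."""
    def patches(img):
        H, W = img.shape
        ps = [img[r:r + patch, c:c + patch].ravel()
              for r in range(0, H - patch + 1, patch)
              for c in range(0, W - patch + 1, patch)]
        return np.asarray(ps, dtype=float)
    P0, P1 = patches(frame_t), patches(frame_t1)
    d2 = ((P0[:, None, :] - P1[None, :, :]) ** 2).sum(axis=2)   # squared patch distances
    return np.exp(-d2 / (2.0 * sigma ** 2))

def directed_laplacian(A):
    """Out-degree graph Laplacian L = D_out - A of the directed affinity graph;
    its spectrum is one route to partitioning the frames into coherent sets."""
    return np.diag(A.sum(axis=1)) - A
```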

  1. Investigation of autofocus algorithms for brightfield microscopy of unstained cells

    NASA Astrophysics Data System (ADS)

    Wu, Shu Yu; Dugan, Nazim; Hennelly, Bryan M.

    2014-05-01

    In the past decade there has been significant interest in image processing for brightfield cell microscopy. Much of the previous research on image processing for microscopy has focused on fluorescence microscopy, including cell counting, cell tracking, cell segmentation and autofocusing. Fluorescence microscopy provides functional image information that involves the use of labels in the form of chemical stains or dyes. For some applications, where the biochemical integrity of the cell is required to remain unchanged so that sensitive chemical testing can later be applied, it is necessary to avoid staining. For this reason the challenge of processing images of unstained cells has become a topic of increasing attention. These cells are often effectively transparent and appear to have a homogenous intensity profile when they are in focus. Bright field microscopy is the most universally available and most widely used form of optical microscopy and for this reason we are interested in investigating image processing of unstained cells recorded using a standard bright field microscope. In this paper we investigate the application of a range of different autofocus metrics applied to unstained bladder cancer cell lines using a standard inverted bright field microscope with microscope objectives that have high magnification and numerical aperture. We present a number of conclusions on the optimum metrics and the manner in which they should be applied for this application.
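    A few standard focus metrics of the kind such a study would compare (image variance, normalized variance, and the Tenengrad gradient-energy measure), applied across a through-focus stack; this is an illustrative selection and parameterization, not the paper's specific metric set.

```python
import numpy as np
from scipy import ndimage

def variance_metric(img):
    """Plain intensity variance; rises as structures come into focus."""
    return img.astype(float).var()

def normalized_variance(img):
    """Variance normalized by mean intensity, reducing sensitivity to illumination."""
    img = img.astype(float)
    return img.var() / (img.mean() + 1e-12)

def tenengrad(img):
    """Sum of squared Sobel gradient magnitudes (gradient-energy focus measure)."""
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    return (gx ** 2 + gy ** 2).sum()

def best_focus(stack, metric=tenengrad):
    """Return the index of the sharpest frame in a through-focus stack."""
    scores = [metric(frame) for frame in stack]
    return int(np.argmax(scores)), scores
```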

  2. Super-resolution imaging of subcortical white matter using stochastic optical reconstruction microscopy (STORM) and super-resolution optical fluctuation imaging (SOFI)

    PubMed Central

    Hainsworth, A. H.; Lee, S.; Patel, A.; Poon, W. W.; Knight, A. E.

    2018-01-01

    Aims: The spatial resolution of light microscopy is limited by the wavelength of visible light (the ‘diffraction limit’, approximately 250 nm). Resolution of sub-cellular structures, smaller than this limit, is possible with super resolution methods such as stochastic optical reconstruction microscopy (STORM) and super-resolution optical fluctuation imaging (SOFI). We aimed to resolve subcellular structures (axons, myelin sheaths and astrocytic processes) within intact white matter, using STORM and SOFI. Methods: Standard cryostat-cut sections of subcortical white matter from donated human brain tissue and from adult rat and mouse brain were labelled, using standard immunohistochemical markers (neurofilament-H, myelin-associated glycoprotein, glial fibrillary acidic protein, GFAP). Image sequences were processed for STORM (effective pixel size 8–32 nm) and for SOFI (effective pixel size 80 nm). Results: In human, rat and mouse subcortical white matter, high-quality images of axonal neurofilaments, myelin sheaths and filamentous astrocytic processes were obtained. In quantitative measurements, STORM consistently underestimated the width of axons and astrocyte processes (compared with electron microscopy measurements). SOFI provided more accurate width measurements, though with somewhat lower spatial resolution than STORM. Conclusions: Super resolution imaging of intact cryo-cut human brain tissue is feasible. For quantitation, STORM can underestimate diameters of thin fluorescent objects. SOFI is more robust. The greatest limitation for super-resolution imaging in brain sections is imposed by sample preparation. We anticipate that improved strategies to reduce autofluorescence and to enhance fluorophore performance will enable rapid expansion of this approach. PMID:28696566

  3. Super-resolution imaging of subcortical white matter using stochastic optical reconstruction microscopy (STORM) and super-resolution optical fluctuation imaging (SOFI).

    PubMed

    Hainsworth, A H; Lee, S; Foot, P; Patel, A; Poon, W W; Knight, A E

    2018-06-01

    The spatial resolution of light microscopy is limited by the wavelength of visible light (the 'diffraction limit', approximately 250 nm). Resolution of sub-cellular structures, smaller than this limit, is possible with super resolution methods such as stochastic optical reconstruction microscopy (STORM) and super-resolution optical fluctuation imaging (SOFI). We aimed to resolve subcellular structures (axons, myelin sheaths and astrocytic processes) within intact white matter, using STORM and SOFI. Standard cryostat-cut sections of subcortical white matter from donated human brain tissue and from adult rat and mouse brain were labelled, using standard immunohistochemical markers (neurofilament-H, myelin-associated glycoprotein, glial fibrillary acidic protein, GFAP). Image sequences were processed for STORM (effective pixel size 8-32 nm) and for SOFI (effective pixel size 80 nm). In human, rat and mouse subcortical white matter, high-quality images of axonal neurofilaments, myelin sheaths and filamentous astrocytic processes were obtained. In quantitative measurements, STORM consistently underestimated the width of axons and astrocyte processes (compared with electron microscopy measurements). SOFI provided more accurate width measurements, though with somewhat lower spatial resolution than STORM. Super resolution imaging of intact cryo-cut human brain tissue is feasible. For quantitation, STORM can underestimate diameters of thin fluorescent objects. SOFI is more robust. The greatest limitation for super-resolution imaging in brain sections is imposed by sample preparation. We anticipate that improved strategies to reduce autofluorescence and to enhance fluorophore performance will enable rapid expansion of this approach. © 2017 British Neuropathological Society.

  4. Application of off-line image processing for optimization in chest computed radiography using a low cost system.

    PubMed

    Muhogora, Wilbroad E; Msaki, Peter; Padovani, Renato

    2015-03-08

    The objective of this study was to improve the visibility of anatomical details by applying off-line post-image processing in chest computed radiography (CR). Four spatial domain-based external image processing techniques were developed using MATLAB software version 7.0.0.19920 (R14) and its image processing tools. The developed techniques were applied to sample images, and their visual appearance was confirmed by two consultant radiologists to be clinically adequate. The techniques were then applied to 200 chest clinical images and randomized with another 100 images previously processed online. These 300 images were presented to three experienced radiologists for image quality assessment using standard quality criteria. The means and ranges of the average scores for the three radiologists were characterized for each of the developed techniques and imaging systems. The Mann-Whitney U-test was used to test the difference in detail visibility between the images processed using each of the developed techniques and the corresponding images processed using default algorithms. The results show that the visibility of anatomical features improved significantly (0.005 ≤ p ≤ 0.02) with combinations of intensity value adjustment and/or spatial linear filtering techniques for images acquired using 60 ≤ kVp ≤ 70. However, there was no improvement for images acquired using 102 ≤ kVp ≤ 107 (0.127 ≤ p ≤ 0.48). In conclusion, the use of external image processing for optimization can be effective in chest CR, but should be implemented in consultation with the radiologists.
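    A Python analogue (the study itself used MATLAB) of two of the spatial-domain operations named above, intensity-value adjustment and spatial linear filtering; the percentile limits, Gaussian width and sharpening amount are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adjust_intensity(img, low_pct=1, high_pct=99):
    """Intensity-value adjustment: linear contrast stretch between percentiles."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img.astype(float) - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def unsharp_mask(img, sigma=2.0, amount=1.0):
    """Spatial linear filtering: Gaussian unsharp masking to enhance edges."""
    img = img.astype(float)
    blurred = gaussian_filter(img, sigma)
    return img + amount * (img - blurred)

# Hypothetical usage on a raw chest CR image:
# enhanced = unsharp_mask(adjust_intensity(raw_chest_cr))
```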

  5. Application of off‐line image processing for optimization in chest computed radiography using a low cost system

    PubMed Central

    Msaki, Peter; Padovani, Renato

    2015-01-01

    The objective of this study was to improve the visibility of anatomical details by applying off-line post-image processing in chest computed radiography (CR). Four spatial domain-based external image processing techniques were developed using MATLAB software version 7.0.0.19920 (R14) and its image processing tools. The developed techniques were applied to sample images, and their visual appearance was confirmed by two consultant radiologists to be clinically adequate. The techniques were then applied to 200 chest clinical images and randomized with another 100 images previously processed online. These 300 images were presented to three experienced radiologists for image quality assessment using standard quality criteria. The means and ranges of the average scores for the three radiologists were characterized for each of the developed techniques and imaging systems. The Mann-Whitney U-test was used to test the difference in detail visibility between the images processed using each of the developed techniques and the corresponding images processed using default algorithms. The results show that the visibility of anatomical features improved significantly (0.005 ≤ p ≤ 0.02) with combinations of intensity value adjustment and/or spatial linear filtering techniques for images acquired using 60 ≤ kVp ≤ 70. However, there was no improvement for images acquired using 102 ≤ kVp ≤ 107 (0.127 ≤ p ≤ 0.48). In conclusion, the use of external image processing for optimization can be effective in chest CR, but should be implemented in consultation with the radiologists. PACS number: 87.59.−e, 87.59.−B, 87.59.−bd PMID:26103165

  6. PACS-Based Computer-Aided Detection and Diagnosis

    NASA Astrophysics Data System (ADS)

    Huang, H. K. (Bernie); Liu, Brent J.; Le, Anh HongTu; Documet, Jorge

    The ultimate goal of Picture Archiving and Communication System (PACS)-based Computer-Aided Detection and Diagnosis (CAD) is to integrate CAD results into daily clinical practice so that it becomes a second reader to aid the radiologist's diagnosis. Integration of CAD and Hospital Information System (HIS), Radiology Information System (RIS) or PACS requires certain basic ingredients from Health Level 7 (HL7) standard for textual data, Digital Imaging and Communications in Medicine (DICOM) standard for images, and Integrating the Healthcare Enterprise (IHE) workflow profiles in order to comply with the Health Insurance Portability and Accountability Act (HIPAA) requirements to be a healthcare information system. Among the DICOM standards and IHE workflow profiles, DICOM Structured Reporting (DICOM-SR); and IHE Key Image Note (KIN), Simple Image and Numeric Report (SINR) and Post-processing Work Flow (PWF) are utilized in CAD-HIS/RIS/PACS integration. These topics with examples are presented in this chapter.

  7. Fast Image Subtraction Using Multi-cores and GPUs

    NASA Astrophysics Data System (ADS)

    Hartung, Steven; Shukla, H.

    2013-01-01

    Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphical processing unit (GPU) technology in a hybrid conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd order spatially-varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of the heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix based application can operate on a single computer, or on an MPI configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.

  8. ACR Imaging IT Reference Guide: Image Sharing: Evolving Solutions in the Age of Interoperability

    PubMed Central

    Erickson, Bradley J.; Choy, Garry

    2014-01-01

    Interoperability is a major focus of the quickly evolving world of Health Information Technology. Easy, yet secure and confidential, exchange of imaging exams and the associated reports must be a part of the solutions that are implemented. The availability of historical exams is essential in providing a quality interpretation and reducing inappropriate utilization of imaging services. Today, exchange of imaging exams is most often achieved via a CD. We describe the virtues of this solution as well as challenges that have surfaced. Internet- and cloud-based technologies employed for many consumer services can provide a better solution. Vendors are making these solutions available. Standards for internet-based exchange are emerging. Just as Radiology converged on DICOM as a standard to store and view images, we need a common exchange standard. We will review the existing standards and how they are organized into useful workflows through Integrating the Healthcare Enterprise (IHE) profiles. IHE and standards development processes are discussed. Healthcare and the domain of Radiology must stay current with quickly evolving internet standards. The successful use of the “cloud” will depend upon both the technologies we discuss and the policies put into place around these technologies; we discuss both aspects. The Radiology community must lead the way and provide a solution that works for radiologists and clinicians in the Electronic Medical Record (EMR). Lastly, we describe the features we believe radiologists should evaluate when considering adding internet-based exchange solutions to their practice. PMID:25467903

  9. Image-Guided Abdominal Surgery and Therapy Delivery

    PubMed Central

    Galloway, Robert L.; Herrell, S. Duke; Miga, Michael I.

    2013-01-01

    Image-Guided Surgery has become the standard of care in intracranial neurosurgery providing more exact resections while minimizing damage to healthy tissue. Moving that process to abdominal organs presents additional challenges in the form of image segmentation, image to physical space registration, organ motion and deformation. In this paper, we present methodologies and results for addressing these challenges in two specific organs: the liver and the kidney. PMID:25077012

  10. Concepts for on-board satellite image registration. Volume 2: IAS prototype performance evaluation standard definition. [NEEDS Information Adaptive System

    NASA Technical Reports Server (NTRS)

    Daluge, D. R.; Ruedger, W. H.

    1981-01-01

    Problems encountered in testing onboard signal processing hardware designed to achieve radiometric and geometric correction of satellite imaging data are considered. These include obtaining representative image and ancillary data for simulation and the transfer and storage of a large quantity of image data at very high speed. The high resolution, high speed preprocessing of LANDSAT-D imagery is considered.

  11. Higher resolution satellite remote sensing and the impact on image mapping

    USGS Publications Warehouse

    Watkins, Allen H.; Thormodsgard, June M.

    1987-01-01

    Recent advances in spatial, spectral, and temporal resolution of civil land remote sensing satellite data are presenting new opportunities for image mapping applications. The U.S. Geological Survey's experimental satellite image mapping program is evolving toward larger scale image map products with increased information content as a result of improved image processing techniques and increased resolution. Thematic mapper data are being used to produce experimental image maps at 1:100,000 scale that meet established U.S. and European map accuracy standards. Availability of high quality, cloud-free, 30-meter ground resolution multispectral data from the Landsat thematic mapper sensor, along with 10-meter ground resolution panchromatic and 20-meter ground resolution multispectral data from the recently launched French SPOT satellite, presents new cartographic and image processing challenges. The need to fully exploit these higher resolution data increases the complexity of processing the images into large-scale image maps. The removal of radiometric artifacts and noise prior to geometric correction can be accomplished by using a variety of image processing filters and transforms. Sensor modeling and image restoration techniques allow maximum retention of spatial and radiometric information. An optimum combination of spectral information and spatial resolution can be obtained by merging different sensor types. These processing techniques are discussed and examples are presented.

  12. Real-time image-processing algorithm for markerless tumour tracking using X-ray fluoroscopic imaging.

    PubMed

    Mori, S

    2014-05-01

    To ensure accuracy in respiratory-gating treatment, X-ray fluoroscopic imaging is used to detect tumour position in real time. Detection accuracy is strongly dependent on image quality, particularly positional differences between the patient and treatment couch. We developed a new algorithm to improve the quality of images obtained in X-ray fluoroscopic imaging and report the preliminary results. Two oblique X-ray fluoroscopic images were acquired using a dynamic flat panel detector (DFPD) for two patients with lung cancer. The weighting factor was applied to the DFPD image in respective columns, because most anatomical structures, as well as the treatment couch and port cover edge, were aligned in the superior-inferior direction when the patient lay on the treatment couch. The weighting factors for the respective columns were varied until the standard deviation of the pixel values within the image region was minimized. Once the weighting factors were calculated, the quality of the DFPD image was improved by applying the factors to multiframe images. Applying the image-processing algorithm produced substantial improvement in the quality of images, and the image contrast was increased. The treatment couch and irradiation port edge, which were not related to a patient's position, were removed. The average image-processing time was 1.1 ms, showing that this fast image processing can be applied to real-time tumour-tracking systems. These findings indicate that this image-processing algorithm improves the image quality in patients with lung cancer and successfully removes objects not related to the patient. Our image-processing algorithm might be useful in improving gated-treatment accuracy.
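
    A minimal sketch of the column-weighting idea, under the simplifying assumption that making every column's mean equal to the global mean is an acceptable stand-in for the published search that explicitly minimizes the standard deviation of pixel values; the function and variable names are illustrative.

```python
import numpy as np

def suppress_column_structure(frame):
    """Apply one weighting factor per column to suppress column-aligned structures
    (treatment couch, port cover edge) before tumour tracking."""
    col_means = frame.astype(float).mean(axis=0)
    weights = frame.mean() / np.clip(col_means, 1e-6, None)  # one factor per column
    return frame * weights[np.newaxis, :]
```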

  13. A study of the standard brain in Japanese children: morphological comparison with the MNI template.

    PubMed

    Uchiyama, Hitoshi T; Seki, Ayumi; Tanaka, Daisuke; Koeda, Tatsuya; Jcs Group

    2013-03-01

    Functional magnetic resonance imaging (MRI) studies involve normalization so that the brains of different subjects can be described using the same coordinate system. However, standard brain templates, including the Montreal Neurological Institute (MNI) template that is most frequently used at present, were created based on the brains of Western adults. Because morphological characteristics of the brain differ by race and ethnicity and between adults and children, errors are likely to occur when data from the brains of non-Western individuals are processed using these templates. Therefore, this study was conducted to collect basic data for the creation of a Japanese pediatric standard brain. Participants in this study were 45 healthy children (contributing 65 brain images) between the ages of 6 and 9 years, who had nothing notable in their perinatal and other histories and neurological findings, had normal physical findings and cognitive function, exhibited no behavioral abnormalities, and provided analyzable MR images. 3D-T1-weighted images were obtained using a 1.5-T MRI device, and images from each child were adjusted to the reference image by affine transformation using SPM8. The lengths were measured and compared with those of the MNI template. The Western adult standard brain and the Japanese pediatric standard brain obtained in this study differed greatly in size, particularly along the anteroposterior diameter and in height, suggesting that the correction rates are high, and that errors are likely to occur in the normalization of pediatric brain images. We propose that the use of the Japanese pediatric standard brain created in this study will improve the accuracy of identification of brain regions in functional brain imaging studies involving children. Copyright © 2012 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  14. Digital techniques for processing Landsat imagery

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1978-01-01

    An overview is presented of the basic techniques used to process Landsat images with a digital computer and of the VICAR image processing software developed at JPL and available to users through the NASA-sponsored COSMIC computer program distribution center. Examples are given of subjective processing performed to improve the information display for the human observer, such as contrast enhancement, pseudocolor display and band ratioing, and of quantitative processing using mathematical models, such as classification based on multispectral signatures of different areas within a given scene and geometric transformation of imagery into standard mapping projections. Examples are illustrated by Landsat scenes of the Andes mountains and the Altyn-Tagh fault zone in China before and after contrast enhancement, and by classification of land use in Portland, Oregon. Also described is the VICAR image processing software system, which consists of a language translator that simplifies execution of image processing programs and provides a general-purpose format so that imagery from a variety of sources can be processed by the same basic set of general application programs.

  15. LANDSAT: Non-US standard catalog no. N-33

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A catalog used for dissemination of information regarding the availability of LANDSAT imagery is presented. The Image Processing Facility of the Goddard Space Flight Center publishes a U.S. and a Non-U.S. Standard Catalog on a monthly schedule, and the catalogs identify imagery which has been processed and input to the data files during the referenced month. The U.S. Standard Catalog includes imagery covering the continental United States, Alaska and Hawaii; the Non-U.S. Catalog identifies all the remaining coverage. Imagery adjacent to the continental U.S. and Alaska borders is included in the U.S. Standard Catalog.

  16. ISO/IEC's image interchange facility

    NASA Astrophysics Data System (ADS)

    Blum, Christof; Hofmann, Georg R.

    1992-04-01

    This paper gives a technical description of the Image Interchange Facility (IIF), which comprises both a format definition and a functional gateway specification. IIF is a part of the first International Image Processing and Interchange Standard (IPI), which is under elaboration by ISO/IEC JTC1/SC24. This paper reflects the related committee work performed up until January 1992. Considering the deficiencies and drawbacks of existing formats and current practices in exchanging digital images, the need for a new and more general approach to image interchange can be seen. This paper describes the requirements and design principles of the IIF data format and the IIF gateway. Furthermore, it explains the relation to the reference model for open systems interconnection (OSI) as well as the relation to the other parts of the IPI standard.

  17. Edge-Based Image Compression with Homogeneous Diffusion

    NASA Astrophysics Data System (ADS)

    Mainberger, Markus; Weickert, Joachim

    It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
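
    The decoding step lends itself to a short sketch: keep the transmitted values fixed on the edge mask and fill the remaining pixels with the steady state of homogeneous diffusion, i.e. a discrete Laplace equation, solved here by plain Jacobi iterations. The periodic boundary handling, iteration count and names are simplifications for illustration.

```python
import numpy as np

def homogeneous_diffusion_inpaint(values, known_mask, n_iter=2000):
    """Fill unknown pixels by diffusion while keeping the encoded edge data fixed."""
    fixed = values.astype(float)
    img = fixed.copy()
    for _ in range(n_iter):
        # Average of the four neighbours (periodic boundaries for brevity).
        neighbours = 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                             np.roll(img, 1, 1) + np.roll(img, -1, 1))
        img = np.where(known_mask, fixed, neighbours)  # edge data stays fixed
    return img
```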

  18. Automated processing of label-free Raman microscope images of macrophage cells with standardized regression for high-throughput analysis.

    PubMed

    Milewski, Robert J; Kumagai, Yutaro; Fujita, Katsumasa; Standley, Daron M; Smith, Nicholas I

    2010-11-19

    Macrophages represent the front lines of our immune system; they recognize and engulf pathogens or foreign particles, thus initiating the immune response. Imaging macrophages presents unique challenges, as most optical techniques require labeling or staining of the cellular compartments in order to resolve organelles, and such stains or labels have the potential to perturb the cell, particularly in cases where incomplete information exists regarding the precise cellular reaction under observation. Label-free imaging techniques such as Raman microscopy are thus valuable tools for studying the transformations that occur in immune cells upon activation, both on the molecular and organelle levels. Due to extremely low signal levels, however, Raman microscopy requires sophisticated image processing techniques for noise reduction and signal extraction. To date, efficient, automated algorithms for resolving sub-cellular features in noisy, multi-dimensional image sets have not been explored extensively. We show that hybrid z-score normalization and standard regression (Z-LSR) can highlight the spectral differences within the cell and provide image contrast dependent on spectral content. In contrast to typical Raman image processing methods using multivariate analysis, such as singular value decomposition (SVD), our implementation of the Z-LSR method can operate nearly in real-time. In spite of its computational simplicity, Z-LSR can automatically remove background and bias in the signal, improve the resolution of spatially distributed spectral differences and enable sub-cellular features to be resolved in Raman microscopy images of mouse macrophage cells. Significantly, the Z-LSR processed images automatically exhibited subcellular architectures whereas SVD, in general, requires human assistance in selecting the components of interest. The computational efficiency of Z-LSR enables automated resolution of sub-cellular features in large Raman microscopy data sets without compromise in image quality or information loss in associated spectra. These results motivate further use of label-free microscopy techniques in real-time imaging of live immune cells.
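
    A minimal sketch, assuming one plausible reading of the Z-LSR idea: z-score each pixel spectrum and use its least-squares regression coefficient against a reference spectrum as the image contrast at that pixel. The published formulation may differ in detail, and all names are illustrative.

```python
import numpy as np

def zlsr_contrast(hyperspectral, reference):
    """hyperspectral: (rows, cols, n_bands); reference: (n_bands,). Returns (rows, cols)."""
    # Z-score normalize each pixel spectrum along the spectral axis.
    z = (hyperspectral - hyperspectral.mean(axis=2, keepdims=True)) \
        / (hyperspectral.std(axis=2, keepdims=True) + 1e-9)
    ref = (reference - reference.mean()) / (reference.std() + 1e-9)
    # Per-pixel least-squares coefficient of z against the reference spectrum.
    return (z * ref).sum(axis=2) / (ref @ ref)
```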

  19. Contour Detection and Completion for Inpainting and Segmentation Based on Topological Gradient and Fast Marching Algorithms

    PubMed Central

    Auroux, Didier; Cohen, Laurent D.; Masmoudi, Mohamed

    2011-01-01

    We combine in this paper the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We then present two numerical applications of this hybrid algorithm: image inpainting and segmentation. PMID:22194734
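
    A hedged illustration of the contour-completion idea: the paper couples the topological gradient with a fast marching minimal-path method, while the sketch below substitutes an ordinary gradient magnitude and scikit-image's generic minimal-path routine, just to show how a connected contour between two edge points can be extracted. Cost weighting and names are illustrative.

```python
import numpy as np
from skimage.filters import sobel
from skimage.graph import route_through_array

def connect_edge_points(image, start, end):
    """Return a connected pixel path between two points that hugs strong edges."""
    cost = 1.0 / (sobel(image.astype(float)) + 1e-3)  # cheap to travel along edges
    path, total_cost = route_through_array(cost, start, end, fully_connected=True)
    return np.array(path), total_cost
```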

  20. Image processing improvement for optical observations of space debris with the TAROT telescopes

    NASA Astrophysics Data System (ADS)

    Thiebaut, C.; Theron, S.; Richard, P.; Blanchet, G.; Klotz, A.; Boër, M.

    2016-07-01

    CNES is involved in the Inter-Agency Space Debris Coordination Committee (IADC) and is observing space debris with two robotic ground based fully automated telescopes called TAROT and operated by the CNRS. An image processing algorithm devoted to debris detection in geostationary orbit is implemented in the standard pipeline. Nevertheless, this algorithm is unable to deal with debris tracking mode images, this mode being the preferred one for debris detectability. We present an algorithm improvement for this mode and give results in terms of false detection rate.

  1. Multi-template image matching using alpha-rooted biquaternion phase correlation with application to logo recognition

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2011-06-01

    Hypercomplex approaches are seeing increased application to signal and image processing problems. The use of multicomponent hypercomplex numbers, such as quaternions, enables the simultaneous co-processing of multiple signal or image components. This joint processing capability can provide improved exploitation of the information contained in the data, thereby leading to improved performance in detection and recognition problems. In this paper, we apply hypercomplex processing techniques to the logo image recognition problem. Specifically, we develop an image matcher by generalizing classical phase correlation to the biquaternion case. We further incorporate biquaternion Fourier domain alpha-rooting enhancement to create Alpha-Rooted Biquaternion Phase Correlation (ARBPC). We present the mathematical properties which justify use of ARBPC as an image matcher. We present numerical performance results of a logo verification problem using real-world logo data, demonstrating the performance improvement obtained using the hypercomplex approach. We compare results of the hypercomplex approach to standard multi-template matching approaches.
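
    A minimal grayscale sketch of the underlying matcher: a generalized phase correlation whose cross-power spectrum magnitude is alpha-rooted (alpha = 1 gives ordinary cross-correlation, alpha = 0 pure phase correlation). The biquaternion generalization that co-processes colour channels is not reproduced here, and the alpha value is illustrative.

```python
import numpy as np

def alpha_rooted_phase_correlation(template, image, alpha=0.3):
    """Return the (row, col) shift at the peak of the alpha-rooted correlation surface."""
    F, G = np.fft.fft2(template), np.fft.fft2(image)
    cross = F * np.conj(G)
    mag = np.abs(cross) + 1e-12
    # Alpha-root the magnitude, keep the phase: cross * mag**(alpha - 1).
    surface = np.real(np.fft.ifft2((mag ** alpha) * (cross / mag)))
    return np.unravel_index(np.argmax(surface), surface.shape)
```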

  2. Space-Time Processing for Tactical Mobile Ad Hoc Networks

    DTIC Science & Technology

    2009-08-01

    the results for the standard 8 bits per pixel ( bpp ) 512512 Lena image [3] with a transmission rate of 0.375 bpp . To compare the image quality, we...use peak-signal-to-noise ratio (PSNR), defined as  DE 2255 log10PSNR  (dB) (2) where 255 is due to the 8 bpp image

  3. The microcomputer in the dental office: a new diagnostic aid.

    PubMed

    van der Stelt, P F

    1985-06-01

    The first computer applications in the dental office were based upon standard accountancy procedures. Recently, more and more computer applications have become available to meet the specific requirements of dental practice. This implies not only business procedures, but also facilities to store patient records in the system and retrieve them easily. Another development concerns the automatic calculation of diagnostic data such as those provided in cephalometric analysis. Furthermore, growth and surgical results in the craniofacial area can be predicted by computerized extrapolation. Computers have been useful in obtaining the patient's anamnestic data objectively and for the making of decisions based on such data. Computer-aided instruction systems have been developed for undergraduate students to bridge the gap between textbook and patient interaction without the risks inherent in the latter. Radiology will undergo substantial changes as a result of the application of electronic imaging devices instead of the conventional radiographic films. Computer-assisted electronic imaging will enable image processing, image enhancement, pattern recognition and data transmission for consultation and storage purposes. Image processing techniques will increase image quality whilst still allowing low-dose systems. Standardization of software and system configuration and the development of 'user friendly' programs is the major concern for the near future.

  4. A generic FPGA-based detector readout and real-time image processing board

    NASA Astrophysics Data System (ADS)

    Sarpotdar, Mayuresh; Mathew, Joice; Safonova, Margarita; Murthy, Jayant

    2016-07-01

    For space-based astronomical observations, it is important to have a mechanism to capture the digital output from the standard detector for further on-board analysis and storage. We have developed a generic (application-wise) field-programmable gate array (FPGA) board to interface with an image sensor, a method to generate the clocks required to read the image data from the sensor, and a real-time on-chip image processor system which can be used for various image processing tasks. The FPGA board is applied as the image processor board in the Lunar Ultraviolet Cosmic Imager (LUCI) and a star sensor (StarSense), instruments developed by our group. In this paper, we discuss the various design considerations for this board and its applications in future balloon and possible space flights.

  5. Mirion--a software package for automatic processing of mass spectrometric images.

    PubMed

    Paschke, C; Leisner, A; Hester, A; Maass, K; Guenther, S; Bouschen, W; Spengler, B

    2013-08-01

    Mass spectrometric imaging (MSI) techniques are of growing interest for the Life Sciences. In recent years, the development of new instruments employing ion sources that are tailored for spatial scanning allowed the acquisition of large data sets. A subsequent data processing, however, is still a bottleneck in the analytical process, as a manual data interpretation is impossible within a reasonable time frame. The transformation of mass spectrometric data into spatial distribution images of detected compounds turned out to be the most appropriate method to visualize the results of such scans, as humans are able to interpret images faster and easier than plain numbers. Image generation, thus, is a time-consuming and complex yet very efficient task. The free software package "Mirion," presented in this paper, allows the handling and analysis of data sets acquired by mass spectrometry imaging. Mirion can be used for image processing of MSI data obtained from many different sources, as it uses the HUPO-PSI-based standard data format imzML, which is implemented in the proprietary software of most of the mass spectrometer companies. Different graphical representations of the recorded data are available. Furthermore, automatic calculation and overlay of mass spectrometric images promotes direct comparison of different analytes for data evaluation. The program also includes tools for image processing and image analysis.

  6. Fault detection and isolation in the challenging Tennessee Eastman process by using image processing techniques.

    PubMed

    Hajihosseini, Payman; Anzehaee, Mohammad Mousavi; Behnam, Behzad

    2018-05-22

    Early fault detection and isolation in industrial systems is a critical factor in preventing equipment damage. In the proposed method, instead of using the time signals of the sensors directly, the 2D image obtained by placing these signals next to each other in a matrix is used, and a novel fault detection and isolation procedure is then carried out based on image processing techniques. Different features, including texture, wavelet transform, and the mean and standard deviation of the image, combined with MLP and RBF neural network classifiers, were used for this purpose. The obtained results indicate the notable efficacy and success of the proposed method in detecting and isolating faults of the Tennessee Eastman benchmark process and its superiority over previous techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
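
    A hedged sketch of the overall pipeline, assuming a much reduced feature set (just the image mean and standard deviation; the paper also uses texture and wavelet features) and scikit-learn's MLP in place of the authors' classifiers; all names and settings are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def signals_to_features(sensor_signals):
    """Stack sensor time signals row by row into a 2-D image and extract global features."""
    img = np.vstack(sensor_signals)              # one row per sensor
    return np.array([img.mean(), img.std()])

def train_fault_classifier(signal_sets, labels):
    """Fit an MLP on feature vectors computed from many signal-set 'images'."""
    X = np.array([signals_to_features(s) for s in signal_sets])
    return MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X, labels)
```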

  7. SU-E-T-497: Semi-Automated in Vivo Radiochromic Film Dosimetry Using a Novel Image Processing Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reyhan, M; Yue, N

    Purpose: To validate an automated image processing algorithm designed to detect the center of radiochromic film used for in vivo film dosimetry against the current gold standard of manual selection. Methods: An image processing algorithm was developed to automatically select the region of interest (ROI) in *.tiff images that contain multiple pieces of radiochromic film (0.5 × 1.3 cm²). After a user has linked a calibration file to the processing algorithm and selected a *.tiff file for processing, an ROI is automatically detected for all films by a combination of thresholding and erosion, which removes edges and any additional markings for orientation. Calibration is applied to the mean pixel values from the ROIs and a *.tiff image is output displaying the original image with an overlay of the ROIs and the measured doses. Validation of the algorithm was determined by comparing in vivo dose determined using the current gold standard (manually drawn ROIs) versus automated ROIs for n=420 scanned films. Bland-Altman analysis, paired t-test, and linear regression were performed to demonstrate agreement between the processes. Results: The measured doses ranged from 0.2-886.6 cGy. Bland-Altman analysis of the two techniques (automatic minus manual) revealed a bias of -0.28 cGy and a 95% confidence interval of (5.5 cGy, -6.1 cGy). These values demonstrate excellent agreement between the two techniques. Paired t-test results showed no statistical differences between the two techniques, p=0.98. Linear regression with a forced zero intercept demonstrated that Automatic = 0.997 × Manual, with a Pearson correlation coefficient of 0.999. The minimal differences between the two techniques may be explained by the fact that the hand drawn ROIs were not identical to the automatically selected ones. The average processing time was 6.7 seconds in MATLAB on an Intel Core 2 Duo processor. Conclusion: An automated image processing algorithm has been developed and validated, which will help minimize user interaction and processing time of radiochromic film used for in vivo dosimetry.
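
    A hedged sketch of the ROI-detection idea described above (threshold, erode, label, then convert mean pixel values to dose through a user-supplied calibration); the threshold choice, erosion radius, area cut-off and function names are illustrative, not the validated algorithm.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_erosion, disk
from skimage.measure import label, regionprops

def film_doses(scan, calibration, min_area=200):
    """Return a list of doses, one per detected film piece, given a calibration callable."""
    mask = scan < threshold_otsu(scan)            # film darker than scanner background
    mask = binary_erosion(mask, disk(5))          # shave off edges and pen markings
    doses = []
    for region in regionprops(label(mask), intensity_image=scan):
        if region.area >= min_area:
            doses.append(calibration(region.mean_intensity))
    return doses
```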

  8. Recent progress in the development of ISO 19751

    NASA Astrophysics Data System (ADS)

    Farnand, Susan P.; Dalal, Edul N.; Ng, Yee S.

    2006-01-01

    A small number of general visual attributes have been recognized as essential in describing image quality. These include micro-uniformity, macro-uniformity, colour rendition, text and line quality, gloss, sharpness, and spatial or temporal adjacency attributes. The multiple-part International Standard discussed here was initiated by the INCITS W1 committee on the standardization of office equipment to address the need for unambiguously documented procedures and methods, widely applicable across the printing technologies employed in office applications, for the appearance-based evaluation of these visually significant attributes of printed image quality [1,2]. The resulting proposed International Standard, for which ISO/IEC WD 19751-1 [3] presents an overview and an outline of the overall procedure and common methods, is based on a proposal predicated on the idea that image quality could be described by a small set of broad-based attributes [4]. Five ad hoc teams were established (now six, since a sharpness team is in the process of being formed) to generate standards for one or more of these image quality attributes. Updates on the colour rendition, text and line quality, and gloss attributes are provided.

  9. An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems.

    PubMed

    Glover, Jack L; Hudson, Lawrence T

    2016-06-01

    The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard.
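
    A hedged sketch of how the Radon transform reduces wire detection to a one-dimensional peak search: a straight wire projects to a compact peak in the sinogram, and comparing that peak with the sinogram background gives an objective score. The preprocessing and scoring choices here are illustrative, not the balloted method.

```python
import numpy as np
from skimage.transform import radon

def wire_score(image):
    """Return a scalar score; larger values indicate more wire-like (line) structure."""
    centred = image.astype(float) - np.median(image)
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sino = radon(centred, theta=theta, circle=False)   # line -> localized sinogram peak
    peak = np.abs(sino).max()
    background = np.median(np.abs(sino))
    return peak / (background + 1e-9)
```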

  10. An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems

    PubMed Central

    Glover, Jack L.; Hudson, Lawrence T.

    2016-01-01

    The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard. PMID:27499586

  11. An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems

    NASA Astrophysics Data System (ADS)

    Glover, Jack L.; Hudson, Lawrence T.

    2016-06-01

    The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in an international aviation security standard.

  12. Real-time catheter localization and visualization using three-dimensional echocardiography

    NASA Astrophysics Data System (ADS)

    Kozlowski, Pawel; Bandaru, Raja Sekhar; D'hooge, Jan; Samset, Eigil

    2017-03-01

    Real-time three-dimensional transesophageal echocardiography (RT3D-TEE) is increasingly used during minimally invasive cardiac surgeries (MICS). In many cath labs, RT3D-TEE is already one of the requisite tools for image guidance during MICS. However, the visualization of the catheter is not always satisfactory, making 3D-TEE challenging to use as the only modality for guidance. We propose a novel technique for better visualization of the catheter along with the cardiac anatomy using TEE alone, exploiting both beamforming and post-processing methods. We extended our earlier method called Delay and Standard Deviation (DASD) beamforming to 3D in order to enhance specular reflections. The beamformed image was further post-processed by the Frangi filter to segment the catheter. Multi-variate visualization techniques enabled us to render both the standard tissue image and the DASD beamformed image on a clinical ultrasound scanner simultaneously. A frame rate of 15 FPS was achieved.
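
    A minimal sketch of the post-processing stage only: a Frangi vesselness filter applied to a beamformed slice emphasizes the thin tubular catheter, which is then thresholded. The DASD beamforming itself is not reproduced, and the threshold and names are illustrative.

```python
import numpy as np
from skimage.filters import frangi

def segment_catheter(beamformed_slice, threshold=0.5):
    """Return a boolean mask of catheter-like (bright, tubular) structures."""
    vesselness = frangi(beamformed_slice.astype(float), black_ridges=False)
    vesselness /= vesselness.max() + 1e-12       # normalize to [0, 1]
    return vesselness > threshold
```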

  13. Application of QC_DR software for acceptance testing and routine quality control of direct digital radiography systems: initial experiences using the Italian Association of Physicist in Medicine quality control protocol.

    PubMed

    Nitrosi, Andrea; Bertolini, Marco; Borasi, Giovanni; Botti, Andrea; Barani, Adriana; Rivetti, Stefano; Pierotti, Luisa

    2009-12-01

    Ideally, medical x-ray imaging systems should be designed to deliver maximum image quality at an acceptable radiation risk to the patient. Quality assurance procedures are employed to ensure that these standards are maintained. A quality control protocol for direct digital radiography (DDR) systems is described and discussed. Software to automatically process and analyze the required images was developed. In this paper, the initial results obtained on equipment of different DDR manufacturers were reported. The protocol was developed to highlight even small discrepancies in standard operating performance.

  14. Parallel Processing of Images in Mobile Devices using BOINC

    NASA Astrophysics Data System (ADS)

    Curiel, Mariela; Calle, David F.; Santamaría, Alfredo S.; Suarez, David F.; Flórez, Leonardo

    2018-04-01

    Medical image processing helps health professionals make decisions for the diagnosis and treatment of patients. Since some algorithms for processing images require substantial amounts of resources, one could take advantage of distributed or parallel computing. A mobile grid can be an adequate computing infrastructure for this problem. A mobile grid is a grid that includes mobile devices as resource providers. In a previous step of this research, we selected BOINC as the infrastructure to build our mobile grid. However, parallel processing of images in mobile devices poses at least two important challenges: the execution of standard libraries for processing images and obtaining adequate performance when compared to desktop computer grids. By the time we started our research, the use of BOINC in mobile devices also involved two issues: a) executing programs on mobile devices required modifying the code to insert calls to the BOINC API, and b) dividing the image among the mobile devices, as well as merging the results, required additional code in some BOINC components. This article presents answers to these four challenges.

  15. Advanced Topics in Space Situational Awareness

    DTIC Science & Technology

    2007-11-07

    "super-resolution." Such optical superresolution is characteristic of many model-based image processing algorithms, and reflects the incorporation of ... Sampling Theorem," J. Opt. Soc. Am. A, vol. 24, 311-325 (2007). [39] S. Prasad, "Digital and Optical Superresolution of Low-Resolution Image Sequences," Un... wavefront coding for the specific application of extension of image depth well beyond what is possible in a standard imaging system. The problem of optical

  16. Image thumbnails that represent blur and noise.

    PubMed

    Samadani, Ramin; Mauer, Timothy A; Berfanger, David M; Clark, James H

    2010-02-01

    The information about the blur and noise of an original image is lost when a standard image thumbnail is generated by filtering and subsampling. Image browsing becomes difficult since the standard thumbnails do not distinguish between high-quality and low-quality originals. In this paper, an efficient algorithm with a blur-generating component and a noise-generating component preserves the local blur and the noise of the originals. The local blur is rapidly estimated using a scale-space expansion of the standard thumbnail and subsequently used to apply a space-varying blur to the thumbnail. The noise is estimated and rendered by using multirate signal transformations that allow most of the processing to occur at the lower spatial sampling rate of the thumbnail. The new thumbnails provide a quick, natural way for users to identify images of good quality. A subjective evaluation shows the new thumbnails are more representative of their originals for blurry images. The noise generating component improves the results for noisy images, but degrades the results for textured images. The blur generating component of the new thumbnails may always be used to advantage. The decision to use the noise generating component of the new thumbnails should be based on testing with the particular image mix expected for the application.
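
    A hedged sketch of the noise-preserving part of the idea only: estimate the noise level of the original as the residual of a light smoothing, build a conventional thumbnail, and add back noise of matching strength so that a noisy original yields a visibly noisy thumbnail. The paper's scale-space local blur estimation and space-varying blur are not reproduced, and all parameters and names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def noise_preserving_thumbnail(image, scale=0.125, seed=0):
    """Downsample an image while reintroducing noise of roughly the original strength."""
    img = image.astype(float)
    noise_sigma = (img - gaussian_filter(img, 1.0)).std()   # crude noise estimate
    thumb = zoom(gaussian_filter(img, 0.5 / scale), scale)  # anti-aliased thumbnail
    rng = np.random.default_rng(seed)
    return thumb + rng.normal(0.0, noise_sigma, thumb.shape)
```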

  17. WE-E-12A-01: Medical Physics 1.0 to 2.0: MRI, Displays, Informatics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickens, D; Flynn, M; Peck, D

    Medical Physics 2.0 is a bold vision for an existential transition of clinical imaging physics in the face of the new realities of value-based and evidence-based medicine, comparative effectiveness, and meaningful use. It speaks to how clinical imaging physics can expand beyond traditional insular models of inspection and acceptance testing, oriented toward compliance, towards team-based models of operational engagement, prospective definition and assurance of effective use, and retrospective evaluation of clinical performance. Organized into four sessions of the AAPM, this particular session focuses on three specific modalities as outlined below. MRI 2.0: This presentation will look into the future of clinical MR imaging and what the clinical medical physicist will need to be doing as the technology of MR imaging evolves. Many of the measurement techniques used today will need to be expanded to address the advent of higher field imaging systems and dedicated imagers for specialty applications. Included will be the need to address quality assurance and testing metrics for multi-channel MR imagers and hybrid devices such as MR/PET systems. New pulse sequences and acquisition methods, increasing use of MR spectroscopy, and real-time guidance procedures will place the burden on the medical physicist to define and use new tools to properly evaluate these systems, but the clinical applications must be understood so that these tools are used correctly. Finally, new rules, clinical requirements, and regulations will mean that the medical physicist must actively work to keep her/his sites compliant and must work closely with physicians to ensure best performance of these systems. Informatics Display 1.0 to 2.0: Medical displays are an integral part of medical imaging operation. The DICOM and AAPM (TG18) efforts have led to clear definitions of performance requirements of monochrome medical displays that can be followed by medical physicists to ensure proper performance. However, effective implementation of that oversight has been challenging due to the number and extent of medical displays in use at a facility. The advent of color displays and mobile displays has added additional challenges to the task of the medical physicist. This informatics display lecture first addresses the current display guidelines (the 1.0 paradigm) and further outlines the initiatives and prospects for color and mobile displays (the 2.0 paradigm). Informatics Management 1.0 to 2.0: Imaging informatics is part of every radiology practice today. Imaging informatics covers everything from the ordering of a study, through the data acquisition and processing, display and archiving, reporting of findings and the billing for the services performed. The standardization of the processes used to manage the information and methodologies to integrate these standards is being developed and advanced continuously. These developments are done in an open forum, and imaging organizations and professionals all have a part in the process. In the Informatics Management presentation, the flow of information and the integration of the standards used in the processes will be reviewed. The role of radiologists and physicists in the process will be discussed. Current methods (the 1.0 paradigm) and evolving methods (the 2.0 paradigm) for validation of informatics systems function will also be discussed. Learning Objectives: Identify requirements for improving quality assurance and compliance tools for advanced and hybrid MRI systems.
Identify the need for new quality assurance metrics and testing procedures for advanced systems. Identify new hardware systems and new procedures needed to evaluate MRI systems. Understand the components of current medical physics expectations for medical displays. Understand the role and prospects of medical physics for color and mobile display devices. Understand the different areas of imaging informatics and the methodology for developing informatics standards. Understand the current status of informatics standards and the role of physicists and radiologists in the process, and the current technology for validating the function of these systems.

  18. Impervious surfaces mapping using high resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Shirmeen, Tahmina

    In recent years, impervious surfaces have emerged not only as an indicator of the degree of urbanization, but also as an indicator of environmental quality. As impervious surface area increases, storm water runoff increases in velocity, quantity, temperature and pollution load. Any of these attributes can contribute to the degradation of natural hydrology and water quality. Various image processing techniques have been used to identify impervious surfaces; however, most of the existing impervious surface mapping tools used moderate resolution imagery. In this project, the potential of standard image processing techniques to generate impervious surface (IS) data for change detection analysis using high-resolution satellite imagery was evaluated. The city of Oxford, MS was selected as the study site for this project. Standard image processing techniques, including Normalized Difference Vegetation Index (NDVI), Principal Component Analysis (PCA), a combination of NDVI and PCA, and image classification algorithms, were used to generate impervious surfaces from multispectral IKONOS and QuickBird imagery acquired in both leaf-on and leaf-off conditions. Accuracy assessments were performed, using truth data generated by manual classification, with Kappa statistics and Zonal statistics to select the most appropriate image processing techniques for impervious surface mapping. The performance of the selected image processing techniques was enhanced by incorporating the Soil Brightness Index (SBI) and Greenness Index (GI) derived from Tasseled Cap Transformed (TCT) IKONOS and QuickBird imagery. A time series of impervious surfaces for the time frame between 2001 and 2007 was produced using the refined image processing techniques to analyze the changes in IS in Oxford. It was found that NDVI and the combined NDVI-PCA methods are the most suitable image processing techniques for mapping impervious surfaces in leaf-off and leaf-on conditions respectively, using high resolution multispectral imagery. It was also found that IS data generated by these techniques can be refined by removing the conflicting dry soil patches using SBI and GI obtained from TCT of the same imagery used for IS data generation. The change detection analysis of the IS time series shows that Oxford experienced the major changes in IS from the year 2001 to 2004 and 2006 to 2007.
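
    The NDVI step mentioned above is simple enough to sketch: compute NDVI from the red and near-infrared bands and flag low-NDVI pixels as candidate impervious surfaces. The 0.2 threshold and names are illustrative; the study refines such a mask with PCA, classification, and soil/greenness indices.

```python
import numpy as np

def impervious_candidates(red, nir, ndvi_threshold=0.2):
    """Return a boolean mask of candidate impervious pixels from red/NIR bands."""
    red, nir = red.astype(float), nir.astype(float)
    ndvi = (nir - red) / (nir + red + 1e-9)      # Normalized Difference Vegetation Index
    return ndvi < ndvi_threshold                 # low vegetation signal -> candidate IS
```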

  19. Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service

    PubMed Central

    Bao, Shunxing; Plassard, Andrew J.; Landman, Bennett A.; Gokhale, Aniruddha

    2017-01-01

    Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based “medical image processing-as-a-service” offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault-tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop’s distributed file system. Despite this promise, HBase’s load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NIfTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network-attached storage. PMID:28884169
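
    A hedged sketch of the row-key idea: encode the imaging hierarchy (project, subject, session, scan, slice) into a single ordered key so that hierarchically related rows sort, and therefore collocate, together in HBase. The separator and zero-padding are illustrative assumptions, not the paper's exact schema.

```python
def imaging_row_key(project, subject, session, scan, slice_index):
    """Build a row key whose lexicographic order follows the imaging hierarchy."""
    # Zero-pad the numeric slice index so string order matches acquisition order.
    return "|".join([str(project), str(subject), str(session), str(scan),
                     f"{int(slice_index):06d}"])

# Example: imaging_row_key("proj01", "sub012", "sess1", "scan3", 42)
# -> 'proj01|sub012|sess1|scan3|000042'
```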

  20. Magnetic tape

    NASA Technical Reports Server (NTRS)

    Robinson, Harriss

    1992-01-01

    The move to visualization and image processing in data systems is increasing the demand for larger and faster mass storage systems. The technology of choice is magnetic tape. This paper briefly reviews the technology past, present, and projected. A case is made for standards and the value of the standards to users.

  1. Cracks in Continuing Education's Mirror and a Fix To Correct Its Distorted Internal and External Image.

    ERIC Educational Resources Information Center

    Loch, John R.

    2003-01-01

    Outlines problems in continuing higher education, suggesting that it lacks (1) a standard name; (2) a unified voice on national issues; (3) a standard set of roles and functions; (4) a standard title for the chief administrative officer; (5) an accreditation body and process; and (6) resolution of the centralization/decentralization issue. (SK)

  2. Fully automatic and reference-marker-free image stitching method for full-spine and full-leg imaging with computed radiography

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohui; Foos, David H.; Doran, James; Rogers, Michael K.

    2004-05-01

    Full-leg and full-spine imaging with standard computed radiography (CR) systems requires several cassettes/storage phosphor screens to be placed in a staggered arrangement and exposed simultaneously to achieve an increased imaging area. A method has been developed that can automatically and accurately stitch the acquired sub-images without relying on any external reference markers. It can detect and correct the order, orientation, and overlap arrangement of the subimages for stitching. The automatic determination of the order, orientation, and overlap arrangement of the sub-images consists of (1) constructing a hypothesis list that includes all cassette/screen arrangements, (2) refining hypotheses based on a set of rules derived from imaging physics, (3) correlating each consecutive sub-image pair in each hypothesis and establishing an overall figure-of-merit, (4) selecting the hypothesis of maximum figure-of-merit. The stitching process requires the CR reader to over scan each CR screen so that the screen edges are completely visible in the acquired sub-images. The rotational displacement and vertical displacement between two consecutive sub-images are calculated by matching the orientation and location of the screen edge in the front image and its corresponding shadow in the back image. The horizontal displacement is estimated by maximizing the correlation function between the two image sections in the overlap region. Accordingly, the two images are stitched together. This process is repeated for the newly stitched composite image and the next consecutive sub-image until a full-image composite is created. The method has been evaluated in both phantom experiments and clinical studies. The standard deviation of image misregistration is below one image pixel.
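
    A hedged sketch of the final registration step: slide one overlap strip across the other and keep the horizontal shift that maximizes their normalized correlation. Strip extraction, the search range, and names are illustrative; the full method also recovers rotation and vertical displacement from the imaged screen edges.

```python
import numpy as np

def best_horizontal_shift(strip_a, strip_b, max_shift=50):
    """Return the column shift of strip_b that best correlates with strip_a."""
    a = (strip_a - strip_a.mean()) / (strip_a.std() + 1e-9)
    best, best_score = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        b = np.roll(strip_b, shift, axis=1)
        b = (b - b.mean()) / (b.std() + 1e-9)
        score = np.mean(a * b)                    # normalized cross-correlation
        if score > best_score:
            best, best_score = shift, score
    return best
```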

  3. Estimation of bladder wall location in ultrasound images.

    PubMed

    Topper, A K; Jernigan, M E

    1991-05-01

    A method of automatically estimating the location of the bladder wall in ultrasound images is proposed. Obtaining this estimate is intended to be the first stage in the development of an automatic bladder volume calculation system. The first step in the bladder wall estimation scheme involves globally processing the images using standard image processing techniques to highlight the bladder wall. Separate processing sequences are required to highlight the anterior bladder wall and the posterior bladder wall. The sequence to highlight the anterior bladder wall involves Gaussian smoothing and second differencing followed by zero-crossing detection. Median filtering followed by thresholding and gradient detection is used to highlight as much of the rest of the bladder wall as was visible in the original images. Then a 'bladder wall follower'--a line follower with rules based on the characteristics of ultrasound imaging and the anatomy involved--is applied to the processed images to estimate the bladder wall location by following the portions of the bladder wall which are highlighted and filling in the missing segments. The results achieved using this scheme are presented.
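
    A hedged sketch of the anterior-wall sequence: Gaussian smoothing and second differencing are combined here into a single Laplacian-of-Gaussian response whose zero crossings mark candidate wall locations. The sigma is illustrative, and the posterior-wall branch (median filtering, thresholding, gradient detection) and the rule-based wall follower are not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def anterior_wall_candidates(ultrasound, sigma=3.0):
    """Return a boolean map of zero crossings of the Laplacian-of-Gaussian response."""
    log = gaussian_laplace(ultrasound.astype(float), sigma=sigma)
    signs = np.sign(log)
    zero_cross = np.abs(np.diff(signs, axis=0)) > 0   # sign change between adjacent rows
    return np.pad(zero_cross, ((0, 1), (0, 0)))        # pad back to the original shape
```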

  4. Nanostructure size determination in p-type porous silicon by the use of transmission electron diffraction image processing

    NASA Astrophysics Data System (ADS)

    Ramirez-Porras, A.

    2005-06-01

    The structure of p-type porous silicon (PS) has been investigated by the use of transmission electron diffraction (TED) microscopy and image processing. The results suggest the presence of well oriented crystalline phases and polycrystalline phases characterized by random orientation. These phases are believed to be formed by spheres with a mean diameter of 4.3 nm and a standard deviation of 1.3 nm.

  5. Vectorized image segmentation via trixel agglomeration

    DOEpatents

    Prasad, Lakshman [Los Alamos, NM; Skourikhine, Alexei N [Los Alamos, NM

    2006-10-24

    A computer-implemented method transforms an image comprised of pixels into a vectorized image specified by a plurality of polygons that can be subsequently used to aid in image processing and understanding. The pixelated image is processed to extract edge pixels that separate different colors, and a constrained Delaunay triangulation of the edge pixels forms a plurality of triangles having edges that cover the pixelated image. A color for each one of the plurality of triangles is determined from the color pixels within each triangle. A filter is formed with a set of grouping rules related to features of the pixelated image and applied to the plurality of triangle edges to merge adjacent triangles consistent with the filter into polygons having a plurality of vertices. The pixelated image may then be reformed into an array of the polygons, which can be represented collectively and efficiently as a standard vector image.
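
    A hedged sketch of the first stages of the patented method: detect edge pixels, triangulate them, and colour each triangle with the mean of the pixels it covers. SciPy provides only an unconstrained Delaunay triangulation (the patent uses a constrained one), and the grouping-rule filter that merges triangles into polygons is omitted; all names are illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay
from skimage.feature import canny
from skimage.draw import polygon as fill_polygon

def trixels_with_mean_colour(gray_image):
    """Triangulate edge pixels and return (triangle vertices, mean grey value) pairs."""
    edge_rows, edge_cols = np.nonzero(canny(gray_image))
    points = np.column_stack([edge_rows, edge_cols])
    tri = Delaunay(points)                        # unconstrained triangulation
    trixels = []
    for simplex in tri.simplices:
        rr, cc = fill_polygon(points[simplex, 0], points[simplex, 1],
                              shape=gray_image.shape)
        if rr.size:
            trixels.append((points[simplex], gray_image[rr, cc].mean()))
    return trixels
```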

  6. Reduced exposure using asymmetric cone beam processing for wide area detector cardiac CT

    PubMed Central

    Bedayat, Arash; Kumamaru, Kanako; Powers, Sara L.; Signorelli, Jason; Steigner, Michael L.; Steveson, Chloe; Soga, Shigeyoshi; Adams, Kimberly; Mitsouras, Dimitrios; Clouse, Melvin; Mather, Richard T.

    2011-01-01

    The purpose of this study was to estimate dose reduction after implementation of asymmetrical cone beam processing using exposure differences measured in a water phantom and a small cohort of clinical coronary CTA patients. Two separate 320 × 0.5 mm detector row scans of a water phantom used identical cardiac acquisition parameters before and after software modifications from symmetric to asymmetric cone beam acquisition and processing. Exposure was measured at the phantom surface with Optically Stimulated Luminescence (OSL) dosimeters at 12 equally spaced angular locations. Mean HU and standard deviation (SD) for both approaches were compared using ROI measurements obtained at the center plus four peripheral locations in the water phantom. To assess image quality, mean HU and standard deviation (SD) for both approaches were compared using ROI measurements obtained at five points within the water phantom. Retrospective evaluation of 64 patients (37 symmetric; 27 asymmetric acquisition) included clinical data, scanning parameters, quantitative plus qualitative image assessment, and estimated radiation dose. In the water phantom, the asymmetric cone beam processing reduces exposure by approximately 20% with no change in image quality. The clinical coronary CTA patient groups had comparable demographics. The estimated dose reduction after implementation of the asymmetric approach was roughly 24% with no significant difference between the symmetric and asymmetric approach with respect to objective measures of image quality or subjective assessment using a four point scale. When compared to a symmetric approach, the decreased exposure, subsequent lower patient radiation dose, and similar image quality from asymmetric cone beam processing supports its routine clinical use. PMID:21336552

  7. Reduced exposure using asymmetric cone beam processing for wide area detector cardiac CT.

    PubMed

    Bedayat, Arash; Rybicki, Frank J; Kumamaru, Kanako; Powers, Sara L; Signorelli, Jason; Steigner, Michael L; Steveson, Chloe; Soga, Shigeyoshi; Adams, Kimberly; Mitsouras, Dimitrios; Clouse, Melvin; Mather, Richard T

    2012-02-01

    The purpose of this study was to estimate dose reduction after implementation of asymmetric cone beam processing, using exposure differences measured in a water phantom and a small cohort of clinical coronary CTA patients. Two separate 320 × 0.5 mm detector row scans of a water phantom used identical cardiac acquisition parameters before and after software modifications from symmetric to asymmetric cone beam acquisition and processing. Exposure was measured at the phantom surface with Optically Stimulated Luminescence (OSL) dosimeters at 12 equally spaced angular locations. To assess image quality, mean HU and standard deviation (SD) for both approaches were compared using ROI measurements obtained at five points within the water phantom (the center plus four peripheral locations). Retrospective evaluation of 64 patients (37 symmetric; 27 asymmetric acquisition) included clinical data, scanning parameters, quantitative plus qualitative image assessment, and estimated radiation dose. In the water phantom, asymmetric cone beam processing reduced exposure by approximately 20% with no change in image quality. The clinical coronary CTA patient groups had comparable demographics. The estimated dose reduction after implementation of the asymmetric approach was roughly 24%, with no significant difference between the symmetric and asymmetric approaches with respect to objective measures of image quality or subjective assessment using a four-point scale. When compared with the symmetric approach, the decreased exposure, subsequent lower patient radiation dose, and similar image quality from asymmetric cone beam processing support its routine clinical use.

  8. Sensitivity to image recurrence across eye-movement-like image transitions through local serial inhibition in the retina

    PubMed Central

    Krishnamoorthy, Vidhyasankar; Weick, Michael; Gollisch, Tim

    2017-01-01

    Standard models of stimulus encoding in the retina postulate that image presentations activate neurons according to the increase of preferred contrast inside the receptive field. During natural vision, however, images do not arrive in isolation, but follow each other rapidly, separated by sudden gaze shifts. We here report that, contrary to standard models, specific ganglion cells in mouse retina are suppressed after a rapid image transition by changes in visual patterns across the transition, but respond with a distinct spike burst when the same pattern reappears. This sensitivity to image recurrence depends on opposing effects of glycinergic and GABAergic inhibition and can be explained by a circuit of local serial inhibition. Rapid image transitions thus trigger a mode of operation that differs from the processing of simpler stimuli and allows the retina to tag particular image parts or to detect transition types that lead to recurring stimulus patterns. DOI: http://dx.doi.org/10.7554/eLife.22431.001 PMID:28230526

  9. New methods of MR image intensity standardization via generalized scale

    NASA Astrophysics Data System (ADS)

    Madabhushi, Anant; Udupa, Jayaram K.

    2005-04-01

    Image intensity standardization is a post-acquisition processing operation designed to correct acquisition-to-acquisition signal intensity variations (non-standardness) inherent in Magnetic Resonance (MR) images. While existing standardization methods based on histogram landmarks have been shown to produce a significant gain in the similarity of resulting image intensities, their weakness is that, in some instances, the same histogram-based landmark may represent one tissue, while in other instances it may represent a different tissue. This is often true for diseased or abnormal patient studies, in which significant changes in the image intensity characteristics may occur. In an attempt to overcome this problem, we present in this paper two new intensity standardization methods based on the concept of generalized scale. In reference 1 we introduced the concept of generalized scale (g-scale) to overcome the shape, topological, and anisotropic constraints imposed by other local morphometric scale models. Roughly speaking, the g-scale of a voxel in a scene was defined as the largest set of voxels connected to the voxel that satisfy some homogeneity criterion. We subsequently formulated a variant of the generalized scale notion, referred to as generalized ball scale (gB-scale), which, in addition to having the advantages of g-scale, also has superior noise resistance. These scale concepts are utilized in this paper to accurately determine principal tissue regions within MR images, and landmarks derived from these regions are used to perform intensity standardization. The new methods were qualitatively and quantitatively evaluated on a total of 67 clinical 3D MR images corresponding to four different protocols and to normal, Multiple Sclerosis (MS), and brain tumor patient studies. The generalized scale-based methods were found to be better than the existing methods, with a significant improvement observed for severely diseased and abnormal patient studies.
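
    For context, a minimal sketch of the histogram-landmark standardization that the g-scale methods refine: percentile landmarks are mapped onto a learned standard scale by piecewise-linear interpolation. The percentile choice and function names are illustrative, not the paper's.

      import numpy as np

      def learn_landmarks(train_imgs, pcts=(1, 10, 25, 50, 75, 90, 99)):
          """Standard-scale landmarks: average each percentile over training images."""
          return np.mean([np.percentile(im, pcts) for im in train_imgs], axis=0)

      def standardize(img, std_landmarks, pcts=(1, 10, 25, 50, 75, 90, 99)):
          """Map the image's own percentile landmarks onto the standard scale
          with piecewise-linear interpolation between landmark pairs."""
          own = np.percentile(img, pcts)
          return np.interp(img.ravel(), own, std_landmarks).reshape(img.shape)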

  10. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assessing image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses of different CT imaging technologies in transportation security. To that end, we have designed, developed, and constructed phantoms that allow systematic and repeatable measurement of roughly 88 image quality metrics, covering modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis toolkit to analyze CT images of the phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that, in contrast to standard principal component analysis (PCA), generates components with sparse loadings; it is used in conjunction with Hotelling's T2 statistical analysis to compare, qualify, and detect faults in the tested systems.
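
    As one example of the quantitative metrics listed, here is a sketch of a 2-D noise power spectrum estimate from repeated uniform-phantom ROIs; numpy stands in for the MATLAB toolkit, and the names are illustrative.

      import numpy as np

      def noise_power_spectrum(rois, pixel_mm):
          """NPS(u,v) = (dx*dy / (Nx*Ny)) * mean |DFT(ROI - mean)|^2
          over an ensemble of uniform-region ROIs (n, ny, nx)."""
          rois = np.asarray(rois, dtype=float)
          rois -= rois.mean(axis=(1, 2), keepdims=True)   # detrend each ROI
          n, ny, nx = rois.shape
          nps = (np.abs(np.fft.fft2(rois)) ** 2).mean(axis=0)
          return np.fft.fftshift(nps) * pixel_mm ** 2 / (ny * nx)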

  11. A Study of Alternative Computer Architectures for System Reliability and Software Simplification.

    DTIC Science & Technology

    1981-04-22

    compression. Several known applications of neighborhood processing, such as noise removal and boundary smoothing, are shown to be special cases of...Processing [21] A small effort was undertaken to implement image array processing at a very low cost. To this end, a standard Qwip Facsimile

  12. Design and implementation of non-linear image processing functions for CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel

    2012-11-01

    Today, solid-state image sensors are used in many applications, such as mobile phones, video surveillance systems, embedded medical imaging, and industrial vision systems. These image sensors require the integration in the focal plane (or near the focal plane) of complex image processing algorithms. Such devices must meet the constraints related to the quality of acquired images, speed and performance of embedded processing, as well as low power consumption. To achieve these objectives, low-level analog processing allows extracting the useful information in the scene directly. For example, an edge detection step followed by local maxima extraction facilitates high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor built in a standard 0.35 μm CMOS technology and including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors, and each pixel measures 40×40 μm. The total area of the 64×64 pixel array is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculation in less than one ms.
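
    A software analogue of the MMU's neighbourhood operation (the chip computes this in the analog domain; this numpy sketch only mirrors the arithmetic):

      import numpy as np

      def mmu(frame):
          """Min and max over every overlapping 2x2 pixel neighbourhood."""
          quad = np.stack([frame[:-1, :-1], frame[:-1, 1:],
                           frame[1:, :-1], frame[1:, 1:]])
          return quad.min(axis=0), quad.max(axis=0)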

  13. Multilevel principal component analysis (mPCA) in shape analysis: A feasibility study in medical and dental imaging.

    PubMed

    Farnell, D J J; Popat, H; Richmond, S

    2016-06-01

    Methods used in image processing should reflect any multilevel structures inherent in the image dataset or they run the risk of functioning inadequately. We wish to test the feasibility of multilevel principal components analysis (PCA) to build active shape models (ASMs) for cases relevant to medical and dental imaging. Multilevel PCA (mPCA) was used to carry out model fitting to sets of landmark points and was compared to the results of "standard" (single-level) PCA. Proof of principle was tested by applying mPCA to model basic peri-oral expressions (happy, neutral, sad) approximated at the junction between the mouth and lips. Monte Carlo simulations were used to create these data, which allowed exploration of practical implementation issues such as the number of landmark points, number of images, and number of groups (i.e., "expressions" for this example). To further test the robustness of the method, mPCA was subsequently applied to a dental imaging dataset utilising landmark points (placed by different clinicians) along the boundary of mandibular cortical bone in panoramic radiographs of the face. For the Monte Carlo dataset, changes of expression that varied between groups were modelled correctly at one level of the model and changes in lip width that varied within groups at another. Extreme cases in the test dataset were modelled adequately by mPCA but not by standard PCA. Similarly, for the panoramic radiographs dataset, variations in the shape of the cortical bone were modelled at one level of mPCA and variations between the experts at another. Results for mPCA were found to be comparable to those of standard PCA for point-to-point errors via leave-one-out testing for this dataset. These errors reduce with increasing number of retained eigenvectors/eigenvalues, as expected. We have shown that mPCA can be used in shape models for dental and medical image processing. mPCA was found to provide more control and flexibility when compared to standard "single-level" PCA. Specifically, mPCA is preferable to "standard" PCA when multiple levels occur naturally in the dataset. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
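
    A compact two-level sketch of the idea behind mPCA, splitting variation into a between-group level (e.g., expression) and a within-group level; it ignores the paper's exact weighting between levels, and the names are illustrative.

      import numpy as np

      def mpca(shapes, groups):
          """shapes: (n_samples, n_coords) landmark vectors; groups: labels."""
          groups = np.asarray(groups)
          labels = np.unique(groups)
          grand = shapes.mean(axis=0)
          means = np.array([shapes[groups == g].mean(axis=0) for g in labels])
          # level 1: between-group covariance of the group means;
          # level 2: within-group covariance of the residuals.
          within = shapes - means[np.searchsorted(labels, groups)]
          def pca(X):
              w, V = np.linalg.eigh(np.cov(X, rowvar=False))
              return w[::-1], V[:, ::-1]           # descending eigenvalues
          return grand, pca(means - grand), pca(within)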

  14. Interoperability in planetary research for geospatial data analysis

    NASA Astrophysics Data System (ADS)

    Hare, Trent M.; Rossi, Angelo P.; Frigeri, Alessandro; Marmo, Chiara

    2018-01-01

    For more than a decade there has been a push in the planetary science community to support interoperable methods for accessing and working with geospatial data. Common geospatial data products for planetary research include image mosaics, digital elevation or terrain models, geologic maps, geographic location databases (e.g., craters, volcanoes), or any data that can be tied to the surface of a planetary body (including moons, comets, or asteroids). Several U.S. and international cartographic research institutions have converged on mapping standards that embrace standardized geospatial image formats, geologic mapping conventions, U.S. Federal Geographic Data Committee (FGDC) cartographic and metadata standards, and, notably, on-line mapping services as defined by the Open Geospatial Consortium (OGC). The latter include defined standards such as the OGC Web Map Services (simple image maps), Web Map Tile Services (cached image tiles), Web Feature Services (feature streaming), Web Coverage Services (rich scientific data streaming), and Catalog Services for the Web (data searching and discoverability). While these standards were developed for application to Earth-based data, they can be just as valuable for the planetary domain. Another initiative, called VESPA (Virtual European Solar and Planetary Access), will marry several of the above geoscience standards with astronomy-based standards as defined by the International Virtual Observatory Alliance (IVOA). This work outlines the current state of interoperability initiatives in use or under research within the planetary geospatial community.
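
    As an illustration of consuming one of these services, a sketch using the OWSLib Python package against a hypothetical planetary WMS endpoint; the URL and layer name are placeholders, not a real service.

      from owslib.wms import WebMapService

      # Placeholder endpoint and layer; substitute a real planetary WMS.
      wms = WebMapService("https://example.org/planetary/wms", version="1.1.1")
      resp = wms.getmap(layers=["mars_mosaic"],
                        srs="EPSG:4326",
                        bbox=(-180.0, -90.0, 180.0, 90.0),
                        size=(1024, 512),
                        format="image/png")
      with open("mars_mosaic.png", "wb") as f:
          f.write(resp.read())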

  15. Pictures, images, and recollective experience.

    PubMed

    Dewhurst, S A; Conway, M A

    1994-09-01

    Five experiments investigated the influence of picture processing on recollective experience in recognition memory. Subjects studied items that differed in visual or imaginal detail, such as pictures versus words and high-imageability versus low-imageability words, and performed orienting tasks that directed processing either toward a stimulus as a word or toward a stimulus as a picture or image. Standard effects of imageability (e.g., the picture superiority effect and memory advantages following imagery) were obtained only in recognition judgments that featured recollective experience and were eliminated or reversed when recognition was not accompanied by recollective experience. It is proposed that conscious recollective experience in recognition memory is cued by attributes of retrieved memories such as sensory-perceptual attributes and records of cognitive operations performed at encoding.

  16. Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images

    PubMed Central

    Zhou, Mingyuan; Chen, Haojun; Paisley, John; Ren, Lu; Li, Lingbo; Xing, Zhengming; Dunson, David; Sapiro, Guillermo; Carin, Lawrence

    2013-01-01

    Nonparametric Bayesian methods are considered for recovery of imagery based upon compressive, incomplete, and/or noisy measurements. A truncated beta-Bernoulli process is employed to infer an appropriate dictionary for the data under test and also for image recovery. In the context of compressive sensing, significant improvements in image recovery are manifested using learned dictionaries, relative to using standard orthonormal image expansions. The compressive-measurement projections are also optimized for the learned dictionary. Additionally, we consider simpler (incomplete) measurements, defined by measuring a subset of image pixels, uniformly selected at random. Spatial interrelationships within imagery are exploited through use of the Dirichlet and probit stick-breaking processes. Several example results are presented, with comparisons to other methods in the literature. PMID:21693421
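
    A simplified, non-Bayesian analogue of the dictionary-learning step, using scikit-learn on 8×8 patches; the paper's truncated beta-Bernoulli inference and measurement optimization are not reproduced here, and the names are illustrative.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning
      from sklearn.feature_extraction.image import extract_patches_2d

      def learn_dictionary(img, n_atoms=64):
          patches = extract_patches_2d(img, (8, 8), max_patches=5000)
          X = patches.reshape(len(patches), -1).astype(float)
          X -= X.mean(axis=1, keepdims=True)       # remove per-patch DC offset
          dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                             transform_algorithm="omp",
                                             transform_n_nonzero_coefs=5)
          dico.fit(X)
          return dico    # sparse codes for new patches via dico.transform(...)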

  17. LANDSAT 2 world standard catalog, 1-31 December 1978

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The World Standard Catalog lists imagery acquired by LANDSAT 2 which was processed and input to the data files during the referenced period. Information on cloud cover and image quality is given for each scene. The microfilm roll and frame on which the scene may be found is presented.

  18. LANDSAT 3 world standard catalog, 1-31 December 1978

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The World Standard Catalog lists imagery acquired by LANDSAT 3 which was processed and input to the data files during the referenced period. Information on cloud cover and image quality is given for each scene. The microfilm roll and frame on which the scene may be found is given.

  19. Image processing techniques for digital orthophotoquad production

    USGS Publications Warehouse

    Hood, Joy J.; Ladner, L. J.; Champion, Richard A.

    1989-01-01

    Orthophotographs have long been recognized for their value as supplements or alternatives to standard maps. Recent trends towards digital cartography have resulted in efforts by the US Geological Survey to develop a digital orthophotoquad production system. Digital image files were created by scanning color infrared photographs on a microdensitometer. Rectification techniques were applied to remove tilt and relief displacement, thereby creating digital orthophotos. Image mosaicking software was then used to join the rectified images, producing digital orthophotos in quadrangle format.

  20. Technical innovation changes standard radiographic protocols in veterinary medicine: is it necessary to obtain two dorsoproximal-palmarodistal oblique views of the equine foot when using computerised radiography systems?

    PubMed

    Whitlock, J; Dixon, J; Sherlock, C; Tucker, R; Bolt, D M; Weller, R

    2016-05-21

    Since the 1950s, veterinary practitioners have included two separate dorsoproximal-palmarodistal oblique (DPr-PaDiO) radiographs as part of a standard series of the equine foot: one image is obtained to visualise the distal phalanx and the other to visualise the navicular bone. However, the rapid development of computed radiography and digital radiography and their post-processing capabilities could mean that this practice is no longer required. The aim of this study was to determine differences in perceived image quality between DPr-PaDiO radiographs acquired with a computerised radiography system using the exposures, centring, and collimation recommended for the navicular bone, versus images acquired for the distal phalanx and subsequently manipulated post-acquisition to highlight the navicular bone. Thirty images were presented to four clinicians for quality assessment and graded on a 1-3 scale (1=textbook quality, 2=diagnostic quality, 3=non-diagnostic image). No significant difference in diagnostic quality was found between the original navicular bone images and the manipulated distal phalanx images. This finding suggests that a single DPr-PaDiO image of the distal phalanx, with appropriate post-processing and manipulation, is sufficient for an equine foot radiographic series. This change in protocol will result in reduced radiographic study time and decreased patient/personnel radiation exposure. British Veterinary Association.

  1. Neuroimaging standards for research into small vessel disease and its contribution to ageing and neurodegeneration

    PubMed Central

    Wardlaw, Joanna M; Smith, Eric E; Biessels, Geert J; Cordonnier, Charlotte; Fazekas, Franz; Frayne, Richard; Lindley, Richard I; O'Brien, John T; Barkhof, Frederik; Benavente, Oscar R; Black, Sandra E; Brayne, Carol; Breteler, Monique; Chabriat, Hugues; DeCarli, Charles; de Leeuw, Frank-Erik; Doubal, Fergus; Duering, Marco; Fox, Nick C; Greenberg, Steven; Hachinski, Vladimir; Kilimann, Ingo; Mok, Vincent; Oostenbrugge, Robert van; Pantoni, Leonardo; Speck, Oliver; Stephan, Blossom C M; Teipel, Stefan; Viswanathan, Anand; Werring, David; Chen, Christopher; Smith, Colin; van Buchem, Mark; Norrving, Bo; Gorelick, Philip B; Dichgans, Martin

    2013-01-01

    Summary Cerebral small vessel disease (SVD) is a common accompaniment of ageing. Features seen on neuroimaging include recent small subcortical infarcts, lacunes, white matter hyperintensities, perivascular spaces, microbleeds, and brain atrophy. SVD can present as a stroke or cognitive decline, or can have few or no symptoms. SVD frequently coexists with neurodegenerative disease, and can exacerbate cognitive deficits, physical disabilities, and other symptoms of neurodegeneration. Terminology and definitions for imaging the features of SVD vary widely, which is also true for protocols for image acquisition and image analysis. This lack of consistency hampers progress in identifying the contribution of SVD to the pathophysiology and clinical features of common neurodegenerative diseases. We are an international working group from the Centres of Excellence in Neurodegeneration. We completed a structured process to develop definitions and imaging standards for markers and consequences of SVD. We aimed to achieve the following: first, to provide a common advisory about terms and definitions for features visible on MRI; second, to suggest minimum standards for image acquisition and analysis; third, to agree on standards for scientific reporting of changes related to SVD on neuroimaging; and fourth, to review emerging imaging methods for detection and quantification of preclinical manifestations of SVD. Our findings and recommendations apply to research studies, and can be used in the clinical setting to standardise image interpretation, acquisition, and reporting. This Position Paper summarises the main outcomes of this international effort to provide the STandards for ReportIng Vascular changes on nEuroimaging (STRIVE). PMID:23867200

  2. Quality assurance of multiport image-guided minimally invasive surgery at the lateral skull base.

    PubMed

    Nau-Hermes, Maria; Schmitt, Robert; Becker, Meike; El-Hakimi, Wissam; Hansen, Stefan; Klenzner, Thomas; Schipper, Jörg

    2014-01-01

    For multiport image-guided minimally invasive surgery at the lateral skull base, quality management is necessary to avoid damage to closely spaced critical neurovascular structures. So far, no standardized method exists that is applicable independently of the surgery. We therefore adapt a quality management method, the quality gates (QG), which is well established in, for example, the automotive industry, and apply it to multiport image-guided minimally invasive surgery. QG divide a process into different sections. Passing between sections can only be achieved if previously defined requirements are fulfilled, which secures the process chain. An interdisciplinary team of otosurgeons, computer scientists, and engineers has worked together to define the quality gates and the corresponding criteria that need to be fulfilled before passing each quality gate. In order to evaluate the defined QG and their criteria, the new surgical method was applied with a first prototype to a human skull cadaver model. We show that the QG method can ensure a safe multiport minimally invasive surgical process at the lateral skull base. We thereby present an approach towards the standardization of quality assurance of surgical processes.

  3. Quality Assurance of Multiport Image-Guided Minimally Invasive Surgery at the Lateral Skull Base

    PubMed Central

    Nau-Hermes, Maria; Schmitt, Robert; Becker, Meike; El-Hakimi, Wissam; Hansen, Stefan; Klenzner, Thomas; Schipper, Jörg

    2014-01-01

    For multiport image-guided minimally invasive surgery at the lateral skull base, quality management is necessary to avoid damage to closely spaced critical neurovascular structures. So far, no standardized method exists that is applicable independently of the surgery. We therefore adapt a quality management method, the quality gates (QG), which is well established in, for example, the automotive industry, and apply it to multiport image-guided minimally invasive surgery. QG divide a process into different sections. Passing between sections can only be achieved if previously defined requirements are fulfilled, which secures the process chain. An interdisciplinary team of otosurgeons, computer scientists, and engineers has worked together to define the quality gates and the corresponding criteria that need to be fulfilled before passing each quality gate. In order to evaluate the defined QG and their criteria, the new surgical method was applied with a first prototype to a human skull cadaver model. We show that the QG method can ensure a safe multiport minimally invasive surgical process at the lateral skull base. We thereby present an approach towards the standardization of quality assurance of surgical processes. PMID:25105146

  4. Quality evaluation of no-reference MR images using multidirectional filters and image statistics.

    PubMed

    Jang, Jinseong; Bang, Kihun; Jang, Hanbyol; Hwang, Dosik

    2018-09-01

    This study aimed to develop a fully automatic, no-reference image-quality assessment (IQA) method for MR images. New quality-aware features were obtained by applying multidirectional filters to MR images and examining the feature statistics. A histogram of these features was then fitted to a generalized Gaussian distribution function for which the shape parameters yielded different values depending on the type of distortion in the MR image. Standard feature statistics were established through a training process based on high-quality MR images without distortion. Subsequently, the feature statistics of a test MR image were calculated and compared with the standards. The quality score was calculated as the difference between the shape parameters of the test image and the undistorted standard images. The proposed IQA method showed a >0.99 correlation with the conventional full-reference assessment methods; accordingly, this proposed method yielded the best performance among no-reference IQA methods for images containing six types of synthetic, MR-specific distortions. In addition, for authentically distorted images, the proposed method yielded the highest correlation with subjective assessments by human observers, thus demonstrating its superior performance over other no-reference IQAs. Our proposed IQA was designed to consider MR-specific features and outperformed other no-reference IQAs designed mainly for photographic images. Magn Reson Med 80:914-924, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
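
    A sketch of the feature step, assuming simple difference filters stand in for the multidirectional filter bank and using scipy's generalized normal fit; the paper's actual filters and scoring are richer.

      import numpy as np
      from scipy.ndimage import convolve
      from scipy.stats import gennorm

      def shape_features(img, kernels):
          """Fit a generalized Gaussian to each filter-response histogram;
          the fitted shape parameter is the quality-aware feature."""
          feats = []
          for k in kernels:
              resp = convolve(img.astype(float), k).ravel()
              beta, loc, scale = gennorm.fit(resp)
              feats.append(beta)
          return np.array(feats)

      # Two illustrative directions: horizontal and vertical differences.
      kernels = [np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])]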

  5. Graph theory for feature extraction and classification: a migraine pathology case study.

    PubMed

    Jorge-Hernandez, Fernando; Garcia Chimeno, Yolanda; Garcia-Zapirain, Begonya; Cabrera Zubizarreta, Alberto; Gomez Beldarrain, Maria Angeles; Fernandez-Ruanova, Begonya

    2014-01-01

    Graph theory is widely used as a representational form and characterization of brain connectivity networks, as is machine learning for classifying groups according to the features extracted from images. Many of these studies use different techniques, such as preprocessing, correlations, features, or algorithms. This paper proposes an automatic tool to perform a standard process using Magnetic Resonance Imaging (MRI) images. The process includes pre-processing, building the graph per subject with different correlations and atlases, extracting relevant features according to the literature, and finally providing a set of machine learning algorithms which can produce analyzable results for physicians or specialists. In order to verify the process, a set of images from prescription drug abusers and patients with migraine was used. In this way, the proper functioning of the tool was demonstrated, with success rates of 87% and 92% depending on the classifier used.
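
    A sketch of the graph-building and feature-extraction stages with NetworkX, thresholding a region-to-region correlation matrix; the threshold and feature choices are illustrative, not the paper's.

      import networkx as nx
      import numpy as np

      def graph_features(corr, threshold=0.3):
          """Binarize a correlation matrix into a connectivity graph and
          extract a few literature-standard features for classification."""
          adj = (np.abs(corr) > threshold).astype(int)
          np.fill_diagonal(adj, 0)
          G = nx.from_numpy_array(adj)
          return {
              "mean_degree": np.mean([d for _, d in G.degree()]),
              "clustering": nx.average_clustering(G),
              "efficiency": nx.global_efficiency(G),
          }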

  6. Asymmetry and irregularity border as discrimination factor between melanocytic lesions

    NASA Astrophysics Data System (ADS)

    Sbrissa, David; Pratavieira, Sebastião.; Salvio, Ana Gabriela; Kurachi, Cristina; Bagnato, Vanderlei Salvadori; Costa, Luciano Da Fontoura; Travieso, Gonzalo

    2015-06-01

    Image processing tools have been widely used in systems supporting medical diagnosis. The use of mobile devices for the diagnosis of melanoma can assist doctors and improve their diagnosis of melanocytic lesions. This study proposes a method of image analysis for discriminating melanoma from other types of melanocytic lesions, such as regular and atypical nevi. The process is based on extracting features related to asymmetry and border irregularity. A total of 104 images were collected from a medical database covering two years. The images were obtained with standard digital cameras without lighting or scale control. Metrics relating to shape, asymmetry, and contour curvature were extracted from the segmented images. Linear Discriminant Analysis was performed for dimensionality reduction and data visualization. Segmentation results showed good efficiency, with approximately 88.5% accuracy. Validation results show a sensitivity and specificity of 85% and 70%, respectively, for melanoma detection.
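
    Representative versions of the two feature families, computed on a binary lesion mask; the paper's exact shape and curvature metrics may differ, and the names are illustrative.

      import numpy as np
      from skimage.measure import perimeter

      def asymmetry(mask):
          """Non-overlap fraction between the lesion and its mirror image
          about the vertical axis of its bounding box."""
          ys, xs = np.nonzero(mask)
          crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
          return np.logical_xor(crop, crop[:, ::-1]).sum() / crop.sum()

      def border_irregularity(mask):
          """Circularity-based score: 1 for a disc, larger for ragged borders."""
          p = perimeter(mask)
          return p ** 2 / (4 * np.pi * mask.sum())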

  7. 3D printed pathological sectioning boxes to facilitate radiological-pathological correlation in hepatectomy cases.

    PubMed

    Trout, Andrew T; Batie, Matthew R; Gupta, Anita; Sheridan, Rachel M; Tiao, Gregory M; Towbin, Alexander J

    2017-11-01

    Radiogenomics promises to identify tumour imaging features indicative of genomic or proteomic aberrations that can be therapeutically targeted, allowing precision personalised therapy. An accurate radiological-pathological correlation is critical to the process of radiogenomic characterisation of tumours. An accurate correlation, however, is difficult to achieve with current pathological sectioning techniques, which result in sectioning in non-standard planes. The purpose of this work is to present a technique to standardise hepatic sectioning to facilitate radiological-pathological correlation. We describe a process in which three-dimensional (3D)-printed specimen boxes based on preoperative cross-sectional imaging (CT and MRI) can be used to facilitate pathological sectioning in standard planes immediately on hepatic resection, enabling improved tumour mapping. We have applied this process in 13 patients undergoing hepatectomy and have observed close correlation between imaging and gross pathology in patients with both unifocal and multifocal tumours. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  8. Data analysis for GOPEX image frames

    NASA Technical Reports Server (NTRS)

    Levine, B. M.; Shaik, K. S.; Yan, T.-Y.

    1993-01-01

    The data analysis based on the image frames received at the Solid State Imaging (SSI) camera of the Galileo Optical Experiment (GOPEX) demonstration conducted between 9-16 Dec. 1992 is described. A laser uplink was successfully established between the ground and the Galileo spacecraft during its second Earth-gravity-assist phase in December 1992. SSI camera frames were acquired which contained images of detected laser pulses transmitted from the Table Mountain Facility (TMF), Wrightwood, California, and the Starfire Optical Range (SOR), Albuquerque, New Mexico. Laser pulse data were processed using standard image-processing techniques at the Multimission Image Processing Laboratory (MIPL) for preliminary pulse identification and to produce public release images. Subsequent image analysis corrected for background noise to measure received pulse intensities. Data were plotted to obtain histograms on a daily basis and were then compared with theoretical results derived from applicable weak-turbulence and strong-turbulence considerations. Processing steps are described and the theories are compared with the experimental results. Quantitative agreement was found in both turbulence regimes, and better agreement would have been obtained with a larger number of received laser pulses. Future experiments should consider methods to reliably measure low-intensity pulses, and should plan geometries that locate pulse positions with greater certainty.
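
    A minimal sketch of a background-corrected pulse intensity measurement of the kind described; the window sizes and names are chosen for illustration, not taken from the GOPEX pipeline.

      import numpy as np

      def pulse_intensity(frame, centre, r=3):
          """Sum counts in a small window around the detected pulse and
          subtract the local median background from a wider patch."""
          y, x = centre
          win = frame[y - r:y + r + 1, x - r:x + r + 1].astype(float)
          patch = frame[y - 3 * r:y + 3 * r + 1,
                        x - 3 * r:x + 3 * r + 1].astype(float)
          return win.sum() - np.median(patch) * win.size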

  9. An innovative system for 3D clinical photography in the resource-limited settings.

    PubMed

    Baghdadchi, Saharnaz; Liu, Kimberly; Knapp, Jacquelyn; Prager, Gabriel; Graves, Susannah; Akrami, Kevan; Manuel, Rolanda; Bastos, Rui; Reid, Erin; Carson, Dennis; Esener, Sadik; Carson, Joseph; Liu, Yu-Tsueng

    2014-06-15

    Kaposi's sarcoma (KS) is the most frequently occurring cancer in Mozambique among men and the second most frequently occurring cancer among women. Effective therapeutic treatments for KS are poorly understood in this area. There is an unmet need to develop a simple but accurate tool for improved monitoring and diagnosis in a resource-limited setting. Standardized clinical photographs have been considered an essential part of the evaluation. When a therapeutic response is achieved, nodular KS often exhibits a reduction in thickness without a change in the base area of the lesion. To evaluate the vertical dimension along with other characteristics of a KS lesion, we have created an innovative imaging system with a consumer light-field camera attached to a miniature "photography studio" adaptor. The image file can be further processed by computational methods for quantification. With this novel imaging system, each high-quality 3D image was consistently obtained with a single camera shot at the bedside by minimally trained personnel. After computational processing, all-focused photos and measurable 3D parameters were obtained. More than 80 KS image sets were processed in a semi-automated fashion. In this proof-of-concept study, the feasibility of using a simple, low-cost, and user-friendly system has been established for a future clinical study to monitor KS therapeutic response. This 3D imaging system can also be applied to obtain standardized clinical photographs for other diseases.

  10. An innovative system for 3D clinical photography in the resource-limited settings

    PubMed Central

    2014-01-01

    Background Kaposi’s sarcoma (KS) is the most frequently occurring cancer in Mozambique among men and the second most frequently occurring cancer among women. Effective therapeutic treatments for KS are poorly understood in this area. There is an unmet need to develop a simple but accurate tool for improved monitoring and diagnosis in a resource-limited setting. Standardized clinical photographs have been considered an essential part of the evaluation. Methods When a therapeutic response is achieved, nodular KS often exhibits a reduction in thickness without a change in the base area of the lesion. To evaluate the vertical dimension along with other characteristics of a KS lesion, we have created an innovative imaging system with a consumer light-field camera attached to a miniature “photography studio” adaptor. The image file can be further processed by computational methods for quantification. Results With this novel imaging system, each high-quality 3D image was consistently obtained with a single camera shot at the bedside by minimally trained personnel. After computational processing, all-focused photos and measurable 3D parameters were obtained. More than 80 KS image sets were processed in a semi-automated fashion. Conclusions In this proof-of-concept study, the feasibility of using a simple, low-cost, and user-friendly system has been established for a future clinical study to monitor KS therapeutic response. This 3D imaging system can also be applied to obtain standardized clinical photographs for other diseases. PMID:24929434

  11. Initial clinical testing of a multi-spectral imaging system built on a smartphone platform

    NASA Astrophysics Data System (ADS)

    Mink, Jonah W.; Wexler, Shraga; Bolton, Frank J.; Hummel, Charles; Kahn, Bruce S.; Levitz, David

    2016-03-01

    Multi-spectral imaging systems are often expensive and bulky. An innovative multi-spectral imaging system was fitted onto a mobile colposcope, an imaging system built around a smartphone, in order to image the uterine cervix from outside the body. The multi-spectral mobile colposcope (MSMC) acquires images at different wavelengths. This paper presents the clinical testing of MSMC imaging (technical validation of the MSMC system is described elsewhere [1]). Patients who were referred to colposcopy following an abnormal screening test (Pap or HPV DNA test) according to the standard of care were enrolled. Multi-spectral image sets of the cervix were acquired, consisting of images from the various wavelengths. Image acquisition took 1-2 s. Areas suspected of dysplasia under white-light imaging were biopsied, according to the standard of care. Biopsied sites were recorded on a clock-face map of the cervix. Following the procedure, MSMC data from the biopsied sites were processed. To date, the initial histopathological results are still outstanding. Qualitatively, structures in the cervical images were sharper at lower wavelengths than at higher wavelengths. Patients tolerated imaging well. These results suggest that the MSMC holds promise for cervical imaging.

  12. Business Model for the Security of a Large-Scale PACS, Compliance with ISO/27002:2013 Standard.

    PubMed

    Gutiérrez-Martínez, Josefina; Núñez-Gaona, Marco Antonio; Aguirre-Meneses, Heriberto

    2015-08-01

    Data security is a critical issue in an organization; proper information security management (ISM) is an ongoing process that seeks to build and maintain programs, policies, and controls for protecting information. A hospital is one of the most complex organizations, where patient information has not only legal and economic implications but, more importantly, an impact on the patient's health. Imaging studies include medical images, patient identification data, and proprietary information of the study; these data are contained in the storage device of a PACS. This system must preserve the confidentiality, integrity, and availability of patient information. There are techniques such as firewalls, encryption, and data encapsulation that contribute to the protection of information. In addition, the Digital Imaging and Communications in Medicine (DICOM) standard and the requirements of the Health Insurance Portability and Accountability Act (HIPAA) regulations are also used to protect the patient clinical data. However, these techniques are not systematically applied to the picture archiving and communication system (PACS) in most cases and are not sufficient to ensure the integrity of the images and associated data during transmission. The ISO/IEC 27001:2013 standard has been developed to improve ISM. Currently, health institutions lack effective ISM processes that enable reliable interorganizational activities. In this paper, we present a business model that implements the controls of the ISO/IEC 27002:2013 standard and the security and privacy criteria from DICOM and HIPAA to improve the ISM of a large-scale PACS. The methodology associated with the model can monitor the flow of data in a PACS, facilitating the detection of unauthorized access to images and other abnormal activities.

  13. Image processing can cause some malignant soft-tissue lesions to be missed in digital mammography images.

    PubMed

    Warren, L M; Halling-Brown, M D; Looney, P T; Dance, D R; Wallis, M G; Given-Wilson, R M; Wilkinson, L; McAvinchey, R; Young, K C

    2017-09-01

    To investigate the effect of image processing on cancer detection in mammography. An observer study was performed using 349 digital mammography images of women with normal breasts, calcification clusters, or soft-tissue lesions, including 191 subtle cancers. Images underwent two types of processing: FlavourA (standard) and FlavourB (added enhancement). Six observers located features in the breast they suspected to be cancerous (4,188 observations). Data were analysed using jackknife alternative free-response receiver operating characteristic (JAFROC) analysis. Characteristics of the cancers detected with each image processing type were investigated. For calcifications, the JAFROC figure of merit (FOM) was equal to 0.86 for both types of image processing. For soft-tissue lesions, the JAFROC FOM was better for FlavourA (0.81) than FlavourB (0.78); this difference was significant (p=0.001). Using FlavourA, a greater number of cancers of all grades and sizes were detected than with FlavourB. FlavourA improved soft-tissue lesion detection in denser breasts (p=0.04 when volumetric density was over 7.5%). CONCLUSIONS: The detection of malignant soft-tissue lesions (which were primarily invasive) was significantly better with FlavourA than FlavourB image processing. This is despite FlavourB having a higher-contrast appearance often preferred by radiologists. It is important that the clinical choice of image processing is based on objective measures. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  14. Left-right facial orientation of familiar faces: developmental aspects of "the mere exposure hypothesis".

    PubMed

    Amestoy, Anouck; Bouvard, Manuel P; Cazalets, Jean-René

    2010-01-01

    We investigated the developmental aspect of sensitivity to the orientation of familiar faces by asking 38 adults and 72 children from 3 to 12 years old to make a preference choice between standard and mirror images of themselves and of familiar faces, presented side-by-side or successively. When familiar (parental) faces were presented simultaneously, 3- to 5-year-olds showed no preference, but by age 5-7 years an adult-like preference for the standard image emerged. Similarly, the adult-like preference for the mirror image of their own face emerged by 5-7 years of age. When familiar or self faces were presented successively, 3- to 7-year-olds showed no preference, and adult-like preference for the standard image emerged by age 7-12 years. These results suggest the occurrence of a developmental process in the perception of familiar face asymmetries which is retained in memory related to knowledge about faces.

  15. Fluorescence intensity positivity classification of Hep-2 cells images using fuzzy logic

    NASA Astrophysics Data System (ADS)

    Sazali, Dayang Farzana Abang; Janier, Josefina Barnachea; May, Zazilah Bt.

    2014-10-01

    Indirect Immunofluorescence (IIF) on Hep-2 cells is the accepted standard for antinuclear autoantibody (ANA) testing to determine specific diseases. Different classifier algorithms have been proposed in previous works; however, there is still no validated standard for classifying fluorescence intensity. This paper presents the use of fuzzy logic to classify fluorescence intensity and to determine the positivity of Hep-2 cell serum samples. The algorithm involves pre-processing the images by filtering noise and smoothing, converting the images from the red, green, and blue (RGB) color space to the LAB color space (lightness plus chromaticity layers "a" and "b"), extracting the mean values of the lightness and chromaticity layer "a", and classifying them with a fuzzy logic algorithm based on the standard score ranges of ANA fluorescence intensity. Using 100 data sets of positive and intermediate fluorescence intensity to test the performance, the fuzzy logic classifier obtained accuracies of 85% and 87% for the intermediate and positive classes, respectively.
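
    The colour-space step above, sketched with scikit-image (the fuzzy membership functions themselves are not reproduced, and the function name is illustrative):

      import numpy as np
      from skimage import color

      def intensity_features(rgb):
          """Convert RGB to CIELAB and return the mean lightness (L) and
          mean chromaticity (a), the two inputs to the fuzzy classifier."""
          lab = color.rgb2lab(rgb)
          return lab[..., 0].mean(), lab[..., 1].mean()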

  16. Mixed deep learning and natural language processing method for fake-food image recognition and standardization to help automated dietary assessment.

    PubMed

    Mezgec, Simon; Eftimov, Tome; Bucher, Tamara; Koroušić Seljak, Barbara

    2018-04-06

    The present study tested the combination of an established and a validated food-choice research method (the 'fake food buffet') with a new food-matching technology to automate the data collection and analysis. The methodology combines fake-food image recognition using deep learning and food matching and standardization based on natural language processing. The former is specific because it uses a single deep learning network to perform both the segmentation and the classification at the pixel level of the image. To assess its performance, measures based on the standard pixel accuracy and Intersection over Union were applied. Food matching firstly describes each of the recognized food items in the image and then matches the food items with their compositional data, considering both their food names and their descriptors. The final accuracy of the deep learning model trained on fake-food images acquired by 124 study participants and providing fifty-five food classes was 92.18%, while the food matching was performed with a classification accuracy of 93%. The present findings are a step towards automating dietary assessment and food-choice research. The methodology outperforms other approaches in pixel accuracy, and since it is the first automatic solution for recognizing the images of fake foods, the results could be used as a baseline for possible future studies. As the approach enables a semi-automatic description of recognized food items (e.g. with respect to FoodEx2), these can be linked to any food composition database that applies the same classification and description system.
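
    The two evaluation measures named above, in a minimal numpy form:

      import numpy as np

      def pixel_accuracy(pred, truth):
          """Fraction of pixels whose predicted class matches the truth."""
          return (pred == truth).mean()

      def mean_iou(pred, truth, n_classes):
          """Intersection over Union, averaged over classes that occur."""
          ious = []
          for c in range(n_classes):
              inter = np.logical_and(pred == c, truth == c).sum()
              union = np.logical_or(pred == c, truth == c).sum()
              if union:
                  ious.append(inter / union)
          return float(np.mean(ious))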

  17. Challenges for data storage in medical imaging research.

    PubMed

    Langer, Steve G

    2011-04-01

    Researchers in medical imaging face multiple challenges in storing, indexing, maintaining the viability of, and sharing their data. Addressing all these concerns requires a constellation of tools, but not all of them need to be local to the site. In particular, the data storage challenges faced by researchers can begin to require professional information technology skills. With limited human resources and funds, the medical imaging researcher may be better served by an outsourcing strategy for some management aspects. This paper outlines an approach to managing the main objectives faced by medical imaging scientists whose work includes processing and data mining of non-standard file formats, and relating those files to their DICOM-standard descendants. The capacity of the approach scales as the researcher's needs grow by leveraging the on-demand provisioning ability of cloud computing.

  18. Up Periscope! Designing a New Perceptual Metric for Imaging System Performance

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    2016-01-01

    Modern electronic imaging systems include optics, sensors, sampling, noise, processing, compression, transmission and display elements, and are viewed by the human eye. Many of these elements cannot be assessed by traditional imaging system metrics such as the MTF. More complex metrics such as NVTherm do address these elements, but do so largely through parametric adjustment of an MTF-like metric. The parameters are adjusted through subjective testing of human observers identifying specific targets in a set of standard images. We have designed a new metric that is based on a model of human visual pattern classification. In contrast to previous metrics, ours simulates the human observer identifying the standard targets. One application of this metric is to quantify performance of modern electronic periscope systems on submarines.

  19. Automation of image data processing (Polish title: Automatyzacja procesu przetwarzania danych obrazowych)

    NASA Astrophysics Data System (ADS)

    Preuss, R.

    2014-12-01

    This article discusses the current capabilities for automated processing of image data, using Agisoft's PhotoScan software as an example. At present, image data obtained by various registration systems (metric and non-metric cameras) mounted on airplanes, satellites, or, increasingly, UAVs are used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. As a result, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in a local coordinate system or, using initial exterior orientation and measured control points, can georeference the images in an external reference frame. In the case of non-metric imagery, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used to generate dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, a DSM or DTM, and a photorealistic solid model of the object. All of the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software, which divides the steps into dedicated modules. Image processing leading to the final georeferenced products can be fully automated, including sequential execution of the processing steps with predetermined control parameters. The paper presents practical results of fully automatic orthomosaic generation both for images obtained with a metric Vexell camera and for a block of images acquired by a non-metric UAV system.
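
    For flavour, a sketch of driving such a pipeline through PhotoScan's Python scripting interface; the method names follow the 1.x-era API and vary between versions, and the file names are placeholders, so treat this as indicative rather than a definitive script.

      # Indicative only: PhotoScan API details differ between versions.
      import PhotoScan

      doc = PhotoScan.app.document
      chunk = doc.addChunk()
      chunk.addPhotos(["img_001.jpg", "img_002.jpg"])   # placeholder images

      chunk.matchPhotos()        # automatic image matching
      chunk.alignCameras()       # block orientation (self-calibration possible)
      chunk.buildDenseCloud()    # dense point cloud of the object surface
      chunk.buildDem()           # DSM/DTM from the dense cloud
      chunk.buildOrthomosaic()   # final georeferenced orthomosaic
      chunk.exportOrthomosaic("ortho.tif")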

  20. Standardization of left atrial, right ventricular, and right atrial deformation imaging using two-dimensional speckle tracking echocardiography: a consensus document of the EACVI/ASE/Industry Task Force to standardize deformation imaging.

    PubMed

    Badano, Luigi P; Kolias, Theodore J; Muraru, Denisa; Abraham, Theodore P; Aurigemma, Gerard; Edvardsen, Thor; D'Hooge, Jan; Donal, Erwan; Fraser, Alan G; Marwick, Thomas; Mertens, Luc; Popescu, Bogdan A; Sengupta, Partho P; Lancellotti, Patrizio; Thomas, James D; Voigt, Jens-Uwe

    2018-03-27

    The EACVI/ASE/Industry Task Force to standardize deformation imaging prepared this consensus document to standardize definitions and techniques for using two-dimensional (2D) speckle tracking echocardiography (STE) to assess left atrial, right ventricular, and right atrial myocardial deformation. This document is intended for both the technical engineering community and the clinical community at large, to provide guidance on selecting the functional parameters to measure and on how to measure them using 2D STE. This document aims to represent a significant step forward in the collaboration between the scientific societies and industry, since the technical specifications of the software packages designed to post-process echocardiographic datasets have been agreed and shared before their actual development. Hopefully, this will lead to more clinically oriented software packages that are better tailored to clinical needs and will allow industry to save time and resources in their development.

  1. 36 CFR 1238.14 - What are the microfilming requirements for permanent and unscheduled records?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... accordance with ISO 18901 (incorporated by reference, see § 1238.5) and use the processing procedures in ANSI... § 1238.5). (2) Background density of images. Agencies must use the background ISO standard visual diffuse... transmission density. (i) Recommended visual diffuse transmission background densities for images of documents...

  2. 36 CFR 1238.14 - What are the microfilming requirements for permanent and unscheduled records?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... accordance with ISO 18901 (incorporated by reference, see § 1238.5) and use the processing procedures in ANSI... § 1238.5). (2) Background density of images. Agencies must use the background ISO standard visual diffuse... transmission density. (i) Recommended visual diffuse transmission background densities for images of documents...

  3. 36 CFR 1238.14 - What are the microfilming requirements for permanent and unscheduled records?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... accordance with ISO 18901 (incorporated by reference, see § 1238.5) and use the processing procedures in ANSI... § 1238.5). (2) Background density of images. Agencies must use the background ISO standard visual diffuse... transmission density. (i) Recommended visual diffuse transmission background densities for images of documents...

  4. An independent software system for the analysis of dynamic MR images.

    PubMed

    Torheim, G; Lombardi, M; Rinck, P A

    1997-01-01

    A computer system for the manual, semi-automatic, and automatic analysis of dynamic MR images was to be developed on UNIX and personal computer platforms. The system was to offer an integrated and standardized way of performing both image processing and analysis that was independent of the MR unit used. The system consists of modules that are easily adaptable to special needs. Data from MR units or other diagnostic imaging equipment, such as CT, ultrasonography, or nuclear medicine, can be processed through the ACR-NEMA/DICOM standard file formats. A full set of functions is available, among them cine-loop visual analysis and generation of time-intensity curves. Parameters such as cross-correlation coefficients, area under the curve, peak/maximum intensity, wash-in and wash-out slopes, time to peak, and relative signal intensity/contrast enhancement can be calculated. Other parameters can be extracted by fitting functions such as the gamma-variate function. Region-of-interest data and parametric values can easily be exported. The system has been successfully tested in animal and patient examinations.
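
    A sketch of the gamma-variate fitting mentioned above, using scipy; the parameterization is the standard bolus model, and the initial guesses and function names are illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      def gamma_variate(t, A, t0, alpha, beta):
          """C(t) = A * (t - t0)^alpha * exp(-(t - t0)/beta), zero before t0."""
          dt = np.clip(t - t0, 0.0, None)
          return A * dt ** alpha * np.exp(-dt / beta)

      def fit_tic(t, y):
          """Fit a region-of-interest time-intensity curve; peak, time to
          peak, and wash-in/wash-out slopes follow from the parameters."""
          p0 = (y.max(), t[np.argmax(y)] / 2.0, 2.0, 5.0)  # rough guesses
          popt, _ = curve_fit(gamma_variate, t, y, p0=p0, maxfev=5000)
          return popt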

  5. Synchronous high speed multi-point velocity profile measurement by heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Hou, Xueqin; Xiao, Wen; Chen, Zonghui; Qin, Xiaodong; Pan, Feng

    2017-02-01

    This paper presents a synchronous multipoint velocity profile measurement system, which acquires the vibration velocities as well as images of vibrating objects by combining optical heterodyne interferometry and a high-speed CMOS-DVR camera. The high-speed CMOS-DVR camera records a sequence of images of the vibrating object. Then, by extracting and processing multiple pixels at the same time, a digital demodulation technique is implemented to simultaneously acquire the vibrating velocity of the target from the recorded sequences of images. This method is validated with an experiment. A piezoelectric ceramic plate with standard vibration characteristics is used as the vibrating target, which is driven by a standard sinusoidal signal.
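
    The abstract does not detail the demodulation; a common digital approach, sketched here under that assumption, recovers per-pixel velocity from the instantaneous phase of the heterodyne carrier.

      import numpy as np
      from scipy.signal import hilbert

      def pixel_velocity(samples, fs, f_carrier, wavelength):
          """Hilbert-transform demodulation of one pixel's time series:
          unwrap the analytic phase, remove the carrier term, and convert
          the phase rate to velocity via v = (lambda / (4*pi)) * dphi/dt."""
          analytic = hilbert(samples - samples.mean())
          phase = np.unwrap(np.angle(analytic))
          t = np.arange(len(samples)) / fs
          phase -= 2 * np.pi * f_carrier * t        # strip the carrier
          return wavelength / (4 * np.pi) * np.gradient(phase, t)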

  6. The N170 component is sensitive to face-like stimuli: a study of Chinese Peking opera makeup.

    PubMed

    Liu, Tiantian; Mu, Shoukuan; He, Huamin; Zhang, Lingcong; Fan, Cong; Ren, Jie; Zhang, Mingming; He, Weiqi; Luo, Wenbo

    2016-12-01

    The N170 component is considered a neural marker of face-sensitive processing. In the present study, the face-sensitive N170 component of event-related potentials (ERPs) was investigated with a modified oddball paradigm using a natural face (the standard stimulus), human- and animal-like makeup stimuli, scrambled control images that mixed human- and animal-like makeup pieces, and a grey control image. Nineteen participants were instructed to respond within 1000 ms by pressing the 'F' or 'J' key in response to the standard or deviant stimuli, respectively. We simultaneously recorded ERPs, response accuracy, and reaction times. The behavioral results showed that the main effect of stimulus type was significant for reaction time, whereas there were no significant differences in response accuracy among stimulus types. In relation to the ERPs, N170 amplitudes elicited by human-like makeup stimuli, animal-like makeup stimuli, scrambled control images, and the grey control image progressively decreased. A right-hemisphere advantage was observed in the N170 amplitudes for human-like makeup stimuli, animal-like makeup stimuli, and scrambled control images, but not for the grey control image. These results indicate that the N170 component is sensitive to face-like stimuli and reflects configural processing in face recognition.

  7. An Innovative Method for Obtaining Consistent Images and Quantification of Histochemically Stained Specimens

    PubMed Central

    Sedgewick, Gerald J.; Ericson, Marna

    2015-01-01

    Acquiring digital color brightfield microscopy images is an important aspect of biomedical research and the clinical practice of diagnostic pathology. Although the field of digital pathology has seen tremendous advances in whole-slide imaging systems, little effort has been directed toward standardizing color brightfield digital imaging to maintain image-to-image consistency and tonal linearity. Using a single camera and microscope to obtain digital images of three stains, we show that microscope and camera systems inherently produce image-to-image variation. Moreover, we demonstrate that post-processing with a widely used raster graphics editor does not completely correct for session-to-session inconsistency. We introduce a reliable method for creating consistent images with a hardware/software solution (ChromaCal™; Datacolor Inc., NJ) along with its features for color standardization, preservation of linear tonal levels, automated white balancing, and automated brightness adjustment to consistent levels. The resulting image consistency will also streamline mean density and morphometry measurements, as images are easily segmented and single thresholds can be used. We suggest that this is a superior method for color brightfield imaging, which can be used for quantification and can be readily incorporated into workflows. PMID:25575568

  8. TH-E-17A-07: Improved Cine Four-Dimensional Computed Tomography (4D CT) Acquisition and Processing Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castillo, S; Castillo, R; Castillo, E

    2014-06-15

    Purpose: Artifacts arising from the 4D CT acquisition and post-processing methods add systematic uncertainty to the treatment planning process. We propose an alternate cine 4D CT acquisition and post-processing method to consistently reduce artifacts, and explore patient parameters indicative of image quality. Methods: In an IRB-approved protocol, 18 patients with primary thoracic malignancies received a standard cine 4D CT acquisition followed by an oversampling 4D CT that doubled the number of images acquired. A second cohort of 10 patients received the clinical 4D CT plus 3 oversampling scans to assess intra-fraction reproducibility. The clinical acquisitions were processed by the standard phase sorting method. The oversampling acquisitions were processed using Dijkstra's algorithm to optimize an artifact metric over the available image data. Image quality was evaluated with a one-way mixed ANOVA model using a correlation-based artifact metric calculated from the final 4D CT image sets. Spearman correlations and a linear mixed model tested the association between breathing parameters, patient characteristics, and image quality. Results: The oversampling 4D CT scans reduced artifact presence significantly, by 27% and 28% for the first and second cohorts, respectively. In cohort 2, the inter-replicate deviation for the oversampling method was within approximately 13% of the cross-scan average at the 0.05 significance level. Artifact presence for both the clinical and oversampling methods was significantly correlated with breathing period (ρ=0.407, p<0.032 clinical; ρ=0.296, p<0.041 oversampling). Artifact presence in the oversampling method was significantly correlated with the amount of data acquired (ρ=-0.335, p<0.02), indicating decreased artifact presence with increased breathing cycles per scan location. Conclusion: The 4D CT oversampling acquisition with optimized sorting reduced artifact presence significantly and reproducibly compared with the phase-sorted clinical acquisition.
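
    The following is an illustrative sketch of a correlation-based artifact metric of the kind described (not the authors' implementation): low correlation between the boundary slices of adjacent couch positions suggests a stitching artifact, and a sorting algorithm can minimize this score over candidate image assignments.

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation between two 2D slices."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float((a * b).mean())

    def artifact_score(volume, boundaries):
        """Mean (1 - NCC) over the slice pairs straddling each
        couch-position boundary; higher values indicate more artifact."""
        return float(np.mean([1.0 - ncc(volume[i], volume[i + 1])
                              for i in boundaries]))

    # volume: (n_slices, ny, nx) array for one breathing phase;
    # boundaries: index of the last slice in each couch position.
    ```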

  9. Automated aerial image based CD metrology initiated by pattern marking with photomask layout data

    NASA Astrophysics Data System (ADS)

    Davis, Grant; Choi, Sun Young; Jung, Eui Hee; Seyfarth, Arne; van Doornmalen, Hans; Poortinga, Eric

    2007-05-01

    The photomask is a critical element in the lithographic image transfer process from the drawn layout to the final structures on the wafer. The non-linearity of the imaging process and the related MEEF impose a tight control requirement on the photomask critical dimensions. Critical dimensions can be measured in aerial images with hardware emulation, a more recent complement to the standard scanning electron microscope measurement of wafers and photomasks. Aerial image measurement includes non-linear, 3-dimensional, and materials effects on imaging that cannot be observed directly by SEM measurement of the mask, while excluding the processing effects of printing and etching on the wafer. This presents a unique contribution to the difficult process control and modeling tasks in mask making. In the past, aerial image measurements have been used mainly to characterize the printability of mask repair sites. Development of photomask CD characterization with the AIMS™ tool was motivated by the benefit of MEEF sensitivity and the shorter feedback loop compared to wafer exposures. This paper describes a new application that includes: an improved interface for the selection of meaningful locations using the photomask and design layout data with the Calibre™ Metrology Interface, an automated recipe generation process, an automated measurement process, and automated analysis and result reporting on a Carl Zeiss AIMS™ system.

  10. Evaluation of width and width uniformity of near-field electrospinning printed micro and sub-micrometer lines based on optical image processing

    NASA Astrophysics Data System (ADS)

    Zhao, Libo; Xia, Yong; Hebibul, Rahman; Wang, Jiuhong; Zhou, Xiangyang; Hu, Yingjie; Li, Zhikang; Luo, Guoxi; Zhao, Yulong; Jiang, Zhuangde

    2018-03-01

    This paper presents an experimental study using image processing to investigate the width and width uniformity of sub-micrometer polyethylene oxide (PEO) lines fabricated by the near-field electrospinning (NFES) technique. An adaptive thresholding method was developed to determine the optimal gray values to accurately extract the profiles of printed lines from the original optical images, and its feasibility was demonstrated. The proposed thresholding method takes advantage of the statistical properties of the image and eliminates halo-induced errors. The triangular method and the relative standard deviation (RSD) were introduced to calculate line width and width uniformity, respectively. Based on these image processing methods, the effects of process parameters, including substrate speed (v), applied voltage (U), nozzle-to-collector distance (H), and syringe pump flow rate (Q), on the width and width uniformity of printed lines were discussed. The results are helpful for promoting the NFES technique for fabricating high-resolution micro- and sub-micrometer lines, and for optical image processing at the sub-micrometer level.
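
    As a sketch of the measurement chain (threshold, per-row width, RSD), the following hypothetical Python fragment assumes a dark line on a bright background and an illustrative threshold choice; the paper's adaptive method is more sophisticated.

    ```python
    import numpy as np

    def line_width_stats(image, pixel_um, threshold=None):
        """Mean width (um) and RSD (%) of a roughly vertical printed line."""
        if threshold is None:
            # crude stand-in for the adaptive gray-value selection
            threshold = 0.5 * (float(image.min()) + float(image.max()))
        mask = image < threshold              # dark line on bright background
        widths = mask.sum(axis=1) * pixel_um  # width per image row
        widths = widths[widths > 0]           # skip rows without line pixels
        mean_w = widths.mean()
        rsd = 100.0 * widths.std(ddof=1) / mean_w
        return mean_w, rsd
    ```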

  11. Quantitative fluorescence microscopy and image deconvolution.

    PubMed

    Swedlow, Jason R

    2013-01-01

    Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. There are two major types of deconvolution approaches: deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Proper use of these methods demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors. The introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response against known standards. Any image-processing algorithms used before quantitative analysis should preserve the relative signal levels in different parts of the image. Copyright © 1998 Elsevier Inc. All rights reserved.
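
    For the restoration class of algorithms, a minimal runnable sketch using the Richardson-Lucy implementation in scikit-image is shown below; the Gaussian PSF and test image are stand-ins for a measured microscope point-spread function and specimen.

    ```python
    import numpy as np
    from scipy.signal import convolve2d
    from skimage import data, restoration

    image = data.camera() / 255.0                  # stand-in specimen
    x, y = np.mgrid[-3:4, -3:4]
    psf = np.exp(-(x**2 + y**2) / 2.0)
    psf /= psf.sum()                               # normalized PSF

    blurred = convolve2d(image, psf, mode="same")  # simulate the optics
    restored = restoration.richardson_lucy(blurred, psf, 30)  # 30 iterations
    ```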

  12. The OSHA standard setting process: role of the occupational health nurse.

    PubMed

    Klinger, C S; Jones, M L

    1994-08-01

    1. Occupational health nurses are the health professionals most often involved with the worker who suffers as a result of ineffective or non-existent safety and health standards. 2. Occupational health nurses are familiar with health and safety standards, but may not understand or participate in the rulemaking process used to develop them. 3. Knowing the eight basic steps of rulemaking and actively participating in the process empowers occupational health nurses to influence national policy decisions affecting the safety and health of millions of workers. 4. By actively participating in rulemaking activities, occupational health nurses also improve the quality of occupational health nursing practice and enhance the image of the nursing profession.

  13. MEMS scanning micromirror for optical coherence tomography.

    PubMed

    Strathman, Matthew; Liu, Yunbo; Keeler, Ethan G; Song, Mingli; Baran, Utku; Xi, Jiefeng; Sun, Ming-Ting; Wang, Ruikang; Li, Xingde; Lin, Lih Y

    2015-01-01

    This paper describes an endoscopic-inspired imaging system employing a micro-electromechanical system (MEMS) micromirror scanner to achieve beam scanning for optical coherence tomography (OCT) imaging. Miniaturization of a scanning mirror using MEMS technology can allow a fully functional imaging probe to be contained in a package sufficiently small for utilization in a working channel of a standard gastroesophageal endoscope. This work employs advanced image processing techniques to enhance the images acquired using the MEMS scanner to correct non-idealities in mirror performance. The experimental results demonstrate the effectiveness of the proposed technique.

  14. MEMS scanning micromirror for optical coherence tomography

    PubMed Central

    Strathman, Matthew; Liu, Yunbo; Keeler, Ethan G.; Song, Mingli; Baran, Utku; Xi, Jiefeng; Sun, Ming-Ting; Wang, Ruikang; Li, Xingde; Lin, Lih Y.

    2014-01-01

    This paper describes an endoscopic-inspired imaging system employing a micro-electromechanical system (MEMS) micromirror scanner to achieve beam scanning for optical coherence tomography (OCT) imaging. Miniaturization of a scanning mirror using MEMS technology can allow a fully functional imaging probe to be contained in a package sufficiently small for utilization in a working channel of a standard gastroesophageal endoscope. This work employs advanced image processing techniques to enhance the images acquired using the MEMS scanner to correct non-idealities in mirror performance. The experimental results demonstrate the effectiveness of the proposed technique. PMID:25657887

  15. CT and MR Protocol Standardization Across a Large Health System: Providing a Consistent Radiologist, Patient, and Referring Provider Experience.

    PubMed

    Sachs, Peter B; Hunt, Kelly; Mansoubi, Fabien; Borgstede, James

    2017-02-01

    Building and maintaining a comprehensive yet simple set of standardized protocols for cross-sectional imaging can be a daunting task. A single department may have difficulty preventing "protocol creep," which almost inevitably occurs when an organized "playbook" of protocols does not exist and individual radiologists and technologists alter protocols at will and on a case-by-case basis. When multiple departments or groups function in a large health system, the lack of uniformity of protocols can increase exponentially. In 2012, the University of Colorado Hospital formed a large health system (UCHealth) and became a 5-hospital provider network. CT and MR imaging studies are conducted at multiple locations by different radiology groups. To facilitate consistency in the ordering, acquisition, and appearance of a given study, regardless of location, we minimized the number of protocols across all scanners and sites of practice with a clinical indication-driven protocol selection and standardization process. Here we review the steps used to perform this process improvement task and ensure its stability over time. Actions included creation of a standardized protocol template, which allowed for changes in the electronic storage and management of protocols, design of a change request form, and formation of a governance structure. We used rapid improvement events (1 day for CT, 2 days for MR) and reduced 248 CT protocols to 97 standardized protocols and 168 MR protocols to 66. Additional steps are underway to further standardize the output and reporting of imaging interpretation. This will result in an improved, consistent radiologist, patient, and provider experience across the system.

  16. Digital radiography: spatial and contrast resolution

    NASA Astrophysics Data System (ADS)

    Bjorkholm, Paul; Annis, M.; Frederick, E.; Stein, J.; Swift, R.

    1981-07-01

    The addition of digital image collection and storage to standard and newly developed x-ray imaging techniques has allowed spectacular improvements in some diagnostic procedures, and there is no reason to expect that developments in this area are complete. No matter what further developments occur in this field, all the techniques will share a common element: digital image storage and processing. This common element alone determines some of the important imaging characteristics. These will be discussed using one system, the Medical MICRODOSE System, as an example.

  17. [Perception of odor quality by Free Image-Association Test].

    PubMed

    Ueno, Y

    1992-10-01

    A method was devised for evaluating odor quality. Subjects were requested to freely describe the images elicited by smelling odors. This test was named the "Free Image-Association Test" (FIT). The test was applied to 20 flavors of various foods, five odors from the standards of the T&T olfactometer (the Japanese standard olfactory test), butter made from yak milk, and incense from Lamaist temples. The words used to express the imagery were analyzed by multidimensional scaling and cluster analysis. Seven clusters of odors were obtained. The features of these clusters were quite similar to those of the primary odors suggested by previous studies. However, the clustering of odors cannot be explained by the primary-odor theory, but rather by the information-processing theory originally proposed by Miller (1956). These results support the usefulness of the Free Image-Association Test for investigating odor perception based on the images associated with odors.

  18. Medical Image Processing Server applied to Quality Control of Nuclear Medicine.

    NASA Astrophysics Data System (ADS)

    Vergara, C.; Graffigna, J. P.; Marino, E.; Omati, S.; Holleywell, P.

    2016-04-01

    This paper is framed within the area of medical image processing and presents the installation, configuration, and implementation of a medical image processing server (MIPS) at the Fundación Escuela de Medicina Nuclear (FUESMEN) in Mendoza, Argentina. The system was developed at the Gabinete de Tecnologia Médica (GA.TE.ME), Facultad de Ingeniería, Universidad Nacional de San Juan. MIPS is a software system that, using the DICOM standard, receives medical imaging studies from different modalities or viewing stations, executes algorithms, and returns the results to other devices. To achieve these objectives, preliminary tests were conducted in the laboratory, and the tools were then installed remotely in the clinical environment. Once suitable algorithms were defined, the appropriate protocols for setting them up and using them in the different services were established. Finally, the focus is on the implementation and training provided at FUESMEN, using nuclear medicine quality control processes. Implementation results are presented in this work.
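
    As a hedged sketch of the receive-process-return pattern the abstract describes (not the FUESMEN implementation), a minimal DICOM storage service can be written with pynetdicom; the AE title, port, and quality-control step are illustrative assumptions.

    ```python
    from pynetdicom import AE, evt, AllStoragePresentationContexts

    def handle_store(event):
        """Receive a DICOM object, run a placeholder QC step, store it."""
        ds = event.dataset
        ds.file_meta = event.file_meta
        # placeholder for a real quality-control algorithm
        print(f"received {ds.SOPInstanceUID}, modality={ds.get('Modality')}")
        ds.save_as(f"{ds.SOPInstanceUID}.dcm", write_like_original=False)
        return 0x0000  # DICOM success status

    ae = AE(ae_title="MIPS_SCP")                   # hypothetical AE title
    ae.supported_contexts = AllStoragePresentationContexts
    ae.start_server(("0.0.0.0", 11112), block=True,
                    evt_handlers=[(evt.EVT_C_STORE, handle_store)])
    ```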

  19. The development of an imaging informatics-based multi-institutional platform to support sports performance and injury prevention in track and field

    NASA Astrophysics Data System (ADS)

    Liu, Joseph; Wang, Ximing; Verma, Sneha; McNitt-Gray, Jill; Liu, Brent

    2018-03-01

    The main goal of sports science and performance enhancement is to collect video and image data, process them, and quantify the results, giving insight to help athletes improve technique. For the long jump in track and field, processed video with force vector overlays and force calculations allows coaches to view specific stages of the hop, step, and jump, and to identify how each stage can be improved to increase jump distance. The outputs also provide insight into how athletes can better maneuver to prevent injury. Currently, each data collection site collects and stores data with its own methods: there is no standard for data collection, formats, or storage, and video files and quantified results are stored in different formats, structures, and locations such as Dropbox and hard drives. Using imaging informatics-based principles, we can develop a platform for multiple institutions that promotes the standardization of sports performance data. In addition, the system provides user authentication and privacy as in clinical trials, with specific user access rights. Long jump data collected from different field sites are standardized into specified formats before database storage. Quantified results from image-processing algorithms are stored similarly to CAD algorithm results. The system streamlines the current sports performance data workflow and provides a user interface for athletes and coaches to view results of individual collections and also longitudinally across collections. This streamlined platform and interface is a tool for coaches and athletes to easily access and review data to improve sports performance and prevent injury.

  20. Correlation plenoptic imaging

    NASA Astrophysics Data System (ADS)

    Pepe, Francesco V.; Di Lena, Francesco; Garuccio, Augusto; D'Angelo, Milena

    2017-06-01

    Plenoptic Imaging (PI) is a novel optical technique for achieving three-dimensional imaging in a single shot. In conventional PI, a microlens array is inserted in the native image plane and the sensor array is moved behind the microlenses. On the one hand, the microlenses act as imaging pixels to reproduce the image of the scene; on the other hand, each microlens reproduces on the sensor array an image of the camera lens, thus providing the angular information associated with each imaging pixel. The recorded propagation direction is exploited, in post-processing, to computationally retrace the geometrical light path, enabling the refocusing of different planes within the scene, the extension of the depth of field of the acquired image, and the 3D reconstruction of the scene. However, a trade-off between spatial and angular resolution is built into the standard plenoptic imaging process. We demonstrate that the second-order spatio-temporal correlation properties of light can be exploited to overcome this fundamental limitation. Using two correlated beams, from either a chaotic or an entangled-photon source, we can perform imaging in one arm and simultaneously obtain the angular information in the other arm. In fact, we show that the second-order correlation function possesses plenoptic imaging properties (i.e., it encodes both spatial and angular information) and is thus characterized by a key refocusing and 3D imaging capability. From a fundamental standpoint, the plenoptic application is the first situation where the counterintuitive properties of correlated systems are effectively used to beat intrinsic limits of standard imaging systems. From a practical standpoint, our protocol can dramatically enhance the potential of PI, paving the way towards its promising applications.
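
    In the usual ghost-imaging notation, the measured quantity can be sketched as the correlation of intensity fluctuations between the spatial arm and the angular arm; the notation below is illustrative and not necessarily the authors' exact formulation.

    ```latex
    \[
      \Gamma(\boldsymbol{\rho}_a,\boldsymbol{\rho}_b)
        = \langle \Delta I_a(\boldsymbol{\rho}_a)\,
                  \Delta I_b(\boldsymbol{\rho}_b) \rangle ,
      \qquad
      \Delta I_j = I_j - \langle I_j \rangle ,
    \]
    ```

    Here the coordinate on the imaging (spatial) sensor is correlated with the coordinate on the angular sensor; refocusing then amounts to integrating this correlation function along the geometrical lines that retrace the light path between the two planes.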

  1. A 32 x 32 capacitive micromachined ultrasonic transducer array manufactured in standard CMOS.

    PubMed

    Lemmerhirt, David F; Cheng, Xiaoyang; White, Robert; Rich, Collin A; Zhang, Man; Fowlkes, J Brian; Kripfgans, Oliver D

    2012-07-01

    As ultrasound imagers become increasingly portable and lower cost, breakthroughs in transducer technology will be needed to provide high-resolution, real-time 3-D imaging while maintaining the affordability needed for portable systems. This paper presents a 32 x 32 ultrasound array prototype, manufactured using a CMUT-in-CMOS approach whereby ultrasonic transducer elements and readout circuits are integrated on a single chip using a standard integrated circuit manufacturing process in a commercial CMOS foundry. Only blanket wet-etch and sealing steps are added to complete the MEMS devices after the CMOS process. This process typically yields better than 99% working elements per array, with less than ±1.5 dB variation in receive sensitivity among the 1024 individually addressable elements. The CMUT pulse-echo frequency response is typically centered at 2.1 MHz with a -6 dB fractional bandwidth of 60%, and elements are arranged on a 250 μm hexagonal grid (less than half-wavelength pitch). Multiplexers and CMOS buffers within the array are used to make on-chip routing manageable, reduce the number of physical output leads, and drive the transducer cable. The array has been interfaced to a commercial imager as well as a set of custom transmit and receive electronics, and volumetric images of nylon fishing line targets have been produced.

  2. Testbed Experiment for SPIDER: A Photonic Integrated Circuit-based Interferometric imaging system

    NASA Astrophysics Data System (ADS)

    Badham, K.; Duncan, A.; Kendrick, R. L.; Wuchenich, D.; Ogden, C.; Chriqui, G.; Thurman, S. T.; Su, T.; Lai, W.; Chun, J.; Li, S.; Liu, G.; Yoo, S. J. B.

    The Lockheed Martin Advanced Technology Center (LM ATC) and the University of California at Davis (UC Davis) are developing an electro-optical (EO) imaging sensor called SPIDER (Segmented Planar Imaging Detector for Electro-optical Reconnaissance) that seeks to provide a 10x to 100x size, weight, and power (SWaP) reduction alternative to the traditional bulky optical telescope and focal-plane detector array. The substantial reductions in SWaP would reduce cost and/or provide higher resolution by enabling a larger-aperture imager in a constrained volume. Our SPIDER imager replaces the traditional optical telescope and digital focal plane detector array with a densely packed interferometer array based on emerging photonic integrated circuit (PIC) technologies that samples the object being imaged in the Fourier domain (i.e., spatial frequency domain), and then reconstructs an image. Our approach replaces the large optics and structures required by a conventional telescope with PICs that are accommodated by standard lithographic fabrication techniques (e.g., complementary metal-oxide-semiconductor (CMOS) fabrication). The standard EO payload integration and test process that involves precision alignment and test of optical components to form a diffraction limited telescope is, therefore, replaced by in-process integration and test as part of the PIC fabrication, which substantially reduces associated schedule and cost. In this paper we describe the photonic integrated circuit design and the testbed used to create the first images of extended scenes. We summarize the image reconstruction steps and present the final images. We also describe our next generation PIC design for a larger (16x area, 4x field of view) image.

  3. LANDSAT 2 world standard catalog, 1-31 January 1979

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The World Standard Catalog lists imagery acquired by LANDSAT 2 which was processed and input to the data files during the referenced period. Information such as cloud cover and image quality is given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  4. LANDSAT 2 world standard catalog, 1-30 November 1978

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The World Standard Catalog lists imagery acquired by LANDSAT 2 which was processed and input to the data files during the referenced period. Information such as cloud cover and image quality is given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  5. LANDSAT 2 world standard catalog, 1-31 October 1978

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The World Standard Catalog lists imagery acquired by LANDSAT 2 which was processed and input to the data files during the referenced period. Information such as cloud cover and image quality is given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  6. LANDSAT 2 world standard catalog, 1 Jan. - 30 Apr. 1978

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The World Standard Catalog lists imagery acquired by LANDSAT 2 which has been processed and input to the data files during the referenced months. Data, such as cloud cover and image quality, are given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  7. 3D visualization of Thoraco-Lumbar Spinal Lesions in German Shepherd Dog

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azpiroz, J.; Krafft, J.; Cadena, M.

    2006-09-08

    Computed tomography (CT) has been found to be an excellent imaging modality due to its sensitivity in characterizing the morphology of the spine in dogs. This technique is considered particularly helpful for diagnosing spinal cord atrophy and spinal stenosis. The three-dimensional visualization of organs and bones can significantly improve the diagnosis of certain diseases in dogs. CT images of a German shepherd dog's spinal cord were acquired to generate stacks and digitally process them into a volume image. All imaging experiments used standard clinical protocols on a clinical CT scanner. The three-dimensional visualization allowed us to observe anatomical structures that otherwise cannot be observed in two-dimensional images. The combination of an imaging modality like CT with image processing techniques can be a powerful tool for the diagnosis of a number of animal diseases.

  8. 3D visualization of Thoraco-Lumbar Spinal Lesions in German Shepherd Dog

    NASA Astrophysics Data System (ADS)

    Azpiroz, J.; Krafft, J.; Cadena, M.; Rodríguez, A. O.

    2006-09-01

    Computed tomography (CT) has been found to be an excellent imaging modality due to its sensitivity in characterizing the morphology of the spine in dogs. This technique is considered particularly helpful for diagnosing spinal cord atrophy and spinal stenosis. The three-dimensional visualization of organs and bones can significantly improve the diagnosis of certain diseases in dogs. CT images of a German shepherd dog's spinal cord were acquired to generate stacks and digitally process them into a volume image. All imaging experiments used standard clinical protocols on a clinical CT scanner. The three-dimensional visualization allowed us to observe anatomical structures that otherwise cannot be observed in two-dimensional images. The combination of an imaging modality like CT with image processing techniques can be a powerful tool for the diagnosis of a number of animal diseases.

  9. Characteristics of Kodak Insight, an F-speed intraoral film.

    PubMed

    Ludlow, J B; Platin, E; Mol, A

    2001-01-01

    This study reports film speed, contrast, exposure latitude, resolution, and response to processing solution depletion of Kodak Insight intraoral film. Densitometric curves were generated by using International Standards Organization protocol. Additional curves were generated for Ultra-speed, Ektaspeed Plus, and Insight films developed in progressively depleted processing solutions. Eight observers viewed images of a resolution test tool for maximum resolution assessment. Images of an aluminum step-wedge were reviewed to determine useful exposure latitude. Insight's sensitivity in fresh automatic processor solutions places it in the F-speed group. An average gradient of 1.8 was found with all film types. Insight provided 93% of the useful exposure latitude of Ektaspeed Plus film. Insight maintained contrast in progressively depleted processing solutions. Like Ektaspeed Plus, Insight was able to resolve at least 20 line-pairs per millimeter. Under International Standards Organization conditions, Insight required only 77% of the exposure of Ektaspeed Plus film. Insight film provided stable contrast in depleted processing solutions.

  10. Delay-Encoded Harmonic Imaging (DE-HI) in Multiplane-Wave Compounding.

    PubMed

    Gong, Ping; Song, Pengfei; Chen, Shigao

    2017-04-01

    The development of ultrafast ultrasound imaging brings great opportunities to improve imaging technologies such as shear wave elastography and ultrafast Doppler imaging. In ultrafast imaging, several tilted plane or diverging wave images are coherently combined to form a compounded image, leading to trade-offs among image signal-to-noise ratio (SNR), resolution, and post-compounding frame rate. Multiplane wave (MW) imaging was proposed to resolve this trade-off by encoding multiple plane waves with a Hadamard matrix during one transmission event (i.e., one pulse-echo event), improving image SNR without sacrificing resolution or frame rate. However, it suffers from stronger reverberation artifacts in B-mode images than standard plane wave compounding, due to the longer transmitted pulses. If harmonic imaging can be combined with MW imaging, the reverberation artifacts and other clutter noises, such as sidelobes and multipath scattering clutter, should be suppressed. The challenge, however, is that the Hadamard codes used in MW imaging cannot encode the 2nd harmonic component by inverting the pulse polarity. In this paper, we propose a delay-encoded harmonic imaging (DE-HI) technique that encodes the 2nd harmonic with a one-quarter-period delay calculated at the transmit center frequency, rather than reversing the pulse polarity during multiplane wave emissions. Received DE-HI signals can then be decoded in the frequency domain to recover the signals as in single plane wave emissions, but with the SNR improvement concentrated at the 2nd harmonic component instead of the fundamental. DE-HI was tested experimentally with a point target, a B-mode imaging phantom, and in-vivo human liver imaging. Improvements in image contrast-to-noise ratio (CNR), spatial resolution, and lesion signal-to-noise ratio (lSNR) were achieved compared with standard plane wave compounding, MW imaging, and standard harmonic imaging (maximal improvements of 116% in CNR and 115% in lSNR compared with standard HI at around 55 mm depth in the B-mode phantom study). The potentially high frame rate and the stability of the DE-HI encoding and decoding processes were also demonstrated, making DE-HI promising for a wide spectrum of imaging applications.
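
    To illustrate the Hadamard encode/decode idea that MW imaging builds on (shown here in the linear regime; the quarter-period delay encoding above is its harmonic-domain refinement), here is a toy sketch with placeholder sizes and data.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    n = 4                             # number of plane-wave angles
    H = hadamard(n)                   # +1/-1 polarity encoding matrix
    s = np.random.randn(n, 1024)      # per-angle echo signals (unknown)

    encoded = H @ s                   # each transmit fires all n angles
    decoded = (H.T @ encoded) / n     # recover single-angle echoes
    assert np.allclose(decoded, s)    # SNR gain: n transmits averaged
                                      # into each recovered angle
    ```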

  11. MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING

    PubMed Central

    ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN

    2013-01-01

    In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963

  12. A service protocol for post-processing of medical images on the mobile device

    NASA Astrophysics Data System (ADS)

    He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian

    2014-03-01

    With growing computing capability and display size, the mobile device has become a tool to help clinicians view patient information and medical images anywhere and anytime. Transferring medical images with large data sizes from a picture archiving and communication system to a mobile client is slow and cumbersome, since the wireless network is unstable and bandwidth-limited. Moreover, limited by computing capability, memory, and battery endurance, it is hard to provide a satisfactory quality of experience for radiologists performing complex post-processing of medical images on the mobile device, such as real-time direct interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. To allow mobile devices with different platforms to access post-processing of medical images, the protocol is described in the Extensible Markup Language and contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g., window leveling, pixel value readout), and 3D post-processing (e.g., maximum intensity projection, multi-planar reconstruction, curved planar reformation, and direct volume rendering). An instance was then implemented to verify the protocol; it allows mobile devices to access post-processing services on the render server via a client application or a web page.
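
    As a purely hypothetical illustration of what a request under such an XML-described protocol might look like (element names are invented, not taken from the paper), a message can be assembled as follows:

    ```python
    import xml.etree.ElementTree as ET

    req = ET.Element("renderRequest", version="1.0")      # invented names
    auth = ET.SubElement(req, "authentication")
    ET.SubElement(auth, "user").text = "radiologist01"
    ET.SubElement(auth, "token").text = "session-token"   # placeholder
    post3d = ET.SubElement(req, "postProcessing", type="3D")
    ET.SubElement(post3d, "operation").text = "MIP"       # max. intensity projection
    ET.SubElement(post3d, "seriesUID").text = "1.2.840.10008.example"
    print(ET.tostring(req, encoding="unicode"))
    ```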

  13. Local contextual processing of abstract and meaningful real-life images in professional athletes.

    PubMed

    Fogelson, Noa; Fernandez-Del-Olmo, Miguel; Acero, Rafael Martín

    2012-05-01

    We investigated the effect of abstract versus real-life meaningful images from sports on local contextual processing in two groups of professional athletes. Local context was defined as the occurrence of a short predictive series of stimuli occurring before delivery of a target event. EEG was recorded in 10 professional basketball players and 9 professional athletes of individual sports during three sessions. In each session, a different set of visual stimuli was presented: triangles facing left, up, right, or down; four images of a basketball player throwing a ball; and four images of a baseball player pitching a baseball. Stimuli consisted of 15% targets and 85% of equal numbers of three types of standards. Recording blocks consisted of targets preceded by randomized sequences of standards and by sequences including a predictive sequence signaling the occurrence of a subsequent target event. Subjects pressed a button in response to targets. In all three sessions, reaction times and peak P3b latencies were shorter for predicted targets compared with random targets, the last most informative stimulus of the predictive sequence induced a robust P3b, and N2 amplitude was larger for random targets compared with predicted targets. P3b and N2 peak amplitudes were larger in the professional basketball group in comparison with professional athletes of individual sports, across the three sessions. The findings of this study suggest that local contextual information is processed similarly for abstract and for meaningful images and that professional basketball players seem to allocate more attentional resources to the processing of these visual stimuli.

  14. Increasing the accuracy and scalability of the Immunofluorescence Assay for Epstein Barr Virus by inferring continuous titers from a single sample dilution.

    PubMed

    Goh, Sherry Meow Peng; Swaminathan, Muthukaruppan; Lai, Julian U-Ming; Anwar, Azlinda; Chan, Soh Ha; Cheong, Ian

    2017-01-01

    High Epstein Barr Virus (EBV) titers detected by the indirect Immunofluorescence Assay (IFA) are a reliable predictor of Nasopharyngeal Carcinoma (NPC). Despite being the gold standard for serological detection of NPC, the IFA is limited by scaling bottlenecks: five serial dilutions of each patient sample must be prepared and visually matched by an evaluator to one of five discrete titers. Here, we describe a simple method for inferring continuous EBV titers from IFA images acquired from NPC-positive patient sera using only a single sample dilution. In the first part of our study, two blinded evaluators used a set of reference titer standards to perform independent re-evaluations of historical samples with known titers. Besides exhibiting high inter-evaluator agreement, both evaluators were also in high concordance with the historical titers, validating the accuracy of the reference titer standards. In the second part of the study, the reference titer standards were IFA-processed and assigned an 'EBV Score' using image analysis. A log-linear relationship between titer and EBV Score was observed. This relationship was preserved even when images were acquired and analyzed 3 days post-IFA. We conclude that image analysis of IFA-processed samples can be used to infer a continuous EBV titer with just a single dilution of NPC-positive patient sera. This work opens new possibilities for improving the accuracy and scalability of IFA in the context of clinical screening. Copyright © 2016. Published by Elsevier B.V.
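
    The reported log-linear relationship suggests a simple calibration-and-inference step, sketched below with invented placeholder data (the actual calibration values are in the paper, not reproduced here):

    ```python
    import numpy as np

    # reference standards: known titers and their measured EBV Scores
    ref_titer = np.array([10, 40, 160, 640, 2560])     # assumed dilutions
    ref_score = np.array([0.8, 1.9, 3.1, 4.2, 5.3])    # assumed scores

    # fit log10(titer) = a * score + b
    a, b = np.polyfit(ref_score, np.log10(ref_titer), 1)

    def infer_titer(score):
        """Continuous titer inferred from a single-dilution EBV Score."""
        return 10 ** (a * score + b)

    print(f"score 3.6 -> titer ~{infer_titer(3.6):.0f}")
    ```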

  15. Space images processing methodology for assessment of atmosphere pollution impact on forest-swamp territories

    NASA Astrophysics Data System (ADS)

    Polichtchouk, Yuri; Tokareva, Olga; Bulgakova, Irina V.

    2003-03-01

    Methodological problems of space image processing for assessing the impact of atmospheric pollution on forest ecosystems using geoinformation systems are addressed. The approach to quantitative assessment is based on calculating the relative areas of forest landscapes that fall within atmospheric pollution zones. The landscape structure of forested territories in the southern part of Western Siberia was determined by processing medium-resolution images from the Resource-O spacecraft. Particular attention is given to modeling the atmospheric pollution zones caused by gas flaring at oil fields. Pollution zones were delineated by modeling contaminant dispersal in the atmosphere with standard models. The areas of polluted landscapes were calculated as a function of atmospheric pollution level.

  16. Application-ready expedited MODIS data for operational land surface monitoring of vegetation condition

    USGS Publications Warehouse

    Brown, Jesslyn; Howard, Daniel M.; Wylie, Bruce K.; Friesz, Aaron M.; Ji, Lei; Gacke, Carolyn

    2015-01-01

    Monitoring systems benefit from the high temporal frequency image data collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) system. Because of near-daily global coverage, MODIS data are beneficial to applications that require timely information about vegetation condition related to drought, flooding, or fire danger. Rapid satellite data streams in operational applications have clear benefits for monitoring vegetation, especially when information can be delivered as fast as surface conditions change. An "expedited" processing system called "eMODIS," operated by the U.S. Geological Survey, provides rapid MODIS surface reflectance data to operational applications in less than 24 h, offering tailored, consistently processed information products that complement standard MODIS products. We assessed eMODIS quality and consistency by comparison with standard MODIS data. Only land data with known high quality were analyzed in a central U.S. study area. When compared to standard MODIS (MOD/MYD09Q1), the eMODIS Normalized Difference Vegetation Index (NDVI) maintained a strong, significant relationship with the standard MODIS NDVI, whether from morning (Terra) or afternoon (Aqua) orbits. The Aqua eMODIS data were more prone to noise than the Terra data, likely due to differences in the internal cloud mask used in MOD/MYD09Q1 or in the compositing rules. Post-processing temporal smoothing decreased noise in the eMODIS data.
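
    The NDVI referred to throughout is the standard ratio of near-infrared and red surface reflectance; a minimal example with toy values:

    ```python
    import numpy as np

    red = np.array([[0.05, 0.08], [0.30, 0.12]])   # MODIS band 1, toy data
    nir = np.array([[0.40, 0.35], [0.32, 0.45]])   # MODIS band 2, toy data

    ndvi = (nir - red) / (nir + red)               # ranges from -1 to 1
    print(np.round(ndvi, 2))                       # dense vegetation ~0.6+
    ```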

  17. Normalization of cortical thickness measurements across different T1 magnetic resonance imaging protocols by novel W-Score standardization.

    PubMed

    Chung, Jinyong; Yoo, Kwangsun; Lee, Peter; Kim, Chan Mi; Roh, Jee Hoon; Park, Ji Eun; Kim, Sang Joon; Seo, Sang Won; Shin, Jeong-Hyeon; Seong, Joon-Kyung; Jeong, Yong

    2017-10-01

    The use of different 3D T1-weighted magnetic resonance (T1 MR) imaging protocols induces image incompatibility across multicenter studies, negating the many advantages of multicenter studies. A few methods have been developed to address this problem, but significant image incompatibility still remains. Thus, we developed a novel and convenient method to improve image compatibility. W-score standardization creates quality reference values by using a healthy group to obtain normalized disease values. We developed a protocol-specific w-score standardization to control for the protocol effect, which is applied to each protocol separately. We used three datasets. In dataset 1, brain T1 MR images of normal controls (NC) and patients with Alzheimer's disease (AD) from two centers, acquired with different T1 MR protocols, were used (Protocols 1 and 2, n = 45/group). In dataset 2, data from six subjects, who underwent MRI with the two different protocols (Protocols 1 and 2), were used, with different repetition times, echo times, and slice thicknesses. In dataset 3, T1 MR images from a large number of healthy normal controls (Protocol 1: n = 148; Protocol 2: n = 343) were collected for w-score standardization. The protocol effect and disease effect on subjects' cortical thickness were analyzed before and after the application of protocol-specific w-score standardization. As expected, different protocols resulted in differing cortical thickness measurements in both NC and AD subjects, and different measurements were obtained for the same subject when imaged with different protocols. A multivariate pattern difference between measurements was observed between the protocols, and classification accuracy between the two protocols was nearly 90%. After applying protocol-specific w-score standardization, the differences between the protocols substantially decreased. Most importantly, protocol-specific w-score standardization reduced both univariate and multivariate differences in the images while maintaining the AD disease effect. Compared with conventional regression methods, our method showed the best performance in terms of controlling the protocol effect while preserving disease information. Protocol-specific w-score standardization effectively resolved the concerns of conventional regression methods and showed the best performance for improving the compatibility of a T1 MR post-processed feature, cortical thickness. Copyright © 2017 Elsevier Inc. All rights reserved.
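
    A minimal sketch of the w-score idea, assuming a normative model with age and sex covariates fitted separately within each protocol (the covariates and data layout are illustrative; the paper's model may differ):

    ```python
    import numpy as np

    def fit_norm_model(ct_controls, age, sex):
        """Per-protocol normative fit: thickness ~ intercept + age + sex."""
        X = np.column_stack([np.ones_like(age), age, sex])
        beta, *_ = np.linalg.lstsq(X, ct_controls, rcond=None)
        resid_sd = (ct_controls - X @ beta).std(ddof=X.shape[1])
        return beta, resid_sd

    def w_score(ct, age, sex, beta, resid_sd):
        """(observed - expected) / residual SD of that protocol's controls."""
        expected = beta[0] + beta[1] * age + beta[2] * sex
        return (ct - expected) / resid_sd
    ```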

  18. JPRS Report, Science & Technology, Japan, 27th Aircraft Symposium

    DTIC Science & Technology

    1990-10-29

    screen; the relative attitude is then determined. 2) Video Sensor System: specific patterns (grapple target, etc.) drawn on the target spacecraft, or the... entire target spacecraft, is imaged by camera. Navigation information is obtained by on-board image processing, such as extraction of contours and... standard figure called "grapple target" located in the vicinity of the grapple fixture on the target spacecraft is imaged by camera. Contour lines and

  19. 75 FR 70011 - Guidance for Industry, Mammography Quality Standards Act Inspectors, and Food and Drug...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-16

    ... label to assist that office in processing your request, or fax your request to 301-847-8149. See the.... Clarifying that original or lossless compressed digital image files may be acceptable for record transfer; 3... be acceptable to FDA; 4. Deleting the question and answer dealing with image labeling; 5. Modifying...

  20. Variety and evolution of American endoscopic image management and recording systems.

    PubMed

    Korman, L Y

    1996-03-01

    The rapid evolution of computing technology has altered, and will continue to alter, the practice of gastroenterology and gastrointestinal endoscopy. Development of communication standards for text, images, and security systems will be necessary for medicine to take advantage of high-speed computing and communications. Professional societies can play an important role in guiding the development process.

  1. Semiautomated spleen volumetry with diffusion-weighted MR imaging.

    PubMed

    Lee, Jeongjin; Kim, Kyoung Won; Lee, Ho; Lee, So Jung; Choi, Sanghyun; Jeong, Woo Kyoung; Kye, Heewon; Song, Gi-Won; Hwang, Shin; Lee, Sung-Gyu

    2012-07-01

    In this article, we determined the relative accuracy of semiautomated spleen volumetry with diffusion-weighted (DW) MR images compared to standard manual volumetry with DW-MR or CT images. Semiautomated spleen volumetry using simple thresholding followed by 3D and 2D connected component analysis was performed with DW-MR images. Manual spleen volumetry was performed on DW-MR and CT images. In this study, 35 potential live liver donor candidates were included. Semiautomated volumetry results were highly correlated with manual volumetry results using DW-MR (r = 0.99; P < 0.0001; mean percentage absolute difference, 1.43 ± 0.94) and CT (r = 0.99; P < 0.0001; 1.76 ± 1.07). Mean total processing time for semiautomated volumetry was significantly shorter compared to that of manual volumetry with DW-MR (P < 0.0001) and CT (P < 0.0001). In conclusion, semiautomated spleen volumetry with DW-MR images can be performed rapidly and accurately when compared with standard manual volumetry. Copyright © 2011 Wiley Periodicals, Inc.
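
    A hedged sketch of the pipeline named above (simple thresholding plus 3D connected-component analysis; the threshold value and the 2D refinement step are assumed or omitted):

    ```python
    import numpy as np
    from scipy import ndimage

    def spleen_volume_ml(volume, voxel_mm3, threshold):
        """Threshold + largest-3D-component volumetry of a DW-MR volume."""
        mask = volume > threshold                  # spleen is bright on DWI
        labels, n = ndimage.label(mask)            # 3D connected components
        if n == 0:
            return 0.0
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        spleen = labels == (np.argmax(sizes) + 1)  # keep largest component
        return spleen.sum() * voxel_mm3 / 1000.0   # mm^3 -> mL
    ```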

  2. VQone MATLAB toolbox: A graphical experiment builder for image and video quality evaluations.

    PubMed

    Nuutinen, Mikko; Virtanen, Toni; Rummukainen, Olli; Häkkinen, Jukka

    2016-03-01

    This article presents VQone, a graphical experiment builder, written as a MATLAB toolbox, developed for image and video quality ratings. VQone contains the main elements needed for the subjective image and video quality rating process. This includes building and conducting experiments and data analysis. All functions can be controlled through graphical user interfaces. The experiment builder includes many standardized image and video quality rating methods. Moreover, it enables the creation of new methods or modified versions from standard methods. VQone is distributed free of charge under the terms of the GNU general public license and allows code modifications to be made so that the program's functions can be adjusted according to a user's requirements. VQone is available for download from the project page (http://www.helsinki.fi/psychology/groups/visualcognition/).

  3. Noninvasive enhanced mid-IR imaging of breast cancer development in vivo

    NASA Astrophysics Data System (ADS)

    Case, Jason R.; Young, Madison A.; Dréau, D.; Trammell, Susan R.

    2015-11-01

    Lumpectomy coupled with radiation therapy and/or chemotherapy is commonly used to treat breast cancer patients. We are developing an enhanced thermal IR imaging technique that has the potential to provide real-time imaging to guide tissue excision during a lumpectomy by delineating tumor margins. This enhanced thermal imaging method combines IR imaging (8 to 10 μm) with selective heating of blood (~0.5°C) relative to the surrounding water-rich tissue using LED sources at low powers. Post-acquisition processing of these images highlights temporal changes in temperature and the presence of vascular structures. In this study, fluorescent, standard thermal, and enhanced thermal imaging modalities, as well as physical caliper measurements, were used to monitor breast cancer tumor volumes over a 30-day period in 19 mice implanted with 4T1-RFP tumor cells. Tumor volumes calculated from fluorescent imaging follow an exponential growth curve for the first 22 days of the study; cell necrosis affected the fluorescence-based volume estimates after day 22. The tumor volumes estimated from enhanced thermal imaging, standard thermal imaging, and caliper measurements all show exponential growth over the entire study period. A strong correlation was found between tumor volumes estimated using enhanced thermal imaging and those from fluorescent imaging, standard IR imaging, and caliper measurements, indicating that enhanced thermal imaging tracks tumor growth. Further, the enhanced IR images reveal a corona of bright emission along the edges of the tumor masses associated with the tumor margin. In the future, this IR technique might be used to estimate tumor margins in real time during surgical procedures.

  4. The Commercial Challenges Of Pacs

    NASA Astrophysics Data System (ADS)

    Vanden Brink, John A.

    1984-08-01

    The increasing use of digital imaging techniques creates a need for improved methods of digital processing, communication, and archiving. However, the commercial opportunity depends on the resolution of a number of issues. These include proof that digital processes are more cost effective than present techniques, implementation of information system support in the imaging activity, implementation of industry standards, conversion of analog images to digital formats, definition of clinical needs, the implications of the purchase decision, and technology requirements. In spite of these obstacles, a market is emerging, served by new and existing companies, that may reach $500 million (U.S.) by 1990 for equipment and supplies.

  5. Multiscale Image Processing of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO, and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun, yet we have not improved our analysis techniques. The standard techniques used for the analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or byte-scaled difference images. Features and structures in the images are identified qualitatively by the observer, and little quantitative, objective analysis is done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods are well suited to solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formalize the human ability to view and comprehend phenomena on different scales, so they could be used to quantify the image processing done by the observer's eyes and brain. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO, and TRACE images.
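
    As an illustration of the multiresolution decomposition discussed, the following sketch uses PyWavelets on a stand-in array (a real application would load an EIT, LASCO, or TRACE frame instead):

    ```python
    import numpy as np
    import pywt

    image = np.random.rand(256, 256)               # stand-in solar image
    coeffs = pywt.wavedec2(image, "db2", level=4)  # 4-level 2D wavelet transform

    # coeffs[0] is the coarsest approximation; coeffs[1:] hold per-level
    # (horizontal, vertical, diagonal) detail bands that can define a
    # multiresolution support for feature detection.
    for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        print(f"level {lvl}: detail band shape {cH.shape}")
    ```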

  6. Iterative image-domain decomposition for dual-energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, Tianye; Dong, Xue; Petrongolo, Michael

    2014-04-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
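
    As a rough illustration of the objective described in this record, the sketch below (in Python) minimizes a covariance-weighted data term plus a quadratic smoothness term by plain gradient descent; the per-pixel edge weights, parameter values, and the use of gradient descent instead of the authors' conjugate gradient solver are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def decompose_iterative(x0, cov_inv, edge_w, beta=0.1, step=0.05, n_iter=200):
    """Toy covariance-weighted, smoothness-regularized refinement of a direct
    DECT decomposition.

    x0      : (H, W, 2) basis images from direct matrix inversion (data term)
    cov_inv : (2, 2) inverse of the estimated noise covariance of the basis images
    edge_w  : (H, W) weights in [0, 1]; small values at pre-detected edges
              so that edges are not smoothed away
    """
    x = x0.copy()
    for _ in range(n_iter):
        # data-fidelity gradient: Sigma^{-1} (x - x0), applied per pixel
        grad = np.einsum('ij,hwj->hwi', cov_inv, x - x0)
        # smoothness gradient: sum of differences to the 4 neighbours, edge-weighted
        for m in range(2):
            d = np.zeros_like(x[..., m])
            d[1:, :] += x[1:, :, m] - x[:-1, :, m]
            d[:-1, :] += x[:-1, :, m] - x[1:, :, m]
            d[:, 1:] += x[:, 1:, m] - x[:, :-1, m]
            d[:, :-1] += x[:, :-1, m] - x[:, 1:, m]
            grad[..., m] += beta * edge_w * d
        x -= step * grad
    return x
```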

  7. Exploration of Mars by Mariner 9 - Television sensors and image processing.

    NASA Technical Reports Server (NTRS)

    Cutts, J. A.

    1973-01-01

    Two cameras equipped with selenium sulfur slow scan vidicons were used in the orbital reconnaissance of Mars by the U.S. spacecraft Mariner 9, and the performance characteristics of these devices are presented. Digital image processing techniques have been widely applied in the analysis of images of Mars and its satellites. Photometric and geometric distortion corrections, image detail enhancement and transformation to standard map projection have been routinely employed. More specialized applications included picture differencing, limb profiling, solar lighting corrections, noise removal, line plots and computer mosaics. Information on enhancements as well as important picture geometric information was stored in a master library. Display of the library data in graphic or numerical form was accomplished by a data management computer program.

  8. SEM AutoAnalysis: enhancing photomask and NIL defect disposition and review

    NASA Astrophysics Data System (ADS)

    Schulz, Kristian; Egodage, Kokila; Tabbone, Gilles; Ehrlich, Christian; Garetto, Anthony

    2017-06-01

    For defect disposition and repair verification regarding printability, AIMS™ is the state-of-the-art measurement tool in industry. With its unique capability of capturing aerial images of photomasks, it is the one method that comes closest to emulating the printing behaviour of a scanner. However, for nanoimprint lithography (NIL) templates, aerial images cannot be applied to evaluate the success of a repair process. Hence, for NIL defect dispositioning, scanning electron microscopy (SEM) imaging is the method of choice. In addition, it has been a standard imaging method for further root cause analysis of defects and defect review on optical photomasks, which enables 2D or even 3D mask profiling at high resolutions. In recent years, a trend observed in mask shops has been the automation of processes that traditionally were driven by operators. This of course has brought many advantages, one of which is freeing cost-intensive labour from conducting repetitive and tedious work. Furthermore, it reduces variability in processes due to different operator skill and experience levels, which ultimately contributes to eliminating the human factor. Taking these factors into consideration, one of the software-based solutions available under the FAVOR® brand to support customer needs is the aerial image evaluation software, AIMS™ AutoAnalysis (AAA). It provides fully automated analysis of AIMS™ images and runs in parallel to measurements. This is enabled by its direct connection and communication with the AIMS™ tools. As one of many positive outcomes, generating automated result reports is facilitated, standardizing the mask manufacturing workflow. Today, AAA has been successfully introduced into production at multiple customers and is supporting the workflow as described above. These trends indeed have triggered the demand for similar automation with respect to SEM measurements, leading to the development of SEM AutoAnalysis (SAA). It aims towards a fully automated SEM image evaluation process utilizing a completely different algorithm due to the different nature of SEM images and aerial images. Both AAA and SAA are the building blocks towards an image evaluation suite in the mask shop industry.

  9. Optimizing parameter choice for FSL-Brain Extraction Tool (BET) on 3D T1 images in multiple sclerosis.

    PubMed

    Popescu, V; Battaglini, M; Hoogstrate, W S; Verfaillie, S C J; Sluimer, I C; van Schijndel, R A; van Dijk, B W; Cover, K S; Knol, D L; Jenkinson, M; Barkhof, F; de Stefano, N; Vrenken, H

    2012-07-16

    Brain atrophy studies often use FSL-BET (Brain Extraction Tool) as the first step of image processing. Default BET does not always give satisfactory results on 3DT1 MR images, which negatively impacts atrophy measurements. Finding the right alternative BET settings can be a difficult and time-consuming task, which can introduce unwanted variability. Our aim was to systematically analyze the performance of BET in images of MS patients by varying its parameter and option combinations, and quantitatively comparing its results to a manual gold standard. Images from 159 MS patients were selected from different MAGNIMS consortium centers, covering 16 different 3DT1 acquisition protocols at 1.5T or 3T. Before running BET, one of three pre-processing pipelines was applied: (1) no pre-processing, (2) removal of neck slices, or (3) additional N3 inhomogeneity correction. Then BET was applied, systematically varying the fractional intensity threshold (the "f" parameter) and with either one of the main BET options ("B" - bias field correction and neck cleanup, "R" - robust brain center estimation, or "S" - eye and optic nerve cleanup) or none. For comparison, intracranial cavity masks were manually created for all image volumes. FSL-FAST (FMRIB's Automated Segmentation Tool) tissue-type segmentation was run on all BET output images and on the image volumes masked with the manual intracranial cavity masks (thus creating the gold-standard tissue masks). The resulting brain tissue masks were quantitatively compared to the gold standard using the Dice overlap coefficient (DOC). Normalized brain volumes (NBV) were calculated with SIENAX. NBV values obtained with SIENAX using BET settings other than the default were compared to the gold-standard NBV with the paired t-test. The parameter/preprocessing/options combinations resulted in 20,988 BET runs. The median DOC for default BET (f=0.5, g=0) was 0.913 (range 0.321-0.977) across all 159 native scans. For all acquisition protocols, brain extraction was substantially improved for lower values of "f" than the default value. Using native images, optimum BET performance was observed for f=0.2 with option "B", giving median DOC=0.979 (range 0.867-0.994). Using neck removal before BET, optimum BET performance was observed for f=0.1 with option "B", giving median DOC 0.983 (range 0.844-0.996). Using the above BET options for SIENAX instead of the default, the NBV values obtained from images after neck removal with f=0.1 and option "B" did not differ statistically from NBV values obtained with the gold standard. Although default BET performs reasonably well on most 3DT1 images of MS patients, the performance can be improved substantially. The removal of the neck slices, either externally or within BET, has a marked positive effect on the brain extraction quality. BET option "B" with f=0.1 after removal of the neck slices seems to work best for all acquisition protocols. Copyright © 2012 Elsevier Inc. All rights reserved.
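
    For reference, the overlap measure used throughout this record is straightforward to compute; a minimal Python sketch (the array names are hypothetical) is:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap coefficient between two binary masks: 2|A and B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else 1.0
```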

  10. Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing

    NASA Technical Reports Server (NTRS)

    Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane

    2012-01-01

    Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, for disasters, response requires rapid access to large data volumes, substantial storage space and high performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation is of work being conducted by the Applied Sciences Program Office at NASA-Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data was developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open-source process code on a local prototype platform, and then transitioning this code with its associated environment requirements into an analogous, but memory- and processor-enhanced, cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions, and then applying them to a given cloud-enabled infrastructure to assess and compare environment setup options and enabled technologies. This project reviews findings that were observed when cloud platforms were evaluated for bulk geoprocessing capabilities based on data handling and application development requirements.
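
    The two spectral indices named in this record follow standard band-ratio definitions; a minimal sketch (the band array names are assumptions about the input data) is:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = nir.astype(np.float64), red.astype(np.float64)
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def ndmi(nir, swir):
    """Normalized Difference Moisture Index: (NIR - SWIR) / (NIR + SWIR)."""
    nir, swir = nir.astype(np.float64), swir.astype(np.float64)
    return (nir - swir) / np.clip(nir + swir, 1e-6, None)
```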

  11. High-performance camera module for fast quality inspection in industrial printing applications

    NASA Astrophysics Data System (ADS)

    Fürtler, Johannes; Bodenstorfer, Ernst; Mayer, Konrad J.; Brodersen, Jörg; Heiss, Dorothea; Penz, Harald; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

    Today, printing products that must meet the highest quality standards, e.g., banknotes, stamps, or vouchers, are automatically checked by optical inspection systems. Typically, the examination of fine details of the print or security features demands images taken from various perspectives, with different spectral sensitivity (visible, infrared, ultraviolet), and with high resolution. Consequently, the inspection system is equipped with several cameras and has to cope with an enormous data rate to be processed in real-time. Hence, it is desirable to move image processing tasks into the camera to reduce the amount of data which has to be transferred to the (central) image processing system. The idea is to transfer relevant information only, i.e., features of the image instead of the raw image data from the sensor. These features are then further processed. In this paper, a color line-scan camera for line rates up to 100 kHz is presented. The camera is based on a commercial CMOS (complementary metal oxide semiconductor) area image sensor and a field programmable gate array (FPGA). It implements extraction of image features that are well suited to detect print flaws like blotches of ink, color smears, splashes, spots and scratches. The camera design and several image processing methods implemented on the FPGA are described, including flat field correction, compensation of geometric distortions, color transformation, as well as decimation and neighborhood operations.
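
    Of the on-camera corrections listed above, flat field correction is the simplest to illustrate; the sketch below shows the classic dark/flat normalization (the names are illustrative, and the paper's FPGA implementation is of course fixed-point and pipelined rather than floating-point):

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Classic flat-field correction: subtract the fixed-pattern offset (dark frame)
    and divide out the per-pixel gain (flat frame), rescaled to preserve brightness."""
    gain = flat.astype(np.float64) - dark.astype(np.float64)
    gain = np.clip(gain, 1e-6, None)       # guard against dead pixels
    return (raw.astype(np.float64) - dark) * gain.mean() / gain
```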

  12. Combination of CT scanning and fluoroscopy imaging on a flat-panel CT scanner

    NASA Astrophysics Data System (ADS)

    Grasruck, M.; Gupta, R.; Reichardt, B.; Suess, Ch.; Schmidt, B.; Stierstorfer, K.; Popescu, S.; Brady, T.; Flohr, T.

    2006-03-01

    We developed and evaluated a prototype flat-panel detector-based Volume CT (fpVCT) scanner. The fpVCT scanner consists of a Varian 4030CB a-Si flat-panel detector mounted in a multi-slice CT gantry (Siemens Medical Solutions). It provides a 25 cm field of view with 18 cm z-coverage at the isocenter. In addition to the standard tomographic scanning, fpVCT allows two new scan modes: (1) fluoroscopic imaging from any arbitrary rotation angle, and (2) continuous, time-resolved tomographic scanning of a dynamically changing viewing volume. Fluoroscopic imaging is feasible by modifying the standard CT gantry so that the imaging chain can be oriented along any user-selected rotation angle. Scanning with a stationary gantry, after it has been oriented, is equivalent to a conventional fluoroscopic examination. This scan mode enables combined use of high-resolution tomography and real-time fluoroscopy with a clinically usable field of view in the z direction. The second scan mode allows continuous observation of a time-evolving process such as perfusion. The gantry can be continuously rotated for up to 80 sec, with the rotation time ranging from 3 to 20 sec, to gather projection images of a dynamic process. The projection data, which provides a temporal log of the viewing volume, is then converted into multiple image stacks that capture the temporal evolution of a dynamic process. Studies using phantoms, ex vivo specimens, and live animals have confirmed that these new scanning modes are clinically usable and offer a unique view of the anatomy and physiology that heretofore has not been feasible using static CT scanning. At the current level of image quality and temporal resolution, several clinical applications such as dynamic angiography, tumor enhancement pattern and vascularity studies, organ perfusion, and interventional applications are within reach.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leng, Shuai; Yu, Lifeng; Wang, Jia

    Purpose: Our purpose was to reduce image noise in spectral CT by exploiting data redundancies in the energy domain to allow flexible selection of the number, width, and location of the energy bins. Methods: Using a variety of spectral CT imaging methods, conventional filtered backprojection (FBP) reconstructions were performed and resulting images were compared to those processed using a Local HighlY constrained backPRojection Reconstruction (HYPR-LR) algorithm. The mean and standard deviation of CT numbers were measured within regions of interest (ROIs), and results were compared between FBP and HYPR-LR. For these comparisons, the following spectral CT imaging methods were used: (i) numerical simulations based on a photon-counting, detector-based CT system, (ii) a photon-counting, detector-based micro CT system using rubidium and potassium chloride solutions, (iii) a commercial CT system equipped with integrating detectors utilizing tube potentials of 80, 100, 120, and 140 kV, and (iv) a clinical dual-energy CT examination. The effects of tube energy and energy bin width were evaluated appropriate to each CT system. Results: The mean CT number in each ROI was unchanged between FBP and HYPR-LR images for each of the spectral CT imaging scenarios, irrespective of bin width or tube potential. However, image noise, as represented by the standard deviation of CT numbers in each ROI, was reduced by 36%-76%. In all scenarios, image noise after HYPR-LR algorithm was similar to that of composite images, which used all available photons. No difference in spatial resolution was observed between HYPR-LR processing and FBP. Dual energy patient data processed using HYPR-LR demonstrated reduced noise in the individual, low- and high-energy images, as well as in the material-specific basis images. Conclusions: Noise reduction can be accomplished for spectral CT by exploiting data redundancies in the energy domain. HYPR-LR is a robust method for reducing image noise in a variety of spectral CT imaging systems without losing spatial resolution or CT number accuracy. This method improves the flexibility to select energy bins in the manner that optimizes material identification and separation without paying the penalty of increased image noise or its corollary, increased patient dose.
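
    A common way to express a HYPR-LR style weighting is to modulate the low-noise composite image by the ratio of low-pass filtered bin and composite images; the sketch below illustrates that idea only and is not the authors' implementation (the kernel size and the uniform filter are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hypr_lr_weighting(bin_img, composite_img, kernel=9):
    """Sketch of HYPR-LR style processing: the composite image (all photons,
    low noise) is modulated by the ratio of low-pass filtered energy-bin and
    composite images, so bin-specific contrast is retained while the noise
    behaviour follows the composite."""
    num = uniform_filter(bin_img.astype(np.float64), size=kernel)
    den = uniform_filter(composite_img.astype(np.float64), size=kernel)
    return composite_img * num / np.clip(den, 1e-6, None)
```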

  14. An array processing system for lunar geochemical and geophysical data

    NASA Technical Reports Server (NTRS)

    Eliason, E. M.; Soderblom, L. A.

    1977-01-01

    A computerized array processing system has been developed to reduce, analyze, display, and correlate a large number of orbital and earth-based geochemical, geophysical, and geological measurements of the moon on a global scale. The system supports the activities of a consortium of about 30 lunar scientists involved in data synthesis studies. The system was modeled after standard digital image-processing techniques but differs in that processing is performed with floating point precision rather than integer precision. Because of flexibility in floating-point image processing, a series of techniques that are impossible or cumbersome in conventional integer processing were developed to perform optimum interpolation and smoothing of data. Recently color maps of about 25 lunar geophysical and geochemical variables have been generated.

  15. LANDSAT 2 cumulative non-US standard catalog

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The Non-U.S. Standard Catalog lists imagery acquired by LANDSAT 1 and LANDSAT 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover, and image quality, are given for each scene. The microfilm roll and frame on which the scene may be found are also given.

  16. LANDSAT 3 world standard catalog, 6 March - 31 July 1978

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The World Standard Catalog lists imagery acquired by LANDSAT 3 which was processed and input to the data files during the referenced period. Information such as date of entry, cloud cover, and image quality is given for each scene. The microfilm roll and frame on which the scene may be found is also indicated.

  17. Landsat non-US standard catalog

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The Non-U.S. Standard Catalog lists Non-U.S. imagery acquired by Landsat 1 and 2 which was processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover and image quality are given for each scene. The microfilm roll and frame on which the scene may be found are also given.

  18. Assessment of body fat based on potential function clustering segmentation of computed tomography images

    NASA Astrophysics Data System (ADS)

    Zhang, Lixin; Lin, Min; Wan, Baikun; Zhou, Yu; Wang, Yizhong

    2005-01-01

    In this paper, a new method for assessing body fat and its distribution is proposed based on CT image processing. As it is more sensitive to slight differences in attenuation than standard radiography, CT depicts the soft tissues with better clarity. Body fat also has a distinct grayness range compared with its neighboring tissues in a CT image. An effective multi-threshold image segmentation method based on potential function clustering is used to deal with multiple peaks in the grayness histogram of a CT image. The CT images of the abdomens of 14 volunteers with different fatness are processed with the proposed method. Not only can the total fat area be obtained, but subcutaneous fat can also be differentiated from intra-abdominal fat. The results show the adaptability and stability of the proposed method, which will be a useful tool for diagnosing obesity.
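
    The grayness-range idea in this record can be illustrated with a fixed Hounsfield-unit window for adipose tissue (the commonly cited range of roughly -190 to -30 HU is an assumption here; the paper instead derives its thresholds automatically from histogram peaks via potential-function clustering):

```python
import numpy as np

def fat_mask(ct_hu, lo=-190.0, hi=-30.0):
    """Crude fat segmentation by a fixed Hounsfield-unit range."""
    return (ct_hu >= lo) & (ct_hu <= hi)

def fat_area_cm2(mask, pixel_spacing_mm):
    """Total fat area (cm^2) from a binary mask and the (row, col) pixel spacing in mm."""
    pixel_area_cm2 = (pixel_spacing_mm[0] * pixel_spacing_mm[1]) / 100.0
    return float(mask.sum()) * pixel_area_cm2
```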

  19. A Mathematical Model for Storage and Recall of Images using Targeted Synchronization of Coupled Maps.

    PubMed

    Palaniyandi, P; Rangarajan, Govindan

    2017-08-21

    We propose a mathematical model for storage and recall of images using coupled maps. We start by theoretically investigating targeted synchronization in coupled map systems wherein only a desired (partial) subset of the maps is made to synchronize. A simple method is introduced to specify coupling coefficients such that targeted synchronization is ensured. The principle of this method is extended to storage/recall of images using coupled Rulkov maps. The process of adjusting coupling coefficients between Rulkov maps (often used to model neurons) for the purpose of storing a desired image mimics the process of adjusting synaptic strengths between neurons to store memories. Our method uses both synchronisation and synaptic weight modification, as the human brain is thought to do. The stored image can be recalled by providing an initial random pattern to the dynamical system. The storage and recall of the standard image of Lena is explicitly demonstrated.
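
    The building block of the model is the Rulkov map with diffusive coupling between maps; the sketch below shows a single coupled pair only (the parameter values, coupling strength, and simple symmetric coupling are illustrative assumptions; the paper's targeted scheme chooses the coupling coefficients so that only a selected subset of a larger network synchronizes):

```python
import numpy as np

def rulkov_step(x, y, alpha=4.1, mu=0.001, sigma=0.1):
    """One iteration of the Rulkov map (fast variable x, slow variable y)."""
    x_new = alpha / (1.0 + x * x) + y
    y_new = y - mu * (x + 1.0) + mu * sigma
    return x_new, y_new

def coupled_pair(n_steps=5000, eps=0.3, seed=0):
    """Two Rulkov maps with symmetric diffusive coupling on the fast variable.
    Returns the final absolute difference of the fast variables; a small value
    indicates that the pair has synchronized."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=2)
    y = np.full(2, -2.9)
    for _ in range(n_steps):
        xn, yn = rulkov_step(x, y)
        xn = xn + eps * (xn[::-1] - xn)   # exchange a fraction of the fast variables
        x, y = xn, yn
    return abs(x[0] - x[1])
```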

  20. A novel structured dictionary for fast processing of 3D medical images, with application to computed tomography restoration and denoising

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-03-01

    Sparse representation of signals in learned overcomplete dictionaries has proven to be a powerful tool with applications in denoising, restoration, compression, reconstruction, and more. Recent research has shown that learned overcomplete dictionaries can lead to better results than analytical dictionaries such as wavelets in almost all image processing applications. However, a major disadvantage of these dictionaries is that their learning and usage is very computationally intensive. In particular, finding the sparse representation of a signal in these dictionaries requires solving an optimization problem that leads to very long computational times, especially in 3D image processing. Moreover, the sparse representation found by greedy algorithms is usually sub-optimal. In this paper, we propose a novel two-level dictionary structure that improves the performance and the speed of standard greedy sparse coding methods. The first (i.e., the top) level in our dictionary is a fixed orthonormal basis, whereas the second level includes the atoms that are learned from the training data. We explain how such a dictionary can be learned from the training data and how the sparse representation of a new signal in this dictionary can be computed. As an application, we use the proposed dictionary structure for removing the noise and artifacts in 3D computed tomography (CT) images. Our experiments with real CT images show that the proposed method achieves results that are comparable with standard dictionary-based methods while substantially reducing the computational time.
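
    The greedy sparse coding step that the proposed two-level dictionary accelerates is typically orthogonal matching pursuit; a generic single-level sketch is given below, with the understanding that the paper's structured dictionary first projects onto a fixed orthonormal top level before coding the residual in the learned atoms:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedy sparse code of signal y in dictionary D
    (columns of D are assumed to be unit-norm atoms)."""
    residual = y.astype(np.float64).copy()
    support, coef = [], np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # least-squares fit on the selected atoms, then update the residual
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef[support] = sol
    return coef
```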

  1. Radial line method for rear-view mirror distortion detection

    NASA Astrophysics Data System (ADS)

    Rahmah, Fitri; Kusumawardhani, Apriani; Setijono, Heru; Hatta, Agus M.; Irwansyah

    2015-01-01

    An image of the object can be distorted due to a defect in a mirror. A rear-view mirror is an important component for vehicle safety. One of the standard parameters of the rear-view mirror is the distortion factor. This paper presents a radial line method for distortion detection of the rear-view mirror. The rear-view mirror was tested for distortion detection by using a system consisting of a webcam sensor and an image-processing unit. In the image-processing unit, the captured image from the webcam was pre-processed by using smoothing and sharpening techniques, and then a radial line method was used to determine the distortion factor. It was demonstrated successfully that the radial line method could be used to determine the distortion factor. This detection system is useful for implementation in, for example, the Indonesian automotive component industry, where manual inspection is still used.

  2. EMAN2: an extensible image processing suite for electron microscopy.

    PubMed

    Tang, Guang; Peng, Liwei; Baldwin, Philip R; Mann, Deepinder S; Jiang, Wen; Rees, Ian; Ludtke, Steven J

    2007-01-01

    EMAN is a scientific image processing package with a particular focus on single particle reconstruction from transmission electron microscopy (TEM) images. It was first released in 1999, and new versions have been released typically 2-3 times each year since that time. EMAN2 has been under development for the last two years, with a completely refactored image processing library, and a wide range of features to make it much more flexible and extensible than EMAN1. The user-level programs are better documented, more straightforward to use, and written in the Python scripting language, so advanced users can modify the programs' behavior without any recompilation. A completely rewritten 3D transformation class simplifies translation between Euler angle standards and symmetry conventions. The core C++ library has over 500 functions for image processing and associated tasks, and it is modular with introspection capabilities, so programmers can add new algorithms with minimal effort and programs can incorporate new capabilities automatically. Finally, a flexible new parallelism system has been designed to address the shortcomings in the rigid system in EMAN1.

  3. Determination of Small Animal Long Bone Properties Using Densitometry

    NASA Technical Reports Server (NTRS)

    Breit, Gregory A.; Goldberg, BethAnn K.; Whalen, Robert T.; Hargens, Alan R. (Technical Monitor)

    1996-01-01

    Assessment of bone structural property changes due to loading regimens or pharmacological treatment typically requires destructive mechanical testing and sectioning. Our group has accurately and non-destructively estimated three-dimensional cross-sectional areal properties (principal moments of inertia, Imax and Imin, and principal angle, Theta) of human cadaver long bones from pixel-by-pixel analysis of three non-coplanar densitometry scans. Because the scanner beam width is on the order of typical small animal diaphyseal diameters, applying this technique to high-resolution scans of rat long bones necessitates additional processing to minimize errors induced by beam smearing, such as dependence on sample orientation and overestimation of Imax and Imin. We hypothesized that these errors are correctable by digital image processing of the raw scan data. In all cases, four scans, using only the low energy data (Hologic QDR-1000W, small animal mode), are averaged to increase image signal-to-noise ratio. Raw scans are additionally processed by interpolation, deconvolution by a filter derived from scanner beam characteristics, and masking using a variable threshold based on image dynamic range. To assess accuracy, we scanned an aluminum step phantom at 12 orientations over a range of 180 deg about the longitudinal axis, in 15 deg increments. The phantom dimensions (2.5, 3.1, 3.8 mm x 4.4 mm; Imin/Imax: 0.33-0.74) were comparable to the dimensions of a rat femur, which was also scanned. Cross-sectional properties were determined at 0.25 mm increments along the length of the phantom and femur. The table shows the average error (+/- SD) from theory of Imax, Imin, and Theta over the 12 orientations, calculated from raw and fully processed phantom images, as well as standard deviations about the mean for the femur scans. Processing of phantom scans increased agreement with theory, indicating improved accuracy. Smaller standard deviations with processing indicate increased precision and repeatability. Standard deviations for the femur are consistent with those of the phantom. We conclude that in conjunction with digital image enhancement, densitometry scans are suitable for non-destructive determination of areal properties of small animal bones of comparable size to our phantom, allowing prediction of Imax and Imin within 2.5% and Theta within a fraction of a degree. This method represents a considerable extension of current methods of analyzing bone tissue distribution in small animal bones.
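
    The areal properties named in this record follow from the second moments of the cross-sectional density distribution; a minimal sketch (the density weighting is a simplification, and conventions for the principal angle vary) is:

```python
import numpy as np

def principal_moments(density, pixel_size=1.0):
    """Density-weighted second moments of area of a 2D cross-section image:
    returns (Imax, Imin, theta_deg), where theta_deg is the orientation of the
    axis about which the moment equals Imax."""
    ys, xs = np.nonzero(density > 0)
    w = density[ys, xs].astype(np.float64)
    a = pixel_size ** 2                                   # pixel area
    xc, yc = np.average(xs, weights=w), np.average(ys, weights=w)
    dx, dy = (xs - xc) * pixel_size, (ys - yc) * pixel_size
    ixx = np.sum(w * dy ** 2) * a                         # moment about the x-axis
    iyy = np.sum(w * dx ** 2) * a                         # moment about the y-axis
    ixy = np.sum(w * dx * dy) * a
    evals, evecs = np.linalg.eigh(np.array([[ixx, -ixy], [-ixy, iyy]]))
    imin, imax = evals                                    # eigh sorts ascending
    theta_deg = np.degrees(np.arctan2(evecs[1, -1], evecs[0, -1]))
    return imax, imin, theta_deg
```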

  4. Enhancing Web applications in radiology with Java: estimating MR imaging relaxation times.

    PubMed

    Dagher, A P; Fitzpatrick, M; Flanders, A E; Eng, J

    1998-01-01

    Java is a relatively new programming language that has been used to develop a World Wide Web-based tool for estimating magnetic resonance (MR) imaging relaxation times, thereby demonstrating how Java may be used for Web-based radiology applications beyond improving the user interface of teaching files. A standard processing algorithm coded with Java is downloaded along with the hypertext markup language (HTML) document. The user (client) selects the desired pulse sequence and inputs data obtained from a region of interest on the MR images. The algorithm is used to modify selected MR imaging parameters in an equation that models the phenomenon being evaluated. MR imaging relaxation times are estimated, and confidence intervals and a P value expressing the accuracy of the final results are calculated. Design features such as simplicity, object-oriented programming, and security restrictions allow Java to expand the capabilities of HTML by offering a more versatile user interface that includes dynamic annotations and graphics. Java also allows the client to perform more sophisticated information processing and computation than is usually associated with Web applications. Java is likely to become a standard programming option, and the development of stand-alone Java applications may become more common as Java is integrated into future versions of computer operating systems.
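
    The kind of computation such an applet performs can be illustrated with a mono-exponential T2 fit; the model and the log-linear least-squares fit are standard, but the function below is a hypothetical Python stand-in, not the Java code described in the record:

```python
import numpy as np

def estimate_t2(te_ms, signal):
    """Estimate T2 from a multi-echo series by fitting the mono-exponential model
    S = S0 * exp(-TE / T2), linearized as ln S = ln S0 - TE / T2."""
    te = np.asarray(te_ms, dtype=np.float64)
    log_s = np.log(np.asarray(signal, dtype=np.float64))
    slope, intercept = np.polyfit(te, log_s, 1)
    return -1.0 / slope, np.exp(intercept)     # (T2 in ms, S0)

# Example with synthetic data: four echoes of a tissue with T2 of about 80 ms.
# estimate_t2([20, 40, 60, 80], [779, 607, 472, 368])
```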

  5. Laser doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing.

    PubMed

    He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P

    2013-09-18

    The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. To obtain a space efficient design over 64 × 64 pixels means that standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low resource implementation of the digital processor enables on-chip processing and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.

  6. Three-dimensional image signals: processing methods

    NASA Astrophysics Data System (ADS)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years, extensive studies have been carried out to apply coherent optics methods in real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature investigation of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms." These are holograms that can be stored on a computer and transmitted over conventional networks. We present some research methods for processing "digital holograms" for Internet transmission, along with results.
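
    The phase-shift interferometry mentioned at the end of this record is commonly implemented with four frames shifted by quarter-wave steps; a minimal sketch of the standard four-step phase retrieval is:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Standard four-step phase-shifting interferometry: frames recorded with
    reference phase shifts of 0, pi/2, pi and 3*pi/2 give the wrapped phase
    via atan2(I3 - I1, I0 - I2)."""
    i0, i1, i2, i3 = (np.asarray(i, dtype=np.float64) for i in (i0, i1, i2, i3))
    return np.arctan2(i3 - i1, i0 - i2)
```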

  7. Recent Advances in Techniques for Hyperspectral Image Processing

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; Benediktsson, Jon Atli; Boardman, Joseph W.; Brazile, Jason; Bruzzone, Lorenzo; Camps-Valls, Gustavo; Chanussot, Jocelyn; Fauvel, Mathieu; Gamba, Paolo; Gualtieri, Anthony

    2009-01-01

    Imaging spectroscopy, also known as hyperspectral imaging, has been transformed in less than 30 years from being a sparse research tool into a commodity product available to a broad user community. Currently, there is a need for standardized data processing techniques able to take into account the special properties of hyperspectral data. In this paper, we provide a seminal view on recent advances in techniques for hyperspectral image processing. Our main focus is on the design of techniques able to deal with the high-dimensional nature of the data, and to integrate the spatial and spectral information. Performance of the discussed techniques is evaluated in different analysis scenarios. To satisfy time-critical constraints in specific applications, we also develop efficient parallel implementations of some of the discussed algorithms. Combined, these parts provide an excellent snapshot of the state of the art in those areas, and offer a thoughtful perspective on future potentials and emerging challenges in the design of robust hyperspectral imaging algorithms.

  8. Dual-energy CT in patients with abdominal malignant lymphoma: impact of noise-optimised virtual monoenergetic imaging on objective and subjective image quality.

    PubMed

    Lenga, L; Czwikla, R; Wichmann, J L; Leithner, D; Albrecht, M H; D'Angelo, T; Arendt, C T; Booz, C; Hammerstingl, R; Vogl, T J; Martin, S S

    2018-06-05

    To investigate the impact of noise-optimised virtual monoenergetic imaging (VMI+) reconstructions on quantitative and qualitative image parameters in patients with malignant lymphoma at dual-energy computed tomography (DECT) examinations of the abdomen. Thirty-five consecutive patients (mean age, 53.8±18.6 years; range, 21-82 years) with histologically proven malignant lymphoma of the abdomen were included retrospectively. Images were post-processed with standard linear blending (M_0.6), traditional VMI, and VMI+ technique at energy levels ranging from 40 to 100 keV in 10 keV increments. Signal-to-noise (SNR) and contrast-to-noise ratios (CNR) were objectively measured in lymphoma lesions. Image quality, lesion delineation, and image noise were rated subjectively by three blinded observers using five-point Likert scales. Quantitative image quality parameters peaked at 40-keV VMI+ (SNR, 15.77±7.74; CNR, 18.27±8.04) with significant differences compared to standard linearly blended M_0.6 (SNR, 7.96±3.26; CNR, 13.55±3.47) and all traditional VMI series (p<0.001). Qualitative image quality assessment revealed significantly superior ratings for image quality at 60-keV VMI+ (median, 5) in comparison with all other image series (p<0.001). Assessment of lesion delineation showed the highest rating scores for 40-keV VMI+ series (median, 5), while lowest subjective image noise was found for 100-keV VMI+ reconstructions (median, 5). Low-keV VMI+ reconstructions led to improved image quality and lesion delineation of malignant lymphoma lesions compared to standard image reconstruction and traditional VMI at abdominal DECT examinations. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
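
    The ROI-based metrics reported here admit several definitions; one common convention (an assumption, not necessarily the exact formula used in this study) is sketched below:

```python
import numpy as np

def snr_cnr(lesion_roi, background_roi):
    """One common ROI-based definition:
    SNR = mean(lesion) / sd(background),
    CNR = (mean(lesion) - mean(background)) / sd(background)."""
    noise = np.asarray(background_roi, dtype=np.float64).std()
    mean_lesion = np.asarray(lesion_roi, dtype=np.float64).mean()
    mean_bg = np.asarray(background_roi, dtype=np.float64).mean()
    return mean_lesion / noise, (mean_lesion - mean_bg) / noise
```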

  9. SCIFIO: an extensible framework to support scientific image formats.

    PubMed

    Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W

    2016-12-07

    No gold standard exists in the world of scientific image acquisition; a proliferation of instruments each with its own proprietary data format has made out-of-the-box sharing of that data nearly impossible. In the field of light microscopy, the Bio-Formats library was designed to translate such proprietary data formats to a common, open-source schema, enabling sharing and reproduction of scientific results. While Bio-Formats has proved successful for microscopy images, the greater scientific community was lacking a domain-independent framework for format translation. SCIFIO (SCientific Image Format Input and Output) is presented as a freely available, open-source library unifying the mechanisms of reading and writing image data. The core of SCIFIO is its modular definition of formats, the design of which clearly outlines the components of image I/O to encourage extensibility, facilitated by the dynamic discovery of the SciJava plugin framework. SCIFIO is structured to support coexistence of multiple domain-specific open exchange formats, such as Bio-Formats' OME-TIFF, within a unified environment. SCIFIO is a freely available software library developed to standardize the process of reading and writing scientific image formats.

  10. Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images.

    PubMed

    Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina

    2016-05-01

    Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.

  11. The use of immunohistochemistry for biomarker assessment--can it compete with other technologies?

    PubMed

    Dunstan, Robert W; Wharton, Keith A; Quigley, Catherine; Lowe, Amanda

    2011-10-01

    A morphology-based assay such as immunohistochemistry (IHC) should be a highly effective means to define the expression of a target molecule of interest, especially if the target is a protein. However, over the past decade, IHC as a platform for biomarkers has been challenged by more quantitative molecular assays with reference standards but that lack morphologic context. For IHC to be considered a "top-tier" biomarker assay, it must provide truly quantitative data on par with non-morphologic assays, which means it needs to be run with reference standards. However, creating such standards for IHC will require optimizing all aspects of tissue collection, fixation, section thickness, morphologic criteria for assessment, staining processes, digitization of images, and image analysis. This will also require anatomic pathology to evolve from a discipline that is descriptive to one that is quantitative. A major step in this transformation will be replacing traditional ocular microscopes with computer monitors and whole slide images, for without digitization, there can be no accurate quantitation; without quantitation, there can be no standardization; and without standardization, the value of morphology-based IHC assays will not be realized.

  12. An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.

    PubMed

    Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero

    2017-04-01

    The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to the department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and by offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor required. The daily QA system is built around a phantom image taken by the radiographers at the beginning of day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.

  13. Automated measurement of pressure injury through image processing.

    PubMed

    Li, Dan; Mathews, Carol

    2017-11-01

    To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, the manual measurement of pressure injury is time-consuming, challenging and subject to intra/inter-reader variability with complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images was obtained from a western Pennsylvania hospital. First, we transformed the images from an RGB (i.e. red, green and blue) colour space to a YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map, generated by a skin colour Gaussian model, guided the pressure injury segmentation process using the Support Vector Machine classifier. Third, after segmentation, the reference ruler - included in each of the images - enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries. With this, clinicians will be able to more effectively monitor the healing process of pressure injuries. © 2017 John Wiley & Sons Ltd.
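
    The first step of the pipeline, the colour-space transform, follows a standard conversion; a sketch using the full-range BT.601 coefficients (an assumption about the exact variant used) is:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 RGB -> YCbCr conversion for an (H, W, 3) uint8 image;
    separating luma (Y) from chroma (Cb, Cr) makes skin segmentation less
    sensitive to illumination."""
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)
```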

  14. 36 CFR § 1238.14 - What are the microfilming requirements for permanent and unscheduled records?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... processing procedures in ANSI/AIIM MS1 and ANSI/AIIM MS23 (both incorporated by reference, see § 1238.5). (d... reference, see § 1238.5). (2) Background density of images. Agencies must use the background ISO standard... densities for images of documents are as follows: Classification Description of document Background density...

  15. Clinical evaluation of reducing acquisition time on single-photon emission computed tomography image quality using proprietary resolution recovery software.

    PubMed

    Aldridge, Matthew D; Waddington, Wendy W; Dickson, John C; Prakash, Vineet; Ell, Peter J; Bomanji, Jamshed B

    2013-11-01

    A three-dimensional model-based resolution recovery (RR) reconstruction algorithm that compensates for collimator-detector response, resulting in an improvement in reconstructed spatial resolution and signal-to-noise ratio of single-photon emission computed tomography (SPECT) images, was tested. The software is said to retain image quality even with reduced acquisition time. Clinically, any improvement in patient throughput without loss of quality is to be welcomed. Furthermore, future restrictions in radiotracer supplies may add value to this type of data analysis. The aims of this study were to assess improvement in image quality using the software and to evaluate the potential of performing reduced time acquisitions for bone and parathyroid SPECT applications. Data acquisition was performed using the local standard SPECT/CT protocols for 99mTc-hydroxymethylene diphosphonate bone and 99mTc-methoxyisobutylisonitrile parathyroid SPECT imaging. The principal modification applied was the acquisition of an eight-frame gated data set acquired using an ECG simulator with a fixed signal as the trigger. This had the effect of partitioning the data such that the effect of reduced time acquisitions could be assessed without conferring additional scanning time on the patient. The set of summed data sets was then independently reconstructed using the RR software to permit a blinded assessment of the effect of acquired counts upon reconstructed image quality as adjudged by three experienced observers. Data sets reconstructed with the RR software were compared with the local standard processing protocols; filtered back-projection and ordered-subset expectation-maximization. Thirty SPECT studies were assessed (20 bone and 10 parathyroid). The images reconstructed with the RR algorithm showed improved image quality for both full-time and half-time acquisitions over local current processing protocols (P<0.05). The RR algorithm improved image quality compared with local processing protocols and has been introduced into routine clinical use. SPECT acquisitions are now acquired at half of the time previously required. The method of binning the data can be applied to any other camera system to evaluate the reduction in acquisition time for similar processes. The potential for dose reduction is also inherent with this approach.

  16. Algorithms and programming tools for image processing on the MPP:3

    NASA Technical Reports Server (NTRS)

    Reeves, Anthony P.

    1987-01-01

    This is the third and final report on the work done for NASA Grant 5-403 on Algorithms and Programming Tools for Image Processing on the MPP:3. All the work done for this grant is summarized in the introduction. Work done since August 1986 is reported in detail. Research for this grant falls under the following headings: (1) fundamental algorithms for the MPP; (2) programming utilities for the MPP; (3) the Parallel Pascal Development System; and (4) performance analysis. In this report, the results of two efforts are reported: region growing, and performance analysis of important characteristic algorithms. In each case, timing results from MPP implementations are included. A paper is included in which parallel algorithms for region growing on the MPP are discussed. These algorithms permit different-sized regions to be merged in parallel. Details on the implementation and performance of several important MPP algorithms are given. These include a number of standard permutations, the FFT, convolution, arbitrary data mappings, image warping, and pyramid operations, all of which have been implemented on the MPP. The permutation and image warping functions have been included in the standard development system library.

  17. Ground control requirements for precision processing of ERTS images

    USGS Publications Warehouse

    Burger, Thomas C.

    1973-01-01

    With the successful flight of the ERTS-1 satellite, orbital height images are available for precision processing into products such as 1:1,000,000-scale photomaps and enlargements up to 1:250,000 scale. In order to maintain positional error below 100 meters, control points for the precision processing must be carefully selected, clearly definitive on photos in both X and Y. Coordinates of selected control points measured on existing 7½- and 15-minute standard maps provide sufficient accuracy for any space imaging system thus far defined. This procedure references the points to accepted horizontal and vertical datums. Maps as small as 1:250,000 scale can be used as source material for coordinates, but to maintain the desired accuracy, maps of 1:100,000 and larger scale should be used when available.

  18. A customizable system for real-time image processing using the Blackfin DSProcessor and the MicroC/OS-II real-time kernel

    NASA Astrophysics Data System (ADS)

    Coffey, Stephen; Connell, Joseph

    2005-06-01

    This paper presents a development platform for real-time image processing based on the ADSP-BF533 Blackfin processor and the MicroC/OS-II real-time operating system (RTOS). MicroC/OS-II is a completely portable, ROMable, pre-emptive, real-time kernel. The Blackfin Digital Signal Processors (DSPs), incorporating the Analog Devices/Intel Micro Signal Architecture (MSA), are a broad family of 16-bit fixed-point products with a dual Multiply Accumulate (MAC) core. In addition, they have a rich instruction set with variable instruction length and both DSP and MCU functionality thus making them ideal for media based applications. Using the MicroC/OS-II for task scheduling and management, the proposed system can capture and process raw RGB data from any standard 8-bit greyscale image sensor in soft real-time and then display the processed result using a simple PC graphical user interface (GUI). Additionally, the GUI allows configuration of the image capture rate and the system and core DSP clock rates thereby allowing connectivity to a selection of image sensors and memory devices. The GUI also allows selection from a set of image processing algorithms based in the embedded operating system.

  19. Integrating research and clinical neuroimaging for the evaluation of traumatic brain injury recovery

    NASA Astrophysics Data System (ADS)

    Senseney, Justin; Ollinger, John; Graner, John; Lui, Wei; Oakes, Terry; Riedy, Gerard

    2015-03-01

    Advanced MRI research and other imaging modalities may serve as biomarkers for the evaluation of traumatic brain injury (TBI) recovery. However, these advanced modalities typically require off-line processing which creates images that are incompatible with radiologist viewing software sold commercially. AGFA Impax is an example of such a picture archiving and communication system (PACS) that is used by many radiology departments in the United States Military Health System. By taking advantage of Impax's use of the Digital Imaging and Communications in Medicine (DICOM) standard, we developed a system that allows for advanced medical imaging to be incorporated into clinical PACS. Radiology research can now be conducted using existing clinical imaging display platform resources in combination with image processing techniques that are only available outside of the clinical scanning environment. We extracted the spatial and identification elements of the DICOM standard that are necessary to allow research images to be incorporated into a clinical radiology system, and developed a tool that annotates research images with the proper tags. This allows for the evaluation of imaging representations of biological markers that may be useful in the evaluation of TBI and TBI recovery.

  20. A Framework for Integration of Heterogeneous Medical Imaging Networks

    PubMed Central

    Viana-Ferreira, Carlos; Ribeiro, Luís S; Costa, Carlos

    2014-01-01

    Medical imaging is increasing its importance in matters of medical diagnosis and in treatment support. Much is due to computers that have revolutionized medical imaging not only in acquisition process but also in the way it is visualized, stored, exchanged and managed. Picture Archiving and Communication Systems (PACS) is an example of how medical imaging takes advantage of computers. To solve problems of interoperability of PACS and medical imaging equipment, the Digital Imaging and Communications in Medicine (DICOM) standard was defined and widely implemented in current solutions. More recently, the need to exchange medical data between distinct institutions resulted in Integrating the Healthcare Enterprise (IHE) initiative that contains a content profile especially conceived for medical imaging exchange: Cross Enterprise Document Sharing for imaging (XDS-i). Moreover, due to application requirements, many solutions developed private networks to support their services. For instance, some applications support enhanced query and retrieve over DICOM objects metadata. This paper proposes an integration framework to medical imaging networks that provides protocols interoperability and data federation services. It is an extensible plugin system that supports standard approaches (DICOM and XDS-I), but is also capable of supporting private protocols. The framework is being used in the Dicoogle Open Source PACS. PMID:25279021

  1. A framework for integration of heterogeneous medical imaging networks.

    PubMed

    Viana-Ferreira, Carlos; Ribeiro, Luís S; Costa, Carlos

    2014-01-01

    Medical imaging is increasing its importance in matters of medical diagnosis and in treatment support. Much is due to computers that have revolutionized medical imaging not only in acquisition process but also in the way it is visualized, stored, exchanged and managed. Picture Archiving and Communication Systems (PACS) is an example of how medical imaging takes advantage of computers. To solve problems of interoperability of PACS and medical imaging equipment, the Digital Imaging and Communications in Medicine (DICOM) standard was defined and widely implemented in current solutions. More recently, the need to exchange medical data between distinct institutions resulted in Integrating the Healthcare Enterprise (IHE) initiative that contains a content profile especially conceived for medical imaging exchange: Cross Enterprise Document Sharing for imaging (XDS-i). Moreover, due to application requirements, many solutions developed private networks to support their services. For instance, some applications support enhanced query and retrieve over DICOM objects metadata. This paper proposes an integration framework to medical imaging networks that provides protocols interoperability and data federation services. It is an extensible plugin system that supports standard approaches (DICOM and XDS-I), but is also capable of supporting private protocols. The framework is being used in the Dicoogle Open Source PACS.

  2. Ultrasonic Imaging Techniques for Breast Cancer Detection

    NASA Astrophysics Data System (ADS)

    Goulding, N. R.; Marquez, J. D.; Prewett, E. M.; Claytor, T. N.; Nadler, B. R.

    2008-02-01

    Improving the resolution and specificity of current ultrasonic imaging technology is needed to enhance its relevance to breast cancer detection. A novel ultrasonic imaging reconstruction method is described that exploits classical straight-ray migration. This novel method improves signal processing for better image resolution and uses novel staging hardware options using a pulse-echo approach. A breast phantom with various inclusions is imaged using the classical migration method and is compared to standard computed tomography (CT) scans. These innovative ultrasonic methods incorporate ultrasound data acquisition, beam profile characterization, and image reconstruction. For an ultrasonic frequency of 2.25 MHz, imaged inclusions of approximately 1 cm are resolved and identified. Better resolution is expected with minor modifications. Improved image quality and resolution enables earlier detection and more accurate diagnoses of tumors thus reducing the number of biopsies performed, increasing treatment options, and lowering remission percentages. Using these new techniques the inclusions in the phantom are resolved and compared to the results of standard methods. Refinement of this application using other imaging techniques such as time-reversal mirrors (TRM), synthetic aperture focusing technique (SAFT), decomposition of the time reversal operator (DORT), and factorization methods is also discussed.

  3. Meta-analysis of the technical performance of an imaging procedure: guidelines and statistical methodology.

    PubMed

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2015-02-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of the assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  4. Meta-analysis of the technical performance of an imaging procedure: Guidelines and statistical methodology

    PubMed Central

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2017-01-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of the assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test–retest repeatability data for illustrative purposes. PMID:24872353

  5. [Basic concept in computer assisted surgery].

    PubMed

    Merloz, Philippe; Wu, Hao

    2006-03-01

    To investigate the application of medical digital imaging systems and computer technologies in orthopedics. The main computer-assisted surgery systems comprise the following four subcategories. (1) A collection and recording process for digital data on each patient, including preoperative images (CT scans, MRI, standard X-rays), intraoperative visualization (fluoroscopy, ultrasound), and the intraoperative position and orientation of surgical instruments or bone sections (using 3D localisers). Data merging is based on the matching of preoperative imaging (CT scans, MRI, standard X-rays) and intraoperative visualization (anatomical landmarks or bone surfaces digitized intraoperatively via a 3D localiser; intraoperative ultrasound images processed for delineation of bone contours). (2) In cases where only intraoperative images are used for computer-assisted surgical navigation, calibration of the intraoperative imaging system replaces the merged data system, which is then no longer necessary. (3) A system that aids decision-making, so that the surgical approach is planned on the basis of multimodal information: the interactive positioning of surgical instruments or bone sections transmitted via pre- or intraoperative images, and display of elements to guide surgical navigation (direction, axis, orientation, length and diameter of a surgical instrument, impingement, etc.). (4) A system that monitors the surgical procedure, thereby ensuring that the optimal strategy defined at the preoperative stage is taken into account. It is possible that computer-assisted orthopedic surgery systems will enable surgeons to better assess the accuracy and reliability of the various operative techniques, an indispensable stage in the optimization of surgery.

  6. Feature evaluation of complex hysteresis smoothing and its practical applications to noisy SEM images.

    PubMed

    Suzuki, Kazuhiko; Oho, Eisaku

    2013-01-01

    The quality of a scanning electron microscopy (SEM) image is strongly influenced by noise, a fundamental drawback of the SEM instrument. Complex hysteresis smoothing (CHS) was previously developed for noise removal from SEM images. Noise removal is performed by monitoring and properly processing the amplitude of the SEM signal. As it stands, CHS is not widely utilized, although it has several advantages for SEM; for example, the resolution of an image processed by CHS is essentially equal to that of the original image. To enable wide application of the CHS method in microscopy, the characteristics of CHS, which have not previously been clarified, are evaluated. Applying the results of this evaluation, the cursor width (CW), the sole processing parameter of CHS, is determined more appropriately using the standard deviation of the noise, Nσ. In addition, the drawback that CHS cannot remove noise of excessively large amplitude is remedied by a postprocessing step. CHS is successfully applicable to SEM images with various noise amplitudes. © Wiley Periodicals, Inc.

  7. Hospital integrated parallel cluster for fast and cost-efficient image analysis: clinical experience and research evaluation

    NASA Astrophysics Data System (ADS)

    Erberich, Stephan G.; Hoppe, Martin; Jansen, Christian; Schmidt, Thomas; Thron, Armin; Oberschelp, Walter

    2001-08-01

    In the last few years, more and more university hospitals as well as private hospitals have changed to digital information systems for patient records, diagnostic files and digital images. Not only does patient management become easier; clinical research can also profit considerably from Picture Archiving and Communication Systems (PACS) and diagnostic databases, especially image databases. Although images are available at one's fingertips, difficulties arise when image data needs to be processed, e.g. segmented, classified or co-registered, which usually demands a lot of computational power. Today's clinical environment supports PACS very well, but real image processing is still under-developed. The purpose of this paper is to introduce a parallel cluster of standard distributed systems and its software components, and to show how such a system can be integrated into a hospital environment. To demonstrate the cluster technique, we present our clinical experience with the crucial but cost-intensive motion correction of clinical routine and research functional MRI (fMRI) data, as processed in our lab on a daily basis.

  8. apART: system for the acquisition, processing, archiving, and retrieval of digital images in an open, distributed imaging environment

    NASA Astrophysics Data System (ADS)

    Schneider, Uwe; Strack, Ruediger

    1992-04-01

    apART reflects the structure of an open, distributed environment. According to the general trend in the area of imaging, network-capable, general-purpose workstations with capabilities for open-system image communication and image input are used. Several heterogeneous components like CCD cameras, slide scanners, and image archives can be accessed. The system is driven by an object-oriented user interface in which devices (image sources and destinations), operators (derived from a commercial image processing library), and images (of different data types) are managed and presented uniformly to the user. Browsing mechanisms are used to traverse devices, operators, and images. An audit trail mechanism is offered to record interactive operations on low-resolution image derivatives; these operations are then processed off-line on the original image. Thus, the processing of extremely high-resolution raster images is possible, and the performance of resolution-dependent operations is enhanced significantly during interaction. An object-oriented database system (APRIL), which can be browsed, is integrated into the system. Attribute retrieval is supported by the user interface. Other essential features of the system include: implementation on top of the X Window System (X11R4) and the OSF/Motif widget set; a SUN4 general-purpose workstation, including Ethernet, magneto-optical disc, etc., as the hardware platform for the user interface; complete graphical-interactive parametrization of all operators; support of different image interchange formats (GIF, TIFF, IIF, etc.); and consideration of current IPI standard activities within ISO/IEC for further refinement and extensions.

  9. Overview of CMOS process and design options for image sensor dedicated to space applications

    NASA Astrophysics Data System (ADS)

    Martin-Gonthier, P.; Magnan, P.; Corbiere, F.

    2005-10-01

    With the growth of huge-volume markets (mobile phones, digital cameras, etc.), CMOS technologies for image sensors have improved significantly. New process flows have appeared that optimize parameters such as quantum efficiency, dark current, and conversion gain. Space applications can of course benefit from these improvements. To illustrate this evolution, this paper reports results from three technologies that have been evaluated with test vehicles composed of several sub-arrays designed with space applications as the target. These three technologies are standard, improved, and sensor-optimized CMOS processes in the 0.35 μm generation. Measurements are focused on quantum efficiency, dark current, conversion gain and noise. Other measurements, such as the Modulation Transfer Function (MTF) and crosstalk, are depicted in [1]. A comparison between the results has been made and three categories of CMOS process for image sensors have been identified. Radiation tolerance has also been studied for the improved CMOS process, with the imager hardened by design. Results at 4, 15, 25 and 50 krad demonstrate good ionizing-dose radiation tolerance when specific design techniques are applied.

  10. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied to PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly, with the Lucy–Richardson deconvolution algorithm applied to the current estimate of the image at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background, estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
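
    As a toy illustration of the central idea, the sketch below applies a Lucy-Richardson update to the current image estimate at each outer iteration of a simple 2D deblurring loop. All names and parameters are invented stand-ins; the paper's method operates inside list-mode OSEM with a measured PSF and a wavelet-based denoising step, none of which is reproduced here.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy_step(estimate, observed, psf, eps=1e-8):
            # One multiplicative Lucy-Richardson update of the estimate.
            blur = fftconvolve(estimate, psf, mode="same")
            ratio = observed / (blur + eps)
            return estimate * fftconvolve(ratio, psf[::-1, ::-1], mode="same")

        x = np.linspace(-2, 2, 9)
        g = np.exp(-x**2)
        psf = np.outer(g, g)
        psf /= psf.sum()

        truth = np.zeros((64, 64))
        truth[28:36, 28:36] = 1.0
        observed = fftconvolve(truth, psf, mode="same")  # stands in for acquired data
        estimate = np.full_like(observed, observed.mean())
        for _ in range(20):
            # ...an OSEM update from projection data would go here...
            estimate = richardson_lucy_step(estimate, observed, psf)
        print("peak recovered:", float(estimate.max()), "vs truth", float(truth.max()))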

  11. Smart cloud system with image processing server in diagnosing brain diseases dedicated for hospitals with limited resources.

    PubMed

    Fahmi, Fahmi; Nasution, Tigor H; Anggreiny, Anggreiny

    2017-01-01

    The use of medical imaging in diagnosing brain disease is growing. The challenges are related to the large size of the data and the complexity of the image processing, demanding a high standard of hardware and software that can only be provided in big hospitals. Our purpose was to provide a smart cloud system to help diagnose brain diseases in hospitals with limited infrastructure. The expertise of neurologists was first embedded in the cloud server to conduct an automatic diagnosis in real time, using an image processing technique developed with the ITK library and a web service. Users upload images through a website, and the result, in this case the size of the tumor, is sent back immediately. A specific image compression technique was developed for this purpose. The smart cloud system was able to measure the area and location of tumors, with an average size of 19.91 ± 2.38 cm² and an average response time of 7.0 ± 0.3 s. The capability of the server decreased when multiple clients accessed the system simultaneously: 14 ± 0 s (5 parallel clients) and 27 ± 0.2 s (10 parallel clients). The cloud system was successfully developed to process and analyze medical images for diagnosing brain diseases, in this case tumors.

  12. In situ spectroradiometric quantification of ERTS data. [Prescott and Phoenix, Arizona

    NASA Technical Reports Server (NTRS)

    Yost, E. F. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. Analyses of ERTS-1 photographic data were made to quantitatively relate ground reflectance measurements to the photometric characteristics of the images. Digital image processing of photographic data resulted in a nomograph to correct for atmospheric effects over arid terrain. Optimum processing techniques to derive maximum geologic information from desert areas were established. Additive color techniques providing quantitative measurements of surface water between different orbits were developed and accepted as the standard flood mapping techniques using ERTS.

  13. Phase Composition Maps integrate mineral compositions with rock textures from the micro-meter to the thin section scale

    NASA Astrophysics Data System (ADS)

    Willis, Kyle V.; Srogi, LeeAnn; Lutz, Tim; Monson, Frederick C.; Pollock, Meagen

    2017-12-01

    Textures and compositions are critical information for interpreting rock formation. Existing methods to integrate both types of information favor high-resolution images of mineral compositions over small areas or low-resolution images of larger areas for phase identification. The method in this paper produces images of individual phases in which textural and compositional details are resolved over three orders of magnitude, from tens of micrometers to tens of millimeters. To construct these images, called Phase Composition Maps (PCMs), we make use of the resolution in backscattered electron (BSE) images and calibrate the gray scale values with mineral analyses by energy-dispersive X-ray spectrometry (EDS). The resulting images show the area of a standard thin section (roughly 40 mm × 20 mm) with spatial resolution as good as 3.5 μm/pixel, or more than 81 000 pixels/mm2, comparable to the resolution of X-ray element maps produced by wavelength-dispersive spectrometry (WDS). Procedures to create PCMs for mafic igneous rocks with multivariate linear regression models for minerals with solid solution (olivine, plagioclase feldspar, and pyroxenes) are presented and are applicable to other rock types. PCMs are processed using threshold functions based on the regression models to image specific composition ranges of minerals. PCMs are constructed using widely-available instrumentation: a scanning-electron microscope (SEM) with BSE and EDS X-ray detectors and standard image processing software such as ImageJ and Adobe Photoshop. Three brief applications illustrate the use of PCMs as petrologic tools: to reveal mineral composition patterns at multiple scales; to generate crystal size distributions for intracrystalline compositional zones and compare growth over time; and to image spatial distributions of minerals at different stages of magma crystallization by integrating textures and compositions with thermodynamic modeling.

  14. LANDSAT US standard catalog, 1-31 March 1976

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The U.S. Standard Catalog lists U.S. imagery acquired by LANDSAT 1 and LANDSAT 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover, and image quality, are given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  15. LANDSAT: Non-US standard catalog. [LANDSAT imagery for August 1977

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The non-U.S. Standard Catalog lists non-U.S. imagery acquired by LANDSAT 1 and 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover, and image quality, are given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  16. LANDSAT non-US standard catalog, 1-31 May 1976

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The non-U.S. Standard Catalog lists non-U.S. imagery acquired by LANDSAT 1 and LANDSAT 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover, and image quality, are given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  17. LANDSAT 2 cumulative US standard catalog. [LANDSAT imagery for January 1976

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The U.S. Standard Catalog lists U.S. imagery acquired by LANDSAT 1 and LANDSAT 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover and image quality, are given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  18. "When Meeting 'Khun' Teacher, Each Time We Should Pay Respect": Standardizing Respect in a Northern Thai Classroom

    ERIC Educational Resources Information Center

    Howard, Kathryn M.

    2009-01-01

    This paper examines how Northern Thai (Muang) children are socialized into the discourses and practices of respect in school, a process that indexically links Standard Thai to images of polite and respectful Thai citizenship. Focusing on the socialization of politeness particles, the paper examines how cultural models of conduct are taken up,…

  19. LANDSAT: US standard catalog, 1 January 1977 through 31 January 1977

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The U.S. Standard Catalog lists U.S. imagery acquired by LANDSAT 1 and LANDSAT 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover, and image quality, are given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  20. Delay and Standard Deviation Beamforming to Enhance Specular Reflections in Ultrasound Imaging.

    PubMed

    Bandaru, Raja Sekhar; Sornes, Anders Rasmus; Hermans, Jeroen; Samset, Eigil; D'hooge, Jan

    2016-12-01

    Although interventional devices, such as needles, guide wires, and catheters, are best visualized by X-ray, real-time volumetric echography could offer an attractive alternative, as it avoids ionizing radiation, provides good soft tissue contrast, and is mobile and relatively cheap. Unfortunately, as echography is traditionally used to image soft tissue and blood flow, the appearance of interventional devices in conventional ultrasound images remains relatively poor, which is a major obstacle toward ultrasound-guided interventions. The objective of this paper was therefore to enhance the appearance of interventional devices in ultrasound images. To that end, a modified ultrasound beamforming process using conventional focused transmit beams is proposed that exploits the properties of received signals containing specular reflections (as arising from these devices). This new beamforming approach, referred to as delay and standard deviation beamforming (DASD), was quantitatively tested using simulated as well as experimental data from a linear array transducer. Furthermore, the influence of different imaging settings (i.e., transmit focus, imaging depth, and scan angle) on the obtained image contrast was evaluated. The study showed that the image contrast of specular regions improved by 5-30 dB using DASD beamforming compared with traditional delay and sum (DAS) beamforming. The highest gain in contrast was observed when the interventional device was tilted away from being orthogonal to the transmit beam, which is a major limitation in standard DAS imaging. As such, the proposed beamforming methodology can offer improved visualization of interventional devices in the ultrasound image, with potential implications for ultrasound-guided interventions.
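
    A cartoon of the contrast between the two beamformers, assuming the arrays below hold channel samples already delayed to a single focal point: the coherent tissue echo survives averaging, while an oscillating, localized specular echo cancels under delay-and-sum but produces a large channel-wise standard deviation. This is a didactic simplification of the DASD idea, not the paper's implementation.

        import numpy as np

        n_channels = 64
        tissue = np.full(n_channels, 0.5)  # coherent across the aperture after delays
        tilted_device = np.zeros(n_channels)
        tilted_device[20:28] = [1, -1, 1, -1, 1, -1, 1, -1]  # localized, oscillating

        def das(aligned):   # delay and sum (here: mean over channels)
            return abs(np.mean(aligned))

        def dasd(aligned):  # delay and standard deviation
            return np.std(aligned)

        print(f"DAS   tissue={das(tissue):.2f}  device={das(tilted_device):.2f}")
        print(f"DASD  tissue={dasd(tissue):.2f}  device={dasd(tilted_device):.2f}")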

  1. High-performance computing in image registration

    NASA Astrophysics Data System (ADS)

    Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro

    2012-10-01

    Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications, the processing of such images requires high-performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging task of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to subsequently compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES) exploiting parallel architectures and GPUs are presented. The innovative aspects of the implementation are (i) its effectiveness on a large variety of unorganized and complex datasets, (ii) its capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented on.

  2. Optimizing hippocampal segmentation in infants utilizing MRI post-acquisition processing.

    PubMed

    Thompson, Deanne K; Ahmadzai, Zohra M; Wood, Stephen J; Inder, Terrie E; Warfield, Simon K; Doyle, Lex W; Egan, Gary F

    2012-04-01

    This study aims to determine the most reliable method for infant hippocampal segmentation by comparing magnetic resonance (MR) imaging post-acquisition processing techniques: contrast-to-noise ratio (CNR) enhancement, or reformatting to a standard orientation. MR scans were performed with a 1.5 T GE scanner to obtain dual-echo T2 and proton density (PD) images at term equivalent (38-42 weeks' gestational age). Fifteen hippocampi were manually traced four times on ten infant images by two independent raters on the original T2 image, as well as on images processed by: a) combining T2 and PD images (T2-PD) to enhance CNR; then b) reformatting T2-PD images perpendicular to the long axis of the left hippocampus. CNRs and intraclass correlation coefficients (ICC) were calculated. T2-PD images had 17% higher CNR (15.2) than T2 images (12.6). The original T2 volumes' ICC was 0.87 for rater 1 and 0.84 for rater 2, whereas the T2-PD images' ICC was 0.95 for rater 1 and 0.87 for rater 2. Reliability of hippocampal segmentation on T2-PD images was not improved by reformatting the images (rater 1 ICC = 0.88, rater 2 ICC = 0.66). Post-acquisition processing can improve CNR and hence the reliability of hippocampal segmentation in neonate MR scans when tissue contrast is poor. These findings may be applied to enhance boundary definition in infant segmentation for various brain structures, or in any volumetric study where image contrast is sub-optimal, enabling hippocampal structure-function relationships to be explored.
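
    A small numerical sketch of the contrast-to-noise logic behind combining the two echoes, assuming independent Gaussian noise and a pooled-standard-deviation CNR definition (the study's exact CNR formula is not given in the abstract, and the contrast values below are invented):

        import numpy as np

        def cnr(img):
            # CNR = |mean(A) - mean(B)| / pooled standard deviation
            a, b = img[:32], img[32:]  # "structure" rows vs "background" rows
            return abs(a.mean() - b.mean()) / np.sqrt(0.5 * (a.var() + b.var()))

        rng = np.random.default_rng(2)
        t2 = rng.normal(100, 10, (64, 64))
        t2[:32] += 15                   # structure is brighter on T2
        pd = rng.normal(120, 10, (64, 64))
        pd[:32] += 10                   # and slightly brighter on PD
        combined = 0.5 * (t2 + pd)      # averaging halves independent noise variance
        for name, img in [("T2", t2), ("T2-PD", combined)]:
            print(f"{name:6s} CNR = {cnr(img):.2f}")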

  3. SPIDER: Next Generation Chip Scale Imaging Sensor Update

    NASA Astrophysics Data System (ADS)

    Duncan, A.; Kendrick, R.; Ogden, C.; Wuchenich, D.; Thurman, S.; Su, T.; Lai, W.; Chun, J.; Li, S.; Liu, G.; Yoo, S. J. B.

    2016-09-01

    The Lockheed Martin Advanced Technology Center (LM ATC) and the University of California at Davis (UC Davis) are developing an electro-optical (EO) imaging sensor called SPIDER (Segmented Planar Imaging Detector for Electro-optical Reconnaissance) that seeks to provide a 10x to 100x size, weight, and power (SWaP) reduction alternative to the traditional bulky optical telescope and focal-plane detector array. The substantial reductions in SWaP would reduce cost and/or provide higher resolution by enabling a larger-aperture imager in a constrained volume. Our SPIDER imager replaces the traditional optical telescope and digital focal plane detector array with a densely packed interferometer array based on emerging photonic integrated circuit (PIC) technologies that samples the object being imaged in the Fourier domain (i.e., spatial frequency domain), and then reconstructs an image. Our approach replaces the large optics and structures required by a conventional telescope with PICs that are accommodated by standard lithographic fabrication techniques (e.g., complementary metal-oxide-semiconductor (CMOS) fabrication). The standard EO payload integration and test process, which involves precision alignment and testing of optical components to form a diffraction-limited telescope, is therefore replaced by in-process integration and test as part of the PIC fabrication, which substantially reduces the associated schedule and cost. This paper provides an overview of performance data on the second-generation PIC for SPIDER developed under the Defense Advanced Research Projects Agency (DARPA)'s SPIDER Zoom research funding. We also update the design description of the SPIDER Zoom imaging sensor and the second-generation PIC (high- and low-resolution versions).

  4. Effects of processing conditions on mammographic image quality.

    PubMed

    Braeuning, M P; Cooper, H W; O'Brien, S; Burns, C B; Washburn, D B; Schell, M J; Pisano, E D

    1999-08-01

    Any given mammographic film will exhibit changes in sensitometric response and image resolution as processing variables are altered. Developer type, immersion time, and temperature have been shown to affect the contrast of the mammographic image and thus lesion visibility. The authors evaluated the effect of altering processing variables, including film type, developer type, and immersion time, on the visibility of masses, fibrils, and specks in a standard mammographic phantom. Images of a phantom obtained with two screen types (Kodak Min-R and Fuji) and five film types (Kodak Min-R M, Min-R E, and Min-R H; Fuji UM-MA HC; and DuPont Microvision-C) were processed with five different developer chemicals (Autex SE, DuPont HSD, Kodak RP, Picker 3-7-90, and White Mountain) at four different immersion times (24, 30, 36, and 46 seconds). Processor chemical activity was monitored with sensitometric strips, and developer temperatures were continuously measured. The film images were reviewed by two board-certified radiologists and two physicists with expertise in mammography quality control and were scored based on the visibility of calcifications, masses, and fibrils. Although the differences in the absolute scores were not large, the Kodak Min-R M and Fuji films exhibited the highest scores, and images developed in White Mountain and Autex chemicals exhibited the highest scores. For any film, several processing chemicals may be used to produce images of similar quality. Extended processing may no longer be necessary.

  5. Structures Validation Profiles in Transmission of Imaging and Data (TRIAD) for Automated NCTN Clinical Trial Digital Data Quality Assurance

    PubMed Central

    Giaddui, Tawfik; Yu, Jialu; Manfredi, Denise; Linnemann, Nancy; Hunter, Joanne; O’Meara, Elizabeth; Galvin, James; Bialecki, Brian; Xiao, Ying

    2016-01-01

    Transmission of Imaging and Data (TRIAD) is a standards-based system built by the American College of Radiology (ACR) to provide seamless exchange of images and data for accreditation of clinical trials and registries. Scripts of structure-name validation profiles created in TRIAD are used in the automated submission process. It is essential for users to understand the logistics of these scripts for successful submission of radiotherapy cases with fewer iterations. PMID:27053498

  6. Quantitative Analysis of Rat Dorsal Root Ganglion Neurons Cultured on Microelectrode Arrays Based on Fluorescence Microscopy Image Processing.

    PubMed

    Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo

    2015-12-01

    Microelectrode Arrays (MEA) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activity in in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence channel images using a combination of segmentation by thresholding, the watershed transform, and object classification. The positioning of the microelectrodes is obtained from the transmitted light channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis carried out produced information regarding the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEAs consider only simple qualitative aspects. The proposed framework aims to standardize the image processing and to compute quantitatively useful measures for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, as are the implemented neuron segmentation and classification methods (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant to assist the integrated signal-image study of recorded electrophysiological signals as well as the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments in the image processing parameter estimation.
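
    A minimal scikit-image sketch of the segmentation stage named above (thresholding plus a watershed on the distance transform). The synthetic blobs and parameter values are invented, and the object-classification and circular-Hough steps of the paper's pipeline are omitted.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import threshold_otsu
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        rng = np.random.default_rng(3)
        yy, xx = np.ogrid[:128, :128]
        img = np.zeros((128, 128))
        for y, x in [(40, 40), (50, 52), (90, 80)]:  # three overlapping "somata"
            img += np.exp(-((yy - y)**2 + (xx - x)**2) / 60.0)
        img += rng.normal(0, 0.02, img.shape)

        mask = img > threshold_otsu(img)             # segmentation by thresholding
        distance = ndi.distance_transform_edt(mask)
        coords = peak_local_max(distance, min_distance=8, labels=mask)
        markers = np.zeros_like(img, dtype=int)
        markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
        labels = watershed(-distance, markers, mask=mask)  # split touching cells
        print("neurons found:", labels.max())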

  7. OR2020: The Operating Room of the Future

    DTIC Science & Technology

    2004-05-01

    3.3 Technical Requirements: Standards and Tools for Improved Operating Room Process Integration... Image processing and visualization tools must be made available to the operating room. Communications issues must be addressed and aim toward... protocols for effectively performing advanced surgeries and using telecommunications-ready tools as needed. The following recommendations were made

  8. Counting pollen grains using readily available, free image processing and analysis software.

    PubMed

    Costa, Clayton M; Yang, Suann

    2009-10-01

    Although many methods exist for quantifying the number of pollen grains in a sample, there are few standard methods that are user-friendly, inexpensive and reliable. The present contribution describes a new method of counting pollen using readily available, free image processing and analysis software. Pollen was collected from anthers of two species, Carduus acanthoides and C. nutans (Asteraceae), then illuminated on slides and digitally photographed through a stereomicroscope. Using ImageJ (NIH), these digital images were processed to remove noise and sharpen individual pollen grains, then analysed to obtain a reliable total count of the number of grains present in the image. A macro was developed to analyse multiple images together. To assess the accuracy and consistency of pollen counting by ImageJ analysis, counts were compared with those made by the human eye. Image analysis produced pollen counts in 60 s or less per image, considerably faster than counting with the human eye (5-68 min). In addition, counts produced with the ImageJ procedure were similar to those obtained by eye. Because count parameters are adjustable, this image analysis protocol may be used for many other plant species. Thus, the method provides a quick, inexpensive and reliable solution to counting pollen from digital images, not only reducing the chance of error but also substantially lowering labour requirements.
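
    The authors worked in ImageJ; the scikit-image/SciPy sketch below is only a hedged Python analogue of the same denoise-threshold-count idea, with synthetic grains and an arbitrary threshold value.

        import numpy as np
        from scipy import ndimage as ndi

        rng = np.random.default_rng(4)
        yy, xx = np.ogrid[:200, :200]
        img = np.zeros((200, 200))
        for _ in range(25):  # 25 synthetic pollen grains
            y, x = rng.integers(10, 190, 2)
            img = np.maximum(img, ((yy - y)**2 + (xx - x)**2 < 9).astype(float))
        img += rng.normal(0, 0.05, img.shape)

        smoothed = ndi.median_filter(img, size=3)  # remove noise
        binary = smoothed > 0.5                    # threshold
        labeled, count = ndi.label(binary)         # count connected grains
        print("pollen grains counted:", count)     # touching grains merge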

  9. Automated detection using natural language processing of radiologists recommendations for additional imaging of incidental findings.

    PubMed

    Dutta, Sayon; Long, William J; Brown, David F M; Reisner, Andrew T

    2013-08-01

    As use of radiology studies increases, there is a concurrent increase in incidental findings (eg, lung nodules) for which the radiologist issues recommendations for additional imaging for follow-up. Busy emergency physicians may be challenged to carefully communicate recommendations for additional imaging not relevant to the patient's primary evaluation. The emergence of electronic health records and natural language processing algorithms may help address this quality gap. We seek to describe recommendations for additional imaging from our institution and develop and validate an automated natural language processing algorithm to reliably identify recommendations for additional imaging. We developed a natural language processing algorithm to detect recommendations for additional imaging, using 3 iterative cycles of training and validation. The third cycle used 3,235 radiology reports (1,600 for algorithm training and 1,635 for validation) of discharged emergency department (ED) patients from which we determined the incidence of discharge-relevant recommendations for additional imaging and the frequency of appropriate discharge documentation. The test characteristics of the 3 natural language processing algorithm iterations were compared, using blinded chart review as the criterion standard. Discharge-relevant recommendations for additional imaging were found in 4.5% (95% confidence interval [CI] 3.5% to 5.5%) of ED radiology reports, but 51% (95% CI 43% to 59%) of discharge instructions failed to note those findings. The final natural language processing algorithm had 89% (95% CI 82% to 94%) sensitivity and 98% (95% CI 97% to 98%) specificity for detecting recommendations for additional imaging. For discharge-relevant recommendations for additional imaging, sensitivity improved to 97% (95% CI 89% to 100%). Recommendations for additional imaging are common, and failure to document relevant recommendations for additional imaging in ED discharge instructions occurs frequently. The natural language processing algorithm's performance improved with each iteration and offers a promising error-prevention tool. Copyright © 2013 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
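
    As a purely illustrative stand-in for the study's natural language processing algorithm (whose rules are not described in this abstract), the keyword pattern below shows the kind of detection such systems start from; the phrase lists are invented.

        import re

        RAI_PATTERN = re.compile(
            r"\b(recommend|suggest|advise)\w*\b.{0,80}?"
            r"\b(follow-?up|repeat|dedicated|further)\b.{0,40}?"
            r"\b(CT|MRI?|ultrasound|imaging)\b",
            re.IGNORECASE | re.DOTALL)

        report = ("Incidental 6 mm left lower lobe nodule. "
                  "Recommend follow-up chest CT in 6-12 months.")
        print(bool(RAI_PATTERN.search(report)))  # True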

  10. Visual analysis of trash bin processing on garbage trucks in low resolution video

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Loibner, Gernot

    2015-03-01

    We present a system for trash can detection and counting from a camera mounted on a garbage collection truck. A working prototype has been successfully implemented and tested with several hours of real-world video. The detection pipeline consists of HOG detectors for two trash can sizes, together with mean-shift tracking and low-level image processing for the analysis of the garbage disposal process. Considering the harsh environment and unfavorable imaging conditions, the process already works well enough that very useful measurements can be extracted from the video data. The false positive/false negative rate of the full processing pipeline is about 5-6% in fully automatic operation. Video data of a full day (about 8 hrs) can be processed in about 30 minutes on a standard PC.

  11. SOI-CMOS Process for Monolithic, Radiation-Tolerant, Science-Grade Imagers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, George; Lee, Adam

    In Phase I, Voxtel worked with Jazz and Sandia to document and simulate the processes necessary to implement a DH-BSI SOI CMOS imaging process. The development is based upon mature SOI CMOS processes at both fabs, with the addition of only a few custom processing steps for the integration and electrical interconnection of the fully-depleted photodetectors. In Phase I, Voxtel also characterized the Sandia process, including the CMOS7 design rules, and developed the outline of a process option that includes a “BOX etch”, which will permit a “detector in handle” SOI CMOS process to be developed. The process flows were developed in cooperation with both Jazz and Sandia process engineers, along with detailed TCAD modeling and testing of the photodiode array architectures. In addition, Voxtel tested the radiation performance of Jazz's CA18HJ process, using standard and circular-enclosed transistors.

  12. Parallel ICA and its hardware implementation in hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Du, Hongtao; Qi, Hairong; Peterson, Gregory D.

    2004-04-01

    Advances in hyperspectral imaging have dramatically boosted remote sensing applications by providing abundant information using hundreds of contiguous spectral bands. However, the high volume of information also results in an excessive computation burden. Since most materials have specific characteristics only at certain bands, much of this information is redundant. This property of hyperspectral images has motivated many researchers to study various dimensionality reduction algorithms, including Projection Pursuit (PP), Principal Component Analysis (PCA), the wavelet transform, and Independent Component Analysis (ICA), where ICA is one of the most popular techniques. It searches for a linear or nonlinear transformation which minimizes the statistical dependence between spectral bands. Through this process, ICA can eliminate superfluous information while retaining practical information, given only the observations of hyperspectral images. One hurdle in applying ICA to hyperspectral image (HSI) analysis, however, is its long computation time, especially for high-volume hyperspectral data sets. Even the most efficient method, FastICA, is very time-consuming. In this paper, we present a parallel ICA (pICA) algorithm derived from FastICA. During the unmixing process, pICA divides the estimation of the weight matrix into sub-processes which can be conducted in parallel on multiple processors. The decorrelation process is decomposed into internal decorrelation and external decorrelation, which perform weight vector decorrelations within individual processors and between cooperating processors, respectively. In order to further improve the performance of pICA, we seek hardware solutions for the implementation of pICA. Until now, there have been very few hardware designs for ICA-related processes, due to the complicated and iterative computation. This paper discusses the capacity limitations of FPGA implementations of pICA in HSI analysis. An Application-Specific Integrated Circuit (ASIC) synthesis is designed for pICA-based dimensionality reduction in HSI analysis. The pICA design is implemented using standard-height cells and aimed at a TSMC 0.18 micron process. During the synthesis procedure, three ICA-related reconfigurable components are developed for reuse and retargeting purposes. Preliminary results show that the standard-height-cell-based ASIC synthesis provides an effective solution for pICA and ICA-related processes in HSI analysis.
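
    A sketch of the serial baseline that pICA parallelizes, using scikit-learn's FastICA on a synthetic mixed cube. The endmember spectra, pixel counts, and noise level are invented for illustration, and the weight-matrix partitioning and two-stage decorrelation of pICA are not shown.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(5)
        bands, h, w = 32, 40, 40
        endmembers = rng.random((3, bands))                 # 3 material spectra
        abundances = rng.dirichlet(np.ones(3), size=h * w)  # per-pixel mixing
        cube = abundances @ endmembers + rng.normal(0, 0.01, (h * w, bands))

        ica = FastICA(n_components=3, random_state=0)
        sources = ica.fit_transform(cube)                   # pixels x components
        print("reduced from", bands, "bands to", sources.shape[1], "components")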

  13. Blind multirigid retrospective motion correction of MR images.

    PubMed

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2015-04-01

    Physiological nonrigid motion is inevitable when imaging, e.g., abdominal viscera, and can lead to serious deterioration of the image quality. Prospective techniques for motion correction can handle only special types of nonrigid motion, as they only allow global correction. Retrospective methods developed so far need guidance from navigator sequences or external sensors. We propose a fully retrospective nonrigid motion correction scheme that needs only raw data as input. Our method is based on a forward model that describes the effects of nonrigid motion by partitioning the image into patches with locally rigid motion. Using this forward model, we construct an objective function that we can optimize with respect to both the unknown motion parameters per patch and the underlying sharp image. We evaluate our method on both synthetic and real data in 2D and 3D. In vivo data were acquired using standard imaging sequences. The correction algorithm significantly improves the image quality. Our compute unified device architecture (CUDA)-enabled graphics processing unit implementation ensures feasible computation times. The presented technique is the first computationally feasible retrospective method that uses the raw data of standard imaging sequences and allows correction for nonrigid motion without guidance from external motion sensors. © 2014 Wiley Periodicals, Inc.

  14. Cascaded deep decision networks for classification of endoscopic images

    NASA Astrophysics Data System (ADS)

    Murthy, Venkatesh N.; Singh, Vivek; Sun, Shanhui; Bhattacharya, Subhabrata; Chen, Terrence; Comaniciu, Dorin

    2017-02-01

    Both traditional and wireless capsule endoscopes can generate tens of thousands of images for each patient. It is desirable to have the majority of irrelevant images filtered out by automatic algorithms during an offline review process, or to have automatic indication of highly suspicious areas during online guidance. This also applies to the newly invented endomicroscopy, where online indication of tumor classification plays a significant role. Image classification is a standard pattern recognition problem and is well studied in the literature. However, performance on challenging endoscopic images still has room for improvement. In this paper, we present a novel Cascaded Deep Decision Network (CDDN) to improve image classification performance over standard deep-neural-network-based methods. During the learning phase, CDDN automatically builds a network which discards samples that are classified with high confidence scores by a previously trained network and concentrates only on the challenging samples, which are handled by subsequent expert shallow networks. We validate CDDN using two different types of endoscopic imaging: a polyp classification dataset and a tumor classification dataset. On both datasets we show that CDDN can outperform other methods by about 10%. In addition, CDDN can also be applied to other image classification problems.
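
    A toy of the cascade logic, with scikit-learn classifiers standing in for the deep and shallow expert networks: the first stage keeps predictions above a confidence threshold and defers the rest to an expert trained only on the hard samples. The threshold, models, and data are all illustrative.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression

        X, y = make_classification(n_samples=600, n_features=20, random_state=0)
        X_tr, y_tr, X_te, y_te = X[:400], y[:400], X[400:], y[400:]

        stage1 = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        confident = stage1.predict_proba(X_te).max(axis=1) >= 0.9  # illustrative cut

        # Train the expert only on samples the first stage finds hard.
        hard = stage1.predict_proba(X_tr).max(axis=1) < 0.9
        stage2 = RandomForestClassifier(random_state=0).fit(X_tr[hard], y_tr[hard])

        pred = np.where(confident, stage1.predict(X_te), stage2.predict(X_te))
        print("cascade accuracy:", (pred == y_te).mean())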

  15. Volume Segmentation and Ghost Particles

    NASA Astrophysics Data System (ADS)

    Ziskin, Isaac; Adrian, Ronald

    2011-11-01

    Volume Segmentation Tomographic PIV (VS-TPIV) is a type of tomographic PIV in which images of particles in a relatively thick volume are segmented into images on a set of much thinner volumes that may be approximated as planes, as in 2D planar PIV. The planes of images can be analysed by standard mono-PIV, and the volume of flow vectors can be recreated by assembling the planes of vectors. The interrogation process is similar to a Holographic PIV analysis, except that the planes of image data are extracted from two-dimensional camera images of the volume of particles instead of three-dimensional holographic images. Like the tomographic PIV method using the MART algorithm, Volume Segmentation requires at least two cameras and works best with three or four. Unlike the MART method, Volume Segmentation does not require reconstruction of individual particle images one pixel at a time and it does not require an iterative process, so it operates much faster. As in all tomographic reconstruction strategies, ambiguities known as ghost particles are produced in the segmentation process. The effect of these ghost particles on the PIV measurement is discussed. This research was supported by Contract 79419-001-09, Los Alamos National Laboratory.

  16. Precedence of the eye region in neural processing of faces

    PubMed Central

    Issa, Elias; DiCarlo, James

    2012-01-01

    Functional magnetic resonance imaging (fMRI) has revealed multiple subregions in monkey inferior temporal cortex (IT) that are selective for images of faces over other objects. The earliest of these subregions, the posterior lateral face patch (PL), has not been studied previously at the neurophysiological level. Perhaps not surprisingly, we found that PL contains a high concentration of 'face selective' cells when tested with standard image sets comparable to those used previously to define the region at the level of fMRI. However, we here report that several different image sets and analytical approaches converge to show that nearly all face selective PL cells are driven by the presence of a single eye in the context of a face outline. Most strikingly, images containing only an eye, even when incorrectly positioned in an outline, drove neurons nearly as well as full face images, and face images lacking only this feature led to longer latency responses. Thus, bottom-up face processing is relatively local and linearly integrates features, consistent with parts-based models, grounding investigation of how the presence of a face is first inferred in the IT face processing hierarchy. PMID:23175821

  17. Image re-sampling detection through a novel interpolation kernel.

    PubMed

    Hilal, Alaa

    2018-06-01

    Image re-sampling, involved in re-size and rotation transformations, is an essential building block in typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the interpolation kernels most frequently used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process involves minimizing an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
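
    The abstract names the roles of the kernel's parameters (amplitude, angular frequency, standard deviation, duration) without giving the functional form. The sketch below is one plausible Gaussian-windowed oscillatory kernel with those roles, with a phase term assumed for the fifth parameter; it is not the paper's exact kernel.

        import numpy as np

        def interp_kernel(t, amp, omega, sigma, duration, phase=0.0):
            t = np.asarray(t, dtype=float)
            support = np.abs(t) <= duration / 2  # finite duration
            return (amp * np.exp(-t**2 / (2 * sigma**2))
                    * np.cos(omega * t + phase) * support)

        t = np.linspace(-2, 2, 9)
        print(np.round(interp_kernel(t, amp=1.0, omega=np.pi,
                                     sigma=0.8, duration=3.0), 3))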

  18. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    PubMed

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imaging systems employ four different pixelated polarization filters, commonly referred to as division-of-focal-plane polarization sensors. The sensors capture only partial information about the true scene, leading to a loss of spatial resolution as well as inaccuracy in the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) time (vs. the naive O(N³)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which are most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division-of-focal-plane polarimeters.
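
    For orientation, a naive scikit-learn version of GP interpolation over a pixel grid: it scales as O(N³), whereas the paper's contribution is an exact O(N^(3/2)) solver exploiting the grid structure, which this sketch does not implement. The kernel choice and noise levels are illustrative.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(6)
        yy, xx = np.mgrid[0:16, 0:16]
        coords = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
        truth = np.sin(coords[:, 0] / 3.0) + np.cos(coords[:, 1] / 4.0)

        observed = rng.random(len(coords)) < 0.25  # ~1 of 4 polarization pixels
        noisy = truth[observed] + rng.normal(0, 0.05, observed.sum())
        gp = GaussianProcessRegressor(kernel=RBF(2.0) + WhiteKernel(1e-3))
        gp.fit(coords[observed], noisy)
        pred = gp.predict(coords)                  # fill in the missing pixels
        print("RMSE:", float(np.sqrt(np.mean((pred - truth)**2))))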

  19. Image processing for x-ray inspection of pistachio nuts

    NASA Astrophysics Data System (ADS)

    Casasent, David P.

    2001-03-01

    A review is provided of image processing techniques that have been applied to the inspection of pistachio nuts using X-ray images. X-ray sensors provide non-destructive internal product detail not available from other sensors. The primary concern with these data is detecting the presence of worm infestations in nuts, since these have been linked to the presence of aflatoxin. We describe new techniques for segmentation, feature selection, selection of product categories (clusters), classifier design, etc. Specific novel results include: a new segmentation algorithm that produces images of isolated product items; guidance on classifier operation (the classifier with the best probability of correct recognition Pc is not always best); evidence that higher-order discrimination information is present in standard features (thus, high-order features appear useful); and improved performance from classifiers that use new cluster categories of samples. Results are presented for X-ray images of pistachio nuts; however, all techniques are applicable to other product inspection applications.

  20. Architecture of distributed picture archiving and communication systems for storing and processing high resolution medical images

    NASA Astrophysics Data System (ADS)

    Tokareva, Victoria

    2018-04-01

    New-generation medicine demands better quality of analysis, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. It thus becomes urgent not only to develop advanced modern hardware, but also to implement the special software infrastructure needed to use it in everyday clinical practice: so-called Picture Archiving and Communication Systems (PACS). Developing distributed PACS is a challenging task for today's medical informatics. The paper discusses the architecture of a distributed PACS server for processing large, high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards for medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and adapted to end users' needs as possible.

  1. Cardio-PACs: a new opportunity

    NASA Astrophysics Data System (ADS)

    Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary

    2000-05-01

    It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.

  2. FISSA: A neuropil decontamination toolbox for calcium imaging signals.

    PubMed

    Keemink, Sander W; Lowe, Scott C; Pakan, Janelle M P; Dylda, Evelyn; van Rossum, Mark C W; Rochefort, Nathalie L

    2018-02-22

    In vivo calcium imaging has become a method of choice for imaging neuronal population activity throughout the nervous system. These experiments generate large sequences of images. Their analysis is computationally intensive and typically involves motion correction, image segmentation into regions of interest (ROIs), and extraction of fluorescence traces from each ROI. Out-of-focus fluorescence from surrounding neuropil and other cells can strongly contaminate the signal assigned to a given ROI. In this study, we introduce the FISSA toolbox (Fast Image Signal Separation Analysis) for neuropil decontamination. Given pre-defined ROIs, the FISSA toolbox automatically extracts the surrounding local neuropil and performs blind-source separation with non-negative matrix factorization. Using both simulated and in vivo data, we show that this toolbox performs similarly to or better than existing published methods. FISSA requires little RAM and allows fast processing of large datasets even on a standard laptop. The FISSA toolbox is available in Python, with an option for MATLAB-format outputs, and can easily be integrated into existing workflows. It is available from GitHub and the standard Python repositories.
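
    A minimal sketch of the underlying idea (not the FISSA implementation itself): given one ROI trace and several surrounding neuropil traces, non-negative matrix factorization separates the mixture into sources, and the source that dominates the ROI row is kept as the cell signal. The use of scikit-learn's NMF and the parameter values are assumptions.

        import numpy as np
        from sklearn.decomposition import NMF

        def decontaminate(roi_trace, neuropil_traces, n_sources=4):
            """Blind-source separation of an ROI trace from local neuropil (sketch)."""
            # rows = observed signals (ROI first), columns = time points
            mixed = np.vstack([roi_trace, neuropil_traces])
            mixed = mixed - mixed.min() + 1e-9            # NMF requires non-negative input
            model = NMF(n_components=n_sources, init="nndsvda", max_iter=500)
            weights = model.fit_transform(mixed)          # mixing weights, (n_signals, n_sources)
            sources = model.components_                   # separated traces, (n_sources, n_time)
            # the source with the largest weight in the ROI row is taken as the cell signal
            return sources[np.argmax(weights[0])]

        # example with synthetic traces
        t = np.linspace(0, 10, 1000)
        cell = np.exp(-((t - 4.0) ** 2))                  # an isolated calcium transient
        neuropil = 0.5 + 0.2 * np.sin(2 * np.pi * 0.3 * t)
        roi = cell + 0.6 * neuropil
        surround = np.vstack([neuropil + 0.05 * np.random.rand(t.size) for _ in range(4)])
        clean = decontaminate(roi, surround)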

  3. An anisotropic diffusion method for denoising dynamic susceptibility contrast-enhanced magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Murase, Kenya; Yamazaki, Youichi; Shinohara, Masaaki; Kawakami, Kazunori; Kikuchi, Keiichi; Miki, Hitoshi; Mochizuki, Teruhito; Ikezoe, Junpei

    2001-10-01

    The purpose of this study was to present an application of a novel denoising technique for improving the accuracy of cerebral blood flow (CBF) images generated from dynamic susceptibility contrast-enhanced magnetic resonance imaging (DSC-MRI). The method presented in this study was based on anisotropic diffusion (AD). The usefulness of this method was firstly investigated using computer simulations. We applied this method to patient data acquired using a 1.5 T MR system. After a bolus injection of Gd-DTPA, we obtained 40-50 dynamic images with a 1.32-2.08 s time resolution in 4-6 slices. The dynamic images were processed using the AD method, and then the CBF images were generated using pixel-by-pixel deconvolution analysis. For comparison, the CBF images were also generated with or without processing the dynamic images using a median or Gaussian filter. In simulation studies, the standard deviation of the CBF values obtained after processing by the AD method was smaller than that of the CBF values obtained without any processing, while the mean value agreed well with the true CBF value. Although the median and Gaussian filters also reduced image noise, the mean CBF values were considerably underestimated compared with the true values. Clinical studies also suggested that the AD method was capable of reducing the image noise while preserving the quantitative accuracy of CBF images. In conclusion, the AD method appears useful for denoising DSC-MRI, which will make the CBF images generated from DSC-MRI more reliable.
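
    The anisotropic diffusion filter used here is of the Perona-Malik type and can be sketched in a few lines of NumPy; the exponential conduction function and the parameter values below are generic assumptions, not the settings tuned for DSC-MRI in this study.

        import numpy as np

        def anisotropic_diffusion(img, n_iter=20, kappa=30.0, lam=0.2):
            """Edge-preserving Perona-Malik smoothing of a 2-D image (generic sketch)."""
            img = img.astype(float).copy()
            cond = lambda d: np.exp(-(d / kappa) ** 2)   # ~1 in flat regions, small across edges
            for _ in range(n_iter):
                # nearest-neighbour differences (periodic boundaries for simplicity)
                dn = np.roll(img, -1, axis=0) - img
                ds = np.roll(img, 1, axis=0) - img
                de = np.roll(img, -1, axis=1) - img
                dw = np.roll(img, 1, axis=1) - img
                img += lam * (cond(dn) * dn + cond(ds) * ds + cond(de) * de + cond(dw) * dw)
            return img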

  4. ISLE (Image and Signal Processing LISP Environment) reference manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sherwood, R.J.; Searfus, R.M.

    1990-01-01

    ISLE is a rapid prototyping system for performing image and signal processing. It is designed to meet the needs of a person doing development of image and signal processing algorithms in a research environment. The image and signal processing modules in ISLE form a very capable package in themselves. They also provide a rich environment for quickly and easily integrating user-written software modules into the package. ISLE is well suited to applications in which there is a need to develop a processing algorithm in an interactive manner. It is straightforward to develop an algorithm, load it into ISLE, apply the algorithm to an image or signal, display the results, then modify the algorithm and repeat the develop-load-apply-display cycle. ISLE consists of a collection of image and signal processing modules integrated into a cohesive package through a standard command interpreter. The ISLE developers elected to concentrate their effort on developing image and signal processing software rather than developing a command interpreter. A COMMON LISP interpreter was selected for the command interpreter because it already has the features desired in a command interpreter, it supports dynamic loading of modules for customization purposes, it supports run-time parameter and argument type checking, it is very well documented, and it is a commercially supported product. This manual is intended to be a reference manual for the ISLE functions. The functions are grouped into a number of categories and briefly discussed in the Function Summary chapter. The full descriptions of the functions and all their arguments are given in the Function Descriptions chapter. 6 refs.

  5. Color Image Restoration Using Nonlocal Mumford-Shah Regularizers

    NASA Astrophysics Data System (ADS)

    Jung, Miyoun; Bresson, Xavier; Chan, Tony F.; Vese, Luminita A.

    We introduce several color image restoration algorithms based on the Mumford-Shah model and nonlocal image information. The standard Ambrosio-Tortorelli and Shah models are defined to work in a small local neighborhood, which is sufficient to denoise smooth regions with sharp boundaries. However, textures are not local in nature and require semi-local/non-local information to be denoised efficiently. Inspired by recent work (the NL-means of Buades, Coll, and Morel and the NL-TV of Gilboa and Osher), we extend the standard Ambrosio-Tortorelli and Shah approximations of the Mumford-Shah functional to work with nonlocal information, for better restoration of fine structures and textures. We present several applications of the proposed nonlocal MS regularizers in image processing, such as color image denoising, color image deblurring in the presence of Gaussian or impulse noise, color image inpainting, and color image super-resolution. In the formulation of nonlocal variational models for image deblurring with impulse noise, we propose an efficient preprocessing step for the computation of the weight function w. In all the applications, the proposed nonlocal regularizers produce superior results over the local ones, especially in image inpainting with large missing regions. Experimental results and comparisons between the proposed nonlocal methods and the local ones are shown.

  6. Blind guidance system based on laser triangulation

    NASA Astrophysics Data System (ADS)

    Wu, Jih-Huah; Wang, Jinner-Der; Fang, Wei; Lee, Yun-Parn; Shan, Yi-Chia; Kao, Hai-Ko; Ma, Shih-Hsin; Jiang, Joe-Air

    2012-05-01

    We propose a new guidance system for the blind. An optical triangulation method is used in the system. The main components of the proposed system comprise a notebook computer, a camera, and two laser modules. The track image of the light beam on the ground or on the object is captured by the camera, and the image is then sent to the notebook computer for further processing and analysis. Using a developed signal-processing algorithm, our system can determine the object width and the distance between the object and the blind person through the calculation of the light line positions on the image. A series of feasibility tests of the developed blind guidance system were conducted. The experimental results show that the distance between the test object and the blind person can be measured with a standard deviation of less than 8.5% within the range of 40 to 130 cm, while the test object width can be measured with a standard deviation of less than 4.5% within the range of 40 to 130 cm. The designed system therefore shows good potential for application to blind guidance.
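
    The distance estimate in such a triangulation setup follows from the similar triangles formed by the camera, the laser axis, and the imaged spot; a minimal sketch under assumed calibration values (the baseline and focal length in pixels are illustrative, not those of the published system):

        def triangulation_distance(pixel_offset, baseline_m=0.10, focal_px=800.0):
            """Distance to a laser spot from its pixel offset relative to the optical axis.

            Assumes the laser beam is parallel to the optical axis at a known baseline;
            all numbers are placeholders rather than the system's calibration.
            """
            if pixel_offset <= 0:
                raise ValueError("the spot must be offset from the optical axis")
            return baseline_m * focal_px / pixel_offset  # similar-triangles relation

        # a spot imaged 40 px from the axis corresponds to roughly 2 m with these values
        print(round(triangulation_distance(40.0), 2))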

  7. Technical Note: Image filtering to make computer-aided detection robust to image reconstruction kernel choice in lung cancer CT screening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohkubo, Masaki, E-mail: mook@clg.niigata-u.ac.jp

    Purpose: In lung cancer computed tomography (CT) screening, the performance of a computer-aided detection (CAD) system depends on the selection of the image reconstruction kernel. To reduce this dependence on reconstruction kernels, the authors propose a novel application of an image filtering method previously proposed by their group. Methods: The proposed filtering process uses the ratio of modulation transfer functions (MTFs) of two reconstruction kernels as a filtering function in the spatial-frequency domain. This method is referred to as MTF-ratio filtering. Test image data were obtained from CT screening scans of 67 subjects who each had one nodule. Images were reconstructed using two kernels: f_STD (for standard lung imaging) and f_SHARP (for sharp edge-enhancement lung imaging). The MTF-ratio filtering was implemented using the MTFs measured for those kernels and was applied to the reconstructed f_SHARP images to obtain images that were similar to the f_STD images. A mean filter and a median filter were applied (separately) for comparison. All reconstructed and filtered images were processed using the authors' prototype CAD system. Results: The MTF-ratio filtered images showed excellent agreement with the f_STD images. The standard deviation of the difference between these images was very small, ∼6.0 Hounsfield units (HU). However, the mean and median filtered images showed larger differences of ∼48.1 and ∼57.9 HU from the f_STD images, respectively. The free-response receiver operating characteristic (FROC) curve for the f_SHARP images indicated poorer performance compared with the FROC curve for the f_STD images. The FROC curve for the MTF-ratio filtered images was equivalent to the curve for the f_STD images. However, this similarity was not achieved by using the mean filter or median filter. Conclusions: The accuracy of MTF-ratio image filtering was verified and the method was demonstrated to be effective for reducing the kernel dependence of CAD performance.
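
    The filtering step admits a compact frequency-domain sketch: multiply the spectrum of the sharp-kernel image by the ratio MTF_STD/MTF_SHARP so the result mimics the standard-kernel image. The Gaussian MTF models below are placeholders for the measured MTFs and are assumptions for illustration.

        import numpy as np

        def mtf_ratio_filter(img_sharp, mtf_std, mtf_sharp, eps=1e-6):
            """Convert a sharp-kernel CT image toward the standard kernel (sketch).

            mtf_std and mtf_sharp are 2-D arrays sampled on a centred frequency grid;
            in practice they come from measurements, not from the models used below.
            """
            H = mtf_std / np.maximum(mtf_sharp, eps)      # filtering function = MTF ratio
            spectrum = np.fft.fft2(img_sharp)
            filtered = np.fft.ifft2(spectrum * np.fft.ifftshift(H))
            return np.real(filtered)

        # illustrative Gaussian MTF models on a 256x256 grid
        n = 256
        freqs = np.fft.fftshift(np.fft.fftfreq(n))
        fy, fx = np.meshgrid(freqs, freqs, indexing="ij")
        f = np.hypot(fx, fy)
        mtf_std, mtf_sharp = np.exp(-(f / 0.15) ** 2), np.exp(-(f / 0.35) ** 2)
        out = mtf_ratio_filter(np.random.rand(n, n), mtf_std, mtf_sharp)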

  8. Noise reduction in spectral CT: reducing dose and breaking the trade-off between image noise and energy bin selection.

    PubMed

    Leng, Shuai; Yu, Lifeng; Wang, Jia; Fletcher, Joel G; Mistretta, Charles A; McCollough, Cynthia H

    2011-09-01

    Our purpose was to reduce image noise in spectral CT by exploiting data redundancies in the energy domain to allow flexible selection of the number, width, and location of the energy bins. Using a variety of spectral CT imaging methods, conventional filtered backprojection (FBP) reconstructions were performed and resulting images were compared to those processed using a Local HighlY constrained backPRojection Reconstruction (HYPR-LR) algorithm. The mean and standard deviation of CT numbers were measured within regions of interest (ROIs), and results were compared between FBP and HYPR-LR. For these comparisons, the following spectral CT imaging methods were used: (i) numerical simulations based on a photon-counting, detector-based CT system, (ii) a photon-counting, detector-based micro CT system using rubidium and potassium chloride solutions, (iii) a commercial CT system equipped with integrating detectors utilizing tube potentials of 80, 100, 120, and 140 kV, and (iv) a clinical dual-energy CT examination. The effects of tube energy and energy bin width were evaluated as appropriate to each CT system. The mean CT number in each ROI was unchanged between FBP and HYPR-LR images for each of the spectral CT imaging scenarios, irrespective of bin width or tube potential. However, image noise, as represented by the standard deviation of CT numbers in each ROI, was reduced by 36%-76%. In all scenarios, image noise after the HYPR-LR algorithm was similar to that of composite images, which used all available photons. No difference in spatial resolution was observed between HYPR-LR processing and FBP. Dual energy patient data processed using HYPR-LR demonstrated reduced noise in the individual, low- and high-energy images, as well as in the material-specific basis images. Noise reduction can be accomplished for spectral CT by exploiting data redundancies in the energy domain. HYPR-LR is a robust method for reducing image noise in a variety of spectral CT imaging systems without losing spatial resolution or CT number accuracy. This method improves the flexibility to select energy bins in the manner that optimizes material identification and separation without paying the penalty of increased image noise or its corollary, increased patient dose.
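
    The core of HYPR-LR can be sketched as weighting a low-noise composite image by the ratio of spatially smoothed bin and composite images; the uniform smoothing kernel and its size below are arbitrary assumptions rather than the parameters used in the study.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def hypr_lr(bin_image, composite, kernel=9, eps=1e-6):
            """Local HYPR reconstruction of one energy-bin image (sketch).

            composite: image formed from all detected photons (low noise).
            bin_image: noisy single-energy-bin image.
            The ratio of smoothed images carries the bin's spectral contrast while
            the composite supplies the low-noise structural detail.
            """
            ratio = uniform_filter(bin_image, kernel) / (uniform_filter(composite, kernel) + eps)
            return composite * ratio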

  9. Intra-operative adjustment of standard planes in C-arm CT image data.

    PubMed

    Brehler, Michael; Görres, Joseph; Franke, Jochen; Barth, Karl; Vetter, Sven Y; Grützner, Paul A; Meinzer, Hans-Peter; Wolf, Ivo; Nabers, Diana

    2016-03-01

    With the help of an intra-operative mobile C-arm CT, medical interventions can be verified and corrected, avoiding the need for a post-operative CT and a second intervention. An exact adjustment of standard plane positions is necessary for the best possible assessment of the anatomical regions of interest, but the mobility of the C-arm necessitates a time-consuming manual adjustment. In this article, we present an automatic plane adjustment using the example of calcaneal fractures. We developed two feature detection methods (2D and pseudo-3D) based on SURF key points and also transferred the SURF approach to 3D. Combined with an atlas-based registration, our algorithm adjusts the standard planes of the calcaneal C-arm images automatically. The robustness of the algorithms is evaluated using a clinical data set. Additionally, we tested the algorithm's performance for two registration approaches, two resolutions of C-arm images and two methods for metal artifact reduction. For the feature extraction, the novel 3D-SURF approach performs best. As expected, a higher resolution ([Formula: see text] voxel) also leads to more robust feature points and is therefore slightly better than the [Formula: see text] voxel images (the device's standard setting). Our comparison of two different artifact reduction methods and the complete removal of metal in the images shows that our approach is highly robust against artifacts and the number and position of metal implants. By introducing our fast algorithmic processing pipeline, we developed the first steps for a fully automatic assistance system for the assessment of C-arm CT images.

  10. Vision based tunnel inspection using non-rigid registration

    NASA Astrophysics Data System (ADS)

    Badshah, Amir; Ullah, Shan; Shahzad, Danish

    2015-04-01

    The growing number of long tunnels across the globe has increased the need for safety measurements and inspections of tunnels. To avoid serious damage, tunnel inspection is highly recommended at regular time intervals so that any deformations or cracks are found at the right time. While they follow the stringent safety and tunnel accessibility standards, conventional geodetic surveying using civil engineering techniques and other manual and mechanical methods is time consuming and disrupts routine operation. We propose automatic tunnel inspection with image processing techniques using non-rigid registration. Many other image processing methods are used for image registration. Most of them operate on images in the spatial domain, for example finding edges and corners with the Harris detector. These methods can be quite time consuming and may fail for blurred or noisy images; because they use image features directly, they are known as feature-based correlation. The alternative is featureless correlation, in which the images are converted to the frequency domain and then correlated with each other. A shift in the spatial domain corresponds to a shift in the frequency domain, but the processing is considerably faster than in the spatial domain. In the proposed method, modified normalized phase correlation is used to find any shift between two images. As pre-processing, the tunnel images, i.e. reference and template, are divided into small patches. All corresponding patches are registered by the proposed modified normalized phase correlation. Applying the proposed algorithm yields the pixel displacement between the images, and these pixel shifts are then converted to measuring units such as mm or cm. After the complete process, any shift of the tunnel at the inspected points is located.
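
    A minimal sketch of normalized phase correlation for estimating the shift of one patch between the reference and template images (the patch subdivision, the authors' modifications, and the conversion to millimetres are not reproduced here):

        import numpy as np

        def phase_correlation_shift(ref_patch, tpl_patch, eps=1e-9):
            """Estimate the integer (dy, dx) translation between two same-size patches."""
            F1 = np.fft.fft2(ref_patch)
            F2 = np.fft.fft2(tpl_patch)
            cross_power = F1 * np.conj(F2)
            cross_power /= np.abs(cross_power) + eps      # keep phase only (normalization)
            corr = np.real(np.fft.ifft2(cross_power))
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # wrap peak coordinates into signed shifts
            return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))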

  11. Extracting relevant information for cancer diagnosis from dynamic full field OCT through image processing and learning

    NASA Astrophysics Data System (ADS)

    Apelian, Clément; Gastaud, Clément; Boccara, A. Claude

    2017-02-01

    For a large number of cancer surgeries, the lack of reliable intraoperative diagnosis leads to reoperations or bad outcomes for the patients. To deliver better diagnosis, we developed Dynamic Full Field OCT (D-FFOCT) as a complement to FFOCT. FFOCT already presents interesting results for cancer diagnosis, e.g. in Mohs surgery, and reaches 96% accuracy on prostate cancer. D-FFOCT accesses the dynamic processes of metabolism and gives new tools to diagnose the state of a tissue at the cellular level, complementing FFOCT contrast. We developed a processing framework that intends to maximize the information provided by the FFOCT technology as well as D-FFOCT and to synthesize it as a meaningful image. We use different temporal processing steps to generate metrics (standard deviation of the time signals, decorrelation times, and more) and spatial processing to sort out structures and determine which imaging modality is the most appropriate for each. Sorting was achieved through quadratic discriminant analysis in an N-dimensional parametric space corresponding to our metrics. Combining the best imaging modalities for each structure leads to a rich morphology image. This morphology image is then colored to represent the dynamic behavior of these structures (slow or fast) so that it can be quickly analyzed by doctors. We thereby achieve a micron-resolved image that combines the FFOCT ability to image fixed, highly backscattering structures with the D-FFOCT ability to image weakly scattering cellular-level details. We believe that this morphological contrast, close to histology, together with the dynamic behavior contrast will push the limits of intraoperative diagnosis further.
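
    Two of the temporal metrics mentioned above, the per-pixel standard deviation of the time signal and a simple decorrelation time, can be sketched directly from an image stack; this is an illustrative reading of the metrics, not the authors' implementation, and the threshold value is an assumption.

        import numpy as np

        def dynamic_metrics(stack, dt=1.0, threshold=0.5):
            """Per-pixel temporal metrics from a (time, y, x) image stack (sketch)."""
            stack = stack.astype(float)
            std_map = stack.std(axis=0)                       # amplitude of temporal activity
            centered = stack - stack.mean(axis=0)
            norm = (centered ** 2).sum(axis=0) + 1e-12
            n_t = stack.shape[0]
            max_time = (n_t - 1) * dt
            decorr = np.full(std_map.shape, max_time)
            for lag in range(1, n_t):
                # normalized autocorrelation at this lag for every pixel
                ac = (centered[:-lag] * centered[lag:]).sum(axis=0) / norm
                first_crossing = (ac < threshold) & (decorr == max_time)
                decorr[first_crossing] = lag * dt             # first lag below threshold
            return std_map, decorr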

  12. Investigation of radio astronomy image processing techniques for use in the passive millimetre-wave security screening environment

    NASA Astrophysics Data System (ADS)

    Taylor, Christopher T.; Hutchinson, Simon; Salmon, Neil A.; Wilkinson, Peter N.; Cameron, Colin D.

    2014-06-01

    Image processing techniques can be used to improve the cost-effectiveness of future interferometric Passive MilliMetre Wave (PMMW) imagers. The implementation of such techniques will allow for a reduction in the number of collecting elements whilst ensuring adequate image fidelity is maintained. Various techniques have been developed by the radio astronomy community to enhance the imaging capability of sparse interferometric arrays. The most prominent are Multi-Frequency Synthesis (MFS) and non-linear deconvolution algorithms, such as the Maximum Entropy Method (MEM) and variations of the CLEAN algorithm. This investigation focuses on the implementation of these methods in the de facto standard for radio astronomy image processing, the Common Astronomy Software Applications (CASA) package, building upon the discussion presented in Taylor et al., SPIE 8362-0F. We describe the image conversion process into a CASA-suitable format, followed by a series of simulations that exploit the highlighted deconvolution and MFS algorithms assuming far-field imagery. The primary target application used for this investigation is an outdoor security scanner for soft-sided Heavy Goods Vehicles. A quantitative analysis of the effectiveness of the aforementioned image processing techniques is presented, with thoughts on the potential cost savings such an approach could yield. Consideration is also given to how the implementation of these techniques in CASA might be adapted to operate in a near-field target environment. This may enable much wider usability by the imaging community outside of radio astronomy and thus would be directly relevant to portal screening security systems in the microwave and millimetre wave bands.

  13. Wavelet Filter Banks for Super-Resolution SAR Imaging

    NASA Technical Reports Server (NTRS)

    Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess

    2011-01-01

    This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's earth science fields such as deformation, ecosystem structure, dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Because these methods are non-parametric, have limited resolution, and depend on observation time, the use of spectral estimation together with wavelet-based signal pre- and post-processing techniques to process SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.

  14. A post-processing system for automated rectification and registration of spaceborne SAR imagery

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Kwok, Ronald; Pang, Shirley S.

    1987-01-01

    An automated post-processing system has been developed that interfaces with the raw image output of the operational digital SAR correlator. This system is designed for optimal efficiency by using advanced signal processing hardware and an algorithm that requires no operator interaction, such as the determination of ground control points. The standard output is a geocoded image product (i.e. resampled to a specified map projection). The system is capable of producing multiframe mosaics for large-scale mapping by combining images in both the along-track direction and adjacent cross-track swaths from ascending and descending passes over the same target area. The output products have absolute location uncertainty of less than 50 m and relative distortion (scale factor and skew) of less than 0.1 per cent relative to local variations from the assumed geoid.

  15. Automatic x-ray image contrast enhancement based on parameter auto-optimization.

    PubMed

    Qiu, Jianfeng; Harold Li, H; Zhang, Tiezhi; Ma, Fangfang; Yang, Deshan

    2017-11-01

    Insufficient image contrast associated with radiation therapy daily setup x-ray images could negatively affect accurate patient treatment setup. We developed a method to perform automatic and user-independent contrast enhancement on 2D kilovoltage (kV) and megavoltage (MV) x-ray images. The goal was to provide tissue contrast optimized for each treatment site in order to support accurate patient daily treatment setup and the subsequent offline review. The proposed method processes the 2D x-ray images with an optimized image processing filter chain, which consists of a noise reduction filter and a high-pass filter followed by a contrast limited adaptive histogram equalization (CLAHE) filter. The most important innovation is to optimize the image processing parameters automatically to determine the required image contrast settings per disease site and imaging modality. Three major parameters controlling the image processing chain, i.e., the Gaussian smoothing weighting factor for the high-pass filter, the block size, and the clip limiting parameter for the CLAHE filter, were determined automatically using an interior-point constrained optimization algorithm. Fifty-two kV and MV x-ray images were included in this study. The results were manually evaluated and ranked with scores from 1 (worst, unacceptable) to 5 (significantly better than adequate and visually praiseworthy) by physicians and physicists. The average scores for the images processed by the proposed method, the CLAHE, and the best window-level adjustment were 3.92, 2.83, and 2.27, respectively. The percentages of processed images that received a score of 5 were 48%, 29%, and 18%, respectively. The proposed method is able to outperform the standard image contrast adjustment procedures that are currently used in the commercial clinical systems. When the proposed method is implemented in the clinical systems as an automatic image processing filter, it could be useful for allowing quicker and potentially more accurate treatment setup and facilitating the subsequent offline review and verification. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
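
    A hedged sketch of the described chain (noise reduction, high-pass filtering, then CLAHE) using scikit-image; the objective function and the interior-point optimization of the three parameters are not reproduced, so the parameter defaults below are placeholders.

        import numpy as np
        from scipy.ndimage import gaussian_filter, median_filter
        from skimage import exposure

        def enhance_setup_image(img, hp_weight=0.7, block_size=64, clip_limit=0.01):
            """Noise reduction -> high-pass -> CLAHE for a 2-D x-ray image (sketch).

            hp_weight, block_size and clip_limit correspond to the three parameters
            the paper optimizes automatically; the defaults here are arbitrary.
            """
            img = img.astype(float)
            img = (img - img.min()) / (np.ptp(img) + 1e-9)    # scale to [0, 1]
            denoised = median_filter(img, size=3)             # simple noise reduction
            low_pass = gaussian_filter(denoised, sigma=8)
            high_pass = np.clip(denoised - hp_weight * low_pass, 0, 1)   # unsharp-style high-pass
            return exposure.equalize_adapthist(high_pass,
                                               kernel_size=block_size,
                                               clip_limit=clip_limit)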

  16. WHOLE BODY NONRIGID CT-PET REGISTRATION USING WEIGHTED DEMONS.

    PubMed

    Suh, J W; Kwon, Oh-K; Scheinost, D; Sinusas, A J; Cline, Gary W; Papademetris, X

    2011-03-30

    We present a new registration method for whole-body rat computed tomography (CT) and positron emission tomography (PET) images using a weighted demons algorithm. The CT and PET images are acquired in separate scanners at different times, and the inherent differences in the imaging protocols produced significant nonrigid changes between the two acquisitions in addition to heterogeneous image characteristics. In this situation, we utilized both the transmission-PET and the emission-PET images in the deformable registration process, emphasizing particular regions of the moving transmission-PET image using the emission-PET image. We validated our results with nine rat image sets using the M-Hausdorff distance similarity measure. We demonstrate improved performance compared to standard methods such as Demons and normalized mutual information-based non-rigid FFD registration.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, T; Dong, X; Petrongolo, M

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated as a least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied to the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of the decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of the decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
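
    The direct image-domain decomposition that the iterative method improves upon can be sketched as a per-pixel matrix inversion; the covariance-weighted, smoothness-regularized iteration itself is not reproduced here, and the composition matrix below is an arbitrary assumption rather than a calibrated system matrix.

        import numpy as np

        def direct_decomposition(low_kv, high_kv, A=None):
            """Direct (noisy) image-domain DECT decomposition via matrix inversion (sketch).

            low_kv, high_kv: CT images at the two tube potentials, same shape.
            A: 2x2 matrix mapping the two basis-material images to the measured images.
            """
            if A is None:
                A = np.array([[1.00, 0.45],     # placeholder values, not calibration data
                              [0.65, 1.00]])
            A_inv = np.linalg.inv(A)
            measurements = np.stack([low_kv.ravel(), high_kv.ravel()])   # (2, n_pixels)
            basis = A_inv @ measurements                                  # noisy decomposed images
            return basis[0].reshape(low_kv.shape), basis[1].reshape(low_kv.shape)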

  18. Comparison of micro-computerized tomography and cone-beam computerized tomography in the detection of accessory canals in primary molars.

    PubMed

    Acar, Buket; Kamburoğlu, Kıvanç; Tatar, İlkan; Arıkan, Volkan; Çelik, Hakan Hamdi; Yüksel, Selcen; Özen, Tuncer

    2015-12-01

    This study was performed to compare the accuracy of micro-computed tomography (CT) and cone-beam computed tomography (CBCT) in detecting accessory canals in primary molars. Forty-one extracted human primary first and second molars were embedded in wax blocks and scanned using micro-CT and CBCT. After the images were taken, the samples were processed using a clearing technique and examined under a stereomicroscope in order to establish the gold standard for this study. The specimens were classified into three groups: maxillary molars, mandibular molars with three canals, and mandibular molars with four canals. Differences between the gold standard and the observations made using the imaging methods were calculated using Spearman's rho correlation coefficient test. The presence of accessory canals in micro-CT images of maxillary and mandibular root canals showed a statistically significant correlation with the stereomicroscopic images used as a gold standard. No statistically significant correlation was found between the CBCT findings and the stereomicroscopic images. Although micro-CT is not suitable for clinical use, it provides more detailed information about minor anatomical structures. However, CBCT is convenient for clinical use but may not be capable of adequately analyzing the internal anatomy of primary teeth.

  19. High-frequency ultrasound measurements of the normal ciliary body and iris.

    PubMed

    Garcia, Julian P S; Spielberg, Leigh; Finger, Paul T

    2011-01-01

    To determine the normal ultrasonographic thickness of the iris and ciliary body. This prospective 35-MHz ultrasonographic study included 80 normal eyes of 40 healthy volunteers. The images were obtained at the 12-, 3-, 6-, and 9-o'clock radial meridians, measured at three locations along the radial length of the iris and at the thickest section of the ciliary body. Mixed model was used to estimate eye site-adjusted means and standard errors and to test the statistical difference of adjusted results. Parameters included mean thickness, standard deviation, and range. Mean thicknesses at the iris root, midway along the radial length of the iris, and at the juxtapupillary margin were 0.4 ± 0.1, 0.5 ± 0.1, and 0.6 ± 0.1 mm, respectively. Those of the ciliary body, ciliary processes, and ciliary body + ciliary processes were 0.7 ± 0.1, 0.6 ± 0.1, and 1.3 ± 0.2 mm, respectively. This study provides standard, normative thickness data for the iris and ciliary body in healthy adults using ultrasonographic imaging. Copyright 2011, SLACK Incorporated.

  20. LANDSAT US standard catalog, 1-30 September 1977. [LANDSAT imagery for September, 1977

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The U. S. Standard Catalog lists U. S. imagery acquired by LANDSAT 1 and 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover, and image quality are given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  1. LANDSAT Non-US standard catalog, 1-31 December 1975. [LANDSAT imagery for December 1975

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The Non-U.S. Standard Catalog lists Non-U.S. imagery acquired by LANDSAT 1 and 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover and image quality are given for each scene. The microfilm roll and frame on which the scene may be found is also given.

  2. Evidence and diagnostic reporting in the IHE context.

    PubMed

    Loef, Cor; Truyen, Roel

    2005-05-01

    Capturing clinical observations and findings during the diagnostic imaging process is increasingly becoming a critical step in diagnostic reporting. Standards developers, notably HL7 and DICOM, are making significant progress toward standards that enable exchanging clinical observations and findings among the various information systems of the healthcare enterprise. DICOM, like the HL7 Clinical Document Architecture (CDA), uses templates and constrained, coded vocabulary (SNOMED, LOINC, etc.). Such a representation facilitates automated software recognition of findings and observations, intrapatient comparison, correlation to norms, and outcomes research. The scope of DICOM Structured Reporting (SR) includes many findings that products routinely create in digital form (measurements, computed estimates, etc.). In the Integrating the Healthcare Enterprise (IHE) framework, two Integration Profiles are defined for clinical data capture and diagnostic reporting: Evidence Document, and Simple Image and Numeric Report. This report describes these two DICOM SR-based integration profiles in the diagnostic reporting process.

  3. Digital Images on the DIME

    NASA Technical Reports Server (NTRS)

    2003-01-01

    With NASA on its side, Positive Systems, Inc., of Whitefish, Montana, is veering away from the industry standards defined for producing and processing remotely sensed images. A top developer of imaging products for geographic information system (GIS) and computer-aided design (CAD) applications, Positive Systems is bucking traditional imaging concepts with a cost-effective and time-saving software tool called Digital Images Made Easy (DIME(trademark)). Like piecing a jigsaw puzzle together, DIME can integrate a series of raw aerial or satellite snapshots into a single, seamless panoramic image, known as a 'mosaic.' The 'mosaicked' images serve as useful backdrops to GIS maps - which typically consist of line drawings called 'vectors' - by allowing users to view a multidimensional map that provides substantially more geographic information.

  4. SEGY to ASCII Conversion and Plotting Program 2.0

    USGS Publications Warehouse

    Goldman, Mark R.

    2005-01-01

    INTRODUCTION. SEGY has long been a standard format for storing seismic data and header information. Almost every seismic processing package can read and write seismic data in SEGY format. In the data processing world, however, ASCII format is the 'universal' standard format. Very few general-purpose plotting or computation programs will accept data in SEGY format. The software presented in this report, referred to as SEGY to ASCII (SAC), converts seismic data written in SEGY format (Barry et al., 1975) to an ASCII data file, and then creates a postscript file of the seismic data using a general plotting package (GMT, Wessel and Smith, 1995). The resulting postscript file may be plotted by any standard postscript plotting program. There are two versions of SAC: one version for plotting a SEGY file that contains a single gather, such as a stacked CDP or migrated section, and a second version for plotting multiple gathers from a SEGY file containing more than one gather, such as a collection of shot gathers. Note that if a SEGY file has multiple gathers, then each gather must have the same number of traces per gather, and each trace must have the same sample interval and number of samples per trace. SAC will read several common standards of SEGY data, including SEGY files with sample values written in either IBM or IEEE floating-point format. In addition, utility programs are present to convert non-standard Seismic Unix (.sux) SEGY files and PASSCAL (.rsy) SEGY files to standard SEGY files. SAC allows complete user control over all plotting parameters including label size and font, tick mark intervals, trace scaling, and the inclusion of a title and descriptive text. SAC shell scripts create a postscript image of the seismic data in vector rather than bitmap format, using GMT's pswiggle command. Although this can produce a very large postscript file, the image quality is generally superior to that of a bitmap image, and commercial programs such as Adobe Illustrator can manipulate the image more efficiently.
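
    The basic conversion idea, reading SEGY traces and writing them out as ASCII columns, can be sketched in Python with the segyio library; this is an assumption-laden sketch for illustration, not the published SAC tool, which is a shell/GMT workflow.

        import numpy as np
        import segyio

        def segy_to_ascii(segy_path, ascii_path):
            """Write an ASCII table: first column sample time, then one column per trace (sketch)."""
            with segyio.open(segy_path, ignore_geometry=True) as f:
                traces = np.asarray([f.trace[i] for i in range(f.tracecount)])  # (n_traces, n_samples)
                times = np.asarray(f.samples)                                   # sample times
            table = np.column_stack([times, traces.T])
            np.savetxt(ascii_path, table, fmt="%.6g")

        # segy_to_ascii("line1.segy", "line1.txt")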

  5. Standards to support information systems integration in anatomic pathology.

    PubMed

    Daniel, Christel; García Rojo, Marcial; Bourquard, Karima; Henin, Dominique; Schrader, Thomas; Della Mea, Vincenzo; Gilbertson, John; Beckwith, Bruce A

    2009-11-01

    Integrating anatomic pathology information (text and images) into electronic health care records is a key challenge for enhancing clinical information exchange between anatomic pathologists and clinicians. The aim of the Integrating the Healthcare Enterprise (IHE) international initiative is precisely to ensure interoperability of clinical information systems by using existing widespread industry standards such as Digital Imaging and Communication in Medicine (DICOM) and Health Level Seven (HL7). To define standard-based informatics transactions to integrate anatomic pathology information into the Healthcare Enterprise. We used the methodology of the IHE initiative. Working groups from IHE, HL7, and DICOM, with special interest in anatomic pathology, defined consensual technical solutions to provide end-users with improved access to consistent information across multiple information systems. The IHE anatomic pathology technical framework describes a first integration profile, "Anatomic Pathology Workflow," dedicated to the diagnostic process including basic image acquisition and reporting solutions. This integration profile relies on 10 transactions based on HL7 or DICOM standards. A common specimen model was defined to consistently identify and describe specimens in both HL7 and DICOM transactions. The IHE anatomic pathology working group has defined standard-based informatics transactions to support the basic diagnostic workflow in anatomic pathology laboratories. In further stages, the technical framework will be completed to manage whole-slide images and semantically rich structured reports in the diagnostic workflow and to integrate systems used for patient care and those used for research activities (such as tissue bank databases or tissue microarrayers).

  6. DICOM: a standard for medical imaging

    NASA Astrophysics Data System (ADS)

    Horii, Steven C.; Bidgood, W. Dean

    1993-01-01

    Since 1983, the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) have been engaged in developing standards related to medical imaging. This alliance of users and manufacturers was formed to meet the needs of the medical imaging community as its use of digital imaging technology increased. The development of electronic picture archiving and communications systems (PACS), which could connect a number of medical imaging devices together in a network, led to the need for a standard interface and data structure for use on imaging equipment. Since medical image files tend to be very large and include much text information along with the image, the need for a fast, flexible, and extensible standard was quickly established. The ACR-NEMA Digital Imaging and Communications Standards Committee developed a standard which met these needs. The standard (ACR-NEMA 300-1988) was first published in 1985 and revised in 1988. It is increasingly available from equipment manufacturers. The current work of the ACR- NEMA Committee has been to extend the standard to incorporate direct network connection features, and build on standards work done by the International Standards Organization in its Open Systems Interconnection series. This new standard, called Digital Imaging and Communication in Medicine (DICOM), follows an object-oriented design methodology and makes use of as many existing internationally accepted standards as possible. This paper gives a brief overview of the requirements for communications standards in medical imaging, a history of the ACR-NEMA effort and what it has produced, and a description of the DICOM standard.

  7. Computer Vision and Machine Learning for Autonomous Characterization of AM Powder Feedstocks

    NASA Astrophysics Data System (ADS)

    DeCost, Brian L.; Jain, Harshvardhan; Rollett, Anthony D.; Holm, Elizabeth A.

    2017-03-01

    By applying computer vision and machine learning methods, we develop a system to characterize powder feedstock materials for metal additive manufacturing (AM). Feature detection and description algorithms are applied to create a microstructural scale image representation that can be used to cluster, compare, and analyze powder micrographs. When applied to eight commercial feedstock powders, the system classifies powder images into the correct material systems with greater than 95% accuracy. The system also identifies both representative and atypical powder images. These results suggest the possibility of measuring variations in powders as a function of processing history, relating microstructural features of powders to properties relevant to their performance in AM processes, and defining objective material standards based on visual images. A significant advantage of the computer vision approach is that it is autonomous, objective, and repeatable.

  8. SedCT: MATLAB™ tools for standardized and quantitative processing of sediment core computed tomography (CT) data collected using a medical CT scanner

    NASA Astrophysics Data System (ADS)

    Reilly, B. T.; Stoner, J. S.; Wiest, J.

    2017-08-01

    Computed tomography (CT) of sediment cores allows for high-resolution images, three-dimensional volumes, and down core profiles. These quantitative data are generated through the attenuation of X-rays, which are sensitive to sediment density and atomic number, and are stored in pixels as relative gray scale values or Hounsfield units (HU). We present a suite of MATLAB™ tools specifically designed for routine sediment core analysis as a means to standardize and better quantify the products of CT data collected on medical CT scanners. SedCT uses a graphical interface to process Digital Imaging and Communications in Medicine (DICOM) files, stitch overlapping scanned intervals, and create down core HU profiles in a manner robust to normal coring imperfections. Utilizing a random sampling technique, SedCT reduces data size and allows for quick processing on typical laptop computers. SedCTimage uses a graphical interface to create quality tiff files of CT slices that are scaled to a user-defined HU range, preserving the quantitative nature of CT images and easily allowing for comparison between sediment cores with different HU means and variance. These tools are presented along with examples from lacustrine and marine sediment cores to highlight the robustness and quantitative nature of this method.
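
    Converting medical-CT DICOM slices to Hounsfield units and extracting a down-core mean profile, the kind of step SedCT standardizes, can be sketched with pydicom; this is a Python sketch for illustration only (SedCT itself is MATLAB code), and the strip width is an assumption.

        import numpy as np
        import pydicom

        def hu_profile(dicom_paths, half_width=20):
            """Mean HU within a central strip of each slice, ordered down core (sketch)."""
            profile = []
            for path in sorted(dicom_paths):
                ds = pydicom.dcmread(path)
                # rescale stored pixel values to Hounsfield units
                hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
                cy, cx = np.array(hu.shape) // 2
                strip = hu[cy - half_width:cy + half_width, cx - half_width:cx + half_width]
                profile.append(strip.mean())
            return np.array(profile)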

  9. Coronary calcium visualization using dual energy chest radiography with sliding organ registration

    NASA Astrophysics Data System (ADS)

    Wen, Di; Nye, Katelyn; Zhou, Bo; Gilkeson, Robert C.; Wilson, David L.

    2016-03-01

    Coronary artery calcification (CAC) is the lead biomarker for atherosclerotic heart disease. We are developing a new technique to image CAC using ubiquitously ordered, low-cost, low-radiation dual energy (DE) chest radiography (using the two-shot GE Revolution XRd system). In this paper, we propose a novel image processing method (CorCalDx) based on sliding organ registration to create a bone-image-like, coronary calcium image (CCI) that significantly reduces motion artifacts and improves CAC conspicuity. Experiments on images of a physical dynamic cardiac phantom showed that CorCalDx reduced 73% of the motion artifact area as compared to standard DE over a range of heart rates up to 90 bpm and varying x-ray radiation exposures. Residual motion artifact in the phantom CCI is greatly suppressed in gray level and area (0.88% of the heart area). In a Functional Measurement Test (FMT) with 20 clinical exams, three radiologists rated the image quality improvement of CorCalDx over standard DE on a scale from -10 to +10; the improvement was significant (p<0.0001) for cardiac motion artifacts (7.2+/-2.1) and cardiac anatomy visibility (6.1+/-3.5). CorCalDx was chosen as the best method for every image tested. In preliminary assessments of 12 patients with 18 calcifications, 90% of motion artifact regions in standard DE results were removed in CorCalDx results, with 100% sensitivity of calcification detection, showing great potential of CorCalDx to improve CAC detection and grading in DE chest radiography.

  10. Exemplary design of a DICOM structured report template for CBIR integration into radiological routine

    NASA Astrophysics Data System (ADS)

    Welter, Petra; Deserno, Thomas M.; Gülpers, Ralph; Wein, Berthold B.; Grouls, Christoph; Günther, Rolf W.

    2010-03-01

    The large and continuously growing amount of medical image data demands access methods based on content rather than simple text-based queries. The potential benefits of content-based image retrieval (CBIR) systems for computer-aided diagnosis (CAD) are evident and have been demonstrated. Still, CBIR is not a well-established part of the daily routine of radiologists. We have already presented a concept of CBIR integration for the radiology workflow in accordance with the Integrating the Healthcare Enterprise (IHE) framework. The retrieval result is composed as a Digital Imaging and Communication in Medicine (DICOM) Structured Reporting (SR) document. The use of DICOM SR provides interchange with the PACS archive and image viewer. It offers the possibility of further data mining and automatic interpretation of CBIR results. However, existing standard templates do not address the domain of CBIR. We present a design of an SR template customized for CBIR. Our approach is based on the DICOM standard templates and makes use of the mammography and chest CAD SR templates. Reuse of approved SR sub-trees promises a reliable design, which is then adapted to the CBIR domain. We analyze the special CBIR requirements and integrate the new concept of similar images into our template. Our approach also includes the new concept of a set of selected images for defining the processed images for CBIR. A commonly accepted pre-defined template for the presentation and exchange of results in a standardized format promotes the widespread application of CBIR in radiological routine.

  11. Imaging and quantification of endothelial cell loss in eye bank prepared DMEK grafts using trainable segmentation software.

    PubMed

    Jardine, Griffin J; Holiman, Jeffrey D; Stoeger, Christopher G; Chamberlain, Winston D

    2014-09-01

    To improve accuracy and efficiency in quantifying the endothelial cell loss (ECL) in eye bank preparation of corneal endothelial grafts. Eight cadaveric corneas were subjected to Descemet Membrane Endothelial Keratoplasty (DMEK) preparation. The endothelial surfaces were stained with a viability stain, calcein AM dye (CAM), and then captured by a digital camera. The ECL rates were quantified in these images by three separate readers using trainable segmentation, a plug-in feature from the imaging software Fiji. Images were also analyzed by Adobe Photoshop for comparison. Mean times required to process the images were measured between the two modalities. The mean ECL (with standard deviation) as analyzed by Fiji was 22.5% (6.5%) and by Adobe was 18.7% (7.0%; p = 0.04). The mean time required to process the images through the two different imaging methods was 19.9 min (7.5) for Fiji and 23.4 min (12.9) for Adobe (p = 0.17). Establishing an accurate, efficient and reproducible means of quantifying ECL in graft preparation and surgical techniques can provide insight into the safety and long-term potential of the graft tissues as well as provide a quality control measure for eye banks and surgeons. Trainable segmentation in Fiji software using CAM is a novel approach to measuring ECL that captured a statistically significantly higher percentage of ECL compared with Adobe and was more accurate in standardized testing. Interestingly, ECL as determined using both methods in eye bank-prepared DMEK grafts exceeded 18% on average.

  12. An in situ probe for on-line monitoring of cell density and viability on the basis of dark field microscopy in conjunction with image processing and supervised machine learning.

    PubMed

    Wei, Ning; You, Jia; Friehs, Karl; Flaschel, Erwin; Nattkemper, Tim Wilhelm

    2007-08-15

    Fermentation industries would benefit from on-line monitoring of important parameters describing cell growth such as cell density and viability during fermentation processes. For this purpose, an in situ probe has been developed, which utilizes a dark field illumination unit to obtain high contrast images with an integrated CCD camera. To test the probe, brewer's yeast Saccharomyces cerevisiae is chosen as the target microorganism. Images of the yeast cells in the bioreactors are captured, processed, and analyzed automatically by means of mechatronics, image processing, and machine learning. Two support vector machine based classifiers are used for separating cells from background, and for distinguishing live from dead cells afterwards. The evaluation of the in situ experiments showed strong correlation between results obtained by the probe and those by widely accepted standard methods. Thus, the in situ probe has been proved to be a feasible device for on-line monitoring of both cell density and viability with high accuracy and stability. (c) 2007 Wiley Periodicals, Inc.
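
    The two-stage classification (cell versus background, then live versus dead) can be sketched with scikit-learn support vector machines acting on simple per-object features; the feature choices and labels below are illustrative assumptions, not the probe's actual feature set.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def train_two_stage(features, is_cell, is_alive):
            """Train the two cascaded SVM classifiers (sketch).

            features: (n_objects, n_features) array, e.g. area, mean intensity, contrast.
            is_cell:  1 if the segmented object is a cell, 0 if background.
            is_alive: 1 if a cell is viable, 0 otherwise (used only where is_cell == 1).
            """
            cell_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(features, is_cell)
            cells = features[is_cell == 1]
            live_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(cells, is_alive[is_cell == 1])
            return cell_clf, live_clf

        def classify(cell_clf, live_clf, features):
            """Apply the cascade: non-cells get label -1, cells get a 0/1 viability label."""
            labels = np.full(len(features), -1)
            cell_mask = cell_clf.predict(features) == 1
            labels[cell_mask] = live_clf.predict(features[cell_mask])
            return labels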

  13. Fixed-pattern noise correction method based on improved moment matching for a TDI CMOS image sensor.

    PubMed

    Xu, Jiangtao; Nie, Huafeng; Nie, Kaiming; Jin, Weimin

    2017-09-01

    In this paper, an improved moment matching method based on a spatial correlation filter (SCF) and bilateral filter (BF) is proposed to correct the fixed-pattern noise (FPN) of a time-delay-integration CMOS image sensor (TDI-CIS). First, the values of row FPN (RFPN) and column FPN (CFPN) are estimated and added to the original image through SCF and BF, respectively. Then the filtered image is processed by an improved moment matching method with a moving window. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in the image captured under uniform illumination, the standard deviation of the row mean vector (SDRMV) decreases from 5.6761 LSB to 0.1948 LSB, while the standard deviation of the column mean vector (SDCMV) decreases from 15.2005 LSB to 13.1949 LSB. In addition, for different images captured by different TDI-CISs, the average decreases in SDRMV and SDCMV are 5.4922 LSB and 2.0357 LSB, respectively. Comparative experimental results indicate that the proposed method can effectively correct the FPNs of different TDI-CISs while maintaining image details without any auxiliary equipment.
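
    Basic (unimproved) moment matching for additive row and column FPN, equalizing each row and column mean to the global mean, is sketched below; the paper's contribution adds the SCF/BF pre-filtering and a moving window, which are not reproduced here.

        import numpy as np

        def moment_match_fpn(img):
            """Remove additive row/column fixed-pattern noise by mean matching (sketch)."""
            img = img.astype(float)
            global_mean = img.mean()
            row_offsets = img.mean(axis=1, keepdims=True) - global_mean
            img = img - row_offsets                      # flatten the row means
            col_offsets = img.mean(axis=0, keepdims=True) - global_mean
            return img - col_offsets                     # flatten the column means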

  14. sTools - a data reduction pipeline for the GREGOR Fabry-Pérot Interferometer and the High-resolution Fast Imager at the GREGOR solar telescope

    NASA Astrophysics Data System (ADS)

    Kuckein, C.; Denker, C.; Verma, M.; Balthasar, H.; González Manrique, S. J.; Louis, R. E.; Diercke, A.

    2017-10-01

    A huge amount of data has been acquired with the GREGOR Fabry-Pérot Interferometer (GFPI), large-format facility cameras, and, since 2016, with the High-resolution Fast Imager (HiFI). These data are processed in standardized procedures with the aim of providing science-ready data for the solar physics community. For this purpose, we have developed a user-friendly data reduction pipeline called "sTools" based on the Interactive Data Language (IDL) and licensed under a Creative Commons license. The pipeline delivers reduced and image-reconstructed data with a minimum of user interaction. Furthermore, quick-look data are generated as well as a webpage with an overview of the observations and their statistics. All the processed data are stored online at the GREGOR GFPI and HiFI data archive of the Leibniz Institute for Astrophysics Potsdam (AIP). The principles of the pipeline are presented together with selected high-resolution spectral scans and images processed with sTools.

  15. The Landsat Data Continuity Mission Operational Land Imager (OLI) Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Markham, Brian L.; Dabney, Philip W.; Murphy-Morris, Jeanine E.; Knight, Edward J.; Kvaran, Geir; Barsi, Julia A.

    2010-01-01

    The Operational Land Imager (OLI) on the Landsat Data Continuity Mission (LDCM) has a comprehensive radiometric characterization and calibration program beginning with the instrument design and extending through integration and test, on-orbit operations, and science data processing. Key instrument design features for radiometric calibration include dual solar diffusers and multi-lamped on-board calibrators. The radiometric calibration transfer procedure from NIST standards has multiple checks on the radiometric scale throughout the process and uses a heliostat as part of the transfer to orbit of the radiometric calibration. On-orbit lunar imaging will be used to track the instrument's stability, and side-slither maneuvers will be used in addition to the solar diffuser to flat-field across the thousands of detectors per band. A Calibration Validation Team is continuously involved in the process from design to operations. This team uses an Image Assessment System (IAS), part of the ground system, to characterize and calibrate the on-orbit data.

  16. Informatics in radiology: Efficiency metrics for imaging device productivity.

    PubMed

    Hu, Mengqi; Pavlicek, William; Liu, Patrick T; Zhang, Muhong; Langer, Steve G; Wang, Shanshan; Place, Vicki; Miranda, Rafael; Wu, Teresa Tong

    2011-01-01

    Acute awareness of the costs associated with medical imaging equipment is an ever-present aspect of the current healthcare debate. However, the monitoring of productivity associated with expensive imaging devices is likely to be labor intensive, relies on summary statistics, and lacks accepted and standardized benchmarks of efficiency. In the context of the general Six Sigma DMAIC (define, measure, analyze, improve, and control) process, a World Wide Web-based productivity tool called the Imaging Exam Time Monitor was developed to accurately and remotely monitor imaging efficiency with use of Digital Imaging and Communications in Medicine (DICOM) combined with a picture archiving and communication system. Five device efficiency metrics (examination duration, table utilization, interpatient time, appointment interval time, and interseries time) were derived from DICOM values. These metrics allow the standardized measurement of productivity, to facilitate the comparative evaluation of imaging equipment use and ongoing efforts to improve efficiency. A relational database was constructed to store patient imaging data, along with device- and examination-related data. The database provides full access to ad hoc queries and can automatically generate detailed reports for administrative and business use, thereby allowing staff to monitor data for trends and to better identify possible changes that could lead to improved productivity and reduced costs in association with imaging services. © RSNA, 2011.

  17. DICOM relay over the cloud.

    PubMed

    Silva, Luís A Bastião; Costa, Carlos; Oliveira, José Luis

    2013-05-01

    Healthcare institutions worldwide have adopted picture archiving and communication system (PACS) for enterprise access to images, relying on Digital Imaging Communication in Medicine (DICOM) standards for data exchange. However, communication over a wider domain of independent medical institutions is not well standardized. A DICOM-compliant bridge was developed for extending and sharing DICOM services across healthcare institutions without requiring complex network setups or dedicated communication channels. A set of DICOM routers interconnected through a public cloud infrastructure was implemented to support medical image exchange among institutions. Despite the advantages of cloud computing, new challenges were encountered regarding data privacy, particularly when medical data are transmitted over different domains. To address this issue, a solution was introduced by creating a ciphered data channel between the entities sharing DICOM services. Two main DICOM services were implemented in the bridge: Storage and Query/Retrieve. The performance measures demonstrated it is quite simple to exchange information and processes between several institutions. The solution can be integrated with any currently installed PACS-DICOM infrastructure. This method works transparently with well-known cloud service providers. Cloud computing was introduced to augment enterprise PACS by providing standard medical imaging services across different institutions, offering communication privacy and enabling creation of wider PACS scenarios with suitable technical solutions.

  18. Method for simulating dose reduction in digital mammography using the Anscombe transformation.

    PubMed

    Borges, Lucas R; Oliveira, Helder C R de; Nunes, Polyana F; Bakic, Predrag R; Maidment, Andrew D A; Vieira, Marcelo A C

    2016-06-01

    This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray-level characteristics as an image acquired at the lower radiation dose. The performance of the proposed algorithm was validated using uniform images and real images of an anthropomorphic breast phantom acquired at four different doses, with five exposures per dose and 256 nonoverlapping ROIs extracted from each image. The authors simulated lower-dose images and compared them with the real images, evaluating the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated and real images acquired at the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images; the relative average error for the local variance was smaller than 1%. A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise metrics confirm that this method is capable of precisely simulating various dose reductions.
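
    The core idea, scaling the signal and then injecting the missing signal-dependent noise in the variance-stabilized (Anscombe) domain, can be sketched as below. This is a simplified illustration only: the gain and offset are assumed to be known scalar calibration values, and a simple variance budget stands in for the flat-field noise mask described in the abstract.

        import numpy as np

        def simulate_reduced_dose(image, gain, offset, dose_factor, rng=None):
            """Simulate a lower-dose mammogram from a standard-dose acquisition.

            image       : standard-dose image (linear detector response)
            gain        : detector gain relating x-ray quanta to pixel value
            offset      : detector offset (pixel value at zero exposure)
            dose_factor : simulated dose as a fraction of the standard dose, 0 < f <= 1
            """
            rng = rng or np.random.default_rng()

            # Scale the offset-corrected signal to the reduced dose level.
            scaled = (image - offset) * dose_factor + offset

            # Anscombe transformation: approximately stabilizes Poisson-like,
            # signal-dependent noise to unit variance.
            quanta = np.maximum((scaled - offset) / gain, 0.0)
            stabilized = 2.0 * np.sqrt(quanta + 3.0 / 8.0)

            # After scaling, the variance in the stabilized domain is roughly
            # dose_factor, so Gaussian noise of variance (1 - dose_factor)
            # brings it up to the unit variance of a genuine low-dose image.
            stabilized += rng.normal(0.0, np.sqrt(1.0 - dose_factor), size=image.shape)

            # Inverse Anscombe transformation back to the signal domain.
            quanta_noisy = (stabilized / 2.0) ** 2 - 3.0 / 8.0
            return quanta_noisy * gain + offset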

  19. Assessment of global longitudinal strain using standardized myocardial deformation imaging: a modality independent software approach.

    PubMed

    Riffel, Johannes H; Keller, Marius G P; Aurich, Matthias; Sander, Yannick; Andre, Florian; Giusca, Sorin; Aus dem Siepen, Fabian; Seitz, Sebastian; Galuschky, Christian; Korosoglou, Grigorios; Mereles, Derliz; Katus, Hugo A; Buss, Sebastian J

    2015-07-01

    Myocardial deformation measurement is superior to left ventricular ejection fraction in identifying early changes in myocardial contractility and predicting cardiovascular outcome, but the lack of standardization hinders its clinical implementation. The aim of this study is to investigate a novel standardized deformation imaging approach based on the feature tracking algorithm for the assessment of global longitudinal strain (GLS) and global circumferential strain (GCS) in echocardiography and cardiac magnetic resonance imaging (CMR). 70 subjects undergoing CMR were consecutively investigated with echocardiography within a median time of 30 min. GLS and GCS were analyzed with post-processing software incorporating the same standardized algorithm for both modalities. Global strain was defined as the relative shortening of the whole endocardial contour length and calculated according to the strain formula. Mean GLS values were -16.2 ± 5.3 and -17.3 ± 5.3 % for echocardiography and CMR, respectively. GLS did not differ significantly between the two imaging modalities, which showed strong correlation (r = 0.86), a small bias (-1.1 %) and narrow 95 % limits of agreement (LOA ± 5.4 %). Mean GCS values were -17.9 ± 6.3 and -24.4 ± 7.8 % for echocardiography and CMR, respectively. GCS was significantly underestimated by echocardiography (p < 0.001), with a weaker correlation (r = 0.73), a higher bias (-6.5 %) and wider LOA (± 10.5 %). GLS showed a strong correlation (r = 0.92) when image quality was good, while the correlation dropped to r = 0.82 with poor acoustic windows in echocardiography. GCS assessment revealed a strong correlation (r = 0.87) only when echocardiographic image quality was good. No significant differences in GLS between two different echocardiographic vendors could be detected. Quantitative assessment of GLS using a standardized software algorithm allows the direct comparison of values irrespective of the imaging modality. GLS may, therefore, serve as a reliable parameter for the assessment of global left ventricular function in clinical routine besides standard evaluation of the ejection fraction.

  20. Use of commercial off-the-shelf digital cameras for scientific data acquisition and scene-specific color calibration

    PubMed Central

    Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A.; Allen, Justine J.; Demirci, Utkan; Hanlon, Roger T.

    2014-01-01

    Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030
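
    The linear-workflow portion of such a pipeline can be sketched as follows: raw values are dark-subtracted and normalized so they remain linear in scene radiance, and a 3x3 color-correction matrix is fit by least squares from measured chart patches to their reference values. The function names, black/white levels, and reference data below are placeholders for illustration, not the authors' calibration procedure.

        import numpy as np

        def linearize(raw, black_level, white_level):
            """Normalize raw values to [0, 1] while keeping the response linear in radiance."""
            return np.clip((raw.astype(float) - black_level) /
                           (white_level - black_level), 0.0, 1.0)

        def fit_color_matrix(measured_rgb, reference_rgb):
            """Least-squares 3x3 matrix mapping linear camera RGB to reference RGB.

            measured_rgb  : (N, 3) linear camera responses for N chart patches
            reference_rgb : (N, 3) known linear RGB values of the same patches
            """
            X, _, _, _ = np.linalg.lstsq(measured_rgb, reference_rgb, rcond=None)
            return X.T  # corrected_pixel (column vector) = M @ pixel

        def apply_matrix(image, M):
            """Apply the color-correction matrix to an (H, W, 3) linear image."""
            h, w, _ = image.shape
            return (image.reshape(-1, 3) @ M.T).reshape(h, w, 3)

    Adding patches measured in the actual scene to the chart patches before the fit is, in spirit, what scene-specific calibration does: the matrix is then optimized for the gamut that actually occurs in the scene.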

  1. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    NASA Astrophysics Data System (ADS)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than the DCT for several reasons; nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the on-board data storage and the downlink bandwidth while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employs the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
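
    The general flavor of a DWT-DCT hybrid can be illustrated with a short sketch: a single-level 2D DWT is applied, and the approximation subband is further transformed with a 2D DCT before coefficient selection. This is a toy illustration under assumed parameters (Haar wavelet, simple magnitude thresholding, detail subbands discarded), not the authors' on-board implementation.

        import numpy as np
        import pywt
        from scipy.fft import dctn, idctn

        def hybrid_dwt_dct_compress(image, keep_fraction=0.1):
            """Toy hybrid DWT-DCT compression; returns the reconstructed image."""
            # Single-level 2D DWT (Haar) -> approximation + detail subbands.
            cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')

            # 2D DCT of the approximation subband.
            coeffs = dctn(cA, norm='ortho')

            # Keep only the largest-magnitude fraction of DCT coefficients.
            threshold = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
            coeffs[np.abs(coeffs) < threshold] = 0.0

            # Reconstruct: inverse DCT, then inverse DWT with detail subbands
            # zeroed (in a real codec they would be quantized and entropy-coded).
            cA_rec = idctn(coeffs, norm='ortho')
            zeros = np.zeros_like(cH)
            return pywt.idwt2((cA_rec, (zeros, zeros, zeros)), 'haar')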

  2. A stereoscopic lens for digital cinema cameras

    NASA Astrophysics Data System (ADS)

    Lipton, Lenny; Rupkalvis, John

    2015-03-01

    Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.

  3. Automated System for Early Breast Cancer Detection in Mammograms

    NASA Technical Reports Server (NTRS)

    Bankman, Isaac N.; Kim, Dong W.; Christens-Barry, William A.; Weinberg, Irving N.; Gatewood, Olga B.; Brody, William R.

    1993-01-01

    The increasing demand for mammographic screening for early breast cancer detection, and the subtlety of early breast cancer signs on mammograms, suggest the need for an automated image processing system that can serve as a diagnostic aid in radiology clinics. We present a fully automated algorithm for detecting clusters of microcalcifications, which are the most common signs of early, potentially curable breast cancer. By using the contour map of the mammogram, the algorithm circumvents some of the difficulties encountered with standard image processing methods. The clinical implementation of an automated instrument based on this algorithm is also discussed.
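
    As a rough illustration of the contour-based idea, and not the published clinical algorithm, small closed iso-intensity contours can be extracted and their centroids grouped to flag candidate microcalcification clusters; the intensity level, size limit, and cluster parameters below are arbitrary placeholders.

        import numpy as np
        from skimage import measure
        from scipy.spatial import cKDTree

        def candidate_clusters(mammogram, level, max_perimeter=60,
                               cluster_radius=50.0, min_members=3):
            """Flag groups of small, closed iso-intensity contours."""
            centroids = []
            for contour in measure.find_contours(mammogram, level):
                closed = np.allclose(contour[0], contour[-1])
                if closed and len(contour) <= max_perimeter:
                    centroids.append(contour.mean(axis=0))
            if not centroids:
                return []
            centroids = np.array(centroids)
            tree = cKDTree(centroids)
            neighbours = tree.query_ball_point(centroids, r=cluster_radius)
            # Keep centroids with enough nearby candidates to form a cluster.
            return [c for c, n in zip(centroids, neighbours) if len(n) >= min_members]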

  4. Optical coherence microscopy as a novel, non-invasive method for the 4D live imaging of early mammalian embryos.

    PubMed

    Karnowski, Karol; Ajduk, Anna; Wieloch, Bartosz; Tamborski, Szymon; Krawiec, Krzysztof; Wojtkowski, Maciej; Szkulmowski, Maciej

    2017-06-23

    Imaging of living cells based on traditional fluorescence and confocal laser scanning microscopy has delivered an enormous amount of information critical for understanding biological processes in single cells. However, the requirement for a high numerical aperture and fluorescent markers still limits researchers' ability to visualize the cellular architecture without causing short- and long-term photodamage. Optical coherence microscopy (OCM) is a promising alternative that circumvents the technical limitations of fluorescence imaging techniques and provides unique access to fundamental aspects of early embryonic development, without the requirement for sample pre-processing or labeling. In the present paper, we utilized the internal motion of cytoplasm, as well as custom scanning and signal processing protocols, to effectively reduce the speckle noise typical for standard OCM and enable high-resolution intracellular time-lapse imaging. To test our imaging system we used mouse and pig oocytes and embryos and visualized them through fertilization and the first embryonic division, as well as at selected stages of oogenesis and preimplantation development. Because all morphological and morphokinetic properties recorded by OCM are believed to be biomarkers of oocyte/embryo quality, OCM may represent a new chapter in imaging-based preimplantation embryo diagnostics.

  5. Automatic Image Processing Workflow for the Keck/NIRC2 Vortex Coronagraph

    NASA Astrophysics Data System (ADS)

    Xuan, Wenhao; Cook, Therese; Ngo, Henry; Zawol, Zoe; Ruane, Garreth; Mawet, Dimitri

    2018-01-01

    The Keck/NIRC2 camera, equipped with the vortex coronagraph, is an instrument targeted at the high contrast imaging of extrasolar planets. To uncover a faint planet signal from the overwhelming starlight, we utilize the Vortex Image Processing (VIP) library, which carries out principal component analysis to model and remove the stellar point spread function. To bridge the gap between data acquisition and data reduction, we implement a workflow that 1) downloads, sorts, and processes data with VIP, 2) stores the analysis products into a database, and 3) displays the reduced images, contrast curves, and auxiliary information on a web interface. Both angular differential imaging and reference star differential imaging are implemented in the analysis module. A real-time version of the workflow runs during observations, allowing observers to make educated decisions about time distribution on different targets, hence optimizing science yield. The post-night version performs a standardized reduction after the observation, building up a valuable database that not only helps uncover new discoveries, but also enables a statistical study of the instrument itself. We present the workflow, and an examination of the contrast performance of the NIRC2 vortex with respect to factors including target star properties and observing conditions.
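
    The central reduction step, PCA modeling and subtraction of the stellar point spread function from an angular differential imaging cube, can be sketched in generic Python as below. This illustrates the principle only; it does not reproduce the VIP library's API or the workflow's database and web components, and the derotation sign convention is an assumption.

        import numpy as np
        from scipy.ndimage import rotate

        def pca_psf_subtract(cube, parallactic_angles, n_components=5):
            """cube: (n_frames, ny, nx) ADI sequence; returns a derotated, combined residual."""
            n, ny, nx = cube.shape
            flat = cube.reshape(n, -1)
            centered = flat - flat.mean(axis=0)

            # Principal components of the stellar PSF across the sequence.
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            basis = vt[:n_components]

            # Project each frame onto the PSF model and subtract it.
            residuals = centered - (centered @ basis.T) @ basis
            residuals = residuals.reshape(n, ny, nx)

            # Derotate to a common sky orientation and median-combine, so the
            # planet signal adds up while residual speckles average out.
            derotated = [rotate(r, -ang, reshape=False, order=1)
                         for r, ang in zip(residuals, parallactic_angles)]
            return np.median(derotated, axis=0)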

  6. The Advanced Rapid Imaging and Analysis (ARIA) Project: Providing Standard and On-Demand SAR products for Hazard Science and Hazard Response

    NASA Astrophysics Data System (ADS)

    Owen, S. E.; Hua, H.; Rosen, P. A.; Agram, P. S.; Webb, F.; Simons, M.; Yun, S. H.; Sacco, G. F.; Liu, Z.; Fielding, E. J.; Lundgren, P.; Moore, A. W.

    2017-12-01

    A new era of geodetic imaging arrived with the launch of the ESA Sentinel-1A/B satellites in 2014 and 2016, and with the 2016 confirmation of the NISAR mission, planned for launch in 2021. These missions assure high-quality, freely and openly distributed, regularly sampled SAR data into the indefinite future. These unprecedented data sets are a watershed for the solid earth sciences as we progress towards the goal of ubiquitous InSAR measurements. We now face the challenge of how best to address the massive volumes of data and intensive processing requirements. Should scientists individually process the same data independently themselves? Should a centralized service provider create standard products that all can use? Are there other approaches to accelerate science that are cost effective and efficient? The Advanced Rapid Imaging and Analysis (ARIA) project, a joint venture co-sponsored by the California Institute of Technology (Caltech) and by NASA through the Jet Propulsion Laboratory (JPL), is focused on rapidly generating higher-level geodetic imaging products and placing them in the hands of the solid earth science and local, national, and international natural hazard communities by providing science product generation, exploration, and delivery capabilities at an operational level. However, there are challenges in defining the optimal InSAR data products for the solid earth science community. In this presentation, we will present our experience with InSAR users, our lessons learned on the advantages of on-demand and standard products, and our proposal for the most effective path forward.

  7. Dual energy exposure control (DEEC) for computed tomography: algorithm and simulation study.

    PubMed

    Stenner, Philip; Kachelriess, Marc

    2008-11-01

    DECT means acquiring the same object at two different energies, i.e., two different tube voltages U1 and U2. The raw data q1 and q2 undergo a decomposition process of type p = p(q1,q2). The raw data p are reconstructed to obtain monochromatic images of the attenuation mu, of the object density rho, or of a specific material distribution. Recent advances in DECT focus on noise reduction techniques [S. Richard and J. H. Siewerdsen, Med. Phys. 35(2), 586-600 (2008)] and enable high-performance DECT such as lung nodule detection [Shkumat et al., Med. Phys. 35(2), 629-632 (2008)]. Given p and a raw data-based, projection-wise patient dose estimation D(alpha), the authors determine the optimal tube current curves I1(alpha) and I2(alpha), with alpha being the view angle, that minimize image noise for a given patient dose level. DEEC can perform online; I1(alpha) and I2(alpha) can be determined during the scan. Simulation studies using semianthropomorphic phantom data were carried out. In particular, functions p that generate mu-images and density images were evaluated. Image quality was compared to standard scans at U0 = 120 kV (clinical CT) and U0 = 45 kV (micro-CT) that were taken at the same dose level (D0 = D1 + D2) and identical spatial resolution. Appropriate choice of p(q1, q2) makes it possible to obtain mu-images that show fewer artifacts and yield image noise levels comparable to the noise of the standard scan. The authors compared the standard scan to mu-images at 70 keV, which is the effective energy used in clinical CT, and found optimal results with mu-images at 25 keV for micro-CT. Nonoptimal choice of the decomposition function will, however, significantly increase image noise. In particular, mu-images at 511 keV, as needed for PET/CT attenuation correction, exhibit more than twice as much image noise as the standard scan. With DEEC, which guarantees the best possible dose usage, monochromatic images are generated with only slightly increased noise levels at the same dose compared to a standard scan. The benefit of significantly decreased artifacts appears to allow using DEEC-generated monochromatic images in daily routine. Furthermore, DEEC is not restricted to DECT, and the inherent tube current modulation algorithm may also be applied to single-energy CT.

  8. Dual energy exposure control (DEEC) for computed tomography: Algorithm and simulation study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stenner, Philip; Kachelriess, Marc

    2008-11-15

    DECT means acquiring the same object at two different energies, i.e., two different tube voltages U1 and U2. The raw data q1 and q2 undergo a decomposition process of type p = p(q1,q2). The raw data p are reconstructed to obtain monochromatic images of the attenuation mu, of the object density rho, or of a specific material distribution. Recent advances in DECT focus on noise reduction techniques [S. Richard and J. H. Siewerdsen, Med. Phys. 35(2), 586-600 (2008)] and enable high-performance DECT such as lung nodule detection [Shkumat et al., Med. Phys. 35(2), 629-632 (2008)]. Given p and a raw data-based, projection-wise patient dose estimation D(alpha), the authors determine the optimal tube current curves I1(alpha) and I2(alpha), with alpha being the view angle, that minimize image noise for a given patient dose level. DEEC can perform online; I1(alpha) and I2(alpha) can be determined during the scan. Simulation studies using semianthropomorphic phantom data were carried out. In particular, functions p that generate mu-images and density images were evaluated. Image quality was compared to standard scans at U0 = 120 kV (clinical CT) and U0 = 45 kV (micro-CT) that were taken at the same dose level (D0 = D1 + D2) and identical spatial resolution. Appropriate choice of p(q1, q2) makes it possible to obtain mu-images that show fewer artifacts and yield image noise levels comparable to the noise of the standard scan. The authors compared the standard scan to mu-images at 70 keV, which is the effective energy used in clinical CT, and found optimal results with mu-images at 25 keV for micro-CT. Nonoptimal choice of the decomposition function will, however, significantly increase image noise. In particular, mu-images at 511 keV, as needed for PET/CT attenuation correction, exhibit more than twice as much image noise as the standard scan. With DEEC, which guarantees the best possible dose usage, monochromatic images are generated with only slightly increased noise levels at the same dose compared to a standard scan. The benefit of significantly decreased artifacts appears to allow using DEEC-generated monochromatic images in daily routine. Furthermore, DEEC is not restricted to DECT, and the inherent tube current modulation algorithm may also be applied to single-energy CT.

  9. Development and Evaluation of Reference Standards for Image-based Telemedicine Diagnosis and Clinical Research Studies in Ophthalmology

    PubMed Central

    Ryan, Michael C.; Ostmo, Susan; Jonas, Karyn; Berrocal, Audina; Drenser, Kimberly; Horowitz, Jason; Lee, Thomas C.; Simmons, Charles; Martinez-Castellanos, Maria-Ana; Chan, R.V. Paul; Chiang, Michael F.

    2014-01-01

    Information systems managing image-based data for telemedicine or clinical research applications require a reference standard representing the correct diagnosis. Accurate reference standards are difficult to establish because of imperfect agreement among physicians and discrepancies between clinical and image-based diagnoses. This study describes the development and evaluation of reference standards for image-based diagnosis, which combine the diagnostic impressions of multiple image readers with the actual clinical diagnoses. We show that agreement between image reading and clinical examination was imperfect (689 [32%] discrepancies in 2148 image readings), as was inter-reader agreement (kappa 0.490-0.652). This was improved by establishing an image-based reference standard defined as the majority diagnosis given by three readers (13% discrepancies with image readers). It was further improved by establishing an overall reference standard that incorporated the clinical diagnosis (10% discrepancies with image readers). These principles of establishing reference standards may be applied to improve the robustness of real-world systems supporting image-based diagnosis. PMID:25954463

  10. Automated Processing of Imaging Data through Multi-tiered Classification of Biological Structures Illustrated Using Caenorhabditis elegans.

    PubMed

    Zhan, Mei; Crane, Matthew M; Entchev, Eugeni V; Caballero, Antonio; Fernandes de Abreu, Diana Andrea; Ch'ng, QueeLim; Lu, Hang

    2015-04-01

    Quantitative imaging has become a vital technique in biological discovery and clinical diagnostics; a plethora of tools have recently been developed to enable new and accelerated forms of biological investigation. Increasingly, the capacity for high-throughput experimentation provided by new imaging modalities, contrast techniques, microscopy tools, microfluidics and computer controlled systems shifts the experimental bottleneck from the level of physical manipulation and raw data collection to automated recognition and data processing. Yet, despite their broad importance, image analysis solutions to address these needs have been narrowly tailored. Here, we present a generalizable formulation for autonomous identification of specific biological structures that is applicable for many problems. The process flow architecture we present here utilizes standard image processing techniques and the multi-tiered application of classification models such as support vector machines (SVM). These low-level functions are readily available in a large array of image processing software packages and programming languages. Our framework is thus both easy to implement at the modular level and provides specific high-level architecture to guide the solution of more complicated image-processing problems. We demonstrate the utility of the classification routine by developing two specific classifiers as a toolset for automation and cell identification in the model organism Caenorhabditis elegans. To serve a common need for automated high-resolution imaging and behavior applications in the C. elegans research community, we contribute a ready-to-use classifier for the identification of the head of the animal under bright field imaging. Furthermore, we extend our framework to address the pervasive problem of cell-specific identification under fluorescent imaging, which is critical for biological investigation in multicellular organisms or tissues. Using these examples as a guide, we envision the broad utility of the framework for diverse problems across different length scales and imaging methods.
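
    A minimal example of the kind of low-level building block described, a support vector machine trained on feature vectors extracted from image regions, is shown below with scikit-learn. The feature extraction and labels are placeholders for illustration and are not the published C. elegans classifiers.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        def region_features(patch):
            """Toy feature vector for an image patch (intensity statistics + gradients)."""
            gy, gx = np.gradient(patch.astype(float))
            return np.array([patch.mean(), patch.std(),
                             np.abs(gx).mean(), np.abs(gy).mean()])

        def train_classifier(patches, labels):
            """patches: list of 2D arrays; labels: 1 for the target structure, 0 otherwise."""
            X = np.array([region_features(p) for p in patches])
            y = np.asarray(labels)
            clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0))
            print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
            return clf.fit(X, y)

    In a multi-tiered setting, the output of one such classifier (e.g. head vs. non-head regions) restricts the candidate regions passed to the next classifier (e.g. specific fluorescent cells).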

  11. Flexible imaging payload for real-time fluorescent biological imaging in parabolic, suborbital and space analog environments

    NASA Astrophysics Data System (ADS)

    Bamsey, Matthew T.; Paul, Anna-Lisa; Graham, Thomas; Ferl, Robert J.

    2014-10-01

    Fluorescent imaging offers the ability to monitor biological functions, in this case biological responses to space-related environments. For plants, fluorescent imaging can include general health indicators such as chlorophyll fluorescence as well as specific metabolic indicators such as engineered fluorescent reporters. This paper describes the Flex Imager, a fluorescent imaging payload designed for Middeck Locker deployment and now tested on multiple flight and flight-related platforms. The Flex Imager and associated payload elements have been developed with a focus on 'flexibility', allowing for multiple imaging modalities and change-out of individual imaging or control components in the field. The imaging platform is contained within the standard Middeck Locker spaceflight form factor, with components affixed to a baseplate that permits easy rearrangement and fine adjustment of components. The Flex Imager utilizes standard software packages to simplify operation, operator training, and evaluation by flight provider flight test engineers or by researchers processing the raw data. Images are obtained using a commercial cooled CCD image sensor, with light-emitting diodes for excitation and a suite of filters that allow biological samples to be imaged over wavelength bands of interest. Although baselined for the monitoring of green fluorescent protein and chlorophyll fluorescence from Arabidopsis samples, the Flex Imager payload permits imaging of any biological sample contained within a standard 10 cm by 10 cm square Petri plate. A sample holder was developed to secure sample plates under different flight profiles while permitting sample change-out should crewed operations be possible. In addition to crew-directed imaging, autonomous or telemetric operation of the payload is also a viable operational mode. An infrared camera has also been integrated into the Flex Imager payload to allow concurrent fluorescent and thermal imaging of samples. The Flex Imager has been utilized to assess, in real time, the response of plants to novel environments, including various spaceflight analogs such as several parabolic flight environments as well as hypobaric plant growth chambers. Basic performance results obtained under these operational environments, as well as laboratory-based tests, are described. The Flex Imager has also been designed to be compatible with emerging suborbital platforms.

  12. Grid Computing Application for Brain Magnetic Resonance Image Processing

    NASA Astrophysics Data System (ADS)

    Valdivia, F.; Crépeault, B.; Duchesne, S.

    2012-02-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly, as queue waiting times and execution overhead increase with the number of tasks to be executed.

  13. LANDSAT: Non-US standard catalog 1-31 December 1976. [LANDSAT imagery for December 1976

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The Non-U.S. Standard Catalog lists Non-U.S. imagery acquired by LANDSAT 1 and LANDSAT 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover, and image quality, are given for each scene. The microfilm roll and frame on which the scene may be found are also given.

  14. LANDSAT: US standard catalog, 1 February 1977 - 28 February 1977. [LANDSAT imagery for the month of February 1977

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The U.S. Standard Catalog lists U.S. imagery acquired by LANDSAT 1 and LANDSAT 2 which has been processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover, and image quality, are given for each scene. The microfilm roll and frame on which the scene may be found are also given.

  15. LANDSAT non-U.S. standard catalog, 1 January 1977 through 31 January 1977. [LANDSAT imagery January 1977

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The Non-U.S. Standard Catalog lists Non-U.S. imagery acquired by LANDSAT 1 and LANDSAT 2 which was processed and input to the data files during the referenced month. Data, such as date acquired, cloud cover, and image quality, are given for each scene. The microfilm roll and frame on which the scene may be found are also given.

  16. Automatic detection of blurred images in UAV image sets

    NASA Astrophysics Data System (ADS)

    Sieberth, Till; Wackrow, Rene; Chandler, Jim H.

    2016-12-01

    Unmanned aerial vehicles (UAV) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitudes combined with a high-resolution camera. UAV image flights are also cost effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process based on the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images quickly and reliably to relieve the operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is based on human detection of blur: humans detect blurred images best by comparing them to other images in order to establish whether an image is blurred or not. The developed algorithm simulates this procedure by internally creating a comparison image using image processing, which makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard-deviation), does not on its own provide an absolute number with which to judge whether an image is blurred or not; to achieve a reliable judgement of image sharpness, the SIEDS value has to be compared to other SIEDS values from the same dataset. The speed and reliability of the method were tested using a range of different UAV datasets. Two datasets are presented in this paper to demonstrate the effectiveness of the algorithm. The algorithm proves to be fast and the returned values are optically correct, making the algorithm applicable for UAV datasets. Additionally, a close-range dataset was processed to determine whether the method is also useful for close-range applications. The results show that the method is also reliable for close-range images, which significantly extends the field of application for the algorithm.
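
    The idea of judging sharpness by comparing an image with an internally generated, deliberately blurred copy of itself can be sketched as below. This is a simplified stand-in for the published SIEDS measure, with an arbitrary Gaussian re-blur and Sobel edge operator.

        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def blur_score(gray):
            """Relative blur measure: low values indicate a blurred image.

            A sharp image loses many edges when re-blurred, so the difference
            between its edge map and that of the blurred copy is large; an
            already-blurred image changes little.
            """
            gray = gray.astype(float)
            reblurred = gaussian_filter(gray, sigma=2.0)

            def edge_magnitude(img):
                return np.hypot(sobel(img, axis=0), sobel(img, axis=1))

            diff = edge_magnitude(gray) - edge_magnitude(reblurred)
            return diff.std()

        # As in the abstract, scores are only meaningful relative to other images
        # of the same dataset, e.g. flag images scoring well below the dataset median.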

  17. Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis

    NASA Astrophysics Data System (ADS)

    Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.

    2016-07-01

    In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.
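
    The bootstrap step at the heart of the uncertainty estimation, resampling detected list-mode events with replacement before histogramming them into sinogram bins, can be sketched as follows; the event-array layout and bin count are illustrative assumptions, and the GPU streaming aspects are omitted.

        import numpy as np

        def bootstrap_sinogram(event_bin_indices, n_bins, rng=None):
            """One bootstrap realisation of a sinogram from list-mode data.

            event_bin_indices : 1D array giving, for each detected event, the index
                                of the sinogram bin it falls into (assumed precomputed).
            n_bins            : total number of sinogram bins.
            """
            rng = rng or np.random.default_rng()
            n_events = event_bin_indices.size
            # Resample events with replacement to mimic the counting variability.
            resampled = rng.integers(0, n_events, size=n_events)
            return np.bincount(event_bin_indices[resampled], minlength=n_bins)

        # Repeating this for many realisations, reconstructing each, and taking the
        # per-voxel or per-ROI variance yields the uncertainty of any image statistic.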

  18. Transforming Dermatologic Imaging for the Digital Era: Metadata and Standards.

    PubMed

    Caffery, Liam J; Clunie, David; Curiel-Lewandrowski, Clara; Malvehy, Josep; Soyer, H Peter; Halpern, Allan C

    2018-01-17

    Imaging is increasingly being used in dermatology for documentation, diagnosis, and management of cutaneous disease. The lack of standards for dermatologic imaging is an impediment to clinical uptake. Standardization can occur in image acquisition, terminology, interoperability, and metadata. This paper presents the International Skin Imaging Collaboration position on standardization of metadata for dermatologic imaging. Metadata is essential to ensure that dermatologic images are properly managed and interpreted. There are two standards-based approaches to recording and storing metadata in dermatologic imaging. The first uses standard consumer image file formats, and the second is the file format and metadata model developed for the Digital Imaging and Communications in Medicine (DICOM) standard. DICOM would appear to provide an advantage over consumer image file formats for metadata, as it includes all the patient, study, and technical metadata necessary to use images clinically. In contrast, consumer image file formats include only technical metadata and need to be used in conjunction with another actor (for example, an electronic medical record) to supply the patient and study metadata. The use of DICOM may have some ancillary benefits in dermatologic imaging, including leveraging DICOM network and workflow services, interoperability of images and metadata, leveraging existing enterprise imaging infrastructure, greater patient safety, and better compliance with legislative requirements for image retention.
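
    To make the distinction concrete, the sketch below uses the pydicom library to carry patient- and study-level metadata alongside technical metadata in a single DICOM dataset; the identifiers and values are invented for illustration and do not reflect any Collaboration recommendation.

        from pydicom.dataset import Dataset

        ds = Dataset()

        # Patient and study metadata (absent from consumer image file formats).
        ds.PatientName = "Doe^Jane"
        ds.PatientID = "EXAMPLE-0001"
        ds.StudyDescription = "Dermatologic photography, follow-up"
        ds.StudyDate = "20180117"

        # Technical metadata, comparable to what EXIF would carry in a JPEG.
        ds.Modality = "XC"          # external-camera visible-light photography
        ds.Rows = 3000
        ds.Columns = 4000
        ds.PhotometricInterpretation = "RGB"
        ds.SamplesPerPixel = 3
        ds.BitsAllocated = 8

        print(ds)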

  19. DIGITAL CARTOGRAPHY OF THE PLANETS: NEW METHODS, ITS STATUS, AND ITS FUTURE.

    USGS Publications Warehouse

    Batson, R.M.

    1987-01-01

    A system has been developed that establishes a standardized cartographic database for each of the 19 planets and major satellites that have been explored to date. Compilation of the databases involves both traditional and newly developed digital image processing and mosaicking techniques, including radiometric and geometric corrections of the images. Each database, or digital image model (DIM), is a digital mosaic of spacecraft images that have been radiometrically and geometrically corrected and photometrically modeled. During compilation, ancillary data files such as radiometric calibrations and refined photometric values for all camera lens and filter combinations and refined camera-orientation matrices for all images used in the mapping are produced.

  20. Color engineering in the age of digital convergence

    NASA Astrophysics Data System (ADS)

    MacDonald, Lindsay W.

    1998-09-01

    Digital color imaging has developed over the past twenty years from specialized scientific applications into the mainstream of computing. In addition to the phenomenal growth of computer processing power and storage capacity, great advances have been made in the capabilities and cost-effectiveness of color imaging peripherals. The majority of imaging applications, including the graphic arts, video and film have made the transition from analogue to digital production methods. Digital convergence of computing, communications and television now heralds new possibilities for multimedia publishing and mobile lifestyles. Color engineering, the application of color science to the design of imaging products, is an emerging discipline that poses exciting challenges to the international color imaging community for training, research and standards.

  1. TerraLook: GIS-Ready Time-Series of Satellite Imagery for Monitoring Change

    USGS Publications Warehouse

    ,

    2008-01-01

    TerraLook is a joint project of the U.S. Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA) Jet Propulsion Laboratory (JPL) with a goal of providing satellite images that anyone can use to see changes in the Earth's surface over time. Each TerraLook product is a user-specified collection of satellite images selected from imagery archived at the USGS Earth Resources Observation and Science (EROS) Center. Images are bundled with standards-compliant metadata, a world file, and an outline of each image's ground footprint, enabling their use in geographic information systems (GIS), image processing software, and Web mapping applications. TerraLook images are available through the USGS Global Visualization Viewer (http://glovis.usgs.gov).
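
    A world file is a six-line text file giving the affine transform from pixel to map coordinates, which is what allows GIS software to place each TerraLook image on the map. The short sketch below (with a made-up filename) reads one and converts a pixel position to ground coordinates.

        def read_world_file(path):
            """Return the six affine parameters (A, D, B, E, C, F) of a world file."""
            with open(path) as f:
                a, d, b, e, c, fcoef = [float(tok) for tok in f.read().split()[:6]]
            return a, d, b, e, c, fcoef

        def pixel_to_map(col, row, params):
            """Map a pixel (col, row) to ground coordinates using world-file parameters."""
            a, d, b, e, c, fcoef = params
            x = a * col + b * row + c
            y = d * col + e * row + fcoef
            return x, y

        # Example (hypothetical file name):
        # params = read_world_file("terralook_scene.tfw")
        # print(pixel_to_map(0, 0, params))  # ground coordinates of the upper-left pixel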

  2. Optimized protocol for combined PALM-dSTORM imaging.

    PubMed

    Glushonkov, O; Réal, E; Boutant, E; Mély, Y; Didier, P

    2018-06-08

    Multi-colour super-resolution localization microscopy is an efficient technique to study a variety of intracellular processes, including protein-protein interactions. This technique requires specific labels that display transition between fluorescent and non-fluorescent states under given conditions. For the most commonly used label types, photoactivatable fluorescent proteins and organic fluorophores, these conditions are different, making experiments that combine both labels difficult. Here, we demonstrate that changing the standard imaging buffer of thiols/oxygen scavenging system, used for organic fluorophores, to the commercial mounting medium Vectashield increased the number of photons emitted by the fluorescent protein mEos2 and enhanced the photoconversion rate between its green and red forms. In addition, the photophysical properties of organic fluorophores remained unaltered with respect to the standard imaging buffer. The use of Vectashield together with our optimized protocol for correction of sample drift and chromatic aberrations enabled us to perform two-colour 3D super-resolution imaging of the nucleolus and resolve its three compartments.

  3. MEG-BIDS, the brain imaging data structure extended to magnetoencephalography

    PubMed Central

    Niso, Guiomar; Gorgolewski, Krzysztof J.; Bock, Elizabeth; Brooks, Teon L.; Flandin, Guillaume; Gramfort, Alexandre; Henson, Richard N.; Jas, Mainak; Litvak, Vladimir; T. Moreau, Jeremy; Oostenveld, Robert; Schoffelen, Jan-Mathijs; Tadel, Francois; Wexler, Joseph; Baillet, Sylvain

    2018-01-01

    We present a significant extension of the Brain Imaging Data Structure (BIDS) to support the specific aspects of magnetoencephalography (MEG) data. MEG measures brain activity with millisecond temporal resolution and unique source imaging capabilities. So far, BIDS was a solution to organise magnetic resonance imaging (MRI) data. The nature and acquisition parameters of MRI and MEG data are strongly dissimilar. Although there is no standard data format for MEG, we propose MEG-BIDS as a principled solution to store, organise, process and share the multidimensional data volumes produced by the modality. The standard also includes well-defined metadata, to facilitate future data harmonisation and sharing efforts. This responds to unmet needs from the multimodal neuroimaging community and paves the way to further integration of other techniques in electrophysiology. MEG-BIDS builds on MRI-BIDS, extending BIDS to a multimodal data structure. We feature several data-analytics software that have adopted MEG-BIDS, and a diverse sample of open MEG-BIDS data resources available to everyone. PMID:29917016

  4. MEG-BIDS, the brain imaging data structure extended to magnetoencephalography.

    PubMed

    Niso, Guiomar; Gorgolewski, Krzysztof J; Bock, Elizabeth; Brooks, Teon L; Flandin, Guillaume; Gramfort, Alexandre; Henson, Richard N; Jas, Mainak; Litvak, Vladimir; T Moreau, Jeremy; Oostenveld, Robert; Schoffelen, Jan-Mathijs; Tadel, Francois; Wexler, Joseph; Baillet, Sylvain

    2018-06-19

    We present a significant extension of the Brain Imaging Data Structure (BIDS) to support the specific aspects of magnetoencephalography (MEG) data. MEG measures brain activity with millisecond temporal resolution and unique source imaging capabilities. So far, BIDS was a solution to organise magnetic resonance imaging (MRI) data. The nature and acquisition parameters of MRI and MEG data are strongly dissimilar. Although there is no standard data format for MEG, we propose MEG-BIDS as a principled solution to store, organise, process and share the multidimensional data volumes produced by the modality. The standard also includes well-defined metadata, to facilitate future data harmonisation and sharing efforts. This responds to unmet needs from the multimodal neuroimaging community and paves the way to further integration of other techniques in electrophysiology. MEG-BIDS builds on MRI-BIDS, extending BIDS to a multimodal data structure. We feature several data-analytics software that have adopted MEG-BIDS, and a diverse sample of open MEG-BIDS data resources available to everyone.
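
    As a concrete illustration of the naming scheme such a structure implies (not an excerpt from the specification itself), the sketch below assembles BIDS-style file paths for an MEG recording together with a JSON sidecar; the subject, session, and task labels and the sidecar values are invented.

        import json
        from pathlib import Path

        def meg_bids_paths(root, sub, ses, task, run, ext="fif"):
            """Build BIDS-style paths for one MEG recording and its JSON sidecar."""
            stem = f"sub-{sub}_ses-{ses}_task-{task}_run-{run}"
            meg_dir = Path(root) / f"sub-{sub}" / f"ses-{ses}" / "meg"
            return meg_dir / f"{stem}_meg.{ext}", meg_dir / f"{stem}_meg.json"

        data_path, sidecar_path = meg_bids_paths("bids_dataset", "01", "01", "rest", "01")
        sidecar = {
            "TaskName": "rest",
            "SamplingFrequency": 1000.0,
            "PowerLineFrequency": 50,
            "MEGChannelCount": 306,
        }
        sidecar_path.parent.mkdir(parents=True, exist_ok=True)
        sidecar_path.write_text(json.dumps(sidecar, indent=2))
        print(data_path)  # bids_dataset/sub-01/ses-01/meg/sub-01_ses-01_task-rest_run-01_meg.fif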

  5. Image manipulation software portable on different hardware platforms: what is the cost?

    NASA Astrophysics Data System (ADS)

    Ligier, Yves; Ratib, Osman M.; Funk, Matthieu; Perrier, Rene; Girard, Christian; Logean, Marianne

    1992-07-01

    A hospital-wide PACS project is currently under development at the University Hospital of Geneva. The visualization and manipulation of images provided by different imaging modalities constitutes one of the most challenging components of a PACS. Because there are different requirements depending on the clinical usage, such visualization software has to be provided on different types of workstations in different sectors of the PACS, while the user interface has to remain the same independently of the underlying workstation. Besides a standard set of image manipulation and processing tools, there is also a need for more specific clinical tools that can easily be adapted to particular medical requirements. To achieve this, the software has been designed to run on two operating and windowing systems, the standard Unix/X-11/OSF-Motif based workstations and the Macintosh family, and to be easily portable to other systems. This paper describes the design of such a system and discusses the extra cost and effort involved in the development of portable and easily expandable software.

  6. 7T MRI subthalamic nucleus atlas for use with 3T MRI.

    PubMed

    Milchenko, Mikhail; Norris, Scott A; Poston, Kathleen; Campbell, Meghan C; Ushe, Mwiza; Perlmutter, Joel S; Snyder, Abraham Z

    2018-01-01

    Deep brain stimulation (DBS) of the subthalamic nucleus (STN) reduces motor symptoms in most patients with Parkinson disease (PD), yet may produce untoward effects. Investigation of DBS effects requires accurate localization of the STN, which can be difficult to identify on magnetic resonance images collected with clinically available 3T scanners. The goal of this study is to develop a high-quality STN atlas that can be applied to standard 3T images. We created a high-definition STN atlas derived from seven older participants imaged at 7T. This atlas was nonlinearly registered to a standard template representing 56 patients with PD imaged at 3T. This process required development of methodology for nonlinear multimodal image registration. We demonstrate mm-scale STN localization accuracy by comparison of our 3T atlas with a publicly available 7T atlas. We also demonstrate less agreement with an earlier histological atlas. STN localization error in the 56 patients imaged at 3T was less than 1 mm on average. Our methodology enables accurate STN localization in individuals imaged at 3T. The STN atlas and underlying 3T average template in MNI space are freely available to the research community. The image registration methodology developed in the course of this work may be generally applicable to other datasets.

  7. Privacy Protection by Masking Moving Objects for Security Cameras

    NASA Astrophysics Data System (ADS)

    Yabuta, Kenichi; Kitazawa, Hitoshi; Tanaka, Toshihisa

    Because of the increasing number of security cameras, it is crucial to establish a system that protects the privacy of objects in the recorded images. To this end, we propose a framework of image processing and data hiding for security monitoring and privacy protection. First, we state the requirements of the proposed monitoring systems and suggest a possible implementation that satisfies those requirements. The underlying concept of our proposed framework is as follows: (1) in the recorded images, the objects whose privacy should be protected are deteriorated by appropriate image processing; (2) the original objects are encrypted and watermarked into the output image, which is encoded using an image compression standard; (3) real-time processing is performed such that no future frame is required to generate an output bitstream. It should be noted that in this framework, anyone can observe the decoded image, which includes the deteriorated objects that are unrecognizable or invisible. On the other hand, for crime investigation, this system allows a limited number of users to observe the original objects by using a special viewer that decrypts and decodes the watermarked objects with a decoding password. Moreover, the special viewer allows us to select the objects to be decoded and displayed. We provide an implementation example, experimental results, and performance evaluations to support our proposed framework.
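
    A minimal sketch of step (1), detecting moving objects and rendering them unrecognizable in the published stream, is shown below using OpenCV 4; the encryption and watermarking of the original regions (steps 2 and 3) are omitted, and the pixelation block size and area threshold are arbitrary choices.

        import cv2

        def mask_moving_objects(frames, block=16):
            """Yield frames in which moving regions are pixelated beyond recognition."""
            subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
            for frame in frames:
                mask = subtractor.apply(frame)
                mask = cv2.dilate(mask, None, iterations=2)
                contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                               cv2.CHAIN_APPROX_SIMPLE)
                out = frame.copy()
                for c in contours:
                    x, y, w, h = cv2.boundingRect(c)
                    if w * h < 400:          # ignore tiny regions (noise)
                        continue
                    roi = out[y:y + h, x:x + w]
                    # Pixelate: downsample, then upsample with nearest-neighbour.
                    small = cv2.resize(roi, (max(1, w // block), max(1, h // block)),
                                       interpolation=cv2.INTER_LINEAR)
                    out[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                                       interpolation=cv2.INTER_NEAREST)
                yield out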

  8. European standardization effort: interworking the goal

    NASA Astrophysics Data System (ADS)

    Mattheus, Rudy A.

    1993-09-01

    In the European Standardization Committee (CEN), the technical committee responsible for standardization activities in medical informatics (CEN TC 251) has agreed upon the directions and scopes to follow in this field. They are described in the Directory of the European Standardization Requirements for Healthcare Informatics and Programme for the Development of Standards, adopted on 02-28-1991 by CEN/TC 251 and approved by CEN/BT. Top-down objectives describe the common framework and items such as terminology and security, while more bottom-up oriented items describe fields such as medical imaging and multimedia. The draft standard is described, including the general framework model and object-oriented model, the interworking aspects, the relation to ISO standards, and the DICOM proposal. This paper also focuses on the boundaries of the standardization work, which also influence the standardization process.

  9. Inverse scattering and refraction corrected reflection for breast cancer imaging

    NASA Astrophysics Data System (ADS)

    Wiskin, J.; Borup, D.; Johnson, S.; Berggren, M.; Robinson, D.; Smith, J.; Chen, J.; Parisky, Y.; Klock, John

    2010-03-01

    Reflection ultrasound (US) has been utilized as an adjunct imaging modality for over 30 years. TechniScan, Inc. has developed unique transmission and concomitant reflection algorithms which are used to reconstruct images from data gathered during a tomographic breast scanning process called Warm Bath Ultrasound (WBU™). The transmission algorithm yields high-resolution, 3D, attenuation and speed of sound (SOS) images. The reflection algorithm is based on canonical ray tracing utilizing refraction correction via the SOS and attenuation reconstructions. The refraction-corrected reflection algorithm allows 360-degree compounding, resulting in the reflection image. The requisite data are collected when scanning the entire breast in a 33° C water bath, on average in 8 minutes. This presentation explains how the data are collected and processed by the 3D transmission and reflection imaging mode algorithms. The processing is carried out using two NVIDIA® Tesla™ GPU processors, accessing data on a 4-TeraByte RAID. The WBU™ images are displayed in a DICOM viewer that allows registration of all three modalities. Several representative cases are presented to demonstrate potential diagnostic capability, including a cyst, a fibroadenoma, and a carcinoma. WBU™ images (SOS, attenuation, and reflection modalities) are shown along with their respective mammograms and standard ultrasound images. In addition, anatomical studies are shown comparing WBU™ images and MRI images of a cadaver breast. This innovative technology is designed to provide additional tools in the armamentarium for the diagnosis of breast disease.

  10. A similarity measure method combining location feature for mammogram retrieval.

    PubMed

    Wang, Zhiqiong; Xin, Junchang; Huang, Yukun; Li, Chen; Xu, Ling; Li, Yang; Zhang, Hao; Gu, Huizi; Qian, Wei

    2018-05-28

    Breast cancer, the most common malignancy among women, has a high mortality rate in clinical practice. Early detection, diagnosis and treatment can greatly reduce breast cancer mortality. Mammogram retrieval can help doctors find early breast lesions effectively, and determining a reasonable feature set for image similarity measurement improves retrieval accuracy. This paper proposes a similarity measure method combining a location feature for mammogram retrieval. Firstly, the images are pre-processed, the regions of interest are detected and the lesions are segmented in order to obtain the center point and radius of each lesion. Then, the Coherent Point Drift method is used for image registration with a pre-defined standard image. The center point and radius of the lesions after registration are obtained and the standard location feature of the image is constructed. This standard location feature is used to compute the location similarity between the query image and each dataset image in the database. Next, the content features of the image are extracted, including the Histogram of Oriented Gradients, the Edge Direction Histogram, the Local Binary Pattern and the Gray Level Histogram, and the content similarity of each image pair is calculated using the Earth Mover's Distance. Finally, the location similarity and content similarity are fused to form the image fusion similarity, and a specified number of the most similar images is returned according to it. In the experiment, 440 mammograms from Chinese women in Northeast China are used as the database. When fusing 40% lesion location feature similarity and 60% content feature similarity, the results show clear advantages: precision is 0.83, recall is 0.76, the comprehensive indicator is 0.79, satisfaction is 96.0%, the mean is 4.2 and the variance is 17.7. The results show that the precision and recall of this method have a clear advantage over content-based image retrieval alone.
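
    The final fusion step can be made explicit with a short sketch: a weighted combination of location similarity and content similarity ranks the database images. The 0.4/0.6 weights follow the abstract, while the similarity vectors themselves (however computed) are assumed as inputs.

        import numpy as np

        def fused_ranking(loc_sim, content_sim, w_loc=0.4, w_content=0.6, top_k=10):
            """Rank database images by fused similarity to the query.

            loc_sim     : (N,) location similarities (query lesion vs. each image)
            content_sim : (N,) content similarities (HOG/EDH/LBP/histogram based)
            """
            loc = np.asarray(loc_sim, dtype=float)
            content = np.asarray(content_sim, dtype=float)
            fused = w_loc * loc + w_content * content
            order = np.argsort(fused)[::-1]          # highest similarity first
            return order[:top_k], fused[order[:top_k]]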

  11. 78 FR 6357 - Submission for Renewal: New Information Collection, Fingerprint Chart Standard Form 87 (SF 87)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-30

    ..., IPAC and MNU blocks support billing and processing enhancements. The printed ORI number is no longer necessary because SF 87 forms are converted to images and transmitted to the FBI electronically. The Public...

  12. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    PubMed

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
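
    One level of the multiscale Poisson representation underlying such models can be written down directly: counts are aggregated over 2x2 blocks to form the coarser scale, and the child-to-parent ratios are the quantities that the mixture priors act on. The sketch below is a generic illustration of this decomposition, not the authors' estimator.

        import numpy as np

        def poisson_quadtree_level(counts):
            """One coarsening step of a quad-tree decomposition of Poisson counts.

            counts : 2D array of photon counts with even dimensions.
            Returns (parents, ratios): each 2x2 block of `counts` sums to one parent
            count, and `ratios` are the per-child fractions of that parent (the
            rate-ratio quantities modelled by mixture priors in multiscale models).
            """
            h, w = counts.shape
            blocks = counts.reshape(h // 2, 2, w // 2, 2).swapaxes(1, 2)  # (h/2, w/2, 2, 2)
            parents = blocks.sum(axis=(2, 3))
            with np.errstate(invalid="ignore", divide="ignore"):
                ratios = blocks / parents[..., None, None]
            ratios = np.nan_to_num(ratios, nan=0.25)   # empty parents: uniform split
            return parents, ratios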

  13. The PICWidget

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark

    2007-01-01

    The Plug-in Image Component Widget (PICWidget) is a software component for building digital imaging applications. The component is part of a methodology described in GIS Methodology for Planning Planetary-Rover Operations (NPO-41812), which appears elsewhere in this issue of NASA Tech Briefs. Planetary rover missions return a large number and wide variety of image data products that vary in complexity in many ways. Supported by a powerful, flexible image-data-processing pipeline, the PICWidget can process and render many types of imagery, including (but not limited to) thumbnail, subframed, downsampled, stereoscopic, and mosaic images; images coregistered with orbital data; and synthetic red/green/blue images. The PICWidget is capable of efficiently rendering images from data representing many more pixels than are available at the computer workstation where the images are to be displayed. The PICWidget is implemented as an Eclipse plug-in using the Standard Widget Toolkit, which provides a straightforward interface for re-use of the PICWidget in any number of application programs built upon the Eclipse application framework. Because the PICWidget is tile-based and performs aggressive tile caching, it has the flexibility to perform faster or slower, depending on whether more or less memory is available.

  14. Three-dimensional image technology in forensic anthropology: Assessing the validity of biological profiles derived from CT-3D images of the skeleton

    NASA Astrophysics Data System (ADS)

    Garcia de Leon Valenzuela, Maria Julia

    This project explores the reliability of building a biological profile for an unknown individual based on three-dimensional (3D) images of the individual's skeleton. 3D imaging technology has been widely researched for medical and engineering applications, and it is increasingly being used as a tool for anthropological inquiry. While the question of whether a biological profile can be derived from 3D images of a skeleton with the same accuracy as achieved when using dry bones has been explored, larger sample sizes, a standardized scanning protocol and more interobserver error data are needed before 3D methods can become widely and confidently used in forensic anthropology. 3D images of Computed Tomography (CT) scans were obtained from 130 innominate bones from Boston University's skeletal collection (School of Medicine). For each bone, both the 3D images and the original bones were assessed using the Phenice and Suchey-Brooks methods. Statistical analysis was used to determine the agreement between 3D image assessment and traditional assessment. A pool of six individuals with varying experience in the field of forensic anthropology scored a subsample (n = 20) to explore interobserver error. While high agreement was found for age and sex estimation for specimens scored by the author, the interobserver study shows that observers found it difficult to apply standard methods to 3D images. Contrary to expectation, higher levels of experience did not result in higher agreement between observers. Thus, a need for training in 3D visualization before applying anthropological methods to 3D bones is suggested. Future research should explore interobserver error using a larger sample size in order to test the hypothesis that training in 3D visualization will result in higher agreement between scores. The need to develop a standard scanning protocol focused on optimizing 3D image resolution is highlighted. Applications of this research include the possibility of digitizing skeletal collections to expand their use, as well as deriving skeletal collections from living populations and creating population-specific standards. Further research on the development of a standard scanning and processing protocol is needed before 3D methods in forensic anthropology can be considered reliable tools for generating biological profiles.

  15. Pulmonary nodule detection with digital projection radiography: an ex-vivo study on increased latitude post-processing.

    PubMed

    Biederer, Juergen; Gottwald, Tobias; Bolte, Hendrik; Riedel, Christian; Freitag, Sandra; Van Metter, Richard; Heller, Martin

    2007-04-01

    To evaluate increased image latitude post-processing of digital projection radiograms for the detection of pulmonary nodules. 20 porcine lungs were inflated inside a chest phantom, prepared with 280 solid nodules of 4-8 mm in diameter and examined with direct radiography (3.0x2.5 k detector, 125 kVp, 4 mAs). Nodule position and size were documented by CT controls and dissection. Four intact lungs served as negative controls. Image post-processing included standard tone scales and increased latitude with detail contrast enhancement (log-factors 1.0, 1.5 and 2.0). 1280 sub-images (512x512 pixel) were centred on nodules or controls, behind the diaphragm and over free parenchyma, randomized and presented to six readers. Confidence in the decision was recorded with a scale of 0-100%. Sensitivity and specificity for nodules behind the diaphragm were 0.87/0.97 at standard tone scale and 0.92/0.92 with increased latitude (log factor 2.0). The fraction of "not diagnostic" readings was reduced (from 208/1920 to 52/1920). As an indicator of increased detection confidence, the median of the ratings behind the diaphragm approached 100 and 0, respectively, and the inter-quartile width decreased (controls: p<0.001, nodules: p=0.239) at higher image latitude. Above the diaphragm, accuracy and detection confidence remained unchanged. Here, the sensitivity for nodules was 0.94 with a specificity from 0.96 to 0.97 (all p>0.05). Increased latitude post-processing has minimal effects on the overall accuracy, but improves the detection confidence for sub-centimeter nodules in the posterior recesses of the lung.

  16. The PREP pipeline: standardized preprocessing for large-scale EEG analysis.

    PubMed

    Bigdely-Shamlo, Nima; Mullen, Tim; Kothe, Christian; Su, Kyung-Min; Robbins, Kay A

    2015-01-01

    The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode.
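
    The interaction between noisy channels and the average reference motivates an iterative scheme. The sketch below is a simplified stand-in for robust referencing in that spirit (flag channels that deviate strongly from an interim reference, then re-estimate the reference without them); it is not the released PREP MATLAB code, and the thresholds are illustrative.

```python
import numpy as np

def robust_average_reference(data, n_iter=4, z_thresh=5.0):
    """data: channels x samples EEG array.
    Iteratively estimate an average reference that excludes channels whose deviation
    from the interim reference is extreme. Simplified illustration only."""
    good = np.ones(data.shape[0], dtype=bool)
    for _ in range(n_iter):
        ref = data[good].mean(axis=0)                 # interim average reference
        referenced = data - ref
        amp = np.median(np.abs(referenced), axis=1)   # robust per-channel amplitude
        mad = np.median(np.abs(amp - np.median(amp)))
        z = (amp - np.median(amp)) / (1.4826 * mad + 1e-12)
        good = np.abs(z) < z_thresh                   # flag noisy channels
    # return robustly referenced data and the noisy-channel mask
    return data - data[good].mean(axis=0), ~good
```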

  17. A systematic approach to the interpretation of preoperative staging MRI for rectal cancer.

    PubMed

    Taylor, Fiona G M; Swift, Robert I; Blomqvist, Lennart; Brown, Gina

    2008-12-01

    The purpose of this article is to provide an aid to the systematic evaluation of MRI in staging rectal cancer. MRI has been shown to be an effective tool for the accurate preoperative staging of rectal cancer. In the Magnetic Resonance Imaging and Rectal Cancer European Equivalence Study (MERCURY), imaging workshops were held for participating radiologists to ensure standardization of scan acquisition techniques and interpretation of the images. In this article, we report how the information was obtained and give examples of the images and how they are interpreted, with the aim of providing a systematic approach to the reporting process.

  18. Feature tracking cardiac magnetic resonance imaging: A review of a novel non-invasive cardiac imaging technique

    PubMed Central

    Rahman, Zia Ur; Sethi, Pooja; Murtaza, Ghulam; Virk, Hafeez Ul Hassan; Rai, Aitzaz; Mahmod, Masliza; Schoondyke, Jeffrey; Albalbissi, Kais

    2017-01-01

    Cardiovascular disease is a leading cause of morbidity and mortality globally. Early diagnostic markers are gaining popularity as a route to better patient care and disease outcomes. There is increasing interest in noninvasive cardiac imaging biomarkers to diagnose subclinical cardiac disease. Feature tracking cardiac magnetic resonance imaging is a novel post-processing technique that is increasingly being employed to assess global and regional myocardial function. The technique has numerous applications in structural and functional diagnostics. It has been validated in multiple studies, although there is still a long way to go before it becomes a routine standard of care. PMID:28515849

  19. Homographic Patch Feature Transform: A Robustness Registration for Gastroscopic Surgery.

    PubMed

    Hu, Weiling; Zhang, Xu; Wang, Bin; Liu, Jiquan; Duan, Huilong; Dai, Ning; Si, Jianmin

    2016-01-01

    Image registration is a key component of computer assistance in image-guided surgery, and it is a challenging topic in endoscopic environments. In this study, we present a method for image registration named Homographic Patch Feature Transform (HPFT) to match gastroscopic images. HPFT can be used for tracking lesions and for augmented reality applications during gastroscopy. Furthermore, an overall evaluation scheme is proposed to validate the precision, robustness and uniformity of the registration results, which provides a standard for rejecting false matching pairs from the correspondence results. Finally, HPFT is applied to the processing of in vivo gastroscopic data. The experimental results show that HPFT performs stably in gastroscopic applications.

  20. A portable detection instrument based on DSP for beef marbling

    NASA Astrophysics Data System (ADS)

    Zhou, Tong; Peng, Yankun

    2014-05-01

    Beef marbling is one of the most important indices for assessing beef quality. Marbling is graded by measuring the density of the fat distribution in the rib-eye region. However, in most slaughterhouses and beef businesses, quality grading still depends on trained personnel using visual inspection or comparing the beef slice with the Chinese standard sample cards. Manual grading is not only labor-intensive but also lacks objectivity and accuracy. To meet the needs of slaughterhouses and beef businesses, a beef marbling detection instrument was designed. The instrument employs Charge-Coupled Device (CCD) imaging, digital image processing, Digital Signal Processor (DSP) control and processing, and Liquid Crystal Display (LCD) screen display techniques. The TMS320DM642 digital signal processor from Texas Instruments (TI) is the core, combining high-speed data processing capabilities with real-time processing features. All processes, including image acquisition, data transmission, image processing algorithms and display, are implemented on the instrument for quick, efficient and non-invasive detection of beef marbling. The structure of the system, its working principle, and the hardware and software are introduced in detail. The device is compact and easy to transport, and it can determine the grade of beef marbling reliably and correctly.
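
    Marbling grading ultimately reduces to measuring the density of fat within the segmented rib-eye region. A minimal sketch of such a measurement is given below; the threshold value and masking convention are illustrative assumptions, not the instrument's actual algorithm.

```python
import numpy as np

def marbling_ratio(gray_ribeye, fat_threshold=180):
    """Fraction of bright (fat) pixels inside a segmented rib-eye region.
    gray_ribeye: 8-bit grayscale ROI with non-rib-eye pixels masked out (set to 0).
    The fixed threshold is illustrative; a real instrument would calibrate it."""
    roi = gray_ribeye[gray_ribeye > 0]                  # pixels inside the rib-eye mask
    if roi.size == 0:
        return 0.0
    fat = np.count_nonzero(roi >= fat_threshold)        # bright pixels taken as fat
    return fat / roi.size
```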

  1. Objective Measurement of Erythema in Psoriasis using Digital Color Photography with Color Calibration

    PubMed Central

    Raina, Abhay; Hennessy, Ricky; Rains, Michael; Allred, James; Hirshburg, Jason M; Diven, Dayna; Markey, Mia K.

    2016-01-01

    Background Traditional metrics for evaluating the severity of psoriasis are subjective, which complicates efforts to measure effective treatments in clinical trials. Methods We collected images of psoriasis plaques and calibrated the coloration of the images according to an included color card. Features were extracted from the images and used to train a linear discriminant analysis classifier with cross-validation to automatically classify the degree of erythema. The results were tested against numerical scores obtained by a panel of dermatologists using a standard rating system. Results Quantitative measures of erythema based on the digital color images showed good agreement with subjective assessment of erythema severity (κ = 0.4203). The color calibration process improved the agreement from κ = 0.2364 to κ = 0.4203. Conclusions We propose a method for the objective measurement of the psoriasis severity parameter of erythema and show that the calibration process improved the results. PMID:26517973
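
    One plausible realization of the color-card calibration step is an affine least-squares mapping from the measured card patch colors to their reference values, as sketched below; the exact calibration model used in the paper may differ.

```python
import numpy as np

def fit_color_correction(measured_rgb, reference_rgb):
    """Fit an affine color correction from the patches of an in-image color card.
    measured_rgb, reference_rgb: (n_patches, 3) float arrays.
    Returns a 3x4 matrix M such that corrected = M @ [r, g, b, 1]."""
    X = np.hstack([measured_rgb, np.ones((measured_rgb.shape[0], 1))])  # (n, 4)
    M, *_ = np.linalg.lstsq(X, reference_rgb, rcond=None)               # (4, 3)
    return M.T                                                          # (3, 4)

def apply_color_correction(image_rgb, M):
    """Apply the fitted correction to an (H, W, 3) float image."""
    h, w, _ = image_rgb.shape
    X = np.hstack([image_rgb.reshape(-1, 3), np.ones((h * w, 1))])
    return (X @ M.T).reshape(h, w, 3)
```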

  2. Optimization of CT image reconstruction algorithms for the lung tissue research consortium (LTRC)

    NASA Astrophysics Data System (ADS)

    McCollough, Cynthia; Zhang, Jie; Bruesewitz, Michael; Bartholmai, Brian

    2006-03-01

    To create a repository of clinical data, CT images and tissue samples, and to more clearly understand the pathogenetic features of pulmonary fibrosis and emphysema, the National Heart, Lung, and Blood Institute (NHLBI) launched a cooperative effort known as the Lung Tissue Research Consortium (LTRC). The CT images for the LTRC effort must contain accurate CT numbers in order to characterize tissues and must have high spatial resolution to show fine anatomic structures. This study was performed to optimize the CT image reconstruction algorithms to achieve these criteria. Quantitative analyses of phantom and clinical images were conducted. The ACR CT accreditation phantom, containing five regions of distinct CT attenuation (CT numbers of approximately -1000 HU, -80 HU, 0 HU, 130 HU and 900 HU) and a high-contrast spatial resolution test pattern, was scanned using CT systems from two manufacturers (General Electric (GE) Healthcare and Siemens Medical Solutions). Phantom images were reconstructed using all relevant reconstruction algorithms. Mean CT numbers and image noise (standard deviation) were measured and compared for the five materials. Clinical high-resolution chest CT images acquired on a GE CT system for a patient with diffuse lung disease were reconstructed using the BONE and STANDARD algorithms and evaluated by a thoracic radiologist in terms of image quality and disease extent. The clinical BONE images were processed with a 3 x 3 x 3 median filter to simulate a thicker slice reconstructed with a smoother algorithm, which has traditionally been shown to provide an accurate estimation of emphysema extent in the lungs. Using a threshold technique, the volume of emphysema (defined as the percentage of lung voxels having a CT number lower than -950 HU) was computed for the STANDARD, BONE, and BONE-filtered images. The CT numbers measured in the ACR CT phantom images were accurate for all reconstruction kernels from both manufacturers. As expected, visual evaluation of the spatial resolution bar patterns demonstrated that the BONE (GE) and B46f (Siemens) kernels showed higher spatial resolution than the STANDARD (GE) and B30f (Siemens) reconstruction algorithms typically used for routine body CT imaging. Only the sharper images were deemed clinically acceptable for the evaluation of diffuse lung disease (e.g., emphysema). Quantitative analyses of the extent of emphysema in the patient data showed the percentage of lung volume below the -950 HU threshold to be 9.4% for the BONE reconstruction, 5.9% for the STANDARD reconstruction, and 4.7% for the BONE-filtered images. Contrary to the practice of using standard-resolution CT images for the quantitation of diffuse lung disease, these data demonstrate that a single sharp reconstruction (BONE/B46f) should be used for both the qualitative and quantitative evaluation of diffuse lung disease. The sharper reconstruction images, which are required for diagnostic interpretation, provide accurate CT numbers over the range of -1000 to +900 HU and preserve the fidelity of small structures in the reconstructed images. A filtered version of the sharper images can be accurately substituted for images reconstructed with smoother kernels for comparison to previously published results.
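
    The emphysema index used here (percentage of lung voxels below -950 HU) and the 3 x 3 x 3 median filtering of the BONE images are simple operations; a minimal sketch follows, with SciPy's median filter as an assumed implementation detail.

```python
import numpy as np
from scipy.ndimage import median_filter

def emphysema_percent(ct_hu, lung_mask, threshold_hu=-950):
    """Percentage of lung voxels with CT number below the -950 HU threshold,
    i.e., the threshold-based emphysema index described in the abstract."""
    lung_voxels = ct_hu[lung_mask]
    return 100.0 * np.count_nonzero(lung_voxels < threshold_hu) / lung_voxels.size

# Simulating the "BONE filtered" series with a 3x3x3 median filter, as in the study:
# bone_filtered = median_filter(bone_volume, size=3)
```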

  3. On the Performance Evaluation of 3D Reconstruction Techniques from a Sequence of Images

    NASA Astrophysics Data System (ADS)

    Eid, Ahmed; Farag, Aly

    2005-12-01

    The performance evaluation of 3D reconstruction techniques is not a simple problem to solve. This is not only due to the increased dimensionality of the problem but also due to the lack of standardized and widely accepted testing methodologies. This paper presents a unified framework for the performance evaluation of different 3D reconstruction techniques. This framework includes a general problem formalization, different measuring criteria, and a classification method as a first step in standardizing the evaluation process. Performance characterization of two standard 3D reconstruction techniques, stereo and space carving, is also presented. The evaluation is performed on the same data set using an image reprojection testing methodology to reduce the dimensionality of the evaluation domain. Also, different measuring strategies are presented and applied to the stereo and space carving techniques. These measuring strategies have shown consistent results in quantifying the performance of these techniques. Additional experiments are performed on the space carving technique to study the effect of the number of input images and the camera pose on its performance.

  4. Software implementation of the SKIPSM paradigm under PIP

    NASA Astrophysics Data System (ADS)

    Hack, Ralf; Waltz, Frederick M.; Batchelor, Bruce G.

    1997-09-01

    SKIPSM (separated-kernel image processing using finite state machines) is a technique for implementing large-kernel binary-morphology operators and many other operations. While earlier papers on SKIPSM concentrated mainly on implementations using pipelined hardware, there is considerable scope for achieving major speed improvements in software systems. Using identical control software, one-pass binary erosion and dilation with structuring elements (SEs) ranging from the trivial (3 by 3) to the gigantic (51 by 51, or even larger) are readily available. Processing speed is independent of the size of the SE, making the SKIPSM approach practical for work with very large SEs on ordinary desktop computers. PIP (Prolog image processing) is an interactive machine vision prototyping environment developed at the University of Wales Cardiff. It consists of a large number of image processing operators embedded within the standard AI language Prolog. This paper describes the SKIPSM implementation of binary morphology operators within PIP. A large set of binary erosion and dilation operations (circles, squares, diamonds, octagons, etc.) is available to the user through a command-line driven dialogue, via pull-down menus, or incorporated into standard (Prolog) programs. Little has been done thus far to optimize the speed of this first software implementation of SKIPSM. Nevertheless, the results are impressive. The paper describes sample applications and presents timing figures. Readers have the opportunity to try out these operations on demonstration software written by the University of Wales, or via its WWW home page at http://bruce.cs.cf.ac.uk/bruce/index.html.
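
    For a rectangular structuring element, the separated-kernel idea can be illustrated with two 1-D passes in which a running counter carried along each scan line plays the role of the finite state machine. The sketch below (odd SE sizes assumed) is a plain-NumPy illustration of that idea, not the SKIPSM or PIP implementation.

```python
import numpy as np

def erode_rect_separated(binary, height, width):
    """Binary erosion by a height x width rectangle as two 1-D passes (rows, then
    columns). The per-line running state stands in for the FSM of SKIPSM."""
    def erode_1d(a, k):
        out = np.zeros_like(a)
        run = np.zeros(a.shape[0], dtype=int)
        for j in range(a.shape[1]):
            run = np.where(a[:, j] > 0, run + 1, 0)   # state: length of current run of 1s
            if j >= k - 1:
                out[:, j - (k - 1) // 2] = run >= k   # window of k ones -> output its centre
        return out

    rows = erode_1d(np.asarray(binary, dtype=np.uint8), width)   # horizontal pass
    return erode_1d(rows.T, height).T                            # vertical pass
```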

  5. Pain related inflammation analysis using infrared images

    NASA Astrophysics Data System (ADS)

    Bhowmik, Mrinal Kanti; Bardhan, Shawli; Das, Kakali; Bhattacharjee, Debotosh; Nath, Satyabrata

    2016-05-01

    Medical Infrared Thermography (MIT) offers a potential non-invasive, non-contact and radiation-free imaging modality for the assessment of painful abnormal inflammation in the human body. The assessment of inflammation mainly depends on the emission of heat from the skin surface. Arthritis is a disease of joint damage that generates inflammation in one or more anatomical joints of the body. Osteoarthritis (OA) is the most frequently occurring form of arthritis, and rheumatoid arthritis (RA) is the most threatening form. In this study, an inflammatory analysis was performed on infrared images of patients suffering from RA and OA. For the analysis, a dataset of 30 bilateral knee thermograms was captured from patients with RA and OA following a thermogram acquisition standard. The thermograms were pre-processed, and areas of interest were extracted for further processing. The spread of inflammation was investigated along with a statistical analysis of the pre-processed thermograms. The objectives of the study include: i) generation of a novel thermogram acquisition standard for inflammatory pain disease; ii) analysis of the spread of inflammation related to RA and OA using K-means clustering; and iii) first- and second-order statistical analysis of the pre-processed thermograms. The conclusion is that, in most cases, RA-related inflammation affects both knees, whereas OA-related inflammation is present in a single knee. Also, due to the spread of inflammation in OA, contralateral asymmetries are detected through the statistical analysis.
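
    Objective ii) above clusters pixel temperatures to delineate the spread of inflammation. A minimal K-means sketch in that spirit is shown below; the number of clusters and the "hottest cluster" rule are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def inflammation_regions(thermogram, n_clusters=3):
    """Cluster the pixel temperatures of a pre-processed knee thermogram into
    temperature bands and return a mask of the hottest cluster as the candidate
    inflamed region. Illustrative use of K-means only."""
    temps = thermogram.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(temps)
    # order clusters by mean temperature and keep the hottest one
    means = [temps[labels == k].mean() for k in range(n_clusters)]
    hottest = int(np.argmax(means))
    return (labels == hottest).reshape(thermogram.shape)
```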

  6. Calibration, Projection, and Final Image Products of MESSENGER's Mercury Dual Imaging System

    NASA Astrophysics Data System (ADS)

    Denevi, Brett W.; Chabot, Nancy L.; Murchie, Scott L.; Becker, Kris J.; Blewett, David T.; Domingue, Deborah L.; Ernst, Carolyn M.; Hash, Christopher D.; Hawkins, S. Edward; Keller, Mary R.; Laslo, Nori R.; Nair, Hari; Robinson, Mark S.; Seelos, Frank P.; Stephens, Grant K.; Turner, F. Scott; Solomon, Sean C.

    2018-02-01

    We present an overview of the operations, calibration, geodetic control, photometric standardization, and processing of images from the Mercury Dual Imaging System (MDIS) acquired during the orbital phase of the MESSENGER spacecraft's mission at Mercury (18 March 2011-30 April 2015). We also provide a summary of all of the MDIS products that are available in NASA's Planetary Data System (PDS). Updates to the radiometric calibration included slight modification of the frame-transfer smear correction, updates to the flat fields of some wide-angle camera (WAC) filters, a new model for the temperature dependence of narrow-angle camera (NAC) and WAC sensitivity, and an empirical correction for temporal changes in WAC responsivity. Further, efforts to characterize scattered light in the WAC system are described, along with a mosaic-dependent correction for scattered light that was derived for two regional mosaics. Updates to the geometric calibration focused on the focal lengths and distortions of the NAC and all WAC filters, NAC-WAC alignment, and calibration of the MDIS pivot angle and base. Additionally, two control networks were derived so that the majority of MDIS images can be co-registered with sub-pixel accuracy; the larger of the two control networks was also used to create a global digital elevation model. Finally, we describe the image processing and photometric standardization parameters used in the creation of the MDIS advanced products in the PDS, which include seven large-scale mosaics, numerous targeted local mosaics, and a set of digital elevation models ranging in scale from local to global.

  7. Ultrafast image-based dynamic light scattering for nanoparticle sizing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Wu; Zhang, Jie; Liu, Lili

    An ultrafast sizing method for nanoparticles, called UIDLS (Ultrafast Image-based Dynamic Light Scattering), is proposed. The method makes use of the intensity fluctuation of light scattered from nanoparticles in Brownian motion, similar to the conventional DLS method. The difference in the experimental system is that the light scattered by the nanoparticles is received by an image sensor instead of a photomultiplier tube. A novel data processing algorithm is proposed to directly obtain the correlation coefficient between two images at a certain time interval (from microseconds to milliseconds) by employing a two-dimensional image correlation algorithm. This coefficient has been proved to be a monotonic function of the particle diameter. Samples of standard latex particles (79/100/352/482/948 nm) were measured to validate the proposed method. Measurement accuracy higher than 90% was found, with standard deviations of less than 3%. A nanosilver particle sample with a nominal size of 20 ± 2 nm and a polymethyl methacrylate emulsion sample of unknown size were also tested using the UIDLS method. The measured results were 23.2 ± 3.0 nm and 246.1 ± 6.3 nm, respectively, which is substantially consistent with the transmission electron microscope results. Since the time for acquisition of two successive images has been reduced to less than 1 ms and the data processing takes about 10 ms, the total measuring time can be dramatically reduced from hundreds of seconds to tens of milliseconds, which provides the potential for real-time and in situ nanoparticle sizing.
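
    The core UIDLS quantity is a correlation coefficient between two speckle images separated by a short time interval. A plain Pearson correlation between two frames, as sketched below, is one straightforward way to compute such a coefficient; the paper's exact two-dimensional correlation algorithm is not reproduced here.

```python
import numpy as np

def frame_correlation(img1, img2):
    """Pearson correlation coefficient between two scattered-light images acquired a
    short time apart; for a fixed lag this coefficient varies monotonically with
    particle diameter, which is the basis of the sizing described above."""
    a = img1.astype(float).ravel()
    b = img2.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```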

  8. Corner-point criterion for assessing nonlinear image processing imagers

    NASA Astrophysics Data System (ADS)

    Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory

    2017-10-01

    Range performance modeling of optronic imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. To characterize such processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the direction of the single minority-value pixel among the majority value within a 2×2 pixel block. The evaluation procedure treats the multi-resolution CP transformation of the actual image as the Ground Truth (GT). After spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the CP transformation of the degraded image, in terms of localized PCR over the region of interest. The paper defines this CP criterion and presents the evaluation techniques developed, such as measuring the number of CPs resolved on the target, and the CP transformation and its inverse, which make it possible to reconstruct an image of the perceived CPs. The criterion is then compared with the standard Johnson criterion for the case of linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme that combines two methods: a CP measurement with a real-signature test target for the highly non-linear part (imaging), and conventional methods for the more linear part (displaying). The application to color imaging is proposed, with a discussion of the choice of working color space depending on the type of image enhancement processing used.
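
    The corner-point idea can be illustrated by scanning non-overlapping 2×2 blocks and recording the position of the single minority pixel, as in the sketch below; the block tiling and the corner encoding are assumptions made for illustration.

```python
import numpy as np

def corner_points(binary):
    """For each non-overlapping 2x2 block of a binary image, return the corner index
    (0=top-left, 1=top-right, 2=bottom-left, 3=bottom-right) of the single
    minority-value pixel, or -1 if the block is not a corner point."""
    h, w = binary.shape
    b = binary[:h // 2 * 2, :w // 2 * 2].astype(int)
    blocks = np.stack([b[0::2, 0::2], b[0::2, 1::2],
                       b[1::2, 0::2], b[1::2, 1::2]], axis=-1)
    s = blocks.sum(axis=-1)
    minority_is_one = s == 1            # a single foreground pixel among background
    minority_is_zero = s == 3           # a single background pixel among foreground
    direction = np.full(s.shape, -1)
    direction[minority_is_one] = np.argmax(blocks[minority_is_one], axis=-1)
    direction[minority_is_zero] = np.argmin(blocks[minority_is_zero], axis=-1)
    return direction
```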

  9. Wavelet compression of noisy tomographic images

    NASA Astrophysics Data System (ADS)

    Kappeler, Christian; Mueller, Stefan P.

    1995-09-01

    3D data acquisition is increasingly used in positron emission tomography (PET) to collect a larger fraction of the emitted radiation. A major practical difficulty with data storage and transmission in 3D-PET is the large size of the data sets; a typical dynamic study contains about 200 Mbyte of data. PET images inherently have a high level of photon noise and therefore are usually evaluated after being processed by a smoothing filter. In this work we examined lossy compression schemes under the postulate that they must not induce image modifications exceeding those resulting from low-pass filtering. The standard we refer to is the Hanning filter. Resolution and inhomogeneity serve as figures of merit for quantifying image quality. The images to be compressed are transformed to a wavelet representation using Daubechies-12 wavelets and compressed by thresholding after filtering. Further compression by quantization and coding is not included here. Achievable compression factors at this level of processing are thirty to fifty.
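
    A minimal sketch of wavelet-domain thresholding with Daubechies-12 wavelets is given below using PyWavelets; the decomposition depth, threshold value and thresholding mode are illustrative, and quantization and coding are omitted, as in the abstract.

```python
# Sketch of wavelet-domain thresholding with 'db12' (Daubechies-12) using PyWavelets.
import numpy as np
import pywt

def wavelet_threshold(image, thresh, wavelet="db12", levels=4):
    """Decompose the image, hard-threshold the detail coefficients and reconstruct.
    Parameters are illustrative; the paper's threshold selection is not shown."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=levels)
    new_coeffs = [coeffs[0]]                          # keep the approximation band
    for detail in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(d, thresh, mode="hard") for d in detail))
    return pywt.waverec2(new_coeffs, wavelet)
```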

  10. Navigable points estimation for mobile robots using binary image skeletonization

    NASA Astrophysics Data System (ADS)

    Martinez S., Fernando; Jacinto G., Edwar; Montiel A., Holman

    2017-02-01

    This paper describes the use of image skeletonization to estimate all the navigable points inside a mobile robot navigation scene. Those points are used to compute a valid navigation path with standard methods. The main idea is to find the middle and extreme points of the obstacles in the scene, taking the robot size into account, and to create a map of navigable points in order to reduce the amount of information passed to the planning algorithm. Those points are located by means of the skeletonization of a binary image of the obstacles and the scene background, along with some other digital image processing algorithms. The proposed algorithm automatically gives a variable number of navigable points per obstacle, depending on the complexity of its shape. It is also shown how the algorithm's parameters can be changed to adjust the final number of resulting key points. The results shown here were obtained by applying different kinds of digital image processing algorithms to static scenes.
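
    A compact version of skeleton-based waypoint extraction can be sketched with scikit-image, as below; the obstacle inflation, the subsampling step and the library choice are assumptions rather than the paper's implementation.

```python
import numpy as np
from skimage.morphology import skeletonize

def navigable_points(free_space, step=10):
    """free_space: boolean map, True where the (robot-size-inflated) space is free.
    Returns skeleton pixel coordinates subsampled every `step` pixels as candidate
    waypoints for a standard path planner."""
    skel = skeletonize(free_space)
    ys, xs = np.nonzero(skel)
    return list(zip(ys[::step], xs[::step]))
```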

  11. Robust flood area detection using a L-band synthetic aperture radar: Preliminary application for Florida, the U.S. affected by Hurricane Irma

    NASA Astrophysics Data System (ADS)

    Nagai, H.; Ohki, M.; Abe, T.

    2017-12-01

    Crisis response to a hurricane-induced flood requires the rapid provision of a flood map covering a broad region. However, there are no standard threshold values for automatic flood identification from pre- and post-event images obtained by satellite-based synthetic aperture radars (SARs), which can hamper prompt data provision for operational use. Furthermore, a single pre-flood SAR image does not always represent potential water surfaces and river flows, especially in tropical flat lands that are strongly influenced by the seasonal precipitation cycle. We are therefore developing a new flood mapping method using PALSAR-2, an L-band SAR, which is less affected by temporal surface changes. Specifically, a mean-value image and a standard-deviation image are calculated from a series of pre-flood SAR images. These are combined with a post-flood SAR image to obtain the normalized backscatter amplitude difference (NoBADi): the difference between the post-flood image and the mean-value image is divided by the standard-deviation image to emphasize anomalous water extents. Flooded areas are then automatically obtained from the NoBADi images as low-value pixels, while avoiding potential permanent water surfaces. We applied this method to PALSAR-2 images acquired on Sept. 8, 10, and 12, 2017, covering flooded areas in the central Dominican Republic and in west Florida, the U.S., affected by Hurricane Irma. The resulting flood outlines were validated against flooded areas manually delineated from high-resolution optical satellite images, showing higher consistency and less uncertainty than previous methods (i.e., a simple pre- and post-flood difference and pre- and post-flood coherence changes). The NoBADi method has great potential for producing reliable flood maps for future flood hazards, unhampered by cloud cover, seasonal surface changes, and "casual" thresholds in the flood identification process.
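
    The NoBADi index as described above is the per-pixel difference between the post-flood image and the pre-flood temporal mean, normalized by the pre-flood temporal standard deviation; a minimal sketch follows (the flood threshold shown is illustrative).

```python
import numpy as np

def nobadi(post_flood, pre_flood_stack, eps=1e-6):
    """Normalized backscatter amplitude difference, per pixel.
    pre_flood_stack: (n_images, H, W) stack of pre-flood backscatter amplitude images.
    post_flood: (H, W) post-flood image on the same grid."""
    mean = pre_flood_stack.mean(axis=0)
    std = pre_flood_stack.std(axis=0)
    return (post_flood - mean) / (std + eps)

# Flood candidates are strongly negative pixels (open water lowers L-band backscatter
# over land); the threshold below is illustrative, not the paper's value.
# flood_mask = nobadi(post, stack) < -2.0
```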

  12. Advances in the development of an imaging device for plaque measurement in the area of the carotid artery.

    PubMed

    Ličev, Lačezar; Krumnikl, Michal; Škuta, Jaromír; Babiuch, Marek; Farana, Radim

    2014-03-04

    This paper describes the advances in the development and subsequent testing of an imaging device for three-dimensional ultrasound measurement of atherosclerotic plaque in the carotid artery. The embolization from the atherosclerotic carotid plaque is one of the most common causes of ischemic stroke and, therefore, we consider the measurement of the plaque as extremely important. The paper describes the proposed hardware for enhancing the standard ultrasonic probe to provide a possibility of accurate probe positioning and synchronization with the cardiac activity, allowing the precise plaque measurements that were impossible with the standard equipment. The synchronization signal is derived from the output signal of the patient monitor (electrocardiogram (ECG)), processed by a microcontroller-based system, generating the control commands for the linear motion moving the probe. The controlling algorithm synchronizes the movement with the ECG waveform to obtain clear images not disturbed by the heart activity.

  13. A New Standard for Assessing the Performance of High Contrast Imaging Systems

    NASA Astrophysics Data System (ADS)

    Jensen-Clem, Rebecca; Mawet, Dimitri; Gomez Gonzalez, Carlos A.; Absil, Olivier; Belikov, Ruslan; Currie, Thayne; Kenworthy, Matthew A.; Marois, Christian; Mazoyer, Johan; Ruane, Garreth; Tanner, Angelle; Cantalloube, Faustine

    2018-01-01

    As planning for the next generation of high contrast imaging instruments (e.g., WFIRST, HabEx, and LUVOIR, TMT-PFI, EELT-EPICS) matures and second-generation ground-based extreme adaptive optics facilities (e.g., VLT-SPHERE, Gemini-GPI) finish their principal surveys, it is imperative that the performance of different designs, post-processing algorithms, observing strategies, and survey results be compared in a consistent, statistically robust framework. In this paper, we argue that the current industry standard for such comparisons—the contrast curve—falls short of this mandate. We propose a new figure of merit, the “performance map,” that incorporates three fundamental concepts in signal detection theory: the true positive fraction, the false positive fraction, and the detection threshold. By supplying a theoretical basis and recipe for generating the performance map, we hope to encourage the widespread adoption of this new metric across subfields in exoplanet imaging.

  14. De-identification of Medical Images with Retention of Scientific Research Value

    PubMed Central

    Maffitt, David R.; Smith, Kirk E.; Kirby, Justin S.; Clark, Kenneth W.; Freymann, John B.; Vendt, Bruce A.; Tarbox, Lawrence R.; Prior, Fred W.

    2015-01-01

    Online public repositories for sharing research data allow investigators to validate existing research or perform secondary research without the expense of collecting new data. Patient data made publicly available through such repositories may constitute a breach of personally identifiable information if not properly de-identified. Imaging data are especially at risk because some intricacies of the Digital Imaging and Communications in Medicine (DICOM) format are not widely understood by researchers. If imaging data still containing protected health information (PHI) were released through a public repository, a number of different parties could be held liable, including the original researcher who collected and submitted the data, the original researcher’s institution, and the organization managing the repository. To minimize these risks through proper de-identification of image data, one must understand what PHI exists and where that PHI resides, and one must have the tools to remove PHI without compromising the scientific integrity of the data. DICOM public elements are defined by the DICOM Standard. Modality vendors use private elements to encode acquisition parameters that are not yet defined by the DICOM Standard, or the vendor may not have updated an existing software product after DICOM defined new public elements. Because private elements are not standardized, a common de-identification practice is to delete all private elements, removing scientifically useful data as well as PHI. Researchers and publishers of imaging data can use the tools and process described in this article to de-identify DICOM images according to current best practices. ©RSNA, 2015 PMID:25969931
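
    A minimal sketch of the blunt de-identification practice discussed above (deleting all private elements and blanking direct identifiers) is shown below using pydicom; it illustrates the trade-off the article cautions about, and it is not the tooling described in the article.

```python
import pydicom

def basic_deidentify(path_in, path_out):
    """Blunt DICOM de-identification: drop all private elements and blank a few
    direct identifiers. As the article notes, deleting every private element also
    removes scientifically useful acquisition parameters."""
    ds = pydicom.dcmread(path_in)
    ds.remove_private_tags()
    for keyword in ("PatientName", "PatientBirthDate", "PatientAddress", "PatientID"):
        if keyword in ds:
            ds.data_element(keyword).value = ""
    ds.save_as(path_out)
```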

  15. Automatic recognition of light source from color negative films using sorting classification techniques

    NASA Astrophysics Data System (ADS)

    Sanger, Demas S.; Haneishi, Hideaki; Miyake, Yoichi

    1995-08-01

    This paper proposes a simple and automatic method for recognizing the light source from various color negative film brands by means of digital image processing. First, the image obtained from a negative is stretched based on standardized scaling factors, and then the dominant color component among the red, green, and blue components of the stretched image is extracted. The dominant color component serves as the discriminator for the recognition. The experimental results verify that any one of the three techniques can recognize the light source from negatives of any single film brand and across all brands, with more than 93.2% and 96.6% correct recognition, respectively. This method is significant for the automation of color quality control in color reproduction from color negative film in mass processing and printing machines.

  16. UWGSP7: a real-time optical imaging workstation

    NASA Astrophysics Data System (ADS)

    Bush, John E.; Kim, Yongmin; Pennington, Stan D.; Alleman, Andrew P.

    1995-04-01

    With the development of UWGSP7, the University of Washington Image Computing Systems Laboratory has a real-time workstation for continuous-wave (cw) optical reflectance imaging. Recent discoveries in optical science and imaging research have suggested potential practical use of the technology as a medical imaging modality and identified the need for a machine to support these applications in real time. The UWGSP7 system was developed to provide researchers with a high-performance, versatile tool for use in optical imaging experiments, with the eventual goal of bringing the technology into clinical use. One of several major applications of cw optical reflectance imaging is tumor imaging, which uses a light-absorbing dye that preferentially sequesters in tumor tissue. This property could be used to locate tumors and to identify tumor margins intraoperatively. Cw optical reflectance imaging consists of illuminating a target with a band-limited light source and monitoring the light transmitted by or reflected from the target. While continuously illuminating the target, a control image is acquired and stored. A dye is injected into the subject and a sequence of data images is acquired and processed. The data images are aligned with the control image and then subtracted to obtain a signal representing the change in optical reflectance over time. This signal can be enhanced by digital image processing and displayed in pseudo-color. This type of emerging imaging technique requires a computer system that is versatile and adaptable. The UWGSP7 uses a VESA local bus PC as a host computer running the Windows NT operating system and includes ICSL-developed add-on boards for image acquisition and processing. The image acquisition board is used to digitize and format the analog signal from the input device into digital frames and to average frames into images. To accommodate different input devices, the camera interface circuitry is designed as a small mezzanine board that supports the RS-170 standard. The image acquisition board is connected to the image-processing board through a direct-connect port which provides a 66 Mbytes/s channel independent of the system bus. The image-processing board uses the Texas Instruments TMS320C80 Multimedia Video Processor chip. This chip is capable of 2 billion operations per second, providing the UWGSP7 with the capability to perform real-time image processing functions such as median filtering, convolution and contrast enhancement. This processing power allows interactive analysis of experiments, compared with the current practice of off-line processing and analysis. Due to its flexibility and programmability, the UWGSP7 can be adapted to various research needs in intraoperative optical imaging.

  17. Visualization of GPM Standard Products at the Precipitation Processing System (PPS)

    NASA Astrophysics Data System (ADS)

    Kelley, O.

    2010-12-01

    Many of the standard data products for the Global Precipitation Measurement (GPM) constellation of satellites will be generated at and distributed by the Precipitation Processing System (PPS) at NASA Goddard. PPS will provide several means to visualize these data products. These visualization tools will be used internally by PPS analysts to investigate potential anomalies in the data files, and these tools will also be made available to researchers. Currently, a free data viewer called THOR, the Tool for High-resolution Observation Review, can be downloaded and installed on Linux, Windows, and Mac OS X systems. THOR can display swath and grid products, and to a limited degree, the low-level data packets that the satellite itself transmits to the ground system. Observations collected since the 1997 launch of the Tropical Rainfall Measuring Mission (TRMM) satellite can be downloaded from the PPS FTP archive, and in the future, many of the GPM standard products will also be available from this FTP site. To provide easy access to this 80 terabyte and growing archive, PPS currently operates an on-line ordering tool called STORM that provides geographic and time searches, browse-image display, and the ability to order user-specified subsets of standard data files. Prior to the anticipated 2013 launch of the GPM core satellite, PPS will expand its visualization tools by integrating an on-line version of THOR within STORM to provide on-the-fly image creation of any portion of an archived data file at a user-specified degree of magnification. PPS will also provide OpenDAP access to the data archive and OGC WMS image creation of both swath and gridded data products. During the GPM era, PPS will continue to provide realtime globally-gridded 3-hour rainfall estimates to the public in a compact binary format (3B42RT) and in a GIS format (2-byte TIFF images + ESRI WorldFiles).

  18. Smartphone-based low light detection for bioluminescence application

    USDA-ARS?s Scientific Manuscript database

    We report a smartphone-based device and an associated image-processing algorithm to maximize the sensitivity of standard smartphone cameras so that they can detect the presence of single-digit picowatts (pW) of radiant flux. The proposed hardware and software, called bioluminescent-based analyte quantitation ...

  19. Imaging standards for smart cards

    NASA Astrophysics Data System (ADS)

    Ellson, Richard N.; Ray, Lawrence A.

    1996-02-01

    "Smart cards" are plastic cards the size of credit cards which contain integrated circuits for the storage of digital information. The applications of these cards for image storage has been growing as card data capacities have moved from tens of bytes to thousands of bytes. This has prompted the recommendation of standards by the X3B10 committee of ANSI for inclusion in ISO standards for card image storage of a variety of image data types including digitized signatures and color portrait images. This paper will review imaging requirements of the smart card industry, challenges of image storage for small memory devices, card image communications, and the present status of standards. The paper will conclude with recommendations for the evolution of smart card image standards towards image formats customized to the image content and more optimized for smart card memory constraints.

  1. Looking back to inform the future: The role of cognition in forest disturbance characterization from remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Bianchetti, Raechel Anne

    Remotely sensed images have become a ubiquitous part of our daily lives. From novice users aiding in search and rescue missions with tools such as TomNod, to trained analysts synthesizing disparate data to address complex problems like climate change, imagery has become central to geospatial problem solving. Expert image analysts are continually faced with rapidly developing sensor technologies and software systems. In response to these cognitively demanding environments, expert analysts develop specialized knowledge and analytic skills to address increasingly complex problems. This study identifies the knowledge, skills, and analytic goals of expert image analysts tasked with the identification of land cover and land use change. The analysts participating in this research are currently working as part of a national-level analysis of land use change and are well versed in the use of TimeSync, forest science, and image analysis. The results of this study benefit current analysts by improving their awareness of the mental processes they use during image interpretation. The study can also be generalized to understand the types of knowledge and visual cues that analysts use when reasoning with imagery for purposes beyond land use change studies. Here, a Cognitive Task Analysis framework is used to organize evidence from qualitative knowledge elicitation methods for characterizing the cognitive aspects of the TimeSync image analysis process. Using a combination of content analysis, diagramming, semi-structured interviews, and observation, the study highlights the perceptual and cognitive elements of expert remote sensing interpretation. Results show that image analysts perform several standard cognitive processes but flexibly employ these processes in response to various contextual cues. Expert image analysts' ability to think flexibly during their analysis was directly related to their amount of image analysis experience. Additionally, results show that the basic Image Interpretation Elements continue to be important despite technological augmentation of the interpretation process. These results are used to derive a set of design guidelines for developing geovisual analytic tools and training to support image analysis.

  2. Role of Retinocortical Processing in Spatial Vision

    DTIC Science & Technology

    1989-06-01

    ...its inverse transform. These are even-symmetric functions. Odd-symmetric Gabor functions would also be required for image coding (Daugman, 1987), but ... spectrum square; thus its horizontal and vertical scale factors may differ by a power of 2. Since the inverse transform undoes this distortion, it has ... (Figure 3: standard form of even Gabor filter) ... order to inverse-transform correctly. We used Gabor functions with the standard shape of Daugman's "polar...
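
    The excerpt concerns even-symmetric (cosine-phase) Gabor functions; the sketch below generates one such 2-D filter. The parameterization is a common textbook form and is not taken from the report.

```python
import numpy as np

def even_gabor(size, wavelength, sigma, theta=0.0):
    """Even-symmetric 2-D Gabor filter: a cosine grating under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # coordinate along orientation theta
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)   # cosine phase -> even symmetry
    g = envelope * carrier
    return g - g.mean()                               # remove the DC response
```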

  3. Functional brain imaging and the induction of traumatic recall: a cross-correlational review between neuroimaging and hypnosis.

    PubMed

    Vermetten, Eric; Douglas Bremner, J

    2004-07-01

    The behavioral and psychophysiological alterations during recall in patients with trauma disorders often resemble phenomena that are seen in hypnosis. In studies of emotional recall, as well as in neuroimaging studies of hypnotic processes, similar brain structures are involved: thalamus, hippocampus, amygdala, medial prefrontal cortex, anterior cingulate cortex. This paper focuses on cross-correlations between traumatic recall and hypnotic responses and reviews correlations between the involvement of brain structures in traumatic recall and the processes that are involved in hypnotic responsiveness. To further improve the uniformity of brain imaging results, specifically for traumatic recall studies, attention is needed to the standardization of hypnotic variables, the isolation of the emotional process of interest (state), and the assessment of trait-related differences.

  4. A new programming metaphor for image processing procedures

    NASA Technical Reports Server (NTRS)

    Smirnov, O. M.; Piskunov, N. E.

    1992-01-01

    Most image processing systems, besides an Application Program Interface (API) which lets users write their own image processing programs, also feature a higher level of programmability. Traditionally, this is a command or macro language, which can be used to build large procedures (scripts) out of simple programs or commands. This approach, a legacy of the teletypewriter, has serious drawbacks. A command language is clumsy when (and if) it attempts to utilize the capabilities of a multitasking or multiprocessor environment, it is barely adequate for real-time data acquisition and processing, it has a fairly steep learning curve, and its user interface is very inefficient, especially when compared to a graphical user interface (GUI) that systems running under X11 or Windows should otherwise be able to provide. All these difficulties stem from one basic problem: a command language is not a natural metaphor for an image processing procedure. A more natural metaphor - an image processing factory - is described in detail. A factory is a set of programs (applications) that execute separate operations on images, connected by pipes that carry data (images and parameters) between them. The programs function concurrently, processing images as they arrive along pipes, and querying the user for whatever other input they need. From the user's point of view, programming (constructing) factories is a lot like playing with LEGO blocks - much more intuitive than writing scripts. Focus is on some of the difficulties of implementing factory support, most notably the design of an appropriate API. It is also shown that factories retain all the functionality of a command language (including loops and conditional branches), while suffering from none of the drawbacks outlined above. Other benefits of factory programming include self-tuning factories and the process of encapsulation, which lets a factory take the shape of a standard application both from the system's and the user's point of view, and thus be used as a component of other factories. A bare-bones prototype of factory programming was implemented under the PcIPS image processing system, and a complete version (on a multitasking platform) is under development.
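
    The factory metaphor (independent stages connected by pipes, each processing images as they arrive) can be approximated in miniature with generator stages, as sketched below; this is an illustration of the dataflow idea only, not the PcIPS prototype, which uses concurrently executing programs and real pipes.

```python
import numpy as np

def source(images):
    """Stage: emit images into the pipeline (stands in for one 'application')."""
    for im in images:
        yield np.asarray(im, dtype=float)

def normalize(stream):
    """Stage: rescale each incoming image to [0, 1] as it arrives."""
    for im in stream:
        lo, hi = im.min(), im.max()
        yield (im - lo) / (hi - lo) if hi > lo else np.zeros_like(im)

def threshold(stream, t=0.5):
    """Stage: binarize each incoming image."""
    for im in stream:
        yield im > t

# Connecting the "pipes": each stage consumes the previous stage's output lazily.
# factory = threshold(normalize(source(list_of_arrays)), t=0.5)
# results = list(factory)
```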

  5. A two-step framework for the registration of HE stained and FTIR images

    NASA Astrophysics Data System (ADS)

    Peñaranda, Francisco; Naranjo, Valery; Verdú, Rafaél.; Lloyd, Gavin R.; Nallala, Jayakrupakar; Stone, Nick

    2016-03-01

    FTIR spectroscopy is an emerging technology with high potential for cancer diagnosis but with particular physical phenomena that require special processing. Little work has been done in the field with the aim of registering hyperspectral Fourier-Transform Infrared (FTIR) spectroscopic images and Hematoxylin and Eosin (HE) stained histological images of contiguous slices of tissue. This registration is necessary to transfer the location of relevant structures that the pathologist may identify in the gold-standard HE images. A two-step registration framework is presented in which a representative gray image extracted from the FTIR hypercube is used as an input. This representative image, which must have a spatial contrast as similar as possible to a gray image obtained from the HE image, is calculated from the spectral variation in the fingerprint region. In the first step of the registration algorithm, a similarity transformation is estimated from interest points, which are automatically detected by the popular SURF algorithm. In the second stage, a variational registration framework defined in the frequency domain compensates for local anatomical variations between both images. After proper tuning of some parameters, the proposed registration framework works in an automated way. The method was tested on 7 samples of colon tissue in different stages of cancer. Very promising qualitative and quantitative results were obtained (a mean correlation ratio of 92.16% with a standard deviation of 3.10%).

  6. Comparison of portable and conventional ultrasound imaging in spinal curvature measurement

    NASA Astrophysics Data System (ADS)

    Yan, Christina; Tabanfar, Reza; Kempston, Michael; Borschneck, Daniel; Ungi, Tamas; Fichtinger, Gabor

    2016-03-01

    PURPOSE: In scoliosis monitoring, tracked ultrasound has been explored as a safer imaging alternative to traditional radiography. The use of ultrasound in spinal curvature measurement requires identification of vertebral landmarks, but bones have reduced visibility in ultrasound imaging, and high-quality ultrasound machines are often expensive and not portable. In this work, we investigate the image quality and measurement accuracy of a low-cost, portable ultrasound machine in comparison to a standard ultrasound machine in scoliosis monitoring. METHODS: Two different kinds of ultrasound machines were tested on three human subjects, using the same position tracker and software. Spinal curves were measured in the same reference coordinate system using both ultrasound machines. Lines were defined by connecting two symmetric landmarks identified on the left and right transverse processes of the same vertebra, and spinal curvature was defined as the transverse process angle between two such lines, projected on the coronal plane. RESULTS: Three healthy volunteers were scanned with both ultrasound configurations. Three experienced observers localized transverse processes as skeletal landmarks and obtained transverse process angles in images obtained from both ultrasounds. The mean difference per measured transverse process angle was 3.00° +/- 2.1°. 94% of transverse processes visualized in the Sonix Touch were also visible in the Telemed. Inter-observer error was 4.5° in the Telemed and 4.3° in the Sonix Touch. CONCLUSION: Price, convenience and accessibility suggest the Telemed to be a viable alternative in scoliosis monitoring; however, further improvements in measurement protocol and image noise reduction must be completed before implementing the Telemed in the clinical setting.

  7. Geometric processing workflow for vertical and oblique hyperspectral frame images collected using UAV

    NASA Astrophysics Data System (ADS)

    Markelin, L.; Honkavaara, E.; Näsi, R.; Nurminen, K.; Hakala, T.

    2014-08-01

    Remote sensing based on unmanned airborne vehicles (UAVs) is a rapidly developing field of technology. UAVs enable accurate, flexible, low-cost and multiangular measurements of 3D geometric, radiometric, and temporal properties of land and vegetation using various sensors. In this paper we present a geometric processing chain for multiangular measurement system that is designed for measuring object directional reflectance characteristics in a wavelength range of 400-900 nm. The technique is based on a novel, lightweight spectral camera designed for UAV use. The multiangular measurement is conducted by collecting vertical and oblique area-format spectral images. End products of the geometric processing are image exterior orientations, 3D point clouds and digital surface models (DSM). This data is needed for the radiometric processing chain that produces reflectance image mosaics and multiangular bidirectional reflectance factor (BRF) observations. The geometric processing workflow consists of the following three steps: (1) determining approximate image orientations using Visual Structure from Motion (VisualSFM) software, (2) calculating improved orientations and sensor calibration using a method based on self-calibrating bundle block adjustment (standard photogrammetric software) (this step is optional), and finally (3) creating dense 3D point clouds and DSMs using Photogrammetric Surface Reconstruction from Imagery (SURE) software that is based on semi-global-matching algorithm and it is capable of providing a point density corresponding to the pixel size of the image. We have tested the geometric processing workflow over various targets, including test fields, agricultural fields, lakes and complex 3D structures like forests.

  8. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine.

    PubMed

    Bao, Shunxing; Weitendorf, Frederick D; Plassard, Andrew J; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A

    2017-02-11

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging.

  9. Theoretical and empirical comparison of big data image processing with Apache Hadoop and Sun Grid Engine

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2017-03-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and nonrelevant for medical imaging.

  10. Exploring the complementarity of THz pulse imaging and DCE-MRIs: Toward a unified multi-channel classification and a deep learning framework.

    PubMed

    Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S

    2016-12-01

    We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlying commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly, taking into consideration advances in multi-resolution analysis and model-based fractional-order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided and the importance of preserving textural information is highlighted. Feature extraction and classification methods taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions are presented. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Contour sensitive saliency and depth application in image retargeting

    NASA Astrophysics Data System (ADS)

    Lu, Hongju; Yue, Pengfei; Zhao, Yanhui; Liu, Rui; Fu, Yuanbin; Zheng, Yuanjie; Cui, Jia

    2018-04-01

    Image retargeting requires preserving important information and limiting edge distortion while increasing or decreasing image size. The major existing content-aware methods perform well. However, two problems should be addressed: the slight distortion that appears at object edges and the structure distortion in non-salient areas. According to psychological theories, people evaluate image quality based on multi-level judgments and comparisons between different areas, considering both image content and image structure. The paper proposes a new standard: structure preservation in non-salient areas. Observation and image analysis show that blur (slight blur) is generally present at the edges of objects. This blur feature is used to estimate the depth cue, named the blur depth descriptor. It can be used in the saliency computation to obtain a balanced image retargeting result. In order to keep the structure information in non-salient areas, a salient edge map is introduced into the seam carving process, instead of field-based saliency computation. The derivative saliency from the x- and y-directions avoids redundant energy seams around salient objects that cause structure distortion. Comparison experiments between classical approaches and ours demonstrate the feasibility of our algorithm.
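
    For readers unfamiliar with the underlying operation, a minimal sketch of plain seam carving (a gradient-magnitude energy map plus a dynamic-programming vertical seam) is shown below; it omits the paper's blur-depth descriptor and salient edge map, which would be added on top of such an energy map:

```python
import numpy as np

def remove_vertical_seam(gray):
    """Remove one minimum-energy vertical seam from a 2D grayscale image.
    Energy is the sum of absolute x- and y-gradients (the paper adds
    saliency and blur-depth terms on top of such an energy map)."""
    energy = np.abs(np.gradient(gray, axis=0)) + np.abs(np.gradient(gray, axis=1))
    h, w = energy.shape
    cost = energy.copy()
    # Dynamic programming: cumulative minimum energy from top to bottom
    for i in range(1, h):
        left = np.roll(cost[i - 1], 1);   left[0] = np.inf
        right = np.roll(cost[i - 1], -1); right[-1] = np.inf
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    # Backtrack the seam and delete one pixel per row
    seam = np.zeros(h, dtype=int)
    seam[-1] = np.argmin(cost[-1])
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + np.argmin(cost[i, lo:hi])
    return np.array([np.delete(gray[i], seam[i]) for i in range(h)])

img = np.random.rand(64, 64)
smaller = remove_vertical_seam(img)   # shape (64, 63)
```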

  12. A physiology-based parametric imaging method for FDG-PET data

    NASA Astrophysics Data System (ADS)

    Scussolini, Mara; Garbarino, Sara; Sambuceti, Gianmario; Caviglia, Giacomo; Piana, Michele

    2017-12-01

    Parametric imaging is a compartmental approach that processes nuclear imaging data to estimate the spatial distribution of the kinetic parameters governing tracer flow. The present paper proposes a novel and efficient computational method for parametric imaging which is potentially applicable to several compartmental models of diverse complexity and which is effective in the determination of the parametric maps of all kinetic coefficients. We consider applications to [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) data and analyze the two-compartment catenary model describing the standard FDG metabolization by a homogeneous tissue and the three-compartment non-catenary model representing the renal physiology. We show uniqueness theorems for both models. The proposed imaging method starts from the reconstructed FDG-PET images of tracer concentration and preliminarily applies image processing algorithms for noise reduction and image segmentation. The optimization procedure solves pixel-wise the non-linear inverse problem of determining the kinetic parameters from dynamic concentration data through a regularized Gauss-Newton iterative algorithm. The reliability of the method is validated against synthetic data, for the two-compartment system, and experimental real data of murine models, for the renal three-compartment system.
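
    The pixel-wise fitting step is described as a regularized Gauss-Newton iteration; a generic, hedged sketch of such an iteration for an arbitrary forward model (with simple Tikhonov regularization and a toy exponential model standing in for the paper's compartmental model) might look like this:

```python
import numpy as np

def regularized_gauss_newton(f, jac, y, k0, lam=1e-2, n_iter=20):
    """Generic regularized Gauss-Newton iteration for fitting parameters k
    of a forward model f(k) to measured data y.

    f   : callable, f(k) -> model prediction (same shape as y)
    jac : callable, jac(k) -> Jacobian matrix (len(y) x len(k))
    lam : Tikhonov regularization weight (assumed; the paper's choice may differ)
    """
    k = np.asarray(k0, dtype=float)
    for _ in range(n_iter):
        r = y - f(k)                              # residual
        J = jac(k)
        # Solve the regularized normal equations (J^T J + lam I) dk = J^T r
        A = J.T @ J + lam * np.eye(k.size)
        dk = np.linalg.solve(A, J.T @ r)
        k = k + dk
    return k

# Toy usage: fit an exponential washout model c(t) = k1 * exp(-k2 * t)
t = np.linspace(0, 60, 30)
true = np.array([0.8, 0.05])
y = true[0] * np.exp(-true[1] * t) + 0.01 * np.random.randn(t.size)
f = lambda k: k[0] * np.exp(-k[1] * t)
jac = lambda k: np.column_stack([np.exp(-k[1] * t), -k[0] * t * np.exp(-k[1] * t)])
k_est = regularized_gauss_newton(f, jac, y, k0=[0.5, 0.1])
```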

  13. A portable high-definition electronic endoscope based on embedded system

    NASA Astrophysics Data System (ADS)

    Xu, Guang; Wang, Liqiang; Xu, Jin

    2012-11-01

    This paper presents a low-power, portable high-definition (HD) electronic endoscope based on a Cortex-A8 embedded system. A 1/6 inch CMOS image sensor is used to acquire HD images with 1280 × 800 pixels. The camera interface (CAMIF) of the A8 is designed to support images of various sizes and multiple video input formats such as the ITU-R BT.601/656 standards. Image rotation (90 degrees clockwise) and image processing functions are achieved by the CAMIF. The decode engine of the processor plays back or records HD video at 30 frames per second, and the built-in HDMI interface transmits high-definition images to an external display. Image processing procedures such as demosaicking, color correction and auto white balance are realized on the A8 platform. Other functions are selected through OSD settings. An LCD panel displays the real-time images. Snapshot pictures or compressed videos are saved to an SD card or transmitted to a computer through a USB interface. The size of the camera head is 4 × 4.8 × 15 mm with more than 3 meters of working distance. The whole endoscope system can be powered by a lithium battery, with the advantages of small size, low cost and portability.
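
    Among the image processing steps listed, auto white balance is easy to illustrate; the gray-world sketch below is one common choice and is not claimed to be the algorithm actually running on the A8 platform:

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Simple gray-world auto white balance: scale each channel so that its
    mean equals the mean gray level of the whole image."""
    rgb = rgb.astype(np.float64)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gray_mean = channel_means.mean()
    gains = gray_mean / channel_means          # per-channel gain
    balanced = rgb * gains                     # broadcast over H x W x 3
    return np.clip(balanced, 0, 255).astype(np.uint8)

frame = (np.random.rand(800, 1280, 3) * 255).astype(np.uint8)
out = gray_world_white_balance(frame)
```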

  14. An Automated Measurement of Ciliary Beating Frequency using a Combined Optical Flow and Peak Detection.

    PubMed

    Kim, Woojae; Han, Tae Hwa; Kim, Hyun Jun; Park, Man Young; Kim, Ku Sang; Park, Rae Woong

    2011-06-01

    The mucociliary transport system is a major defense mechanism of the respiratory tract. The performance of mucous transportation in the nasal cavity can be represented by the ciliary beating frequency (CBF). This study proposes a novel method to measure CBF by using optical flow. To obtain objective estimates of CBF from video images, an automated computer-based image processing technique is developed, based on optical flow for image processing and peak detection for signal processing. We compare the measurement accuracy of the method in various combinations of image processing (optical flow versus difference image) and signal processing (fast Fourier transform [FFT] versus peak detection [PD]). The digital high-speed video method, with a manual count of CBF during slow-motion video playback, is the gold standard in CBF measurement. We obtained a total of fifty recordings of ciliated sinonasal epithelium from the Department of Otolaryngology to measure CBF. The ciliated sinonasal epithelium was recorded at 50-100 frames per second using a charge-coupled device camera with an inverted microscope at a magnification of ×1,000. The mean square errors and variances for each method were 1.24, 0.84 Hz; 11.8, 2.63 Hz; 3.22, 1.46 Hz; and 3.82, 1.53 Hz for optical flow (OF) + PD, OF + FFT, difference image (DI) + PD, and DI + FFT, respectively. Of the four methods, PD using optical flow showed the best performance for measuring the CBF of nasal mucosa. The proposed method was able to measure CBF more objectively and efficiently than what is currently possible.
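
    A hedged sketch of the OF + PD combination, using OpenCV's Farneback optical flow and SciPy peak detection, is shown below; the parameter values and synthetic frames are illustrative assumptions, not the study's settings:

```python
import numpy as np
import cv2
from scipy.signal import find_peaks

def ciliary_beat_frequency(frames, fps):
    """Estimate CBF from a grayscale image sequence: the mean optical-flow
    magnitude per frame gives a motion signal, whose peaks are counted.

    frames : sequence of 2D uint8 images, fps : acquisition frame rate."""
    motion = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        motion.append(np.linalg.norm(flow, axis=2).mean())
    motion = np.asarray(motion)
    # Roughly one peak per ciliary beat; a minimum distance suppresses noise peaks
    peaks, _ = find_peaks(motion, distance=max(int(fps / 30), 1))
    duration = len(motion) / fps
    return len(peaks) / duration               # beats per second (Hz)

# Toy usage with synthetic frames (real use would load the recorded video)
frames = [(np.random.rand(64, 64) * 255).astype(np.uint8) for _ in range(100)]
cbf = ciliary_beat_frequency(frames, fps=100)
```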

  15. Ethical implications of digital images for teaching and learning purposes: an integrative review.

    PubMed

    Kornhaber, Rachel; Betihavas, Vasiliki; Baber, Rodney J

    2015-01-01

    Digital photography has simplified the process of capturing and utilizing medical images. The process of taking high-quality digital photographs has been recognized as efficient, timely, and cost-effective. In particular, the evolution of smartphone and comparable technologies has become a vital component in the teaching and learning of health care professionals. However, ethical standards in relation to digital photography for teaching and learning have not always been of the highest standard. The inappropriate utilization of digital images within the health care setting has the capacity to compromise patient confidentiality and increase the risk of litigation. Therefore, the aim of this review was to investigate the literature concerning the ethical implications for health professionals utilizing digital photography for teaching and learning. A literature search was conducted utilizing five electronic databases, PubMed, Embase (Excerpta Medica Database), Cumulative Index to Nursing and Allied Health Literature, Educational Resources Information Center, and Scopus, limited to the English language. Studies that endeavored to evaluate the ethical implications of digital photography for teaching and learning purposes in the health care setting were included. The search strategy identified 514 papers, of which nine were retrieved for full review. Four papers were excluded based on the inclusion criteria, leaving five papers for final analysis. Three key themes were developed: knowledge deficit, consent and beyond, and standards driving scope of practice. The assimilation of evidence in this review suggests that there is value for health professionals utilizing digital photography for teaching purposes in health education. However, there is limited understanding of the processes of obtaining, storing, and using such media for teaching purposes. Disparity related to policy and guideline identification and development in clinical practice was also highlighted. Therefore, the implementation of policy to guide practice requires further research.

  16. BOREAS TE-18, 60-m, Radiometrically Rectified Landsat TM Imagery

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Knapp, David

    2000-01-01

    The BOREAS TE-18 team used a radiometric rectification process to produce standardized DN values for a series of Landsat TM images of the BOREAS SSA and NSA in order to compare images that were collected under different atmospheric conditions. The images for each study area were referenced to an image that had very clear atmospheric qualities. The reference image for the SSA was collected on 02-Sep-1994, while the reference image for the NSA was collected on 21-Jun-1995. The 23 rectified images cover the period of 07-Jul-1985 to 18-Sep-1994 in the SSA and 22-Jun-1984 to 09-Jun-1994 in the NSA. Each of the reference scenes had coincident atmospheric optical thickness measurements made by RSS-11. The radiometric rectification process is described in more detail by Hall et al. (1991). The original Landsat TM data were received from CCRS for use in the BOREAS project. Due to the nature of the radiometric rectification process and copyright issues, the full-resolution (30-m) images may not be publicly distributed. However, this spatially degraded 60-m resolution version of the images may be openly distributed and is available on the BOREAS CD-ROM series. After the radiometric rectification processing, the original data were degraded to a 60-m pixel size from the original 30-m pixel size by averaging the data over a 2- by 2-pixel window. The data are stored in binary image-format files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Activity Archive Center (DAAC).
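
    The spatial degradation step (averaging non-overlapping 2 x 2 pixel windows to go from 30-m to 60-m pixels) can be expressed in a few lines of numpy; the array names and sizes below are illustrative, not taken from the dataset:

```python
import numpy as np

def degrade_2x2(band):
    """Degrade a 30-m band to 60-m pixels by averaging non-overlapping
    2 x 2 windows (rows/cols are trimmed to an even size first)."""
    h, w = band.shape
    band = band[: h - h % 2, : w - w % 2].astype(np.float64)
    return band.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

tm_band = np.random.randint(0, 255, (101, 201)).astype(np.uint8)
tm_60m = degrade_2x2(tm_band)   # shape (50, 100)
```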

  17. Computational aspects of geometric correction data generation in the LANDSAT-D imagery processing

    NASA Technical Reports Server (NTRS)

    Levine, I.

    1981-01-01

    A method is presented for systematic and geodetic correction data calculation. It is based on the representation of image distortions as a sum of nominal distortions and linear effects caused by variation of the spacecraft position and attitude variables from their nominals. The method may be used for both MSS and TM image data and is incorporated into the processing by means of mostly offline calculations. Modeling shows that the maximal errors of the method are of the order of 5 m at the worst point in a frame; the standard deviations of the average errors are less than 0.8 m.

  18. UkrVO astronomical WEB services

    NASA Astrophysics Data System (ADS)

    Mazhaev, A.

    2017-02-01

    The Ukraine Virtual Observatory (UkrVO) has been a member of the International Virtual Observatory Alliance (IVOA) since 2011. The virtual observatory (VO) is not a magic solution to all problems of data storage and processing, but it provides certain standards for building the infrastructure of an astronomical data center. The astronomical databases facilitate data mining and offer users easy access to observation metadata, images within the celestial sphere and results of image processing. The astronomical web services (AWS) of UkrVO give users handy tools for selecting data from large astronomical catalogues for a relatively small region of interest in the sky. Examples of AWS usage are shown.

  19. Automatic selection of landmarks in T1-weighted head MRI with regression forests for image registration initialization.

    PubMed

    Wang, Jianing; Liu, Yuan; Noble, Jack H; Dawant, Benoit M

    2017-10-01

    Medical image registration establishes a correspondence between images of biological structures, and it is at the core of many applications. Commonly used deformable image registration methods depend on a good preregistration initialization. We develop a learning-based method to automatically find a set of robust landmarks in three-dimensional MR image volumes of the head. These landmarks are then used to compute a thin plate spline-based initialization transformation. The process involves two steps: (1) identifying a set of landmarks that can be reliably localized in the images and (2) selecting among them the subset that leads to a good initial transformation. To validate our method, we use it to initialize five well-established deformable registration algorithms that are subsequently used to register an atlas to MR images of the head. We compare our proposed initialization method with a standard approach that involves estimating an affine transformation with an intensity-based approach. We show that for all five registration algorithms the final registration results are statistically better when they are initialized with the method that we propose than when a standard approach is used. The technique that we propose is generic and could be used to initialize nonrigid registration algorithms for other applications.
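
    A thin-plate-spline transform from landmark correspondences can be built, for instance, with SciPy's RBFInterpolator; the following is a generic sketch under that assumption, not the authors' implementation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_transform(src_landmarks, dst_landmarks):
    """Return a function mapping 3D points from the source image space to
    the target image space via a thin-plate spline fitted to landmark pairs."""
    tps = RBFInterpolator(src_landmarks, dst_landmarks,
                          kernel="thin_plate_spline")
    return lambda points: tps(points)

# Toy usage with 6 corresponding landmarks in two volumes
src = np.random.rand(6, 3) * 100
dst = src + np.random.rand(6, 3) * 2      # small simulated deformation
warp = tps_transform(src, dst)
mapped = warp(np.array([[10.0, 20.0, 30.0]]))
```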

  20. Neuroimaging Feature Terminology: A Controlled Terminology for the Annotation of Brain Imaging Features.

    PubMed

    Iyappan, Anandhi; Younesi, Erfan; Redolfi, Alberto; Vrooman, Henri; Khanna, Shashank; Frisoni, Giovanni B; Hofmann-Apitius, Martin

    2017-01-01

    Ontologies and terminologies are used for interoperability of knowledge and data in a standard manner among interdisciplinary research groups. Existing imaging ontologies capture general aspects of the imaging domain as a whole such as methodological concepts or calibrations of imaging instruments. However, none of the existing ontologies covers the diagnostic features measured by imaging technologies in the context of neurodegenerative diseases. Therefore, the Neuro-Imaging Feature Terminology (NIFT) was developed to organize the knowledge domain of measured brain features in association with neurodegenerative diseases by imaging technologies. The purpose is to identify quantitative imaging biomarkers that can be extracted from multi-modal brain imaging data. This terminology attempts to cover measured features and parameters in brain scans relevant to disease progression. In this paper, we demonstrate the systematic retrieval of measured indices from literature and how the extracted knowledge can be further used for disease modeling that integrates neuroimaging features with molecular processes.

  1. Factor analysis for delineation of organ structures, creation of in- and output functions, and standardization of multicenter kinetic modeling

    NASA Astrophysics Data System (ADS)

    Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.

    1999-05-01

    PET imaging can quantify metabolic processes in vivo; this requires the measurement of an input function, which is invasive and labor intensive. A non-invasive, semi-automated, image-based method of input function generation would be efficient, patient-friendly, and allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as a processing tool for the definition of temporally changing structures in the field of view. FA has been proposed earlier, but the perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients of kinetic parameters obtained with factor and plasma input functions were high. Linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data, eliminating the need for blood sampling. Output (tissue) functions can be simultaneously generated. The method is simple, requires no sophisticated operator interaction and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.
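
    Factor analysis of dynamic image series is often approximated in practice by a non-negative matrix factorization of the (voxels x time) matrix; the sketch below uses scikit-learn's NMF purely as a stand-in for the paper's FA procedure, with synthetic data and assumed array shapes:

```python
import numpy as np
from sklearn.decomposition import NMF

def extract_factor_curves(dynamic_frames, n_factors=3):
    """Rough stand-in for factor analysis of dynamic PET data: factorize the
    (voxels x time) matrix into factor images and factor time-activity curves.

    dynamic_frames : 4D array (time, z, y, x) of a dynamic acquisition.
    Returns (factor_images, factor_tacs)."""
    t = dynamic_frames.shape[0]
    data = dynamic_frames.reshape(t, -1).T            # voxels x time
    model = NMF(n_components=n_factors, init="nndsvda", max_iter=500)
    factor_images = model.fit_transform(np.clip(data, 0, None))  # voxels x factors
    factor_tacs = model.components_                   # factors x time
    return (factor_images.T.reshape((n_factors,) + dynamic_frames.shape[1:]),
            factor_tacs)

frames = np.random.rand(12, 8, 16, 16)               # synthetic dynamic series
images, tacs = extract_factor_curves(frames, n_factors=2)
```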

  2. Automated measurements of metabolic tumor volume and metabolic parameters in lung PET/CT imaging

    NASA Astrophysics Data System (ADS)

    Orologas, F.; Saitis, P.; Kallergi, M.

    2017-11-01

    Patients with lung tumors or inflammatory lung disease could greatly benefit in terms of treatment and follow-up from PET/CT quantitative imaging, namely measurements of metabolic tumor volume (MTV), standardized uptake values (SUVs) and total lesion glycolysis (TLG). The purpose of this study was the development of an unsupervised or partially supervised algorithm using standard image processing tools for measuring MTV, SUV, and TLG from lung PET/CT scans. Automated metabolic lesion volume and metabolic parameter measurements were achieved through a 5-step algorithm: (i) the segmentation of the lung areas on the CT slices, (ii) the registration of the CT-segmented lung regions on the PET images to define the anatomical boundaries of the lungs on the functional data, (iii) the segmentation of the regions of interest (ROIs) on the PET images based on adaptive thresholding and clinical criteria, (iv) the estimation of the number of pixels and pixel intensities in the PET slices of the segmented ROIs, (v) the estimation of MTV, SUVs, and TLG from the previous step and DICOM header data. Whole-body PET/CT scans of patients with sarcoidosis were used for training and testing the algorithm. Lung area segmentation on the CT slices was better achieved with semi-supervised techniques that reduced false positive detections significantly. Lung segmentation results agreed with the lung volumes published in the literature, while the agreement between experts and the algorithm in the segmentation of the lesions was around 88%. Segmentation results depended on the image resolution selected for processing. The clinical parameters SUV (mean, max, or peak) and TLG, estimated from the segmented ROIs and DICOM header data, provided a way to correlate imaging data to clinical and demographic data. In conclusion, automated MTV, SUV, and TLG measurements offer powerful analysis tools in PET/CT imaging of the lungs. Custom-made algorithms are often a better approach than the manufacturer's general analysis software at much lower cost. Relatively simple processing techniques could lead to customized, unsupervised or partially supervised methods that can successfully perform the desired analysis and adapt to the specific disease requirements.
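
    Once the lesion ROI has been segmented on the SUV-converted PET volume, steps (iv)-(v) reduce to simple arithmetic; the sketch below uses toy values for the voxel volume and threshold, and assumes the SUV conversion from DICOM header data has already been done:

```python
import numpy as np

def metabolic_parameters(suv_volume, roi_mask, voxel_volume_ml):
    """Compute MTV, SUVmean, SUVmax and TLG from a PET volume already
    converted to SUV units and a binary lesion mask.

    suv_volume      : 3D array of SUV values
    roi_mask        : boolean 3D array marking the segmented lesion
    voxel_volume_ml : voxel volume in millilitres (from DICOM pixel spacing)"""
    suv_in_roi = suv_volume[roi_mask]
    mtv = suv_in_roi.size * voxel_volume_ml          # metabolic tumor volume (ml)
    suv_mean = float(suv_in_roi.mean())
    suv_max = float(suv_in_roi.max())
    tlg = suv_mean * mtv                             # total lesion glycolysis
    return {"MTV_ml": mtv, "SUVmean": suv_mean, "SUVmax": suv_max, "TLG": tlg}

pet = np.random.rand(64, 64, 32) * 5                 # synthetic SUV volume
mask = pet > 4.0                                     # toy threshold-based ROI
params = metabolic_parameters(pet, mask, voxel_volume_ml=0.064)
```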

  3. The Socio-Moral Image Database (SMID): A novel stimulus set for the study of social, moral and affective processes.

    PubMed

    Crone, Damien L; Bode, Stefan; Murawski, Carsten; Laham, Simon M

    2018-01-01

    A major obstacle for the design of rigorous, reproducible studies in moral psychology is the lack of suitable stimulus sets. Here, we present the Socio-Moral Image Database (SMID), the largest standardized moral stimulus set assembled to date, containing 2,941 freely available photographic images, representing a wide range of morally (and affectively) positive, negative and neutral content. The SMID was validated with over 820,525 individual judgments from 2,716 participants, with normative ratings currently available for all images on affective valence and arousal, moral wrongness, and relevance to each of the five moral values posited by Moral Foundations Theory. We present a thorough analysis of the SMID regarding (1) inter-rater consensus, (2) rating precision, and (3) breadth and variability of moral content. Additionally, we provide recommendations for use aimed at efficient study design and reproducibility, and outline planned extensions to the database. We anticipate that the SMID will serve as a useful resource for psychological, neuroscientific and computational (e.g., natural language processing or computer vision) investigations of social, moral and affective processes. The SMID images, along with associated normative data and additional resources are available at https://osf.io/2rqad/.

  4. PET/CT (and CT) instrumentation, image reconstruction and data transfer for radiotherapy planning.

    PubMed

    Sattler, Bernhard; Lee, John A; Lonsdale, Markus; Coche, Emmanuel

    2010-09-01

    Positron emission tomography in combination with CT in hybrid, cross-modality imaging systems (PET/CT) gains more and more importance as a part of the treatment-planning procedure in radiotherapy. Positron emission tomography (PET), as an integral part of nuclear medicine imaging and a non-invasive imaging technique, offers the visualization and quantification of pre-selected tracer metabolism. In combination with the structural information from CT, this molecular imaging technique has great potential to support and improve the outcome of the treatment-planning procedure prior to radiotherapy. Through the choice of the PET tracer, a variety of different metabolic processes can be visualized: first and foremost the glucose metabolism of a tissue, as well as, for instance, hypoxia or cell proliferation. This paper covers the system characteristics of hybrid PET/CT systems. Acquisition and processing protocols are described in general, along with modifications to cope with the special needs of radiooncology. These start with the different position of the patient on a special table top, continue with the use of the same fixation material as used for positioning of the patient in radiooncology during simulation and irradiation, and lead to special processing protocols that include the delineation of the volumes that are subject to treatment planning and irradiation (PTV, GTV, CTV, etc.). General CT acquisition and processing parameters as well as the use of contrast enhancement in CT are described. The possible risks and pitfalls the investigator could face during the hybrid-imaging procedure are explained and listed. The interdisciplinary use of different imaging modalities implies an increase in the volume of data created. These data need to be stored and communicated quickly, safely and correctly. Therefore, the DICOM standard provides objects and classes for this purpose (DICOM RT). Furthermore, the standard DICOM objects and classes for nuclear medicine (NM, PT) and computed tomography (CT) are used to communicate the actual image data created by the modalities. Care must be taken over data security, especially when transferring data across the (network) borders of different hospitals. Overall, the most important precondition for successful integration of functional imaging in RT treatment planning is goal-oriented, close and thorough communication between nuclear medicine and radiotherapy departments on all levels of interaction (personnel, imaging protocols, GTV delineation, and selection of the data transfer method). Copyright 2010 European Society for Therapeutic Radiology and Oncology and European Association of Nuclear Medicine. Published by Elsevier Ireland Ltd. All rights reserved.

  5. Image analysis for maintenance of coating quality in nickel electroplating baths--real time control.

    PubMed

    Vidal, M; Amigo, J M; Bro, R; van den Berg, F; Ostra, M; Ubide, C

    2011-11-07

    The aim of this paper is to show how it is possible to extract analytical information from images acquired with a flatbed scanner and make use of this information for real-time control of a nickel plating process. Digital images of steel sheets plated in a nickel bath are used to follow the process under degradation of specific additives. Dedicated software has been developed for making the obtained results accessible to process operators. This includes obtaining the RGB image, selecting the red channel data exclusively, calculating the histogram of the red channel data, and calculating the mean colour value (MCV) and the standard deviation of the red channel data. The MCV is then used by the software to determine the concentration of the additives Supreme Plus Brightner (SPB) and SA-1 (for confidentiality reasons, the chemical contents cannot be further detailed) present in the bath (these two additives degrade and their concentration changes during the process). Finally, the software informs the operator when the bath is generating plating of unsuitable quality and suggests the amount of SPB and SA-1 to be added in order to recover the original plating quality. Copyright © 2011 Elsevier B.V. All rights reserved.
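
    The red-channel statistics described (histogram, MCV, standard deviation) can be reproduced with Pillow and numpy as sketched below; the file name is hypothetical, and the calibration relating MCV to SPB/SA-1 concentrations is proprietary to the paper and therefore not shown:

```python
import numpy as np
from PIL import Image

def red_channel_statistics(image_path, bins=256):
    """Compute the histogram, mean colour value (MCV) and standard
    deviation of the red channel of a scanned plating image."""
    rgb = np.asarray(Image.open(image_path).convert("RGB"))
    red = rgb[:, :, 0].ravel().astype(np.float64)
    hist, _ = np.histogram(red, bins=bins, range=(0, 255))
    return hist, red.mean(), red.std()

# Hypothetical usage (the file name is illustrative):
# hist, mcv, sd = red_channel_statistics("plated_sheet_scan.png")
# mcv would then be fed to the proprietary calibration relating MCV to
# SPB and SA-1 concentrations.
```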

  6. The Use of Multidimensional Image-Based Analysis to Accurately Monitor Cell Growth in 3D Bioreactor Culture

    PubMed Central

    Baradez, Marc-Olivier; Marshall, Damian

    2011-01-01

    The transition from traditional culture methods towards bioreactor based bioprocessing to produce cells in commercially viable quantities for cell therapy applications requires the development of robust methods to ensure the quality of the cells produced. Standard methods for measuring cell quality parameters such as viability provide only limited information making process monitoring and optimisation difficult. Here we describe a 3D image-based approach to develop cell distribution maps which can be used to simultaneously measure the number, confluency and morphology of cells attached to microcarriers in a stirred tank bioreactor. The accuracy of the cell distribution measurements is validated using in silico modelling of synthetic image datasets and is shown to have an accuracy >90%. Using the cell distribution mapping process and principal component analysis we show how cell growth can be quantitatively monitored over a 13 day bioreactor culture period and how changes to manufacture processes such as initial cell seeding density can significantly influence cell morphology and the rate at which cells are produced. Taken together, these results demonstrate how image-based analysis can be incorporated in cell quality control processes facilitating the transition towards bioreactor based manufacture for clinical grade cells. PMID:22028809

  7. The use of multidimensional image-based analysis to accurately monitor cell growth in 3D bioreactor culture.

    PubMed

    Baradez, Marc-Olivier; Marshall, Damian

    2011-01-01

    The transition from traditional culture methods towards bioreactor based bioprocessing to produce cells in commercially viable quantities for cell therapy applications requires the development of robust methods to ensure the quality of the cells produced. Standard methods for measuring cell quality parameters such as viability provide only limited information making process monitoring and optimisation difficult. Here we describe a 3D image-based approach to develop cell distribution maps which can be used to simultaneously measure the number, confluency and morphology of cells attached to microcarriers in a stirred tank bioreactor. The accuracy of the cell distribution measurements is validated using in silico modelling of synthetic image datasets and is shown to have an accuracy >90%. Using the cell distribution mapping process and principal component analysis we show how cell growth can be quantitatively monitored over a 13 day bioreactor culture period and how changes to manufacture processes such as initial cell seeding density can significantly influence cell morphology and the rate at which cells are produced. Taken together, these results demonstrate how image-based analysis can be incorporated in cell quality control processes facilitating the transition towards bioreactor based manufacture for clinical grade cells.

  8. Effect of various digital processing algorithms on the measurement accuracy of endodontic file length.

    PubMed

    Kal, Betül Ilhan; Baksi, B Güniz; Dündar, Nesrin; Sen, Bilge Hakan

    2007-02-01

    The aim of this study was to compare the accuracy of endodontic file lengths after application of various image enhancement modalities. Endodontic files of three different ISO sizes were inserted in 20 single-rooted extracted permanent mandibular premolar teeth and standardized images were obtained. Original digital images were then enhanced using five processing algorithms. Six evaluators measured the length of each file on each image. The measurements from each processing algorithm and each file size were compared using repeated measures ANOVA and Bonferroni tests (P = 0.05). Paired t test was performed to compare the measurements with the true lengths of the files (P = 0.05). All of the processing algorithms provided significantly shorter measurements than the true length of each file size (P < 0.05). The threshold enhancement modality produced significantly higher mean error values (P < 0.05), while there was no significant difference among the other enhancement modalities (P > 0.05). Decrease in mean error value was observed with increasing file size (P < 0.05). Invert, contrast/brightness and edge enhancement algorithms may be recommended for accurate file length measurements when utilizing storage phosphor plates.

  9. In-Line Monitoring of a Pharmaceutical Pan Coating Process by Optical Coherence Tomography.

    PubMed

    Markl, Daniel; Hannesschläger, Günther; Sacher, Stephan; Leitner, Michael; Buchsbaum, Andreas; Pescod, Russel; Baele, Thomas; Khinast, Johannes G

    2015-08-01

    This work demonstrates a new in-line measurement technique for monitoring the coating growth of randomly moving tablets in a pan coating process. In-line quality control is performed by an optical coherence tomography (OCT) sensor allowing nondestructive and contact-free acquisition of cross-section images of film coatings in real time. The coating thickness can be determined directly from these OCT images and no chemometric calibration models are required for quantification. Coating thickness measurements are extracted from the images by a fully automated algorithm. Results of the in-line measurements are validated using off-line OCT images, thickness calculations from tablet dimension measurements, and weight gain measurements. Validation measurements are performed on sample tablets periodically removed from the process during production. Reproducibility of the results is demonstrated by three batches produced under the same process conditions. OCT enables a multiple direct measurement of the coating thickness on individual tablets rather than providing the average coating thickness of a large number of tablets. This gives substantially more information about the coating quality, that is, intra- and intertablet coating variability, than standard quality control methods. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
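
    Extracting a coating thickness from a single OCT depth profile (A-scan) amounts to locating the two coating interfaces and converting their pixel separation into a geometric distance; the sketch below assumes the axial pixel size and coating refractive index are known, and is not the paper's automated algorithm:

```python
import numpy as np
from scipy.signal import find_peaks

def coating_thickness_um(ascan, pixel_size_um, refractive_index=1.5):
    """Estimate coating thickness from a single OCT A-scan by taking the
    two strongest reflections as the coating interfaces."""
    peaks, props = find_peaks(ascan, prominence=0.1 * ascan.max())
    if peaks.size < 2:
        return np.nan                          # interfaces not detected
    # Two most prominent peaks = air/coating and coating/tablet interfaces
    top2 = peaks[np.argsort(props["prominences"])[-2:]]
    optical_distance = abs(top2[1] - top2[0]) * pixel_size_um
    return optical_distance / refractive_index  # geometric thickness

ascan = np.zeros(512)
ascan[[100, 160]] = [1.0, 0.8]                 # synthetic interface reflections
thickness = coating_thickness_um(ascan, pixel_size_um=1.0)
```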

  10. ACIR: automatic cochlea image registration

    NASA Astrophysics Data System (ADS)

    Al-Dhamari, Ibraheem; Bauer, Sabine; Paulus, Dietrich; Lissek, Friedrich; Jacob, Roland

    2017-02-01

    Efficient Cochlear Implant (CI) surgery requires prior knowledge of the cochlea's size and its characteristics. This information helps to select suitable implants for different patients. To get these measurements, a segmentation method for cochlea medical images is needed. An important pre-processing step for good cochlea segmentation is efficient image registration. The cochlea's small size and complex structure, in addition to the different resolutions and head positions during imaging, pose a big challenge for the automated registration of the different image modalities. In this paper, an Automatic Cochlea Image Registration (ACIR) method for multi-modal human cochlea images is proposed. This method is based on using small areas that have clear structures from both input images instead of registering the complete image. It uses the Adaptive Stochastic Gradient Descent Optimizer (ASGD) and Mattes's Mutual Information metric (MMI) to estimate 3D rigid transform parameters. State-of-the-art medical image registration optimizers published over the last two years are studied and compared quantitatively using the standard Dice Similarity Coefficient (DSC). ACIR requires only 4.86 seconds on average to align cochlea images automatically and to put all the modalities in the same spatial locations without human intervention. The source code is based on the tool elastix and is provided for free as a 3D Slicer plugin. Another contribution of this work is a proposed public cochlea standard dataset which can be downloaded for free from a public XNAT server.
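
    The Dice Similarity Coefficient used for the quantitative comparison is straightforward to compute from two binary segmentation masks; a minimal sketch with synthetic masks:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2 |A intersect B| / (|A| + |B|), from 0 (no overlap) to 1."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), dtype=bool); b[15:45, 15:45] = True
print(dice_coefficient(a, b))
```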

  11. Digital information management: a progress report on the National Digital Mammography Archive

    NASA Astrophysics Data System (ADS)

    Beckerman, Barbara G.; Schnall, Mitchell D.

    2002-05-01

    Digital mammography creates very large images, which require new approaches to storage, retrieval, management, and security. The National Digital Mammography Archive (NDMA) project, funded by the National Library of Medicine (NLM), is developing a limited testbed that demonstrates the feasibility of a national breast imaging archive, with access to prior exams; patient information; computer aids for image processing, teaching, and testing tools; and security components to ensure confidentiality of patient information. There will be significant benefits to patients and clinicians in terms of accessible data with which to make a diagnosis and to researchers performing studies on breast cancer. Mammography was chosen for the project because standards were already available for digital images, report formats, and structures. New standards have been created for communications protocols between devices, the front-end portal, and the archive. NDMA is a distributed computing concept that provides for sharing and access across corporate entities. Privacy, auditing, and patient consent are all integrated into the system. Five sites, Universities of Pennsylvania, Chicago, North Carolina and Toronto, and BWXT Y12, are connected through high-speed networks to demonstrate functionality. We will review progress, including technical challenges, innovative research and development activities, standards and protocols being implemented, and potential benefits to healthcare systems.

  12. Radiomics: Images Are More than Pictures, They Are Data

    PubMed Central

    Kinahan, Paul E.; Hricak, Hedvig

    2016-01-01

    In the past decade, the field of medical image analysis has grown exponentially, with an increased number of pattern recognition tools and an increase in data set sizes. These advances have facilitated the development of processes for high-throughput extraction of quantitative features that result in the conversion of images into mineable data and the subsequent analysis of these data for decision support; this practice is termed radiomics. This is in contrast to the traditional practice of treating medical images as pictures intended solely for visual interpretation. Radiomic data contain first-, second-, and higher-order statistics. These data are combined with other patient data and are mined with sophisticated bioinformatics tools to develop models that may potentially improve diagnostic, prognostic, and predictive accuracy. Because radiomics analyses are intended to be conducted with standard of care images, it is conceivable that conversion of digital images to mineable data will eventually become routine practice. This report describes the process of radiomics, its challenges, and its potential power to facilitate better clinical decision making, particularly in the care of patients with cancer. PMID:26579733

  13. Paediatric x-ray radiation dose reduction and image quality analysis.

    PubMed

    Martin, L; Ruddlesden, R; Makepeace, C; Robinson, L; Mistry, T; Starritt, H

    2013-09-01

    Collaboration of multiple staff groups has resulted in significant reduction in the risk of radiation-induced cancer from radiographic x-ray exposure during childhood. In this study at an acute NHS hospital trust, a preliminary audit identified initial exposure factors. These were compared with European and UK guidance, leading to the introduction of new factors that were in compliance with European guidance on x-ray tube potentials. Image quality was assessed using standard anatomical criteria scoring, and visual grading characteristics analysis assessed the impact on image quality of changes in exposure factors. This analysis determined the acceptability of gradual radiation dose reduction below the European and UK guidance levels. Chest and pelvis exposures were optimised, achieving dose reduction for each age group, with 7%-55% decrease in critical organ dose. Clinicians confirmed diagnostic image quality throughout the iterative process. Analysis of images acquired with preliminary and final exposure factors indicated an average visual grading analysis result of 0.5, demonstrating equivalent image quality. The optimisation process and final radiation doses are reported for Carestream computed radiography to aid other hospitals in minimising radiation risks to children.

  14. Refining enamel thickness measurements from B-mode ultrasound images.

    PubMed

    Hua, Jeremy; Chen, Ssu-Kuang; Kim, Yongmin

    2009-01-01

    Dental erosion has been growing increasingly prevalent with the rise in consumption of heavy starches, sugars, coffee, and acidic beverages. In addition, various disorders, such as gastroesophageal reflux disease (GERD), are associated with rapid rates of tooth erosion. The measurement of enamel thickness would be important for dentists to assess the progression of enamel loss from all forms of erosion, attrition, and abrasion. Characterizing enamel loss is currently done with various subjective indexes that can be interpreted in different ways by different dentists. Ultrasound has been utilized since the 1960s to determine internal tooth structure, but with mixed results. Via image processing and enhancement, we were able to refine B-mode dental ultrasound images for more accurate enamel thickness measurements. The mean difference between the measured thickness of the occlusal enamel from ultrasound images and corresponding gold standard CT images improved from 0.55 mm to 0.32 mm with image processing (p = 0.033). The difference also improved from 0.62 to 0.53 mm at the buccal/lingual enamel surfaces, but not significantly (p = 0.38).

  15. A comprehensive neuropsychological mapping battery for functional magnetic resonance imaging.

    PubMed

    Karakas, Sirel; Baran, Zeynel; Ceylan, Arzu Ozkan; Tileylioglu, Emre; Tali, Turgut; Karakas, Hakki Muammer

    2013-11-01

    Existing batteries for FMRI do not precisely meet the criteria for comprehensive mapping of cognitive functions within minimum data acquisition times using standard scanners and head coils. The goal was to develop a battery of neuropsychological paradigms for FMRI that can also be used in other brain imaging techniques and behavioural research. Participants were 61 healthy, young adult volunteers (48 females and 13 males, mean age: 22.25 ± 3.39 years) from the university community. The battery included 8 paradigms for basic (visual, auditory, sensory-motor, emotional arousal) and complex (language, working memory, inhibition/interference control, learning) cognitive functions. Imaging was performed using standard functional imaging capabilities (1.5-T MR scanner, standard head coil). Structural and functional data series were analysed using Brain Voyager QX2.9 and Statistical Parametric Mapping-8. For basic processes, activation centres for individuals were within a distance of 3-11 mm of the group centres of the target regions and for complex cognitive processes, between 7 mm and 15 mm. Based on fixed-effect and random-effects analyses, the distance between the activation centres was 0-4 mm. There was spatial variability between individual cases; however, as shown by the distances between the centres found with fixed-effect and random-effects analyses, the coordinates for individual cases can be used to represent those of the group. The findings show that the neuropsychological brain mapping battery described here can be used in basic science studies that investigate the relationship of the brain to the mind and also as functional localiser in clinical studies for diagnosis, follow-up and pre-surgical mapping. © 2013.

  16. Computer-assisted image analysis to quantify daily growth rates of broiler chickens.

    PubMed

    De Wet, L; Vranken, E; Chedad, A; Aerts, J M; Ceunen, J; Berckmans, D

    2003-09-01

    1. The objective was to investigate the possibility of detecting daily body weight changes of broiler chickens with computer-assisted image analysis. 2. The experiment included 50 broiler chickens reared under commercial conditions. Ten out of 50 chickens were randomly selected and video recorded (upper view) 18 times during the 42-d growing period. The number of surface and periphery pixels from the images was used to derive a relationship between body dimension and live weight. 3. The relative error in weight estimation, expressed in terms of the standard deviation of the residuals from image surface data was 10%, while it was found to be 15% for the image periphery data. 4. Image-processing systems could be developed to assist the farmer in making important management and marketing decisions.
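
    The relationship between image-derived pixel counts and live weight can be illustrated with an ordinary least-squares fit; the numbers below are synthetic and serve only to show the idea, they are not the study's data:

```python
import numpy as np

# Synthetic example: surface-pixel count vs. measured live weight (g)
surface_pixels = np.array([12000, 15500, 19000, 23000, 27500, 32000])
live_weight_g = np.array([350, 520, 710, 930, 1180, 1450])

# Fit live weight as a linear function of the surface-pixel count
slope, intercept = np.polyfit(surface_pixels, live_weight_g, deg=1)
predicted = slope * surface_pixels + intercept

# Relative error of the fit, analogous to the study's 10% / 15% figures
relative_error = np.std(live_weight_g - predicted) / live_weight_g.mean()
print(f"weight ~ {slope:.3f} * pixels + {intercept:.1f}, "
      f"relative error {100 * relative_error:.1f}%")
```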

  17. Depth resolved hyperspectral imaging spectrometer based on structured light illumination and Fourier transform interferometry

    PubMed Central

    Choi, Heejin; Wadduwage, Dushan; Matsudaira, Paul T.; So, Peter T.C.

    2014-01-01

    A depth resolved hyperspectral imaging spectrometer can provide depth resolved imaging both in the spatial and the spectral domain. Images acquired through a standard imaging Fourier transform spectrometer do not have the depth-resolution. By post processing the spectral cubes (x, y, λ) obtained through a Sagnac interferometer under uniform illumination and structured illumination, spectrally resolved images with depth resolution can be recovered using structured light illumination algorithms such as the HiLo method. The proposed scheme is validated with in vitro specimens including fluorescent solution and fluorescent beads with known spectra. The system is further demonstrated in quantifying spectra from 3D resolved features in biological specimens. The system has demonstrated depth resolution of 1.8 μm and spectral resolution of 7 nm respectively. PMID:25360367

  18. Geometric registration of remotely sensed data with SAMIR

    NASA Astrophysics Data System (ADS)

    Gianinetto, Marco; Barazzetti, Luigi; Dini, Luigi; Fusiello, Andrea; Toldo, Roberto

    2015-06-01

    The commercial market offers several software packages for the registration of remotely sensed data through standard one-to-one image matching. Although very rapid and simple, this strategy does not take into consideration all the interconnections among the images of a multi-temporal data set. This paper presents a new scientific software package, called Satellite Automatic Multi-Image Registration (SAMIR), that extends the traditional registration approach towards multi-image global processing. Tests carried out with high-resolution optical (IKONOS) and high-resolution radar (COSMO-SkyMed) data showed that SAMIR can improve the registration phase with a more rigorous and robust workflow without initial approximations, user interaction, or limitations on spatial/spectral data size. The validation highlighted a sub-pixel accuracy in image co-registration for the considered imaging technologies, including optical and radar imagery.

  19. Application and further development of diffusion based 2D chemical imaging techniques in the rhizosphere

    NASA Astrophysics Data System (ADS)

    Hoefer, Christoph; Santner, Jakob; Borisov, Sergey; Kreuzeder, Andreas; Wenzel, Walter; Puschenreiter, Markus

    2015-04-01

    Two-dimensional chemical imaging of root processes refers to novel in situ methods to investigate and map solutes at high spatial resolution (sub-mm). The visualization of these solutes reveals new insights into soil biogeochemistry and root processes. We derive chemical images by using data from DGT-LA-ICP-MS (Diffusive Gradients in Thin Films and Laser Ablation Inductively Coupled Plasma Mass Spectrometry) and POS (Planar Optode Sensors). Both technologies have shown promising results when applied in aqueous environments but need to be refined and improved for imaging at the soil-plant interface. Co-localized mapping using combined DGT and POS technologies and the development of new gel combinations are our focus. DGTs are smart, thin (<0.4 mm) hydrogels containing a binding resin for the targeted analytes (e.g. trace metals, phosphate, sulphide or radionuclides). The measurement principle is passive and diffusion based. The analytes present diffuse into the gel and are bound by the resin, which acts as a zero sink. After application, DGTs are retrieved, dried, and analysed using LA-ICP-MS. The data are then normalized by an internal standard (e.g. 13C), calibrated using in-house standards, and chemical images of the target area are plotted using imaging software. POS are, similar to DGT, thin sensor foils containing a fluorophore coating specific to the target analyte. The measurement principle is based on excitation of the fluorophore at a specific wavelength and emission of the fluorophore depending on the presence of the analyte. The emitted signal is captured using optical filters and a DSLR camera. While DGT analysis is destructive, POS measurements can be performed continuously during the application. Both semi-quantitative techniques allow in situ application to visualize chemical processes directly at the soil-plant interface. Here, we present a summary of results from rhizotron experiments with different plants in metal-contaminated and agricultural soils.

  20. Using collective expert judgements to evaluate quality measures of mass spectrometry images.

    PubMed

    Palmer, Andrew; Ovchinnikova, Ekaterina; Thuné, Mikael; Lavigne, Régis; Guével, Blandine; Dyatlov, Andrey; Vitek, Olga; Pineau, Charles; Borén, Mats; Alexandrov, Theodore

    2015-06-15

    Imaging mass spectrometry (IMS) is a maturating technique of molecular imaging. Confidence in the reproducible quality of IMS data is essential for its integration into routine use. However, the predominant method for assessing quality is visual examination, a time consuming, unstandardized and non-scalable approach. So far, the problem of assessing the quality has only been marginally addressed and existing measures do not account for the spatial information of IMS data. Importantly, no approach exists for unbiased evaluation of potential quality measures. We propose a novel approach for evaluating potential measures by creating a gold-standard set using collective expert judgements upon which we evaluated image-based measures. To produce a gold standard, we engaged 80 IMS experts, each to rate the relative quality between 52 pairs of ion images from MALDI-TOF IMS datasets of rat brain coronal sections. Experts' optional feedback on their expertise, the task and the survey showed that (i) they had diverse backgrounds and sufficient expertise, (ii) the task was properly understood, and (iii) the survey was comprehensible. A moderate inter-rater agreement was achieved with Krippendorff's alpha of 0.5. A gold-standard set of 634 pairs of images with accompanying ratings was constructed and showed a high agreement of 0.85. Eight families of potential measures with a range of parameters and statistical descriptors, giving 143 in total, were evaluated. Both signal-to-noise and spatial chaos-based measures performed highly with a correlation of 0.7 to 0.9 with the gold standard ratings. Moreover, we showed that a composite measure with the linear coefficients (trained on the gold standard with regularized least squares optimization and lasso) showed a strong linear correlation of 0.94 and an accuracy of 0.98 in predicting which image in a pair was of higher quality. The anonymized data collected from the survey and the Matlab source code for data processing can be found at: https://github.com/alexandrovteam/IMS_quality. © The Author 2015. Published by Oxford University Press.

  1. Using collective expert judgements to evaluate quality measures of mass spectrometry images

    PubMed Central

    Palmer, Andrew; Ovchinnikova, Ekaterina; Thuné, Mikael; Lavigne, Régis; Guével, Blandine; Dyatlov, Andrey; Vitek, Olga; Pineau, Charles; Borén, Mats; Alexandrov, Theodore

    2015-01-01

    Motivation: Imaging mass spectrometry (IMS) is a maturating technique of molecular imaging. Confidence in the reproducible quality of IMS data is essential for its integration into routine use. However, the predominant method for assessing quality is visual examination, a time consuming, unstandardized and non-scalable approach. So far, the problem of assessing the quality has only been marginally addressed and existing measures do not account for the spatial information of IMS data. Importantly, no approach exists for unbiased evaluation of potential quality measures. Results: We propose a novel approach for evaluating potential measures by creating a gold-standard set using collective expert judgements upon which we evaluated image-based measures. To produce a gold standard, we engaged 80 IMS experts, each to rate the relative quality between 52 pairs of ion images from MALDI-TOF IMS datasets of rat brain coronal sections. Experts’ optional feedback on their expertise, the task and the survey showed that (i) they had diverse backgrounds and sufficient expertise, (ii) the task was properly understood, and (iii) the survey was comprehensible. A moderate inter-rater agreement was achieved with Krippendorff’s alpha of 0.5. A gold-standard set of 634 pairs of images with accompanying ratings was constructed and showed a high agreement of 0.85. Eight families of potential measures with a range of parameters and statistical descriptors, giving 143 in total, were evaluated. Both signal-to-noise and spatial chaos-based measures performed highly with a correlation of 0.7 to 0.9 with the gold standard ratings. Moreover, we showed that a composite measure with the linear coefficients (trained on the gold standard with regularized least squares optimization and lasso) showed a strong linear correlation of 0.94 and an accuracy of 0.98 in predicting which image in a pair was of higher quality. Availability and implementation: The anonymized data collected from the survey and the Matlab source code for data processing can be found at: https://github.com/alexandrovteam/IMS_quality. Contact: theodore.alexandrov@embl.de PMID:26072506

  2. Platform for Postprocessing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don

    2008-01-01

    Taking advantage of the similarities that exist among all waveform-based non-destructive evaluation (NDE) methods, a common software platform has been developed containing multiple signal- and image-processing techniques for waveforms and images. The NASA NDE Signal and Image Processing software has been developed using the latest versions of LabVIEW and its associated Advanced Signal Processing and Vision Toolkits. The software is usable on a PC with Windows XP and Windows Vista. The software has been designed with a commercial-grade interface in which two main windows, Waveform Window and Image Window, are displayed if the user chooses a waveform file to display. Within these two main windows, most actions are chosen through logically conceived run-time menus. The Waveform Window has plots for both the raw time-domain waves and their frequency-domain transformations (fast Fourier transform and power spectral density). The Image Window shows the C-scan image formed from information of the time-domain waveform (such as peak amplitude) or its frequency-domain transformation at each scan location. The user also has the ability to open an image, a series of images, or a simple X-Y paired data set in text format. Each of the Waveform and Image Windows contains menus from which to perform many user actions. An option exists to use raw waves obtained directly from a scan, or waves after deconvolution if the system wave response is provided. Two types of deconvolution, time-based subtraction or inverse-filter, can be performed to arrive at a deconvolved wave set. Additionally, the menu on the Waveform Window allows preprocessing of waveforms prior to image formation, scaling and display of waveforms, formation of different types of images (including non-standard types such as velocity), gating of portions of waves prior to image formation, and several other miscellaneous and specialized operations. The menu available on the Image Window allows many further image processing and analysis operations, some of which are found in commercially available image-processing software programs (such as Adobe Photoshop), and some that are not (removing outliers, B-scan information, region-of-interest analysis, line profiles, and precision feature measurements).
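
    The "inverse-filter" deconvolution option mentioned above can be illustrated with a regularized division in the frequency domain; this Python sketch is only an analogue of the idea, not the LabVIEW implementation, and the regularization constant is an assumption:

```python
import numpy as np

def inverse_filter_deconvolution(measured, system_response, eps=1e-3):
    """Deconvolve the system wave response from a measured waveform by
    regularized division in the frequency domain."""
    n = len(measured)
    M = np.fft.rfft(measured, n)
    H = np.fft.rfft(system_response, n)
    # The small eps avoids division by near-zero spectral components
    D = M * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(D, n)

# Toy usage: a pulse blurred by a known response, then recovered
t = np.arange(256)
response = np.exp(-((t - 20) ** 2) / 30.0)
clean = np.zeros(256); clean[100] = 1.0
measured = np.convolve(clean, response, mode="full")[:256]
recovered = inverse_filter_deconvolution(measured, response)
```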

  3. X-ray mask and method for providing same

    DOEpatents

    Morales, Alfredo M [Pleasanton, CA; Skala, Dawn M [Fremont, CA

    2004-09-28

    The present invention describes a method for fabricating an x-ray mask tool which can achieve pattern features having lateral dimensions of less than 1 micron. The process uses a thin photoresist and a standard lithographic mask to transfer a trace image pattern onto the surface of a silicon wafer by exposing and developing the resist. The exposed portion of the silicon substrate is then anisotropically etched to provide an etched image of the trace image pattern consisting of a series of channels in the silicon having a high depth-to-width aspect ratio. These channels are then filled by depositing a metal such as gold to provide an inverse image of the trace image and thereby provide a robust x-ray mask tool.

  4. X-ray mask and method for providing same

    DOEpatents

    Morales, Alfredo M.; Skala, Dawn M.

    2002-01-01

    The present invention describes a method for fabricating an x-ray mask tool which can achieve pattern features having lateral dimensions of less than 1 micron. The process uses a thin photoresist and a standard lithographic mask to transfer a trace image pattern onto the surface of a silicon wafer by exposing and developing the resist. The exposed portion of the silicon substrate is then anisotropically etched to provide an etched image of the trace image pattern consisting of a series of channels in the silicon having a high depth-to-width aspect ratio. These channels are then filled by depositing a metal such as gold to provide an inverse image of the trace image and thereby provide a robust x-ray mask tool.

  5. The radiographic anatomy of the normal ovine digit, the metacarpophalangeal and metatarsophalangeal joints.

    PubMed

    Duncan, Jennifer S; Singer, Ellen R; Devaney, Jane; Oultram, Joanne W H; Walby, Anna J; Lester, Bridie R; Williams, Helen J

    2013-03-01

    The aim of this project was to develop a detailed, accessible set of reference images of the normal radiographic anatomy of the ovine digit up to and including the metacarpo/metatarsophalangeal joints. The lower front and hind limbs of 5 Lleyn ewes were radiographed using portable radiography equipment, a digital image processor, and standard projections. Twenty images, illustrating the normal radiographic anatomy of the limb, were selected, labelled, and presented along with a detailed description and corresponding images of the bony skeleton. These images are intended to assist veterinary surgeons, veterinary students, and veterinary researchers by enabling understanding of the normal anatomy of the ovine lower limb and allowing comparison with the abnormal.

  6. Anatomical based registration of multi-sector x-ray images for panorama reconstruction

    NASA Astrophysics Data System (ADS)

    Ben-Zikri, Yehuda Kfir; Mendez, Stacy; Linte, Cristian A.

    2017-03-01

    Accurate measurement of long limb alignment is an essential stage of the pre-operative planning of realignment surgery. This alignment is quantified according to the hip-knee-ankle (HKA) angle of the mechanical axis of the lower extremity and is measured based on a full-length weight-bearing X-ray or standard computed radiography (CR) image of the patient in standing position. Due to the limited field-of-view of the traditionally employed digital X-ray imaging systems, several sector images are required to capture the posture of a standing individual. These sector images then need to be "stitched" together to reconstruct the standing posture. To eliminate the user-induced variability and time constraints associated with the traditional manual "stitching" protocol, we have created an image processing application that automates the stitching process when no reliable external markers are available in the images, relying only on the most reliable anatomical content of the image. The application starts with a rough segmentation of the tibia, and the sector images are then registered by evaluating the Dice coefficient between the edges of the corresponding bones along the medial edge. The identified translations are then used to register the original sector images into the standing panorama image. To test the robustness of our method, we randomly selected 40 datasets from a database consisting of nearly 100 patient X-ray images acquired for patient screening as part of a multi-site clinical trial. The resulting horizontal and vertical translation values from the automated registration were compared to the homologous translations recorded during the manual panorama generation conducted by a knowledgeable X-ray imaging technician. The mean and standard deviation of the differences for the horizontal translation parameters were -0.27 ± 1.14 mm and 0.31 ± 1.86 mm for the left and right tibia, respectively. The vertical translation differences for the left and right tibia were 1.05 ± 5.24 mm and 1.32 ± 4.77 mm, respectively. For these differences, the expert radiologist reported no difference in the hip-knee-ankle angular assessment.
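
    A minimal sketch of the underlying registration idea (illustrative only, not the authors' pipeline; the masks and search range below are assumptions) is to slide a rough binary segmentation of the tibia from one sector image over the other and keep the translation that maximizes the Dice coefficient:

      # Minimal sketch: Dice-maximizing integer translation between two masks.
      import numpy as np

      def dice(a, b):
          """Dice coefficient between two boolean masks."""
          inter = np.logical_and(a, b).sum()
          return 2.0 * inter / (a.sum() + b.sum() + 1e-9)

      def best_translation(mask_fixed, mask_moving, search=20):
          """Exhaustive search over small integer shifts of the moving mask."""
          best = (0, 0, -1.0)
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  shifted = np.roll(np.roll(mask_moving, dy, axis=0), dx, axis=1)
                  d = dice(mask_fixed, shifted)
                  if d > best[2]:
                      best = (dy, dx, d)
          return best

      # Toy example: a rectangle shifted by (3, -5) pixels is recovered.
      fixed = np.zeros((100, 100), bool); fixed[40:70, 30:50] = True
      moving = np.roll(np.roll(fixed, -3, axis=0), 5, axis=1)
      print(best_translation(fixed, moving))   # -> (3, -5, ~1.0)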

  7. The ACR-NEMA Digital Imaging And Communications Standard: Evolution, Overview And Implementation Considerations

    NASA Astrophysics Data System (ADS)

    Alzner, Edgar; Murphy, Laura

    1986-06-01

    The growing digital nature of radiology images led to a recognition that compatible communication among imaging, display, and data storage devices of different modalities and different manufacturers is necessary. The ACR-NEMA Digital Imaging and Communications Standard Committee was formed to develop a communications standard for radiological images. This standard includes the overall structure of a communication message and the protocols for bi-directional communication using end-to-end connections. The evolution and rationale of the ACR-NEMA Digital Imaging and Communications Standard are described. An overview is provided and some practical implementation considerations are discussed. PACS will become a reality only if the medical community accepts and implements the ACR-NEMA Standard.

  8. Web-based interactive 2D/3D medical image processing and visualization software.

    PubMed

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

    There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, users can access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a purely web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To match the extendibility of current local medical image processing software, each layer is highly independent of the other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web-user-interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources on the client side. The user-interface is designed such that users can select appropriate parameters for practical research and clinical studies. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.

  9. Digital image processing techniques for the analysis of fuel sprays global pattern

    NASA Astrophysics Data System (ADS)

    Zakaria, Rami; Bryanston-Cross, Peter; Timmerman, Brenda

    2017-12-01

    We studied the fuel atomization process of two fuel injectors to be fitted in a new small rotary engine design. The aim was to improve the efficiency of the engine by optimizing the fuel injection system. Fuel sprays were visualised by an optical diagnostic system. Images of fuel sprays were produced under various testing conditions by changing the line pressure, nozzle size, injection frequency, etc. The atomisers were a high-frequency microfluidic dispensing system and a standard low flow-rate fuel injector. A series of image processing procedures were developed in order to acquire information from the laser-scattering images. This paper presents the macroscopic characterisation of jet fuel (JP-8) sprays. We observed the droplet density distribution, tip velocity, and spray-cone angle against line pressure and nozzle size. The analysis was performed for low line pressure (up to 10 bar) and a short injection period (1-2 ms). Local velocity components were measured by applying particle image velocimetry (PIV) to double-exposure images. The discharge velocity was lower in the micro-dispensing nozzle sprays, and the tip penetration slowed down at higher rates compared with the gasoline injector. The PIV test confirmed that the gasoline injector produced sprays with higher-velocity elements at the centre and tip regions.
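
    The local velocity measurement mentioned above can be illustrated with a basic PIV step (a sketch under assumed two-frame data, not the authors' processing chain): the displacement of one interrogation window is taken from the peak of the cross-correlation between the two exposures:

      # Minimal sketch: one-window PIV displacement via cross-correlation peak.
      import numpy as np
      from scipy.signal import fftconvolve

      def window_displacement(win_a, win_b):
          """Displacement (dy, dx) of win_b relative to win_a."""
          a = win_a - win_a.mean()
          b = win_b - win_b.mean()
          corr = fftconvolve(b, a[::-1, ::-1], mode="same")   # cross-correlation
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          center = np.array(corr.shape) // 2
          return np.array(peak) - center

      # Toy example: a particle pattern shifted by (2, -3) pixels between exposures.
      rng = np.random.default_rng(2)
      frame_a = rng.random((32, 32))
      frame_b = np.roll(np.roll(frame_a, 2, axis=0), -3, axis=1)
      print(window_displacement(frame_a, frame_b))   # -> approximately [ 2 -3]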

  10. Retooling Laser Speckle Contrast Analysis Algorithm to Enhance Non-Invasive High Resolution Laser Speckle Functional Imaging of Cutaneous Microcirculation

    NASA Astrophysics Data System (ADS)

    Gnyawali, Surya C.; Blum, Kevin; Pal, Durba; Ghatak, Subhadip; Khanna, Savita; Roy, Sashwati; Sen, Chandan K.

    2017-01-01

    Cutaneous microvasculopathy complicates wound healing. Functional assessment of gated individual dermal microvessels is therefore of outstanding interest. The functional performance of laser speckle contrast imaging (LSCI) systems is compromised by motion artefacts. To address this weakness, post-processing of stacked images is reported. We report the first post-processing of binary raw data from a high-resolution LSCI camera. Sharp images of low-flowing microvessels were enabled by introducing inverse variance in conjunction with speckle contrast in Matlab-based program code. Extended moving-window averaging enhanced the signal-to-noise ratio. A functional quantitative study of blood flow kinetics was performed on single gated microvessels using a freehand tool. Based on detection of flow in low-flow microvessels, a new sharp contrast image was derived. Thus, this work presents the first distinct image with quantitative microperfusion data from gated human foot microvasculature. This versatile platform is applicable to the study of a wide range of tissue systems, including the fine vascular network in the murine brain without craniotomy as well as that in the murine dorsal skin. Importantly, the algorithm reported herein is hardware agnostic and is capable of post-processing binary raw data from any camera source to improve the sensitivity of functional flow data above and beyond the standard limits of the optical system.
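
    For readers unfamiliar with LSCI post-processing, the following sketch shows the standard spatial speckle-contrast computation on which such pipelines build (generic textbook processing written in Python, not the authors' Matlab code; the window size and flow index are common conventions, not values from the paper):

      # Minimal sketch: local speckle contrast K = sigma/mean over a sliding
      # window of a raw speckle frame, with 1/K^2 as a simple flow index.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def speckle_contrast(frame, window=7):
          """Local speckle contrast of one raw frame."""
          mean = uniform_filter(frame, window)
          mean_sq = uniform_filter(frame**2, window)
          var = np.clip(mean_sq - mean**2, 0, None)
          return np.sqrt(var) / (mean + 1e-9)

      raw = np.random.default_rng(3).random((256, 256))   # placeholder raw frame
      k = speckle_contrast(raw)
      flow_index = 1.0 / (k**2 + 1e-9)                    # higher flow -> lower contrast
      print(k.mean(), flow_index.mean())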

  11. Retooling Laser Speckle Contrast Analysis Algorithm to Enhance Non-Invasive High Resolution Laser Speckle Functional Imaging of Cutaneous Microcirculation

    PubMed Central

    Gnyawali, Surya C.; Blum, Kevin; Pal, Durba; Ghatak, Subhadip; Khanna, Savita; Roy, Sashwati; Sen, Chandan K.

    2017-01-01

    Cutaneous microvasculopathy complicates wound healing. Functional assessment of gated individual dermal microvessels is therefore of outstanding interest. The functional performance of laser speckle contrast imaging (LSCI) systems is compromised by motion artefacts. To address this weakness, post-processing of stacked images is reported. We report the first post-processing of binary raw data from a high-resolution LSCI camera. Sharp images of low-flowing microvessels were enabled by introducing inverse variance in conjunction with speckle contrast in Matlab-based program code. Extended moving-window averaging enhanced the signal-to-noise ratio. A functional quantitative study of blood flow kinetics was performed on single gated microvessels using a freehand tool. Based on detection of flow in low-flow microvessels, a new sharp contrast image was derived. Thus, this work presents the first distinct image with quantitative microperfusion data from gated human foot microvasculature. This versatile platform is applicable to the study of a wide range of tissue systems, including the fine vascular network in the murine brain without craniotomy as well as that in the murine dorsal skin. Importantly, the algorithm reported herein is hardware agnostic and is capable of post-processing binary raw data from any camera source to improve the sensitivity of functional flow data above and beyond the standard limits of the optical system. PMID:28106129

  12. Peculiarities of use of ECOC and AdaBoost based classifiers for thematic processing of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Dementev, A. O.; Dmitriev, E. V.; Kozoderov, V. V.; Egorov, V. D.

    2017-10-01

    Hyperspectral imaging is a promising, up-to-date technology widely applied for accurate thematic mapping. The presence of a large number of narrow survey channels allows us to use subtle differences in the spectral characteristics of objects and to make a more detailed classification than in the case of using standard multispectral data. The difficulties encountered in the processing of hyperspectral images are usually associated with the redundancy of spectral information, which leads to the curse of dimensionality. Methods currently used for recognizing objects in multispectral and hyperspectral images are usually based on standard supervised base classification algorithms of varying complexity. The accuracy of these algorithms can differ significantly depending on the classification task considered. In this paper we study the performance of ensemble classification methods for the problem of classifying forest vegetation. Error-correcting output codes and boosting are tested on artificial data and real hyperspectral images. It is demonstrated that boosting gives a more significant improvement when used with simple base classifiers. The accuracy in this case is comparable to that of the error-correcting output code (ECOC) classifier with a Gaussian-kernel SVM base algorithm; however, the necessity of boosting ECOC with a Gaussian-kernel SVM is questionable. It is demonstrated that the selected ensemble classifiers allow us to recognize forest species with accuracy high enough to be compared with ground-based forest inventory data.
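
    The two ensemble strategies compared in the study can be sketched with off-the-shelf components (synthetic data standing in for the hyperspectral imagery; the classifier settings below are illustrative assumptions, not the study's configuration):

      # Minimal sketch: ECOC with a Gaussian-kernel SVM base classifier versus
      # AdaBoost with shallow decision trees on synthetic multi-class data.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.multiclass import OutputCodeClassifier
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier

      # Stand-in for per-pixel spectra with several vegetation classes.
      X, y = make_classification(n_samples=2000, n_features=50, n_informative=20,
                                 n_classes=5, n_clusters_per_class=1, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      ecoc = OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"),
                                  code_size=2.0, random_state=0).fit(X_tr, y_tr)
      boost = AdaBoostClassifier(DecisionTreeClassifier(max_depth=2),
                                 n_estimators=200, random_state=0).fit(X_tr, y_tr)

      print("ECOC + RBF SVM accuracy:", ecoc.score(X_te, y_te))
      print("AdaBoost + shallow trees accuracy:", boost.score(X_te, y_te))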

  13. Modeling Patient-Specific Deformable Mitral Valves.

    PubMed

    Ginty, Olivia; Moore, John; Peters, Terry; Bainbridge, Daniel

    2018-06-01

    Medical imaging has advanced enormously over the last few decades, revolutionizing patient diagnostics and care. At the same time, additive manufacturing has emerged as a means of reproducing physical shapes and models previously not possible. In combination, they have given rise to 3-dimensional (3D) modeling, an entirely new technology for physicians. In an era in which 3D imaging has become a standard for aiding in the diagnosis and treatment of cardiac disease, this visualization now can be taken further by bringing the patient's anatomy into physical reality as a model. The authors describe the generalized process of creating a model of cardiac anatomy from patient images and their experience creating patient-specific dynamic mitral valve models. This involves a combination of image processing software and 3D printing technology. In this article, the complexity of 3D modeling is described and the decision-making process for cardiac anesthesiologists is summarized. The management of cardiac disease has been altered with the emergence of 3D echocardiography, and 3D modeling represents the next paradigm shift. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Impact of iterative metal artifact reduction on diagnostic image quality in patients with dental hardware.

    PubMed

    Weiß, Jakob; Schabel, Christoph; Bongers, Malte; Raupach, Rainer; Clasen, Stephan; Notohamiprodjo, Mike; Nikolaou, Konstantin; Bamberg, Fabian

    2017-03-01

    Background Metal artifacts often impair diagnostic accuracy in computed tomography (CT) imaging. Therefore, effective metal artifact reduction algorithms implemented in the clinical workflow are crucial for achieving higher diagnostic image quality in patients with metallic hardware. Purpose To assess the clinical performance of a novel iterative metal artifact reduction (iMAR) algorithm for CT in patients with dental fillings. Material and Methods Thirty consecutive patients scheduled for CT imaging and with dental fillings were included in the analysis. All patients underwent CT imaging using a second-generation dual-source CT scanner (120 kV single-energy; 100/Sn140 kV dual-energy, 219 mAs, gantry rotation time 0.28 s, collimation 0.6 mm) as part of their clinical work-up. Post-processing included a standard kernel (B49) and an iterative MAR algorithm. Image quality and diagnostic value were assessed qualitatively (Likert scale) and quantitatively (HU ± SD) by two reviewers independently. Results All 30 patients were included in the analysis, with comparable reconstruction times for iMAR and standard reconstruction (17 s ± 0.5 vs. 19 s ± 0.5; P > 0.05). Visual image quality was significantly higher for iMAR than for standard reconstruction (3.8 ± 0.5 vs. 2.6 ± 0.5; P < 0.0001, respectively) and showed improved evaluation of adjacent anatomical structures. Similarly, HU-based measurements of the degree of artifacts were significantly lower in the iMAR reconstructions than in the standard reconstruction (0.9 ± 1.6 vs. -20 ± 47; P < 0.05, respectively). Conclusion The tested iterative, raw-data-based MAR reconstruction algorithm allows for a significant reduction of metal artifacts and improved evaluation of adjacent anatomical structures in the head and neck area in patients with dental hardware.

  15. Characterization of fission gas bubbles in irradiated U-10Mo fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casella, Andrew M.; Burkes, Douglas E.; MacFarlan, Paul J.

    2017-09-01

    Irradiated U-10Mo fuel samples were prepared with traditional mechanical potting and polishing methods within a hot cell. They were then removed and imaged with an SEM located outside the hot cell. The images were then processed with basic imaging techniques from three separate software packages. The results were compared, and a baseline method for characterization of fission gas bubbles in the samples is proposed. It is hoped that, through adoption of or comparison to this baseline method, sample characterization can be somewhat standardized across the field of post-irradiation examination of metal fuels.

  16. Structures' validation profiles in Transmission of Imaging and Data (TRIAD) for automated National Clinical Trials Network (NCTN) clinical trial digital data quality assurance.

    PubMed

    Giaddui, Tawfik; Yu, Jialu; Manfredi, Denise; Linnemann, Nancy; Hunter, Joanne; O'Meara, Elizabeth; Galvin, James; Bialecki, Brian; Xiao, Ying

    2016-01-01

    Transmission of Imaging and Data (TRIAD) is a standards-based system built by the American College of Radiology to provide the seamless exchange of images and data for accreditation of clinical trials and registries. Scripts of structure-name validation profiles created in TRIAD are used in the automated submission process. It is essential for users to understand the logistics of these scripts for successful submission of radiation therapy cases with fewer iterations. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  17. Accelerated pharmacokinetic map determination for dynamic contrast enhanced MRI using frequency-domain based Tofts model.

    PubMed

    Vajuvalli, Nithin N; Nayak, Krupa N; Geethanath, Sairam

    2014-01-01

    Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used in the diagnosis of cancer and is also a promising tool for monitoring tumor response to treatment. The Tofts model has become a standard for the analysis of DCE-MRI. The curve fitting employed with the Tofts equation to obtain the pharmacokinetic (PK) parameters is time-consuming for high-resolution scans. The current work demonstrates a frequency-domain approach applied to the standard Tofts equation to speed up the curve-fitting process used to obtain the pharmacokinetic parameters. The results show that, using the frequency-domain approach, curve fitting is computationally more efficient than with the time-domain approach.
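
    The speed-up rests on the fact that the convolution in the standard Tofts model becomes a multiplication in the frequency domain. The sketch below (illustrative only, not the paper's implementation; the arterial input function and parameter values are assumptions) evaluates the Tofts forward model with FFTs inside an ordinary curve fit:

      # Minimal sketch: standard Tofts model
      #   C_t(t) = Ktrans * integral_0^t Cp(tau) exp(-kep (t - tau)) dtau
      # evaluated via FFT-based convolution within each curve-fit iteration.
      import numpy as np
      from scipy.optimize import curve_fit

      dt = 2.0                                 # s, temporal resolution
      t = np.arange(0, 300, dt)
      cp = 5.0 * (t / 60.0) * np.exp(-t / 60.0)  # toy arterial input function

      def tofts_fft(t, ktrans, kep):
          """Tissue curve from the standard Tofts model, FFT-based convolution."""
          n = len(t)
          kernel = np.exp(-kep * t)
          # Zero-pad to avoid circular wrap-around, then multiply spectra.
          conv = np.fft.irfft(np.fft.rfft(cp, 2 * n) * np.fft.rfft(kernel, 2 * n),
                              2 * n)[:n]
          return ktrans * conv * dt

      # Simulate a noisy tissue curve and recover (Ktrans, kep) by curve fitting.
      truth = (0.25 / 60.0, 0.8 / 60.0)        # per-second values
      ct = tofts_fft(t, *truth) + 0.01 * np.random.default_rng(4).normal(size=t.size)
      popt, _ = curve_fit(tofts_fft, t, ct, p0=(0.1 / 60.0, 0.5 / 60.0))
      print(popt * 60.0)                       # -> roughly [0.25, 0.8] per minute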

  18. A design of camera simulator for photoelectric image acquisition system

    NASA Astrophysics Data System (ADS)

    Cai, Guanghui; Liu, Wen; Zhang, Xin

    2015-02-01

    In developing photoelectric image acquisition equipment, its function and performance must be verified. To allow the photoelectric device to replay previously acquired image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, the image data are saved to NAND flash through a USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the system requirement, pipelining and high-bandwidth-bus techniques are applied in the design to improve the storage rate. The FPGA control logic reads the image data out of the flash and outputs it separately over three interfaces, Camera Link, LVDS, and PAL, providing image data for the debugging and algorithm validation of photoelectric image acquisition equipment. Because the standard PAL resolution of 720 x 576 differs from the input image resolution, the PAL image is output after resolution conversion. The experimental results demonstrate that the camera simulator outputs image sequences in all three formats correctly, and these can be captured and displayed by a frame grabber. The three-format image data meet the test requirements of most equipment, shorten debugging time, and improve test efficiency.

  19. Cone beam volume tomography: an imaging option for diagnosis of complex mandibular third molar anatomical relationships.

    PubMed

    Danforth, Robert A; Peck, Jerry; Hall, Paul

    2003-11-01

    Complex impacted third molars present potential treatment complications and possible patient morbidity. The objectives of diagnostic imaging are to facilitate diagnosis and decision making and to enhance treatment outcomes. As cases become more complex, advanced multiplane imaging methods allowing for a 3-D view are more likely to meet these objectives than traditional 2-D radiography. Until recently, advanced imaging options were somewhat limited to standard film tomography or medical CT, but the development of cone beam volume tomography (CBVT) multiplane 3-D imaging systems specifically for dental use now provides an alternative imaging option. Two cases were utilized to compare the role of CBVT to these other imaging options and to illustrate how multiplane visualization can assist the pretreatment evaluation and decision-making process for complex impacted mandibular third molar cases.

  20. HIPS: A new hippocampus subfield segmentation method.

    PubMed

    Romero, José E; Coupé, Pierrick; Manjón, José V

    2017-12-01

    The importance of the hippocampus in the study of several neurodegenerative diseases, such as Alzheimer's disease, makes it a structure of great interest in neuroimaging. However, few segmentation methods have been proposed to measure its subfields due to its complex structure and the lack of high-resolution magnetic resonance (MR) data. In this work, we present a new pipeline for automatic hippocampus subfield segmentation using two available hippocampus subfield delineation protocols that can work with both high- and standard-resolution data. The proposed method is based on multi-atlas label fusion technology that benefits from a novel multi-contrast patch-match search process (using high-resolution T1-weighted and T2-weighted images). The proposed method also includes, as post-processing, a new neural-network-based error correction step to minimize systematic segmentation errors. The method has been evaluated on both high- and standard-resolution images and compared to other state-of-the-art methods, showing better results in terms of accuracy and execution time. Copyright © 2017 Elsevier Inc. All rights reserved.
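
    As background for the label-fusion component, the sketch below shows a generic patch-based multi-atlas fusion step (a simplified 2D, single-contrast illustration, not the HIPS pipeline; the patch size, bandwidth, and toy data are assumptions): each registered atlas votes for a voxel's label with a weight derived from local patch similarity to the target image:

      # Minimal sketch: patch-similarity-weighted multi-atlas label fusion.
      import numpy as np

      def fuse_labels(target, atlas_images, atlas_labels, patch=3, h=0.1):
          """Patch-weighted label fusion at every pixel of a 2D target."""
          pad = patch // 2
          tgt = np.pad(target, pad, mode="edge")
          imgs = [np.pad(a, pad, mode="edge") for a in atlas_images]
          fused = np.zeros_like(target, dtype=int)
          for i in range(target.shape[0]):
              for j in range(target.shape[1]):
                  tp = tgt[i:i + patch, j:j + patch]
                  votes = {}
                  for img, lab in zip(imgs, atlas_labels):
                      ap = img[i:i + patch, j:j + patch]
                      w = np.exp(-np.mean((tp - ap) ** 2) / h)  # patch similarity weight
                      votes[lab[i, j]] = votes.get(lab[i, j], 0.0) + w
                  fused[i, j] = max(votes, key=votes.get)
          return fused

      # Toy example: two noisy atlases of the same square "subfield".
      rng = np.random.default_rng(5)
      truth = np.zeros((32, 32), int); truth[10:22, 10:22] = 1
      target = truth + 0.2 * rng.normal(size=truth.shape)
      atlases = [truth + 0.2 * rng.normal(size=truth.shape) for _ in range(2)]
      print((fuse_labels(target, atlases, [truth, truth]) == truth).mean())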
