Science.gov

Sample records for standard imaging software

  1. Emerging standards for still image compression: A software implementation and simulation study

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Arnold, S.

    1991-01-01

    A software implementation of an emerging standard for the lossy compression of continuous-tone still images is described. This software program can be used to compress planetary images and other 2-D instrument data. It provides a high-compression image coding capability that preserves image fidelity at compression rates competitive or superior to most known techniques. This software implementation confirms the usefulness of such data compression and allows its performance to be compared with other schemes used in deep space missions and for database storage.
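
    Lossy compression of continuous-tone images of this era is typically block-DCT transform coding; the sketch below (Python with SciPy, arbitrary block size and quantization step) only illustrates that generic idea and is not the implementation described in the report.

      # Illustrative block-DCT transform coding: forward DCT, coarse quantization,
      # inverse DCT. Parameters are arbitrary, for demonstration only.
      import numpy as np
      from scipy.fft import dctn, idctn

      def compress_block(block, q=16.0):
          """Forward 2-D DCT of an 8x8 block followed by uniform quantization."""
          return np.round(dctn(block, norm="ortho") / q)

      def decompress_block(qcoeffs, q=16.0):
          """Undo quantization and invert the DCT to reconstruct the (lossy) block."""
          return idctn(qcoeffs * q, norm="ortho")

      block = np.random.default_rng(0).uniform(0, 255, (8, 8))
      recon = decompress_block(compress_block(block))
      print("max reconstruction error:", np.abs(block - recon).max())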

  2. Software Formal Inspections Standard

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This Software Formal Inspections Standard (hereinafter referred to as Standard) is applicable to NASA software. This Standard defines the requirements that shall be fulfilled by the software formal inspections process whenever this process is specified for NASA software. The objective of this Standard is to define the requirements for a process that inspects software products to detect and eliminate defects as early as possible in the software life cycle. The process also provides for the collection and analysis of inspection data to improve the inspection process as well as the quality of the software.

  3. Software assurance standard

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This standard specifies the software assurance program for the provider of software. It also delineates the assurance activities for the provider and the assurance data that are to be furnished by the provider to the acquirer. In any software development effort, the provider is the entity or individual that actually designs, develops, and implements the software product, while the acquirer is the entity or individual who specifies the requirements and accepts the resulting products. This standard specifies at a high level an overall software assurance program for software developed for and by NASA. Assurance includes the disciplines of quality assurance, quality engineering, verification and validation, nonconformance reporting and corrective action, safety assurance, and security assurance. The application of these disciplines during a software development life cycle is called software assurance. Subsequent lower-level standards will specify the specific processes within these disciplines.

  4. Assessment of global longitudinal strain using standardized myocardial deformation imaging: a modality independent software approach.

    PubMed

    Riffel, Johannes H; Keller, Marius G P; Aurich, Matthias; Sander, Yannick; Andre, Florian; Giusca, Sorin; Aus dem Siepen, Fabian; Seitz, Sebastian; Galuschky, Christian; Korosoglou, Grigorios; Mereles, Derliz; Katus, Hugo A; Buss, Sebastian J

    2015-07-01

    Myocardial deformation measurement is superior to left ventricular ejection fraction in identifying early changes in myocardial contractility and prediction of cardiovascular outcome. The lack of standardization hinders its clinical implementation. The aim of the study is to investigate a novel standardized deformation imaging approach based on the feature tracking algorithm for the assessment of global longitudinal (GLS) and global circumferential strain (GCS) in echocardiography and cardiac magnetic resonance imaging (CMR). 70 subjects undergoing CMR were consecutively investigated with echocardiography within a median time of 30 min. GLS and GCS were analyzed with post-processing software incorporating the same standardized algorithm for both modalities. Global strain was defined as the relative shortening of the whole endocardial contour length and calculated according to the strain formula. Mean GLS values were -16.2 ± 5.3 and -17.3 ± 5.3 % for echocardiography and CMR, respectively. GLS did not differ significantly between the two imaging modalities, which showed strong correlation (r = 0.86), a small bias (-1.1 %) and narrow 95 % limits of agreement (LOA ± 5.4 %). Mean GCS values were -17.9 ± 6.3 and -24.4 ± 7.8 % for echocardiography and CMR, respectively. GCS was significantly underestimated by echocardiography (p < 0.001). A weaker correlation (r = 0.73), a higher bias (-6.5 %) and wider LOA (± 10.5 %) were observed for GCS. GLS showed a strong correlation (r = 0.92) when image quality was good, while correlation dropped to r = 0.82 with poor acoustic windows in echocardiography. GCS assessment revealed a strong correlation (r = 0.87) only when echocardiographic image quality was good. No significant differences for GLS between two different echocardiographic vendors could be detected. Quantitative assessment of GLS using a standardized software algorithm allows the direct comparison of values acquired irrespective of the imaging modality. GLS may
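
    For reference, the global strain definition quoted above (relative shortening of the whole endocardial contour) and the reported Bland-Altman statistics (bias and 95 % limits of agreement) reduce to a few lines of arithmetic; the sketch below uses made-up contour lengths and paired values and is not the study's software.

      # Global strain formula and Bland-Altman bias / limits of agreement.
      # All numbers below are invented for illustration.
      import numpy as np

      def global_strain(l_ed, l_es):
          """Global strain (%) as relative shortening of the endocardial contour length."""
          return (l_es - l_ed) / l_ed * 100.0   # negative values indicate shortening

      def bland_altman(a, b):
          """Bias and 95% limits of agreement between two paired measurement series."""
          diff = np.asarray(a) - np.asarray(b)
          bias = diff.mean()
          half_width = 1.96 * diff.std(ddof=1)
          return bias, (bias - half_width, bias + half_width)

      print(global_strain(l_ed=160.0, l_es=134.0))            # about -16 %
      echo = np.array([-15.8, -17.1, -16.4, -18.0])
      cmr = np.array([-16.9, -17.8, -17.5, -18.6])
      print(bland_altman(echo, cmr))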

  5. NASA Software Documentation Standard

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The NASA Software Documentation Standard (hereinafter referred to as "Standard") is designed to support the documentation of all software developed for NASA; its goal is to provide a framework and model for recording the essential information needed throughout the development life cycle and maintenance of a software system. The NASA Software Documentation Standard can be applied to the documentation of all NASA software. The Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. The basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.

  6. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2007-01-01

    NASA relies more and more on software to control, monitor, and verify its safety critical systems, facilities, and operations. Since the 1960s there has hardly been a spacecraft launched that did not have a computer on board providing command and control services. There have been recent incidents where software has played a role in high-profile mission failures and hazardous incidents. For example, the Mars Climate Orbiter, Mars Polar Lander, DART (Demonstration of Autonomous Rendezvous Technology), and MER (Mars Exploration Rover) Spirit anomalies were all caused or contributed to by software. The Mission Control Centers for the Shuttle, ISS, and unmanned programs are highly dependent on software for data displays, analysis, and mission planning. Despite this growing dependence on software control and monitoring, there has been little to no consistent application of software safety practices and methodology to NASA's projects with safety critical software. Meanwhile, academia and private industry have been stepping forward with procedures and standards for safety critical systems and software, for example Dr. Nancy Leveson's book Safeware: System Safety and Computers. The NASA Software Safety Standard, originally published in 1997, was widely ignored due to its complexity and poor organization. It also focused on concepts rather than definite procedural requirements organized around a software project lifecycle. Led by NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard has recently undergone a significant update. This new standard provides the procedures and guidelines for evaluating a project for safety criticality and then lays out the minimum project lifecycle requirements to assure the software is created, operated, and maintained in the safest possible manner. This update of the standard clearly delineates the minimum set of software safety requirements for a project without detailing the implementation for those

  7. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2007-01-01

    NASA relies more and more on software to control, monitor, and verify its safety critical systems, facilities, and operations. Since the 1960s there has hardly been a spacecraft launched that did not have a computer on board providing command and control services. There have been recent incidents where software has played a role in high-profile mission failures and hazardous incidents. For example, the Mars Climate Orbiter, Mars Polar Lander, DART (Demonstration of Autonomous Rendezvous Technology), and MER (Mars Exploration Rover) Spirit anomalies were all caused or contributed to by software. The Mission Control Centers for the Shuttle, ISS, and unmanned programs are highly dependent on software for data displays, analysis, and mission planning. Despite this growing dependence on software control and monitoring, there has been little to no consistent application of software safety practices and methodology to NASA's projects with safety critical software. Meanwhile, academia and private industry have been stepping forward with procedures and standards for safety critical systems and software, for example Dr. Nancy Leveson's book Safeware: System Safety and Computers. The NASA Software Safety Standard, originally published in 1997, was widely ignored due to its complexity and poor organization. It also focused on concepts rather than definite procedural requirements organized around a software project lifecycle. Led by NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard has recently undergone a significant update. This new standard provides the procedures and guidelines for evaluating a project for safety criticality and then lays out the minimum project lifecycle requirements to assure the software is created, operated, and maintained in the safest possible manner. This update of the standard clearly delineates the minimum set of software safety requirements for a project without detailing the implementation for those

  8. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2005-01-01

    NASA (National Aeronautics and Space Administration) relies more and more on software to control, monitor, and verify its safety critical systems, facilities, and operations. Since the 1960s there has hardly been a spacecraft (manned or unmanned) launched that did not have a computer on board that provided vital command and control services. Despite this growing dependence on software control and monitoring, there has been no consistent application of software safety practices and methodology to NASA's projects with safety critical software. Led by the NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard (STD-8719.13B) has recently undergone a significant update in an attempt to provide that consistency. This paper will discuss the key features of the new NASA Software Safety Standard. It will start with a brief history of the use and development of software in safety critical applications at NASA. It will then give a brief overview of the NASA Software Working Group and the approach it took to revise the software engineering process across the Agency.

  9. libdrdc: software standards library

    NASA Astrophysics Data System (ADS)

    Erickson, David; Peng, Tie

    2008-04-01

    This paper presents the libdrdc software standards library, including internal nomenclature, definitions, units of measure, coordinate reference frames, and representations for use in autonomous systems research. This library is a configurable, portable C-function wrapped C++ / Object Oriented C library developed to be independent of software middleware, system architecture, processor, or operating system. It is designed to use the Automatically Tuned Linear Algebra Software (ATLAS) and Basic Linear Algebra Subprograms (BLAS) and to port to firmware and software. The library goal is to unify data collection and representation for various microcontrollers and Central Processing Unit (CPU) cores and to provide a common Application Binary Interface (ABI) for research projects at all scales. The library supports multi-platform development and currently works on Windows, Unix, GNU/Linux, and the Real-Time Executive for Multiprocessor Systems (RTEMS). This library is made available under the LGPL version 2.1 license.

  10. Development of a viability standard curve for microencapsulated probiotic bacteria using confocal microscopy and image analysis software.

    PubMed

    Moore, Sarah; Kailasapathy, Kasipathy; Phillips, Michael; Jones, Mark R

    2015-07-01

    Microencapsulation is proposed to protect probiotic strains from food processing procedures and to maintain probiotic viability. Little research has described the in situ viability of microencapsulated probiotics. This study successfully developed a real-time viability standard curve for microencapsulated bacteria using confocal microscopy, fluorescent dyes and image analysis software.
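
    As an illustration of what such a calibration involves, the sketch below fits a straight line between a hypothetical image-derived live-cell fraction and log viable counts, then inverts it for an unknown sample; the data and the linear model are assumptions, not the study's actual standard curve.

      # Hypothetical viability standard curve: image-derived live fraction versus
      # log10 CFU from plate counts, fitted with a line and used for prediction.
      import numpy as np

      live_fraction = np.array([0.10, 0.25, 0.45, 0.70, 0.90])   # from image analysis (made up)
      log_cfu = np.array([6.1, 6.9, 7.6, 8.3, 8.9])              # from plate counts (made up)

      slope, intercept = np.polyfit(live_fraction, log_cfu, 1)   # linear standard curve

      def predict_log_cfu(fraction):
          return slope * fraction + intercept

      print(predict_log_cfu(0.55))   # estimated viability of an unknown sample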

  11. Design and evaluation of a THz time domain imaging system using standard optical design software.

    PubMed

    Brückner, Claudia; Pradarutti, Boris; Müller, Ralf; Riehemann, Stefan; Notni, Gunther; Tünnermann, Andreas

    2008-09-20

    A terahertz (THz) time domain imaging system is analyzed and optimized with standard optical design software (ZEMAX). Special requirements to the illumination optics and imaging optics are presented. In the optimized system, off-axis parabolic mirrors and lenses are combined. The system has a numerical aperture of 0.4 and is diffraction limited for field points up to 4 mm and wavelengths down to 750 µm. ZEONEX is used as the lens material. Higher aspherical coefficients are used for correction of spherical aberration and reduction of lens thickness. The lenses were manufactured by ultraprecision machining. For optimization of the system, ray tracing and wave-optical methods were combined. We show how the ZEMAX Gaussian beam analysis tool can be used to evaluate illumination optics. The resolution of the THz system was tested with a wire and a slit target, line gratings of different period, and a Siemens star. The behavior of the temporal line spread function can be modeled with the polychromatic coherent line spread function feature in ZEMAX. The spectral and temporal resolutions of the line gratings are compared with the respective modulation transfer function of ZEMAX. For maximum resolution, the system has to be diffraction limited down to the smallest wavelength of the spectrum of the THz pulse. Then, the resolution on time domain analysis of the pulse maximum can be estimated with the spectral resolution of the center of gravity wavelength. The system resolution near the optical axis on time domain analysis of the pulse maximum is 1 line pair/mm with an intensity contrast of 0.22. The Siemens star is used for estimation of the resolution of the whole system. An eight channel electro-optic sampling system was used for detection. The resolution on time domain analysis of the pulse maximum of all eight channels could be determined with the Siemens star to be 0.7 line pairs/mm.
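
    As a rough cross-check of the quoted figures, the standard incoherent cutoff frequency 2·NA/λ evaluated at the stated numerical aperture and shortest design wavelength lands close to the reported on-axis resolution; this back-of-the-envelope estimate is not part of the paper's ZEMAX analysis.

      # Diffraction-limit sanity check using the incoherent cutoff frequency 2*NA/lambda.
      na = 0.4
      wavelength_mm = 0.75                     # 750 um expressed in mm
      cutoff_lp_per_mm = 2 * na / wavelength_mm
      print(cutoff_lp_per_mm)                  # ~1.07 lp/mm, consistent with the ~1 lp/mm reported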

  12. Image Processing Software

    NASA Astrophysics Data System (ADS)

    Bosio, M. A.

    1990-11-01

    A brief description of astronomical image processing software is presented. This software was developed on a Digital MicroVAX II computer system. Keywords: DATA ANALYSIS - IMAGE PROCESSING

  13. NASA software documentation standard software engineering program

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The NASA Software Documentation Standard (hereinafter referred to as Standard) can be applied to the documentation of all NASA software. This Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. This basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.

  14. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos run during robberies.

  15. Software engineering standards and practices

    NASA Technical Reports Server (NTRS)

    Durachka, R. W.

    1981-01-01

    Guidelines are presented for the preparation of a software development plan. The various phases of a software development project are discussed throughout its life cycle including a general description of the software engineering standards and practices to be followed during each phase.

  16. Biological imaging software tools.

    PubMed

    Eliceiri, Kevin W; Berthold, Michael R; Goldberg, Ilya G; Ibáñez, Luis; Manjunath, B S; Martone, Maryann E; Murphy, Robert F; Peng, Hanchuan; Plant, Anne L; Roysam, Badrinath; Stuurman, Nico; Swedlow, Jason R; Tomancak, Pavel; Carpenter, Anne E

    2012-06-28

    Few technologies are more widespread in modern biological laboratories than imaging. Recent advances in optical technologies and instrumentation are providing hitherto unimagined capabilities. Almost all these advances have required the development of software to enable the acquisition, management, analysis and visualization of the imaging data. We review each computational step that biologists encounter when dealing with digital images, the inherent challenges and the overall status of available software for bioimage informatics, focusing on open-source options.

  17. Biological Imaging Software Tools

    PubMed Central

    Eliceiri, Kevin W.; Berthold, Michael R.; Goldberg, Ilya G.; Ibáñez, Luis; Manjunath, B.S.; Martone, Maryann E.; Murphy, Robert F.; Peng, Hanchuan; Plant, Anne L.; Roysam, Badrinath; Stuurman, Nico; Swedlow, Jason R.; Tomancak, Pavel; Carpenter, Anne E.

    2013-01-01

    Few technologies are more widespread in modern biological laboratories than imaging. Recent advances in optical technologies and instrumentation are providing hitherto unimagined capabilities. Almost all these advances have required the development of software to enable the acquisition, management, analysis, and visualization of the imaging data. We review each computational step that biologists encounter when dealing with digital images, the challenges in that domain, and the overall status of available software for bioimage informatics, focusing on open source options. PMID:22743775

  18. Future of Software Engineering Standards

    NASA Technical Reports Server (NTRS)

    Poon, Peter T.

    1997-01-01

    In the new millennium, software engineering standards are expected to continue to influence the process of producing software-intensive systems which are cost-effective and of high quality. These systems may range from ground and flight systems used for planetary exploration to educational support systems used in schools, as well as consumer-oriented systems.

  19. Software Development Standard Processes (SDSP)

    NASA Technical Reports Server (NTRS)

    Lavin, Milton L.; Wang, James J.; Morillo, Ronald; Mayer, John T.; Jamshidian, Barzia; Shimizu, Kenneth J.; Wilkinson, Belinda M.; Hihn, Jairus M.; Borgen, Rosana B.; Meyer, Kenneth N.; Crean, Kathleen A.; Rinker, George C.; Smith, Thomas P.; Lum, Karen T.; Hanna, Robert A.; Erickson, Daniel E.; Gamble, Edward B., Jr.; Morgan, Scott C.; Kelsay, Michael G.; Newport, Brian J.; Lewicki, Scott A.; Stipanuk, Jeane G.; Cooper, Tonja M.; Meshkat, Leila

    2011-01-01

    A JPL-created set of standard processes is to be used throughout the lifecycle of software development. These SDSPs cover a range of activities, from management and engineering activities, to assurance and support activities. These processes must be applied to software tasks per a prescribed set of procedures. JPL's Software Quality Improvement Project is currently working at the behest of the JPL Software Process Owner to ensure that all applicable software tasks follow these procedures. The SDSPs are captured as a set of 22 standards in JPL's software process domain. They were developed in-house at JPL by a number of Subject Matter Experts (SMEs) residing primarily within the Engineering and Science Directorate, but also from the Business Operations Directorate and Safety and Mission Success Directorate. These practices include not only currently performed best practices, but also JPL-desired future practices in key thrust areas like software architecting and software reuse analysis. Additionally, these SDSPs conform to many standards and requirements to which JPL projects are beholden.

  20. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  1. NASA space station software standards issues

    NASA Technical Reports Server (NTRS)

    Tice, G. D., Jr.

    1985-01-01

    The selection and application of software standards present the NASA Space Station Program with the opportunity to serve as a pacesetter for the United States software industry in the area of software standards. The strengths and weaknesses of each of the NASA-defined software standards issues are summarized and discussed. Several significant standards issues are offered for NASA consideration. A challenge is presented for the NASA Space Station Program to serve as a pacesetter for the U.S. software industry through: (1) management commitment to software standards; (2) overall program participation in software standards; and (3) employment of the best available technology to support software standards.

  2. Computed Tomography software and standards

    SciTech Connect

    Azevedo, S.G.; Martz, H.E.; Skeate, M.F.; Schneberk, D.J.; Roberson, G.P.

    1990-02-20

    This document establishes the software design, nomenclature, and conventions for industrial Computed Tomography (CT) used in the Nondestructive Evaluation Section at Lawrence Livermore National Laboratory. It is mainly a user's guide to the technical use of the CT computer codes, but it also presents a proposed standard for describing CT experiments and reconstructions. Each part of this document specifies different aspects of the CT software organization. A set of tables at the end describes the CT parameters of interest in our project. 4 refs., 6 figs., 1 tab.

  3. Quantitative Redox Imaging Software.

    PubMed

    Fricker, Mark D

    2016-05-01

    A wealth of fluorescent reporters and imaging systems are now available to characterize dynamic physiological processes in living cells with high spatiotemporal resolution. The most reliable probes for quantitative measurements show shifts in their excitation or emission spectrum, rather than just a change in intensity, as spectral shifts are independent of optical path length, illumination intensity, probe concentration, and photobleaching, and they can be easily determined by ratiometric measurements at two wavelengths. A number of ratiometric fluorescent reporters, such as reduction-oxidation-sensitive green fluorescent protein (roGFP), have been developed that respond to the glutathione redox potential and allow redox imaging in vivo. roGFP and its derivatives can be expressed in the cytoplasm or targeted to different organelles, giving fine control of measurements from sub-cellular compartments. Furthermore, roGFP can be imaged with probes for other physiological parameters, such as reactive oxygen species or mitochondrial membrane potential, to give multi-channel, multi-dimensional 4D (x,y,z,t) images. Live cell imaging approaches are needed to capture transient or highly spatially localized physiological behavior from intact, living specimens, which are often not accessible by other biochemical or genetic means. The next challenge is to be able to extract useful data rapidly from such large (GByte) images with due care given to the assumptions used during image processing. This article describes a suite of software programs, available for download, that provide intuitive user interfaces to conduct multi-channel ratio imaging, or alternative analysis methods such as pixel-population statistics or image segmentation and object-based ratio analysis. Antioxid. Redox Signal. 24, 752-762.
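
    The core ratiometric step described above, a pixel-wise ratio of two wavelength channels restricted to pixels above background, can be sketched as follows; the channel data and threshold are placeholders, and this is not the downloadable software itself.

      # Pixel-wise ratiometric analysis of two excitation channels with a background mask.
      import numpy as np

      def ratio_image(ch_a, ch_b, threshold=50.0):
          """Return ch_a / ch_b where both channels exceed the threshold, NaN elsewhere."""
          ch_a = ch_a.astype(float)
          ch_b = ch_b.astype(float)
          mask = (ch_a > threshold) & (ch_b > threshold)
          out = np.full(ch_a.shape, np.nan)
          out[mask] = ch_a[mask] / ch_b[mask]
          return out

      rng = np.random.default_rng(1)
      ratio = ratio_image(rng.uniform(0, 255, (64, 64)), rng.uniform(0, 255, (64, 64)))
      print(np.nanmean(ratio))   # pixel-population statistic over the masked region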

  4. Confined Space Imager (CSI) Software

    SciTech Connect

    Karelitz, David

    2013-07-03

    The software provides real-time image capture, enhancement, and display, and sensor control for the Confined Space Imager (CSI) sensor system. The software captures images over a Camera Link connection and provides the following image enhancements: camera pixel-to-pixel non-uniformity correction, optical distortion correction, image registration and averaging, and illumination non-uniformity correction. The software communicates with the custom CSI hardware over USB to control sensor parameters and is capable of saving enhanced sensor images to an external USB drive. The software provides sensor control, image capture, enhancement, and display for the CSI sensor system. It is designed to work with the custom hardware.
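
    The enhancements listed are variations on standard dark/flat-field correction and frame averaging; the sketch below illustrates that generic processing chain with synthetic calibration frames and is not the CSI code.

      # Dark subtraction, flat-field (non-uniformity) correction, and averaging of
      # already-registered frames. Calibration frames here are synthetic placeholders.
      import numpy as np

      def correct_frame(raw, dark, flat):
          """Apply dark subtraction and flat-field correction to one frame."""
          flat_norm = (flat - dark) / np.mean(flat - dark)
          return (raw - dark) / np.clip(flat_norm, 1e-6, None)

      def average_frames(frames):
          """Average a stack of registered frames to reduce noise."""
          return np.mean(np.stack(frames), axis=0)

      rng = np.random.default_rng(2)
      dark = rng.normal(10, 1, (32, 32))
      flat = rng.normal(200, 5, (32, 32))
      frames = [correct_frame(rng.normal(120, 8, (32, 32)), dark, flat) for _ in range(4)]
      print(average_frames(frames).mean())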

  5. SOFT-1: Imaging Processing Software

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Five levels of image processing software are enumerated and discussed: (1) logging and formatting; (2) radiometric correction; (3) correction for geometric camera distortion; (4) geometric/navigational corrections; and (5) general software tools. Specific concerns about access to and analysis of digital imaging data within the Planetary Data System are listed.

  6. Standardized development of computer software. Part 2: Standards

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1978-01-01

    This monograph contains standards for software development and engineering. The book sets forth rules for design, specification, coding, testing, documentation, and quality assurance audits of software; it also contains detailed outlines for the documentation to be produced.

  7. Standard Leak Calibration Facility software system

    SciTech Connect

    McClain, S.K.

    1989-06-01

    A Standard Leak Calibration Facility Software System has been developed and implemented for controlling and running a Standard Leak Calibration Facility. Primary capabilities provided by the software system include computer control of the vacuum system, automatic leak calibration, and data acquisition, manipulation, and storage.

  8. Diversification and Challenges of Software Engineering Standards

    NASA Technical Reports Server (NTRS)

    Poon, Peter T.

    1994-01-01

    The author poses certain questions in this paper: 'In the future, should there be just one software engineering standards set? If so, how can we work towards that goal? What are the challenges of internationalizing standards?' Based on the author's personal view, the statement of his position is as follows: 'There should NOT be just one set of software engineering standards in the future. At the same time, there should NOT be the proliferation of standards, and the number of sets of standards should be kept to a minimum. It is important to understand the diversification of the areas which are spanned by the software engineering standards.' The author goes on to describe the diversification of processes, the diversification in the national and international character of standards organizations, the diversification of the professional organizations producing standards, the diversification of the types of businesses and industries, and the challenges of internationalizing standards.

  9. Diversification and Challenges of Software Engineering Standards

    NASA Technical Reports Server (NTRS)

    Poon, Peter T.

    1994-01-01

    The author poses certain questions in this paper: 'In the future, should there be just one software engineering standards set? If so, how can we work towards that goal? What are the challenges of internationalizing standards?' Based on the author's personal view, the statement of his position is as follows: 'There should NOT be just one set of software engineering standards in the future. At the same time, there should NOT be the proliferation of standards, and the number of sets of standards should be kept to a minimum. It is important to understand the diversification of the areas which are spanned by the software engineering standards.' The author goes on to describe the diversification of processes, the diversification in the national and international character of standards organizations, the diversification of the professional organizations producing standards, the diversification of the types of businesses and industries, and the challenges of internationalizing standards.

  10. Software for Managing an Archive of Images

    NASA Technical Reports Server (NTRS)

    Hallai, Charles; Jones, Helene; Callac, Chris

    2003-01-01

    This is a revised draft by Innovators concerning the report on Software for Managing an Archive of Images. The SSC Multimedia Archive is an automated electronic system to manage images, acquired both by film and digital cameras, for the Public Affairs Office (PAO) at Stennis Space Center (SSC). Previously, the image archive was based on film photography and utilized a manual system that, by today's standards, had become inefficient and expensive. Now, the SSC Multimedia Archive, based on a server at SSC, contains both catalogs and images for pictures taken both digitally and with a traditional film-based camera, along with metadata about each image.

  11. Standard classification of software documentation

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    General conceptual requirements are presented for standard levels of documentation and for the application of these requirements to intended usages. These standards encourage the policy of producing only those forms of documentation that are needed and adequate for the purpose. Documentation standards are defined with respect to detail and format quality. Classes A through D range, in order, from the most definitive down to the least definitive, and categories 1 through 4 range, in order, from high-quality typeset down to handwritten material. Criteria for each of the classes and categories, as well as suggested selection guidelines for each, are given.

  12. Software Development Standard for Mission Critical Systems

    DTIC Science & Technology

    2014-03-17

    applied on contracts for mission critical systems. This report provides a full lifecycle software development process standard. This version includes ... integration and test environments; updated requirements for system requirements analysis (5.3), system architectural design (5.4), and software requirements analysis (5.5); and a major update to software ... (5.6).

  13. The IEEE Software Engineering Standards Process

    PubMed Central

    Buckley, Fletcher J.

    1984-01-01

    Software Engineering has emerged as a field in recent years, and those involved increasingly recognize the need for standards. As a result, members of the Institute of Electrical and Electronics Engineers (IEEE) formed a subcommittee to develop these standards. This paper discusses the ongoing standards development, and associated efforts.

  14. Image processing software for imaging spectrometry

    NASA Technical Reports Server (NTRS)

    Mazer, Alan S.; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    The paper presents a software system, Spectral Analysis Manager (SPAM), which has been specifically designed and implemented to provide the exploratory analysis tools necessary for imaging spectrometer data, using only modest computational resources. The basic design objectives are described as well as the major algorithms designed or adapted for high-dimensional images. Included in a discussion of system implementation are interactive data display, statistical analysis, image segmentation and spectral matching, and mixture analysis.
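
    Mixture analysis of imaging-spectrometer pixels is commonly posed as linear spectral unmixing solved by least squares; the sketch below shows that generic formulation with random placeholder spectra and does not reproduce SPAM's particular algorithms.

      # Linear spectral unmixing: solve pixel = endmembers @ abundances by least squares.
      import numpy as np

      rng = np.random.default_rng(3)
      endmembers = rng.uniform(0, 1, (128, 3))            # 128 bands, 3 endmember spectra (made up)
      true_abundances = np.array([0.6, 0.3, 0.1])
      pixel = endmembers @ true_abundances + rng.normal(0, 0.01, 128)

      abundances, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
      print(abundances)                                    # close to [0.6, 0.3, 0.1]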

  15. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.

  16. Non-standard analysis and embedded software

    NASA Technical Reports Server (NTRS)

    Platek, Richard

    1995-01-01

    One model for computing in the future is ubiquitous, embedded computational devices analogous to embedded electrical motors. Many of these computers will control physical objects and processes. Such hidden computerized environments introduce new safety and correctness concerns whose treatment goes beyond present Formal Methods. In particular, one has to begin to speak about Real Space software in analogy with Real Time software. By this we mean computerized systems which have to meet requirements expressed in the real geometry of space. How to translate such requirements into ordinary software specifications and how to carry out proofs is a major challenge. In this talk we propose a research program based on the use of non-standard analysis. Much detail remains to be worked out. The purpose of the talk is to inform the Formal Methods community that Non-Standard Analysis provides a possible avenue of attack which we believe will be fruitful.

  17. Medical image database for software and algorithm evaluation

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcelo; Furuie, Sergio S.

    2005-04-01

    This work presents the development of a framework to make available a free, online, multipurpose and multimodality medical image database for software and algorithm evaluation. We have implemented a distributed architecture for a medical image database, including authoring, storage, and a repository for documents and image processing software. The system aims to offer a complete test bed and a set of resources including software, links to scientific papers, gold standards, reference images and post-processed images, enabling the medical image processing community (scientists, physicians, students and industry) to be more aware of evaluation issues. Our focus of development was on the convenience and ease of use of a generic system adaptable to different contexts.

  18. FITS Liberator: Image processing software

    NASA Astrophysics Data System (ADS)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.
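
    FITS Liberator itself is an interactive application; a minimal scripted analogue of its read-and-stretch step, written here with astropy (an assumption, not a FITS Liberator component) and a placeholder filename, might look like this.

      # Read a FITS image and apply an asinh stretch for display.
      # "image.fits" is a placeholder; astropy is used for illustration only.
      import numpy as np
      from astropy.io import fits

      with fits.open("image.fits") as hdul:
          data = hdul[0].data.astype(float)

      lo, hi = np.nanpercentile(data, [0.5, 99.5])
      stretched = np.arcsinh((np.clip(data, lo, hi) - lo) / (hi - lo) * 10.0)
      display = (255 * stretched / stretched.max()).astype(np.uint8)
      print(display.shape, display.dtype)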

  19. Automated computer software development standards enforcement

    SciTech Connect

    Yule, H.P.; Formento, J.W.

    1991-01-01

    The Uniform Development Environment (UDE) is being investigated as a means of enforcing software engineering standards. For the programmer, it provides an environment containing the tools and utilities necessary for orderly and controlled development and maintenance of code according to requirements. In addition, it provides DoD management and developer management the tools needed for all phases of software life cycle management and control, from project planning and management, to code development, configuration management, version control, and change control. This paper reports the status of UDE development and field testing. 5 refs.

  20. Automatic AVHRR image navigation software

    NASA Technical Reports Server (NTRS)

    Baldwin, Dan; Emery, William

    1992-01-01

    This is the final report describing the work done on the project entitled Automatic AVHRR Image Navigation Software funded through NASA-Washington, award NAGW-3224, Account 153-7529. At the onset of this project, we had developed image navigation software capable of producing geo-registered images from AVHRR data. The registrations were highly accurate but required a priori knowledge of the spacecraft's axes alignment deviations, commonly known as attitude. The three angles needed to describe the attitude are called roll, pitch, and yaw, and are the components of the deviations in the along-scan, along-track, and about-center directions. The inclusion of the attitude corrections in the navigation software results in highly accurate georegistrations; however, the computation of the angles is very tedious and involves human interpretation for several steps. The technique also requires easily identifiable ground features, which may not be available due to cloud cover or for ocean data. The current project was motivated by the need for a navigation system which was automatic and did not require human intervention or ground control points. The first step in creating such a system must be the ability to parameterize the spacecraft's attitude. The immediate goal of this project was to study the attitude fluctuations and determine if they displayed any systematic behavior which could be modeled or parameterized. We chose a period in 1991-1992 to study the attitude of the NOAA 11 spacecraft using data from the Tiros receiving station at the Colorado Center for Astrodynamic Research (CCAR) at the University of Colorado.
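
    Applying the roll, pitch, and yaw corrections amounts to rotating each scan pointing vector before geolocation; the sketch below shows only that step, with hypothetical angles and axis conventions, and omits the rest of the AVHRR navigation model.

      # Apply small roll/pitch/yaw attitude corrections to a nominal pointing vector.
      # Angles and axis conventions are hypothetical.
      import numpy as np

      def rotation(roll, pitch, yaw):
          """Combined rotation matrix built from roll, pitch, yaw angles (radians)."""
          cr, sr = np.cos(roll), np.sin(roll)
          cp, sp = np.cos(pitch), np.sin(pitch)
          cy, sy = np.cos(yaw), np.sin(yaw)
          rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
          ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
          rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
          return rz @ ry @ rx

      nadir = np.array([0.0, 0.0, -1.0])                 # nominal nadir-pointing vector
      corrected = rotation(np.radians(0.05), np.radians(-0.03), np.radians(0.02)) @ nadir
      print(corrected)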

  1. Objective facial photograph analysis using imaging software.

    PubMed

    Pham, Annette M; Tollefson, Travis T

    2010-05-01

    Facial analysis is an integral part of the surgical planning process. Clinical photography has long been an invaluable tool in the surgeon's practice not only for accurate facial analysis but also for enhancing communication between the patient and surgeon, for evaluating postoperative results, for medicolegal documentation, and for educational and teaching opportunities. From 35-mm slide film to the digital technology of today, clinical photography has benefited greatly from technological advances. With the development of computer imaging software, objective facial analysis becomes easier to perform and less time consuming. Thus, while the original purpose of facial analysis remains the same, the process becomes much more efficient and allows for some objectivity. Although clinical judgment and artistry of technique is never compromised, the ability to perform objective facial photograph analysis using imaging software may become the standard in facial plastic surgery practices in the future.

  2. Salvo: Seismic imaging software for complex geologies

    SciTech Connect

    OBER,CURTIS C.; GJERTSEN,ROB; WOMBLE,DAVID E.

    2000-03-01

    This report describes Salvo, a three-dimensional seismic-imaging software for complex geologies. Regions of complex geology, such as overthrusts and salt structures, can cause difficulties for many seismic-imaging algorithms used in production today. The paraxial wave equation and finite-difference methods used within Salvo can produce high-quality seismic images in these difficult regions. However this approach comes with higher computational costs which have been too expensive for standard production. Salvo uses improved numerical algorithms and methods, along with parallel computing, to produce high-quality images and to reduce the computational and the data input/output (I/O) costs. This report documents the numerical algorithms implemented for the paraxial wave equation, including absorbing boundary conditions, phase corrections, imaging conditions, phase encoding, and reduced-source migration. This report also describes I/O algorithms for large seismic data sets and images and parallelization methods used to obtain high efficiencies for both the computations and the I/O of seismic data sets. Finally, this report describes the required steps to compile, port and optimize the Salvo software, and describes the validation data sets used to help verify a working copy of Salvo.
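
    For background, the one-way (downward-continuation) form of the acoustic wave equation from which paraxial methods are derived can be written as follows, up to sign and Fourier conventions; this is textbook material, not Salvo's specific discretization.

      \[
        \frac{\partial P}{\partial z}
          = \frac{i\omega}{v}\sqrt{1 + \frac{v^{2}}{\omega^{2}}\frac{\partial^{2}}{\partial x^{2}}}\,P,
        \qquad
        \sqrt{1+X} \approx 1 + \tfrac{X}{2}
        \quad\text{(lowest-order paraxial approximation)},
      \]
      so that
      \[
        \frac{\partial P}{\partial z}
          \approx \frac{i\omega}{v}\,P + \frac{i v}{2\omega}\,\frac{\partial^{2} P}{\partial x^{2}}.
      \]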

  3. Easy and Accessible Imaging Software

    NASA Technical Reports Server (NTRS)

    2003-01-01

    DATASTAR, Inc., of Picayune, Mississippi, has taken NASA's award-winning Earth Resources Laboratory Applications Software (ELAS) program and evolved it into a user-friendly desktop application and Internet service to perform processing, analysis, and manipulation of remotely sensed imagery data. NASA's Stennis Space Center developed ELAS in the early 1980s to process satellite and airborne sensor imagery data of the Earth's surface into readable and accessible information. Since then, ELAS information has been applied worldwide to determine soil content, rainfall levels, and numerous other variances of topographical information. However, end-users customarily had to depend on scientific or computer experts to provide the results, because the imaging processing system was intricate and labor intensive.

  4. Software for Automated Image-to-Image Co-registration

    NASA Technical Reports Server (NTRS)

    Benkelman, Cody A.; Hughes, Heidi

    2007-01-01

    The project objectives are: a) Develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) Create a reusable software development kit (SDK) to enable incorporation of these tools into other software; d) provide automated testing for quantitative analysis; and e) Develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.

  5. Sandia software guidelines. Volume 3. Standards, practices, and conventions

    SciTech Connect

    Not Available

    1986-07-01

    This volume is one in a series of Sandia Software Guidelines intended for use in producing quality software within Sandia National Laboratories. In consonance with the IEEE Standard for Software Quality Assurance Plans, this volume identifies software standards, conventions, and practices. These guidelines are the result of a collective effort within Sandia National Laboratories to define recommended deliverables and to document standards, practices, and conventions which will help ensure quality software. 66 refs., 5 figs., 6 tabs.

  6. Software For Computing Image Ratios

    NASA Technical Reports Server (NTRS)

    Yates, Gigi L.

    1993-01-01

    RATIO_TOOL is interactive computer program for viewing and analyzing large sets of multispectral image data created by imaging spectrometer. Uses ratios between intensities in different spectral bands in order to spot significant areas of interest within multispectral image. Each image band viewed iteratively, or selected image band of set of data requested and displayed. When image ratios computed, result displayed as grayscale image. Written in C Language.
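
    The band-ratio operation itself is compact; the sketch below computes the ratio of two placeholder bands and rescales it to an 8-bit grayscale image, the display form mentioned above. It is an illustration, not the interactive C tool.

      # Ratio of two spectral bands rescaled to an 8-bit grayscale image.
      import numpy as np

      def band_ratio_grayscale(band_a, band_b, eps=1e-6):
          ratio = band_a.astype(float) / (band_b.astype(float) + eps)
          lo, hi = ratio.min(), ratio.max()
          return ((ratio - lo) / (hi - lo + eps) * 255).astype(np.uint8)

      rng = np.random.default_rng(4)
      gray = band_ratio_grayscale(rng.integers(1, 255, (100, 100)),
                                  rng.integers(1, 255, (100, 100)))
      print(gray.dtype, gray.min(), gray.max())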

  7. Standardization of Software Application Development and Governance

    DTIC Science & Technology

    2015-03-01

    software-defined systems continues to increase. Size and complexity of the systems also continue to increase, and the design problems go beyond algorithms... software expects to meet the requirements as it is about defining system-coding methodology. There are many styles of software architectures, and they...development can take place. A software framework is commonly defined as “a platform for developing applications. It provides the foundation on which software

  8. OSIRIX: open source multimodality image navigation software

    NASA Astrophysics Data System (ADS)

    Rosset, Antoine; Pysher, Lance; Spadola, Luca; Ratib, Osman

    2005-04-01

    The goal of our project is to develop a completely new software platform that will allow users to efficiently and conveniently navigate through large sets of multidimensional data without the need for high-end expensive hardware or software. We also elected to develop our system on new open source software libraries, allowing other institutions and developers to contribute to this project. OsiriX is a free and open-source imaging software designed to manipulate and visualize large sets of medical images: http://homepage.mac.com/rossetantoine/osirix/

  9. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  10. A Software Package For Biomedical Image Processing And Analysis

    NASA Astrophysics Data System (ADS)

    Goncalves, Joao G. M.; Mealha, Oscar

    1988-06-01

    The decreasing cost of computing power and the introduction of low cost imaging boards justify the increasing number of applications of digital image processing techniques in the area of biomedicine. There is however a large software gap to be fulfilled, between the application and the equipment. The requirements to bridge this gap are twofold: good knowledge of the hardware provided and its interface to the host computer, and expertise in digital image processing and analysis techniques. A software package incorporating these two requirements was developed using the C programming language, in order to create a user friendly image processing programming environment. The software package can be considered in two different ways: as a data structure adapted to image processing and analysis, which acts as the backbone and the standard of communication for all the software; and as a set of routines implementing the basic algorithms used in image processing and analysis. Hardware dependency is restricted to a single module upon which all hardware calls are based. The data structure that was built has four main features: hierarchical, open, object oriented, and object dependent dimensions. Considering the vast amount of memory needed by imaging applications and the memory available in small imaging systems, an effective image memory management scheme was implemented. This software package has been in use for more than one and a half years by users with different applications. It proved to be an efficient tool for helping people adapt to the system, and for standardizing and exchanging software, yet preserving flexibility allowing for users' specific implementations. The philosophy of the software package is discussed and the data structure that was built is described in detail.

  11. Infrared Imaging Data Reduction Software and Techniques

    NASA Astrophysics Data System (ADS)

    Sabbey, C. N.; McMahon, R. G.; Lewis, J. R.; Irwin, M. J.

    Developed to satisfy certain design requirements not met in existing packages (e.g., full weight map handling) and to optimize the software for large data sets (non-interactive tasks that are CPU and disk efficient), the InfraRed Data Reduction software package is a small ANSI C library of fast image processing routines for automated pipeline reduction of infrared (dithered) observations. The software includes stand-alone C programs for tasks such as running sky frame subtraction with object masking, image registration and co-addition with weight maps, dither offset measurement using cross-correlation, and object mask dilation. Although currently used for near-IR mosaic images, the modular software is concise and readily adaptable for reuse in other work. IRDR, available via anonymous ftp at ftp.ast.cam.ac.uk in pub/sabbey
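
    Dither-offset measurement by cross-correlation can be sketched with an FFT; the example below recovers an integer-pixel shift between two synthetic frames and only illustrates the idea, not the ANSI C routines in IRDR.

      # FFT cross-correlation to measure the integer-pixel offset between two frames.
      import numpy as np

      def measure_offset(ref, img):
          """Shift (dy, dx) to apply to img to register it onto ref."""
          corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
          dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
          ny, nx = ref.shape
          if dy > ny // 2:
              dy -= ny
          if dx > nx // 2:
              dx -= nx
          return dy, dx

      rng = np.random.default_rng(5)
      ref = rng.normal(size=(64, 64))
      img = np.roll(ref, shift=(3, -5), axis=(0, 1))
      print(measure_offset(ref, img))   # (-3, 5): rolling img by this re-aligns it with ref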

  12. Imaging standards for smart cards

    NASA Astrophysics Data System (ADS)

    Ellson, Richard N.; Ray, Lawrence A.

    1996-01-01

    'Smart cards' are plastic cards the size of credit cards which contain integrated circuits for the storage of digital information. The applications of these cards for image storage has been growing as card data capacities have moved from tens of bytes to thousands of bytes. This has prompted the recommendation of standards by the X3B10 committee of ANSI for inclusion in ISO standards for card image storage of a variety of image data types including digitized signatures and color portrait images. This paper reviews imaging requirements of the smart card industry, challenges of image storage for small memory devices, card image communications, and the present status of standards. The paper concludes with recommendations for the evolution of smart card image standards towards image formats customized to the image content and more optimized for smart card memory constraints.

  13. Analyzing huge pathology images with open source software

    PubMed Central

    2013-01-01

    Background Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides build up a technical challenge since the images occupy often several gigabytes and cannot be fully opened in a computer’s memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail at treating them, and the others require expensive hardware while still being prohibitively slow. Results We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Conclusions Our open source software enables dealing with huge images with standard software on average computers. They are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster. Virtual slides The virtual slide(s) for this article can be found here: http

  14. Analyzing huge pathology images with open source software.

    PubMed

    Deroulers, Christophe; Ameisen, David; Badoual, Mathilde; Gerin, Chloé; Granier, Alexandre; Lartaud, Marc

    2013-06-06

    Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides build up a technical challenge since the images occupy often several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail at treating them, and the others require expensive hardware while still being prohibitively slow. We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Our open source software enables dealing with huge images with standard software on average computers. They are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster. The virtual slide(s) for this article can be found here
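
    The mosaicking step described (dividing a huge image into small, optionally overlapping tiles) reduces to tile-grid bookkeeping that never needs the whole slide in memory; the sketch below shows only that bookkeeping and leaves out NDPI/TIFF input and output.

      # Tile-grid bookkeeping for mosaicking a huge slide into overlapping tiles.
      def tile_grid(width, height, tile=2048, overlap=128):
          """Yield (x0, y0, x1, y1) pixel boxes covering a width x height image."""
          step = tile - overlap
          for y0 in range(0, height, step):
              for x0 in range(0, width, step):
                  yield x0, y0, min(x0 + tile, width), min(y0 + tile, height)

      boxes = list(tile_grid(100_000, 80_000))
      print(len(boxes), boxes[0], boxes[-1])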

  15. A Standardized Software Reliability Measurement Methodology

    DTIC Science & Technology

    1991-12-01

    application areas: avionics; communications; command, control, communications, and intelligence; electronic warfare; and radar systems [81:6]. The study...reliability tools [29, 34, 61]. Goel states: Software reliability is a useful measure in planning and controlling resources during the development...of Goel to identify four classes of software reliability models: fault seeding models; input domain models; times between failure models; and

  16. Standard practices for the implementation of computer software

    NASA Technical Reports Server (NTRS)

    Irvine, A. P. (Editor)

    1978-01-01

    A standard approach to the development of computer programs is provided that covers the life cycle of software development from the planning and requirements phase through the software acceptance testing phase. All documents necessary to provide the required visibility into the software life cycle process are discussed in detail.

  17. Software for Simulation of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Richtsmeier, Steven C.; Singer-Berk, Alexander; Bernstein, Lawrence S.

    2002-01-01

    A package of software generates simulated hyperspectral images for use in validating algorithms that generate estimates of Earth-surface spectral reflectance from hyperspectral images acquired by airborne and spaceborne instruments. This software is based on a direct simulation Monte Carlo approach for modeling three-dimensional atmospheric radiative transport as well as surfaces characterized by spatially inhomogeneous bidirectional reflectance distribution functions. In this approach, 'ground truth' is accurately known through input specification of surface and atmospheric properties, and it is practical to consider wide variations of these properties. The software can treat both land and ocean surfaces and the effects of finite clouds with surface shadowing. The spectral/spatial data cubes computed by use of this software can serve both as a substitute for and a supplement to field validation data.

  18. A study of software standards used in the avionics industry

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1994-01-01

    Within the past decade, software has become an increasingly common element in computing systems. In particular, the role of software used in the aerospace industry, especially in life- or safety-critical applications, is rapidly expanding. This intensifies the need to use effective techniques for achieving and verifying the reliability of avionics software. Although certain software development processes and techniques are mandated by government regulating agencies, no one methodology has been shown to consistently produce reliable software. The knowledge base for designing reliable software simply has not reached the maturity of its hardware counterpart. In an effort to increase our understanding of software, the Langley Research Center conducted a series of experiments over 15 years with the goal of understanding why and how software fails. As part of this program, the effectiveness of current industry standards for the development of avionics is being investigated. This study involves the generation of a controlled environment to conduct scientific experiments on software processes.

  19. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense application by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
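
    VISAR's own algorithm is only summarized above. As a simplified, translation-only stand-in for the stabilization idea (the real software also models rotation and zoom), the sketch below estimates the inter-frame shift with phase correlation and undoes it; the library choices and the frame source are assumptions, not part of VISAR.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def stabilize(frames):
    """Align every frame to the first one using the translation estimated by
    phase correlation. 'frames' is a sequence of 2-D grayscale arrays."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    reference, stabilized = frames[0], [frames[0]]
    for frame in frames[1:]:
        offset, _error, _diffphase = phase_cross_correlation(reference, frame)
        stabilized.append(nd_shift(frame, shift=offset))
    return stabilized
```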

  20. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense application by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  1. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense application by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  2. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense application by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  3. Toward community standards and software for whole-cell modeling

    PubMed Central

    Bergmann, Frank T.; Chelliah, Vijayalakshmi; Hucka, Michael; Krantz, Marcus; Liebermeister, Wolfram; Mendes, Pedro; Myers, Chris J.; Pir, Pinar; Alaybeyoglu, Begum; Aranganathan, Naveen K; Baghalian, Kambiz; Bittig, Arne T.; Pinto Burke, Paulo E.; Cantarelli, Matteo; Chew, Yin Hoon; Costa, Rafael S.; Cursons, Joseph; Czauderna, Tobias; Goldberg, Arthur P.; Gómez, Harold F.; Hahn, Jens; Hameri, Tuure; Hernandez Gardiol, Daniel F.; Kazakiewicz, Denis; Kiselev, Ilya; Knight-Schrijver, Vincent; Knüpfer, Christian; König, Matthias; Lee, Daewon; Lloret-Villas, Audald; Mandrik, Nikita; Medley, J. Kyle; Moreau, Bertrand; Naderi-Meshkin, Hojjat; Palaniappan, Sucheendra K.; Priego-Espinosa, Daniel; Scharm, Martin; Sharma, Mahesh; Smallbone, Kieran; Stanford, Natalie J.; Song, Je-Hoon; Theile, Tom; Tokic, Milenko; Tomar, Namrata; Touré, Vasundra; Uhlendorf, Jannis; Varusai, Thawfeek M; Watanabe, Leandro H.; Wendland, Florian; Wolfien, Markus; Yurkovich, James T.; Zhu, Yan; Zardilis, Argyris; Zhukova, Anna; Schreiber, Falk

    2017-01-01

    Objective Whole-cell (WC) modeling is a promising tool for biological research, bioengineering, and medicine. However, substantial work remains to create accurate, comprehensive models of complex cells. Methods We organized the 2015 Whole-Cell Modeling Summer School to teach WC modeling and evaluate the need for new WC modeling standards and software by recoding a recently published WC model in SBML. Results Our analysis revealed several challenges to representing WC models using the current standards. Conclusion We, therefore, propose several new WC modeling standards, software, and databases. Significance We anticipate that these new standards and software will enable more comprehensive models. PMID:27305665
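
    The summer-school exercise above centred on recoding a whole-cell model in SBML. As a minimal, hedged illustration of the tooling side only (not part of the study), the sketch below loads and sanity-checks an SBML file with the python-libsbml bindings; the file name is hypothetical.

```python
import libsbml  # provided by the python-libsbml package

doc = libsbml.readSBMLFromFile("wholecell_model.xml")  # hypothetical file
if doc.getNumErrors() > 0:
    doc.printErrors()           # report parse/consistency problems
else:
    model = doc.getModel()
    print(f"{model.getNumSpecies()} species, "
          f"{model.getNumReactions()} reactions, "
          f"{model.getNumCompartments()} compartments")
```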

  4. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can be used for defense application by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  5. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can be used for defense application by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  6. Military Standard: Software Development and Documentation

    DTIC Science & Technology

    1994-12-05

    4.2.4.1 Safety assurance; 4.2.4.2 Security assurance; 4.2.4.3 Privacy...meet the following requirements. 4.2.4.1 Safety assurance. The developer shall identify as safety-critical those CSCIs or portions thereof whose failure...software, the developer shall develop a safety assurance strategy, including both tests and analyses, to assure that the requirements, design

  7. Contracting for Computer Software in Standardized Computer Languages

    PubMed Central

    Brannigan, Vincent M.; Dayhoff, Ruth E.

    1982-01-01

    The interaction between standardized computer languages and contracts for programs which use these languages is important to the buyer or seller of software. The rationale for standardization, the problems in standardizing computer languages, and the difficulties of determining whether the product conforms to the standard are issues which must be understood. The contract law processes of delivery, acceptance testing, acceptance, rejection, and revocation of acceptance are applicable to the contracting process for standard language software. Appropriate contract language is suggested for requiring strict compliance with a standard, and an overview of remedies is given for failure to comply.

  8. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies to catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. It would be especially useful for tornadoes, tracking whirling objects and helping to determine the tornado's wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  9. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies to catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. It would be especially useful for tornadoes, tracking whirling objects and helping to determine the tornado's wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  10. Telescope image modelling software in STSDAS

    NASA Technical Reports Server (NTRS)

    Hodge, P. E.; Eisenhamer, J. D.; Shaw, R. A.; Williamson, R. L., II

    1992-01-01

    The Telescope Image Modelling (TIM) system creates model point spread functions for optical systems based on ray-trace information. The original TIM runs only on VAX/VMS systems, but we are modifying the software to run under IRAF in the STSDAS package in order to make TIM more widely available. The current status of this project will be discussed. Initially the changes will be restricted to the user interface and replacing VAX-specific code. Soon thereafter the IMSL and NAG subroutines will be replaced by public-domain software, and some of the algorithms may be improved.

  11. Image analysis software and sample preparation demands

    NASA Astrophysics Data System (ADS)

    Roth, Karl n.; Wenzelides, Knut; Wolf, Guenter; Hufnagl, Peter

    1990-11-01

    Image analysis offers the opportunity to analyse many processes in medicine, biology and engineering in a quantitative manner. Experience shows that it is only by awareness of preparation methods and attention to software design that full benefit can be reaped from a picture processing system in the fields of cytology and histology. Some examples of special stains for automated analysis are given here and the effectiveness of commercially available software packages is investigated. The application of picture processing and the development of related special hardware and software have been increasing within the last years. As PC-based picture processing systems can be purchased at reasonable cost, more and more users are confronted with these problems. Experience shows that the quality of commercially available software packages differs and the requirements on the sample preparation needed for successful problem solutions are often underestimated. But as always, sample preparation is still the key to success in automated image analysis for cells and tissues. Hence, a problem solution requires permanent interaction between sample preparation methods and algorithm development.

  12. QA, QC and validation of imaging hardware and software

    SciTech Connect

    Weber, D.A.; Ivanovic, M.

    1988-01-01

    This paper addresses the development, testing, quality assurance (QA) and validation (V) of imaging hardware and software in nuclear medicine. QA, QC and V are discussed from the perspective of the nuclear medicine specialist, the regulator and the legal specialist. Complete testing programs and specific methods of QA, QC and V of nuclear medicine hardware and software are presented. NEMA standards for performance measurements of planar scintillation cameras and camera SPECT systems are reviewed; FDA policy on regulation of software used in medicine is discussed; legal aspects underlying the development of clinical software, and the practical value of copyright and patents are presented; and new, promising directions in hardware and software development are given. Since the primary focus of the meeting is QA, QC and V of gamma camera, single photon emission computed tomography (SPECT) and positron emission tomography (PET) systems and related imaging software, we introduce the presentations and proceedings with discussion of a few questions that are key to the use and understanding of these tests and measurements. 22 refs., 10 figs., 1 tab.

  13. Awake Animal Imaging Motion Tracking Software

    SciTech Connect

    Goddard, James

    2010-03-15

    The Awake Animal Motion Tracking Software code calculates the 3D movement of the head motion of a live, awake animal during a medical imaging scan. In conjunction with markers attached to the head, images acquired from multiple cameras are processed and marker locations precisely determined. Using off-line camera calibration data, the 3D positions of the markers are calculated along with a 6 degree of freedom position and orientation (pose) relative to a fixed initial position. This calculation is performed in real time at frame rates up to 30 frames per second. A time stamp with microsecond accuracy from a time base source is attached to each pose measurement.

  14. Sine-Fitting Software for IEEE Standard 1057

    SciTech Connect

    Blair, Jerome

    1999-05-01

    Software application that performs the calculations related to the sine-fit tests of IEEE Standard 1057/94. Example outputs, and explanations of these outputs, are provided to determine the important characteristics of the device under test. This application performs the calculations related to the sine-fit tests and uses the 4-parameter sine fit from IEEE Standard 1057-1994.
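
    For readers unfamiliar with the IEEE Std 1057 sine-fit tests: the three-parameter variant (known frequency) reduces to a linear least-squares problem, while the four-parameter variant additionally iterates on the frequency estimate. The NumPy sketch below shows the three-parameter fit only; it is an illustration, not code from the software described above, and the phase convention is one of several in use.

```python
import numpy as np

def sine_fit_3param(t, y, freq_hz):
    """Three-parameter sine fit (IEEE Std 1057): y ~ A*cos(wt) + B*sin(wt) + C
    with the frequency assumed known. Returns amplitude, phase, offset, residual RMS."""
    w = 2.0 * np.pi * freq_hz
    design = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(design, y, rcond=None)
    amplitude = np.hypot(a, b)
    phase = np.arctan2(-b, a)                       # one common convention
    residual_rms = np.sqrt(np.mean((y - design @ np.array([a, b, c])) ** 2))
    return amplitude, phase, c, residual_rms
```

    The residual of this fit is the quantity on which the standard's figures of merit, such as effective bits of a digitizer, are built.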

  15. Imaging Sensor Flight and Test Equipment Software

    NASA Technical Reports Server (NTRS)

    Freestone, Kathleen; Simeone, Louis; Robertson, Byran; Frankford, Maytha; Trice, David; Wallace, Kevin; Wilkerson, DeLisa

    2007-01-01

    The Lightning Imaging Sensor (LIS) is one of the components onboard the Tropical Rainfall Measuring Mission (TRMM) satellite, and was designed to detect and locate lightning over the tropics. The LIS flight code was developed to run on a single onboard digital signal processor, and has operated the LIS instrument since 1997 when the TRMM satellite was launched. The software provides controller functions to the LIS Real-Time Event Processor (RTEP) and onboard heaters, collects the lightning event data from the RTEP, compresses and formats the data for downlink to the satellite, collects housekeeping data and formats the data for downlink to the satellite, provides command processing and interface to the spacecraft communications and data bus, and provides watchdog functions for error detection. The Special Test Equipment (STE) software was designed to operate specific test equipment used to support the LIS hardware through development, calibration, qualification, and integration with the TRMM spacecraft. The STE software provides the capability to control instrument activation, commanding (including both data formatting and user interfacing), data collection, decompression, and display and image simulation. The LIS STE code was developed for the DOS operating system in the C programming language. Because of the many unique data formats implemented by the flight instrument, the STE software was required to comprehend the same formats, and translate them for the test operator. The hardware interfaces to the LIS instrument using both commercial and custom computer boards, requiring that the STE code integrate this variety into a working system. In addition, the requirement to provide RTEP test capability dictated the need to provide simulations of background image data with short-duration lightning transients superimposed. This led to the development of unique code used to control the location, intensity, and variation above background for simulated lightning strikes

  16. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
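
    ICER-3D's integer wavelet, context modeler and entropy coder are not reproduced here. Purely to illustrate the idea of a 3-D wavelet decomposition that decorrelates a hyperspectral cube in both the spatial and spectral dimensions, the sketch below uses the PyWavelets package on a synthetic cube; the wavelet, level and cube size are assumptions, not the flight algorithm.

```python
import numpy as np
import pywt

# Synthetic stand-in for a (bands, rows, cols) hyperspectral cube.
cube = np.random.default_rng(0).random((32, 64, 64))

# Two-level 3-D discrete wavelet transform; a real codec would then model
# and entropy-code the coefficients within each error-containment partition.
coeffs = pywt.wavedecn(cube, wavelet="db2", level=2)
flat, slices = pywt.coeffs_to_array(coeffs)

# Round-trip check: invert the transform and compare to the input cube.
restored = pywt.waverecn(
    pywt.array_to_coeffs(flat, slices, output_format="wavedecn"), wavelet="db2")
print(np.allclose(cube, restored[:32, :64, :64]))
```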

  17. Terahertz/mm wave imaging simulation software

    NASA Astrophysics Data System (ADS)

    Fetterman, M. R.; Dougherty, J.; Kiser, W. L., Jr.

    2006-10-01

    We have developed a mm wave/terahertz imaging simulation package from COTS graphic software and custom MATLAB code. In this scheme, a commercial ray-tracing package was used to simulate the emission and reflections of radiation from scenes incorporating highly realistic imagery. Accurate material properties were assigned to objects in the scenes, with values obtained from the literature, and from our own terahertz spectroscopy measurements. The images were then post-processed with custom Matlab code to include the blur introduced by the imaging system and noise levels arising from system electronics and detector noise. The Matlab code was also used to simulate the effect of fog, an important aspect for mm wave imaging systems. Several types of image scenes were evaluated, including bar targets, contrast detail targets, a person in a portal screening situation, and a sailboat on the open ocean. The images produced by this simulation are currently being used as guidance for a 94 GHz passive mm wave imaging system, but have broad applicability for frequencies extending into the terahertz region.
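
    As a rough sketch of the post-processing stage described above (written here in Python rather than the authors' MATLAB code, with hypothetical blur and noise levels), one could convolve the ray-traced radiometric scene with a Gaussian system PSF and add detector noise:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(scene, psf_sigma_px=2.5, noise_sigma=0.4, seed=0):
    """Apply a Gaussian blur standing in for the imaging system's PSF, then add
    zero-mean Gaussian detector noise. Both parameters are hypothetical."""
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(np.asarray(scene, dtype=float), sigma=psf_sigma_px)
    return blurred + rng.normal(0.0, noise_sigma, size=blurred.shape)
```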

  18. Standardized development of computer software. Part 1: Methods

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.

  19. Image analysis software versus direct anthropometry for breast measurements.

    PubMed

    Quieregatto, Paulo Rogério; Hochman, Bernardo; Furtado, Fabianne; Machado, Aline Fernanda Perez; Sabino Neto, Miguel; Ferreira, Lydia Masako

    2014-10-01

    To compare breast measurements performed using the software packages ImageTool®, AutoCAD® and Adobe Photoshop® with direct anthropometric measurements. Points were marked on the breasts and arms of 40 volunteer women aged between 18 and 60 years. When connecting the points, seven linear segments and one angular measurement on each half of the body, and one medial segment common to both body halves were defined. The volunteers were photographed in a standardized manner. Photogrammetric measurements were performed by three independent observers using the three software packages and compared to direct anthropometric measurements made with calipers and a protractor. Measurements obtained with AutoCAD® were the most reproducible and those made with ImageTool® were the most similar to direct anthropometry, while measurements with Adobe Photoshop® showed the largest differences. Except for angular measurements, significant differences were found between measurements of line segments made using the three software packages and those obtained by direct anthropometry. AutoCAD® provided the highest precision and intermediate accuracy; ImageTool® had the highest accuracy and lowest precision; and Adobe Photoshop® showed intermediate precision and the worst accuracy among the three software packages.

  20. Data to Pictures to Data: Outreach Imaging Software and Metadata

    NASA Astrophysics Data System (ADS)

    Levay, Z.

    2011-07-01

    A convergence between astronomy science and digital photography has enabled a steady stream of visually rich imagery from state-of-the-art data. The accessibility of hardware and software has facilitated an explosion of astronomical images for outreach, from space-based observatories, ground-based professional facilities and among the vibrant amateur astrophotography community. Producing imagery from science data involves a combination of custom software to understand FITS data (FITS Liberator), off-the-shelf, industry-standard software to composite multi-wavelength data and edit digital photographs (Adobe Photoshop), and application of photo/image-processing techniques. Some additional effort is needed to close the loop and enable this imagery to be conveniently available for various purposes beyond web and print publication. The metadata paradigms in digital photography are now complying with FITS and science software to carry information such as keyword tags and world coordinates, enabling these images to be usable in more sophisticated, imaginative ways exemplified by Sky in Google Earth and World Wide Telescope.

  1. Improved detection of bone metastases from lung cancer in the thoracic cage using 5- and 1-mm axial images versus a new CT software generating rib unfolding images: comparison with standard ¹⁸F-FDG-PET/CT.

    PubMed

    Homann, Georg; Mustafa, Deedar F; Ditt, Hendrik; Spengler, Werner; Kopp, Hans-Georg; Nikolaou, Konstantin; Horger, Marius

    2015-04-01

    To evaluate the performance of a dedicated computed tomography (CT) software called "bone reading" generating rib unfolded images for improved detection of rib metastases in patients with lung cancer in comparison to readings of 5- and 1-mm axial CT images and (18)F-Fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT). Ninety consecutive patients who underwent (18)F-FDG-PET/CT and chest CT scanning between 2012 and 2014 at our institution were analyzed retrospectively. Chest CT scans with 5- and 1-mm slice thickness were interpreted blindly and separately focused on the detection of rib metastases (location, number, cortical vs. medullary, and osteoblastic vs. sclerotic). Subsequent image analysis of unfolded 1 mm-based CT rib images was performed. For all three data sets the reading time was registered. Finally, results were compared to those of FDG-PET. Validation was based on FDG-PET positivity for osteolytic and mixed osteolytic/osteoblastic focal rib lesions and follow-up for sclerotic PET-negative lesions. A total of 47 metastatic rib lesions were found on FDG-PET/CT plus another 30 detected by CT bone reading and confirmed by follow-up CT. Twenty-nine lesions were osteolytic, 14 were mixed osteolytic/osteoblastic, and 34 were sclerotic. On a patient-based analysis, CT (5 mm), CT (1 mm), and CT (1-mm bone reading) yielded a sensitivity, specificity, and accuracy of 76.5/97.3/93, 81.3/97.3/94, and 88.2/95.9/92, respectively. On segment-based (unfolded rib) analysis, the sensitivity, specificity, and accuracy of the three evaluations were 47.7/95.7/67, 59.5/95.8/77, and 94.8/88.2/92, respectively. Reading time for 5 mm/1 mm axial images and unfolded images was 40.5/50.7/21.56 seconds, respectively. The use of unfolded rib images in patients with lung cancer improves sensitivity and specificity of rib metastasis detection in comparison to 5- and 1-mm CT slice reading. Moreover, it may reduce the reading time. Copyright © 2015 AUR

  2. Standardization: Hardware and Software Standardization Can Reduce Costs and Save Time

    ERIC Educational Resources Information Center

    Brooks-Young, Susan

    2005-01-01

    Sadly, technical support doesn't come cheap. One money-saving strategy that's gained popularity among school technicians is equipment and software standardization. When it works, standardization can be very effective. However, standardization has its drawbacks. This article discusses the advantages and disadvantages of standardization.

  3. Standardization: Hardware and Software Standardization Can Reduce Costs and Save Time

    ERIC Educational Resources Information Center

    Brooks-Young, Susan

    2005-01-01

    Sadly, technical support doesn't come cheap. One money-saving strategy that's gained popularity among school technicians is equipment and software standardization. When it works, standardization can be very effective. However, standardization has its drawbacks. This article discusses the advantages and disadvantages of standardization.

  4. Free and open-source software application for the evaluation of coronary computed tomography angiography images.

    PubMed

    Hadlich, Marcelo Souza; Oliveira, Gláucia Maria Moraes; Feijóo, Raúl A; Azevedo, Clerio F; Tura, Bernardo Rangel; Ziemer, Paulo Gustavo Portela; Blanco, Pablo Javier; Pina, Gustavo; Meira, Márcio; Souza e Silva, Nelson Albuquerque de

    2012-10-01

    The standardization of images used in Medicine was performed in 1993 using the DICOM (Digital Imaging and Communications in Medicine) standard. Several tests use this standard, and it is increasingly necessary to design software applications capable of handling this type of image; however, these software applications are not usually free and open-source, which hinders their adaptation to the most diverse interests. To develop and validate a free and open-source software application capable of handling DICOM coronary computed tomography angiography images. We developed and tested the ImageLab software in the evaluation of 100 tests randomly selected from a database. We carried out 600 tests divided between two observers using ImageLab and another software sold with Philips Brilliance computed tomography appliances in the evaluation of coronary lesions and plaques around the left main coronary artery (LMCA) and the anterior descending artery (ADA). To evaluate intraobserver, interobserver and intersoftware agreements, we used simple and kappa statistics agreements. The agreements observed between software applications were generally classified as substantial or almost perfect in most comparisons. The ImageLab software agreed with the Philips software in the evaluation of coronary computed tomography angiography tests, especially in patients without lesions, with lesions < 50% in the LMCA and < 70% in the ADA. The agreement for lesions > 70% in the ADA was lower, but this is also observed when the anatomical reference standard is used.
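
    ImageLab itself is not reproduced here. As a hedged illustration of the kind of DICOM handling such an application needs, the sketch below reads a hypothetical coronary CTA slice with the pydicom package and converts stored pixel values to Hounsfield units using the standard rescale tags; it is not the authors' code.

```python
import pydicom

ds = pydicom.dcmread("coronary_cta_slice.dcm")   # hypothetical file name
print(ds.Modality, ds.Rows, "x", ds.Columns, ds.PixelSpacing)

# Convert stored values to Hounsfield units using the DICOM rescale tags
# (present in CT objects; a NumPy-capable pixel-data handler is required).
hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
```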

  5. Image processing software for imaging spectrometry data analysis

    NASA Technical Reports Server (NTRS)

    Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

  6. Image processing software for imaging spectrometry data analysis

    NASA Astrophysics Data System (ADS)

    Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-02-01

    Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

  7. Process-industry CAPE-OPEN software standard overview

    SciTech Connect

    Zitney, S.

    2009-01-01

    CAPE-OPEN (CAPE is short for Computer Aided Process Engineering) is a standard for writing computer software interfaces. It is mainly applied in process engineering where it enables a standardized communication between process simulators (e.g. Aspen Plus) and products developed by ourselves. The advantage of CAPE-OPEN is that these products are applicable to more than just one process simulator; they are aimed at all process simulators that are CAPE-OPEN compliant.

  8. Standards guide for space and earth sciences computer software

    NASA Technical Reports Server (NTRS)

    Mason, G.; Chapman, R.; Klinglesmith, D.; Linnekin, J.; Putney, W.; Shaffer, F.; Dapice, R.

    1972-01-01

    Guidelines for the preparation of systems analysis and programming work statements are presented. The data is geared toward the efficient administration of available monetary and equipment resources. Language standards and the application of good management techniques to software development are emphasized.

  9. Software for Viewing Landsat Mosaic Images

    NASA Technical Reports Server (NTRS)

    Watts, Zack; Farve, Catharine L.; Harvey, Craig

    2003-01-01

    A Windows-based computer program has been written to enable novice users (especially educators and students) to view images of large areas of the Earth (e.g., the continental United States) generated from image data acquired in the Landsat observations performed circa the year 1990. The large-area images are constructed as mosaics from the original Landsat images, which were acquired in several wavelength bands and each of which spans an area (in effect, one tile of a mosaic) of .5 in latitude by .6 in longitude. Whereas the original Landsat data are registered on a universal transverse Mercator (UTM) grid, the program converts the UTM coordinates of a mouse pointer in the image to latitude and longitude, which are continuously updated and displayed as the pointer is moved. The mosaic image currently on display can be exported as a Windows bitmap file. Other images (e.g., of state boundaries or interstate highways) can be overlaid on Landsat mosaics. The program interacts with the user via standard toolbar, keyboard, and mouse user interfaces. The program is supplied on a compact disk along with tutorial and educational information.
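
    The abstract mentions converting the mouse pointer's UTM coordinates to latitude and longitude on the fly. A minimal sketch of that conversion using the pyproj package (not the program's own code; the UTM zone and coordinates are made up for illustration) might look like this:

```python
from pyproj import Transformer

# WGS84 / UTM zone 16N -> geographic coordinates; the zone is illustrative only.
utm_to_geographic = Transformer.from_crs("EPSG:32616", "EPSG:4326", always_xy=True)

easting, northing = 500000.0, 4649776.0      # metres, hypothetical pointer position
lon, lat = utm_to_geographic.transform(easting, northing)
print(f"lat {lat:.5f}, lon {lon:.5f}")
```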

  10. MRI/TRUS fusion software-based targeted biopsy: the new standard of care?

    PubMed

    Manfredi, M; Costa Moretti, T B; Emberton, M; Villers, A; Valerio, M

    2015-09-01

    The advent of multiparametric MRI has made it possible to change the way in which prostate biopsy is done, allowing biopsies to be directed to suspicious lesions rather than taken randomly. The subject of this review relates to a computer-assisted strategy, the MRI/US fusion software-based targeted biopsy, and to its performance compared to the other sampling methods. Different devices with different methods to register MR images to live TRUS are currently in use to allow software-based targeted biopsy. Main clinical indications of MRI/US fusion software-based targeted biopsy are re-biopsy in men with persistent suspicion of prostate cancer after a first negative standard biopsy and the follow-up of patients under active surveillance. Some studies have compared MRI/US fusion software-based targeted versus standard biopsy. In men at risk with an MRI-suspicious lesion, targeted biopsy consistently detects more men with clinically significant disease as compared to standard biopsy; some studies have also shown decreased detection of insignificant disease. Only two studies directly compared MRI/US fusion software-based targeted biopsy with MRI/US fusion visual targeted biopsy, and the diagnostic ability seems to be in favor of the software approach. To date, no study comparing software-based targeted biopsy against in-bore MRI biopsy is available. The new software-based targeted approach seems to have the characteristics to be added in the standard pathway for achieving accurate risk stratification. Once reproducibility and cost-effectiveness have been verified, the actual issue will be to determine whether MRI/TRUS fusion software-based targeted biopsy represents an add-on test or a replacement for standard TRUS biopsy.

  11. Electrophoretic gel image analysis software for the molecular biology laboratory.

    PubMed

    Redman, T; Jacobs, T

    1991-06-01

    We present GelReader 1.0, a microcomputer program designed to make precision, digital analysis of one-dimensional electrophoretic gels accessible to the molecular biology laboratory of modest means. Images of electrophoretic gels are digitized via a desktop flatbed scanner from instant photographs, autoradiograms or chromogenically stained blotting media. GelReader is then invoked to locate lanes and bands and generate a report of molecular weights of unknowns, based on specified sets of standards. Frequently used standards can be stored in the program. Lanes and bands can be added or removed, based upon users' subjective preferences. A unique lane histogram feature facilitates precise manual addition of bands missed by the software. Image enhancement features include palette manipulation, histogram equalization, shadowing and magnification. The user interface strikes a balance between program autonomy and user intervention, in recognition of the variability in electrophoretic gel quality and users' analytical needs.
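
    The core calculation GelReader performs, estimating molecular weights of unknowns from a lane of standards, amounts to a log-linear calibration of fragment size against migration distance. A short sketch with made-up ladder values (not code from the program) follows.

```python
import numpy as np

# Hypothetical ladder: migration distances (mm) and known fragment sizes (bp).
std_distance = np.array([12.0, 18.5, 25.0, 33.0, 41.0])
std_size_bp = np.array([10000, 5000, 2000, 1000, 500])

# Electrophoretic mobility is roughly linear in log10(size); fit a line.
slope, intercept = np.polyfit(std_distance, np.log10(std_size_bp), 1)

def estimate_size(distance_mm):
    """Interpolate the size (bp) of an unknown band from its migration distance."""
    return 10 ** (slope * distance_mm + intercept)

print(round(estimate_size(29.0)))   # unknown band at 29 mm
```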

  12. Introduction to color facsimile: hardware, software, and standards

    NASA Astrophysics Data System (ADS)

    Lee, Daniel T. L.

    1996-03-01

    The design of a color facsimile machine presents a number of unique challenges. From the technical side it requires a very efficient, seamless integration of algorithms and architectures in image scanning, compression, color processing, communications and printing. From the standardization side, it requires that agreements on the color representation space, negotiation protocols and coding methods must be reached through formal international standardization process. This paper presents an introduction to the overall development of color facsimile. An overview of the recent development of the international Color Facsimile Standard is first presented. The standard enables the transmission of continuous-tone colors and gray-scale images in Group 3 (over conventional telephone lines) and Group 4 (over digital lines) facsimile services, with backwards compatibility to current black and white facsimile. The standard provides specifications on color representation and color image encoding methods as well as extensions to current facsimile protocols to enable the transmission of color images. The technical challenges in implementing the color facsimile standard on existing facsimile machines are described next. The integration of algorithms and architectures in color scanning, compression, color processing, transmission and rendering of received hardcopy facsimile in a color imaging pipeline is described. Lastly, the current status on softcopy color facsimile standardization is reported.

  13. Standardizing Activation Analysis: New Software for Photon Activation Analysis

    SciTech Connect

    Sun, Z. J.; Wells, D.; Green, J.; Segebade, C.

    2011-06-01

    Photon Activation Analysis (PAA) of environmental, archaeological and industrial samples requires extensive data analysis that is susceptible to error. To save time and manpower and to minimize error, a computer program was designed, built and implemented using SQL, Access 2007 and asp.net technology to automate this process. Based on the peak information of the spectrum and assisted by its PAA library, the program automatically identifies elements in the samples and calculates their concentrations and respective uncertainties. The software can also be operated in browser/server mode, which makes it possible to use it anywhere the internet is accessible. By switching the nuclide library and the related formulas behind it, the new software can be easily expanded to neutron activation analysis (NAA), charged particle activation analysis (CPAA) or proton-induced X-ray emission (PIXE). Implementation of this would standardize the analysis of nuclear activation data. Results from this software were compared to standard PAA analysis with excellent agreement. With minimum input from the user, the software has proven to be fast, user-friendly and reliable.
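
    The abstract does not give the program's formulas. For orientation only, a common way to compute an element concentration in activation analysis is the comparator (relative) method, sketched below with a simple decay correction and the assumption of identical irradiation and counting conditions for sample and standard; all symbols and values are assumptions, not the authors' implementation.

```python
import math

def concentration_comparator(area_sample, mass_sample, t_decay_sample,
                             area_std, mass_std, t_decay_std,
                             conc_std, half_life):
    """Comparator method: ratio the decay-corrected specific peak area of the
    sample to that of a co-irradiated standard of known concentration."""
    lam = math.log(2.0) / half_life
    specific_sample = area_sample * math.exp(lam * t_decay_sample) / mass_sample
    specific_std = area_std * math.exp(lam * t_decay_std) / mass_std
    return conc_std * specific_sample / specific_std
```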

  14. Software Maintenance: Improvement through Better Development Standards and Documentation.

    DTIC Science & Technology

    1982-02-22

    criteria for achieving maintainability and evaluates Weapons Specification WS 8506 and MIL-STD 1679 against these criteria. Using these documents as...is Maintainability; V. EVALUATION OF WEAPONS SPECIFICATION WS 8506; VI. EVALUATION OF MILITARY STANDARD MIL-STD 1679...techniques which were reviewed (e.g., MIL-STD 1679) were designed to be used for software development and not for maintenance, specifically. This

  15. The role of open-source software in innovation and standardization in radiology.

    PubMed

    Erickson, Bradley J; Langer, Steve; Nagy, Paul

    2005-11-01

    The use of open-source software (OSS), in which developers release the source code to applications they have developed, is popular in the software industry. This is done to allow others to modify and improve software (which may or may not be shared back to the community) and to allow others to learn from the software. Radiology was an early participant in this model, supporting OSS that implemented the ACR-National Electrical Manufacturers Association (now Digital Imaging and Communications in Medicine) standard for medical image communications. In radiology and in other fields, OSS has promoted innovation and the adoption of standards. Popular OSS is of high quality because access to source code allows many people to identify and resolve errors. Open-source software is analogous to the peer-review scientific process: one must be able to see and reproduce results to understand and promote what is shared. The authors emphasize that support for OSS need not threaten vendors; most vendors embrace and benefit from standards. Open-source development does not replace vendors but more clearly defines their roles, typically focusing on areas in which proprietary differentiators benefit customers and on professional services such as implementation planning and service. Continued support for OSS is essential for the success of our field.

  16. Software components for medical image visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Starreveld, Yves P.; Gobbi, David G.; Finnis, Kirk; Peters, Terence M.

    2001-05-01

    Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy to understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments make it desirable to have a hardware and operating system as an independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL based Visualization Toolkit as a base, we have developed a set of components that implement the above mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics Iris, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system and a frame-based stereotaxy neurosurgery planning tool. The frame-based stereotaxy module has been

  17. Digital image processing software system using an array processor

    SciTech Connect

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-03-10

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its application to a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table.

  18. Retina Image Screening and Analysis Software Version 2.0

    SciTech Connect

    Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Aykac, Deniz

    2009-04-01

    The software allows physicians or researchers to ground-truth images of retinas, identifying key physiological features and lesions that are indicative of disease. The software features methods to automatically detect the physiological features and lesions. The software contains code to measure the quality of images received from a telemedicine network; create and populate a database for a telemedicine network; review and report the diagnosis of a set of images; and also contains components to transmit images from a Zeiss camera to the network through SFTP.

  19. Integration of CMM software standards for nanopositioning and nanomeasuring machines

    NASA Astrophysics Data System (ADS)

    Sparrer, E.; Machleidt, T.; Hausotte, T.; Manske, E.; Franke, K.-H.

    2011-06-01

    The paper focuses on the utilization of nanopositioning and nanomeasuring machines as three-dimensional coordinate measuring machines by means of the internationally harmonized communication protocol Inspection plus plus for Dimensional Measurement Equipment (abbreviated I++DME). I++DME was designed in 1999 to enable the interoperability of different measuring hardware, such as coordinate measuring machines, form testers, and camshaft or crankshaft measuring machines, with a priori unknown third-party controlling and analyzing software. Our recent work was focused on the implementation of a modular, standard-conforming command interpreter server for the Inspection plus plus protocol. This communication protocol enables the application of I++DME-compliant graphical controlling software, which is easy to operate and less error prone than the currently used textual programming via MathWorks MATLAB. The function and architecture of the I++DME command interpreter are discussed, and the principle of operation is demonstrated by means of an example controlling a nanopositioning and nanomeasuring machine with Hexagon Metrology's controlling and analyzing software QUINDOS 7 via the I++DME command interpreter server.

  20. Perspective automated inkless fingerprinting imaging software for fingerprint research.

    PubMed

    Nanakorn, Somsong; Poosankam, Pongsakorn; Mongconthawornchai, Paiboon

    2008-01-01

    Fingerprint collection using ink-and-paper images is a conventional method, i.e., ink-print and transparent-adhesive-tape techniques, which are slow and cumbersome. This is a pilot study for software development aimed at automated, inkless fingerprint imaging using a fingerprint sensor (a development kit of the IT WORKS Company Limited), a PC camera, and a printer. The software was developed to connect with the fingerprint sensor for collection of fingerprint images, which are recorded onto a hard disk. It was also developed to connect with the PC camera for recording a face image of the person whose fingerprints are taken, or images of identification cards. These images are appropriately arranged in a PDF file prior to printing. This software is able to scan ten fingerprints and store high-quality electronic fingertip images rapidly, producing large, clear images without smudges of ink or carbon. This fingerprint technology has potential applications in public health and clinical medicine research.

  1. Automatic Image Registration Using Free and Open Source Software

    NASA Astrophysics Data System (ADS)

    Giri Babu, D.; Raja Shekhar, S. S.; Chandrasekar, K.; Sesha Sai, M. V. R.; Diwakar, P. G.; Dadhwal, V. K.

    2014-11-01

    Image registration is the most critical operation in remote sensing applications to enable location based referencing and analysis of earth features. This is the first step for any process involving identification, time series analysis or change detection using a large set of imagery over a region. Most of the reliable procedures involve time consuming and laborious manual methods of finding the corresponding matching features of the input image with respect to the reference. Moreover, because the process involves human interaction, it does not yield consistent results across repeated operations at different times. Automated procedures rely on accurately determining the matching locations or points from both the images under comparison, and the procedures are robust and consistent over time. Different algorithms are available to achieve this, based on pattern recognition, feature-based detection, similarity techniques, etc. In the present study and implementation, correlation-based methods have been used, with an improvement provided by a newly developed technique for identifying and pruning false match points. Free and Open Source Software (FOSS) has been used to develop the methodology to reach a wider audience, without any dependency on COTS (commercial off-the-shelf) software. Standard deviation from the foci of the ellipse of correlated points is a statistical means of ensuring the best match of the points of interest based on both intensity values and location correspondence. The methodology is developed and standardised by enhancements to meet the registration requirements of remote sensing imagery. Results have shown a performance improvement, nearly matching visual techniques, and the methodology has been implemented in operational remote sensing projects. The main advantage of the proposed methodology is its viability in a production-mode environment. This paper also shows that the visualization capabilities of MapWinGIS, GDAL's image handling abilities and OSSIM's correlation facility can be efficiently
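
    The correlation matching described above can be prototyped with standard FOSS building blocks. As a hedged sketch (using scikit-image rather than the authors' MapWinGIS/GDAL/OSSIM stack), a tie point is located by normalized cross-correlation and accepted only above a score threshold; the threshold and function names are assumptions for illustration.

```python
import numpy as np
from skimage.feature import match_template

def find_tie_point(reference, chip, min_score=0.8):
    """Locate 'chip' (a small patch from the input image) inside 'reference' by
    normalized cross-correlation, rejecting weak matches."""
    surface = match_template(reference, chip)           # correlation surface
    row, col = np.unravel_index(np.argmax(surface), surface.shape)
    score = surface[row, col]
    return (col, row, score) if score >= min_score else None
```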

  2. Software to model AXAF image quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees

    1993-01-01

    This draft final report describes the work performed under this delivery order from May 1992 through June 1993. The purpose of this contract was to enhance and develop an integrated optical performance modeling software for complex x-ray optical systems such as AXAF. The GRAZTRACE program developed by the MSFC Optical Systems Branch for modeling VETA-I was used as the starting baseline program. The original program was a large single file program and, therefore, could not be modified very efficiently. The original source code has been reorganized, and a 'Make Utility' has been written to update the original program. The new version of the source code consists of 36 small source files to make it easier for the code developer to manage and modify the program. A user library has also been built and a 'Makelib' utility has been furnished to update the library. With the user library, the users can easily access the GRAZTRACE source files and build a custom library. A user manual for the new version of GRAZTRACE has been compiled. The plotting capability for the 3-D point spread functions and contour plots has been provided in the GRAZTRACE using the graphics package DISPLAY. The Graphics emulator over the network has been set up for programming the graphics routine. The point spread function and the contour plot routines have also been modified to display the plot centroid, and to allow the user to specify the plot range, and the viewing angle options. A Command Mode version of GRAZTRACE has also been developed. More than 60 commands have been implemented in a Code-V like format. The functions covered in this version include data manipulation, performance evaluation, and inquiry and setting of internal parameters. The user manual for these commands has been formatted as in Code-V, showing the command syntax, synopsis, and options. An interactive on-line help system for the command mode has also been accomplished to allow the user to find valid commands, command syntax

  3. Survey of geographical information system and image processing software

    USGS Publications Warehouse

    Vanderzee, D.; Singh, A.

    1995-01-01

    The Global Resource Information Database—a part of the United Nations Environment Programme—conducts a bi-annual survey of geographical information system (GIS) and image processing (IP) software. This survey makes information about software products available in developing countries. The 1993 survey showed that the number of installations of GIS, IP, and related software products increased dramatically from 1991 to 1993, mostly in North America and Europe.

  4. Quantification of fungal infection of leaves with digital images and Scion Image software.

    PubMed

    Goodwin, Paul H; Hsiang, Tom

    2010-01-01

    Digital image analysis has been used to distinguish and quantify leaf color changes arising from a variety of factors. Its use to assess the percentage of leaf area with color differences caused by plant disease symptoms, such as necrosis, chlorosis, or sporulation, can provide a rigorous and quantitative means of assessing disease severity. A method is described for measuring symptoms of different fungal foliar infections that involves capturing the image with a standard flatbed scanner or digital camera, followed by quantification of the area where the color has been affected by fungal infection. The method uses the freely available program Scion Image for Windows or Mac, which is derived from the public-domain software NIH Image. The method has thus far been used to quantify the percentage of tissue with necrosis, chlorosis, or sporulation on leaves of a variety of plants with several different diseases (anthracnose, apple scab, powdery mildew or rust).

  5. Image standards in tissue-based diagnosis (diagnostic surgical pathology).

    PubMed

    Kayser, Klaus; Görtler, Jürgen; Goldmann, Torsten; Vollmer, Ekkehard; Hufnagl, Peter; Kayser, Gian

    2008-04-18

    Progress in automated image analysis, virtual microscopy, hospital information systems, and interdisciplinary data exchange requires image standards to be applied in tissue-based diagnosis. The aim is to describe the theoretical background, practical experiences and comparable solutions in other medical fields in order to promote image standards applicable to diagnostic pathology. THEORY AND EXPERIENCES: Images used in tissue-based diagnosis present with pathology-specific characteristics. It seems appropriate to discuss their characteristics and potential standardization in relation to the levels of hierarchy in which they appear. All levels can be divided into legal, medical, and technological properties. Standards applied to the first level include regulations or aims to be fulfilled. In legal properties, they have to regulate features of privacy, image documentation, transmission, and presentation; in medical properties, features of disease-image combination, human diagnostics, automated information extraction, and archive retrieval and access; and in technological properties, features of image acquisition, display, formats, transfer speed, safety, and system dynamics. The next lower (second) level has to implement the prescriptions of the upper one, i.e. describe how they are implemented. Legal aspects should demand secure encryption for privacy of all patient-related data, image archives that include all images used for diagnostics for a period of at least 10 years, accurate annotations of dates and viewing, and precise hardware and software information. Medical aspects should demand standardized patient files such as DICOM 3 or HL7, including history and previous examinations; information on image display hardware and software, on image resolution and fields of view, and on the relation between the sizes of biological objects and image sizes; and access to archives and retrieval. Technological aspects should deal with image acquisition systems (resolution, colour temperature, focus

  6. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  7. Software Helps Extract Information From Astronomical Images

    NASA Technical Reports Server (NTRS)

    Hartley, Booth; Ebert, Rick; Laughlin, Gaylin

    1995-01-01

    PAC Skyview 2.0 is interactive program for display and analysis of astronomical images. Includes large set of functions for display, analysis and manipulation of images. "Man" pages with descriptions of functions and examples of usage included. Skyview used interactively or in "server" mode, in which another program calls Skyview and executes commands itself. Skyview capable of reading image data files of four types, including those in FITS, S, IRAF, and Z formats. Written in C.

  9. Software for Acquiring Image Data for PIV

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Cheung, H. M.; Kressler, Brian

    2003-01-01

    PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter-timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, in which a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter/timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.
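
    The acquired frame-straddled image pairs are later reduced to velocity vectors (in PIVPROC) by cross-correlating small interrogation windows from the two exposures. The sketch below is a generic, FFT-based version of that correlation step in Python/NumPy, offered as an assumption-laden illustration rather than the NASA code.

        # Illustrative interrogation-window cross-correlation (not PIVPROC):
        # the peak of the circular cross-correlation gives the mean particle
        # displacement, in pixels, between the two exposures.
        import numpy as np

        def window_displacement(win_a, win_b):
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
            corr = np.fft.fftshift(corr)          # zero displacement at centre
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            centre = np.array(corr.shape) // 2
            return np.array(peak) - centre        # (dy, dx) in pixels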

  10. Development and implementation of software systems for imaging spectroscopy

    USGS Publications Warehouse

    Boardman, J.W.; Clark, R.N.; Mazer, A.S.; Biehl, L.L.; Kruse, F.A.; Torson, J.; Staenz, K.

    2006-01-01

    Specialized software systems have played a crucial role throughout the twenty-five year course of the development of the new technology of imaging spectroscopy, or hyperspectral remote sensing. By their very nature, hyperspectral data place unique and demanding requirements on the computer software used to visualize, analyze, process and interpret them. Often described as a marriage of the two technologies of reflectance spectroscopy and airborne/spaceborne remote sensing, imaging spectroscopy, in fact, produces data sets with unique qualities, unlike previous remote sensing or spectrometer data. Because of these unique spatial and spectral properties hyperspectral data are not readily processed or exploited with legacy software systems inherited from either of the two parent fields of study. This paper provides brief reviews of seven important software systems developed specifically for imaging spectroscopy.

  11. Increasing software testability with standard access and control interfaces

    NASA Technical Reports Server (NTRS)

    Nikora, Allen P; Some, Raphael R.; Tamir, Yuval

    2003-01-01

    We describe an approach to improving the testability of complex software systems with software constructs modeled after the hardware JTAG bus, which is used to provide visibility and controllability in testing digital circuits.

  12. gr-MRI: A software package for magnetic resonance imaging using software defined radios

    NASA Astrophysics Data System (ADS)

    Hasselwander, Christopher J.; Cao, Zhipeng; Grissom, William A.

    2016-09-01

    The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events was also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.
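
    For Cartesian acquisitions such as the gradient-recalled and spin echo sequences above, image reconstruction reduces to a 2-D inverse FFT of the recorded k-space matrix. The snippet below is a generic NumPy sketch of that step, not code taken from gr-MRI.

        # Generic Cartesian MRI reconstruction (illustrative): one complex
        # k-space line per phase-encode step, reconstructed with a 2-D IFFT.
        import numpy as np

        def reconstruct_magnitude(kspace):
            image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
            return np.abs(image)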

  13. gr-MRI: A software package for magnetic resonance imaging using software defined radios.

    PubMed

    Hasselwander, Christopher J; Cao, Zhipeng; Grissom, William A

    2016-09-01

    The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events was also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.

  14. MOSAIC: Software for creating mosaics from collections of images

    NASA Technical Reports Server (NTRS)

    Varosi, F.; Gezari, D. Y.

    1992-01-01

    We have developed a powerful, versatile image processing and analysis software package called MOSAIC, designed specifically for the manipulation of digital astronomical image data obtained with (but not limited to) two-dimensional array detectors. The software package is implemented using the Interactive Data Language (IDL), and incorporates new methods for processing, calibration, analysis, and visualization of astronomical image data, stressing effective methods for the creation of mosaic images from collections of individual exposures, while at the same time preserving the photometric integrity of the original data. Since IDL is available on many computers, the MOSAIC software runs on most UNIX and VAX workstations with the X-Windows or Sun View graphics interface.

  15. Quantifying fungal infection of plant leaves by digital image analysis using Scion Image software.

    PubMed

    Wijekoon, C P; Goodwin, P H; Hsiang, T

    2008-08-01

    A digital image analysis method previously used to evaluate leaf color changes due to nutritional changes was modified to measure the severity of several foliar fungal diseases. Images captured with a flatbed scanner or digital camera were analyzed with a freely available software package, Scion Image, to measure changes in leaf color caused by fungal sporulation or tissue damage. High correlations were observed between the percent diseased leaf area estimated by Scion Image analysis and the percent diseased leaf area from leaf drawings. These drawings of various foliar diseases came from a disease key previously developed to aid in visual estimation of disease severity. For leaves of Nicotiana benthamiana inoculated with different spore concentrations of the anthracnose fungus Colletotrichum destructivum, a high correlation was found between the percent diseased tissue measured by Scion Image analysis and the number of leaf spots. The method was adapted to quantify percent diseased leaf area ranging from 0 to 90% for anthracnose of lily-of-the-valley, apple scab, powdery mildew of phlox and rust of goldenrod. In some cases, the brightness and contrast of the images were adjusted and other modifications were made, but these were standardized for each disease. Detached leaves were used with the flatbed scanner, but a method using attached leaves with a digital camera was also developed to make serial measurements of individual leaves to quantify symptom progression. This was successfully applied to monitor anthracnose on N. benthamiana leaves. Digital image analysis using Scion Image software is a useful tool for quantifying a wide variety of fungal interactions with plant leaves.
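
    The measurement itself amounts to counting symptomatic pixels within the leaf area. A rough scripted analogue of the Scion Image workflow is sketched below in Python; the colour thresholds are placeholders that, as the authors note for brightness and contrast, would have to be calibrated for each host and disease.

        # Illustrative percent-diseased-area measurement (not Scion Image):
        # classify leaf pixels as healthy (green-dominant) or symptomatic and
        # report the symptomatic fraction. Thresholds are placeholders.
        import numpy as np
        from PIL import Image

        def percent_diseased(path, green_min=110):
            rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            leaf = (r + g + b) > 60                     # crude background rejection
            healthy = leaf & (g > green_min) & (g > r)  # green tissue
            diseased = leaf & ~healthy                  # chlorotic/necrotic/sporulating
            return 100.0 * diseased.sum() / max(leaf.sum(), 1)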

  16. Colonoscopy tutorial software made with a cadaver's sectioned images.

    PubMed

    Chung, Beom Sun; Chung, Min Suk; Park, Hyung Seon; Shin, Byeong-Seok; Kwon, Koojoo

    2016-11-01

    Novice doctors may watch tutorial videos in training for actual or computed tomographic (CT) colonoscopy. The conventional learning videos can be complemented by virtual colonoscopy software made with a cadaver's sectioned images (SIs). The objective of this study was to assist colonoscopy trainees with the new interactive software. Submucosal segmentation on the SIs was carried out through the whole length of the large intestine. With the SIs and segmented images, a three-dimensional model was reconstructed. Six hundred seventy-one proximal colonoscopic views (conventional views) and corresponding distal colonoscopic views (simulating the retroflexion of a colonoscope) were produced. Navigation views showing the current location of the colonoscope tip and its course were elaborated, along with supplementary description views. The four corresponding views were put into convenient browsing software that can be downloaded free from the homepage (anatomy.co.kr). The SI colonoscopy software, with its realistic images and supportive tools, was made available to anybody. Users could readily notice the position and direction of the virtual colonoscope tip and recognize meaningful structures in colonoscopic views. The software is expected to be an auxiliary learning tool to improve technique and related knowledge in actual and CT colonoscopies. Hopefully, the software will be updated using raw images from the Visible Korean project.

  17. The influence of software filtering in digital mammography image quality

    NASA Astrophysics Data System (ADS)

    Michail, C.; Spyropoulou, V.; Kalyvas, N.; Valais, I.; Dimitropoulos, N.; Fountos, G.; Kandarakis, I.; Panayiotakis, G.

    2009-05-01

    Breast cancer is one of the most frequently diagnosed cancers among women. Several techniques have been developed to help in the early detection of breast cancer, such as conventional and digital x-ray mammography, positron and single-photon emission mammography, etc. A key advantage of digital mammography is that images can be manipulated as simple computer image files. Thus, non-dedicated, commercially available image manipulation software can be employed to process and store the images. The image processing tools of the Photoshop (CS2) software usually incorporate digital filters which may be used to reduce image noise, enhance contrast and increase spatial resolution. However, improving one image quality parameter may result in degradation of another. The aim of this work was to investigate the influence of three sharpening filters, named hereafter sharpen, sharpen more and sharpen edges, on image resolution and noise. Image resolution was assessed by means of the Modulation Transfer Function (MTF). In conclusion, it was found that the correct use of commercial non-dedicated software on digital mammograms may improve some aspects of image quality.
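
    Photoshop's filter kernels are proprietary, but the trade-off the study examines can be reproduced with any generic sharpening convolution: boosting high spatial frequencies raises the measured MTF while also amplifying noise. The sketch below, in Python with SciPy, is such a generic filter and is not the filter used in the paper.

        # Generic 3x3 sharpening convolution (illustrative, not Photoshop's):
        # high spatial frequencies are boosted, improving edge response (MTF)
        # at the cost of amplified noise.
        import numpy as np
        from scipy.ndimage import convolve

        SHARPEN = np.array([[ 0, -1,  0],
                            [-1,  5, -1],
                            [ 0, -1,  0]], dtype=float)

        def sharpen(image):
            return convolve(image.astype(float), SHARPEN, mode="nearest")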

  18. MOPEX: a software package for astronomical image processing and visualization

    NASA Astrophysics Data System (ADS)

    Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley

    2006-06-01

    We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, and point source extraction, as well as a number of minor image processing tasks. The combination of the image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows the control over image processing and display to be closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback on the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though originally designed for the Spitzer Space Telescope mission, many of the functionalities are of general usefulness and can be used for working with existing astronomical data and for new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. The software

  19. Vertical bone measurements from cone beam computed tomography images using different software packages.

    PubMed

    Vasconcelos, Taruska Ventorini; Neves, Frederico Sampaio; Moraes, Lívia Almeida Bueno; Freitas, Deborah Queiroz

    2015-01-01

    This article aimed at comparing the accuracy of the linear measurement tools of different commercial software packages. Eight fully edentulous dry mandibles were selected for this study. Incisor, canine, premolar, first molar and second molar regions were selected. Cone beam computed tomography (CBCT) images were obtained with i-CAT Next Generation. Linear bone measurements were performed by one observer on the cross-sectional images using three different software packages: XoranCat®, OnDemand3D® and KDIS3D®, all able to assess DICOM images. In addition, 25% of the sample was reevaluated for the purpose of reproducibility. The mandibles were sectioned to obtain the gold standard for each region. Intraclass correlation coefficients (ICC) were calculated to examine the agreement between the two periods of evaluation; one-way analysis of variance with the post-hoc Dunnett test was used to compare each of the software-derived measurements with the gold standard. The ICC values were excellent for all software packages. The smallest differences between the software-derived measurements and the gold standard were obtained with OnDemand3D and KDIS3D (-0.11 and -0.14 mm, respectively), and the greatest with XoranCat (+0.25 mm). However, there was no statistically significant difference between the measurements obtained with the different software packages and the gold standard (p > 0.05). In conclusion, linear bone measurements were not influenced by the software package used to reconstruct the image from CBCT DICOM data.

  20. Uses of software in digital image analysis: a forensic report

    NASA Astrophysics Data System (ADS)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. It has wide applications in forensic science, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. In this paper the authors explain these tasks, which are described in three categories: Image Compression, Image Enhancement & Restoration, and Measurement Extraction, with the help of examples such as signature comparison, counterfeit currency comparison and footwear sole impressions using the software Canvas and Corel Draw.

  1. Approach to standardizing MR image intensity scale

    NASA Astrophysics Data System (ADS)

    Nyul, Laszlo G.; Udupa, Jayaram K.

    1999-05-01

    Despite the many advantages of MR images, they lack a standard image intensity scale. MR image intensity ranges and the meaning of intensity values vary even for the same protocol (P) and the same body region (D). This causes many difficulties in image display and analysis. We propose a two-step method for standardizing the intensity scale in such a way that, for the same P and D, similar intensities will have similar meanings. In the first step, the parameters of the standardizing transformation are 'learned' from an image set. In the second step, for each MR study, these parameters are used to map its histogram onto the standardized histogram. The method was tested quantitatively on 90 whole-brain FSE T2, PD and T1 studies of MS patients and qualitatively on several other SE PD, T2 and SPGR studies of the brain and foot. Measurements using mean squared difference showed that the standardized image intensities have a statistically significantly more consistent range and meaning than the originals. Fixed windows can be established for standardized images and used for display without the need for per-case adjustment. Preliminary results also indicate that the method facilitates improving the degree of automation of image segmentation.
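
    A simplified sketch of the two-step scheme is given below: landmark percentiles are 'learned' from a training set and mapped onto a fixed standard scale, after which each new study is transformed by piecewise-linear interpolation between its own landmarks and the learned ones. The decile landmarks and scale limits used here are assumptions for illustration, not the parameters of the published method.

        # Simplified two-step intensity standardization (illustrative).
        import numpy as np

        PCTS = [1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 99]  # landmark percentiles
        SCALE = (1.0, 4095.0)                                # standard scale

        def learn_standard_landmarks(training_volumes):
            """Step 1: average standardized landmark positions over training data."""
            mapped = []
            for vol in training_volumes:
                lm = np.percentile(vol, PCTS)
                s = SCALE[0] + (lm - lm[0]) * (SCALE[1] - SCALE[0]) / (lm[-1] - lm[0])
                mapped.append(s)
            return np.mean(mapped, axis=0)

        def standardize(volume, standard_landmarks):
            """Step 2: piecewise-linear mapping of a new study onto the landmarks."""
            lm = np.percentile(volume, PCTS)
            return np.interp(volume, lm, standard_landmarks)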

  2. Single-molecule localization software applied to photon counting imaging.

    PubMed

    Hirvonen, Liisa M; Kilfeather, Tiffany; Suhling, Klaus

    2015-06-01

    Centroiding in photon counting imaging has traditionally been accomplished by a single-step, noniterative algorithm, often implemented in hardware. Single-molecule localization techniques in superresolution fluorescence microscopy are conceptually similar, but use more sophisticated iterative software-based fitting algorithms to localize the fluorophore. Here, we discuss common features and differences between single-molecule localization and photon counting imaging and investigate the suitability of single-molecule localization software for photon event localization. We find that single-molecule localization software packages designed for superresolution microscopy-QuickPALM, rapidSTORM, and ThunderSTORM-can work well when applied to photon counting imaging with a microchannel-plate-based intensified camera system: photon event recognition can be excellent, fixed pattern noise can be low, and the microchannel plate pores can easily be resolved.
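
    For contrast with the iterative single-molecule fitting packages, the classical single-step centroiding used in photon counting imaging is essentially a centre-of-mass over the pixels of one photon-event splash, as in the short sketch below (generic NumPy, not any of the cited packages).

        # Classical single-step event centroiding (illustrative).
        import numpy as np

        def event_centroid(patch):
            """patch: small 2-D array of intensities around one photon event."""
            p = patch.astype(float) - patch.min()     # crude background removal
            ys, xs = np.indices(p.shape)
            total = p.sum()
            return (ys * p).sum() / total, (xs * p).sum() / total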

  3. Modified control software for imaging ultracold atomic clouds

    SciTech Connect

    Whitaker, D. L.; Sharma, A.; Brown, J. M.

    2006-12-15

    A charge-coupled device (CCD) camera capable of taking high-quality images of ultracold atomic samples can often represent a significant portion of the equipment costs in an atom-trapping experiment. We have modified the commercial control software of a CCD camera designed for astronomical imaging to take absorption images of ultracold rubidium clouds. This camera is sensitive at 780 nm and has been modified to take three successive 16-bit images at full resolution. The control software can be integrated into a Matlab graphical user interface with fitting routines written as Matlab functions. This camera is capable of recording high-quality images at a fraction of the cost of similar cameras typically used in atom-trapping experiments.
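
    In a typical absorption-imaging sequence the three successive frames are an atom image, a probe-only image and a dark frame, which are combined pixel-by-pixel into an optical-depth map. The arithmetic below is a generic sketch of that step; it is not the modified vendor control software or the Matlab fitting routines mentioned above.

        # Generic absorption-imaging optical depth (illustrative).
        import numpy as np

        def optical_depth(atoms, probe, dark, floor=1.0):
            sig = np.clip(atoms.astype(float) - dark, floor, None)
            ref = np.clip(probe.astype(float) - dark, floor, None)
            return -np.log(sig / ref)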

  4. TANGO standard software to control the Nuclotron beam slow extraction

    NASA Astrophysics Data System (ADS)

    Andreev, V. A.; Volkov, V. I.; Gorbachev, E. V.; Isadov, V. A.; Kirichenko, A. E.; Romanov, S. V.; Sedykh, G. S.

    2016-09-01

    TANGO Controls is a basis of the NICA control system. The report describes the software which integrates the Nuclotron beam slow extraction subsystem into the TANGO system of NICA. Objects of control are power supplies for resonance lenses. The software consists of the subsystem device server, remote client and web-module for viewing the subsystem data.

  5. Computer Software Configuration Item-Specific Flight Software Image Transfer Script Generator

    NASA Technical Reports Server (NTRS)

    Bolen, Kenny; Greenlaw, Ronald

    2010-01-01

    A K-shell UNIX script enables the International Space Station (ISS) Flight Control Team (FCT) operators in NASA's Mission Control Center (MCC) in Houston to transfer an entire or partial computer software configuration item (CSCI) from a flight software compact disk (CD) to the onboard Portable Computer System (PCS). The tool is designed to read the content stored on a flight software CD and generate individual CSCI transfer scripts that are capable of transferring the flight software content in a given subdirectory on the CD to the scratch directory on the PCS. The flight control team can then transfer the flight software from the PCS scratch directory to the Electronically Erasable Programmable Read Only Memory (EEPROM) of an ISS Multiplexer/Demultiplexer (MDM) via the Indirect File Transfer capability. The individual CSCI scripts and the CSCI-Specific Flight Software Image Transfer Script Generator (CFITSG), when executed a second time, will remove all components from their original execution. The tool will identify errors in the transfer process and create logs of the transferred software for the purposes of configuration management.

  6. Image processing software for providing radiometric inputs to land surface climatology models

    NASA Technical Reports Server (NTRS)

    Newcomer, Jeffrey A.; Goetz, Scott J.; Strebel, Donald E.; Hall, Forrest G.

    1989-01-01

    During the First International Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE), 80 gigabytes of image data were generated from a variety of satellite and airborne sensors in a multidisciplinary attempt to study energy and mass exchange between the land surface and the atmosphere. To make these data readily available to researchers with a range of image data handling experience and capabilities, unique image-processing software was designed to perform a variety of nonstandard image-processing manipulations and to derive a set of standard-format image products. The nonconventional features of the software include: (1) adding new layers of geographic coordinates, and solar and viewing conditions to existing data; (2) providing image polygon extraction and calibration of data to at-sensor radiances; and, (3) generating standard-format derived image products that can be easily incorporated into radiometric or climatology models. The derived image products consist of easily handled ASCII descriptor files, byte image data files, and additional per-pixel integer data files (e.g., geographic coordinates, and sun and viewing conditions). Details of the solutions to the image-processing problems, the conventions adopted for handling a variety of satellite and aircraft image data, and the applicability of the output products to quantitative modeling are presented. They should be of general interest to future experiment and data-handling design considerations.
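
    One of the calibration steps mentioned, conversion of raw digital numbers to at-sensor radiance, is commonly a linear gain/offset applied per band, as in the short sketch below; the model and coefficients are generic assumptions, not the FIFE calibration values.

        # Generic DN-to-radiance calibration (illustrative; gain and offset are
        # placeholders, supplied per band from the sensor calibration report).
        import numpy as np

        def dn_to_radiance(dn, gain, offset):
            return gain * dn.astype(float) + offset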

  7. Software development for a Ring Imaging Detector

    NASA Astrophysics Data System (ADS)

    Torisky, Benjamin; Benmokhtar, Fatiha

    2015-04-01

    Jefferson Lab (JLab) is performing a large-scale upgrade of their Continuous Electron Beam Accelerator Facility (CEBAF) up to a 12 GeV beam. The Large Acceptance Spectrometer (CLAS12) in Hall B is being upgraded and a new Ring Imaging CHerenkov (RICH) detector is being developed to provide better kaon-pion separation throughout the 3 to 12 GeV range. With this addition, when the electron beam hits the target, the resulting pions, kaons, and other particles will pass through a wall of translucent aerogel tiles and create Cherenkov radiation. This light can then be accurately detected by a large array of Multi-Anode PhotoMultiplier Tubes (MA-PMT). I am presenting my work on the implementation of Java-based reconstruction programs for the RICH in the CLAS12 main analysis package.

  8. Software Development for Ring Imaging Detector

    NASA Astrophysics Data System (ADS)

    Torisky, Benjamin

    2016-03-01

    Jefferson Lab (JLab) is performing a large-scale upgrade of their Continuous Electron Beam Accelerator Facility (CEBAF) up to a 12 GeV beam. The Large Acceptance Spectrometer (CLAS12) in Hall B is being upgraded and a new Ring Imaging Cherenkov (RICH) detector is being developed to provide better kaon-pion separation throughout the 3 to 12 GeV range. With this addition, when the electron beam hits the target, the resulting pions, kaons, and other particles will pass through a wall of translucent aerogel tiles and create Cherenkov radiation. This light can then be accurately detected by a large array of Multi-Anode PhotoMultiplier Tubes (MA-PMT). I am presenting an update on my work on the implementation of Java-based reconstruction programs for the RICH in the CLAS12 main analysis package.

  9. Cost Effectiveness Trade-Offs in Software Support Environment Standardization.

    DTIC Science & Technology

    1986-09-30

    Memorandum OTA-TM-SET-36, April 1986. The paper discusses difficulties of measurement in this field.

  10. Combining speech recognition software with Digital Imaging and Communications in Medicine (DICOM) workstation software on a Microsoft Windows platform.

    PubMed

    Ernst, R; Carpenter, W; Torres, W; Wheeler, S

    2001-06-01

    This presentation describes our experience in combining speech recognition software, clinical review software, and other software products on a single computer. Different processor speeds, random access memory (RAM), and computer costs were evaluated. We found that combining continuous speech recognition software with Digital Imaging and Communications in Medicine (DICOM) workstation software on the same platform is feasible and can lead to substantial savings of hardware cost. This combination optimizes use of limited workspace and can improve radiology workflow.

  11. Non-Imaging Software/Data Analysis Requirements

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The analysis software needs of the non-imaging planetary data user are discussed. Assumptions as to the nature of the planetary science data centers where the data are physically stored are advanced, the scope of the non-imaging data is outlined, and facilities that users are likely to need to define and access data are identified. Data manipulation and analysis needs and display graphics are discussed.

  12. Comparison of ISO 9000 and recent software life cycle standards to nuclear regulatory review guidance

    SciTech Connect

    Preckshot, G.G.; Scott, J.A.

    1998-01-20

    Lawrence Livermore National Laboratory is assisting the Nuclear Regulatory Commission with the assessment of certain quality and software life cycle standards to determine whether additional guidance for the U.S. nuclear regulatory context should be derived from the standards. This report describes the nature of the standards and compares the guidance of the standards to that of the recently updated Standard Review Plan.

  13. Software to model AXAF-I image quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Feng, Chen

    1995-01-01

    A modular user-friendly computer program for the modeling of grazing-incidence type x-ray optical systems has been developed. This comprehensive computer software GRAZTRACE covers the manipulation of input data, ray tracing with reflectivity and surface deformation effects, convolution with x-ray source shape, and x-ray scattering. The program also includes the capabilities for image analysis, detector scan modeling, and graphical presentation of the results. A number of utilities have been developed to interface the predicted Advanced X-ray Astrophysics Facility-Imaging (AXAF-I) mirror structural and thermal distortions with the ray-trace. This software is written in FORTRAN 77 and runs on a SUN/SPARC station. An interactive command mode version and a batch mode version of the software have been developed.

  14. SIMA: Python software for analysis of dynamic fluorescence imaging data.

    PubMed

    Kaifosh, Patrick; Zaremba, Jeffrey D; Danielson, Nathan B; Losonczy, Attila

    2014-01-01

    Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.
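
    The final step described, extraction of signals from the segmented ROIs, amounts to averaging the pixels of each ROI mask in every frame. The sketch below shows that step in plain NumPy; it is offered as an illustration and is not SIMA's own API.

        # ROI trace extraction (illustrative, not the SIMA API).
        import numpy as np

        def extract_roi_traces(movie, roi_masks):
            """movie: (frames, y, x) array; roi_masks: list of boolean (y, x) masks."""
            return np.array([movie[:, m].mean(axis=1) for m in roi_masks])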

  15. Predicting the image nonuniformities on IR devices: a dedicated software

    NASA Astrophysics Data System (ADS)

    Rollin, Joel

    1993-04-01

    This paper deals with illumination assessment for IR devices: dedicated software has been developed to predict image non-uniformities, including the Narcissus effect. Built upon real ray traces, special algorithms have been devised to reduce the computation time and to increase the accuracy. Some tutorial examples are provided.

  16. The Role and Design of Screen Images in Software Documentation.

    ERIC Educational Resources Information Center

    van der Meij, Hans

    2000-01-01

    Discussion of learning a new computer software program focuses on how to support the joint handling of a manual, input devices, and screen display. Describes a study that examined three design styles for manuals that included screen images to reduce split-attention problems and discusses theory versus practice and cognitive load theory.…

  17. Backhoe 3D "gold standard" image

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy; Naidu, Kiranmai D.; Majumder, Uttam; Minardi, Michael A.

    2005-05-01

    ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable, high-confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges[1]. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence that such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization and angle sampling over 2π steradians (upper hemisphere), in order to produce a "literal" image or representation of the target. This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future, more relevant military targets and their image development. The seedling team produced a public-release data set, which was released at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process and the resulting image.

  18. Open Architecture Standard for NASA's Software-Defined Space Telecommunications Radio Systems

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Johnson, Sandra K.; Kacpura, Thomas J.; Hall, Charles S.; Smith, Carl R.; Liebetreu, John

    2008-01-01

    NASA is developing an architecture standard for software-defined radios used in space- and ground-based platforms to enable commonality among radio developments to enhance capability and services while reducing mission and programmatic risk. Transceivers (or transponders) with functionality primarily defined in software (e.g., firmware) have the ability to change their functional behavior through software alone. This radio architecture standard offers value by employing common waveform software interfaces, method of instantiation, operation, and testing among different compliant hardware and software products. These common interfaces within the architecture abstract application software from the underlying hardware to enable technology insertion independently at either the software or hardware layer. This paper presents the initial Space Telecommunications Radio System (STRS) Architecture for NASA missions to provide the desired software abstraction and flexibility while minimizing the resources necessary to support the architecture.

  19. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
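
    The wavelet/scalar quantization idea can be illustrated in a few lines: decompose the image with a 2-D discrete wavelet transform and apply uniform scalar quantization to each subband before entropy coding. The toy sketch below uses PyWavelets with placeholder step sizes; it is not the certified FBI WSQ codec, and the wavelet chosen is an assumption for the example.

        # Toy wavelet/scalar quantization (illustrative, not the WSQ codec).
        import numpy as np
        import pywt

        def quantize_subbands(image, wavelet="bior4.4", levels=3, step=8.0):
            coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
            quantized = [np.round(coeffs[0] / step)]
            for detail in coeffs[1:]:
                quantized.append(tuple(np.round(d / step) for d in detail))
            return quantized          # these indices would then be entropy coded

        def dequantize(quantized, wavelet="bior4.4", step=8.0):
            coeffs = [quantized[0] * step] + [tuple(d * step for d in det)
                                              for det in quantized[1:]]
            return pywt.waverec2(coeffs, wavelet)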

  20. SAO mission support software and data standards, version 1.0

    NASA Technical Reports Server (NTRS)

    Hsieh, P.

    1993-01-01

    This document defines the software developed by the SAO AXAF Mission Support (MS) Program and defines standards for the software development process and control of data products generated by the software. The SAO MS is tasked to develop and use software to perform a variety of functions in support of the AXAF mission. Software is developed by software engineers and scientists, and commercial off-the-shelf (COTS) software is used either directly or customized through the use of scripts to implement analysis procedures. Software controls real-time laboratory instruments, performs data archiving, displays data, and generates model predictions. Much software is used in the analysis of data to generate data products that are required by the AXAF project, for example, on-orbit mirror performance predictions or detailed characterization of the mirror reflection performance with energy.

  1. Stromatoporoid biometrics using image analysis software: A first order approach

    NASA Astrophysics Data System (ADS)

    Wolniewicz, Pawel

    2010-04-01

    Strommetric is a new image analysis computer program that performs morphometric measurements of stromatoporoid sponges. The program measures 15 features of skeletal elements (pillars and laminae) visible in both longitudinal and transverse thin sections. The software is implemented in C++, using the Open Computer Vision (OpenCV) library. The image analysis system distinguishes skeletal elements from sparry calcite using Otsu's method for image thresholding. More than 150 photos of thin sections were used as a test set, from which 36,159 measurements were obtained. The software provided about one hundred times more data than the methods applied until now. The data obtained are reproducible, even if the work is repeated by different workers. Thus the method makes biometric studies of stromatoporoids objective.
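
    The segmentation step named in the abstract, Otsu thresholding with OpenCV, is shown below as a minimal Python/OpenCV sketch; Strommetric itself is implemented in C++, and its morphometric measurement code is not reproduced here.

        # Otsu segmentation of skeletal elements from sparry calcite
        # (illustrative Python/OpenCV analogue of the step described above).
        import cv2

        def segment_skeleton(path):
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            _, mask = cv2.threshold(gray, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            return mask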

  2. MMX-I: data-processing software for multimodal X-ray imaging and tomography

    PubMed Central

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-01-01

    A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline and even on a standard PC. To the authors’ knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments. PMID:27140159

  3. Image Fusion Software in the Clearpem-Sonic Project

    NASA Astrophysics Data System (ADS)

    Pizzichemi, M.; di Vara, N.; Cucciati, G.; Ghezzi, A.; Paganoni, M.; Farina, F.; Frisch, B.; Bugalho, R.

    2012-08-01

    ClearPEM-Sonic is a mammography scanner that combines Positron Emission Tomography with 3D ultrasound echographic and elastographic imaging. It has been developed to improve early stage detection of breast cancer by combining metabolic and anatomical information. The PET system has been developed by the Crystal Clear Collaboration, while the 3D ultrasound probe has been provided by SuperSonic Imagine. In this framework, the visualization and fusion software is an essential tool for the radiologists in the diagnostic process. This contribution discusses the design choices, the issues faced during the implementation, and the commissioning of the software tools developed for ClearPEM-Sonic.

  4. Pattern recognition software and techniques for biological image analysis.

    PubMed

    Shamir, Lior; Delaney, John D; Orlov, Nikita; Eckley, D Mark; Goldberg, Ilya G

    2010-11-24

    The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays.

  5. Pattern Recognition Software and Techniques for Biological Image Analysis

    PubMed Central

    Shamir, Lior; Delaney, John D.; Orlov, Nikita; Eckley, D. Mark; Goldberg, Ilya G.

    2010-01-01

    The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays. PMID:21124870

  6. Development of standard digital images for pneumoconiosis.

    PubMed

    Lee, Won-Jeong; Choi, Byung-Soon; Kim, Sung Jin; Park, Choong-Ki; Park, Jai-Soung; Tae, Seok; Hering, Kurt Georg

    2011-11-01

    We developed standard digital images (SDIs) to be used in the classification and recognition of pneumoconiosis. From July 3, 2006 through August 31, 2007, 531 retired male workers exposed to inorganic dust were examined by digital (DR) and analog radiography (AR) on the same day, after the study was approved by our institutional review board and informed consent was obtained from all participants. All images were classified twice according to the International Labour Office (ILO) 2000 guidelines with reference to ILO standard analog radiographs (SARs) by four chest radiologists. After consensus reading of 349 digital images matched with the first selected analog images, 120 digital images were selected as the SDIs, taking the distribution of pneumoconiosis findings into account. Images with profusion categories 0/1, 1, 2, and 3 numbered 12, 50, 40, and 15, respectively, and a large opacity was present in 43 images (A = 20, B = 22, C = 1). Among pleural abnormalities, costophrenic angle obliteration, pleural plaque and pleural thickening were present in 11 (9.2%), 31 (25.8%), and 9 (7.5%) images, respectively. Twenty-one of 29 symbols were present; cp, ef, ho, id, me, pa, ra, and rp were absent. The set of 120 SDIs contained a wider variety of pneumoconiosis findings than the ILO SARs and was developed using appropriate methods. It can be used as a set of digital reference images for the recognition and classification of pneumoconiosis.

  7. Software Defined Radio Standard Architecture and its Application to NASA Space Missions

    NASA Technical Reports Server (NTRS)

    Andro, Monty; Reinhart, Richard C.

    2006-01-01

    A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture and other aspects is considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass and power consumption and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, industry, academia, and standards bodies are key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and give a brief overview of a candidate architecture under consideration for space-based platforms.

  8. Standardizing PhenoCam Image Processing and Data Products

    NASA Astrophysics Data System (ADS)

    Milliman, T. E.; Richardson, A. D.; Klosterman, S.; Gray, J. M.; Hufkens, K.; Aubrecht, D.; Chen, M.; Friedl, M. A.

    2014-12-01

    The PhenoCam Network (http://phenocam.unh.edu) contains an archive of imagery from digital webcams to be used for scientific studies of phenological processes of vegetation. The image archive continues to grow and currently has over 4.8 million images representing 850 site-years of data. Time series of broadband reflectance (e.g., red, green, blue, infrared bands) and derived vegetation indices (e.g., the green chromatic coordinate, or GCC) are calculated for regions of interest (ROIs) within each image series. These time series form the basis for subsequent analysis, such as spring and autumn transition date extraction (using curvature analysis techniques) and modeling the climate-phenology relationship. Processing is relatively straightforward but time consuming, with some sites having more than 100,000 images available. While the PhenoCam Network distributes the original image data, it is our goal to provide higher-level vegetation phenology products, generated in a standardized way, to encourage use of the data without the need to download and analyze individual images. We describe here the details of the standard image processing procedures, and also provide a description of the products that will be available for download. Products currently in development include an "all-image" file, which contains a statistical summary of the red, green and blue bands over the pixels in predefined ROIs for each image from a site. This product is used to generate 1-day and 3-day temporal aggregates with 90th percentile values of GCC for the specified time period, with standard image selection/filtering criteria applied. Sample software (in Python, R, and MATLAB) that can be used to read in and plot these products will also be described.
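
    The core derived quantity, the green chromatic coordinate, and its 3-day 90th-percentile aggregate can be expressed compactly. The sketch below follows the published definition (GCC = G / (R + G + B), averaged over an ROI) but is an illustration in Python/pandas, not the PhenoCam processing code.

        # GCC per image and a 3-day 90th-percentile aggregate (illustrative).
        import numpy as np
        import pandas as pd
        from PIL import Image

        def image_gcc(path, roi_mask):
            rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
            r, g, b = (rgb[..., i][roi_mask] for i in range(3))
            return float(np.mean(g / (r + g + b)))

        def three_day_gcc90(timestamps, gcc_values):
            series = pd.Series(gcc_values, index=pd.DatetimeIndex(timestamps))
            return series.resample("3D").quantile(0.9)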

  9. Open source tools for standardized privacy protection of medical images

    NASA Astrophysics Data System (ADS)

    Lien, Chung-Yueh; Onken, Michael; Eichelberg, Marco; Kao, Tsair; Hein, Andreas

    2011-03-01

    In addition to the primary care context, medical images are often useful for research projects and community healthcare networks, so-called "secondary use". Patient privacy becomes an issue in such scenarios since the disclosure of personal health information (PHI) has to be prevented in a sharing environment. In general, most PHIs should be completely removed from the images according to the respective privacy regulations, but some basic and alleviated data is usually required for accurate image interpretation. Our objective is to utilize and enhance these specifications in order to provide reliable software implementations for de- and re-identification of medical images suitable for online and offline delivery. DICOM (Digital Imaging and Communications in Medicine) images are de-identified by replacing PHI-specific information with values still being reasonable for imaging diagnosis and patient indexing. In this paper, this approach is evaluated based on a prototype implementation built on top of the open source framework DCMTK (DICOM Toolkit) utilizing standardized de- and re-identification mechanisms. A set of tools has been developed for DICOM de-identification that meets privacy requirements of an offline and online sharing environment and fully relies on standard-based methods.
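
    The prototype described above is built on the C++ DCMTK toolkit; a comparable illustration of the basic de-identification step in Python, using pydicom, is sketched below. The handful of attributes replaced here is a small illustrative subset chosen for the example, not the full standard de-identification profile implemented by the authors.

        # Basic DICOM de-identification sketch (pydicom; illustrative subset
        # of attributes, not the complete confidentiality profile).
        import pydicom

        PHI_KEYWORDS = ["PatientName", "PatientBirthDate", "PatientAddress",
                        "ReferringPhysicianName", "InstitutionName"]

        def deidentify(path_in, path_out, pseudonym="ANON"):
            ds = pydicom.dcmread(path_in)
            ds.PatientID = pseudonym          # keep a consistent research index
            for kw in PHI_KEYWORDS:
                if kw in ds:
                    ds.data_element(kw).value = ""
            ds.remove_private_tags()
            ds.save_as(path_out)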

  10. SIVIC: Open-Source, Standards-Based Software for DICOM MR Spectroscopy Workflows

    PubMed Central

    Crane, Jason C.; Olson, Marram P.; Nelson, Sarah J.

    2013-01-01

    Quantitative analysis of magnetic resonance spectroscopic imaging (MRSI) data provides maps of metabolic parameters that show promise for improving medical diagnosis and therapeutic monitoring. While anatomical images are routinely reconstructed on the scanner, formatted using the DICOM standard, and interpreted using PACS workstations, this is not the case for MRSI data. The evaluation of MRSI data is made more complex because files are typically encoded with vendor-specific file formats and there is a lack of standardized tools for reconstruction, processing, and visualization. SIVIC is a flexible open-source software framework and application suite that enables a complete scanner-to-PACS workflow for evaluation and interpretation of MRSI data. It supports conversion of vendor-specific formats into the DICOM MR spectroscopy (MRS) standard, provides modular and extensible reconstruction and analysis pipelines, and provides tools to support the unique visualization requirements associated with such data. Workflows are presented which demonstrate the routine use of SIVIC to support the acquisition, analysis, and delivery to PACS of clinical 1H MRSI datasets at UCSF. PMID:23970895

  11. Current and future trends in marine image annotation software

    NASA Astrophysics Data System (ADS)

    Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.

    2016-12-01

    Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation - the process of transposing objects or events represented in a video or still image to the semantic level - may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) has enabled over 500 publications to date. We review the functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input consist essentially of a graphical user interface, with a video player or image browser that recognizes a specific time code or image code, allowing users to log events in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software by their capability of integrating data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating after acquisition, and interacting with a database. These range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow users to input and display data from multiple sensors or multiple annotators via intranet or internet. Posterior human-mediated annotation often includes tools for data display and image analysis, e.g. length, area, image segmentation, point count, and in a few cases the possibility of browsing and editing previous dive logs or analyzing the annotations. The interaction with a database allows the automatic integration of annotations from different surveys, repeated annotation and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post processing, for stable platforms or still images

  12. The application of image processing software: Photoshop in environmental design

    NASA Astrophysics Data System (ADS)

    Dong, Baohua; Zhang, Chunmi; Zhuo, Chen

    2011-02-01

    In the process of environmental design and creation, the design sketch holds a very important position in that it not only illuminates the design's idea and concept but also shows the design's visual effects to the client. In the field of environmental design, computer aided design has made significant improvements. Many types of specialized design software for rendering environmental drawings and for artistic post-processing have been implemented. Additionally, with the use of this software, working efficiency has greatly increased and drawings have become more specific and more specialized. By analyzing the application of Photoshop image processing software in environmental design and comparing and contrasting traditional hand drawing with drawing using modern technology, this essay will further explore the way for computer technology to play a bigger role in environmental design.

  13. A standard interface between simulation programs and systems analysis software.

    PubMed

    Reichert, P

    2006-01-01

    A simple interface between simulation programs and systems analytical software is proposed. This interface is designed to facilitate linkage of environmental simulation programs with systems analytical software and thus can contribute to remedying the deficiency in applying systems analytical techniques to environmental modelling studies. The proposed concept, consisting of a text file interface combined with a batch mode simulation program call, is independent of model structure, operating system and programming language. It is open for implementation by academic and commercial simulation and systems analytical software developers and is very simple to implement. Its practicability is demonstrated by implementations for three environmental simulation packages (AQUASIM, SWAT and LEACHM) and two systems analytical program packages (UNCSIM, SUFI). The properties listed above and the demonstration of the ease of implementation of the approach are prerequisites for the stimulation of a widespread implementation of the proposed interface that would be beneficial for the dissemination of systems analytical techniques in the environmental and engineering sciences. Furthermore, such a development could stimulate the transfer of systems analytical techniques between different fields of application.
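
    A minimal sketch of the proposed text-file-plus-batch-call pattern, assuming a hypothetical simulator executable ("mysim"), parameter file layout, and result file format; none of these names come from the paper.

```python
# Hedged sketch of a text-file interface combined with a batch-mode simulator call.
import subprocess

def run_simulation(params, exe="./mysim"):
    with open("params.txt", "w") as f:                  # 1. write parameters as plain text
        for name, value in params.items():
            f.write(f"{name} {value}\n")
    subprocess.run([exe, "params.txt"], check=True)     # 2. batch-mode call, no user interaction
    with open("results.txt") as f:                      # 3. read model outputs back as text
        return [float(line.split()[-1]) for line in f if line.strip()]
```

    Because the systems-analysis side only writes parameter files and reads result files, such a wrapper stays independent of the simulator's internal model structure, operating system and programming language, which is the point the abstract makes.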

  14. Content standards for medical image metadata

    NASA Astrophysics Data System (ADS)

    d'Ornellas, Marcos C.; da Rocha, Rafael P.

    2003-12-01

    Medical images are at the heart of healthcare diagnostic procedures. They have provided not only a noninvasive means to view anatomical cross-sections of internal organs but also a means for physicians to evaluate the patient's diagnosis and monitor the effects of treatment. For a medical center, the emphasis may shift from image generation to post-processing and data management, since the medical staff may generate even more processed images and other data from the original image after various analyses and post-processing. A medical image data repository for health care information systems is becoming a critical need. This data repository would contain comprehensive patient records, including information such as clinical data, related diagnostic images, and post-processed images. Due to the large volume and complexity of the data as well as the diversified user access requirements, the implementation of the medical image archive system will be a complex and challenging task. This paper discusses content standards for medical image metadata. In addition, it also focuses on the evaluation of image metadata content and on metadata quality management.

  15. Determining Angle of Humeral Torsion Using Image Software Technique

    PubMed Central

    Sethi, Madhu; Vasudeva, Neelam

    2016-01-01

    Introduction Several studies have been carried out on the measurement of the angle of humeral torsion in different parts of the world. Previously described methods were complicated, not very accurate, cumbersome or required sophisticated instruments. Aim The present study was conducted with the aim of determining the angle of humeral torsion with a newer, simple technique using digital images and image tool software. Materials and Methods A total of 250 dry normal adult human humeri were obtained from the bone bank of the Department of Anatomy. The length and mid-shaft circumference of each bone were measured with the help of a measuring tape. The angle of humeral torsion was measured directly from the digital images by image analysis using the Image Tool 3.0 software program. The data were analysed statistically with SPSS version 17 using the unpaired t-test and Spearman's rank order correlation coefficient. Results The mean angle of torsion was 64.57°±7.56°. On the right side it was 66.84°±9.69°, whereas on the left side it was found to be 63.31°±9.50°. The mean humeral length was 31.6 cm on the right side and 30.33 cm on the left side. Mid-shaft circumference was 5.79 cm on the right side and 5.63 cm on the left side. No statistical differences were seen in the angles between right and left humeri (p>0.001). Conclusion From our study, it was concluded that the circumference of the shaft is inversely proportional to the angle of humeral torsion. The length and side of the humerus have no relation with humeral torsion. With the advancement of digital technology, it is better to use new image software for anatomical studies. PMID:27891326

  16. Software for visualization, analysis, and manipulation of laser scan images

    NASA Astrophysics Data System (ADS)

    Burnsides, Dennis B.

    1997-03-01

    The recent introduction of laser surface scanning to scientific applications presents a challenge to computer scientists and engineers. Full utilization of this two-dimensional (2-D) and three-dimensional (3-D) data requires advances in techniques and methods for data processing and visualization. This paper explores the development of software to support the visualization, analysis and manipulation of laser scan images. Specific examples presented are from on-going efforts at the Air Force Computerized Anthropometric Research and Design (CARD) Laboratory.

  17. Sungrabber - Software for Measurements on Solar Synoptic Images

    NASA Astrophysics Data System (ADS)

    Hržina, D.; Roša, D.; Hanslmeier, A.; Ruždjak, V.; Brajša, R.

    Measurement of the positions of tracers on synoptic solar images and conversion to heliographic coordinates is a time-consuming procedure with different sources of errors. To make measurements faster and easier, the application "Sungrabber" was developed. The measured heliographic coordinates are stored in text files which are linked to the related solar images, which also allows a fast and simple comparison of measurements from different sources. Extension of the software is possible, and therefore Sungrabber can be used for different purposes (e.g. determining the solar rotation rate, proper motions of tracers on the Sun, etc.).

  18. Software phantom for the synthesis of equilibrium radionuclide ventriculography images.

    PubMed

    Ruiz-de-Jesus, Oscar; Yanez-Suarez, Oscar; Jimenez-Angeles, Luis; Vallejo-Venegas, Enrique

    2006-01-01

    This paper presents the novel design of a software phantom for the evaluation of equilibrium radionuclide ventriculography systems. Through singular value decomposition, the data matrix corresponding to an equilibrium image series is decomposed into both spatial and temporal fundamental components that can be parametrized. This parametric model allows for the application of user-controlled conditions related to a desired dynamic behavior. Being invertible, the decomposition is used to regenerate the radionuclide image series, which is then translated into a DICOM ventriculography file that can be read by commercial equipment.
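
    A rough numpy sketch of the SVD idea described above: decompose a gated image series into temporal and spatial components, adjust them under user control, and regenerate the series. The specific parametrization (scaling the non-leading singular values) is an assumption for illustration only, not the authors' model.

```python
# Illustrative SVD decompose/modify/regenerate sketch for a gated image series.
import numpy as np

def perturb_series(frames, gain=1.2, rank=None):
    """frames: array of shape (n_frames, ny, nx). Returns a regenerated series."""
    n, ny, nx = frames.shape
    X = frames.reshape(n, ny * nx).astype(float)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)   # temporal (U) and spatial (Vt) components
    if rank is not None:                               # optional truncation of the model
        U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    s = s.copy()
    s[1:] *= gain                                      # user-controlled change of dynamic content
    return (U @ np.diag(s) @ Vt).reshape(-1, ny, nx)   # invert the decomposition
```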

  19. Parallel-Processing Software for Creating Mosaic Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric

    2008-01-01

    A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
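
    A conceptual Python sketch of the slice-per-CPU strategy described above, using the standard multiprocessing module; the warping step is a placeholder rather than the program's actual camera model and pixel-correlation logic.

```python
# Conceptual sketch: divide the mosaic into row slices, render them in parallel, gather.
from multiprocessing import Pool
import numpy as np

def render_slice(args):
    row_start, row_end, width = args
    # Placeholder: a real implementation would warp source images into these mosaic rows.
    return row_start, np.zeros((row_end - row_start, width))

def build_mosaic(height, width, n_workers=4):
    rows = np.array_split(np.arange(height), n_workers)
    tasks = [(int(r[0]), int(r[-1]) + 1, width) for r in rows if r.size]
    mosaic = np.zeros((height, width))
    with Pool(n_workers) as pool:
        for row_start, block in pool.map(render_slice, tasks):
            mosaic[row_start:row_start + block.shape[0]] = block  # gather slices into the mosaic
    return mosaic
```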

  20. Web-based interactive 2D/3D medical image processing and visualization software.

    PubMed

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

    There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a pure web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To compete with extendibility of the current local medical image processing software, each layer is highly independent of other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web-user-interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources in the client side. The user-interface is designed such that the users can select appropriate parameters for practical research and clinical studies.

  1. [Development of a software standardizing optical density with operation settings related to several limitations].

    PubMed

    Tu, Xiao-Ming; Zhang, Zuo-Heng; Wan, Cheng; Zheng, Yu; Xu, Jin-Mei; Zhang, Yuan-Yuan; Luo, Jian-Ping; Wu, Hai-Wei

    2012-12-01

    To develop software for standardizing optical density that normalizes the procedures and results of standardization, in order to effectively solve several problems generated during the standardization of indirect ELISA results. The software was designed based on the I-STOD method, with operation settings to solve the problems that one might encounter during standardization. Matlab GUI was used as the development tool. The software was tested with the results of the detection of sera of persons from schistosomiasis japonica endemic areas. I-STOD V1.0 (WINDOWS XP/WIN 7, 0.5 GB) was successfully developed to standardize optical density. A series of serum samples from schistosomiasis japonica endemic areas was used to examine the operational effects of the I-STOD V1.0 software. The results indicated that the software successfully overcame several problems, including the reliability of the standard curve, the applicable scope of samples and the determination of dilution for samples outside that scope, so that I-STOD was performed more conveniently and the results of standardization were more consistent. I-STOD V1.0 is professional software based on the I-STOD method. It can be easily operated and can effectively standardize the testing results of indirect ELISA.

  2. Development of Automatic Testing Tool for `Design & Coding Standard' for Railway Signaling Software

    NASA Astrophysics Data System (ADS)

    Hwang, Jong-gyu; Jo, Hyun-jeong

    2009-08-01

    In accordance with the development of recent computer technology, the dependency of railway signaling systems on computer software continues to increase, and accordingly, testing for the safety and reliability of railway signaling system software has become more important. This paper proposes an automated testing tool for coding rules for railway signaling system software and presents the results of its implementation. The testing items in the implemented tool refer to the international standards relating to software for railway systems and to the MISRA-C standard. This automated testing tool for railway signaling systems can be utilized at the assessment stage for railway signaling system software, and it is anticipated that it can also be used effectively at the software development stage.

  3. The quest for standards in medical imaging.

    PubMed

    Gibaud, Bernard

    2011-05-01

    This article focuses on standards supporting interoperability and system integration in the medical imaging domain. We introduce the basic concepts and actors and we review the most salient achievements in this domain, especially with the DICOM standard, and the definition of IHE integration profiles. We analyze and discuss what was successful, and what could still be more widely adopted by industry. We then sketch out a perspective of what should be done next, based on our vision of new requirements for the next decade. In particular, we discuss the challenges of a more explicit sharing of image and image processing semantics, and we discuss the help that semantic web technologies (and especially ontologies) may bring to achieving this goal.

  4. Planning the Unplanned Experiment: Assessing the Efficacy of Standards for Safety Critical Software

    NASA Technical Reports Server (NTRS)

    Graydon, Patrick J.; Holloway, C. Michael

    2015-01-01

    We need well-founded means of determining whether software is fit for use in safety-critical applications. While software in industries such as aviation has an excellent safety record, the fact that software flaws have contributed to deaths illustrates the need for justifiably high confidence in software. It is often argued that software is fit for safety-critical use because it conforms to a standard for software in safety-critical systems. But little is known about whether such standards 'work.' Reliance upon a standard without knowing whether it works is an experiment; without collecting data to assess the standard, this experiment is unplanned. This paper reports on a workshop intended to explore how standards could practicably be assessed. Planning the Unplanned Experiment: Assessing the Efficacy of Standards for Safety Critical Software (AESSCS) was held on 13 May 2014 in conjunction with the European Dependable Computing Conference (EDCC). We summarize and elaborate on the workshop's discussion of the topic, including both the presented positions and the dialogue that ensued.

  5. The Effective Use of System and Software Architecture Standards for Software Technology Readiness Assessments

    DTIC Science & Technology

    2011-05-01

    [Extraction residue from the briefing charts; only the outline topics are recoverable: acknowledgements, motivation, an overview of Technology Readiness Assessments, tutorial scope, risks of software critical technology element (CTE) identification, the Department of Defense Architecture Framework Version 2.0, and why the Work Breakdown Structure is inadequate for CTE identification.]

  6. Woods Hole Image Processing System Software implementation; using NetCDF as a software interface for image processing

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    The Branch of Atlantic Marine Geology has been involved in the collection, processing and digital mosaicking of high-, medium- and low-resolution side-scan sonar data during the past 6 years. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. With the need to process side-scan data in the field, and with the increased power and reduced cost of major workstations, a need was identified for an image processing package on a UNIX-based computer system that could be utilized in the field as well as be more generally available to Branch personnel. This report describes the initial development of that package, referred to as the Woods Hole Image Processing System (WHIPS). The software was developed using the Unidata NetCDF software interface to allow data to be more readily portable between different computer operating systems.

  7. Software tools of the Computis European project to process mass spectrometry images.

    PubMed

    Robbe, Marie-France; Both, Jean-Pierre; Prideaux, Brendan; Klinkert, Ivo; Picaud, Vincent; Schramm, Thorsten; Hester, Alfons; Guevara, Victor; Stoeckli, Markus; Roempp, Andreas; Heeren, Ron M A; Spengler, Bernhard; Gala, Olivier; Haan, Serge

    2014-01-01

    Among the needs usually expressed by teams using mass spectrometry imaging, one that often arises is that for user-friendly software able to manage huge data volumes quickly and to provide efficient assistance for the interpretation of data. To answer this need, the Computis European project developed several complementary software tools to process mass spectrometry imaging data. Data Cube Explorer provides simple spatial and spectral exploration for matrix-assisted laser desorption/ionisation-time of flight (MALDI-ToF) and time of flight-secondary-ion mass spectrometry (ToF-SIMS) data. SpectViewer offers visualisation functions, assistance with the interpretation of data, classification functionalities, peak list extraction to interrogate biological databases, and image overlay, and it can process data issued from MALDI-ToF, ToF-SIMS and desorption electrospray ionisation (DESI) equipment. EasyReg2D is able to register two images, in American Standard Code for Information Interchange (ASCII) format, issued from different technologies. The collaboration between the teams was hampered by the multiplicity of equipment and data formats, so the project also developed a common data format (imzML) to facilitate the exchange of experimental data and their interpretation by the different software tools. The BioMap platform for visualisation and exploration of MALDI-ToF and DESI images was adapted to parse imzML files, enabling its access to all project partners and, more globally, to a larger community of users. Considering the huge advantages brought by the imzML standard format, a specific editor (vBrowser) for imzML files and converters from proprietary formats to imzML were developed to enable the use of the imzML format by a broad scientific community. This initiative paves the way toward the development of a large panel of software tools able to process mass spectrometry imaging datasets in the future.
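
    As an illustration of what the imzML interchange format enables, the sketch below reads an imzML file and builds a single-ion image using the third-party pyimzml package, which is an assumption here and not one of the Computis tools; the file path and m/z window are placeholders.

```python
# Hedged sketch: read imzML spectra and accumulate a single-ion image.
import numpy as np
from pyimzml.ImzMLParser import ImzMLParser

def ion_image(imzml_path, target_mz, tol=0.1):
    parser = ImzMLParser(imzml_path)
    xs = [c[0] for c in parser.coordinates]
    ys = [c[1] for c in parser.coordinates]
    image = np.zeros((max(ys), max(xs)))
    for idx, (x, y, *_rest) in enumerate(parser.coordinates):
        mzs, intensities = parser.getspectrum(idx)
        mzs, intensities = np.asarray(mzs), np.asarray(intensities)
        window = np.abs(mzs - target_mz) <= tol
        image[y - 1, x - 1] = intensities[window].sum()   # imzML pixel coordinates are 1-based
    return image
```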

  8. Development of Software to Model AXAF-I Image Quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Hawkins, Lamar

    1996-01-01

    This draft final report describes the work performed under delivery order number 145 from May 1995 through August 1996. The scope of work included a number of software development tasks for the performance modeling of AXAF-I. A number of new capabilities and functions have been added to the GT software, which is the command mode version of the GRAZTRACE software originally developed by MSFC. A structural data interface has been developed for the EAL (old SPAR) finite element analysis (FEA) program, which is being used by the MSFC Structural Analysis group for the analysis of AXAF-I. This interface utility can read the structural deformation file from EAL and other finite element analysis programs such as NASTRAN and COSMOS/M, and convert the data to a suitable format that can be used for deformation ray-tracing to predict the image quality for a distorted mirror. There is a provision in this utility to expand the data from finite element models assuming 180-degree symmetry. This utility has been used to predict image characteristics for the AXAF-I HRMA when subjected to gravity effects in the horizontal x-ray ground test configuration. The development of the metrology data processing interface software has also been completed. It can read the HDOS FITS format surface map files, manipulate and filter the metrology data, and produce a deformation file which can be used by GT for ray tracing of the mirror surface figure errors. This utility has been used to determine the optimum alignment (axial spacing and clocking) for the four pairs of AXAF-I mirrors. Based on this optimized alignment, the geometric images and effective focal lengths for the as-built mirrors were predicted to cross-check the results obtained by Kodak.

  9. Demineralization Depth Using QLF and a Novel Image Processing Software

    PubMed Central

    Wu, Jun; Donly, Zachary R.; Donly, Kevin J.; Hackmyer, Steven

    2010-01-01

    Quantitative Light-Induced fluorescence (QLF) has been widely used to detect tooth demineralization indicated by fluorescence loss with respect to surrounding sound enamel. The correlation between fluorescence loss and demineralization depth is not fully understood. The purpose of this project was to study this correlation to estimate demineralization depth. Extracted teeth were collected. Artificial caries-like lesions were created and imaged with QLF. Novel image processing software was developed to measure the largest percent of fluorescence loss in the region of interest. All teeth were then sectioned and imaged by polarized light microscopy. The largest depth of demineralization was measured by NIH ImageJ software. The statistical linear regression method was applied to analyze these data. The linear regression model was Y = 0.32X + 0.17, where X was the percent loss of fluorescence and Y was the depth of demineralization. The correlation coefficient was 0.9696. The two-tailed t-test for coefficient was 7.93, indicating the P-value = .0014. The F test for the entire model was 62.86, which shows the P-value = .0013. The results indicated statistically significant linear correlation between the percent loss of fluorescence and depth of the enamel demineralization. PMID:20445755
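
    A small numpy sketch of the reported relationship: fitting a line to (fluorescence loss, depth) pairs and applying the published model Y = 0.32X + 0.17, where X is the percent loss of fluorescence and Y is the demineralization depth; the input arrays are hypothetical.

```python
# Sketch of the linear-regression step and the published prediction model.
import numpy as np

def fit_depth_model(fluorescence_loss, measured_depth):
    """Fit depth = slope * fluorescence_loss + intercept to paired measurements."""
    slope, intercept = np.polyfit(fluorescence_loss, measured_depth, 1)
    return slope, intercept

def predict_depth(percent_fluorescence_loss, slope=0.32, intercept=0.17):
    """Apply the reported model Y = 0.32X + 0.17 to new fluorescence-loss values."""
    return slope * np.asarray(percent_fluorescence_loss) + intercept
```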

  10. Software Standards and Procedures Manual for the JNGG Graphics Program

    DTIC Science & Technology

    1990-12-01

    [Extraction residue from the report text; only fragments are recoverable: a discussion of conditional compilation controlled by preprocessor commands (e.g., #if, #else, #ifdef, #ifndef, #elif), plus glossary entries for the defunct WWMCCS Standard Graphic Terminal (Aydin 5807 processor) and the WAM Workstation, previously known as the WIS Workstation.]

  11. Software architecture standard for simulation virtual machine, version 2.0

    NASA Technical Reports Server (NTRS)

    Sturtevant, Robert; Wessale, William

    1994-01-01

    The Simulation Virtual Machine (SVM) is an Ada architecture which eases the effort involved in real-time software maintenance and sustaining engineering. The Software Architecture Standard defines the infrastructure from which all the simulation models are built. SVM was developed for and used in the Space Station Verification and Training Facility.

  12. Infusion of CCSDS Flight Dynamics Standards in the NASA AMMOS Ground System Software

    NASA Technical Reports Server (NTRS)

    Berry, David S.

    2015-01-01

    This paper discusses how the CCSDS flight dynamics standards have been implemented in the AMMOS/MDN software, and how they facilitate the provision of a multimission, multiagency operations environment.

  13. Megapixel ion imaging with standard video

    SciTech Connect

    Li Wen; Chambreau, Steven D.; Lahankar, Sridhar A.; Suits, Arthur G.

    2005-06-15

    We present an ion imaging approach employing a real-time ion counting method with standard video. This method employs a center-of-mass calculation of each ion spot (more than 3x3 pixels spread) prior to integration. The results of this algorithm are subpixel precision position data of the corresponding ion spots. These addresses are then converted to the final image with user selected resolution, which can be up to ten times higher than the standard video camera resolution (640x480). This method removes the limiting factor imposed by the resolution of standard video cameras and does so at very low cost. The technique is used in conjunction with dc slice imaging, replacing the local maximum searching algorithm developed by Houston and co-workers [B. Y. Chang, R. C. Hoetzlein, J. A. Mueller, J. D. Geiser, and P. L. Houston, Rev. Sci. Instrum. 69, 1665 (1998)]. The performance is demonstrated using HBr and DBr photodissociation at 193 nm with 3+1 resonance enhanced multiphoton ionization detection of hydrogen and deuterium atom products. The measured velocity resolution for DBr dissociation is 0.50% ({delta}v/v), mainly limited in this case by the bandwidth of the photolysis laser. Issues affecting slice imaging resolution and performance are also discussed.
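
    A hedged Python sketch of the centroiding scheme described above: threshold a video frame, keep ion spots spread over at least 3x3 pixels, compute sub-pixel centers of mass, and accumulate them into a higher-resolution image. The threshold, size cut and scaling factor are illustrative, not the authors' parameters.

```python
# Illustrative per-spot centroiding and event accumulation for real-time ion counting.
import numpy as np
from scipy import ndimage

def centroid_ions(frame, threshold, min_pixels=9):
    mask = frame > threshold
    labels, n = ndimage.label(mask)                          # connected ion spots
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.where(sizes >= min_pixels)[0] + 1              # spots spread over >= 3x3 pixels
    return ndimage.center_of_mass(frame, labels, keep)       # sub-pixel (row, col) centroids

def accumulate(centroids, shape, upscale=10):
    """Histogram centroids into an image at user-selected (finer) resolution."""
    image = np.zeros((shape[0] * upscale, shape[1] * upscale), dtype=np.uint32)
    for r, c in centroids:
        image[int(r * upscale), int(c * upscale)] += 1
    return image
```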

  14. High prostate cancer gene 3 (PCA3) scores are associated with elevated Prostate Imaging Reporting and Data System (PI-RADS) grade and biopsy Gleason score, at magnetic resonance imaging/ultrasonography fusion software-based targeted prostate biopsy after a previous negative standard biopsy.

    PubMed

    De Luca, Stefano; Passera, Roberto; Cattaneo, Giovanni; Manfredi, Matteo; Mele, Fabrizio; Fiori, Cristian; Bollito, Enrico; Cirillo, Stefano; Porpiglia, Francesco

    2016-11-01

    To determine the association among prostate cancer gene 3 (PCA3) score, Prostate Imaging Reporting and Data System (PI-RADS) grade and Gleason score, in a cohort of patients with elevated prostate-specific antigen (PSA), undergoing magnetic resonance imaging/ultrasonography fusion software-based targeted prostate biopsy (TBx) after a previous negative randomised 'standard' biopsy (SBx). In all, 282 patients who underwent TBx after previous negative SBx and a PCA3 urine assay, were enrolled. The associations between PCA3 score/PI-RADS and PCA3 score/Gleason score were investigated by K-means clustering, a receiver operating characteristic analysis and binary logistic regression. The PCA3 score difference for the negative vs positive TBx cohorts was highly statistically significant. A 1-unit increase in the PCA3 score was associated to a 2.4% increased risk of having a positive TBx result. A PCA3 score of >80 and a PI-RADS grade of ≥4 were independent predictors of a positive TBx. The association between the PCA3 score and PI-RADS grade was statistically significant (the median PCA3 score for PI-RADS grade groups 3, 4, and 5 was 58, 104, and 146, respectively; P = 0.006). A similar pattern was detected for the relationship between the PCA3 score and Gleason score; an increasing PCA3 score was associated with a worsening Gleason score (median PCA3 score equal to 62, 105, 132, 153, 203, and 322 for Gleason Score 3+4, 4+3, 4+4, 4+5, 5+4, and 5+5, respectively; P < 0.001). TBx improved PCA3 score diagnostic and prognostic performance for prostate cancer. The PCA3 score was directly associated both with biopsy Gleason score and PI-RADS grade: notably, in the 'indeterminate' PI-RADS grade 3 subgroup. © 2016 The Authors BJU International © 2016 BJU International Published by John Wiley & Sons Ltd.

  15. Software and Algorithms for Biomedical Image Data Processing and Visualization

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Lambert, James; Lam, Raymond

    2004-01-01

    A new software equipped with novel image processing algorithms and graphical-user-interface (GUI) tools has been designed for automated analysis and processing of large amounts of biomedical image data. The software, called PlaqTrak, has been specifically used for analysis of plaque on teeth of patients. New algorithms have been developed and implemented to segment teeth of interest from surrounding gum, and a real-time image-based morphing procedure is used to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The PlaqTrak system integrates these components into a single software suite with an easy-to-use GUI (see Figure 1) that allows users to do an end-to-end run of a patient s record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image. The automated and accurate processing of the captured images to segment each tooth [see Figure 2(a)] and then detect plaque on a tooth-by-tooth basis is a critical component of the PlaqTrak system to do clinical trials and analysis with minimal human intervention. These features offer distinct advantages over other competing systems that analyze groups of teeth or synthetic teeth. PlaqTrak divides each segmented tooth into eight regions using an advanced graphics morphing procedure [see results on a chipped tooth in Figure 2(b)], and a pattern recognition classifier is then used to locate plaque [red regions in Figure 2(d)] and enamel regions. The morphing allows analysis within regions of teeth, thereby facilitating detailed statistical analysis such as the amount of plaque present on the biting surfaces on teeth. This software system is applicable to a host of biomedical applications, such as cell analysis and life detection, or robotic applications, such

  16. Planning the Unplanned Experiment: Towards Assessing the Efficacy of Standards for Safety-Critical Software

    NASA Technical Reports Server (NTRS)

    Graydon, Patrick J.; Holloway, C. M.

    2015-01-01

    Safe use of software in safety-critical applications requires well-founded means of determining whether software is fit for such use. While software in industries such as aviation has a good safety record, little is known about whether standards for software in safety-critical applications 'work' (or even what that means). It is often (implicitly) argued that software is fit for safety-critical use because it conforms to an appropriate standard. Without knowing whether a standard works, such reliance is an experiment; without carefully collecting assessment data, that experiment is unplanned. To help plan the experiment, we organized a workshop to develop practical ideas for assessing software safety standards. In this paper, we relate and elaborate on the workshop discussion, which revealed subtle but important study design considerations and practical barriers to collecting appropriate historical data and recruiting appropriate experimental subjects. We discuss assessing standards as written and as applied, several candidate definitions for what it means for a standard to 'work,' and key assessment strategies and study techniques and the pros and cons of each. Finally, we conclude with thoughts about the kinds of research that will be required and how academia, industry, and regulators might collaborate to overcome the noted barriers.

  17. Standardized food images: A photographing protocol and image database.

    PubMed

    Charbonnier, Lisette; van Meer, Floor; van der Laan, Laura N; Viergever, Max A; Smeets, Paul A M

    2016-01-01

    The regulation of food intake has gained much research interest because of the current obesity epidemic. For research purposes, food images are a good and convenient alternative for real food because many dietary decisions are made based on the sight of foods. Food pictures are assumed to elicit anticipatory responses similar to real foods because of learned associations between visual food characteristics and post-ingestive consequences. In contemporary food science, a wide variety of images are used which introduces between-study variability and hampers comparison and meta-analysis of results. Therefore, we created an easy-to-use photographing protocol which enables researchers to generate high resolution food images appropriate for their study objective and population. In addition, we provide a high quality standardized picture set which was characterized in seven European countries. With the use of this photographing protocol a large number of food images were created. Of these images, 80 were selected based on their recognizability in Scotland, Greece and The Netherlands. We collected image characteristics such as liking, perceived calories and/or perceived healthiness ratings from 449 adults and 191 children. The majority of the foods were recognized and liked at all sites. The differences in liking ratings, perceived calories and perceived healthiness between sites were minimal. Furthermore, perceived caloric content and healthiness ratings correlated strongly (r ≥ 0.8) with actual caloric content in both adults and children. The photographing protocol as well as the images and the data are freely available for research use on http://nutritionalneuroscience.eu/. By providing the research community with standardized images and the tools to create their own, comparability between studies will be improved and a head-start is made for a world-wide standardized food image database. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Towards establishing compact imaging spectrometer standards

    USGS Publications Warehouse

    Slonecker, E. Terrence; Allen, David W.; Resmini, Ronald G.

    2016-01-01

    Remote sensing science is currently undergoing a tremendous expansion in the area of hyperspectral imaging (HSI) technology. Spurred largely by the explosive growth of Unmanned Aerial Vehicles (UAV), sometimes called Unmanned Aircraft Systems (UAS), or drones, HSI capabilities that once required access to one of only a handful of very specialized and expensive sensor systems are now miniaturized and widely available commercially. Small compact imaging spectrometers (CIS) now on the market offer a number of hyperspectral imaging capabilities in terms of spectral range and sampling. The potential uses of HSI/CIS on UAVs/UASs seem limitless. However, the rapid expansion of unmanned aircraft and small hyperspectral sensor capabilities has created a number of questions related to technological, legal, and operational capabilities. Lightweight sensor systems suitable for UAV platforms are being advertised in the trade literature at an ever-expanding rate with no standardization of system performance specifications or terms of reference. To address this issue, both the U.S. Geological Survey and the National Institute of Standards and Technology are developing draft standards to meet these issues. This paper presents the outline of a combined USGS/NIST cooperative strategy to develop and test a characterization methodology to meet the needs of a new and expanding UAV/CIS/HSI user community.

  19. Counting pollen grains using readily available, free image processing and analysis software

    PubMed Central

    Costa, Clayton M.; Yang, Suann

    2009-01-01

    Background and Aims Although many methods exist for quantifying the number of pollen grains in a sample, there are few standard methods that are user-friendly, inexpensive and reliable. The present contribution describes a new method of counting pollen using readily available, free image processing and analysis software. Methods Pollen was collected from anthers of two species, Carduus acanthoides and C. nutans (Asteraceae), then illuminated on slides and digitally photographed through a stereomicroscope. Using ImageJ (NIH), these digital images were processed to remove noise and sharpen individual pollen grains, then analysed to obtain a reliable total count of the number of grains present in the image. A macro was developed to analyse multiple images together. To assess the accuracy and consistency of pollen counting by ImageJ analysis, counts were compared with those made by the human eye. Key Results and Conclusions Image analysis produced pollen counts in 60 s or less per image, considerably faster than counting with the human eye (5–68 min). In addition, counts produced with the ImageJ procedure were similar to those obtained by eye. Because count parameters are adjustable, this image analysis protocol may be used for many other plant species. Thus, the method provides a quick, inexpensive and reliable solution to counting pollen from digital images, not only reducing the chance of error but also substantially lowering labour requirements. PMID:19640891
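
    The record's workflow uses ImageJ; as a rough Python analogue of the same steps (noise removal, thresholding, particle counting), one might write something like the following, with all parameter values being illustrative assumptions rather than those from the paper.

```python
# Rough Python analogue of an ImageJ threshold-and-count pollen workflow.
import numpy as np
from scipy import ndimage

def count_pollen(gray_image, min_size=30):
    smoothed = ndimage.median_filter(gray_image, size=3)       # remove noise
    binary = smoothed > smoothed.mean()                         # simple global threshold
    labels, n = ndimage.label(binary)                           # label candidate grains
    sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    return int(np.count_nonzero(sizes >= min_size))             # keep only grain-sized particles
```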

  20. 25 CFR 547.8 - What are the minimum technical software standards applicable to Class II gaming systems?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false What are the minimum technical software standards... OF CLASS II GAMES § 547.8 What are the minimum technical software standards applicable to Class II gaming systems? This section provides general software standards for Class II gaming systems for the...

  1. IDL Object Oriented Software for Hinode/XRT Image Analysis

    NASA Astrophysics Data System (ADS)

    Higgins, P. A.; Gallagher, P. T.

    2008-09-01

    We have developed a set of object oriented IDL routines that enable users to search, download and analyse images from the X-Ray Telescope (XRT) on-board Hinode. In this paper, we give specific examples of how the object can be used and how multi-instrument data analysis can be performed. The XRT object is a highly versatile and powerful IDL object, which will prove to be a useful tool for solar researchers. This software utilizes the generic Framework object available within the GEN branch of SolarSoft.

  2. Design and Implementation of the Image Format Batch-Conversion Software Based on ImageJ

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Chen, Dong

    2008-09-01

    The authors introduce ImageJ, an open-source, pure-Java image processing program, and describe how to use the ImageJ package for secondary development. Using the package, they implemented format conversion to FITS from TIFF and from SPE files acquired with the WinView software. Based on this, they put forward a method for using the package to perform other format conversions, either individually or in batch mode.
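
    A hedged Python analogue of the TIFF-to-FITS conversion described in this record (the authors work in Java via ImageJ); the SPE/WinView input path is not handled here and the file names are placeholders.

```python
# Minimal TIFF-to-FITS conversion sketch using Pillow and astropy.
import numpy as np
from PIL import Image
from astropy.io import fits

def tiff_to_fits(tiff_path, fits_path):
    data = np.asarray(Image.open(tiff_path))          # read the TIFF pixel array
    fits.writeto(fits_path, data, overwrite=True)     # write a minimal single-HDU FITS file
```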

  3. Software architecture for intelligent image processing using Prolog

    NASA Astrophysics Data System (ADS)

    Jones, Andrew C.; Batchelor, Bruce G.

    1994-10-01

    We describe a prototype system for interactive image processing using Prolog, implemented by the first author on an Apple Macintosh computer. This system is inspired by Prolog+, but differs from it in two particularly important respects. The first is that whereas Prolog+ assumes the availability of dedicated image processing hardware, with which the Prolog system communicates, our present system implements image processing functions in software using the C programming language. The second difference is that although our present system supports Prolog+ commands, these are implemented in terms of lower-level Prolog predicates which provide a more flexible approach to image manipulation. We discuss the impact of the Apple Macintosh operating system upon the implementation of the image-processing functions, and the interface between these functions and the Prolog system. We also explain how the Prolog+ commands have been implemented. The system described in this paper is a fairly early prototype, and we outline how we intend to develop the system, a task which is expedited by the extensible architecture we have implemented.

  4. Comparison of quality control software tools for diffusion tensor imaging.

    PubMed

    Liu, Bilan; Zhu, Tong; Zhong, Jianhui

    2015-04-01

    Image quality of diffusion tensor imaging (DTI) is critical for image interpretation, diagnostic accuracy and efficiency. However, DTI is susceptible to numerous detrimental artifacts that may impair the reliability and validity of the obtained data. Although many quality control (QC) software tools are being developed and are widely used and each has its different tradeoffs, there is still no general agreement on an image quality control routine for DTIs, and the practical impact of these tradeoffs is not well studied. An objective comparison that identifies the pros and cons of each of the QC tools will be helpful for the users to make the best choice among tools for specific DTI applications. This study aims to quantitatively compare the effectiveness of three popular QC tools including DTI studio (Johns Hopkins University), DTIprep (University of North Carolina at Chapel Hill, University of Iowa and University of Utah) and TORTOISE (National Institute of Health). Both synthetic and in vivo human brain data were used to quantify adverse effects of major DTI artifacts to tensor calculation as well as the effectiveness of different QC tools in identifying and correcting these artifacts. The technical basis of each tool was discussed, and the ways in which particular techniques affect the output of each of the tools were analyzed. The different functions and I/O formats that three QC tools provide for building a general DTI processing pipeline and integration with other popular image processing tools were also discussed. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Special Software for Planetary Image Processing and Research

    NASA Astrophysics Data System (ADS)

    Zubarev, A. E.; Nadezhdina, I. E.; Kozlova, N. A.; Brusnikin, E. S.; Karachevtseva, I. P.

    2016-06-01

    Special modules for the photogrammetric processing of remote sensing data were developed that provide the opportunity to effectively organize and optimize planetary studies. The commercial software package PHOTOMOD™ is used as the basic application. Special modules were created to perform various types of data processing: calculation of preliminary navigation parameters, calculation of shape parameters of a celestial body, global-view image orthorectification, and estimation of Sun illumination and Earth visibility from the planetary surface. For photogrammetric processing, different types of data have been used, including images of the Moon, Mars, Mercury, Phobos, the Galilean satellites and Enceladus obtained by frame or push-broom cameras. We used modern planetary data and images taken over the years from orbit, with various illumination and resolution, as well as images obtained by planetary rovers from the surface. Planetary image processing is a complex task that can take from a few months to years. We present our efficient pipeline procedure, which provides the possibility to obtain different data products and supports the long path from planetary images to celestial body maps. The obtained data - new three-dimensional control point networks, elevation models, orthomosaics - enabled accurate map production: a new Phobos atlas (Karachevtseva et al., 2015) and various thematic maps derived from studies of the planetary surface (Karachevtseva et al., 2016a).

  6. OsiriX: an open-source software for navigating in multidimensional DICOM images.

    PubMed

    Rosset, Antoine; Spadola, Luca; Ratib, Osman

    2004-09-01

    A multidimensional image navigation and display software was designed for display and interpretation of large sets of multidimensional and multimodality images such as combined PET-CT studies. The software is developed in Objective-C on a Macintosh platform under the MacOS X operating system using the GNUstep development environment. It also benefits from the extremely fast and optimized 3D graphic capabilities of the OpenGL graphic standard, widely used for computer games, which takes advantage of any available hardware graphic accelerator boards. In the design of the software, special attention was given to adapting the user interface to the specific and complex tasks of navigating through large sets of image data. An interactive jog-wheel device, widely used in the video and movie industry, was implemented to allow users to navigate in the different dimensions of an image set much faster than with a traditional mouse or on-screen cursors and sliders. The program can easily be adapted for very specific tasks that require a limited number of functions by adding and removing tools from the program's toolbar, avoiding an overwhelming number of unnecessary tools and functions. The processing and image rendering tools of the software are based on the open-source libraries ITK and VTK. This ensures that all new developments in image processing that could emerge from other academic institutions using these libraries can be directly ported to the OsiriX program. OsiriX is provided free of charge under the GNU open-source licensing agreement at http://homepage.mac.com/rossetantoine/osirix.

  7. 'Face value': new medical imaging software in commercial view.

    PubMed

    Coopmans, Catelijne

    2011-04-01

    Based on three ethnographic vignettes describing the engagements of a small start-up company with prospective competitors, partners and customers, this paper shows how commercial considerations are folded into the ways visual images become 'seeable'. When company members mount demonstrations of prototype mammography software, they seek to generate interest but also to protect their intellectual property. Pivotal to these efforts to manage revelation and concealment is the visual interface, which is variously performed as obstacle and ally in the development of a profitable product. Using the concept of 'face value', the paper seeks to develop further insight into contemporary dynamics of seeing and showing by tracing the way techno-visual presentations and commercial considerations become entangled in practice. It also draws attention to the salience and significance of enactments of surface and depth in image-based practices.

  8. The accuracy of a designed software for automated localization of craniofacial landmarks on CBCT images.

    PubMed

    Shahidi, Shoaleh; Bahrampour, Ehsan; Soltanimehr, Elham; Zamani, Ali; Oshagh, Morteza; Moattari, Marzieh; Mehdizadeh, Alireza

    2014-09-16

    Two-dimensional projection radiographs have been traditionally considered the modality of choice for cephalometric analysis. To overcome the shortcomings of two-dimensional images, three-dimensional computed tomography (CT) has been used to evaluate craniofacial structures. However, manual landmark detection depends on medical expertise, and the process is time-consuming. The present study was designed to produce software capable of automated localization of craniofacial landmarks on cone beam (CB) CT images based on image registration and to evaluate its accuracy. The software was designed using MATLAB programming language. The technique was a combination of feature-based (principal axes registration) and voxel similarity-based methods for image registration. A total of 8 CBCT images were selected as our reference images for creating a head atlas. Then, 20 CBCT images were randomly selected as the test images for evaluating the method. Three experts twice located 14 landmarks in all 28 CBCT images during two examinations set 6 weeks apart. The differences in the distances of coordinates of each landmark on each image between manual and automated detection methods were calculated and reported as mean errors. The combined intraclass correlation coefficient for intraobserver reliability was 0.89 and for interobserver reliability 0.87 (95% confidence interval, 0.82 to 0.93). The mean errors of all 14 landmarks were <4 mm. Additionally, 63.57% of landmarks had a mean error of <3 mm compared with manual detection (gold standard method). The accuracy of our approach for automated localization of craniofacial landmarks, which was based on combining feature-based and voxel similarity-based methods for image registration, was acceptable. Nevertheless we recommend repetition of this study using other techniques, such as intensity-based methods.
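
    As a sketch of the feature-based step named in the abstract (principal axes registration), one could initialize the alignment of a test volume to the atlas as below; this ignores the voxel-similarity refinement and the eigenvector sign ambiguity, and it is not the authors' implementation.

```python
# Hedged sketch of a principal-axes initialization between two binary head masks.
import numpy as np

def principal_axes(mask):
    """mask: binary 3D array. Returns centroid and axes (rows = eigenvectors)."""
    coords = np.argwhere(mask)
    centroid = coords.mean(axis=0)
    cov = np.cov((coords - centroid).T)
    _, vecs = np.linalg.eigh(cov)                 # eigenvector signs are ambiguous in practice
    return centroid, vecs.T

def initial_transform(atlas_mask, test_mask):
    c_a, ax_a = principal_axes(atlas_mask)
    c_t, ax_t = principal_axes(test_mask)
    rotation = ax_a.T @ ax_t                      # map test axes onto atlas axes
    translation = c_a - rotation @ c_t
    return rotation, translation                  # p_atlas ~ rotation @ p_test + translation
```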

  9. Multi-institutional Validation Study of Commercially Available Deformable Image Registration Software for Thoracic Images.

    PubMed

    Kadoya, Noriyuki; Nakajima, Yujiro; Saito, Masahide; Miyabe, Yuki; Kurooka, Masahiko; Kito, Satoshi; Fujita, Yukio; Sasaki, Motoharu; Arai, Kazuhiro; Tani, Kensuke; Yagi, Masashi; Wakita, Akihisa; Tohyama, Naoki; Jingu, Keiichi

    2016-10-01

    To assess the accuracy of the commercially available deformable image registration (DIR) software for thoracic images at multiple institutions. Thoracic 4-dimensional (4D) CT images of 10 patients with esophageal or lung cancer were used. Datasets for these patients were provided by DIR-lab (dir-lab.com) and included a coordinate list of anatomic landmarks (300 bronchial bifurcations) that had been manually identified. Deformable image registration was performed between the peak-inhale and -exhale images. Deformable image registration error was determined by calculating the difference at each landmark point between the displacement calculated by DIR software and that calculated by the landmark. Eleven institutions participated in this study: 4 used RayStation (RaySearch Laboratories, Stockholm, Sweden), 5 used MIM Software (Cleveland, OH), and 3 used Velocity (Varian Medical Systems, Palo Alto, CA). The ranges of the average absolute registration errors over all cases were as follows: 0.48 to 1.51 mm (right-left), 0.53 to 2.86 mm (anterior-posterior), 0.85 to 4.46 mm (superior-inferior), and 1.26 to 6.20 mm (3-dimensional). For each DIR software package, the average 3-dimensional registration error (range) was as follows: RayStation, 3.28 mm (1.26-3.91 mm); MIM Software, 3.29 mm (2.17-3.61 mm); and Velocity, 5.01 mm (4.02-6.20 mm). These results demonstrate that there was moderate variation among institutions, although the DIR software was the same. We evaluated the commercially available DIR software using thoracic 4D-CT images from multiple centers. Our results demonstrated that DIR accuracy differed among institutions because it was dependent on both the DIR software and procedure. Our results could be helpful for establishing prospective clinical trials and for the widespread use of DIR software. In addition, for clinical care, we should try to find the optimal DIR procedure using thoracic 4D-CT data. Copyright © 2016 Elsevier Inc. All rights
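
    A short numpy sketch of the error metric used in this validation: the difference between the DIR-computed displacement and the landmark-derived displacement at each of the 300 bifurcation points, summarized per axis and in 3D. Array names are placeholders.

```python
# Sketch of the DIR registration-error calculation described above.
import numpy as np

def dir_errors(inhale_pts, exhale_pts, dir_displacements):
    """All inputs are (N, 3) arrays in mm; returns per-axis and 3D mean absolute errors."""
    landmark_disp = np.asarray(exhale_pts) - np.asarray(inhale_pts)   # ground-truth motion
    diff = np.asarray(dir_displacements) - landmark_disp              # DIR minus landmark
    per_axis = np.mean(np.abs(diff), axis=0)                          # RL, AP, SI components
    three_d = np.mean(np.linalg.norm(diff, axis=1))                   # 3D error
    return per_axis, three_d
```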

  10. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    PubMed

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that an inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with inbuilt graphic capture board provides versatile, easy to use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  11. Status of the New IAASS Software Safety Standard for Commercial Suborbital Vehicles

    NASA Astrophysics Data System (ADS)

    Klicker, Michael; Atencia Yepez, Amaya

    2013-09-01

    The complexity and novelty of commercial suborbital flight pose a number of challenges to industry, operators and regulators alike. In order to promote industry self-regulation, IAASS has established a technical committee to tackle the safety issues surrounding commercial suborbital flight vehicles (as will be presented in other papers for the conference). In this context, one of the tasks was to look at existing software safety standards and/or propose new suitable standards to complement the Suborbital Standards proposed by IAASS. The proposed IAASS suborbital software standard will be based on a safety case approach to seamlessly integrate with the proposed IAASS system safety standard for suborbital vehicles. The paper outlines the requirements of the standard as well as the available guidance material.

  12. Development of Automated Image Analysis Software for Suspended Marine Particle Classification

    DTIC Science & Technology

    2003-09-30

    Development of Automated Image Analysis Software for Suspended Marine Particle Classification. Scott Samson, Center for Ocean Technology. ... The objective is to develop automated image analysis software to reduce the effort and time required for manual identification of plankton images. Automated

  13. Development of a Consensus Standard for Verification and Validation of Nuclear System Thermal-Fluids Software

    SciTech Connect

    Edwin A. Harvego; Richard R. Schultz; Ryan L. Crane

    2011-12-01

    With the resurgence of nuclear power and increased interest in advanced nuclear reactors as an option to supply abundant energy without the associated greenhouse gas emissions of the more conventional fossil fuel energy sources, there is a need to establish internationally recognized standards for the verification and validation (V&V) of software used to calculate the thermal-hydraulic behavior of advanced reactor designs for both normal operation and hypothetical accident conditions. To address this need, ASME (American Society of Mechanical Engineers) Standards and Certification has established the V&V 30 Committee, under the jurisdiction of the V&V Standards Committee, to develop a consensus standard for verification and validation of software used for design and analysis of advanced reactor systems. The initial focus of this committee will be on the V&V of system analysis and computational fluid dynamics (CFD) software for nuclear applications. To limit the scope of the effort, the committee will further restrict its focus to software to be used in the licensing of High-Temperature Gas-Cooled Reactors. In this framework, the Standard should conform to Nuclear Regulatory Commission (NRC) and other regulatory practices, procedures and methods for licensing of nuclear power plants as embodied in the United States (U.S.) Code of Federal Regulations and other pertinent documents such as Regulatory Guide 1.203, 'Transient and Accident Analysis Methods' and NUREG-0800, 'NRC Standard Review Plan'. In addition, the Standard should be consistent with applicable sections of ASME NQA-1-2008 'Quality Assurance Requirements for Nuclear Facility Applications (QA)'. This paper describes the general requirements for the proposed V&V 30 Standard, which include: (a) applicable NRC and other regulatory requirements for defining the operational and accident domain of a nuclear system that must be considered if the system is to be licensed, (b) the corresponding calculation domain of

  14. Software development for ACR-approved phantom-based nuclear medicine tomographic image quality control with cross-platform compatibility

    NASA Astrophysics Data System (ADS)

    Oh, Jungsu S.; Choi, Jae Min; Nam, Ki Pyo; Chae, Sun Young; Ryu, Jin-Sook; Moon, Dae Hyuk; Kim, Jae Seung

    2015-07-01

    Quality control and quality assurance (QC/QA) have been two of the most important issues in modern nuclear medicine (NM) imaging for both clinical practice and academic research. Whereas quantitative QC analysis software is common to modern positron emission tomography (PET) scanners, the QC of gamma cameras and/or single-photon-emission computed tomography (SPECT) scanners has not been sufficiently addressed. Although a thorough standard operating process (SOP) for mechanical and software maintenance may help the QC/QA of a gamma camera and SPECT-computed tomography (CT), no previous study has addressed a unified platform or process to decipher or analyze SPECT phantom images acquired from various scanners thus far. In addition, only a few approaches have established cross-platform software to enable technologists and physicists to assess the variety of SPECT scanners from different manufacturers. To resolve these issues, we have developed Interactive Data Language (IDL)-based in-house software for cross-platform (in terms of not only operating systems (OS) but also manufacturers) analyses of the QC data on an ACR SPECT phantom, which is essential for assessing and assuring the tomographic image quality of SPECT. We applied our devised software to our routine quarterly QC of ACR SPECT phantom images acquired from a number of platforms (OS/manufacturers). Based on our experience, we suggest that our devised software can offer a unified platform that allows images acquired from various types of scanners to be analyzed with great precision and accuracy.

  15. Vobi One: a data processing software package for functional optical imaging

    PubMed Central

    Takerkart, Sylvain; Katz, Philippe; Garcia, Flavien; Roux, Sébastien; Reynaud, Alexandre; Chavane, Frédéric

    2014-01-01

    Optical imaging is the only technique that allows recording of the activity of a neuronal population at the mesoscopic scale. A large region of the cortex (10–20 mm diameter) is directly imaged with a CCD camera while the animal performs a behavioral task, producing spatio-temporal data with an unprecedented combination of spatial and temporal resolutions (respectively, tens of micrometers and milliseconds). However, researchers who have developed and used this technique have relied on heterogeneous software and methods to analyze their data. In this paper, we introduce Vobi One, a software package entirely dedicated to the processing of functional optical imaging data. It has been designed to facilitate the processing of data and the comparison of different analysis methods. Moreover, it should help bring good analysis practices to the community because it relies on a database and a standard format for data handling, and it provides tools that support reproducible research. Vobi One is an extension of the BrainVISA software platform, entirely written with the Python programming language, open source and freely available for download at https://trac.int.univ-amu.fr/vobi_one. PMID:24478623

  16. Application of Scion image software to the simultaneous determination of curcuminoids in turmeric (Curcuma longa).

    PubMed

    Sotanaphun, Uthai; Phattanawasin, Panadda; Sriphong, Lawan

    2009-01-01

    Curcumin, desmethoxycurcumin and bisdesmethoxycurcumin are bioactive constituents of turmeric (Curcuma longa). Owing to their different potencies, quality control of turmeric based on the content of each curcuminoid is more reliable than that based on total curcuminoids. However, to perform such an assay, a high-cost instrument is needed. The aim was to develop a simple and low-cost method for the simultaneous quantification of three curcuminoids in turmeric using TLC and the public-domain software Scion Image. The image of a TLC chromatogram of turmeric extract was recorded using a digital scanner. The density of the TLC spot of each curcuminoid was analysed by the Scion Image software. The density value was transformed to concentration by comparison with the calibration curve of standard curcuminoids developed on the same TLC plate. The polynomial regression data for all curcuminoids showed a good linear relationship with R(2) > 0.99 in the concentration range of 0.375-6 microg/spot. The limits of detection and quantitation were 43-73 and 143-242 ng/spot, respectively. The method gave adequate precision, accuracy and recovery. The contents of each curcuminoid determined using this method were not significantly different from those determined using the TLC densitometric method. TLC image analysis using Scion Image is shown to be a reliable method for the simultaneous analysis of the content of each curcuminoid in turmeric.
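    A minimal sketch of the calibration step described above, assuming spot densities have already been measured from the TLC image and using a simple linear calibration curve; the numerical values are illustrative, not taken from the paper:

    ```python
    import numpy as np

    # Calibration: known amounts of a curcuminoid standard (µg/spot) and the
    # integrated spot densities measured from the TLC image (arbitrary units).
    standard_ug = np.array([0.375, 0.75, 1.5, 3.0, 6.0])
    standard_density = np.array([1200, 2350, 4700, 9300, 18500])

    # Linear calibration curve: density = slope * amount + intercept
    slope, intercept = np.polyfit(standard_ug, standard_density, 1)

    def amount_from_density(density):
        """Convert a measured spot density to µg/spot via the calibration curve."""
        return (density - intercept) / slope

    sample_density = 6100.0   # density of a curcumin spot in the extract lane
    print(f"estimated amount: {amount_from_density(sample_density):.2f} µg/spot")
    ```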

  17. Understanding the Perception of Very Small Software Companies towards the Adoption of Process Standards

    NASA Astrophysics Data System (ADS)

    Basri, Shuib; O'Connor, Rory V.

    This paper is concerned with understanding the issues that affect the adoption of software process standards by Very Small Entities (VSEs), their needs from process standards and their willingness to engage with the new ISO/IEC 29110 standard in particular. In order to achieve this goal, a series of industry data collection studies was undertaken with a collection of VSEs. A twin-track approach of qualitative data collection (interviews and focus groups) and quantitative data collection (questionnaire) was undertaken. Data analysis was completed separately for each strand and the final results were merged using the coding mechanisms of grounded theory. This paper serves as a roadmap both for researchers wishing to understand the issues of process standards adoption by very small companies and for the software process standards community.

  18. Verification and Validation of a Fingerprint Image Registration Software

    NASA Astrophysics Data System (ADS)

    Desovski, Dejan; Gandikota, Vijai; Liu, Yan; Jiang, Yue; Cukic, Bojan

    2006-12-01

    The need for reliable identification and authentication is driving the increased use of biometric devices and systems. Verification and validation techniques applicable to these systems are rather immature and ad hoc, yet the consequences of the wide deployment of biometric systems could be significant. In this paper we discuss an approach towards validation and reliability estimation of fingerprint registration software. Our validation approach includes the following three steps: (a) the validation of the source code with respect to the system requirements specification; (b) the validation of the optimization algorithm, which is at the core of the registration system; and (c) the automation of testing. Since the optimization algorithm is heuristic in nature, mathematical analysis and test results are used to estimate the reliability and perform failure analysis of the image registration module.

  19. Development of image-processing software for automatic segmentation of brain tumors in MR images

    PubMed Central

    Vijayakumar, C.; Gharpure, Damayanti Chandrashekhar

    2011-01-01

    Most commercially available software packages for brain tumor segmentation have limited functionality and frequently lack the careful validation that is required for clinical studies. We have developed an image-analysis software package called ‘Prometheus,’ which performs neural system–based segmentation operations on MR images using pre-trained information. The software also has the capability to improve its segmentation performance by using the training module of the neural system. The aim of this article is to present the design and modules of this software. The segmentation module of Prometheus can be used primarily for image analysis in MR images. Prometheus was validated against manual segmentation by a radiologist and its mean sensitivity and specificity were found to be 85.71±4.89% and 93.2±2.87%, respectively. Similarly, the mean segmentation accuracy and mean correspondence ratio were found to be 92.35±3.37% and 0.78±0.046, respectively. PMID:21897560
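    The validation metrics quoted above (sensitivity and specificity against a manual reference) can be computed directly from two binary masks; a minimal sketch, with names and the synthetic example chosen purely for illustration:

    ```python
    import numpy as np

    def sensitivity_specificity(auto_mask, manual_mask):
        """Voxel-wise sensitivity and specificity of an automatic segmentation
        against a manual reference (both boolean arrays of the same shape)."""
        auto = auto_mask.astype(bool)
        manual = manual_mask.astype(bool)
        tp = np.sum(auto & manual)
        tn = np.sum(~auto & ~manual)
        fp = np.sum(auto & ~manual)
        fn = np.sum(~auto & manual)
        return tp / (tp + fn), tn / (tn + fp)

    # Synthetic example: two overlapping square masks
    auto = np.zeros((10, 10), dtype=bool);   auto[2:7, 2:7] = True
    manual = np.zeros((10, 10), dtype=bool); manual[3:8, 3:8] = True
    print(sensitivity_specificity(auto, manual))
    ```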

  20. Two-Dimensional Gel Electrophoresis Image Analysis via Dedicated Software Packages.

    PubMed

    Maurer, Martin H

    2016-01-01

    Analysis of two-dimensional gel electrophoretic images is supported by a number of freely and commercially available software packages. Although each program has its own specific features, all of them follow certain standardized algorithms. The general steps are: (1) detecting and separating individual spots, (2) subtracting background, (3) creating a reference gel and (4) matching the spots to the reference gel, (5) modifying the reference gel, (6) normalizing the gel measurements for comparison, (7) calibrating for isoelectric point and molecular weight markers, (8) constructing a database containing the measurement results and (9) comparing data by statistical and bioinformatic methods.
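    As an illustration of steps (1) and (2) of this generic workflow, a minimal spot-detection sketch using SciPy is given below; the filter size and threshold are arbitrary placeholders, not values prescribed by any of the packages discussed:

    ```python
    import numpy as np
    from scipy import ndimage

    def detect_spots(gel_image, background_size=25, threshold=0.5):
        """Minimal version of steps (1)-(2): subtract a smooth background
        estimated with a large median filter, then label connected spots
        and integrate their intensities ("spot volumes")."""
        background = ndimage.median_filter(gel_image, size=background_size)
        corrected = np.clip(gel_image - background, 0, None)
        mask = corrected > threshold * corrected.max()
        labels, n_spots = ndimage.label(mask)
        volumes = ndimage.sum(corrected, labels, index=range(1, n_spots + 1))
        return labels, volumes

    # Synthetic example: uniform noise with one bright "spot"
    rng = np.random.default_rng(1)
    gel = rng.random((200, 200))
    gel[80:90, 100:110] += 2.0
    labels, volumes = detect_spots(gel)
    print(len(volumes), volumes[:1])
    ```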

  1. Variability of standard liver volume estimation versus software-assisted total liver volume measurement.

    PubMed

    Pomposelli, James J; Tongyoo, Assanee; Wald, Christoph; Pomfret, Elizabeth A

    2012-09-01

    The estimation of the standard liver volume (SLV) is an important component of the evaluation of potential living liver donors and the surgical planning for resection for tumors. At least 16 different formulas for estimating SLV have been published in the worldwide literature. More recently, several proprietary software-assisted image postprocessing (SAIP) programs have been developed to provide accurate volume measurements based on the actual anatomy of a specific patient. Using SAIP, we measured SLV in 375 healthy potential liver donors and compared the results to SLV values that were estimated with the previously published formulas and each donor's demographic and anthropometric data. The percentage errors of the 16 SLV formulas versus SAIP varied by more than 59% (from -21.6% to +37.7%). One formula was not statistically different from SAIP with respect to the percentage error (-1.2%), and another formula was not statistically different with respect to the absolute liver volume (18 mL). More than 75% of the estimated SLV values produced by these 2 formulas had percentage errors within ±15%, and the formulas provided good predictions within acceptable agreement (±15%) on scatter plots. Because of the wide variability, care must be taken when a formula is being chosen for estimating SLV, but the 2 aforementioned formulas provided the most accurate results with our patient demographics.
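    The percentage error used to compare a formula-based SLV estimate with the SAIP measurement is straightforward; a minimal sketch with purely illustrative numbers (not data from the study, and with the SLV formula itself left as an input):

    ```python
    def percentage_error(estimated_slv_ml, measured_tlv_ml):
        """Percentage error of a formula-based SLV estimate relative to the
        software-assisted (SAIP) total liver volume measurement."""
        return 100.0 * (estimated_slv_ml - measured_tlv_ml) / measured_tlv_ml

    # Example: a hypothetical formula estimate of 1500 mL vs a SAIP value of 1620 mL
    print(f"{percentage_error(1500, 1620):+.1f} %")   # -> -7.4 %
    ```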

  2. Collaboration using open standards and open source software (examples of DIAS/CEOS Water Portal)

    NASA Astrophysics Data System (ADS)

    Miura, S.; Sekioka, S.; Kuroiwa, K.; Kudo, Y.

    2015-12-01

    The DIAS/CEOS Water Portal is a part of the DIAS (Data Integration and Analysis System, http://www.editoria.u-tokyo.ac.jp/projects/dias/?locale=en_US) systems for data distribution to users including, but not limited to, scientists, decision makers and officers such as river administrators. One of the functions of this portal is to enable one-stop search of, and access to, various water-related data archived at multiple data centers located all over the world. This portal itself does not store data. Instead, according to requests made by users on the web page, it retrieves data from distributed data centers on the fly and lets users download the data and view rendered images/plots. Our system mainly relies on the open-source software GI-cat (http://essi-lab.eu/do/view/GIcat) and open standards such as OGC-CSW, OpenSearch and the OPeNDAP protocol to enable the above functions. Details on how it works will be introduced during the presentation. Although some data centers have unique metadata formats and/or data search protocols, our portal's brokering function enables users to search across various data centers at one time. And this portal is also connected to other data brokering systems, including the GEOSS DAB (Discovery and Access Broker). As a result, users can search over thousands of datasets and millions of files at one time. Users can access the DIAS/CEOS Water Portal system at http://waterportal.ceos.org/.

  3. Towards an Improvement of Software Development Processes through Standard Business Rules

    NASA Astrophysics Data System (ADS)

    Martínez-Fernández, José L.; Martínez, Paloma; González-Cristóbal, José C.

    The automation of software development processes is a desirable goal of current software companies which would lead to a cost reduction in software production. This automation is the backbone of approaches such as Model Driven Architecture (MDA) or Software Factories. This paper proposes the use of standard Business Rules (using Rules Interchange Format, RIF) to specify application functionality along with a platform to produce automatic implementations for them. The novelty of this proposal is to introduce Business Rules at all levels of MDA architecture in a software development process, providing a supporting tool where production Business Rules are considered at every abstraction level. Production Business Rules are represented through standard languages, rule engine vendor independence is assured via automatic transformation between rule languages, and Business Rules reuse is made possible. The objective is to get the development of production Business Rules closer to non-technical people involved in the software development process through the use of natural language processing approaches, automatic transformations among models and semantic web languages such as Ontology Web Language (OWL).

  4. Predictive images of postoperative levator resection outcome using image processing software

    PubMed Central

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    Purpose This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller’s muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Conclusion Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. PMID:27757008

  5. Predictive images of postoperative levator resection outcome using image processing software.

    PubMed

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop(®)). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery.

  6. A comprehensive software system for image processing and programming. Final report

    SciTech Connect

    Rasure, J.; Hallett, S.; Jordan, R.

    1994-12-31

    XVision is an example of a comprehensive software system dedicated to the processing of multidimensional scientific data. Because it is comprehensive, it is necessarily complex. This design complexity is dealt with by considering XVision as nine overlapping software systems, their components and the required standards. The complexity seen by a user of XVision is minimized by the different interfaces providing access to the image processing routines as well as an interface to ease the incorporation of new routines. The XVision project has stressed the importance of having: (1) interfaces to accommodate users with differing preferences and backgrounds and (2) tools to support the programmer and the scientist. The result is a system that provides a framework for building a powerful research, education and development tool.

  7. The family of standard hydrogen monitoring system computer software design description: Revision 2

    SciTech Connect

    Bender, R.M.

    1994-11-16

    In March 1990, 23 waste tanks at the Hanford Nuclear Reservation were identified as having the potential for the buildup of gas to a flammable or explosive level. As a result of the potential for hydrogen gas buildup, a project was initiated to design a standard hydrogen monitoring system (SHMS) for use at any waste tank to analyze gas samples for hydrogen content. Since it was originally deployed three years ago, two variations of the original system have been developed: the SHMS-B and SHMS-C. All three are currently in operation at the tank farms and will be discussed in this document. To avoid confusion in this document, when a feature is common to all three of the SHMS variants, it will be referred to as "the family of SHMS." When it is specific to only one or two, they will be identified. The purpose of this computer software design document is to provide the following: the computer software requirements specification that documents the essential requirements of the computer software and its external interfaces; the computer software design description; the computer software user documentation for using and maintaining the computer software and any dedicated hardware; and the requirements for computer software design verification and validation.

  8. An open-source software tool for the generation of relaxation time maps in magnetic resonance imaging.

    PubMed

    Messroghli, Daniel R; Rudolph, Andre; Abdel-Aty, Hassan; Wassmuth, Ralf; Kühne, Titus; Dietz, Rainer; Schulz-Menger, Jeanette

    2010-07-30

    In magnetic resonance (MR) imaging, T1, T2 and T2* relaxation times represent characteristic tissue properties that can be quantified with the help of specific imaging strategies. While there are basic software tools for specific pulse sequences, until now no universal software program has been available to automate pixel-wise mapping of relaxation times from various types of images or MR systems. Such a software program would allow researchers to test and compare new imaging strategies and thus would significantly facilitate research in the area of quantitative tissue characterization. After defining requirements for a universal MR mapping tool, a software program named MRmap was created using a high-level graphics language. Additional features include a manual registration tool for source images with motion artifacts and a tabular DICOM viewer to examine pulse sequence parameters. MRmap was successfully tested on three different computer platforms with image data from three different MR system manufacturers and five different types of pulse sequences: multi-image inversion recovery T1; Look-Locker/TOMROP T1; modified Look-Locker (MOLLI) T1; single-echo T2/T2*; and multi-echo T2/T2*. Computing times varied between 2 and 113 seconds. Estimates of relaxation times compared favorably to those obtained from non-automated curve fitting. Completed maps were exported in DICOM format and could be read in standard software packages used for analysis of clinical and research MR data. MRmap is a flexible cross-platform research tool that enables accurate mapping of relaxation times from various pulse sequences. The software allows researchers to optimize quantitative MR strategies in a manufacturer-independent fashion. The program and its source code were made available as open-source software on the internet.
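    Pixel-wise relaxometry of the kind described here amounts to fitting a relaxation model to each pixel's signal across the source images. A minimal sketch of a mono-exponential T2/T2* fit for a single pixel, using SciPy and synthetic data (a generic illustration, not MRmap's actual implementation):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def t2_decay(te, s0, t2):
        """Mono-exponential signal decay used for multi-echo T2/T2* mapping."""
        return s0 * np.exp(-te / t2)

    def fit_t2_pixel(echo_times_ms, signals):
        """Fit S(TE) = S0 * exp(-TE/T2) for one pixel; returns T2 in ms."""
        p0 = (signals[0], 50.0)                      # rough initial guess
        (s0, t2), _ = curve_fit(t2_decay, echo_times_ms, signals, p0=p0)
        return t2

    # Synthetic example: true T2 = 45 ms sampled at five echo times
    te = np.array([10.0, 20.0, 40.0, 60.0, 80.0])
    sig = 1000.0 * np.exp(-te / 45.0)
    print(f"fitted T2: {fit_t2_pixel(te, sig):.1f} ms")
    ```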

  9. A Survey of DICOM Viewer Software to Integrate Clinical Research and Medical Imaging.

    PubMed

    Haak, Daniel; Page, Charles-E; Deserno, Thomas M

    2016-04-01

    The digital imaging and communications in medicine (DICOM) protocol is the leading standard for image data management in healthcare. Imaging biomarkers and image-based surrogate endpoints in clinical trials and medical registries require DICOM viewer software with advanced functionality for visualization and interfaces for integration. In this paper, a comprehensive evaluation of 28 DICOM viewers is performed. The evaluation criteria are obtained from application scenarios in clinical research rather than patient care. They include (i) platform, (ii) interface, (iii) support, (iv) two-dimensional (2D), and (v) three-dimensional (3D) viewing. On average, 4.48 of the 8 2D image-viewing criteria and 1.43 of the 5 3D criteria are satisfied. Suitable DICOM interfaces for central viewing in hospitals are provided by GingkoCADx, MIPAV, and OsiriX Lite. The viewers ImageJ, MicroView, MIPAV, and OsiriX Lite offer all included 3D-rendering features for advanced viewing. Interfaces needed for decentral viewing in web-based systems are offered by Oviyam, Weasis, and Xero. Focusing on open source components, MIPAV is the best candidate for 3D imaging as well as DICOM communication. Weasis is superior for workflow optimization in clinical trials. Our evaluation shows that advanced visualization and suitable interfaces can also be found in the open source field and not only in commercial products.

  10. The Effects of Personalized Practice Software on Learning Math Standards in the Third through Fifth Grades

    ERIC Educational Resources Information Center

    Gomez, Angela Nicole

    2012-01-01

    The purpose of this study was to investigate the effectiveness of "MathFacts in a Flash" software in helping students learn math standards. In each of their classes, the third-, fourth-, and fifth-grade students in a small private Roman Catholic school from the Pacific Northwest were randomly assigned either to a control group that used…

  11. The Effects of Personalized Practice Software on Learning Math Standards in the Third through Fifth Grades

    ERIC Educational Resources Information Center

    Gomez, Angela Nicole

    2012-01-01

    The purpose of this study was to investigate the effectiveness of "MathFacts in a Flash" software in helping students learn math standards. In each of their classes, the third-, fourth-, and fifth-grade students in a small private Roman Catholic school from the Pacific Northwest were randomly assigned either to a control group that used…

  12. Software defined multi-spectral imaging for Arctic sensor networks

    NASA Astrophysics Data System (ADS)

    Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi

    2016-05-01

    Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry Riddle Aeronautical University working with University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop

  13. Efficient 3D rendering for web-based medical imaging software: a proof of concept

    NASA Astrophysics Data System (ADS)

    Cantor-Rivera, Diego; Bartha, Robert; Peters, Terry

    2011-03-01

    Medical Imaging Software (MIS) found in research and in clinical practice, such as in Picture Archiving and Communication Systems (PACS) and Radiology Information Systems (RIS), has not been able to take full advantage of the Internet as a deployment platform. MIS is usually tightly coupled to algorithms that have substantial hardware and software requirements. Consequently, MIS is deployed on thick clients, which usually leads project managers to allocate more resources during the deployment phase of the application than would be allocated if the application were deployed through a web interface. To minimize the costs associated with this scenario, many software providers use or develop plug-ins to provide the delivery platform (internet browser) with the features needed to load, interact with and analyze medical images. Nevertheless, no standard means of achieving this goal has so far been successful. This paper presents a study of WebGL as an alternative to plug-in development for efficient rendering of 3D medical models and DICOM images. WebGL is a technology that enables the internet browser to access the local graphics hardware in a native fashion. Because it is based on OpenGL, a widely accepted graphics industry standard, WebGL is being implemented in most of the major commercial browsers. After a discussion of the details of the technology, a series of experiments is presented to determine the operational boundaries in which WebGL is adequate for MIS. A comparison with current alternatives is also addressed. Finally, conclusions and future work are discussed.

  14. [Study on the image file conformance to DICOM standard about medical imaging device].

    PubMed

    Qiu, Minghui

    2013-09-01

    Whether the image files produced by medical imaging devices conform to the DICOM standard has an important influence on users of PACS. This paper summarizes the results of the author's long-term study of the conformance of image files from medical imaging devices to the DICOM standard. Specific cases of non-conformance to the DICOM standard are described in detail, and the problems caused by such image files are analyzed. Finally, methods for avoiding image files that do not conform to the DICOM standard are presented.

  15. Computer systems and software description for Standard-E+ Hydrogen Monitoring System (SHMS-E+)

    SciTech Connect

    Tate, D.D.

    1997-05-01

    The primary function of the Standard-E+ Hydrogen Monitoring System (SHMS-E+) is to determine tank vapor space gas composition and gas release rate, and to detect gas release events. Characterization of the gas composition is needed for safety analyses. The lower flammability limit, as well as the peak burn temperature and pressure, are dependent upon the gas composition. If there is little or no knowledge about the gas composition, safety analyses utilize compositions that yield the worst case in a deflagration or detonation. Knowledge of the true composition could lead to reductions in the assumptions and therefore there may be a potential for a reduction in controls and work restrictions. Also, knowledge of the actual composition will be required information for the analysis that is needed to remove tanks from the Watch List. Similarly, the rate of generation and release of gases is required information for performing safety analyses, developing controls, designing equipment, and closing safety issues. This report outlines the computer system design layout description for the Standard-E+ Hydrogen Monitoring System.

  16. JUPOS : Amateur analysis of Jupiter images with specialized measurement software

    NASA Astrophysics Data System (ADS)

    Jacquesson, M.; Mettig, H.-J.

    2008-09-01

    spectral range (sorted by descending priority): color; monochrome red; IR broadband; green; not blue, except for particular cases; not narrow band (except for the methane band at 889 nm). 5) Correct alignment of RGB color images from 3 monochrome frames. 6) Choose images of better quality if several are available from about the same time. An important prerequisite is to adjust the outline frame correctly. Problems: phase (darkening of the terminator); limb darkening; tilt of the image (belt edges are not always horizontal); north-south asymmetries (rare); mirror-inverted images; invisibility of the illuminated limb on IR broadband and methane images. How to adjust the outline frame: increase the luminosity and gamma to display the "real limb" (this alone is not sufficient, since many images do not show the real limb because of the image processing); use positions of satellites and shadows if visible; refer to latitudes of permanent or long-lived objects, but only from recent images, as their latitude can vary; set the frame first on the limb and the north and south poles, not at the terminator; in a series of images taken about 1½ hours apart, the same object must have the same position (+/- 0.5°). Measuring objects: 1) Place the WinJUPOS cursor onto the feature's centre. What to measure and what to omit: some regions of Jupiter with high activity (the SEB at present) show many small features that are omitted, because the finest details are often indistinguishable from image artefacts and noise, and because they are often short-lived and appear as "noise" in drift charts; features too close to the planet's limb are not measured; measuring the center of extended objects (e.g. the GRS) by visual estimation can give a systematic error, the solution being to rotate the image; diffuse features have no clear boundaries. 2) Enter the standard JUPOS code of the object (longitude and latitude are automatically computed). 3) Optional: add a description of particular characteristics of the

  17. WorkstationJ: workstation emulation software for medical image perception and technology evaluation research

    NASA Astrophysics Data System (ADS)

    Schartz, Kevin M.; Berbaum, Kevin S.; Caldwell, Robert T.; Madsen, Mark T.

    2007-03-01

    We developed image presentation software that mimics the functionality available in the clinic, but also records time-stamped, observer-display interactions and is readily deployable on diverse workstations making it possible to collect comparable observer data at multiple sites. Commercial image presentation software for clinical use has limited application for research on image perception, ergonomics, computer-aids and informatics because it does not collect observer responses, or other information on observer-display interactions, in real time. It is also very difficult to collect observer data from multiple institutions unless the same commercial software is available at different sites. Our software not only records observer reports of abnormalities and their locations, but also inspection time until report, inspection time for each computed radiograph and for each slice of tomographic studies, window/level, and magnification settings used by the observer. The software is a modified version of the open source ImageJ software available from the National Institutes of Health. Our software involves changes to the base code and extensive new plugin code. Our free software is currently capable of displaying computed tomography and computed radiography images. The software is packaged as Java class files and can be used on Windows, Linux, or Mac systems. By deploying our software together with experiment-specific script files that administer experimental procedures and image file handling, multi-institutional studies can be conducted that increase reader and/or case sample sizes or add experimental conditions.

  18. Software.

    ERIC Educational Resources Information Center

    Journal of Chemical Education, 1989

    1989-01-01

    Presented are reviews of two computer software packages for Apple II computers; "Organic Spectroscopy," and "Videodisc Display Program" for use with "The Periodic Table Videodisc." A sample spectrograph from "Organic Spectroscopy" is included. (CW)

  19. ImFCS: a software for imaging FCS data analysis and visualization.

    PubMed

    Sankaran, Jagadish; Shi, Xianke; Ho, Liang Yoong; Stelzer, Ernst H K; Wohland, Thorsten

    2010-12-06

    The multiplexing of fluorescence correlation spectroscopy (FCS), especially in imaging FCS using fast, sensitive array detectors, requires the handling of large amounts of data. One can easily collect in excess of 100,000 FCS curves a day, too many to be treated manually. Therefore, ImFCS, an open-source software package which relies on standard image files, was developed; it provides a wide range of options for the calculation of spatial and temporal auto- and cross-correlations, as well as differences in Cross-Correlation Functions (ΔCCF). ImFCS permits fitting of standard models to correlation functions and provides optimized histograms of fitted parameters. Applications include the measurement of diffusion and flow with Imaging Total Internal Reflection FCS (ITIR-FCS) and Single Plane Illumination Microscopy FCS (SPIM-FCS) in biologically relevant samples. As a compromise between ITIR-FCS and SPIM-FCS, we extend the applications to Imaging Variable Angle-FCS (IVA-FCS) where sub-critical oblique illumination provides sample sectioning close to the cover slide.
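    The core quantity in imaging FCS is the temporal autocorrelation of each pixel's intensity trace. A minimal sketch of that calculation, assuming the common normalization G(tau) = <dF(t) dF(t+tau)> / <F>^2; this is a generic illustration, not ImFCS code:

    ```python
    import numpy as np

    def temporal_acf(intensity, max_lag):
        """Temporal autocorrelation G(tau) of a single-pixel intensity trace,
        normalized by the squared mean intensity, as used in imaging FCS."""
        f = np.asarray(intensity, dtype=float)
        mean = f.mean()
        df = f - mean
        g = np.empty(max_lag)
        for lag in range(1, max_lag + 1):
            g[lag - 1] = np.mean(df[:-lag] * df[lag:]) / mean**2
        return g

    # Example: correlate a noisy trace of 100,000 frames up to 50 frame lags
    trace = np.random.poisson(100, size=100_000).astype(float)
    print(temporal_acf(trace, 50)[:5])
    ```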

  20. A simple method of image analysis to estimate CAM vascularization by APERIO ImageScope software.

    PubMed

    Marinaccio, Christian; Ribatti, Domenico

    2015-01-01

    The chick chorioallantoic membrane (CAM) assay is a well-established method to test the angiogenic stimulation or inhibition induced by molecules and cells administered onto the CAM. The quantification of blood vessels in the CAM assay relies on a semi-manual image analysis approach which can be time consuming when considering large experimental groups. Therefore we present here a simple and fast volumetric method to inspect differences in vascularization between experimental conditions related to the stimulation and inhibition of CAM angiogenesis based on the Positive Pixel Count algorithm embedded in the APERIO ImageScope software.
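    A much simplified stand-in for a positive-pixel count is sketched below: pixels are classified as "positive" from their hue and saturation and the positive fraction per image is reported. The thresholds are illustrative placeholders, not Aperio's defaults:

    ```python
    import numpy as np
    from skimage import color

    def positive_pixel_fraction(rgb_image, hue_center=0.0, hue_width=0.05,
                                min_saturation=0.2):
        """Fraction of pixels whose hue lies within a window around hue_center
        and whose saturation exceeds a minimum (illustrative thresholds)."""
        hsv = color.rgb2hsv(rgb_image)
        hue, sat = hsv[..., 0], hsv[..., 1]
        # hue is circular, so take the shorter distance around the color wheel
        hue_dist = np.minimum(np.abs(hue - hue_center), 1.0 - np.abs(hue - hue_center))
        positive = (hue_dist <= hue_width) & (sat >= min_saturation)
        return positive.sum() / positive.size

    # Example on a random dummy image; a real CAM field image would be loaded instead
    rng = np.random.default_rng(0)
    dummy = rng.random((64, 64, 3))
    print(positive_pixel_fraction(dummy))
    ```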

  1. Automated facial coding software outperforms people in recognizing neutral faces as neutral from standardized datasets

    PubMed Central

    Lewinski, Peter

    2015-01-01

    Little is known about people’s accuracy of recognizing neutral faces as neutral. In this paper, I demonstrate the importance of knowing how well people recognize neutral faces. I contrasted human recognition scores of 100 typical, neutral front-up facial images with scores of an arguably objective judge – automated facial coding (AFC) software. I hypothesized that the software would outperform humans in recognizing neutral faces because of the inherently objective nature of computer algorithms. Results confirmed this hypothesis. I provided the first-ever evidence that computer software (90%) was more accurate in recognizing neutral faces than people were (59%). I posited two theoretical mechanisms, i.e., smile-as-a-baseline and false recognition of emotion, as possible explanations for my findings. PMID:26441761

  2. Automated facial coding software outperforms people in recognizing neutral faces as neutral from standardized datasets.

    PubMed

    Lewinski, Peter

    2015-01-01

    Little is known about people's accuracy of recognizing neutral faces as neutral. In this paper, I demonstrate the importance of knowing how well people recognize neutral faces. I contrasted human recognition scores of 100 typical, neutral front-up facial images with scores of an arguably objective judge - automated facial coding (AFC) software. I hypothesized that the software would outperform humans in recognizing neutral faces because of the inherently objective nature of computer algorithms. Results confirmed this hypothesis. I provided the first-ever evidence that computer software (90%) was more accurate in recognizing neutral faces than people were (59%). I posited two theoretical mechanisms, i.e., smile-as-a-baseline and false recognition of emotion, as possible explanations for my findings.

  3. Development of Automated Image Analysis Software for Suspended Marine Particle Classification

    DTIC Science & Technology

    2002-09-30

    Development of Automated Image Analysis Software for Suspended Marine Particle Classification. Scott Samson, Center for Ocean Technology. ...and global water column. The project's objective is to develop automated image analysis software to reduce the effort and time

  4. Design and validation of Segment - freely available software for cardiovascular image analysis

    PubMed Central

    2010-01-01

    Background Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Results Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http

  5. An analysis of type F2 software measurement standards for profile surface texture parameters

    NASA Astrophysics Data System (ADS)

    Todhunter, L. D.; Leach, R. K.; Lawes, S. D. A.; Blateyron, F.

    2017-06-01

    This paper reports on an in-depth analysis of ISO 5436 part 2 type F2 reference software for the calculation of profile surface texture parameters that has been performed on the input, implementation and output results of the reference software developed by the National Physical Laboratory (NPL), the National Institute of Standards and Technology (NIST) and Physikalisch-Technische Bundesanstalt (PTB). Surface texture parameters have been calculated for a selection of 17 test data files obtained from the type F1 reference data sets on offer from NPL and NIST. The surface texture parameter calculation results show some disagreements between the software methods of the National Metrology Institutes. These disagreements have been investigated further, and some potential explanations are given.
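    For orientation, two of the basic amplitude parameters such reference software computes are Ra and Rq; a minimal, unfiltered sketch is given below (real implementations first apply the profile filters defined in the ISO standards, which is omitted here):

    ```python
    import numpy as np

    def ra_rq(profile_heights_um):
        """Basic amplitude parameters of a surface profile (no filtering):
        Ra = mean absolute deviation from the mean line, Rq = RMS deviation."""
        z = np.asarray(profile_heights_um, dtype=float)
        z = z - z.mean()              # reference the profile to its mean line
        ra = np.mean(np.abs(z))
        rq = np.sqrt(np.mean(z**2))
        return ra, rq

    # Example: a sinusoidal test profile with 1 µm amplitude
    x = np.linspace(0, 4 * np.pi, 1000)
    print(ra_rq(np.sin(x)))           # Ra ≈ 0.64 µm, Rq ≈ 0.71 µm
    ```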

  6. BioBrick assembly standards and techniques and associated software tools.

    PubMed

    Røkke, Gunvor; Korvald, Eirin; Pahr, Jarle; Oyås, Ove; Lale, Rahmi

    2014-01-01

    The BioBrick idea was developed to introduce the engineering principles of abstraction and standardization into synthetic biology. BioBricks are DNA sequences that serve a defined biological function and can be readily assembled with any other BioBrick parts to create new BioBricks with novel properties. In order to achieve this, several assembly standards can be used. Which assembly standards a BioBrick is compatible with depends on the prefix and suffix sequences surrounding the part. In this chapter, five of the most common assembly standards are described, as well as some of the most used assembly techniques and cloning procedures, together with a presentation of the available software tools that can be used for deciding on the best method for assembling different BioBricks and for searching for BioBrick parts in the Registry of Standard Biological Parts database.

  7. GRO/EGRET data analysis software: An integrated system of custom and commercial software using standard interfaces

    NASA Technical Reports Server (NTRS)

    Laubenthal, N. A.; Bertsch, D.; Lal, N.; Etienne, A.; Mcdonald, L.; Mattox, J.; Sreekumar, P.; Nolan, P.; Fierro, J.

    1992-01-01

    The Energetic Gamma Ray Telescope Experiment (EGRET) on the Compton Gamma Ray Observatory has been in orbit for more than a year and is being used to map the full sky for gamma rays in a wide energy range from 30 to 20,000 MeV. Already these measurements have resulted in a wide range of exciting new information on quasars, pulsars, galactic sources, and diffuse gamma ray emission. The central part of the analysis is done with sky maps that typically cover an 80 x 80 degree section of the sky for an exposure time of several days. Specific software developed for this program generates the counts, exposure, and intensity maps. The analysis is done on a network of UNIX based workstations and takes full advantage of a custom-built user interface called X-dialog. The maps that are generated are stored in the FITS format for a collection of energies. These, along with similar diffuse emission background maps generated from a model calculation, serve as input to a maximum likelihood program that produces maps of likelihood with optional contours that are used to evaluate regions for sources. Likelihood also evaluates the background corrected intensity at each location for each energy interval from which spectra can be generated. Being in a standard FITS format permits all of the maps to be easily accessed by the full complement of tools available in several commercial astronomical analysis systems. In the EGRET case, IDL is used to produce graphics plots in two and three dimensions and to quickly implement any special evaluation that might be desired. Other custom-built software, such as the spectral and pulsar analyses, take advantage of the XView toolkit for display and Postscript output for the color hard copy. This poster paper outlines the data flow and provides examples of the user interfaces and output products. It stresses the advantages that are derived from the integration of the specific instrument-unique software and powerful commercial tools for graphics and

  8. Magnetic resonance imaging diffusion tensor tractography: evaluation of anatomic accuracy of different fiber tracking software packages.

    PubMed

    Feigl, Guenther C; Hiergeist, Wolfgang; Fellner, Claudia; Schebesch, Karl-Michael M; Doenitz, Christian; Finkenzeller, Thomas; Brawanski, Alexander; Schlaier, Juergen

    2014-01-01

    Diffusion tensor imaging (DTI)-based tractography has become an integral part of preoperative diagnostic imaging in many neurosurgical centers, and other nonsurgical specialties depend increasingly on DTI tractography as a diagnostic tool. The aim of this study was to analyze the anatomic accuracy of visualized white matter fiber pathways using different, readily available DTI tractography software programs. Magnetic resonance imaging scans of the head of 20 healthy volunteers were acquired using a Siemens Symphony TIM 1.5T scanner and a 12-channel head array coil. The standard settings of the scans in this study were 12 diffusion directions and 5-mm slices. The fornices were chosen as an anatomic structure for the comparative fiber tracking. Identical data sets were loaded into nine different fiber tracking packages that used different algorithms. The nine software packages and algorithms used were NeuroQLab (modified tensor deflection [TEND] algorithm), Sörensen DTI task card (modified streamline tracking technique algorithm), Siemens DTI module (modified fourth-order Runge-Kutta algorithm), six different software packages from Trackvis (interpolated streamline algorithm, modified FACT algorithm, second-order Runge-Kutta algorithm, Q-ball [FACT algorithm], tensorline algorithm, Q-ball [second-order Runge-Kutta algorithm]), DTI Query (modified streamline tracking technique algorithm), Medinria (modified TEND algorithm), Brainvoyager (modified TEND algorithm), DTI Studio (modified FACT algorithm), and the BrainLab DTI module based on the modified Runge-Kutta algorithm. A neuroradiologist, a magnetic resonance imaging physicist, and a neurosurgeon served as examiners. They were double-blinded with respect to the test subject and the fiber tracking software used in the presented images. Each examiner evaluated 301 images. The examiners were instructed to evaluate screenshots from the different programs based on two main criteria: (i) anatomic

  9. The 3D scanner prototype utilize object profile imaging using line laser and octave software

    NASA Astrophysics Data System (ADS)

    Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus

    2016-11-01

    A three-dimensional scanner, or 3D scanner, is a device that reconstructs a real object into digital form on a computer. 3D scanning is a technology still under development, especially in developed countries, and current advanced devices are very expensive. This study is basically a simple prototype of a 3D scanner with a very low investment cost. The 3D scanner prototype consists of a webcam, a rotating desk system controlled by a stepper motor and an Arduino UNO, and a line laser. The research is limited to objects whose surface lies at the same radius from the center point (the object pivot). Scanning is performed by imaging the object profile highlighted by the line laser, which is then captured by the camera and processed by a computer (image processing) using Octave software. At each image acquisition, the scanned object on the rotating desk is rotated by a fixed angle, so that over one full turn multiple images covering all sides are obtained. The profile is then extracted from all of the images in order to obtain the digital object dimensions. The digital dimensions are calibrated against a length standard (a gauge block). The overall dimensions are then digitally reconstructed into a three-dimensional object. The reconstruction is validated against the original object dimensions and expressed as a percentage error. Based on the validation results, the horizontal dimension error is about 5% to 23% and the vertical dimension error is about +/- 3%.
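    A minimal sketch of the per-frame profile-extraction step described above, assuming a grayscale frame, a brightest-pixel laser line, and a calibration factor obtained from the gauge block (all names and values are illustrative):

    ```python
    import numpy as np

    def extract_profile(gray_frame, mm_per_pixel, laser_column_ref):
        """For each image row, locate the laser line (brightest pixel) and
        convert its horizontal offset from a reference column into a radius
        in mm, using a calibration factor measured with a gauge block."""
        cols = np.argmax(gray_frame, axis=1)          # laser position per row (px)
        radius_mm = (cols - laser_column_ref) * mm_per_pixel
        return radius_mm

    # Synthetic frame with a vertical laser line at column 400
    frame = np.zeros((480, 640))
    frame[:, 400] = 255.0
    print(extract_profile(frame, mm_per_pixel=0.1, laser_column_ref=320)[:3])

    # Profiles from one full turn can then be assembled in cylindrical
    # coordinates (angle, height, radius) and converted to x, y, z points.
    ```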

  10. Survey of standards for electronic image displays

    NASA Astrophysics Data System (ADS)

    Rowe, William A.

    1996-01-01

    Electronic visual displays have been evolving since the 1960s from their original basis in cathode ray tube (CRT) technology. Now, many other technologies are also available, including both flat panels and projection displays. Standards for these displays are being developed at both the national and international levels. Standards activity within the United States is in its infancy and is fragmented according to the inclination of each of the standards developing organizations. The latest round of flat panel display technology was primarily developed in Japan. Initially, standards arose from component vendor-to-OEM customer relationships. As a result, Japanese standards for components are the best developed. The Electronics Industries Association of Japan (EIAJ) is providing their standards to the International Electrotechnical Commission (IEC) for adoption. On the international level, professional societies such as the Human Factors Society (HFS) and the International Organization for Standardization (ISO) have completed major standards. The Human Factors Society developed the first ergonomic standard, HFS-100, and the ISO has developed some sections of a broader ergonomic standard, ISO 9241. This paper addresses the organization of standards activity. Active organizations and their areas of focus are identified. The major standards that have been completed or are in development are described. Finally, suggestions for improving this standards activity are proposed.

  11. Standardization of (99m)Tc by means of a software coincidence system.

    PubMed

    Brito, A B; Koskinas, M F; Litvak, F; Toledo, F; Dias, M S

    2012-09-01

    The procedure followed by the Nuclear Metrology Laboratory, at IPEN, for the primary standardization of (99m)Tc is described. The primary standardization has been accomplished by the coincidence method. The beta channel efficiency was varied by electronic discrimination using a software coincidence counting system. Two windows were selected for the gamma channel: one at 140 keV gamma-ray and the other at 20 keV X-ray total absorption peaks. The experimental extrapolation curves were compared with Monte Carlo simulations by means of code ESQUEMA. Copyright © 2012 Elsevier Ltd. All rights reserved.
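    In coincidence counting, the disintegration rate is commonly obtained by extrapolating Nβ·Nγ/Nc against the inefficiency parameter (1 − εβ)/εβ, with εβ = Nc/Nγ, to zero. A minimal sketch of such an extrapolation, with purely illustrative count rates (not data from this standardization, and assuming a simple linear response):

    ```python
    import numpy as np

    # Illustrative count rates (s^-1) at several beta-channel discrimination levels:
    n_beta  = np.array([8200., 7600., 6900., 6100.])   # beta channel
    n_gamma = np.array([3000., 3000., 3000., 3000.])   # gamma channel
    n_coinc = np.array([2520., 2310., 2080., 1830.])   # coincidences

    eff_beta = n_coinc / n_gamma                 # beta-channel efficiency
    x = (1.0 - eff_beta) / eff_beta              # inefficiency parameter
    y = n_beta * n_gamma / n_coinc               # approaches N0 as x -> 0

    slope, n0 = np.polyfit(x, y, 1)              # linear extrapolation to x = 0
    print(f"extrapolated disintegration rate: {n0:.0f} s^-1")
    ```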

  12. AnaSP: a software suite for automatic image analysis of multicellular spheroids.

    PubMed

    Piccinini, Filippo

    2015-04-01

    Today, more and more biological laboratories use 3D cell cultures and tissues grown in vitro as a 3D model of in vivo tumours and metastases. In the last decades, it has been extensively established that multicellular spheroids represent an efficient model to validate effects of drugs and treatments for human care applications. However, a lack of methods for quantitative analysis limits the usage of spheroids as models for routine experiments. Several methods have been proposed in the literature to perform high-throughput experiments employing spheroids by automatically computing different morphological parameters, such as diameter, volume and sphericity. Nevertheless, these systems are typically grounded on expensive automated technologies, which makes the suggested solutions affordable only for a limited subset of laboratories, frequently those performing high content screening analysis. In this work we propose AnaSP, an open source software suitable for automatically estimating several morphological parameters of spheroids by simply analyzing brightfield images acquired with a standard widefield microscope, even one not equipped with a motorized stage. The experiments performed proved the sensitivity and precision of the proposed segmentation method, and the excellent reliability of AnaSP in computing several morphological parameters of spheroids imaged under different conditions. AnaSP is distributed as an open source software tool. Its modular architecture and graphical user interface make it attractive also for researchers who do not work in areas of computer vision and suitable for both high content screenings and occasional spheroid-based experiments. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
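    A minimal sketch of the kind of morphological measurement such software automates, computed here from a binary spheroid mask with scikit-image; 2D circularity (4πA/P²) is used as a simple stand-in for the sphericity index and may differ from the paper's exact definition:

    ```python
    import numpy as np
    from skimage import measure

    def spheroid_morphology(mask, um_per_pixel):
        """Area, equivalent diameter and 2D circularity of the largest object
        in a binary spheroid mask (circularity = 4*pi*A / P^2)."""
        labeled = measure.label(mask)
        props = max(measure.regionprops(labeled), key=lambda p: p.area)
        area_um2 = props.area * um_per_pixel**2
        diameter_um = np.sqrt(4.0 * props.area / np.pi) * um_per_pixel
        circularity = 4.0 * np.pi * props.area / props.perimeter**2
        return area_um2, diameter_um, circularity

    # Example on a synthetic disc-shaped "spheroid"
    yy, xx = np.mgrid[:200, :200]
    disc = (xx - 100)**2 + (yy - 100)**2 < 60**2
    print(spheroid_morphology(disc, um_per_pixel=2.0))
    ```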

  13. HARPS-N: software path from the observation block to the image

    NASA Astrophysics Data System (ADS)

    Sosnowska, D.; Lodi, M.; Gao, X.; Buchschacher, N.; Vick, A.; Guerra, J.; Gonzalez, M.; Kelly, D.; Lovis, C.; Pepe, F.; Molinari, E.; Cameron, A. C.; Latham, D.; Udry, S.

    2012-09-01

    HARPS-N is the twin of the HARPS (High Accuracy Radial velocity Planet Searcher) spectrograph operating at La Silla (Chile); it was recently installed on the TNG at the La Palma observatory and is used to follow up the "hot" candidates delivered by the Kepler satellite. HARPS-N is delivered with its own software, which integrates completely with the TNG control system. Special care has been dedicated to developing tools that assist the astronomers during the whole process of taking images, from the observation schedule to the raw image acquisition. All these tools are presented in the paper. In order to provide a stable and reliable system, the software has been developed with concepts like failover and high availability in mind. HARPS-N is made of heterogeneous systems, from standard computers to real-time systems, which is why standard message-queue middleware (ActiveMQ) was chosen to provide the communication between different processes. The path of operations starting with the Observation Blocks and ending with the FITS frames is fully automated and could allow, in the future, completely remote observing runs optimized for time and quality constraints.

  14. ASAP (Automatic Software for ASL Processing): A toolbox for processing Arterial Spin Labeling images.

    PubMed

    Mato Abad, Virginia; García-Polo, Pablo; O'Daly, Owen; Hernández-Tamames, Juan Antonio; Zelaya, Fernando

    2016-04-01

    The method of Arterial Spin Labeling (ASL) has experienced a significant rise in its application to functional imaging, since it is the only technique capable of measuring blood perfusion in a truly non-invasive manner. Currently, there are no commercial packages for processing ASL data and there is no recognized standard for normalizing ASL data to a common frame of reference. This work describes a new Automated Software for ASL Processing (ASAP) that can automatically process several ASL datasets. ASAP includes functions for all stages of image pre-processing: quantification, skull-stripping, co-registration, partial volume correction and normalization. To assess the applicability and validity of the toolbox, this work shows its application in the study of hypoperfusion in a sample of healthy subjects at risk of progressing to Alzheimer's disease. ASAP requires limited user intervention, minimizing the possibility of random and systematic errors, and produces cerebral blood flow maps that are ready for statistical group analysis. The software is easy to operate and results in excellent quality of spatial normalization. The results found in this evaluation study are consistent with previous studies that find decreased perfusion in Alzheimer's patients in similar regions and demonstrate the applicability of ASAP. Copyright © 2015 Elsevier Inc. All rights reserved.
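
    The abstract does not spell out ASAP's quantification routine; as an illustration of the kind of computation involved in the quantification stage, the snippet below sketches a widely used single-compartment (p)CASL quantification formula. The default parameter values (labeling efficiency, blood T1, partition coefficient) are typical literature assumptions, and the function name is hypothetical.

        import numpy as np

        def quantify_cbf(control, label, m0, pld=1.8, tau=1.8,
                         t1_blood=1.65, alpha=0.85, lam=0.9):
            """Sketch of single-compartment pCASL CBF quantification.

            control, label, m0 : numpy arrays (mean control, mean label, M0 images)
            pld, tau           : post-labeling delay and label duration [s]
            t1_blood           : longitudinal relaxation time of blood [s]
            alpha              : labeling efficiency
            lam                : blood-brain partition coefficient [ml/g]
            Returns CBF in ml/100 g/min.
            """
            delta_m = control - label
            numer = 6000.0 * lam * delta_m * np.exp(pld / t1_blood)
            denom = 2.0 * alpha * t1_blood * m0 * (1.0 - np.exp(-tau / t1_blood))
            return numer / denom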

  15. NEIGHBOUR-IN: Image processing software for spatial analysis of animal grouping.

    PubMed

    Caubet, Yves; Richard, Freddie-Jeanne

    2015-01-01

    Animal grouping is a very complex process that occurs in many species, involving many individuals under the influence of different mechanisms. To investigate this process, we have created image-processing software, called NEIGHBOUR-IN, designed to analyse the coordinates of individuals belonging to up to three different groups. The software also includes statistical analysis and indexes to discriminate aggregates based on the spatial localisation of individuals and their neighbours. After the description of the software, the indexes computed by the software are illustrated using both artificial patterns and case studies based on the spatial distribution of woodlice. The added strengths of this software and these methods are also discussed.

  16. NEIGHBOUR-IN: Image processing software for spatial analysis of animal grouping

    PubMed Central

    Caubet, Yves; Richard, Freddie-Jeanne

    2015-01-01

    Abstract Animal grouping is a very complex process that occurs in many species, involving many individuals under the influence of different mechanisms. To investigate this process, we have created image-processing software, called NEIGHBOUR-IN, designed to analyse the coordinates of individuals belonging to up to three different groups. The software also includes statistical analysis and indexes to discriminate aggregates based on the spatial localisation of individuals and their neighbours. After the description of the software, the indexes computed by the software are illustrated using both artificial patterns and case studies based on the spatial distribution of woodlice. The added strengths of this software and these methods are also discussed. PMID:26261448
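
    The specific indexes implemented in NEIGHBOUR-IN are not listed in these records; purely as an illustration of how a spatial index can discriminate aggregated from random patterns, the sketch below computes the classical Clark-Evans nearest-neighbour ratio for one group of individuals (values well below 1 indicate aggregation). The function name and the arena-area argument are assumptions for the example.

        import numpy as np
        from scipy.spatial import cKDTree

        def clark_evans_ratio(coords, arena_area):
            """Clark-Evans nearest-neighbour ratio R for a 2-D point pattern.

            coords     : (n, 2) array of individual x, y positions
            arena_area : area of the observation arena (same length units squared)
            R < 1 suggests aggregation, R ~ 1 randomness, R > 1 regularity.
            (Edge effects are ignored in this simple sketch.)
            """
            coords = np.asarray(coords, dtype=float)
            tree = cKDTree(coords)
            # k=2 because each point's nearest neighbour is itself at distance 0
            dists, _ = tree.query(coords, k=2)
            mean_observed = dists[:, 1].mean()
            density = len(coords) / arena_area
            mean_expected = 0.5 / np.sqrt(density)  # expectation under complete spatial randomness
            return mean_observed / mean_expected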

  17. [Diagnostic efficiency on digital snapshots of the standard radiology imaging].

    PubMed

    Campanella, Nando; Antico, Ettore; Dini, Leonardo; Morosini, Pierpaolo

    2004-12-01

    The authors have experience with telediagnosis on digital snapshots of standard radiology imaging (chest, abdomen, and bones), sent by e-mail to support medical doctors working in remote areas of developing countries. In order to validate the overall procedure, the authors set up a simulation model and estimated some accuracy parameters of the diagnosis on digital snapshots against the gold standard of the diagnosis by direct viewing. The study concerned the standard X-ray tests of one hundred randomly selected patients from a hospital archive. Four years after the diagnosis by direct viewing, the team of radiologists carried out a blind cross-check on the digital snapshots of the radiograms and stated their second diagnosis. Sensitivity, specificity, predictive value of positives, predictive value of negatives and efficiency for the whole series were 83.0, 95.1, 96.1, 79.6 and 88.0%, respectively. Breaking the series down by apparatus, the skeleton tests show results similar to those of the whole series. The chest tests show a specificity and predictive value of positives of 100.0%. Although the number of cases is low, the abdomen tests apparently show a sensitivity and predictive value of negatives as high as 100%, but a lower specificity and predictive value of positives (85.7 and 87.5%). Though these data support the validation of the procedure, even better results could presumably be achieved by increasing the quality of the snapshots and by improving the skills of those using the software.
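
    For reference, the accuracy figures quoted above follow directly from the counts of a 2x2 confusion matrix; the short helper below (an illustrative sketch, not the authors' code) shows the arithmetic.

        def diagnostic_metrics(tp, fp, fn, tn):
            """Sensitivity, specificity, PPV, NPV and overall efficiency (accuracy)
            from the counts of a 2x2 confusion matrix."""
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            ppv = tp / (tp + fp)            # predictive value of positives
            npv = tn / (tn + fn)            # predictive value of negatives
            efficiency = (tp + tn) / (tp + fp + fn + tn)
            return sensitivity, specificity, ppv, npv, efficiency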

  18. A Critical Appraisal of Techniques, Software Packages, and Standards for Quantitative Proteomic Analysis

    PubMed Central

    Lawless, Craig; Hubbard, Simon J.; Fan, Jun; Bessant, Conrad; Hermjakob, Henning; Jones, Andrew R.

    2012-01-01

    Abstract New methods for performing quantitative proteome analyses based on differential labeling protocols or label-free techniques are reported in the literature on an almost monthly basis. In parallel, a correspondingly vast number of software tools for the analysis of quantitative proteomics data has also been described in the literature and produced by private companies. In this article we focus on the review of some of the most popular techniques in the field and present a critical appraisal of several software packages available to process and analyze the data produced. We also describe the importance of community standards to support the wide range of software, which may assist researchers in the analysis of data using different platforms and protocols. It is intended that this review will serve bench scientists both as a useful reference and a guide to the selection and use of different pipelines to perform quantitative proteomics data analysis. We have produced a web-based tool (http://www.proteosuite.org/?q=other_resources) to help researchers find appropriate software for their local instrumentation, available file formats, and quantitative methodology. PMID:22804616

  19. Optimising MR perfusion imaging: comparison of different software-based approaches in acute ischaemic stroke.

    PubMed

    Schaafs, Lars-Arne; Porter, David; Audebert, Heinrich J; Fiebach, Jochen B; Villringer, Kersten

    2016-11-01

    Perfusion imaging (PI) is susceptible to confounding factors such as motion artefacts as well as delay and dispersion (D/D). We evaluated the influence of the different post-processing algorithms used by PI analysis software packages on hypoperfusion assessment, in order to improve the clinical accuracy of stroke PI. Fifty patients with acute ischaemic stroke underwent MRI in the first 24 h after onset. Diverging approaches to motion and D/D correction were applied. The calculated MTT and CBF perfusion maps were assessed by volumetry of lesions and tested for agreement with a standard approach and with the final lesion volume (FLV) on day 6 in patients with persisting vessel occlusion. MTT map lesion volumes were significantly smaller throughout the software packages with correction of motion and D/D when compared to the commonly used approach with no correction (p = 0.001-0.022). Volumes on CBF maps did not differ significantly (p = 0.207-0.925). All packages with advanced post-processing algorithms showed a high level of agreement with FLV (ICC = 0.704-0.879). Correction of D/D had a significant influence on estimated lesion volumes and led to significantly smaller lesion volumes on MTT maps. This may improve patient selection. • Hypoperfusion can be assessed using advanced post-processing with correction for motion and D/D. • CBF appears to be more robust to differences in post-processing. • Tissue at risk is estimated more accurately by software algorithms that apply these corrections. • Advanced post-processing algorithms show a higher agreement with the final lesion volume.

  20. Platform-independent software for medical image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Mancuso, Michael E.; Pathak, Sayan D.; Kim, Yongmin

    1997-05-01

    We have developed a software tool for image processing over the Internet. The tool is a general purpose, easy to use, flexible, platform-independent image processing software package with functions most commonly used in medical image processing. It provides for processing of medical images located either remotely on the Internet or locally. The software was written in Java, the new programming language developed by Sun Microsystems. It was compiled and tested using Microsoft's Visual Java 1.0 and Microsoft's Just in Time Compiler 1.00.6211. The software is simple and easy to use. In order to use the tool, the user needs to download the software from our site before running it in any Java interpreter, such as those supplied by Sun, Symantec, Borland or Microsoft. Future versions of the operating systems supplied by Sun, Microsoft, Apple, IBM, and others will include Java interpreters. The software is then able to access and process any image on the Internet or on the local computer. Using a 512 x 512 x 8-bit image, a 3 x 3 convolution took 0.88 seconds on an Intel Pentium Pro PC running at 200 MHz with 64 Mbytes of memory. A window/level operation took 0.38 seconds, while a 3 x 3 median filter took 0.71 seconds. These performance numbers demonstrate the feasibility of using this software interactively on desktop computers. Our software tool supports various image processing techniques commonly used in medical image processing and can run without the need for any specialized hardware. It can become an easily accessible resource over the Internet to promote the learning and understanding of image processing algorithms. It could also facilitate the sharing of medical image databases and collaboration amongst researchers and clinicians, regardless of location.
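
    The original tool was written in Java; the snippet below is a Python re-creation (using scipy.ndimage) of the three operations whose timings are quoted, simply to make the measured workload concrete. The kernel and the window/level values are arbitrary examples, not taken from the paper.

        import time
        import numpy as np
        from scipy import ndimage

        image = np.random.randint(0, 256, (512, 512)).astype(np.float32)  # 512x512 8-bit test image
        kernel = np.full((3, 3), 1.0 / 9.0)                               # 3x3 box kernel

        def window_level(img, window=128.0, level=128.0):
            """Linear window/level mapping to the 0-255 display range."""
            lo, hi = level - window / 2.0, level + window / 2.0
            return np.clip((img - lo) / (hi - lo), 0.0, 1.0) * 255.0

        for name, op in [("3x3 convolution", lambda: ndimage.convolve(image, kernel)),
                         ("window/level",    lambda: window_level(image)),
                         ("3x3 median",      lambda: ndimage.median_filter(image, size=3))]:
            t0 = time.perf_counter()
            op()
            print(f"{name}: {time.perf_counter() - t0:.3f} s")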

  1. Multislice mapping and quantification of brain perfusion MR imaging data: a comparative study of homemade and commercial software.

    PubMed

    Ariöz, Umut; Oğuz, Kader Karli; Sentürk, Senem; Cila, Ayşenur

    2005-12-01

    We developed a homemade computer program for the analysis of perfusion-weighted MR imaging (PW-MRI) data in order to produce colored multislice rCBV, rCBF, and MTT maps. We then compared those maps with maps produced by a commercially available program from the same PW-MRI data, to determine the feasibility of using our program in clinical practice. Studies of 20 patients were performed on a high-field MR scanner. The imaging protocol consisted of a perfusion study (EPI, TR/TE: 1430/46 msec, 10 mm gap, matrix: 128x128, FOV: 240 cm, NEX: 1). Twenty ml of Gd-DTPA was administered at a rate of 4-5 ml/sec beginning at the 5th acquisition of a 50-acquisition dynamic series. MATLAB was used to implement both the mathematical equations and the graphical user interface. All images were in the DICOM standard. For validation of the results, all maps were compared with those from another commercially available program, which is widely used in daily practice and was installed on the MR scanner. The ability to define lesion contours and extension, and artifacts at the bone-soft tissue interface, were the criteria used for statistical evaluation. Field definition was equally good in 38% of the patient scans for both software programs; our homemade software was better in 23% of the cases and the commercial software was better in 31%. In 6% of the results, neither software program was sufficient. For the elimination of artifacts, our homemade software was successful in every case. Our homemade program is a user-friendly one that gives results comparable to those of a commonly used commercial one. However, this program should be tested with different categories of diseases and a larger patient population, and then compared with different commercial software programs, to be validated more definitively.
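
    The abstract does not detail the map computation; the sketch below illustrates a commonly used simplified route from a dynamic susceptibility contrast signal curve to rCBV and an MTT estimate (area under the concentration curve and its normalized first moment). A full implementation would deconvolve with an arterial input function to obtain rCBF, which is omitted here, so treat this only as a schematic of the per-voxel arithmetic.

        import numpy as np

        def dsc_maps(signal, s0, te, tr):
            """Schematic rCBV / MTT estimation for one voxel's DSC-MRI time course.

            signal : 1-D array, dynamic signal S(t)
            s0     : pre-contrast baseline signal
            te, tr : echo time and repetition time [s]
            NOTE: real CBF/MTT maps require deconvolution with an arterial input
            function; this sketch uses the first moment of the concentration
            curve only as a rough MTT surrogate.
            """
            conc = -np.log(signal / s0) / te          # relative contrast-agent concentration
            t = np.arange(len(conc)) * tr
            rcbv = np.trapz(conc, t)                  # area under the concentration curve
            mtt = np.trapz(conc * t, t) / rcbv        # normalized first moment
            return rcbv, mtt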

  2. Integration of XNAT/PACS, DICOM, and Research Software for Automated Multi-modal Image Analysis

    PubMed Central

    Gao, Yurui; Burns, Scott S.; Lauzon, Carolyn B.; Fong, Andrew E.; James, Terry A.; Lubar, Joel F.; Thatcher, Robert W.; Twillie, David A.; Wirt, Michael D.; Zola, Marc A.; Logan, Bret W.; Anderson, Adam W.; Landman, Bennett A.

    2013-01-01

    Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software. PMID:24386548

  3. Integration of XNAT/PACS, DICOM, and research software for automated multi-modal image analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yurui; Burns, Scott S.; Lauzon, Carolyn B.; Fong, Andrew E.; James, Terry A.; Lubar, Joel F.; Thatcher, Robert W.; Twillie, David A.; Wirt, Michael D.; Zola, Marc A.; Logan, Bret W.; Anderson, Adam W.; Landman, Bennett A.

    2013-03-01

    Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.

  4. Metrology Standards for Quantitative Imaging Biomarkers

    PubMed Central

    Obuchowski, Nancy A.; Kessler, Larry G.; Raunig, David L.; Gatsonis, Constantine; Huang, Erich P.; Kondratovich, Marina; McShane, Lisa M.; Reeves, Anthony P.; Barboriak, Daniel P.; Guimaraes, Alexander R.; Wahl, Richard L.

    2015-01-01

    Although investigators in the imaging community have been active in developing and evaluating quantitative imaging biomarkers (QIBs), the development and implementation of QIBs have been hampered by the inconsistent or incorrect use of terminology or methods for technical performance and statistical concepts. Technical performance is an assessment of how a test performs in reference objects or subjects under controlled conditions. In this article, some of the relevant statistical concepts are reviewed, methods that can be used for evaluating and comparing QIBs are described, and some of the technical performance issues related to imaging biomarkers are discussed. More consistent and correct use of terminology and study design principles will improve clinical research, advance regulatory science, and foster better care for patients who undergo imaging studies. © RSNA, 2015 PMID:26267831

  5. Metrology Standards for Quantitative Imaging Biomarkers.

    PubMed

    Sullivan, Daniel C; Obuchowski, Nancy A; Kessler, Larry G; Raunig, David L; Gatsonis, Constantine; Huang, Erich P; Kondratovich, Marina; McShane, Lisa M; Reeves, Anthony P; Barboriak, Daniel P; Guimaraes, Alexander R; Wahl, Richard L

    2015-12-01

    Although investigators in the imaging community have been active in developing and evaluating quantitative imaging biomarkers (QIBs), the development and implementation of QIBs have been hampered by the inconsistent or incorrect use of terminology or methods for technical performance and statistical concepts. Technical performance is an assessment of how a test performs in reference objects or subjects under controlled conditions. In this article, some of the relevant statistical concepts are reviewed, methods that can be used for evaluating and comparing QIBs are described, and some of the technical performance issues related to imaging biomarkers are discussed. More consistent and correct use of terminology and study design principles will improve clinical research, advance regulatory science, and foster better care for patients who undergo imaging studies.

  6. An instructional guide for leaf color analysis using digital imaging software

    Treesearch

    Paula F. Murakami; Michelle R. Turner; Abby K. van den Berg; Paul G. Schaberg

    2005-01-01

    Digital color analysis has become an increasingly popular and cost-effective method utilized by resource managers and scientists for evaluating foliar nutrition and health in response to environmental stresses. We developed and tested a new method of digital image analysis that uses Scion Image or NIH image public domain software to quantify leaf color. This...

  7. Application of open source image guided therapy software in MR-guided therapies.

    PubMed

    Hata, Nobuhiko; Piper, Steve; Jolesz, Ferenc A; Tempany, Clare M C; Black, Peter McL; Morikawa, Shigehiro; Iseki, Horoshi; Hashizume, Makoto; Kikinis, Ron

    2007-01-01

    We present software engineering methods for providing free open-source software for MR-guided therapy. We report that graphical representation of the surgical tools, interconnectivity with the tracking device, patient-to-image registration, and MRI-based thermal mapping are crucial components of MR-guided therapy when sharing such software. The software process includes a network-based distribution mechanism built on the multi-platform build tool CMake, the CVS version control system, and the quality assurance software DART. We developed six procedures at four separate clinical sites using the proposed software engineering and process, and found the proposed method feasible for facilitating multicenter clinical trials of MR-guided therapies. Our future studies include use of the software in non-MR-guided therapies.

  8. An adaptive software defined radio design based on a standard space telecommunication radio system API

    NASA Astrophysics Data System (ADS)

    Xiong, Wenhao; Tian, Xin; Chen, Genshe; Pham, Khanh; Blasch, Erik

    2017-05-01

    Software defined radio (SDR) has become a popular tool for the implementation and testing of communications performance. The advantages of the SDR approach include a re-configurable design, adaptive response to changing conditions, efficient development, and highly versatile implementation. In order to realize the benefits of SDR, the space telecommunication radio system (STRS) was proposed by NASA Glenn Research Center (GRC) along with a standard application program interface (API) structure. Each component of the system uses a well-defined API to communicate with other components. The benefit of a standard API is that it relaxes the platform limitations of each component, allowing additional options. For example, the waveform generating process can be hosted on a field programmable gate array (FPGA), a personal computer (PC), or an embedded system; as long as the API requirements are met, the generated waveform will work with the complete system. In this paper, we demonstrate the design and development of an adaptive SDR following the STRS and standard API protocol. We introduce, step by step, the SDR testbed system, including the controlling graphical user interface (GUI), database, GNU Radio hardware control, and universal software radio peripheral (USRP) transceiving front end. In addition, a performance evaluation is shown on the effectiveness of the SDR approach for space telecommunication.

  9. Comparison of grey scale median (GSM) measurement in ultrasound images of human carotid plaques using two different softwares.

    PubMed

    Östling, Gerd; Persson, Margaretha; Hedblad, Bo; Gonçalves, Isabel

    2013-11-01

    Grey scale median (GSM) measured on ultrasound images of carotid plaques has been used for several years in research to identify the vulnerable plaque. Centres have used different software and also different methods for GSM measurement. This has resulted in a wide range of GSM values and cut-off values for the detection of the vulnerable plaque. The aim of this study was to compare the values obtained with two different software packages, using different standardization methods, for the measurement of GSM on ultrasound images of human carotid plaques. GSM was measured with Adobe Photoshop(®) and with the Artery Measurement System (AMS) on duplex ultrasound images of 100 consecutive medium- to large-sized carotid plaques from the Beta-blocker Cholesterol-lowering Asymptomatic Plaque Study (BCAPS). The mean values of GSM were 35·2 ± 19·3 and 55·8 ± 22·5 for Adobe Photoshop(®) and AMS, respectively. The mean difference was 20·45 (95% CI: 19·17-21·73). Although the absolute values of GSM differed, the agreement between the two measurements was good, with a correlation coefficient of 0·95. A chi-square test revealed a kappa value of 0·68 when studying quartiles of GSM. The intra-observer variability was 1·9% for AMS and 2·5% for Adobe Photoshop. The difference between software packages and standardization methods must be taken into consideration when comparing studies. To avoid these problems, researchers should come to a consensus regarding software and standardization methods for GSM measurement on ultrasound images of plaques in the arteries.
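
    Neither package's internals are described in this record, but the basic GSM computation is simple once the image has been standardized; the sketch below assumes the commonly used normalization in which blood is mapped to grey level 0 and the adventitia to 190, and then takes the median grey level of the plaque pixels. The exact reference values, mask arguments and function name are assumptions for illustration.

        import numpy as np

        def grey_scale_median(image, plaque_mask, blood_mask, adventitia_mask,
                              blood_ref=0.0, adventitia_ref=190.0):
            """Sketch of GSM measurement on a B-mode carotid plaque image.

            The image is linearly rescaled so that the median blood intensity maps
            to `blood_ref` and the median adventitia intensity to `adventitia_ref`
            (a commonly used standardization), then the median grey level of the
            plaque region is returned.
            """
            blood = np.median(image[blood_mask])
            adventitia = np.median(image[adventitia_mask])
            scale = (adventitia_ref - blood_ref) / (adventitia - blood)
            normalized = (image - blood) * scale + blood_ref
            return float(np.median(normalized[plaque_mask]))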

  10. Xmipp 3.0: an improved software suite for image processing in electron microscopy.

    PubMed

    de la Rosa-Trevín, J M; Otón, J; Marabini, R; Zaldívar, A; Vargas, J; Carazo, J M; Sorzano, C O S

    2013-11-01

    Xmipp is a specialized software package for image processing in electron microscopy, mainly focused on 3D reconstruction of macromolecules through single-particle analysis. In this article we present Xmipp 3.0, a major release which introduces several improvements and new developments over the previous version. A central improvement is the concept of a project that stores the entire processing workflow from data import to final results. It is now possible to monitor, reproduce and restart all computing tasks, as well as graphically explore the complete set of interrelated tasks associated with a given project. Other graphical tools have also been improved, such as data visualization, particle picking and parameter "wizards" that allow the visual selection of some key parameters. Many standard image formats are transparently supported for input/output from all programs. Additionally, results have been standardized, facilitating the interoperation between different Xmipp programs. Finally, as a result of a large code refactoring, the underlying C++ libraries are better suited for future developments and all code has been optimized. Xmipp is an open-source package that is freely available for download from: http://xmipp.cnb.csic.es. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. [Development of automatic navigation measuring system using template-matching software in image guided neurosurgery].

    PubMed

    Watanabe, Yohei; Hayashi, Yuichiro; Fujii, Masazumi; Kimura, Miyuki; Sugiura, Akihiro; Tsuzaka, Masatoshi; Wakabayashi, Toshihiko

    2010-02-20

    An image-guided neurosurgery and neuronavigation system based on magnetic resonance imaging has become an indispensable tool for the resection of brain tumors. The accuracy of the neuronavigation system, ensured by periodic quality assurance (QA), is therefore essential for image-guided neurosurgery. Two types of accuracy index, fiducial registration error (FRE) and target registration error (TRE), have been used to evaluate navigation accuracy. FRE shows navigation accuracy at the points that have been registered, whereas TRE shows navigation accuracy at points such as the tumor, skin, and fiducial markers. This study shows that TRE is more reliable than FRE. However, calculation of TRE is a time-consuming, subjective task. Software for QA was therefore developed to compute TRE. This software calculates TRE automatically by means of an image processing technique, automatic template matching. TRE calculated by the software was compared with the results obtained by manual calculation. Using the software made it possible to achieve a reliable QA system.
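
    The record mentions automatic template matching for locating markers; the snippet below sketches one standard way to do this with normalized cross-correlation in OpenCV, followed by the Euclidean distance between the detected and expected positions (the per-marker contribution to a TRE-style error). The template, expected position and function names are placeholders, not the authors' implementation.

        import numpy as np
        import cv2

        def locate_marker(image, template):
            """Return the (x, y) centre of the best template match in `image`
            using normalized cross-correlation (image and template are expected
            to be single-channel arrays of the same dtype)."""
            result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
            _, _, _, max_loc = cv2.minMaxLoc(result)       # top-left corner of best match
            h, w = template.shape[:2]
            return (max_loc[0] + w / 2.0, max_loc[1] + h / 2.0)

        def marker_error_mm(found_xy, expected_xy, pixel_spacing_mm):
            """Euclidean distance between detected and expected marker positions."""
            d = np.subtract(found_xy, expected_xy) * pixel_spacing_mm
            return float(np.hypot(*d))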

  12. An open source software for analysis of dynamic contrast enhanced magnetic resonance images: UMMPerfusion revisited.

    PubMed

    Zöllner, Frank G; Daab, Markus; Sourbron, Steven P; Schad, Lothar R; Schoenberg, Stefan O; Weisser, Gerald

    2016-01-14

    Perfusion imaging has become an important image-based tool for deriving physiological information in various applications, such as tumor diagnostics and therapy, stroke, (cardio-)vascular diseases, or functional assessment of organs. However, even after 20 years of intense research in this field, perfusion imaging still remains a research tool without broad clinical usage. One problem is the lack of standardization in technical aspects which have to be considered for successful quantitative evaluation; the second problem is a lack of tools that allow a direct integration into the diagnostic workflow in radiology. Five compartment models, namely a one-compartment model (1CP), a two-compartment exchange model (2CXM), a two-compartment uptake model (2CUM), a two-compartment filtration model (2FM) and finally the extended Tofts model (ETM), were implemented as a plugin for the DICOM workstation OsiriX. Moreover, the plugin has a clean graphical user interface and provides means for quality management during the perfusion data analysis. Based on reference test data, the implementation was validated against a reference implementation. No differences were found in the calculated parameters. We developed open source software to analyse DCE-MRI perfusion data. The software is designed as a plugin for the DICOM workstation OsiriX. It features a clean GUI and provides a simple workflow for data analysis, while it can also be seen as a toolbox providing implementations of several recent compartment models to be applied in research tasks. Integration into the infrastructure of a radiology department is given via OsiriX. Results can be saved automatically, and reports generated during data analysis ensure quality control.
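
    Of the five models listed, the extended Tofts model is probably the most widely used; the sketch below shows a discrete-time implementation of its tissue concentration curve, which is the forward model a plugin like this fits to DCE-MRI data. Variable names are illustrative and not taken from UMMPerfusion.

        import numpy as np

        def extended_tofts(t, cp, ktrans, ve, vp):
            """Tissue concentration predicted by the extended Tofts model.

            t      : time points [min], uniformly spaced
            cp     : arterial plasma concentration Cp(t)
            ktrans : transfer constant [1/min]
            ve     : extravascular extracellular volume fraction
            vp     : plasma volume fraction

            Ct(t) = vp*Cp(t) + Ktrans * integral_0^t Cp(tau) exp(-kep (t - tau)) dtau
            """
            dt = t[1] - t[0]
            kep = ktrans / ve
            kernel = np.exp(-kep * t)
            # discrete convolution approximating the leakage integral
            leakage = ktrans * np.convolve(cp, kernel)[: len(t)] * dt
            return vp * cp + leakage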

  13. Wavelet/scalar quantization compression standard for fingerprint images

    SciTech Connect

    Brislawn, C.M.

    1996-06-12

    US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
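
    The WSQ specification itself defines particular filters, quantizer design and entropy coding; the sketch below only illustrates the core idea named in the abstract, uniform scalar quantization of a wavelet subband decomposition, using PyWavelets. It is a toy demonstration of the principle, not the FBI-compliant codec, and the wavelet, level count and step size are arbitrary assumptions.

        import numpy as np
        import pywt

        def wsq_like_quantize(image, wavelet="bior4.4", levels=5, step=8.0):
            """Toy illustration of wavelet/scalar quantization (NOT the FBI WSQ codec):
            decompose the image, uniformly quantize the coefficients, reconstruct."""
            coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)

            def quantize(a):
                return np.round(a / step) * step        # uniform scalar quantizer

            q_coeffs = [quantize(coeffs[0])]            # approximation subband
            for (ch, cv, cd) in coeffs[1:]:             # detail subbands per level
                q_coeffs.append((quantize(ch), quantize(cv), quantize(cd)))

            return pywt.waverec2(q_coeffs, wavelet)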

  14. JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age (Invited)

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Dimitoglou, G.; Langenberg, M.; Pagel, S.; Dau, A.; Nuhn, M.; Garcia Ortiz, J. P.; Dietert, H.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2010-12-01

    The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data is bound to be accessible only from a few repositories and users will have to deal with data sets effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community.

  15. A tutorial for software development in quantitative proteomics using PSI standard formats☆

    PubMed Central

    Gonzalez-Galarza, Faviel F.; Qi, Da; Fan, Jun; Bessant, Conrad; Jones, Andrew R.

    2014-01-01

    The Human Proteome Organisation — Proteomics Standards Initiative (HUPO-PSI) has been working for ten years on the development of standardised formats that facilitate data sharing and public database deposition. In this article, we review three HUPO-PSI data standards — mzML, mzIdentML and mzQuantML, which can be used to design a complete quantitative analysis pipeline in mass spectrometry (MS)-based proteomics. In this tutorial, we briefly describe the content of each data model, sufficient for bioinformaticians to devise proteomics software. We also provide guidance on the use of recently released application programming interfaces (APIs) developed in Java for each of these standards, which makes it straightforward to read and write files of any size. We have produced a set of example Java classes and a basic graphical user interface to demonstrate how to use the most important parts of the PSI standards, available from http://code.google.com/p/psi-standard-formats-tutorial. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. PMID:23584085

  16. A tutorial for software development in quantitative proteomics using PSI standard formats.

    PubMed

    Gonzalez-Galarza, Faviel F; Qi, Da; Fan, Jun; Bessant, Conrad; Jones, Andrew R

    2014-01-01

    The Human Proteome Organisation - Proteomics Standards Initiative (HUPO-PSI) has been working for ten years on the development of standardised formats that facilitate data sharing and public database deposition. In this article, we review three HUPO-PSI data standards - mzML, mzIdentML and mzQuantML, which can be used to design a complete quantitative analysis pipeline in mass spectrometry (MS)-based proteomics. In this tutorial, we briefly describe the content of each data model, sufficient for bioinformaticians to devise proteomics software. We also provide guidance on the use of recently released application programming interfaces (APIs) developed in Java for each of these standards, which makes it straightforward to read and write files of any size. We have produced a set of example Java classes and a basic graphical user interface to demonstrate how to use the most important parts of the PSI standards, available from http://code.google.com/p/psi-standard-formats-tutorial. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. An image-based software tool for screening retinal fundus images using vascular morphology and network transport analysis

    NASA Astrophysics Data System (ADS)

    Clark, Richard D.; Dickrell, Daniel J.; Meadows, David L.

    2014-03-01

    As the number of digital retinal fundus images taken each year grows at an increasing rate, there exists a similarly increasing need for automatic eye disease detection through image-based analysis. A new method has been developed for classifying standard color fundus photographs into healthy and diseased categories. This classification was based on the calculated network fluid conductance, a function of the geometry and connectivity of the vascular segments. To evaluate the network resistance, the retinal vasculature was first manually separated from the background to ensure an accurate representation of the geometry and connectivity. The arterial and venous networks were then semi-automatically separated into two binary images. The connectivity of the arterial network was then determined through a series of morphological image operations. The network comprised segments of vasculature and points of bifurcation, with each segment having characteristic geometric and fluid properties. Based on the connectivity and fluid resistance of each vascular segment, an arterial network flow conductance was calculated, which describes the ease with which blood can pass through a vascular system. In this work, 27 eyes (13 healthy and 14 diabetic) from patients roughly 65 years of age were evaluated using this methodology. Healthy arterial networks exhibited an average fluid conductance of 419 ± 89 μm3/mPa-s, while the average network fluid conductance of the diabetic set was 165 ± 87 μm3/mPa-s (p < 0.001). The results of this new image-based software demonstrate an ability to automatically, quantitatively and efficiently screen diseased eyes from color fundus imagery.
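
    The full network conductance calculation depends on the segmented vasculature's connectivity; the fragment below shows only the underlying per-segment physics (Poiseuille flow) and how series and parallel segments would combine, with an assumed effective blood viscosity. It is a schematic of the idea, not the authors' algorithm.

        import numpy as np

        BLOOD_VISCOSITY = 3.5e-3   # Pa*s, assumed effective viscosity

        def segment_conductance(radius_um, length_um, mu=BLOOD_VISCOSITY):
            """Poiseuille conductance of a cylindrical vessel segment, in um^3/(mPa*s)."""
            mu_mpa_s = mu * 1e3                       # Pa*s -> mPa*s
            return np.pi * radius_um ** 4 / (8.0 * mu_mpa_s * length_um)

        def series(conductances):
            """Segments in series: resistances (1/g) add."""
            return 1.0 / np.sum(1.0 / np.asarray(conductances))

        def parallel(conductances):
            """Branches in parallel: conductances add."""
            return float(np.sum(conductances))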

  18. Software Compression for Partially Parallel Imaging with Multi-channels.

    PubMed

    Huang, Feng; Vijayakumar, Sathya; Akao, James

    2005-01-01

    In magnetic resonance imaging, multi-channel phased array coils enjoy a high signal to noise ratio (SNR) and better parallel imaging performance. But with the increase in the number of channels, the reconstruction time and the requirement for computer memory become inevitable problems. In this work, principal component analysis is applied to reduce the size of the data while preserving the performance of parallel imaging. Clinical data collected using a 32-channel cardiac coil are used in the experiments. Experimental results show that the proposed method dramatically reduces the processing time without much damage to the reconstructed image.
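
    The abstract does not give implementation details; the sketch below shows the basic principal-component channel compression idea, reducing a 32-channel acquisition to a smaller set of virtual channels via the SVD of the channel covariance. The function name and the retained-channel count are illustrative assumptions.

        import numpy as np

        def compress_channels(kspace, n_virtual=8):
            """Compress multi-channel k-space data using principal component analysis.

            kspace    : complex array of shape (n_channels, n_samples)
                        (flatten the k-space dimensions into n_samples)
            n_virtual : number of virtual channels to keep
            Returns the compressed data of shape (n_virtual, n_samples).
            """
            # SVD of the channel covariance; the columns of U are principal
            # channel combinations ordered by retained signal energy.
            u, s, _ = np.linalg.svd(kspace @ kspace.conj().T)
            compression = u[:, :n_virtual].conj().T          # (n_virtual, n_channels)
            return compression @ kspace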

  19. The creation of a public database of precision phantoms to facilitate the evaluation and standardization of advanced visualization and quantification software

    NASA Astrophysics Data System (ADS)

    Chen, Joseph J.; Saenz, Naomi J.; Siegel, Eliot L.

    2009-02-01

    In order to validate CT imaging as a biomarker, it is important to ascertain the variability and artifacts associated with various forms of advanced visualization and quantification software. The purpose of this paper is to describe the rationale behind the creation of a free, public resource that contains phantom datasets for CT designed to facilitate testing, development and standardization of advanced visualization and quantification software. For our research, three phantoms were scanned at multiple kVp and mAs settings utilizing a 64-channel MDCT scanner at a collimation of 0.75 mm. Images were reconstructed at a slice thickness of 0.75 mm and archived in DICOM format. The phantoms consisted of precision spheres, balls of different materials and sizes, and slabs of Last-A-Foam(R) at varying densities. The database of scans is stored in an archive utilizing software developed for the National Cancer Imaging Archive and is publicly available. The scans were completed successfully and the datasets are available for free and unrestricted download. The CT images can be accessed in DICOM format via HTTP or FTP, or by utilizing caGRID. A DICOM database of phantom data was successfully created and made available to the public. We anticipate that this database will be useful as a reference for physicists for quality control purposes, for developers of advanced visualization and quantification software, and for others who need to test the performance of their systems against a known "gold" standard. We plan to add more phantom images in the future and expand to other imaging modalities.

  20. Intra-operative image update: first experiences with new software in computer-assisted sinus surgery.

    PubMed

    Wurm, Jochen; Bohr, Christopher; Iro, Heinrich; Bumm, Klaus

    2008-09-01

    So far, conventional navigation systems do not provide the opportunity for any modification of acquired image datasets. In particular, the surgical progress in the operating field cannot be visualized unless new imaging scans are performed. In a feasibility study, new software creating intra-operative image updates by virtual means was tested in conjunction with conventional navigation. With this new software, surgically removed tissue volumes can be traced and viewed directly within the diagnostic image data. The new software represents an interesting and helpful amendment to conventional computer-assisted surgery in selected cases. During surgical procedures around bony structures, the surgeon gets an accurate virtual image update of the surgical progress in the operating field and the amount of tissue removed. However, in cases where mobile structures are present or soft tissue shifts are expected, this feature seems to be suitable only to a limited extent.

  1. Fluorescence Image Analyzer - FLIMA: software for quantitative analysis of fluorescence in situ hybridization.

    PubMed

    Silva, H C M; Martins-Júnior, M M C; Ribeiro, L B; Matoso, D A

    2017-03-30

    The Fluorescence Image Analyzer (FLIMA) software was developed for the quantitative analysis of images generated by fluorescence in situ hybridization (FISH). Currently, FISH images are examined without a coefficient that enables comparison between them. Using the GD Graphics Library, the FLIMA software counts the pixels in the image and recognizes each color present. The coefficient generated by the algorithm gives the percentage of marks (probes) hybridized on the chromosomes. This software can be used for any type of image generated by a fluorescence microscope and is able to quantify digoxigenin probes exhibiting a red color, biotin probes exhibiting a green color, and double-FISH probes (digoxigenin and biotin used together), which are displayed in white.
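
    The exact color classification rules used by FLIMA are not given here; the sketch below shows one straightforward way to count red, green and white pixels in an RGB FISH image by channel thresholding and report them as percentages of the chromosome area. The threshold value and function name are assumptions for illustration.

        import numpy as np

        def fish_signal_percentages(rgb, chromosome_mask, threshold=128):
            """Percentage of chromosome pixels showing red, green or white signal.

            rgb             : uint8 array of shape (H, W, 3)
            chromosome_mask : boolean array of shape (H, W)
            threshold       : per-channel intensity above which a channel counts as 'on'
            """
            r, g, b = (rgb[..., i] > threshold for i in range(3))
            red_only = r & ~g & chromosome_mask              # digoxigenin probe
            green_only = g & ~r & chromosome_mask            # biotin probe
            white = r & g & b & chromosome_mask              # double-FISH (both probes)
            total = chromosome_mask.sum()
            return {name: 100.0 * m.sum() / total
                    for name, m in [("red", red_only), ("green", green_only), ("white", white)]}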

  2. Digital radiography: optimization of image quality and dose using multi-frequency software.

    PubMed

    Precht, H; Gerke, O; Rosendahl, K; Tingberg, A; Waaler, D

    2012-09-01

    New developments in the processing of digital radiographs (DR), including multi-frequency processing (MFP), allow optimization of image quality and radiation dose. This is particularly promising in children, as they are believed to be more sensitive to ionizing radiation than adults. The aim was to examine whether the use of MFP software reduces the radiation dose without compromising quality in DR of the femur in 5-year-old-equivalent anthropomorphic and technical phantoms. A total of 110 images of an anthropomorphic phantom were acquired on a DR system (Canon DR with CXDI-50 C detector and MLT[S] software) and analyzed by three pediatric radiologists using Visual Grading Analysis. In addition, 3,500 images taken of a technical contrast-detail phantom (CDRAD 2.0) provided an objective image-quality assessment. Optimal image quality was maintained at a dose reduction of 61% with MLT(S)-optimized images. Even for images of diagnostic quality, MLT(S) provided a dose reduction of 88% as compared to the reference image. The impact of the software on image quality was significant for dose (mAs), the dynamic range dark region and the frequency band. By optimizing image processing parameters, a significant dose reduction is possible without significant loss of image quality.

  3. Detection of patient setup errors with a portal image - DRR registration software application.

    PubMed

    Sutherland, Kenneth; Ishikawa, Masayori; Bengua, Gerard; Ito, Yoichi M; Miyamoto, Yoshiko; Shirato, Hiroki

    2011-02-18

    The purpose of this study was to evaluate a custom portal image - digitally reconstructed radiograph (DRR) registration software application. The software works by transforming the portal image into the coordinate space of the DRR image using three control points placed on each image by the user, and displaying the fused image. In order to test statistically that the software actually improves setup error estimation, an intra- and interobserver phantom study was performed. Portal images of anthropomorphic thoracic and pelvis phantoms with virtually placed irradiation fields at known setup errors were prepared. A group of five doctors was first asked to estimate the setup errors by examining the portal and DRR image side-by-side, not using the software. A second group of four technicians then estimated the same set of images using the registration software. These two groups of human subjects were then compared with an auto-registration feature of the software, which is based on the mutual information between the portal and DRR images. For the thoracic case, the average distance between the actual setup error and the estimated error was 4.3 ± 3.0 mm for doctors using the side-by-side method, 2.1 ± 2.4 mm for technicians using the registration method, and 0.8 ± 0.4 mm for the automatic algorithm. For the pelvis case, the average distance between the actual setup error and estimated error was 2.0 ± 0.5 mm for the doctors using the side-by-side method, 2.5 ± 0.4 mm for technicians using the registration method, and 2.0 ± 1.0 mm for the automatic algorithm. The ability of humans to estimate offset values improved statistically using our software for the chest phantom that we tested. Setup error estimation was further improved using our automatic error estimation algorithm. Estimations were not statistically different for the pelvis case. Consistency improved using the software for both the chest and pelvis phantoms. We also tested the automatic algorithm with a
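
    The registration described above maps the portal image into the DRR coordinate space from three user-placed control points; the sketch below shows how a 2-D affine transform can be solved exactly from three point correspondences. Variable and function names are illustrative, not the study's code.

        import numpy as np

        def affine_from_three_points(src, dst):
            """Solve the 2-D affine transform mapping three source points to three
            destination points.

            src, dst : (3, 2) arrays of (x, y) control points
            Returns a 2x3 matrix A such that dst_i ~= A @ [x_i, y_i, 1].
            """
            src = np.asarray(src, dtype=float)
            dst = np.asarray(dst, dtype=float)
            m = np.hstack([src, np.ones((3, 1))])     # (3, 3) design matrix of [x, y, 1] rows
            # Solve for the x and y rows of the affine matrix simultaneously.
            params = np.linalg.solve(m, dst)          # (3, 2): columns are [a, b, c] per axis
            return params.T                           # (2, 3) affine matrix

        def apply_affine(a, points):
            """Apply a 2x3 affine matrix to an (n, 2) array of points."""
            pts = np.hstack([np.asarray(points, dtype=float), np.ones((len(points), 1))])
            return pts @ a.T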

  4. Self-contained off-line media for exchanging medical images using DICOM-compliant standard

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Ligier, Yves; Rosset, Antoine; Staub, Jean-Christophe; Logean, Marianne; Girard, Christian

    2000-05-01

    The goal of this project is to develop and implement off-line DICOM-compliant CD ROMs that contain the necessary software tools for displaying the images and related data on any personal computer. We implemented a hybrid recording technique allowing CD-ROMs for Macintosh and Windows platforms to be fully DICOM compliant. A public domain image viewing program (OSIRIS) is recorded on the CD for display and manipulation of sequences of images. The content of the disk is summarized in a standard HTML file that can be displayed on any web-browser. This allows the images to be easily accessible on any desktop computer, while being also readable on high-end commercial DICOM workstations. The HTML index page contains a set of thumbnails and full-size JPEG images that are directly linked to the original high-resolution DICOM images through an activation of the OSIRIS program. Reports and associated text document are also converted to HTML format to be easily displayable directly within the web browser. This portable solution provides a convenient and low cost alternative to hard copy images for exchange and transmission of images to referring physicians and external care providers without the need for any specialized software or hardware.

  5. ESO C Library for an Image Processing Software Environment (eclipse)

    NASA Astrophysics Data System (ADS)

    Devillard, N.

    Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry, image-space resampling, image combination, and cube stacking. It also contains support for mathematical tools like random number generation, FFT, curve fitting, matrices, fast median computation, and point-pattern matching. The main feature of this library is its ability to handle large amounts of input data (up to 2 GB in the current version) regardless of the amount of memory and swap available on the local machine. Another feature is the very high speed allowed by optimized C, making it an ideal base tool for programming efficient number-crunching applications, e.g., on parallel (Beowulf) systems. Running on all Unix-like platforms, eclipse is portable. A high-level interface to Python is foreseen that would allow programmers to prototype their applications much faster than through C programs.

  6. Eclipse: ESO C Library for an Image Processing Software Environment

    NASA Astrophysics Data System (ADS)

    Devillard, Nicolas

    2011-12-01

    Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry, image-space resampling, image combination, and cube stacking. It also contains support for mathematical tools like random number generation, FFT, curve fitting, matrices, fast median computation, and point-pattern matching. The main feature of this library is its ability to handle large amounts of input data (up to 2GB in the current version) regardless of the amount of memory and swap available on the local machine. Another feature is the very high speed allowed by optimized C, making it an ideal base tool for programming efficient number-crunching applications, e.g., on parallel (Beowulf) systems.

  7. Software for Analyzing Sequences of Flow-Related Images

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2004-01-01

    Spotlight is a computer program for analysis of sequences of images generated in combustion and fluid physics experiments. Spotlight can perform analysis of a single image in an interactive mode or a sequence of images in an automated fashion. The primary type of analysis is tracking of positions of objects over sequences of frames. Features and objects that are typically tracked include flame fronts, particles, droplets, and fluid interfaces. Spotlight automates the analysis of object parameters, such as centroid position, velocity, acceleration, size, shape, intensity, and color. Images can be processed to enhance them before statistical and measurement operations are performed. An unlimited number of objects can be analyzed simultaneously. Spotlight saves results of analyses in a text file that can be exported to other programs for graphing or further analysis. Spotlight is a graphical-user-interface-based program that at present can be executed on Microsoft Windows and Linux operating systems. A version that runs on Macintosh computers is being considered.

  8. Accuracy of 3D Imaging Software in Cephalometric Analysis

    DTIC Science & Technology

    2013-06-21

    of a small region. The two most common types are periapical and bitewing radiographs. Periapical radiographs obtain an image of the entire tooth ...ability to image craniofacial anatomy in three dimensions. For orthodontists, this means improved visualization of tooth position, skeletal features...sufficient to localize the tooth to one side of the alveolus or the other using the “same lingual, opposite buccal” (SLOB) rule (Maverna & Gracco

  9. Polarization information processing and software system design for simultaneously imaging polarimetry

    NASA Astrophysics Data System (ADS)

    Wang, Yahui; Liu, Jing; Jin, Weiqi; Wen, Renjie

    2015-08-01

    Simultaneous imaging polarimetry can realize real-time polarization imaging of a dynamic scene, which has wide application prospects. This paper first briefly illustrates the design of a double separate Wollaston prism simultaneous imaging polarimeter, and then emphasis is placed on the polarization information processing methods and the software system design for the designed polarimeter. The polarization information processing methods consist of adaptive image segmentation, high-accuracy image registration, and instrument matrix calibration. Morphological image processing was used for image segmentation, by applying dilation to the image; the accuracy of image registration can reach 0.1 pixel based on spatial- and frequency-domain cross-correlation; instrument matrix calibration adopted a four-point calibration method. The software system was implemented under Windows based on the C++ programming language, and it realizes synchronous polarization image acquisition and storage, image processing, and polarization information extraction and display. Polarization data obtained with the designed polarimeter show that the polarization information processing methods and the software system effectively realize real-time measurement of the four Stokes parameters of a scene, and that the processing methods effectively improve the polarization detection accuracy.
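
    The reconstruction formulas are not given in the record; as an illustration of the final extraction step, the sketch below computes the linear Stokes parameters (and the derived degree and angle of linear polarization) from four registered intensity images behind analyzers at 0, 45, 90 and 135 degrees, which is one common measurement scheme for this type of polarimeter. The actual prism design may differ, and the circular component S3 (which needs retarder measurements) is omitted.

        import numpy as np

        def linear_stokes(i0, i45, i90, i135):
            """Linear Stokes parameters from four registered polarization-channel images.

            i0, i45, i90, i135 : intensity images at analyzer angles 0, 45, 90, 135 degrees
            Returns S0, S1, S2, degree of linear polarization (DoLP) and angle (AoLP, rad).
            """
            s0 = 0.5 * (i0 + i45 + i90 + i135)
            s1 = i0 - i90
            s2 = i45 - i135
            dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
            aolp = 0.5 * np.arctan2(s2, s1)
            return s0, s1, s2, dolp, aolp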

  10. BIRP: Software for interactive search and retrieval of image engineering data

    NASA Technical Reports Server (NTRS)

    Arvidson, R. E.; Bolef, L. K.; Guinness, E. A.; Norberg, P.

    1980-01-01

    Better Image Retrieval Programs (BIRP), a set of programs for interactively sorting through and displaying a database, such as engineering data for images acquired by spacecraft, is described. An overview of the philosophy of the BIRP design, the structure of BIRP data files, and examples that illustrate the capabilities of the software are provided.

  11. Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing

    NASA Technical Reports Server (NTRS)

    Logan, Thomas L.; Bryant, Nevin A.

    1987-01-01

    The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Image Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.

  12. A Review of Diffusion Tensor Magnetic Resonance Imaging Computational Methods and Software Tools

    PubMed Central

    Hasan, Khader M.; Walimuni, Indika S.; Abid, Humaira; Hahn, Klaus R.

    2010-01-01

    In this work we provide an up-to-date short review of computational magnetic resonance imaging (MRI) methods and software tools that are widely used to process and analyze diffusion-weighted MRI data. A review of the different methods used to acquire, model and analyze diffusion-weighted imaging (DWI) data is first provided, with a focus on diffusion tensor imaging (DTI). The major preprocessing, processing and post-processing procedures applied to DTI data are discussed. A list of freely available software packages for analyzing diffusion MRI data is also provided. PMID:21087766
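
    As a concrete example of the post-processing stage reviewed here, the snippet below computes the two most common DTI scalar maps, mean diffusivity and fractional anisotropy, from the eigenvalues of a fitted diffusion tensor; the function name is illustrative.

        import numpy as np

        def dti_scalars(eigenvalues):
            """Mean diffusivity (MD) and fractional anisotropy (FA) from the three
            eigenvalues of a diffusion tensor.

            eigenvalues : array of shape (..., 3), e.g. one eigenvalue triplet per voxel
            """
            ev = np.asarray(eigenvalues, dtype=float)
            md = ev.mean(axis=-1, keepdims=True)              # mean diffusivity
            num = np.sqrt(((ev - md) ** 2).sum(axis=-1))
            den = np.sqrt((ev ** 2).sum(axis=-1))
            fa = np.sqrt(1.5) * num / np.maximum(den, 1e-12)  # FA in [0, 1]
            return md.squeeze(axis=-1), fa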

  13. Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing

    NASA Technical Reports Server (NTRS)

    Logan, Thomas L.; Bryant, Nevin A.

    1987-01-01

    The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Image Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.

  14. 75 FR 28058 - In the Matter of Certain Digital Imaging Devices and Related Software; Notice of Investigation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-19

    ... COMMISSION In the Matter of Certain Digital Imaging Devices and Related Software; Notice of Investigation... software by reason of infringement of certain claims of U.S. Patent Nos. 6,031,964 and RE38,911. The... importation of certain digital imaging devices and related software that infringe one or more of claim 1-3...

  15. Illinois Occupational Skill Standards: Imaging/Pre-Press Cluster.

    ERIC Educational Resources Information Center

    Illinois Occupational Skill Standards and Credentialing Council, Carbondale.

    This document, which is intended as a guide for work force preparation program providers, details the Illinois occupational skill standards for programs preparing students for employment in occupations in the imaging/pre-press cluster. The document begins with a brief overview of the Illinois perspective on occupational skill standards and…

  16. Role of JTAG2 in coordinating standards for imaging technology

    NASA Astrophysics Data System (ADS)

    McDowell, David Q.

    1998-12-01

    In the modern world of image technology, standards are developed by many different groups, each with specific applications in mind. While some of these groups are part of the accredited standards community (ISO, IEC, CIE, ITU, etc.), others are industrial organizations or consortia.

  17. Pocket-sized versus standard ultrasound machines in abdominal imaging.

    PubMed

    Tse, K H; Luk, W H; Lam, M C

    2014-06-01

    The pocket-sized ultrasound machine has emerged as an invaluable tool for quick assessment in emergency and general practice settings. It is suitable for instant and quick assessment in cardiac imaging. However, its applicability in the imaging of other body parts has yet to be established. In this pictorial review, we compared the performance of the pocket-sized ultrasound machine against the standard ultrasound machine in terms of image quality in common abdominal pathology.

  18. LISIRD 2: Applying Standards and Open Source Software in Exploring and Serving Scientific Data

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D. M.; Ware Dewolfe, A.; Lindholm, C.; Pankratz, C. K.; Snow, M.; Woods, T. N.

    2009-12-01

    The LASP Interactive Solar IRradiance Datacenter (LISIRD), http://lasp.colorado.edu/lisird, seeks to provide exploration of and access to solar irradiance data, models and other related data. These irradiance datasets, from the SME, UARS, TIMED, and SORCE missions, are primarily a function of time and often also of wavelength. Their measurements are typically made on a scale of seconds, and derived products are provided at daily cadence. The first version of the LISIRD site was built using non-standard, proprietary software. The non-standard application structure and tight coupling to a variety of dataset representations made changes arduous and maintenance difficult. Eventually the software vendor decided to no longer support a critical software component, further decreasing the viability of the site. In LISIRD 2, through the application of the Java EE standard coupled with open source software to fetch and plot the data, the functionality of the original site is being improved while the code structure is being streamlined and simplified. With relatively minimal effort, the new site can access and serve a greater variety of datasets more easily, and produce responsive, interactive plots of datasets overlaid and/or linked in time, and it does so using a significantly smaller code base that is, at the same time, much more flexible and extensible. In particular, LISIRD 2 heavily leverages the powerful, flexible functionality provided by the Time Series Data Server (TSDS). The OPeNDAP-compliant TSDS supports requests for any data that are a function of time. It can support scalar, vector, and spectrum data types. Through the use of the Unidata NetCDF-Java library and NcML, the TSDS supports multiple input and output formats and is easily extended to support more. It also supports a variety of filters that can be chained and applied to the data on the server before delivery. TSDS thinning capabilities make it easy for the clients to request appropriate data

  19. Image compression software for the SOHO LASCO and EIT experiments

    NASA Technical Reports Server (NTRS)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronagraph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for SOHO investigators who need to understand the results of SOHO compression in order to make better use of their allocated transmission bits.

  20. Software optimization for electrical conductivity imaging in polycrystalline diamond cutters

    SciTech Connect

    Bogdanov, G.; Ludwig, R.; Wiggins, J.; Bertagnolli, K.

    2014-02-18

    We previously reported on an electrical conductivity imaging instrument developed for measurements on polycrystalline diamond cutters. These cylindrical cutters for oil and gas drilling feature a thick polycrystalline diamond layer on a tungsten carbide substrate. The instrument uses electrical impedance tomography to profile the conductivity in the diamond table. Conductivity images must be acquired quickly, on the order of 5 sec per cutter, to be useful in the manufacturing process. This paper reports on successful efforts to optimize the conductivity reconstruction routine, porting major portions of it to NVIDIA GPUs, including a custom CUDA kernel for Jacobian computation.

  1. IDP: Image and data processing (software) in C++

    SciTech Connect

    Lehman, S.

    1994-11-15

    IDP++ (Image and Data Processing in C++) is a compiled, multidimensional, multi-data-type signal processing environment written in C++. It is being developed within the Radar Ocean Imaging group and is intended as a partial replacement for View. IDP++ takes advantage of the latest object-oriented compiler technology to provide "information hiding." Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is being designed for real-time environments where interpreted signal processing packages are less efficient.

  2. PyElph - a software tool for gel images analysis and phylogenetics.

    PubMed

    Pavel, Ana Brânduşa; Vasile, Cristian Ioan

    2012-01-13

    This paper presents PyElph, a software tool which automatically extracts data from gel images, computes the molecular weights of the analyzed molecules or fragments, compares DNA patterns which result from experiments with molecular genetic markers and, also, generates phylogenetic trees computed by five clustering methods, using the information extracted from the analyzed gel image. The software can be successfully used for population genetics, phylogenetics, taxonomic studies and other applications which require gel image analysis. Researchers and students working in molecular biology and genetics would benefit greatly from the proposed software because it is free, open source, easy to use, has a friendly Graphical User Interface and does not depend on specific image acquisition devices like other commercial programs with similar functionalities do. PyElph software tool is entirely implemented in Python which is a very popular programming language among the bioinformatics community. It provides a very friendly Graphical User Interface which was designed in six steps that gradually lead to the results. The user is guided through the following steps: image loading and preparation, lane detection, band detection, molecular weights computation based on a molecular weight marker, band matching and finally, the computation and visualization of phylogenetic trees. A strong point of the software is the visualization component for the processed data. The Graphical User Interface provides operations for image manipulation and highlights lanes, bands and band matching in the analyzed gel image. All the data and images generated in each step can be saved. The software has been tested on several DNA patterns obtained from experiments with different genetic markers. Examples of genetic markers which can be analyzed using PyElph are RFLP (Restriction Fragment Length Polymorphism), AFLP (Amplified Fragment Length Polymorphism), RAPD (Random Amplification of Polymorphic DNA) and
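
    The molecular-weight step described above lends itself to a compact illustration: gel migration distance is roughly linear in log10(molecular weight), so unknown bands can be interpolated against the ladder. The sketch below assumes band positions have already been detected and is not PyElph's own code.

        # Marker-based molecular weight estimation (illustrative sketch, not PyElph code)
        import numpy as np

        def estimate_weights(marker_positions, marker_weights, band_positions):
            """marker_positions: migration distances (px) of ladder bands;
               marker_weights: their known sizes (e.g. bp);
               band_positions: migration distances of unknown bands."""
            slope, intercept = np.polyfit(marker_positions, np.log10(marker_weights), 1)
            return 10 ** (slope * np.asarray(band_positions) + intercept)

        # hypothetical example with a four-band ladder
        sizes = estimate_weights([50, 120, 200, 290], [1000, 700, 500, 300], [150, 250])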

  3. OSPAR standard method and software for statistical analysis of beach litter data.

    PubMed

    Schulz, Marcus; van Loon, Willem; Fleet, David M; Baggelaar, Paul; van der Meulen, Eit

    2017-09-15

    The aim of this study is to develop standard statistical methods and software for the analysis of beach litter data. The optimal ensemble of statistical methods comprises the Mann-Kendall trend test, the Theil-Sen slope estimation, the Wilcoxon step trend test and basic descriptive statistics. The application of Litter Analyst, a tailor-made software for analysing the results of beach litter surveys, to OSPAR beach litter data from seven beaches bordering on the south-eastern North Sea, revealed 23 significant trends in the abundances of beach litter types for the period 2009-2014. Litter Analyst revealed a large variation in the abundance of litter types between beaches. To reduce the effects of spatial variation, trend analysis of beach litter data can most effectively be performed at the beach or national level. Spatial aggregation of beach litter data within a region is possible, but resulted in a considerable reduction in the number of significant trends. Copyright © 2017 Elsevier Ltd. All rights reserved.
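
    The trend statistics named above are standard and can be sketched in a few lines with SciPy; the snippet below uses hypothetical counts and is not the Litter Analyst implementation.

        # Mann-Kendall-type trend test (Kendall's tau against time) and Theil-Sen slope
        import numpy as np
        from scipy import stats

        years = np.array([2009, 2010, 2011, 2012, 2013, 2014])
        counts = np.array([310, 280, 295, 240, 225, 210])            # hypothetical items per survey

        tau, p_value = stats.kendalltau(years, counts)                # trend direction and significance
        slope, intercept, lo, hi = stats.theilslopes(counts, years)   # robust slope with 95% CI
        print(f"tau={tau:.2f}, p={p_value:.3f}, slope={slope:.1f} items/year")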

  4. Image analysis software for following progression of peripheral neuropathy

    NASA Astrophysics Data System (ADS)

    Epplin-Zapf, Thomas; Miller, Clayton; Larkin, Sean; Hermesmeyer, Eduardo; Macy, Jenny; Pellegrini, Marco; Luccarelli, Saverio; Staurenghi, Giovanni; Holmes, Timothy

    2009-02-01

    A relationship has been reported by several research groups [1 - 4] between the density and shapes of nerve fibers in the cornea and the existence and severity of peripheral neuropathy. Peripheral neuropathy is a complication of several prevalent diseases or conditions, including diabetes, HIV, prolonged alcohol overconsumption and aging. A common clinical technique for confirming the condition is intramuscular electromyography (EMG), which is invasive, so a noninvasive technique like the one proposed here carries important potential advantages for the physician and patient. A software program that automatically detects the nerve fibers, counts them and measures their shapes is being developed and tested. Tests were carried out with a database of subjects whose levels of severity of diabetic neuropathy had been determined by EMG testing. Results from this testing, which include a linear regression analysis, are shown.

  5. Development of Software to Model AXAF-I Image Quality

    NASA Technical Reports Server (NTRS)

    Geary, Joseph; Hawkins, Lamar; Ahmad, Anees; Gong, Qian

    1997-01-01

    This report describes work conducted on Delivery Order 181 between October 1996 through June 1997. During this period software was written to: compute axial PSD's from RDOS AXAF-I mirror surface maps; plot axial surface errors and compute PSD's from HDOS "Big 8" axial scans; plot PSD's from FITS format PSD files; plot band-limited RMS vs axial and azimuthal position for multiple PSD files; combine and organize PSD's from multiple mirror surface measurements formatted as input to GRAZTRACE; modify GRAZTRACE to read FITS formatted PSD files; evaluate AXAF-I test results; improve and expand the capabilities of the GT x-ray mirror analysis package. During this period work began on a more user-friendly manual for the GT program, and improvements were made to the on-line help manual.
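
    As a hedged sketch of the kind of computation involved (in Python, not the original RDOS/GRAZTRACE tooling), an axial power spectral density can be estimated from a surface-error profile with Welch's method, and a band-limited RMS obtained by integrating the PSD over a spatial-frequency band.

        # Axial PSD and band-limited RMS from a 1-D surface-error profile (illustrative)
        import numpy as np
        from scipy.signal import welch

        def axial_psd(profile_nm, sample_spacing_mm):
            fs = 1.0 / sample_spacing_mm                               # samples per mm
            return welch(profile_nm, fs=fs, nperseg=min(256, len(profile_nm)))

        def band_limited_rms(freq, psd, f_lo, f_hi):
            band = (freq >= f_lo) & (freq <= f_hi)
            return np.sqrt(np.trapz(psd[band], freq[band]))            # RMS over the band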

  6. Topographic analysis of eyelid position using digital image processing software.

    PubMed

    Chun, Yeoun Sook; Park, Hong Hyun; Park, In Ki; Moon, Nam Ju; Park, Sang Joon; Lee, Jeong Kyu

    2017-04-09

    To propose a novel analysis technique for objective quantification of topographic eyelid position with an algorithmically calculated scheme and to determine its feasibility. One hundred normal eyelids from 100 patients were segmented using a graph cut algorithm, and 11 shape features of the eyelids were semi-automatically quantified using in-house software. To evaluate the intra- and inter-examiner reliability of this software, intra-class correlation coefficients (ICCs) were used. To evaluate the diagnostic value of this scheme, the correlations between semi-automatic and manual measurements of margin reflex distance 1 (MRD1) and margin reflex distance 2 (MRD2) were analysed using a Bland-Altman analysis. To determine the degree of agreement according to manual MRD length, the relationship between the variance of the semi-automatic measurements and the manual measurements was evaluated using linear regression. Intra- and inter-examiner reliability were excellent, with ICCs ranging from 0.913 to 0.980 for the 11 shape features, including MRD1, MRD2, palpebral fissure, lid perimeter, upper and lower lid lengths, roundness, total area, and medial, central, and lateral areas. The correlations between semi-automatic and manual MRDs were also excellent, with better correlation for MRD1 than for MRD2 (R = 0.893 and 0.823, respectively). In addition, significant positive relationships were observed between the variance and the length of MRD1 and MRD2; the longer the MRD, the greater the variance. The proposed novel optimized integrative scheme, which is shown to have high repeatability and reproducibility, is useful for topographic analysis of eyelid position. © 2017 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
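
    The Bland-Altman comparison mentioned above reduces to a short calculation once paired semi-automatic and manual measurements are available; the sketch below is illustrative and not the in-house software.

        # Bland-Altman bias and 95% limits of agreement for paired MRD measurements
        import numpy as np

        def bland_altman(semi_auto, manual):
            diff = np.asarray(semi_auto) - np.asarray(manual)
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)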

  7. Software for browsing sectioned images of a dog body and generating a 3D model.

    PubMed

    Park, Jin Seo; Jung, Yong Wook

    2016-01-01

    The goals of this study were (1) to provide accessible and instructive browsing software for sectioned images and a portable document format (PDF) file that includes three-dimensional (3D) models of an entire dog body and (2) to develop techniques for segmentation and 3D modeling that would enable an investigator to perform these tasks without the aid of a computer engineer. To achieve these goals, relatively important or large structures in the sectioned images were outlined to generate segmented images. The sectioned and segmented images were then packaged into browsing software. In this software, structures in the sectioned images are shown in detail and in real color. After 3D models were made from the segmented images, the 3D models were exported into a PDF file, in which they can be manipulated freely. The browsing software and PDF file are available for study by students, lectures by teachers, and training of clinicians. These files will be helpful for the anatomical study and clinical training of veterinary students and clinicians. Furthermore, these techniques will be useful for researchers who study two-dimensional images and 3D models.

  8. Reliability evaluation of I-123 ADAM SPECT imaging using SPM software and AAL ROI methods

    NASA Astrophysics Data System (ADS)

    Yang, Bang-Hung; Tsai, Sung-Yi; Wang, Shyh-Jen; Su, Tung-Ping; Chou, Yuan-Hwa; Chen, Chia-Chieh; Chen, Jyh-Cheng

    2011-08-01

    The level of serotonin is regulated by the serotonin transporter (SERT), a decisive protein in the regulation of the serotonin neurotransmission system. Many psychiatric disorders and therapies are also related to the concentration of cerebral serotonin. I-123 ADAM is a novel radiopharmaceutical for imaging SERT in the brain. The aim of this study was to measure the reliability of SERT densities in healthy volunteers by the automated anatomical labeling (AAL) method. Furthermore, we also used statistical parametric mapping (SPM) in a voxel-by-voxel analysis to find differences in cortex between test and retest I-123 ADAM single photon emission computed tomography (SPECT) images. Twenty-one healthy volunteers were scanned twice with SPECT 4 h after intravenous administration of 185 MBq of 123I-ADAM. The image matrix size was 128×128 and the pixel size was 3.9 mm. All images were obtained with a filtered back-projection (FBP) reconstruction algorithm. Region of interest (ROI) definition was performed based on the AAL brain template in the PMOD version 2.95 software package. ROI demarcations were placed on the midbrain, pons, striatum, and cerebellum. All images were spatially normalized to the SPECT MNI (Montreal Neurological Institute) templates supplied with SPM2, and each image was transformed into standard stereotactic space, matched to the Talairach and Tournoux atlas. Differences across scans were then statistically estimated on a voxel-by-voxel basis using a paired t-test (population main effect: 2 cond's, 1 scan/cond.), which was applied to compare the concentration of SERT between the test and retest cerebral scans. The average specific uptake ratio (SUR: target/cerebellum-1) of 123I-ADAM binding to SERT was 1.78±0.27 in the midbrain, 1.21±0.53 in the pons, and 0.79±0.13 in the striatum. Cronbach's α of the intra-class correlation coefficient (ICC) was 0.92. In addition, there was no significant statistical finding in the cerebral area using SPM2 analysis. This finding might help us
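
    The specific uptake ratio defined above (SUR = target/cerebellum - 1) is a simple ROI statistic; a minimal sketch, assuming Boolean ROI masks over a reconstructed SPECT volume, is:

        # Specific uptake ratio from ROI masks (illustrative only)
        import numpy as np

        def specific_uptake_ratio(volume, target_mask, cerebellum_mask):
            return volume[target_mask].mean() / volume[cerebellum_mask].mean() - 1.0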

  9. Do we really need standards in digital image management?

    PubMed Central

    Ho, ELM

    2008-01-01

    Convention dictates that standards are a necessity rather than a luxury. Standards are supposed to improve the exchange of health and image data information resulting in improved quality and efficiency of patient care. True standardisation is some time away yet, as barriers exist with evolving equipment, storage formats and even the standards themselves. The explosive growth in the size and complexity of images such as those generated by multislice computed tomography have driven the need for digital image management, created problems of storage space and costs, and created a challenge for increasing or getting an adequate speed for transmitting, accessing and retrieving the image data. The search for a suitable and practical format for storing the data without loss of information and medico-legal implications has become a necessity and a matter of ‘urgency’. Existing standards are either open or proprietary and must comply with local, regional or national laws. Currently there are the Picture Archiving and Communications System (PACS); Digital Imaging and Communications in Medicine (DICOM); Health Level 7 (HL7) and Integrating the Healthcare Enterprise (IHE). Issues in digital image management can be categorised as operational, procedural, technical and administrative. Standards must stay focussed on the ultimate goal – that is, improved patient care worldwide. PMID:21611012

  10. A software platform for phase contrast x-ray breast imaging research.

    PubMed

    Bliznakova, K; Russo, P; Mettivier, G; Requardt, H; Popov, P; Bravin, A; Buliev, I

    2015-06-01

    To present and validate a computer-based simulation platform dedicated for phase contrast x-ray breast imaging research. The software platform, developed at the Technical University of Varna on the basis of a previously validated x-ray imaging software simulator, comprises modules for object creation and for x-ray image formation. These modules were updated to take into account the refractive index for phase contrast imaging as well as implementation of the Fresnel-Kirchhoff diffraction theory of the propagating x-ray waves. Projection images are generated in an in-line acquisition geometry. To test and validate the platform, several phantoms differing in their complexity were constructed and imaged at 25 keV and 60 keV at the beamline ID17 of the European Synchrotron Radiation Facility. The software platform was used to design computational phantoms that mimic those used in the experimental study and to generate x-ray images in absorption and phase contrast modes. The visual and quantitative results of the validation process showed an overall good correlation between simulated and experimental images and show the potential of this platform for research in phase contrast x-ray imaging of the breast. The application of the platform is demonstrated in a feasibility study for phase contrast images of complex inhomogeneous and anthropomorphic breast phantoms, compared to x-ray images generated in absorption mode. The improved visibility of mammographic structures suggests further investigation and optimisation of phase contrast x-ray breast imaging, especially when abnormalities are present. The software platform can be exploited also for educational purposes. Copyright © 2015 Elsevier Ltd. All rights reserved.
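
    One common way to realize the free-space propagation step in such in-line simulations is the angular spectrum / Fresnel transfer-function method; the sketch below is a simplified, assumed implementation and not the authors' platform code.

        # Paraxial (Fresnel) free-space propagation of a complex wavefield (illustrative)
        import numpy as np

        def propagate(field, wavelength, distance, pixel_size):
            ny, nx = field.shape
            fx = np.fft.fftfreq(nx, d=pixel_size)
            fy = np.fft.fftfreq(ny, d=pixel_size)
            FX, FY = np.meshgrid(fx, fy)
            # Fresnel transfer function (constant phase factor omitted)
            H = np.exp(-1j * np.pi * wavelength * distance * (FX ** 2 + FY ** 2))
            return np.fft.ifft2(np.fft.fft2(field) * H)

        # detector intensity for an in-line geometry: np.abs(propagate(exit_wave, lam, z, px)) ** 2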

  11. The FBI compression standard for digitized fingerprint images

    SciTech Connect

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.; Hopper, T.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
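
    The wavelet/scalar quantization idea can be illustrated in a few lines with PyWavelets; note that the actual WSQ specification prescribes a fixed 64-subband decomposition, per-subband bin widths and Huffman coding, none of which is reproduced in this toy sketch.

        # Toy wavelet decomposition + uniform scalar quantization (not the WSQ codec)
        import numpy as np
        import pywt

        def quantize_subbands(image, wavelet="bior4.4", levels=3, step=8.0):
            coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
            q = [np.round(coeffs[0] / step)]
            for detail in coeffs[1:]:
                q.append(tuple(np.round(band / step) for band in detail))
            return q                                   # integer indices to be entropy coded

        def dequantize(q, wavelet="bior4.4", step=8.0):
            coeffs = [q[0] * step] + [tuple(band * step for band in detail) for detail in q[1:]]
            return pywt.waverec2(coeffs, wavelet)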

  12. Starworld: Preparing Accountants for the Future: A Case-Based Approach to Teach International Financial Reporting Standards Using ERP Software

    ERIC Educational Resources Information Center

    Ragan, Joseph M.; Savino, Christopher J.; Parashac, Paul; Hosler, Jonathan C.

    2010-01-01

    International Financial Reporting Standards now constitute an important part of educating young professional accountants. This paper looks at a case based process to teach International Financial Reporting Standards using integrated Enterprise Resource Planning software. The case contained within the paper can be used within a variety of courses…

  13. Validated novel software to measure the conspicuity index of lesions in DICOM images

    NASA Astrophysics Data System (ADS)

    Szczepura, K. R.; Manning, D. J.

    2016-03-01

    A novel software programme and associated Excel spreadsheet have been developed to provide an objective measure of the expected visual detectability of focal abnormalities within DICOM images. ROIs are drawn around the abnormality; the software then fits the lesion using a least squares method to recognize the edges of the lesion based on the full width at half maximum. 180 line profiles are then plotted around the lesion, giving 360 edge profiles.
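
    One ingredient of such a measure, sampling intensity profiles along many directions around a lesion centre, can be sketched as follows; this is an assumed illustration, not the validated programme itself.

        # Radial intensity profiles around a lesion centre (illustrative sketch)
        import numpy as np
        from scipy.ndimage import map_coordinates

        def radial_profiles(image, center, radius, n_lines=180, n_samples=100):
            cy, cx = center
            angles = np.linspace(0.0, 2.0 * np.pi, 2 * n_lines, endpoint=False)   # 360 edge profiles
            radii = np.linspace(0.0, radius, n_samples)
            profiles = []
            for a in angles:
                ys = cy + radii * np.sin(a)
                xs = cx + radii * np.cos(a)
                profiles.append(map_coordinates(image, np.vstack([ys, xs]), order=1))
            return np.array(profiles)      # edge/FWHM statistics can then be derived per profile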

  14. Parallel-Processing Software for Correlating Stereo Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric

    2007-01-01

    A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.
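
    The general pattern, splitting the scene into subimages and correlating each on a separate CPU, can be sketched with Python's multiprocessing module; this is an assumed illustration using one possible correlator, not the JPL flight software.

        # Tile-wise stereo correlation distributed over CPUs (illustrative sketch)
        import numpy as np
        from multiprocessing import Pool
        from skimage.registration import phase_cross_correlation

        def correlate_tile(pair):
            left_tile, right_tile = pair
            disparity, _, _ = phase_cross_correlation(left_tile, right_tile)
            return disparity                     # (dy, dx) shift for this tile

        def tile_disparities(left, right, tile=64):
            jobs = [(left[y:y + tile, x:x + tile], right[y:y + tile, x:x + tile])
                    for y in range(0, left.shape[0] - tile + 1, tile)
                    for x in range(0, left.shape[1] - tile + 1, tile)]
            with Pool() as pool:
                return pool.map(correlate_tile, jobs)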

  15. Time Efficiency and Diagnostic Accuracy of New Automated Myocardial Perfusion Analysis Software in 320-Row CT Cardiac Imaging

    PubMed Central

    Rief, Matthias; Stenzel, Fabian; Kranz, Anisha; Schlattmann, Peter

    2013-01-01

    Objective We aimed to evaluate the time efficiency and diagnostic accuracy of automated myocardial computed tomography perfusion (CTP) image analysis software. Materials and Methods 320-row CTP was performed in 30 patients, and analyses were conducted independently by three different blinded readers by the use of two recent software releases (version 4.6 and novel version 4.71GR001, Toshiba, Tokyo, Japan). Analysis times were compared, and automated epi- and endocardial contour detection was subjectively rated in five categories (excellent, good, fair, poor and very poor). As semi-quantitative perfusion parameters, myocardial attenuation and transmural perfusion ratio (TPR) were calculated for each myocardial segment and agreement was tested by using the intraclass correlation coefficient (ICC). Conventional coronary angiography served as reference standard. Results The analysis time was significantly reduced with the novel automated software version as compared with the former release (Reader 1: 43:08 ± 11:39 min vs. 09:47 ± 04:51 min, Reader 2: 42:07 ± 06:44 min vs. 09:42 ± 02:50 min and Reader 3: 21:38 ± 3:44 min vs. 07:34 ± 02:12 min; p < 0.001 for all). Epi- and endocardial contour detection for the novel software was rated to be significantly better (p < 0.001) than with the former software. ICCs demonstrated strong agreement (≥ 0.75) for myocardial attenuation in 93% and for TPR in 82%. Diagnostic accuracy for the two software versions was not significantly different (p = 0.169) as compared with conventional coronary angiography. Conclusion The novel automated CTP analysis software offers enhanced time efficiency with an improvement by a factor of about four, while maintaining diagnostic accuracy. PMID:23323027

  16. Standardizing Quality Assessment of Fused Remotely Sensed Images

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Moellmann, J.; Fries, K.

    2017-09-01

    The multitude of available operational remote sensing satellites led to the development of many image fusion techniques to provide high spatial, spectral and temporal resolution images. The comparison of different techniques is necessary to obtain an optimized image for the different applications of remote sensing. There are two approaches to assessing image quality: 1. qualitatively, by visual interpretation, and 2. quantitatively, using image quality indices. However, an objective comparison is difficult because a visual assessment is always subjective and a quantitative assessment depends on the criteria and indices chosen; the result varies accordingly. It is therefore necessary to standardize both processes (qualitative and quantitative assessment) to allow an objective evaluation of image fusion quality. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process for objectively comparing fused image quality. First, established image fusion quality assessment protocols, i.e. Quality with No Reference (QNR) and Khan's protocol, were compared across various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports on the results of the comparison and provides recommendations for future research.

  17. Software Toolbox for Low-frequency Conductivity and Current Density Imaging using MRI.

    PubMed

    Sajib, Saurav Z K; Katoch, Nitish; Kim, Hyung Joong; Kwon, Oh In; Woo, Eung Je

    2017-07-27

    Low-frequency conductivity and current density imaging using MRI includes magnetic resonance electrical impedance tomography (MREIT), diffusion tensor MREIT (DT-MREIT), conductivity tensor imaging (CTI), and magnetic resonance current density imaging (MRCDI). MRCDI and MREIT provide current density and isotropic conductivity images, respectively, using current-injection phase MRI techniques. DT-MREIT produces anisotropic conductivity tensor images by incorporating diffusion weighted MRI into MREIT. These current-injection techniques are finding clinical applications in diagnostic imaging and also in tDCS, DBS, and electroporation where treatment currents can function as imaging currents. To avoid adverse effects of nerve and muscle stimulation due to injected currents, conductivity tensor imaging (CTI) utilizes B1 mapping and multi-b diffusion weighted MRI to produce low-frequency anisotropic conductivity tensor images without injecting current. This paper describes numerical implementations of several key mathematical functions for conductivity and current density image reconstructions in MRCDI, MREIT, DT-MREIT, and CTI. To facilitate experimental studies of clinical applications, we developed a software toolbox for these low-frequency conductivity and current density imaging methods. This MR-based conductivity imaging (MRCI) toolbox includes 11 toolbox functions which can be used in the Matlab environment. The MRCI toolbox is available at http://iirc.khu.ac.kr/software.html. Its functions were tested using several experimental data sets which are provided together with the toolbox. Users of the toolbox can focus on experimental designs and interpretations of reconstructed images instead of developing their own image reconstruction software. We expect more toolbox functions to be added from future research outcomes.

  18. Development of a Standard for Verification and Validation of Software Used to Calculate Nuclear System Thermal Fluids Behavior

    SciTech Connect

    Richard R. Schultz; Edwin A. Harvego; Ryan L. Crane

    2010-05-01

    With the resurgence of nuclear power and increased interest in advanced nuclear reactors as an option to supply abundant energy without the associated greenhouse gas emissions of the more conventional fossil fuel energy sources, there is a need to establish internationally recognized standards for the verification and validation (V&V) of software used to calculate the thermal-hydraulic behavior of advanced reactor designs for both normal operation and hypothetical accident conditions. To address this need, ASME (American Society of Mechanical Engineers) Standards and Certification has established the V&V 30 Committee, under the responsibility of the V&V Standards Committee, to develop a consensus Standard for verification and validation of software used for design and analysis of advanced reactor systems. The initial focus of this committee will be on the V&V of system analysis and computational fluid dynamics (CFD) software for nuclear applications. To limit the scope of the effort, the committee will further limit its focus to software to be used in the licensing of High-Temperature Gas-Cooled Reactors. In this framework, the standard should conform to Nuclear Regulatory Commission (NRC) practices, procedures and methods for licensing of nuclear power plants as embodied in the United States (U.S.) Code of Federal Regulations and other pertinent documents such as Regulatory Guide 1.203, “Transient and Accident Analysis Methods” and NUREG-0800, “NRC Standard Review Plan”. In addition, the standard should be consistent with applicable sections of ASME Standard NQA-1 (“Quality Assurance Requirements for Nuclear Facility Applications (QA)”). This paper describes the general requirements for the V&V Standard, which includes; (a) the definition of the operational and accident domain of a nuclear system that must be considered if the system is to licensed, (b) the corresponding calculational domain of the software that should encompass the nuclear operational

  19. New StatPhantom software for assessment of digital image quality

    NASA Astrophysics Data System (ADS)

    Gurvich, Victor A.; Davydenko, George I.

    2002-04-01

    The rapid development of digital imaging and computer networks, together with the use of Picture Archiving and Communication Systems (PACS) and DICOM-compatible devices, increases the requirements on the quality control process in medical imaging departments, but also provides new opportunities for evaluating image quality. The new StatPhantom software simplifies statistical techniques based on modern detection theory and ROC analysis, improving the accuracy and reliability of known methods and allowing statistical analysis to be implemented with phantoms of any design. In contrast to manual statistical methods, all calculations, analysis of results, and changes of test element positions in the phantom image are performed by the computer. This paper describes the user interface and functionality of the StatPhantom software, its capabilities and advantages in the assessment of various imaging modalities, and the diagnostic preference of an observer. The results obtained by conventional ROC analysis and by manual and computerized statistical methods are analyzed. Different phantom designs are considered.

  20. JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Langenberg, M.; Nuhn, M.; Dau, A.; Pagel, S.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2011-12-01

    The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data is accessible only from a few repositories and users have to deal with data sets effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the ESA/NASA Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community. In addition, the easy-to-use graphical user interface enables the general public and educators to access, enjoy and reuse data from space missions without barriers.

  1. JMorph: Software for performing rapid morphometric measurements on digital images of fossil assemblages

    NASA Astrophysics Data System (ADS)

    Lelièvre, Peter G.; Grey, Melissa

    2017-08-01

    Quantitative morphometric analyses of form are widely used in palaeontology, especially for taxonomic and evolutionary research. These analyses can involve several measurements performed on hundreds or even thousands of samples. Performing measurements of size and shape on large assemblages of macro- or microfossil samples is generally infeasible or impossible with traditional instruments such as vernier calipers. Instead, digital image processing software is required to perform measurements via suitable digital images of samples. Many software packages exist for morphometric analyses but there is not much available for the integral stage of data collection, particularly for the measurement of the outlines of samples. Some software exists to automatically detect the outline of a fossil sample from a digital image. However, automatic outline detection methods may perform inadequately when samples have incomplete outlines or images contain poor contrast between the sample and staging background. Hence, a manual digitization approach may be the only option. We are not aware of any software packages that are designed specifically for efficient digital measurement of fossil assemblages with numerous samples, especially for the purposes of manual outline analysis. Throughout several previous studies, we have developed a new software tool, JMorph, that is custom-built for that task. JMorph provides the means to perform many different types of measurements, which we describe in this manuscript. We focus on JMorph's ability to rapidly and accurately digitize the outlines of fossils. JMorph is freely available from the authors.

  2. Novel mass spectrometry imaging software assisting labeled normalization and quantitation of drugs and neuropeptides directly in tissue sections.

    PubMed

    Källback, Patrik; Shariatgorji, Mohammadreza; Nilsson, Anna; Andrén, Per E

    2012-08-30

    MALDI MS imaging has been extensively used to produce qualitative distribution maps of proteins, peptides, lipids, small molecule pharmaceuticals and their metabolites directly in biological tissue sections. There is growing demand to quantify the amount of target compounds in tissue sections of different organs. We present novel MS imaging software, including a protocol for the quantitation of drugs and, for the first time, of an endogenous neuropeptide directly in tissue sections. After selecting regions of interest on the tissue section, data are read and processed by the software using several available methods for baseline correction, subtraction, denoising, smoothing, recalibration and normalization. The concentrations of in vivo administered drugs or endogenous compounds are then determined semi-automatically using either external standard curves or labeled compounds, i.e., isotope-labeled analogs, as standards. As model systems, we have quantified the distribution of imipramine and tiotropium in the brain and lung of dosed rats. Substance P was quantified in different mouse brain structures, which correlated well with previously reported peptide levels. Our approach facilitates quantitative data processing, and labeled standards provide better reproducibility and may be considered an efficient tool to quantify drugs and endogenous compounds in tissue regions of interest.
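
    The labeled-normalization principle described above amounts to dividing each analyte pixel by the co-detected signal of an isotope-labeled internal standard and scaling by the standard's known amount; the sketch below is illustrative, not the presented software.

        # Per-pixel quantitation against an isotope-labeled internal standard (illustrative)
        import numpy as np

        def quantify(analyte_image, labeled_std_image, std_concentration, eps=1e-9):
            """Return a per-pixel concentration map in the units of std_concentration."""
            ratio = analyte_image / (labeled_std_image + eps)   # response relative to the standard
            return ratio * std_concentration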

  3. Web-based spatial analysis with the ILWIS open source GIS software and satellite images from GEONETCast

    NASA Astrophysics Data System (ADS)

    Lemmens, R.; Maathuis, B.; Mannaerts, C.; Foerster, T.; Schaeffer, B.; Wytzisk, A.

    2009-12-01

    fingertips of users around the globe. This user-friendly and low-cost information dissemination provides global information as a basis for decision-making in a number of critical areas, including public health, energy, agriculture, weather, water, climate, natural disasters and ecosystems. GEONETCast makes available satellite images via Digital Video Broadcast (DVB) technology. An OGC WMS interface and plug-ins which convert GEONETCast data streams allow an ILWIS user to integrate various distributed data sources with data locally stored on his machine. Our paper describes a use case in which ILWIS is used with GEONETCast satellite imagery for decision making processes in Ghana. We also explain how the ILWIS software can be extended with additional functionality by means of building plug-ins and unfold our plans to implement other OGC standards, such as WCS and WPS in the same context. Especially, the latter one can be seen as a major step forward in terms of moving well-proven desktop based processing functionality to the web. This enables the embedding of ILWIS functionality in Spatial Data Infrastructures or even the execution in scalable and on-demand cloud computing environments.

  4. Technique Standards for Skin Lesion Imaging: A Delphi Consensus Statement.

    PubMed

    Katragadda, Chinmayee; Finnane, Anna; Soyer, H Peter; Marghoob, Ashfaq A; Halpern, Allan; Malvehy, Josep; Kittler, Harald; Hofmann-Wellenhof, Rainer; Da Silva, Dennis; Abraham, Ivo; Curiel-Lewandrowski, Clara

    2016-11-23

    Variability in the metrics for image acquisition at the total body, regional, close-up, and dermoscopic levels impacts the quality and generalizability of skin images. Consensus guidelines are indicated to achieve universal imaging standards in dermatology. To achieve consensus among members of the International Skin Imaging Collaboration (ISIC) on standards for image acquisition metrics using a hybrid Delphi method. Delphi study with 5 rounds of ratings and revisions until relative consensus was achieved. The initial set of statements was developed by a core group (CG) on the basis of a literature review and clinical experience followed by 2 rounds of rating and revisions. The consensus process was validated by an extended group (EG) of ISIC members through 2 rounds of scoring and revisions. In all rounds, respondents rated the draft recommendations on a 1 (strongly agree) to 5 (strongly disagree) scale, explained ratings of less than 5, and optionally provided comments. At any stage, a recommendation was retained if both mean and median rating was 4 or higher. The initial set of 45 items (round 1) was expanded by the CG to 56 variants in round 2, subsequently reduced to 42 items scored by the EG in round 3, yielding an EG set of 33 recommendations (rounds 4 and 5): general recommendation (1 guideline), lighting (5), background color (3), field of view (3), image orientation (8), focus/depth of field (3), resolution (4), scale (3), color calibration (2), and image storage (1). This iterative process of ratings and comments yielded a strong consensus on standards for skin imaging in dermatology practice. Adoption of these methods for image standardization is likely to improve clinical practice, information exchange, electronic health record documentation, harmonization of clinical studies and database development, and clinical decision support. Feasibility and validity testing under real-world clinical conditions is indicated.

  5. Oxygen octahedra picker: A software tool to extract quantitative information from STEM images.

    PubMed

    Wang, Yi; Salzberger, Ute; Sigle, Wilfried; Eren Suyolcu, Y; van Aken, Peter A

    2016-09-01

    In perovskite oxide based materials and hetero-structures there are often strong correlations between oxygen octahedral distortions and functionality. Thus, atomistic understanding of the octahedral distortion, which requires accurate measurements of atomic column positions, will greatly help to engineer their properties. Here, we report the development of a software tool to extract quantitative information of the lattice and of BO6 octahedral distortions from STEM images. Center-of-mass and 2D Gaussian fitting methods are implemented to locate positions of individual atom columns. The precision of atomic column distance measurements is evaluated on both simulated and experimental images. The application of the software tool is demonstrated using practical examples.
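
    The 2D Gaussian fitting step for locating a single atom column can be sketched with scipy.optimize.curve_fit; this is an assumed, minimal illustration rather than the published tool.

        # Sub-pixel atom-column localization by 2D Gaussian fitting (illustrative sketch)
        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2d(coords, amp, x0, y0, sx, sy, offset):
            x, y = coords
            g = amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2) + (y - y0) ** 2 / (2 * sy ** 2)))
            return (g + offset).ravel()

        def fit_column(patch):
            ny, nx = patch.shape
            y, x = np.mgrid[0:ny, 0:nx]
            p0 = [patch.max() - patch.min(), nx / 2, ny / 2, 2.0, 2.0, patch.min()]
            popt, _ = curve_fit(gauss2d, (x, y), patch.ravel(), p0=p0)
            return popt[1], popt[2]            # refined (x0, y0) column position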

  6. Imaging and quantification of endothelial cell loss in eye bank prepared DMEK grafts using trainable segmentation software.

    PubMed

    Jardine, Griffin J; Holiman, Jeffrey D; Stoeger, Christopher G; Chamberlain, Winston D

    2014-09-01

    To improve accuracy and efficiency in quantifying the endothelial cell loss (ECL) in eye bank preparation of corneal endothelial grafts. Eight cadaveric corneas were subjected to Descemet Membrane Endothelial Keratoplasty (DMEK) preparation. The endothelial surfaces were stained with a viability stain, calcein AM dye (CAM) and then captured by a digital camera. The ECL rates were quantified in these images by three separate readers using trainable segmentation, a plug-in feature from the imaging software, Fiji. Images were also analyzed by Adobe Photoshop for comparison. Mean times required to process the images were measured between the two modalities. The mean ECL (with standard deviation) as analyzed by Fiji was 22.5% (6.5%) and Adobe was 18.7% (7.0%; p = 0.04). The mean time required to process the images through the two different imaging methods was 19.9 min (7.5) for Fiji and 23.4 min (12.9) for Adobe (p = 0.17). Establishing an accurate, efficient and reproducible means of quantifying ECL in graft preparation and surgical techniques can provide insight to the safety, long-term potential of the graft tissues as well as provide a quality control measure for eye banks and surgeons. Trainable segmentation in Fiji software using CAM is a novel approach to measuring ECL that captured a statistically significantly higher percentage of ECL comparable to Adobe and was more accurate in standardized testing. Interestingly, ECL as determined using both methods in eye bank-prepared DMEK grafts exceeded 18% on average.

  7. Standard portrait image and image quality assessment: II. Triplet comparison

    NASA Astrophysics Data System (ADS)

    Miyazaki, Keiichi; Kanafusa, Kunihiko; Umemoto, Hiroshi; Takemura, Kazuhiko; Urabe, Hitoshi; Hirai, Keisuke; Ishikawa, Kazuo; Hatada, Toyohiko

    2000-12-01

    We have already proposed a standard portrait for the assessment of preferable skin tone. The present report describes a psychophysical experimental method, i.e., simultaneous triplet comparison, that has been developed for the assessment of skin tone using the portrait, and that is characterized not only by the scalability, stability and reproducibility of the resulting scale values, but also by reduced stress on observers. We have confirmed that simultaneous triplet comparison has a degree of scalability and stability almost equivalent to that of paired comparison, which is most widely used for similar purposes, and that the stress on observers is about half that of paired comparison.

  8. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

    Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3] and camera calibration [4], in which image processing is a critical and indispensable component. While it is not difficult to obtain a wide variety of image-processing programs from the internet, few cater for the relatively specialized area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed with wxWidgets and, at the time of writing, have been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism accommodates diverse file formats and facilitates interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, data fitting, 3D image processing, vector image processing, precision device control (rotary stage, PZT stage, etc.), point cloud to surface reconstruction, volume rendering and batch processing. The software package is currently used in a number of universities for teaching and research.

  9. Standard Health Level Seven for Odontological Digital Imaging

    PubMed Central

    Abril-Gonzalez, Mauricio; Portilla, Fernando A.

    2017-01-01

    Abstract Background: A guide for the implementation of dental digital imaging reports was developed and validated through the International Standard of Health Informatics–Health Level Seven (HL7), achieving interoperability with an electronic system that keeps dental records. Introduction: Digital imaging benefits patients, who can view previous close-ups of dental examinations; providers, because of greater efficiency in managing information; and insurers, because of improved accessibility, patient monitoring, and more efficient cost management. Finally, imaging is beneficial for the dentist who can be more agile in the diagnosis and treatment of patients using this tool. Materials and Methods: The guide was developed under the parameters of an HL7 standard. It was necessary to create a group of dentists and three experts in information and communication technologies from different institutions. Discussion: Diagnostic images scanned with conventional radiology or from a radiovisiograph can be converted to Digital Imaging and Communications in Medicine (DICOM) format, while also retaining patient information. The guide shows how the information of the health record of the patient and the information of the dental image could be standardized in a Clinical Dental Record document using international informatics standard like HL7-V3-CDA document (dental document Level 2). Since it is an informatics standardized document, it could be sent, stored, or displayed using different devices—personal computers or mobile devices—independent of the platform used. Conclusions: Interoperability using dental images and dental record systems reduces adverse events, increases security for the patient, and makes more efficient use of resources. This article makes a contribution to the field of telemedicine in dental informatics. In addition to that, the results could be a reference for projects of electronic medical records when the dental documents are part of them. PMID

  10. Software-based on-site estimation of fractional flow reserve using standard coronary CT angiography data.

    PubMed

    De Geer, Jakob; Sandstedt, Mårten; Björkholm, Anders; Alfredsson, Joakim; Janzon, Magnus; Engvall, Jan; Persson, Anders

    2016-10-01

    The significance of a coronary stenosis can be determined by measuring the fractional flow reserve (FFR) during invasive coronary angiography. Recently, methods have been developed which claim to be able to estimate FFR using image data from standard coronary computed tomography angiography (CCTA) exams. To evaluate the accuracy of non-invasively computed fractional flow reserve (cFFR) from CCTA. A total of 23 vessels in 21 patients who had undergone both CCTA and invasive angiography with FFR measurement were evaluated using a cFFR software prototype. The cFFR results were compared to the invasively obtained FFR values. Correlation was calculated using Spearman's rank correlation, and agreement using intraclass correlation coefficient (ICC). Sensitivity, specificity, accuracy, negative predictive value, and positive predictive value for significant stenosis (defined as both FFR ≤0.80 and FFR ≤0.75) were calculated. The mean cFFR value for the whole group was 0.81 and the corresponding mean invFFR value was 0.84. The cFFR sensitivity for significant stenosis (FFR ≤0.80/0.75) on a per-lesion basis was 0.83/0.80, specificity was 0.76/0.89, and accuracy 0.78/0.87. The positive predictive value was 0.56/0.67 and the negative predictive value was 0.93/0.94. The Spearman rank correlation coefficient was ρ = 0.77 (P < 0.001) and ICC = 0.73 (P < 0.001). This particular CCTA-based cFFR software prototype allows for a rapid, non-invasive on-site evaluation of cFFR. The results are encouraging and cFFR may in the future be of help in the triage to invasive coronary angiography. © The Foundation Acta Radiologica 2015.
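
    The per-lesion diagnostic performance figures quoted above follow from a simple confusion-matrix calculation once paired invasive and computed FFR values are available; the sketch below is illustrative and not the vendor prototype.

        # Sensitivity, specificity, accuracy, PPV and NPV at an FFR cut-off (illustrative)
        import numpy as np

        def diagnostic_performance(ffr_invasive, ffr_computed, threshold=0.80):
            truth = np.asarray(ffr_invasive) <= threshold     # significant by invasive FFR
            test = np.asarray(ffr_computed) <= threshold      # significant by cFFR
            tp = np.sum(test & truth); tn = np.sum(~test & ~truth)
            fp = np.sum(test & ~truth); fn = np.sum(~test & truth)
            return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
                    "accuracy": (tp + tn) / (tp + tn + fp + fn),
                    "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}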

  11. Plume Ascent Tracker: Interactive Matlab software for analysis of ascending plumes in image data

    NASA Astrophysics Data System (ADS)

    Valade, S. A.; Harris, A. J. L.; Cerminara, M.

    2014-05-01

    This paper presents Matlab-based software designed to track and analyze an ascending plume as it rises above its source, in image data. It reads data recorded in various formats (video files, image files, or web-camera image streams), and at various wavelengths (infrared, visible, or ultra-violet). Using a set of filters which can be set interactively, the plume is first isolated from its background. A user-friendly interface then allows tracking of plume ascent and various parameters that characterize plume evolution during emission and ascent. These include records of plume height, velocity, acceleration, shape, volume, ash (fine-particle) loading, spreading rate, entrainment coefficient and inclination angle, as well as axial and radial profiles for radius and temperature (if data are radiometric). Image transformations (dilatation, rotation, resampling) can be performed to create new images with a vent-centered metric coordinate system. Applications may interest both plume observers (monitoring agencies) and modelers. For the first group, the software is capable of providing quantitative assessments of plume characteristics from image data, for post-event analysis or in near real-time analysis. For the second group, extracted data can serve as benchmarks for plume ascent models, and as inputs for cloud dispersal models. We here describe the software's tracking methodology and main graphical interfaces, using thermal infrared image data of an ascending volcanic ash plume at Santiaguito volcano.

  12. A software framework for diagnostic medical image perception with feedback, and a novel perception visualization technique

    NASA Astrophysics Data System (ADS)

    Phillips, Peter W.; Manning, David J.; Donovan, Tim; Crawford, Trevor; Higham, Stephen

    2005-04-01

    This paper describes a software framework and analysis tool to support the collection and analysis of eye movement and perceptual feedback data for a variety of diagnostic imaging modalities. The framework allows the rapid creation of experiment software that can display a collection of medical images of a particular modality, capture eye trace data, and record marks added to an image by the observer, together with their final decision. There are also a number of visualisation techniques for the display of eye trace information. The analysis tool supports the comparison of individual eye traces for a particular observer or traces from multiple observers for a particular image. Saccade and fixation data can be visualised, with user control of fixation identification functions and properties. Observer markings are displayed, and predefined regions of interest are supported. The software also supports some interactive and multi-image modalities. The analysis tool includes a novel visualisation of scan paths across multi-image modalities. Using an exploded 3D view of a stack of MRI scan sections, an observer's scan path can be shown traversing between images, in addition to inspecting them.
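
    Fixation identification functions of the kind the analysis tool exposes are commonly implemented with a dispersion-threshold (I-DT) algorithm; the sketch below is an assumed, generic implementation, not the described framework.

        # Dispersion-threshold (I-DT) fixation detection from gaze samples (illustrative)
        import numpy as np

        def idt_fixations(x, y, t, max_dispersion=1.0, min_duration=0.1):
            """Return (start_time, end_time, centroid_x, centroid_y) tuples."""
            fixations, i, n = [], 0, len(t)
            while i < n:
                j = i
                while j < n and t[j] - t[i] < min_duration:     # window covering min_duration
                    j += 1
                if j >= n:
                    break
                if np.ptp(x[i:j + 1]) + np.ptp(y[i:j + 1]) <= max_dispersion:
                    while j + 1 < n and np.ptp(x[i:j + 2]) + np.ptp(y[i:j + 2]) <= max_dispersion:
                        j += 1                                   # grow while dispersion stays small
                    fixations.append((t[i], t[j], float(np.mean(x[i:j + 1])), float(np.mean(y[i:j + 1]))))
                    i = j + 1
                else:
                    i += 1
            return fixations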

  13. Open source software in a practical approach for post processing of radiologic images.

    PubMed

    Valeri, Gianluca; Mazza, Francesco Antonino; Maggi, Stefania; Aramini, Daniele; La Riccia, Luigi; Mazzoni, Giovanni; Giovagnoni, Andrea

    2015-03-01

    The purpose of this paper is to evaluate the use of open source software (OSS) to process DICOM images. We selected 23 programs for Windows and 20 programs for Mac from 150 possible OSS programs, including DICOM viewers and various tools (converters, DICOM header editors, etc.). The programs selected all meet basic requirements such as free availability, stand-alone operation, the presence of a graphical user interface, ease of installation and advanced features beyond simple image display. The data import, data export, metadata, 2D viewer, 3D viewer, supported platform and usability capabilities of each selected program were evaluated on a scale ranging from 1 to 10 points. Twelve programs received a score higher than or equal to eight. Among them, five obtained a score of 9: 3D Slicer, MedINRIA, MITK 3M3, VolView, VR Render; while OsiriX received 10. OsiriX appears to be the only program able to perform all the operations taken into consideration, similar to a workstation equipped with proprietary software, allowing the analysis and interpretation of images in a simple and intuitive way. OsiriX is a DICOM PACS workstation for medical imaging and software for image processing for medical research, functional imaging, 3D imaging, confocal microscopy and molecular imaging. This application is also a good tool for teaching activities because it facilitates the attainment of learning objectives among students and other specialists.

  14. A Software Framework for the Analysis of Complex Microscopy Image Data

    PubMed Central

    Chao, Jerry; Ward, E. Sally

    2012-01-01

    Technological advances in both hardware and software have made possible the realization of sophisticated biological imaging experiments using the optical microscope. As a result, modern microscopy experiments are capable of producing complex image data sets. For a given data analysis task, the images in a set are arranged, based on the requirements of the task, by attributes such as the time and focus levels at which they were acquired. Importantly, different tasks performed over the course of an analysis are often facilitated by the use of different arrangements of the images. We present a software framework which supports the use of different logical image arrangements to analyze a physical set of images. Called the Microscopy Image Analysis Tool (MIATool), this framework realizes the logical arrangements using arrays of pointers to the images, thereby removing the need to replicate and manipulate the actual images in their storage medium. In order that they may be tailored to the specific requirements of disparate analysis tasks, these logical arrangements may differ in size and dimensionality, with no restrictions placed on the number of dimensions and the meaning of each dimension. MIATool additionally supports processing flexibility, extensible image processing capabilities, and data storage management. PMID:20423810
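    The core idea of pointer-based logical arrangements can be sketched as index arrays into a single physical image store; the sketch below is a loose Python illustration with made-up names, not MIATool's actual design:

```python
import numpy as np

# One physical store of images (here: 12 small synthetic frames).
physical_images = [np.full((4, 4), k) for k in range(12)]

# A "logical arrangement" is just an array of indices (pointers) into the
# store; different tasks can arrange the same images differently without
# copying pixel data.
by_time_and_focus = np.arange(12).reshape(4, 3)   # 4 time points x 3 focus levels
by_focus_only = by_time_and_focus.T               # 3 focus levels x 4 time points

def fetch(arrangement, *index):
    """Resolve a logical index to the underlying image in the store."""
    return physical_images[int(arrangement[index])]

print(fetch(by_time_and_focus, 2, 1).mean())   # image 7
print(fetch(by_focus_only, 1, 2).mean())       # same image 7, different arrangement
```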

  15. Effect of software manipulation (Photoshop) of digitised retinal images on the grading of diabetic retinopathy

    PubMed Central

    George, L; Lusty, J; Owens, D; Ollerton, R

    1999-01-01

    AIMS—To determine whether software processing of digitised retinal images using a "sharpen" filter improves the ability to grade diabetic retinopathy.
METHODS—150 macula centred retinal images were taken as 35 mm colour transparencies representing a spectrum of diabetic retinopathy, digitised, and graded in random order before and after the application of a sharpen filter (Adobe Photoshop). Digital enhancement of contrast and brightness was performed and a X2 digital zoom was utilised. The grades from the unenhanced and enhanced digitised images were compared with the same retinal fields viewed as slides.
RESULTS—Overall agreement in retinopathy grade from the digitised images improved from 83.3% (125/150) to 94.0% (141/150) with sight threatening diabetic retinopathy (STDR) correctly identified in 95.5% (84/88) and 98.9% (87/88) of cases when using unenhanced and enhanced images respectively. In total, five images were overgraded and four undergraded from the enhanced images compared with 17 and eight images respectively when using unenhanced images.
CONCLUSION—This study demonstrates that the already good agreement in grading performance can be further improved by software manipulation or processing of digitised retinal images.

 PMID:10413691
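    A comparable sequence of manipulations (sharpen, contrast and brightness adjustment, digital zoom) can be approximated with open-source tools; the sketch below uses Pillow with assumed enhancement factors and is not the Photoshop filter used in the study:

```python
from PIL import Image, ImageEnhance, ImageFilter

def enhance_retinal_image(path_in, path_out, zoom=2):
    """Apply a sharpen filter, then modest contrast and brightness boosts,
    and a digital zoom, loosely mirroring the manipulations described."""
    img = Image.open(path_in)
    img = img.filter(ImageFilter.SHARPEN)
    img = ImageEnhance.Contrast(img).enhance(1.2)     # +20% contrast (assumed value)
    img = ImageEnhance.Brightness(img).enhance(1.1)   # +10% brightness (assumed value)
    w, h = img.size
    img = img.resize((w * zoom, h * zoom), Image.LANCZOS)
    img.save(path_out)

# enhance_retinal_image("macula_centred.tif", "macula_centred_enhanced.tif")
```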

  16. Effect of software manipulation (Photoshop) of digitised retinal images on the grading of diabetic retinopathy.

    PubMed

    George, L D; Lusty, J; Owens, D R; Ollerton, R L

    1999-08-01

    To determine whether software processing of digitised retinal images using a "sharpen" filter improves the ability to grade diabetic retinopathy. 150 macula centred retinal images were taken as 35 mm colour transparencies representing a spectrum of diabetic retinopathy, digitised, and graded in random order before and after the application of a sharpen filter (Adobe Photoshop). Digital enhancement of contrast and brightness was performed and a X2 digital zoom was utilised. The grades from the unenhanced and enhanced digitised images were compared with the same retinal fields viewed as slides. Overall agreement in retinopathy grade from the digitised images improved from 83.3% (125/150) to 94.0% (141/150) with sight threatening diabetic retinopathy (STDR) correctly identified in 95.5% (84/88) and 98.9% (87/88) of cases when using unenhanced and enhanced images respectively. In total, five images were overgraded and four undergraded from the enhanced images compared with 17 and eight images respectively when using unenhanced images. This study demonstrates that the already good agreement in grading performance can be further improved by software manipulation or processing of digitised retinal images.

  17. Towards a multi-site international public dataset for the validation of retinal image analysis software.

    PubMed

    Trucco, Emanuele; Ruggeri, Alfredo

    2013-01-01

    This paper discusses concisely the main issues and challenges posed by the validation of retinal image analysis algorithms. It is designed to set the discussion for the IEEE EMBC 2013 invited session "From laboratory to clinic: the validation of retinal image processing tools". The session carries forward an international initiative started at EMBC 2011, Boston, which resulted in the first large-consensus paper (14 international sites) on the validation of retinal image processing software, appearing in IOVS. This paper is meant as a focus for the session discussion, but the ubiquity and importance of validation make its contents, arguably, of interest to the wider medical image processing community.

  18. 76 FR 51993 - Draft Guidance for Industry on Standards for Clinical Trial Imaging Endpoints; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-19

    ... standardization of imaging procedures when an important imaging endpoint is used in a clinical trial of a... outlines the major considerations for standardization of image acquisition, image interpretation methods... of image acquisition and interpretation standardization, a medical practice standard and a clinical...

  19. Despeckle filtering software toolbox for ultrasound imaging of the common carotid artery.

    PubMed

    Loizou, Christos P; Theofanous, Charoula; Pantziaris, Marios; Kasparis, Takis

    2014-04-01

    Ultrasound imaging of the common carotid artery (CCA) is a non-invasive tool used in medicine to assess the severity of atherosclerosis and monitor its progression through time. It is also used in border detection and texture characterization of the atherosclerotic carotid plaque in the CCA, and in the identification and measurement of the intima-media thickness (IMT) and the lumen diameter, all of which are very important in the assessment of cardiovascular disease (CVD). Visual perception, however, is hindered by speckle, a multiplicative noise that degrades the quality of ultrasound B-mode imaging. Noise reduction is therefore essential for improving the visual observation quality or as a pre-processing step for further automated analysis, such as image segmentation of the IMT and the atherosclerotic carotid plaque in ultrasound images. In order to facilitate this preprocessing step, we have developed in MATLAB® a unified toolbox that integrates image despeckle filtering (IDF), texture analysis and image quality evaluation techniques to automate the pre-processing and complement the disease evaluation in ultrasound CCA images. The proposed software is based on a graphical user interface (GUI) and incorporates image normalization, 10 different despeckle filtering techniques (DsFlsmv, DsFwiener, DsFlsminsc, DsFkuwahara, DsFgf, DsFmedian, DsFhmedian, DsFad, DsFnldif, DsFsrad), image intensity normalization, 65 texture features, 15 quantitative image quality metrics and objective image quality evaluation. The software is publicly available in an executable form, which can be downloaded from http://www.cs.ucy.ac.cy/medinfo/. It was validated on 100 ultrasound images of the CCA, by comparing its results with quantitative visual analysis performed by a medical expert. It was observed that the despeckle filters DsFlsmv and DsFhmedian improved image quality perception (based on the expert's assessment and the image texture and quality metrics). It is anticipated that the
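    One of the simpler despeckle approaches included in such toolboxes is median filtering; the sketch below is a generic Python/SciPy analogue offered for illustration, not the toolbox's DsFmedian implementation:

```python
import numpy as np
from scipy.ndimage import median_filter

def despeckle_median(image, size=5):
    """Simple median despeckle filter: each pixel is replaced by the median
    of its size x size neighbourhood, which suppresses multiplicative
    speckle while preserving edges better than linear smoothing."""
    return median_filter(image, size=size)

# Synthetic example: a constant region corrupted by multiplicative speckle.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)
speckled = clean * rng.normal(1.0, 0.3, clean.shape)
print(speckled.std(), despeckle_median(speckled).std())   # the noise std drops sharply
```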

  20. New image processing software for analyzing object size-frequency distributions, geometry, orientation, and spatial distribution

    NASA Astrophysics Data System (ADS)

    Beggan, Ciarán; Hamilton, Christopher W.

    2010-04-01

    Geological Image Analysis Software (GIAS) combines basic tools for calculating object area, abundance, radius, perimeter, eccentricity, orientation, and centroid location, with the first automated method for characterizing the areal distribution of objects using sample-size-dependent nearest neighbor (NN) statistics. The NN analyses include tests for (1) Poisson, (2) Normalized Poisson, (3) Scavenged k=1, and (4) Scavenged k=2 NN distributions. GIAS is implemented in MATLAB with a Graphical User Interface (GUI) that is available as pre-parsed pseudocode for use with MATLAB, or as a stand-alone application that runs on Windows and Unix systems. GIAS can process raster data (e.g., satellite imagery, photomicrographs, etc.) and tables of object coordinates to characterize the size, geometry, orientation, and spatial organization of a wide range of geological features. This information expedites quantitative measurements of 2D object properties, provides criteria for validating the use of stereology to transform 2D object sections into 3D models, and establishes a standardized NN methodology that can be used to compare the results of different geospatial studies and identify objects using non-morphological parameters.
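    The nearest-neighbor component can be illustrated with a minimal sketch that computes the mean NN distance and the Clark-Evans ratio against the Poisson expectation; this is a generic Python illustration with synthetic data, not GIAS code:

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_statistics(points, area):
    """Mean nearest-neighbour distance of 2-D points and the Clark-Evans
    ratio R = observed / expected, where the expected mean NN distance for
    a Poisson (completely random) pattern of the same density is
    1 / (2 * sqrt(density)). R < 1 suggests clustering, R > 1 dispersion."""
    pts = np.asarray(points, dtype=float)
    dists, _ = cKDTree(pts).query(pts, k=2)   # k=2: nearest neighbour other than self
    observed = dists[:, 1].mean()
    expected = 1.0 / (2.0 * np.sqrt(len(pts) / area))
    return observed, observed / expected

rng = np.random.default_rng(1)
random_pts = rng.uniform(0, 100, size=(200, 2))
print(nn_statistics(random_pts, area=100 * 100))   # R should be close to 1
```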

  1. Development of HydroImage, A User Friendly Hydrogeophysical Characterization Software

    SciTech Connect

    Mok, Chin Man; Hubbard, Susan; Chen, Jinsong; Suribhatla, Raghu; Kaback, Dawn Samara

    2014-01-29

    HydroImage, a user-friendly software package that utilizes high-resolution geophysical data for estimating hydrogeological parameters in subsurface strata, was developed under this grant. HydroImage runs on a personal computer platform to promote broad use by hydrogeologists to further understanding of subsurface processes that govern contaminant fate, transport, and remediation. The unique software provides estimates of hydrogeological properties over continuous volumes of the subsurface, whereas previous approaches only allow estimation at point locations. Thus, this unique tool can be used to significantly enhance site conceptual models and improve design and operation of remediation systems. The HydroImage technical approach uses statistical models to integrate geophysical data with borehole geological data and hydrological measurements to produce hydrogeological parameter estimates as 2-D or 3-D images.

  2. Digital processing of side-scan sonar data with the Woods Hole image processing system software

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing and digitally mosaicking high and low-resolution side-scan sonar data. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for processing side-scan sonar data. This report describes the steps required to process the collected data and to produce an image that has equal along- and across-track resolution.

  3. Onboard utilization of ground control points for image correction. Volume 4: Correlation analysis software design

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The software utilized for image correction accuracy measurement is described. The correlation analysis program provides the user with various tools for analyzing different correlation algorithms. The algorithms were tested using LANDSAT imagery in two different spectral bands. Three classification algorithms are implemented.
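    A typical correlation algorithm for locating ground control points is zero-mean normalized cross-correlation; the sketch below is a generic illustration and is not taken from the report's software:

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Return the (row, col) of the best match of `template` in `image`
    using zero-mean normalized cross-correlation, a common choice for
    locating ground-control-point chips."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_rc = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            if denom == 0:
                continue
            score = (w * t).sum() / denom
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

img = np.random.default_rng(2).random((40, 40))
chip = img[10:18, 22:30]                          # a ground-control-point chip
print(normalized_cross_correlation(img, chip))    # best match at (10, 22)
```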

  4. 3-dimensional root phenotyping with a novel imaging and software platform

    USDA-ARS?s Scientific Manuscript database

    A novel imaging and software platform was developed for the high-throughput phenotyping of 3-dimensional root traits during seedling development. To demonstrate the platform’s capacity, plants of two rice (Oryza sativa) genotypes, Azucena and IR64, were grown in a transparent gellan gum system and ...

  5. Creation of three-dimensional craniofacial standards from CBCT images

    NASA Astrophysics Data System (ADS)

    Subramanyan, Krishna; Palomo, Martin; Hans, Mark

    2006-03-01

    Low-dose three-dimensional Cone Beam Computed Tomography (CBCT) is becoming increasingly popular in the clinical practice of dental medicine. Two-dimensional Bolton Standards of dentofacial development are routinely used to identify deviations from normal craniofacial anatomy. With the advent of CBCT three-dimensional imaging, we propose a set of methods to extend these 2D Bolton Standards to anatomically correct, surface-based 3D standards to allow analysis of morphometric changes seen in the craniofacial complex. To create 3D surface standards, we have implemented a series of steps: 1) converting bi-plane 2D tracings into a set of splines; 2) converting the 2D spline curves from bi-plane projection into 3D space curves; 3) creating a labeled template of facial and skeletal shapes; and 4) creating 3D average-surface Bolton standards. We used datasets from patients scanned with a Hitachi MercuRay CBCT scanner, which provides high-resolution, isotropic CT volume images, together with digitized Bolton Standards (lateral and frontal male, female and average tracings from ages 3 to 18 years), and converted them into facial and skeletal 3D space curves. This new 3D standard will help in assessing shape variations due to aging in the young population and provide a reference for correcting facial anomalies in dental medicine.

  6. Digital image measurement of specimen deformation based on CCD cameras and Image J software: an application to human pelvic biomechanics

    NASA Astrophysics Data System (ADS)

    Jia, Yongwei; Cheng, Liming; Yu, Guangrong; Lou, Yongjian; Yu, Yan; Chen, Bo; Ding, Zuquan

    2008-03-01

    A method for digital image measurement of specimen deformation based on CCD cameras and ImageJ software was developed. This method was used to measure the biomechanical behavior of the human pelvis. Six cadaveric specimens from the third lumbar vertebra to the proximal 1/3 of the femur were tested. The specimens, which were free of structural abnormalities, were dissected of all soft tissue, sparing the hip joint capsules and the ligaments of the pelvic ring and floor. Markers with a black dot on a white background were affixed to the key regions of the pelvis. Axial loading from the proximal lumbar spine was applied by an MTS machine in increments from 0 N to 500 N, simulating the two-footed standing stance. Anterior and lateral images of the specimen were obtained through two CCD cameras. The digital 8-bit images were processed with ImageJ, digital image processing software that can be freely downloaded from the National Institutes of Health. The procedure includes recognition of the digital markers, image inversion, sub-pixel reconstruction, image segmentation, and a center-of-mass algorithm based on the weighted average of pixel gray values. Vertical displacements of S1 (the first sacral vertebra) in the frontal view and the micro-angular rotation of the sacroiliac joint in the lateral view were calculated from the marker movement. The results of the digital image measurement were as follows: marker image correlation before and after deformation was excellent, with an average correlation coefficient of about 0.983. For the 768 × 576 pixel images (pixel size 0.68 mm × 0.68 mm), the precision of the displacement detected in our experiment was about 0.018 pixels and the relative error was about 1.11‰. The average vertical displacement of S1 of the pelvis was 0.8356 ± 0.2830 mm under a vertical load of 500 N, and the average micro-angular rotation of the sacroiliac joint in the lateral view was 0.584 ± 0.221°. The load-displacement curves obtained from our optical measure system
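    The center-of-mass step can be sketched directly from its definition (intensity-weighted average of pixel coordinates); the code below is a generic illustration, not the authors' ImageJ procedure:

```python
import numpy as np

def weighted_centroid(roi):
    """Sub-pixel marker position as the intensity-weighted average of pixel
    coordinates (centre of mass of the gray values) within a region of
    interest containing one marker."""
    roi = np.asarray(roi, dtype=float)
    total = roi.sum()
    rows, cols = np.indices(roi.shape)
    return (rows * roi).sum() / total, (cols * roi).sum() / total

# A bright 2x2 marker straddling pixel boundaries -> centroid at (1.5, 2.5)
roi = np.zeros((4, 6))
roi[1:3, 2:4] = 255
print(weighted_centroid(roi))
```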

  7. Enhancing Image Characteristics of Retinal Images of Aggressive Posterior Retinopathy of Prematurity Using a Novel Software, (RetiView)

    PubMed Central

    Jayadev, Chaitra; Vinekar, Anand; Mohanachandra, Poornima; Desai, Samit; Suveer, Amit; Mangalesh, Shwetha; Bauer, Noel; Shetty, Bhujang

    2015-01-01

    Purpose. To report pilot data from a novel image analysis software “RetiView,” to highlight clinically relevant information in RetCam images of infants with aggressive posterior retinopathy of prematurity (APROP). Methods. Twenty-three imaging sessions of consecutive infants of Asian Indian origin with clinically diagnosed APROP underwent three protocols (Grey Enhanced (GE), Color Enhanced (CE), and “Vesselness Measure” (VNM)) of the software. The postprocessed images were compared to baseline data from the archived unprocessed images and clinical exam by the retinopathy of prematurity (ROP) specialist for anterior extent of the vessels, capillary nonperfusion zones (CNP), loops, hemorrhages, and flat neovascularization. Results. There was better visualization of tortuous loops in the GE protocol (56.5%); “bald” zones within the CNP zones (26.1%), hemorrhages (13%), and edge of the disease (34.8%) in the CE images; neovascularization on both GE and CE protocols (13% each); clinically relevant information in cases with poor pupillary dilatation (8.7%); anterior extent of vessels on the VNM protocol (13%) effecting a “reclassification” from zone 1 to zone 2 posterior. Conclusions. RetiView is a noninvasive and inexpensive method of customized image enhancement to detect clinically difficult characteristics in a subset of APROP images with a potential to influence treatment planning. PMID:26240830

  8. Enhancing Image Characteristics of Retinal Images of Aggressive Posterior Retinopathy of Prematurity Using a Novel Software, (RetiView).

    PubMed

    Jayadev, Chaitra; Vinekar, Anand; Mohanachandra, Poornima; Desai, Samit; Suveer, Amit; Mangalesh, Shwetha; Bauer, Noel; Shetty, Bhujang

    2015-01-01

    Purpose. To report pilot data from a novel image analysis software "RetiView," to highlight clinically relevant information in RetCam images of infants with aggressive posterior retinopathy of prematurity (APROP). Methods. Twenty-three imaging sessions of consecutive infants of Asian Indian origin with clinically diagnosed APROP underwent three protocols (Grey Enhanced (GE), Color Enhanced (CE), and "Vesselness Measure" (VNM)) of the software. The postprocessed images were compared to baseline data from the archived unprocessed images and clinical exam by the retinopathy of prematurity (ROP) specialist for anterior extent of the vessels, capillary nonperfusion zones (CNP), loops, hemorrhages, and flat neovascularization. Results. There was better visualization of tortuous loops in the GE protocol (56.5%); "bald" zones within the CNP zones (26.1%), hemorrhages (13%), and edge of the disease (34.8%) in the CE images; neovascularization on both GE and CE protocols (13% each); clinically relevant information in cases with poor pupillary dilatation (8.7%); anterior extent of vessels on the VNM protocol (13%) effecting a "reclassification" from zone 1 to zone 2 posterior. Conclusions. RetiView is a noninvasive and inexpensive method of customized image enhancement to detect clinically difficult characteristics in a subset of APROP images with a potential to influence treatment planning.

  9. Software requirements and support for image-algebraic analysis, detection, and recognition of small targets

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Forsman, Robert H.; Yang, Chyuan-Huei T.; Hu, Wen-Chen; Porter, Ryan A.; McTaggart, Gary; Hranicky, James F.; Davis, James F.

    1995-06-01

    The detection of hazardous targets frequently requires a multispectral approach to image acquisition and analysis, which we have implemented in a software system called MATRE (multispectral automated target recognition and enhancement). MATRE provides capabilities of image enhancement, image database management, spectral signature extraction and visualization, statistical analysis of greyscale imagery, as well as 2D and 3D image processing operations. Our system is based upon a client-server architecture that is amenable to distributed implementation. In this paper, we discuss salient issues and requirements for multispectral recognition of hazardous targets, and show that our software fulfills or exceeds such requirements. MATRE's capabilities, as well as statistical and morphological analysis results, are exemplified with emphasis upon computational cost, ease of installation, and maintenance on various Unix platforms. Additionally, MATRE's image processing functions can be coded in vector-parallel form, for ease of implementation on SIMD-parallel processors. Our algorithms are expressed in terms of image algebra, a concise, rigorous notation that unifies linear and nonlinear mathematics in the image domain. An image algebra class library for the C++ language has been incorporated into our system, which facilitates fast algorithm prototyping without the numerous drawbacks of discrete coding.

  10. Simple and cost-effective hardware and software for functional brain mapping using intrinsic optical signal imaging.

    PubMed

    Harrison, Thomas C; Sigler, Albrecht; Murphy, Timothy H

    2009-09-15

    We describe a simple and low-cost system for intrinsic optical signal (IOS) imaging using stable LED light sources, basic microscopes, and commonly available CCD cameras. IOS imaging measures activity-dependent changes in the light reflectance of brain tissue, and can be performed with a minimum of specialized equipment. Our system uses LED ring lights that can be mounted on standard microscope objectives or video lenses to provide a homogeneous and stable light source, with less than 0.003% fluctuation across images averaged from 40 trials. We describe the equipment and surgical techniques necessary for both acute and chronic mouse preparations, and provide software that can create maps of sensory representations from images captured by inexpensive 8-bit cameras or by 12-bit cameras. The IOS imaging system can be adapted to commercial upright microscopes or custom macroscopes, eliminating the need for dedicated equipment or complex optical paths. This method can be combined with parallel high resolution imaging techniques such as two-photon microscopy.
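    IOS maps are commonly computed as the trial-averaged fractional change in reflectance between stimulus and baseline frames; the sketch below assumes a simple (trials x frames x height x width) data layout and is not the authors' published software:

```python
import numpy as np

def ios_response_map(trials, n_baseline):
    """Average intrinsic-signal response map.
    `trials` has shape (n_trials, n_frames, height, width); the first
    `n_baseline` frames of each trial are pre-stimulus. The map is the
    trial-averaged fractional reflectance change dR/R during stimulation."""
    trials = np.asarray(trials, dtype=float)
    baseline = trials[:, :n_baseline].mean(axis=1)          # (trials, h, w)
    response = trials[:, n_baseline:].mean(axis=1)          # (trials, h, w)
    return ((response - baseline) / baseline).mean(axis=0)  # (h, w)

rng = np.random.default_rng(3)
fake = rng.normal(1000, 1, size=(40, 20, 32, 32))   # 40 trials, 20 frames each
fake[:, 10:, 10:20, 10:20] -= 5                     # a small activity-related dimming
print(ios_response_map(fake, n_baseline=10)[15, 15])  # about -0.005
```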

  11. 3D thermography imaging standardization technique for inflammation diagnosis

    NASA Astrophysics Data System (ADS)

    Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

    2005-01-01

    We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical Digital Infrared Thermal Imaging is a very sensitive and reliable means of graphically mapping and displaying skin surface temperature. It allows doctors to visualise in colour and quantify temperature changes in the skin surface. The spectrum of colours indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Here, our technique is based on the combination of a visual 3D imaging technique and a thermal imaging technique, which maps the 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.

  12. 25 CFR 547.8 - What are the minimum technical software standards applicable to Class II gaming systems?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... OF CLASS II GAMES § 547.8 What are the minimum technical software standards applicable to Class II... of Class II games. (a) Player interface displays. (1) If not otherwise provided to the player, the player interface shall display the following: (i) The purchase or wager amount; (ii) Game results;...

  13. 25 CFR 547.8 - What are the minimum technical software standards applicable to Class II gaming systems?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... OF CLASS II GAMES § 547.8 What are the minimum technical software standards applicable to Class II... of Class II games. (a) Player interface displays. (1) If not otherwise provided to the player, the player interface shall display the following: (i) The purchase or wager amount; (ii) Game results;...

  14. IHE cross-enterprise document sharing for imaging: interoperability testing software

    PubMed Central

    2010-01-01

    Background With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results In this paper we describe a software that is used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the elected design solutions. Conclusions EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for an easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specifications ambiguities, or to resolve implementations difficulties. PMID:20858241

  15. IHE cross-enterprise document sharing for imaging: interoperability testing software.

    PubMed

    Noumeir, Rita; Renaud, Bérubé

    2010-09-21

    With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. In this paper we describe a software that is used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the elected design solutions. EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for an easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specifications ambiguities, or to resolve implementations difficulties.

  16. Software for MR image overlay guided needle insertions: the clinical translation process

    NASA Astrophysics Data System (ADS)

    Ungi, Tamas; U-Thainual, Paweena; Fritz, Jan; Iordachita, Iulian I.; Flammang, Aaron J.; Carrino, John A.; Fichtinger, Gabor

    2013-03-01

    PURPOSE: Needle guidance software using augmented reality image overlay was translated from the experimental phase to support preclinical and clinical studies. Major functional and structural changes were needed to meet clinical requirements. We present the process applied to fulfill these requirements, and selected features that may be applied in the translational phase of other image-guided surgical navigation systems. METHODS: We used an agile software development process for rapid adaptation to unforeseen clinical requests. The process is based on iterations of operating room test sessions, feedback discussions, and software development sprints. The open-source application framework of 3D Slicer and the NA-MIC kit provided sufficient flexibility and stable software foundations for this work. RESULTS: All requirements were addressed in a process with 19 operating room test iterations. Most features developed in this phase were related to workflow simplification and operator feedback. CONCLUSION: Efficient and affordable modifications were facilitated by an open source application framework and frequent clinical feedback sessions. Results of cadaver experiments show that software requirements were successfully solved after a limited number of operating room tests.

  17. A software tool to measure the geometric distortion in x-ray image systems

    NASA Astrophysics Data System (ADS)

    Prieto, Gabriel; Guibelalde, Eduardo; Chevalier, Margarita

    2010-04-01

    A software tool is presented to measure the geometric distortion in images obtained with X-ray systems; it provides a more objective method than the usual measurements made over the image of a phantom with rulers. In a first step, this software has been applied to mammography images and makes use of the grid included in the CDMAM phantom (University Hospital Nijmegen). For digital images, this software tool automatically locates the grid crossing points and obtains a set of corners (up to 237) that are used by the program to determine 6 different squares, at top, bottom, left, right and central positions. The sixth square is the largest that can be fitted in the grid (widest possible square). The distortion is calculated as ((length of left diagonal - length of right diagonal) / length of left diagonal) (%) for the six positions. The algorithm error is of the order of 0.3%. The method might be applied to other radiological systems without major changes beyond adjusting the program code to other phantoms. In this work a set of measurements for 54 CDMAM images, acquired on 11 different mammography systems from 6 manufacturers, is presented. We can conclude that the distortion of all systems is smaller than the recommended maximum distortion for primary displays (2%).
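    The stated distortion formula can be computed directly from the four corner coordinates of a grid square; the corner naming convention in the sketch below is an assumption:

```python
import math

def square_distortion(top_left, top_right, bottom_right, bottom_left):
    """Geometric distortion of one grid square, per the stated definition:
    (left diagonal - right diagonal) / left diagonal, in percent. Here the
    left diagonal is taken from top-left to bottom-right and the right
    diagonal from top-right to bottom-left (assumed convention)."""
    left = math.dist(top_left, bottom_right)
    right = math.dist(top_right, bottom_left)
    return 100.0 * (left - right) / left

# An undistorted square gives 0%; a slightly sheared one gives a non-zero value.
print(square_distortion((0, 0), (10, 0), (10, 10), (0, 10)))      # 0.0
print(square_distortion((0, 0), (10, 0), (10.3, 10), (0, 10)))    # about 1.5
```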

  18. WHIPPET: a collaborative software environment for medical image processing and analysis

    NASA Astrophysics Data System (ADS)

    Hu, Yangqiu; Haynor, David R.; Maravilla, Kenneth R.

    2007-03-01

    While there are many publicly available software packages for medical image processing, making them available to end users in clinical and research labs remains non-trivial. An even more challenging task is to mix these packages to form pipelines that meet specific needs seamlessly, because each piece of software usually has its own input/output formats, parameter sets, and so on. To address these issues, we are building WHIPPET (Washington Heterogeneous Image Processing Pipeline EnvironmenT), a collaborative platform for integrating image analysis tools from different sources. The central idea is to develop a set of Python scripts which glue the different packages together and make it possible to connect them in processing pipelines. To achieve this, an analysis is carried out for each candidate package for WHIPPET, describing input/output formats, parameters, ROI description methods, scripting and extensibility and classifying its compatibility with other WHIPPET components as image file level, scripting level, function extension level, or source code level. We then identify components that can be connected in a pipeline directly via image format conversion. We set up a TWiki server for web-based collaboration so that component analysis and task request can be performed online, as well as project tracking, knowledge base management, and technical support. Currently WHIPPET includes the FSL, MIPAV, FreeSurfer, BrainSuite, Measure, DTIQuery, and 3D Slicer software packages, and is expanding. Users have identified several needed task modules and we report on their implementation.

  19. Exploiting the potential of free software to evaluate root canal biomechanical preparation outcomes through micro-CT images.

    PubMed

    Neves, A A; Silva, E J; Roter, J M; Belladona, F G; Alves, H D; Lopes, R T; Paciornik, S; De-Deus, G A

    2015-11-01

    To propose an automated image processing routine based on free software to quantify root canal preparation outcomes in pairs of sound and instrumented roots after micro-CT scanning procedures. Seven mesial roots of human mandibular molars with different canal configuration systems were studied: (i) Vertucci's type 1, (ii) Vertucci's type 2, (iii) two individual canals, (iv) Vertucci's type 6, canals (v) with and (vi) without debris, and (vii) a canal with visible pulp calcification. All teeth were instrumented with the BioRaCe system and scanned in a Skyscan 1173 micro-CT before and after canal preparation. After reconstruction, the instrumented stack of images (IS) was registered against the preoperative sound stack of images (SS). Image processing included contrast equalization and noise filtering. Sound canal volumes were obtained by a minimum threshold. For the IS, a fixed conservative threshold was chosen as the best compromise between instrumented canal and dentine whilst avoiding debris, resulting in instrumented canal plus empty spaces. Arithmetic and logical operations between sound and instrumented stacks were used to identify debris. Noninstrumented dentine was calculated using a minimum threshold in the IS and subtracting from the SS and total debris. Removed dentine volume was obtained by subtracting SS from IS. Quantitative data on total debris present in the root canal space after instrumentation, noninstrumented areas and removed dentine volume were obtained for each test case, as well as three-dimensional volume renderings. After standardization of the acquisition, reconstruction and image processing of the micro-CT images, a quantitative approach for calculation of root canal biomechanical outcomes was achieved using free software. © 2014 International Endodontic Journal. Published by John Wiley & Sons Ltd.
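    The logical-operation step can be sketched with boolean voxel stacks in NumPy; the definitions below are a simplified reading of the described routine (debris as sound-canal voxels absent from the instrumented space, removed dentine as the converse), not the authors' exact pipeline:

```python
import numpy as np

def canal_outcomes(sound_canal, instrumented_space):
    """Boolean-stack arithmetic in the spirit of the described routine.
    `sound_canal`: canal voxels segmented in the preoperative (SS) stack.
    `instrumented_space`: instrumented-canal-plus-empty-space voxels in the
    registered postoperative (IS) stack. Debris is approximated as original
    canal voxels no longer open after preparation; removed dentine as voxels
    open after preparation that were dentine before (simplified definitions)."""
    debris = sound_canal & ~instrumented_space
    removed_dentine = instrumented_space & ~sound_canal
    return int(debris.sum()), int(removed_dentine.sum())

rng = np.random.default_rng(4)
ss = rng.random((50, 64, 64)) > 0.9       # stand-in segmented SS stack
is_ = rng.random((50, 64, 64)) > 0.9      # stand-in segmented IS stack
print(canal_outcomes(ss, is_))            # (debris voxels, removed dentine voxels)
```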

  20. Capturing a failure of an ASIC in-situ, using infrared radiometry and image processing software

    NASA Technical Reports Server (NTRS)

    Ruiz, Ronald P.

    2003-01-01

    Failures in electronic devices can sometimes be tricky to locate, especially if they are buried inside radiation-shielded containers designed to work in outer space. Such was the case with a malfunctioning ASIC (Application Specific Integrated Circuit) that was drawing excessive power at a specific temperature during temperature cycle testing. To analyze the failure, infrared radiometry (thermography) was used in combination with image processing software to locate precisely where the power was being dissipated at the moment the failure took place. The IR imaging software was used to make the image of the target and background appear as unity. As testing proceeded and the failure mode was reached, temperature changes revealed the precise location of the fault. The results gave the design engineers the information they needed to fix the problem. This paper describes the techniques and equipment used to accomplish this failure analysis.

  1. New technique to count mosquito adults: using ImageJ software to estimate number of mosquito adults in a trap.

    PubMed

    Kesavaraju, Banugopan; Dickson, Sammie

    2012-12-01

    A new technique is described here to count mosquitoes using open-source software. We wanted to develop a protocol that would estimate the total number of mosquitoes from a picture using ImageJ. Adult mosquitoes from CO2-baited traps were spread on a tray and photographed. The total number of mosquitoes in a picture was estimated using various calibrations on ImageJ, and results were compared with manual counting to identify the ideal calibration. The average trap count was 1,541, and the average difference between the manual count and the best calibration was 174.11 +/- 21.59, with 93% correlation. Subsequently, contents of a trap were photographed 5 different times after they were shuffled between each picture to alter the picture pattern of adult mosquitoes. The standard error among variations stayed below 50, indicating limited variation for total count between pictures of the same trap when the pictures were processed through ImageJ. These results indicate the software could be utilized efficiently to estimate total number of mosquitoes from traps.
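    An area-based estimate in the same spirit (threshold the specimens, divide the total object area by an average single-mosquito area) can be sketched in Python; this is an illustrative analogue, not the authors' ImageJ calibration:

```python
import numpy as np

def estimate_count(gray_image, dark_threshold, mean_area_px):
    """Estimate the number of dark specimens on a light tray by thresholding
    and dividing the total dark area by the average area of one specimen.
    Area-based counting tolerates touching specimens better than counting
    connected components directly."""
    dark = gray_image < dark_threshold
    return dark.sum() / mean_area_px

rng = np.random.default_rng(5)
tray = np.full((200, 200), 220.0)                      # bright background
for r, c in rng.integers(10, 190, size=(30, 2)):       # 30 fake specimens, 4x4 px each
    tray[r:r + 4, c:c + 4] = 40.0
print(round(estimate_count(tray, dark_threshold=100, mean_area_px=16)))  # close to 30
```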

  2. A software tool for interactive generation, representation, and systematical storage of transfer functions for 3D medical images.

    PubMed

    Alper Selver, M; Fischer, Felix; Kuntalp, Mehmet; Hillen, Walter

    2007-06-01

    As tools that assign the optical parameters (i.e., color and transparency) used in interactive visualization, transfer functions have very important effects on the quality of volume-rendered medical images. However, finding accurate transfer functions is a very difficult, tedious, and time-consuming task because of the variety of possibilities. To address this problem, a software module, which can be easily plugged into any visualization program, is developed based on the specific expectations of medical experts. Its design includes both a new user interface to ease the interactive generation of volume-rendered medical images and a volumetric-histogram-based method for initial generation of transfer functions. In addition, a novel file system has been implemented to represent 3D medical images using transfer functions based on the DICOM standard. For evaluation of the system by various medical experts, the software is installed into a DICOM viewer. Based on the feedback obtained from the medical experts, several improvements are made, especially to increase the flexibility of the program. The final version of the implemented system shortens the transfer function design process and is applicable to various application areas.
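    A volumetric-histogram-driven initial transfer function can be illustrated very simply by giving rare intensities higher opacity than dominant background intensities; the sketch below is an assumed toy approach, not the method implemented in the described module:

```python
import numpy as np

def initial_opacity_tf(volume, n_bins=256):
    """Toy histogram-driven initial opacity transfer function: opacity is
    low for intensities that dominate the volumetric histogram (background)
    and high for rare intensities (small or thin structures)."""
    hist, edges = np.histogram(volume, bins=n_bins)
    freq = hist / hist.sum()
    opacity = 1.0 - freq / freq.max()
    return opacity, edges

rng = np.random.default_rng(6)
vol = rng.normal(100, 5, size=(32, 32, 32))    # dominant "background" tissue
vol[12:20, 12:20, 12:20] = 200.0               # a small bright structure
tf, edges = initial_opacity_tf(vol)
peak_bin = int(np.digitize(100.0, edges)) - 1
# Low opacity near the histogram peak, higher for the rare bright voxels.
print(round(tf[peak_bin], 2), round(tf[-1], 2))
```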

  3. Integrating digital image management software for improved patient care and optimal practice management.

    PubMed

    Starr, Jon C

    2006-06-01

    Photographic images provide vital documentation of preoperative, intraoperative, and postoperative results in the clinical dermatologic surgery practice and can document histologic findings from skin biopsies, thereby enhancing patient care. Images may be printed as part of text documents, transmitted via electronic mail, or included in electronic medical records. To describe existing computer software that integrates digital photography and the medical record to improve patient care and practice management. A variety of computer applications are available to optimize the use of digital images in the dermatologic practice.

  4. Name-Value Pair Specification For Image Data Headers And Logical Standards For Image Data Exchange

    NASA Astrophysics Data System (ADS)

    Prewitt, J. M.; Selfridge, Peter G.; Anderson, Alicia C.

    1984-08-01

    A chronic barrier to rapid progress in image processing and pattern recognition research is the lack of a universal and facile method of transferring image data between different facilities. Comparison of different approaches and algorithms on a common database is often the only means of establishing the validity of results. Data collected under known recording conditions are mandatory for improvement of analytic methodology, yet such valuable data are costly and time-consuming to obtain. Therefore, the sharing and exchange of image data may be expedient. The proliferation of different image data formats has compounded the problem of exchange. The establishment of logical formats and standards for images and image data headers is the first step towards dissolving this barrier. This paper presents initial recommendations of the IEEE Computer Society PAMI (Pattern Analysis and Machine Intelligence) and CompMed (Computational Medicine) Technical Committees' Database Subcommittees on the first of a series of digital image data standards.
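    The name-value pair idea itself is straightforward to sketch; the field names below are purely illustrative and are not the vocabulary recommended by the committees:

```python
def write_header(fields):
    """Serialize an image-data header as one name=value pair per line.
    Field names here (ROWS, COLS, BITS_PER_PIXEL, ...) are illustrative
    only, not the PAMI/CompMed committees' actual specification."""
    return "\n".join(f"{name}={value}" for name, value in fields.items()) + "\n"

def parse_header(text):
    """Parse a name=value header back into a dictionary of strings."""
    pairs = (line.split("=", 1) for line in text.splitlines() if line.strip())
    return {name.strip(): value.strip() for name, value in pairs}

header = write_header({"ROWS": 512, "COLS": 512, "BITS_PER_PIXEL": 8,
                       "MODALITY": "microscopy", "PIXEL_SIZE_UM": 0.25})
print(parse_header(header)["ROWS"])   # '512'
```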

  5. A software to digital image processing to be used in the voxel phantom development.

    PubMed

    Vieira, J W; Lima, F R A

    2009-11-15

    Anthropomorphic models used in computational dosimetry, also denominated phantoms, are based on digital images recorded from scanning of real people by Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). Voxel phantom construction requires computational processing for transformations of image formats, compacting of two-dimensional (2-D) images into three-dimensional (3-D) matrices, image sampling and quantization, image enhancement, restoration and segmentation, among others. The researcher in computational dosimetry will hardly find all these abilities available in a single software package, and almost always this difficulty slows the rhythm of the research or leads to the use, sometimes inadequate, of alternative tools. The need to integrate the several tasks mentioned above to obtain an image that can be used in an exposure computational model motivated the development of the Digital Image Processing (DIP) software, mainly to solve particular problems in Dissertations and Theses developed by members of the Grupo de Pesquisa em Dosimetria Numérica (GDN/CNPq). Because of this particular objective, the software uses the Portuguese language in its implementations and interfaces. This paper presents the second version of the DIP, whose main changes are the more formal organization of menus and menu items, and a new menu for digital image segmentation. Currently, the DIP contains the menus Fundamentos, Visualizações, Domínio Espacial, Domínio de Frequências, Segmentações and Estudos. Each menu contains items and sub-items with functionalities that, usually, request an image as input and produce an image or an attribute in the output. The DIP reads, edits and writes binary files containing the 3-D matrix corresponding to a stack of axial images of a given geometry, which can be a human body or another volume of interest. It can also read any type of computational image and make conversions. When the task involves only an output image

  6. Image contrast enhancement based on a local standard deviation model

    SciTech Connect

    Chang, Dah-Chung; Wu, Wen-Rong

    1996-12-31

    The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method, which needs a contrast gain to adjust high-frequency components of an image. In the literature, the gain is usually either inversely proportional to the local standard deviation (LSD) or a constant. But these choices cause two problems in practical applications, namely noise over-enhancement and ringing artifacts. In this paper a new gain is developed based on Hunt's Gaussian image model to prevent these two defects. The new gain is a nonlinear function of LSD and has the desired characteristic of emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest x-ray images and the simulations show the effectiveness of the proposed algorithm.
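    The general ACE scheme (local mean plus a gain applied to the high-frequency residual) can be sketched as follows; the saturating gain used here is an arbitrary bounded function of the LSD chosen for illustration, not the gain derived in the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_contrast_enhancement(image, window=15, max_gain=4.0, sigma0=10.0):
    """Generic ACE: output = local_mean + gain * (image - local_mean).
    The gain is a bounded, increasing function of the local standard
    deviation (LSD), chosen only to illustrate the idea of an LSD-dependent
    nonlinear gain; it is not the gain derived in the paper."""
    img = image.astype(float)
    local_mean = uniform_filter(img, window)
    local_sq_mean = uniform_filter(img ** 2, window)
    lsd = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    gain = 1.0 + (max_gain - 1.0) * lsd / (lsd + sigma0)   # saturates below max_gain
    return local_mean + gain * (img - local_mean)

rng = np.random.default_rng(7)
chest = rng.normal(120, 3, size=(128, 128))
chest[40:90, 40:90] += 30                 # a higher-contrast structure
enhanced = adaptive_contrast_enhancement(chest)
print(enhanced.std() > chest.std())       # True: detail contrast is amplified
```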

  7. A near-infrared fluorescence-based surgical navigation system imaging software for sentinel lymph node detection

    NASA Astrophysics Data System (ADS)

    Ye, Jinzuo; Chi, Chongwei; Zhang, Shuang; Ma, Xibo; Tian, Jie

    2014-02-01

    Sentinel lymph node (SLN) in vivo detection is vital in breast cancer surgery. A new near-infrared fluorescence-based surgical navigation system (SNS) imaging software, which has been developed by our research group, is presented for SLN detection surgery in this paper. The software is based on the fluorescence-based surgical navigation hardware system (SNHS) which has been developed in our lab, and is designed specifically for intraoperative imaging and postoperative data analysis. The surgical navigation imaging software consists of the following software modules, which mainly include the control module, the image grabbing module, the real-time display module, the data saving module and the image processing module. And some algorithms have been designed to achieve the performance of the software, for example, the image registration algorithm based on correlation matching. Some of the key features of the software include: setting the control parameters of the SNS; acquiring, display and storing the intraoperative imaging data in real-time automatically; analysis and processing of the saved image data. The developed software has been used to successfully detect the SLNs in 21 cases of breast cancer patients. In the near future, we plan to improve the software performance and it will be extensively used for clinical purpose.

  8. The Personal Computer as an Analytical Workstation: Interfacing Standard Software Products.

    ERIC Educational Resources Information Center

    Cawley, Jeffery L.

    1984-01-01

    Reviews the functions that should be included in an analytical workstation. Also discusses the common data file structures and the techniques of data interchange between software modules and presents a brief overview of commercial products and their interfacing characteristics. These software include word processors, spreadsheets, database…

  9. Open Source software and social networks: disruptive alternatives for medical imaging.

    PubMed

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris

    2011-05-01

    In recent decades several major changes in computer and communication technology have pushed the limits of imaging informatics and PACS beyond the traditional system architecture providing new perspectives and innovative approach to a traditionally conservative medical community. Disruptive technologies such as the world-wide-web, wireless networking, Open Source software and recent emergence of cyber communities and social networks have imposed an accelerated pace and major quantum leaps in the progress of computer and technology infrastructure applicable to medical imaging applications. This paper reviews the impact and potential benefits of two major trends in consumer market software development and how they will influence the future of medical imaging informatics. Open Source software is emerging as an attractive and cost effective alternative to traditional commercial software developments and collaborative social networks provide a new model of communication that is better suited to the needs of the medical community. Evidence shows that successful Open Source software tools have penetrated the medical market and have proven to be more robust and cost effective than their commercial counterparts. Developed by developers that are themselves part of the user community, these tools are usually better adapted to the user's need and are more robust than traditional software programs being developed and tested by a large number of contributing users. This context allows a much faster and more appropriate development and evolution of the software platforms. Similarly, communication technology has opened up to the general public in a way that has changed the social behavior and habits adding a new dimension to the way people communicate and interact with each other. The new paradigms have also slowly penetrated the professional market and ultimately the medical community. Secure social networks allowing groups of people to easily communicate and exchange information

  10. A New Effort for Atmospherical Forecast: Meteorological Image Processing Software (MIPS) for Astronomical Observations

    NASA Astrophysics Data System (ADS)

    Shameoni Niaei, M.; Kilic, Y.; Yildiran, B. E.; Yüzlükoglu, F.; Yesilyaprak, C.

    2016-12-01

    We describe new software (MIPS) for the analysis and image processing of meteorological satellite (Meteosat) data for an astronomical observatory. This software will be able to help make atmospheric forecasts (cloud, humidity, rain) for robotic telescopes using Meteosat data. MIPS uses a Python library for Eumetsat data, aims to be completely open source, and is licensed under the GNU General Public Licence (GPL). MIPS is platform independent and uses h5py, numpy, and PIL together with the general-purpose, high-level programming language Python and the Qt framework.

  11. BioContainers: an open-source and community-driven framework for software standardization.

    PubMed

    da Veiga Leprevost, Felipe; Grüning, Björn A; Alves Aflitos, Saulo; Röst, Hannes L; Uszkoreit, Julian; Barsnes, Harald; Vaudel, Marc; Moreno, Pablo; Gatto, Laurent; Weber, Jonas; Bai, Mingze; Jimenez, Rafael C; Sachsenberg, Timo; Pfeuffer, Julianus; Vera Alvarez, Roberto; Griss, Johannes; Nesvizhskii, Alexey I; Perez-Riverol, Yasset

    2017-08-15

    BioContainers (biocontainers.pro) is an open-source and community-driven framework which provides platform independent executable environments for bioinformatics software. BioContainers allows labs of all sizes to easily install bioinformatics software, maintain multiple versions of the same software and combine tools into powerful analysis pipelines. BioContainers is based on popular open-source projects Docker and rkt frameworks, that allow software to be installed and executed under an isolated and controlled environment. Also, it provides infrastructure and basic guidelines to create, manage and distribute bioinformatics containers with a special focus on omics technologies. These containers can be integrated into more comprehensive bioinformatics pipelines and different architectures (local desktop, cloud environments or HPC clusters). The software is freely available at github.com/BioContainers/. yperez@ebi.ac.uk.

  12. Stain Specific Standardization of Whole-Slide Histopathological Images.

    PubMed

    Bejnordi, Babak Ehteshami; Litjens, Geert; Timofeeva, Nadya; Otte-Höller, Irene; Homeyer, André; Karssemeijer, Nico; van der Laak, Jeroen A W M

    2016-02-01

    Variations in the color and intensity of hematoxylin and eosin (H&E) stained histological slides can potentially hamper the effectiveness of quantitative image analysis. This paper presents a fully automated algorithm for standardization of whole-slide histopathological images to reduce the effect of these variations. The proposed algorithm, called whole-slide image color standardizer (WSICS), utilizes color and spatial information to classify the image pixels into different stain components. The chromatic and density distributions for each of the stain components in the hue-saturation-density color model are aligned to match the corresponding distributions from a template whole-slide image (WSI). The performance of the WSICS algorithm was evaluated on two datasets. The first originated from 125 H&E stained WSIs of lymph nodes, sampled from 3 patients, and stained in 5 different laboratories on different days of the week. The second comprised 30 H&E stained WSIs of rat liver sections. The results of the qualitative and quantitative evaluations using the first dataset demonstrate that the WSICS algorithm outperforms competing methods in terms of achieving color constancy. The WSICS algorithm consistently yields the smallest standard deviation and coefficient of variation of the normalized median intensity measure. Using the second dataset, we evaluated the impact of our algorithm on the performance of an already published necrosis quantification system. The performance of this system was significantly improved by utilizing the WSICS algorithm. The results of the empirical evaluations collectively demonstrate the potential contribution of the proposed standardization algorithm to improved diagnostic accuracy and consistency in computer-aided diagnosis for histopathology data.
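    A much simpler form of template-based color standardization, per-channel mean and standard-deviation matching, conveys the general idea; the sketch below is not the WSICS algorithm, which instead classifies pixels by stain and aligns hue-saturation-density distributions:

```python
import numpy as np

def match_color_statistics(source, template):
    """Crude template-based color standardization: shift and scale each
    channel of `source` so its mean and standard deviation match those of
    `template`. Both are float RGB arrays in [0, 255]. This per-channel
    matching is far simpler than WSICS, which classifies pixels per stain
    and aligns hue-saturation-density distributions to the template WSI."""
    out = source.astype(float).copy()
    for ch in range(3):
        s_mean, s_std = out[..., ch].mean(), out[..., ch].std()
        t_mean, t_std = template[..., ch].mean(), template[..., ch].std()
        if s_std > 0:
            out[..., ch] = (out[..., ch] - s_mean) * (t_std / s_std) + t_mean
    return np.clip(out, 0, 255)

rng = np.random.default_rng(8)
lab_a = rng.normal([180, 120, 170], 20, size=(64, 64, 3))   # tile stained in lab A
lab_b = rng.normal([200, 140, 190], 15, size=(64, 64, 3))   # template tile from lab B
standardized = match_color_statistics(lab_a, lab_b)
print(standardized.mean(axis=(0, 1)).round(1))              # close to lab B's channel means
```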

  13. Comparison of human observers and CDCOM software reading for CDMAM images

    NASA Astrophysics Data System (ADS)

    Lanconelli, Nico; Rivetti, Stefano; Golinelli, Paola; Serafini, Marco; Bertolini, Marco; Borasi, Giovanni

    2007-03-01

    Contrast-detail analysis is one of the most common ways of assessing the performance of an imaging system. Usually, the reading of phantoms, such as CDMAM, is performed by human observers. The main drawbacks of this practice are the presence of inter-observer variability and the great amount of time needed. However, software programs are available for reading CDMAM images in an automatic way. In this paper we present a comparison of human and software reading of CDMAM images coming from three different FFDM clinical units. Images were acquired at different exposures under the same conditions for the three systems. Once the software has completed the reading, the results are interpreted in the same way as for the human case. CDCOM results are consistent with human analysis if we consider figures such as COR and IQF. On the other hand, we find some discrepancies between the CD curves obtained by human observers and those estimated by automated CDCOM analysis.

  14. Grid-less imaging with antiscatter correction software in 2D mammography: the effects on image quality and MGD under a partial virtual clinical validation study

    NASA Astrophysics Data System (ADS)

    Van Peteghem, Nelis; Bemelmans, Frédéric; Bramaje Adversalo, Xenia; Salvagnini, Elena; Marshall, Nicholas; Bosmans, Hilde; Van Ongeval, Chantal

    2016-03-01

    This work investigated the effect of the grid-less acquisition mode with scatter correction software developed by Siemens Healthcare (PRIME mode) on image quality and mean glandular dose (MGD) in a comparative study against a standard mammography system with grid. Image quality was technically quantified with contrast-detail (c-d) analysis and by calculating detectability indices (d') using a non-prewhitening with eye filter model observer (NPWE). MGD was estimated technically using slabs of PMMA and clinically on a set of 11439 patient images. The c-d analysis gave similar results for all mammographic systems examined, although the d' values were slightly lower for the system with PRIME mode when compared to the same system in standard mode (-2.8% to -5.7%, depending on the PMMA thickness). The MGD values corresponding to the PMMA measurements with automatic exposure control indicated a dose reduction from 11.0% to 20.8% for the system with PRIME mode compared to the same system without PRIME mode. The largest dose reductions corresponded to the thinnest PMMA thicknesses. The results from the clinical dosimetry study showed an overall population-averaged dose reduction of 11.6% (up to 27.7% for thinner breasts) for PRIME mode compared to standard mode for breast thicknesses from 20 to 69 mm. These technical image quality measures were then supported using a clinically oriented study whereby simulated clusters of microcalcifications and masses were inserted into patient images and read by radiologists in an AFROC study to quantify their detectability. In line with the technical investigation, no significant difference was found between the two imaging modes (p-value 0.95).

  15. Geoscience data standards, software implementations, and the Internet. Where we came from and where we might be going.

    NASA Astrophysics Data System (ADS)

    Blodgett, D. L.

    2014-12-01

    Geographic information science and the coupled database and software systems that have grown from it have been evolving since the early 1990s. The multi-file shapefile package, invented early in this evolution, is an example of a highly generalized file format that can be used as an archival format, an interchange format, and a format for program execution. There are other formats, such as GeoTIFF and NetCDF, that have similar characteristics. These de-facto standard formats (in contrast to formally defined and published standards), while not initially designed for machine-readable web-services, are used in them extensively. Relying on these formats allows legacy software to be adapted to web-services, but may require complicated software development to handle dynamic introspection of these legacy file formats' metadata. A generalized system of web-service types that offer archive, interchange, and run-time capabilities based on commonly implemented file formats and established web-service specifications has emerged from exemplar implementations. For example, an Open Geospatial Consortium (OGC) Web Feature Service is used to serve sites or model polygons and an OGC Sensor Observation Service provides time series data for the sites. The broad system of data formats, web-service types, and freely available software that implements the system will be described. The presentation will include a perspective on the future of this basic system and how it relates to scientific domain specific information models such as the Open Geospatial Consortium standards for geographic, hydrologic, and hydrogeologic data.

  16. ARAM: an automated image analysis software to determine rosetting parameters and parasitaemia in Plasmodium samples.

    PubMed

    Kudella, Patrick Wolfgang; Moll, Kirsten; Wahlgren, Mats; Wixforth, Achim; Westerhausen, Christoph

    2016-04-18

    Rosetting is associated with severe malaria and is a primary cause of death in Plasmodium falciparum infections. A detailed understanding of this adhesive phenomenon may enable the development of new therapies interfering with rosette formation. For this, it is crucial to determine parameters such as the rosetting rate and parasitaemia of laboratory strains or patient isolates, a bottleneck in malaria research due to the time-consuming and error-prone manual analysis of specimens. Here, the automated, free, stand-alone analysis software automated rosetting analyzer for micrographs (ARAM), which determines rosetting rate, rosette size distribution and parasitaemia through a convenient graphical user interface, is presented. Automated rosetting analyzer for micrographs is an executable with two operation modes for automated identification of objects in images. The default mode detects red blood cells and fluorescently labelled parasitized red blood cells by combining an intensity-gradient filter with a threshold filter. The second mode determines object location and size distribution from a single contrast method. The obtained results are compared with standardized manual analysis. Automated rosetting analyzer for micrographs calculates statistical confidence probabilities for rosetting rate and parasitaemia. Automated rosetting analyzer for micrographs analyses 25 cell objects per second, reliably delivering results identical to manual analysis. For the first time, rosette size distribution is determined in a precise and quantitative manner by employing ARAM in combination with established inhibition tests. Additionally, ARAM measures the essential observables parasitaemia, rosetting rate and size, as well as the location of all detected objects, and provides confidence intervals for the determined observables. No other existing software solution offers this range of function. The second, non-malaria-specific analysis mode of ARAM offers the functionality to detect arbitrary objects
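
    The default detection mode is described above only at a high level. The sketch below, which is not ARAM's implementation, shows one way an intensity-gradient map combined with a global threshold can pick out cell-like objects; the file name is a hypothetical placeholder:

      # Detect cell objects via a gradient map plus Otsu threshold.
      from skimage import io, filters, measure, morphology

      image = io.imread("giemsa_field.png", as_gray=True)  # hypothetical image

      edges = filters.sobel(image)                   # intensity-gradient map
      mask = edges > filters.threshold_otsu(edges)   # global threshold
      mask = morphology.closing(mask, morphology.disk(3))
      mask = morphology.remove_small_objects(mask, min_size=50)

      labels = measure.label(mask)
      for region in measure.regionprops(labels):
          y, x = region.centroid
          print(f"object at ({x:.1f}, {y:.1f}), area {region.area} px")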

  17. MedXViewer: an extensible web-enabled software package for medical imaging

    NASA Astrophysics Data System (ADS)

    Looney, P. T.; Young, K. C.; Mackenzie, Alistair; Halling-Brown, Mark D.

    2014-03-01

    MedXViewer (Medical eXtensible Viewer) is an application designed to allow workstation-independent, PACS-less viewing and interaction with anonymised medical images (e.g. observer studies). The application was initially implemented for use in digital mammography and tomosynthesis, but the flexible software design allows it to be easily extended to other imaging modalities. Regions of interest can be identified by a user, and any associated information about a mark, an image or a study can be added. The questions and settings can be easily configured depending on the needs of the research, allowing both ROC and FROC studies to be performed. The extensible nature of the design allows other functionality and hanging protocols to be made available for each study. Panning, windowing, zooming and moving through slices are all available, while modality-specific features can be easily enabled, e.g. quadrant zooming in mammographic studies. MedXViewer can integrate with a web-based image database, allowing results and images to be stored centrally. The software and images can be downloaded remotely from this centralised data-store. Alternatively, the software can run without a network connection, in which case the images and results can be encrypted and stored locally on a machine or external drive. Due to the advanced workstation-style functionality, the simple deployment on heterogeneous systems over the internet without a requirement for administrative access, and the ability to utilise a centralised database, MedXViewer has been used for running remote paperless observer studies and is capable of providing a training infrastructure and co-ordinating remote collaborative viewing sessions (e.g. cancer reviews, interesting cases).

  18. Apero, an Open Source Bundle Adjustment Software for Automatic Calibration and Orientation of Set of Images

    NASA Astrophysics Data System (ADS)

    Pierrot Deseilligny, M.; Clery, I.

    2011-09-01

    IGN has developed a set of photogrammetric tools, APERO and MICMAC, for computing 3D models from sets of images. These tools, developed initially for IGN's internal needs, are now delivered as open source code. This paper focuses on the presentation of APERO, the orientation software. Compared to some other free software initiatives, it is probably more complex but also more complete; its targeted users are professionals (architects, archaeologists, geomorphologists) rather than the general public. APERO uses both a computer vision approach for the estimation of an initial solution and photogrammetry for a rigorous compensation of the total error; it has a large library of parametric distortion models allowing a precise modelization of all the kinds of pinhole camera we know, including several fish-eye models; there are also several tools for geo-referencing the results. The results are illustrated on various applications, including the data set of the 3D-Arch workshop.
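
    As an illustration of the parametric camera models such a tool compensates for, the sketch below implements a plain pinhole projection with a two-coefficient radial distortion term and evaluates a reprojection residual, the quantity a bundle adjustment minimises over all cameras and tie points. The focal length, principal point and distortion coefficients are illustrative values, not APERO defaults:

      import numpy as np

      def project(point_cam, f=3500.0, cx=2000.0, cy=1500.0, k1=-0.08, k2=0.01):
          """Project a 3-D point in camera coordinates to distorted pixel coordinates."""
          x, y = point_cam[0] / point_cam[2], point_cam[1] / point_cam[2]  # normalised
          r2 = x * x + y * y
          d = 1.0 + k1 * r2 + k2 * r2 * r2                                 # radial factor
          return np.array([cx + f * d * x, cy + f * d * y])

      # Reprojection residual for one observed image point.
      observed = np.array([2410.3, 1720.8])
      residual = observed - project(np.array([0.4, 0.2, 5.0]))
      print(residual)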

  19. Technical note: DIRART--A software suite for deformable image registration and adaptive radiotherapy research.

    PubMed

    Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Goddu, S Murty; Mutic, Sasa; Deasy, Joseph O; Low, Daniel A

    2011-01-01

    Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a wide range of options for DIR results visualization, evaluation, and validation. By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research.

  20. Technical Note: DIRART- A software suite for deformable image registration and adaptive radiotherapy research

    SciTech Connect

    Yang Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu Yu; Murty Goddu, S.; Mutic, Sasa; Deasy, Joseph O.; Low, Daniel A.

    2011-01-15

    Purpose: Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). Methods: DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. Results: DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a wide range of options for DIR results visualization, evaluation, and validation. Conclusions: By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research.
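
    The inverse consistency mentioned in both records can be checked numerically: composing the forward displacement field with the backward one should return every point close to where it started. The sketch below is a minimal 2-D illustration of that residual, not DIRART code; displacement fields are arrays of shape (2, H, W) in pixel units:

      import numpy as np
      from scipy.ndimage import map_coordinates

      def inverse_consistency_error(dvf_fwd, dvf_bwd):
          h, w = dvf_fwd.shape[1:]
          yy, xx = np.mgrid[0:h, 0:w].astype(float)
          # Follow the forward field, then sample the backward field there and add it on.
          y_f, x_f = yy + dvf_fwd[0], xx + dvf_fwd[1]
          b0 = map_coordinates(dvf_bwd[0], [y_f, x_f], order=1, mode="nearest")
          b1 = map_coordinates(dvf_bwd[1], [y_f, x_f], order=1, mode="nearest")
          residual = np.sqrt((dvf_fwd[0] + b0) ** 2 + (dvf_fwd[1] + b1) ** 2)
          return residual.mean()

      # A perfectly inverse-consistent pair gives an error of (close to) zero.
      zero = np.zeros((2, 64, 64))
      print(inverse_consistency_error(zero, zero))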

  1. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    NASA Astrophysics Data System (ADS)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

    Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in Photogrammetry and Computer Vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they constitute an attractive 3D digitization approach; consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can be easily used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open source packages and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Due to the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method to generate three-dimensional models. A great deal of research has been carried out to identify suitable software and algorithms to achieve an accurate and complete model; however, little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is the evaluation and introduction of an appropriate combination of sensor and software to provide a complete model with the highest accuracy. To do this, different software packages used in previous studies were compared and

  2. New AICPA standards aid accounting for the costs of internal-use software.

    PubMed

    Luecke, R W; Meeting, D T; Klingshirn, R G

    1999-05-01

    Statement of Position (SOP) No. 98-1, "Accounting for the Costs of Computer Software Developed or Obtained for Internal Use," issued by the American Institute of Certified Public Accountants in March 1998, provides financial managers with guidelines regarding which costs involved in developing or obtaining internal-use software should be expensed and which should be capitalized. The SOP identifies three stages in the development of internal-use software: the preliminary project stage, the application development stage, and the postimplementation-operation stage. The SOP provides that all costs incurred during the preliminary project stage should be expensed as incurred. During the application development stage, costs associated with developing or obtaining the software should be capitalized, while costs associated with preparing data for use within the new system should be expensed. Costs incurred during the postimplementation-operation stage, typically associated with training and application maintenance, should be expensed.

  3. Making the PACS workstation a browser of image processing software: a feasibility study using inter-process communication techniques.

    PubMed

    Wang, Chunliang; Ritter, Felix; Smedby, Orjan

    2010-07-01

    To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file while the image processing software runs in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation developer or image processing software developer does not need detailed information about the other system, but will still be able to achieve seamless integration between the two systems, and the IPC procedure is totally transparent to the final user. A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image-processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image processing networks. Image data transfer using shared memory added <10 ms of processing time while the other IPC methods cost 1-5 s in our experiments. The browser-server style communication based on IPC techniques is an appealing method that allows PACS workstation developers and image processing software developers to cooperate while focusing on different interests.
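
    The fast path in the timing results above is shared-memory transfer between two processes. The sketch below illustrates the general idea with Python's multiprocessing.shared_memory module (Python 3.8+); the block name and image size are placeholders, both "sides" run in one script for brevity, and none of this reflects the actual OsiriX/MeVisLab interface:

      import numpy as np
      from multiprocessing import shared_memory

      # "Server" side: allocate a named block and copy an image into it.
      image = np.random.randint(0, 4096, size=(512, 512)).astype(np.uint16)
      shm = shared_memory.SharedMemory(create=True, size=image.nbytes, name="pacs_roi")
      np.ndarray(image.shape, dtype=image.dtype, buffer=shm.buf)[:] = image

      # "Client" side (normally a different process): attach by name, no data copy.
      view = shared_memory.SharedMemory(name="pacs_roi")
      received = np.ndarray((512, 512), dtype=np.uint16, buffer=view.buf)
      print(received.mean())

      view.close()
      shm.close()
      shm.unlink()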

  4. Simulation of tomosynthesis images based on an anthropomorphic software breast tissue phantom

    NASA Astrophysics Data System (ADS)

    Ruiter, Nicole V.; Zhang, Cuiping; Bakic, Predrag R.; Carton, Ann-Katherine; Kuo, Johnny; Maidment, Andrew D. A.

    2008-03-01

    The aim of this work is to provide a simulation framework for generation of synthetic tomosynthesis images to be used for evaluation of future developments in the field of tomosynthesis. An anthropomorphic software tissue phantom was previously used in a number of applications for evaluation of acquisition modalities and image post-processing algorithms for mammograms. This software phantom has been extended for similar use with tomosynthesis. The new features of the simulation framework include a finite element deformation model to obtain realistic mammographic deformation and projection simulation for a variety of tomosynthesis geometries. The resulting projections are provided in DICOM format to be applicable for clinically applied reconstruction algorithms. Examples of simulations using parameters of a currently applied clinical setup are presented. The overall simulation model is generic, allowing multiple degrees of freedom to cover anatomical variety in the amount of glandular tissue, degrees of compression, material models for breast tissues, and tomosynthesis geometries.

  5. The Spectral Image Processing System (SIPS): Software for integrated analysis of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.

    1992-01-01

    The Spectral Image Processing System (SIPS) is a software package developed by the Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, in response to a perceived need to provide integrated tools for analysis of imaging spectrometer data both spectrally and spatially. SIPS was specifically designed to deal with data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the High Resolution Imaging Spectrometer (HIRIS), but was tested with other datasets including the Geophysical and Environmental Research Imaging Spectrometer (GERIS), GEOSCAN images, and Landsat TM. SIPS was developed using the 'Interactive Data Language' (IDL). It takes advantage of high speed disk access and fast processors running under the UNIX operating system to provide rapid analysis of entire imaging spectrometer datasets. SIPS allows analysis of single or multiple imaging spectrometer data segments at full spatial and spectral resolution. It also allows visualization and interactive analysis of image cubes derived from quantitative analysis procedures such as absorption band characterization and spectral unmixing. SIPS consists of three modules: SIPS Utilities, SIPS_View, and SIPS Analysis. SIPS version 1.1 is described below.
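
    One of the quantitative procedures mentioned, spectral unmixing, reduces to a small linear-algebra problem per pixel. The sketch below uses synthetic endmember spectra rather than real AVIRIS data, solves a single pixel's abundance fractions by least squares, and normalises them to sum to one:

      import numpy as np

      bands, n_endmembers = 224, 3
      endmembers = np.abs(np.random.rand(bands, n_endmembers))     # columns = spectra
      true_fractions = np.array([0.6, 0.3, 0.1])
      pixel = endmembers @ true_fractions + 0.01 * np.random.randn(bands)

      fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
      fractions = np.clip(fractions, 0, None)
      fractions /= fractions.sum()                                  # enforce sum-to-one
      print(fractions)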

  6. Using MATLAB software with Tomcat server and Java platform for remote image analysis in pathology

    PubMed Central

    2011-01-01

    Background: Matlab is one of the most advanced development tools for applications in engineering practice. From our point of view the most important component is the image processing toolbox, which offers many built-in functions, including mathematical morphology, and implementations of many artificial neural networks. It is a very popular platform for creating specialized programs for image analysis, including in pathology. Based on the latest version of the Matlab Builder Java toolbox, it is possible to create software serving as a remote system for image analysis in pathology via internet communication. The internet platform can be realized based on JavaServer Pages (JSP) with the Tomcat server as the servlet container. Methods: In the presented software implementation we propose remote image analysis realized by Matlab algorithms. These algorithms can be compiled to an executable jar file with the help of the Matlab Builder Java toolbox. The Matlab function must be declared with a set of input data, an output structure with numerical results and a Matlab web figure. Any function prepared in that manner can be used as a Java function in JSP. The graphical user interface providing the input data and displaying the results (also in graphical form) must be implemented in JSP. Additionally, data storage to a database can be implemented within the algorithm written in Matlab with the help of the Matlab Database Toolbox, directly alongside the image processing. The complete JSP page can be run by the Tomcat server. Results: The proposed tool for remote image analysis was tested on the Computerized Analysis of Medical Images (CAMI) software developed by the author. The user provides the image and case information (diagnosis, staining, image parameters, etc.). When analysis is initialized, the input data and image are sent to the servlet on Tomcat. When the analysis is done, the client obtains the graphical results as an image with the recognized cells marked, as well as the quantitative output. Additionally, the

  7. Using MATLAB software with Tomcat server and Java platform for remote image analysis in pathology.

    PubMed

    Markiewicz, Tomasz

    2011-03-30

    Matlab is one of the most advanced development tools for applications in engineering practice. From our point of view the most important component is the image processing toolbox, which offers many built-in functions, including mathematical morphology, and implementations of many artificial neural networks. It is a very popular platform for creating specialized programs for image analysis, including in pathology. Based on the latest version of the Matlab Builder Java toolbox, it is possible to create software serving as a remote system for image analysis in pathology via internet communication. The internet platform can be realized based on JavaServer Pages (JSP) with the Tomcat server as the servlet container. In the presented software implementation we propose remote image analysis realized by Matlab algorithms. These algorithms can be compiled to an executable jar file with the help of the Matlab Builder Java toolbox. The Matlab function must be declared with a set of input data, an output structure with numerical results and a Matlab web figure. Any function prepared in that manner can be used as a Java function in JSP. The graphical user interface providing the input data and displaying the results (also in graphical form) must be implemented in JSP. Additionally, data storage to a database can be implemented within the algorithm written in Matlab with the help of the Matlab Database Toolbox, directly alongside the image processing. The complete JSP page can be run by the Tomcat server. The proposed tool for remote image analysis was tested on the Computerized Analysis of Medical Images (CAMI) software developed by the author. The user provides the image and case information (diagnosis, staining, image parameters, etc.). When analysis is initialized, the input data and image are sent to the servlet on Tomcat. When the analysis is done, the client obtains the graphical results as an image with the recognized cells marked, as well as the quantitative output. Additionally, the results are stored in a server

  8. Algorithms and Software for Improved CT Imaging: Promotional Slides for Industrial Clients

    DTIC Science & Technology

    2015-09-03

    Briefing charts (viewgraphs) covering 29 June 2015 to 03 September 2015; cleared for public release (clearance #15477, clearance date 03 September 2015), distribution unlimited. The charts include a frequency spectrum of the sinogram columns showing oscillations due to mechanical resonances and other effects.

  9. Implementation and Performance of Automated Software for Computing Right-to-Left Ventricular Diameter Ratio From Computed Tomography Pulmonary Angiography Images.

    PubMed

    Kumamaru, Kanako K; George, Elizabeth; Aghayev, Ayaz; Saboo, Sachin S; Khandelwal, Ashish; Rodríguez-López, Sara; Cai, Tianrun; Jiménez-Carretero, Daniel; Estépar, Raúl San José; Ledesma-Carbayo, Maria J; González, Germán; Rybicki, Frank J

    2016-01-01

    The aim of this study was to prospectively test the performance and potential for clinical integration of software that automatically calculates the right-to-left ventricular (RV/LV) diameter ratio from computed tomography pulmonary angiography images. Using 115 computed tomography pulmonary angiography images that were positive for acute pulmonary embolism, we prospectively evaluated RV/LV ratio measurements that were obtained as follows: (1) completely manual measurement (reference standard), (2) completely automated measurement using the software, and (3 and 4) using a customized software interface that allowed 2 independent radiologists to manually adjust the automatically positioned calipers. Automated measurements underestimated (P < 0.001) the reference standard (1.09 [0.25] vs 1.03 [0.35]). With manual correction of the automatically positioned calipers, the mean ratio became closer to the reference standard (1.06 [0.29] by read 1 and 1.07 [0.30] by read 2), and the correlation improved (r = 0.675 to 0.872 and 0.887). The mean time required for manual adjustment (37 [20] seconds) was significantly less than the time required to perform measurements entirely manually (100 [23] seconds). Automated CT RV/LV diameter ratio software shows promise for integration into the clinical workflow for patients with acute pulmonary embolism.

  10. 2D-CELL: image processing software for extraction and analysis of 2-dimensional cellular structures

    NASA Astrophysics Data System (ADS)

    Righetti, F.; Telley, H.; Leibling, Th. M.; Mocellin, A.

    1992-01-01

    2D-CELL is a software package for the processing and analyzing of photographic images of cellular structures in a largely interactive way. Starting from a binary digitized image, the programs extract the line network (skeleton) of the structure and determine the graph representation that best models it. Provision is made for manually correcting defects such as incorrect node positions or dangling bonds. Then a suitable algorithm retrieves polygonal contours which define individual cells — local boundary curvatures are neglected for simplicity. Using elementary analytical geometry relations, a range of metric and topological parameters describing the population are then computed, organized into statistical distributions and graphically displayed.
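
    A minimal sketch of the first stages of such a pipeline, using scikit-image rather than 2D-CELL itself: skeletonize a binary image of cell walls into a line network, then label the enclosed regions and collect a simple metric descriptor. The input file name is a hypothetical placeholder, and the manual defect correction and graph-modelling steps are omitted:

      import numpy as np
      from skimage import measure, morphology

      binary = np.load("cell_walls.npy")            # hypothetical binary wall image
      skeleton = morphology.skeletonize(binary)     # one-pixel-wide line network

      # Cells are the connected regions enclosed by the skeleton.
      cells = measure.label(~skeleton, connectivity=1)
      areas = [r.area for r in measure.regionprops(cells)]
      print("cells:", len(areas), " mean area:", np.mean(areas))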

  11. TiLIA: a software package for image analysis of firefly flash patterns.

    PubMed

    Konno, Junsuke; Hatta-Ohashi, Yoko; Akiyoshi, Ryutaro; Thancharoen, Anchana; Silalom, Somyot; Sakchoowong, Watana; Yiu, Vor; Ohba, Nobuyoshi; Suzuki, Hirobumi

    2016-05-01

    As flash signaling patterns of fireflies are species specific, signal-pattern analysis is important for understanding this system of communication. Here, we present time-lapse image analysis (TiLIA), a free open-source software package for signal and flight pattern analyses of fireflies that uses video-recorded image data. TiLIA enables flight path tracing of individual fireflies and provides frame-by-frame coordinates and light intensity data. As an example of TiLIA capabilities, we demonstrate flash pattern analysis of the fireflies Luciola cruciata and L. lateralis during courtship behavior.

  12. SOFI Simulation Tool: A Software Package for Simulating and Testing Super-Resolution Optical Fluctuation Imaging

    PubMed Central

    Sharipov, Azat; Geissbuehler, Stefan; Leutenegger, Marcel; Vandenberg, Wim; Dedecker, Peter; Hofkens, Johan; Lasser, Theo

    2016-01-01

    Super-resolution optical fluctuation imaging (SOFI) allows one to perform sub-diffraction fluorescence microscopy of living cells. By analyzing the acquired image sequence with an advanced correlation method, i.e. a high-order cross-cumulant analysis, super-resolution in all three spatial dimensions can be achieved. Here we introduce a software tool for a simple qualitative comparison of SOFI images under simulated conditions considering parameters of the microscope setup and essential properties of the biological sample. This tool incorporates SOFI and STORM algorithms, displays and describes the SOFI image processing steps in a tutorial-like fashion. Fast testing of various parameters simplifies the parameter optimization prior to experimental work. The performance of the simulation tool is demonstrated by comparing simulated results with experimentally acquired data. PMID:27583365

  13. A software tool for automatic classification and segmentation of 2D/3D medical images

    NASA Astrophysics Data System (ADS)

    Strzelecki, Michal; Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur

    2013-02-01

    Modern medical diagnosis utilizes techniques for visualization of human internal organs (CT, MRI) or of their metabolism (PET). However, evaluation of the acquired images by human experts is usually subjective and qualitative only. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform, e.g., attenuation compensation, motion detection, and correction of the partial volume effect in PET images acquired with PET/MR scanners. This article briefly presents the MaZda software package, which supports 2D and 3D medical image analysis aimed at quantification of image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes, combined with various classification, visualization and segmentation tools. Examples of MaZda application in medical studies are also provided.

  14. SOFI Simulation Tool: A Software Package for Simulating and Testing Super-Resolution Optical Fluctuation Imaging.

    PubMed

    Girsault, Arik; Lukes, Tomas; Sharipov, Azat; Geissbuehler, Stefan; Leutenegger, Marcel; Vandenberg, Wim; Dedecker, Peter; Hofkens, Johan; Lasser, Theo

    2016-01-01

    Super-resolution optical fluctuation imaging (SOFI) allows one to perform sub-diffraction fluorescence microscopy of living cells. By analyzing the acquired image sequence with an advanced correlation method, i.e. a high-order cross-cumulant analysis, super-resolution in all three spatial dimensions can be achieved. Here we introduce a software tool for a simple qualitative comparison of SOFI images under simulated conditions considering parameters of the microscope setup and essential properties of the biological sample. This tool incorporates SOFI and STORM algorithms, displays and describes the SOFI image processing steps in a tutorial-like fashion. Fast testing of various parameters simplifies the parameter optimization prior to experimental work. The performance of the simulation tool is demonstrated by comparing simulated results with experimentally acquired data.
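
    The heart of the cumulant analysis is compact enough to show directly: at second order, the auto-cumulant of each pixel's intensity trace is simply its temporal variance, which suppresses uncorrelated background and sharpens the image. The sketch below computes such a second-order image from a hypothetical blinking-emitter stack; the published tool of course goes much further, with higher-order cross-cumulants and 3D support:

      import numpy as np

      stack = np.load("blinking_stack.npy").astype(float)  # hypothetical array (T, H, W)
      mean_image = stack.mean(axis=0)
      sofi2 = ((stack - mean_image) ** 2).mean(axis=0)     # 2nd-order auto-cumulant

      print(mean_image.shape, sofi2.shape)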

  15. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition.

    PubMed

    Sun, Ryan; Bouchard, Matthew B; Hillman, Elizabeth M C

    2010-08-02

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software's framework and provide details to guide users with development of this and similar software.

  16. The Human Physiome: how standards, software and innovative service infrastructures are providing the building blocks to make it achievable

    PubMed Central

    2016-01-01

    Reconstructing and understanding the Human Physiome virtually is a complex mathematical problem, and a highly demanding computational challenge. Mathematical models spanning from the molecular level through to whole populations of individuals must be integrated, then personalized. This requires interoperability with multiple disparate and geographically separated data sources, and myriad computational software tools. Extracting and producing knowledge from such sources, even when the databases and software are readily available, is a challenging task. Despite the difficulties, researchers must frequently perform these tasks so that available knowledge can be continually integrated into the common framework required to realize the Human Physiome. Software and infrastructures that support the communities that generate these, together with their underlying standards to format, describe and interlink the corresponding data and computer models, are pivotal to the Human Physiome being realized. They provide the foundations for integrating, exchanging and re-using data and models efficiently, and correctly, while also supporting the dissemination of growing knowledge in these forms. In this paper, we explore the standards, software tooling, repositories and infrastructures that support this work, and detail what makes them vital to realizing the Human Physiome. PMID:27051515

  17. The Human Physiome: how standards, software and innovative service infrastructures are providing the building blocks to make it achievable.

    PubMed

    Nickerson, David; Atalag, Koray; de Bono, Bernard; Geiger, Jörg; Goble, Carole; Hollmann, Susanne; Lonien, Joachim; Müller, Wolfgang; Regierer, Babette; Stanford, Natalie J; Golebiewski, Martin; Hunter, Peter

    2016-04-06

    Reconstructing and understanding the Human Physiome virtually is a complex mathematical problem, and a highly demanding computational challenge. Mathematical models spanning from the molecular level through to whole populations of individuals must be integrated, then personalized. This requires interoperability with multiple disparate and geographically separated data sources, and myriad computational software tools. Extracting and producing knowledge from such sources, even when the databases and software are readily available, is a challenging task. Despite the difficulties, researchers must frequently perform these tasks so that available knowledge can be continually integrated into the common framework required to realize the Human Physiome. Software and infrastructures that support the communities that generate these, together with their underlying standards to format, describe and interlink the corresponding data and computer models, are pivotal to the Human Physiome being realized. They provide the foundations for integrating, exchanging and re-using data and models efficiently, and correctly, while also supporting the dissemination of growing knowledge in these forms. In this paper, we explore the standards, software tooling, repositories and infrastructures that support this work, and detail what makes them vital to realizing the Human Physiome.

  18. Space Station Software Issues

    NASA Technical Reports Server (NTRS)

    Voigt, S. (Editor); Beskenis, S. (Editor)

    1985-01-01

    Issues in the development of software for the Space Station are discussed. Software acquisition and management, software development environment, standards, information system support for software developers, and a future software advisory board are addressed.

  19. Enhanced simulator software for image validation and interpretation for multimodal localization super-resolution fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Erdélyi, Miklós; Sinkó, József; Gajdos, Tamás.; Novák, Tibor

    2017-02-01

    Optical super-resolution techniques such as single molecule localization have become one of the most dynamically developing areas in optical microscopy. These techniques routinely provide images of fixed cells or tissues with sub-diffraction spatial resolution, and can even be applied to live cell imaging under appropriate circumstances. Localization techniques are based on the precise fitting of point spread functions (PSF) to the measured images of stochastically excited, identical fluorescent molecules. These techniques require controlling the rates of transition between the on, off and bleached states, keeping the number of active fluorescent molecules at an optimum value so that their diffraction-limited images can be detected separately both spatially and temporally. Because of the numerous (and sometimes unknown) parameters, the imaging system can only be handled stochastically. For example, the rotation of the dye molecules obscures the polarization-dependent PSF shape, and only an averaged distribution - typically estimated by a Gaussian function - is observed. The TestSTORM software was developed to generate image stacks for traditional localization microscopes, where localization means the precise determination of the spatial position of the molecules. However, additional optical properties (polarization, spectrum, etc.) of the emitted photons can be used to further monitor the chemical and physical properties (viscosity, pH, etc.) of the local environment. The image stack generating program has been upgraded with several new features, such as multicolour imaging, polarization-dependent PSF, built-in 3D visualization and structured background. These features make the program an ideal tool for optimizing the imaging and sample preparation conditions.
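
    The Gaussian approximation of the PSF mentioned above is also what a typical localization step fits. A minimal sketch on a synthetic spot (not TestSTORM output), recovering a sub-pixel centre with scipy's curve_fit:

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian2d(coords, amp, x0, y0, sigma, offset):
          x, y = coords
          g = amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset
          return g.ravel()

      yy, xx = np.mgrid[0:15, 0:15].astype(float)
      truth = (1000.0, 7.3, 6.8, 1.4, 100.0)
      spot = gaussian2d((xx, yy), *truth).reshape(15, 15)
      spot = np.random.poisson(spot).astype(float)          # shot noise

      p0 = (spot.max() - spot.min(), 7.0, 7.0, 1.5, spot.min())
      popt, _ = curve_fit(gaussian2d, (xx, yy), spot.ravel(), p0=p0)
      print("fitted centre: x=%.2f, y=%.2f" % (popt[1], popt[2]))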

  20. Image 100 procedures manual development: Applications system library definition and Image 100 software definition

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Decell, H. P., Jr.

    1975-01-01

    An outline for an Image 100 procedures manual for Earth Resources Program image analysis was developed which sets forth guidelines that provide a basis for the preparation and updating of an Image 100 Procedures Manual. The scope of the outline was limited to definition of general features of a procedures manual together with special features of an interactive system. Computer programs were identified which should be implemented as part of an applications oriented library for the system.

  1. Software-based PET-MR image coregistration: combined PET-MRI for the rest of us!

    PubMed

    Robertson, Matthew S; Liu, Xinyang; Plishker, William; Zaki, George F; Vyas, Pranav K; Safdar, Nabile M; Shekhar, Raj

    2016-10-01

    With the introduction of hybrid positron emission tomography/magnetic resonance imaging (PET/MRI), a new imaging option to acquire multimodality images with complementary anatomical and functional information has become available. Compared with hybrid PET/computed tomography (CT), hybrid PET/MRI is capable of providing superior anatomical detail while removing the radiation exposure associated with CT. The early adoption of hybrid PET/MRI, however, has been limited. To provide a viable alternative to the hybrid PET/MRI hardware by validating a software-based solution for PET-MR image coregistration. A fully automated, graphics processing unit-accelerated 3-D deformable image registration technique was used to align PET (acquired as PET/CT) and MR image pairs of 17 patients (age range: 10 months-21 years, mean: 10 years) who underwent PET/CT and body MRI (chest, abdomen or pelvis), which were performed within a 28-day (mean: 10.5 days) interval. MRI data for most of these cases included single-station post-contrast axial T1-weighted images. Following registration, maximum standardized uptake value (SUVmax) values observed in coregistered PET (cPET) and the original PET were compared for 82 volumes of interest. In addition, we calculated the target registration error as a measure of the quality of image coregistration, and evaluated the algorithm's performance in the context of interexpert variability. The coregistration execution time averaged 97±45 s. The overall relative SUVmax difference was 7% between cPET-MRI and PET/CT. The average target registration error was 10.7±6.6 mm, which compared favorably with the typical voxel size (diagonal distance) of 8.0 mm (typical resolution: 0.66 mm × 0.66 mm × 8 mm) for MRI and 6.1 mm (typical resolution: 3.65 mm × 3.65 mm × 3.27 mm) for PET. The variability in landmark identification did not show statistically significant differences between the algorithm and a typical expert. We have

  2. Open source software for the analysis of corneal deformation parameters on the images from the Corvis tonometer.

    PubMed

    Koprowski, Robert

    2015-04-11

    The software supplied with the Corvis tonometer (which is designed to measure intraocular pressure with the use of the air-puff method) is limited to providing basic numerical data. These data relate to the values of the measured intraocular pressure and, for example, applanation amplitudes. However, on the basis of a sequence of images obtained from the Corvis tonometer, it is possible to obtain much more information which is not available in its original software. This will be presented in this paper. The proposed software has been tested on 1400 images from the Corvis tonometer. The number of analysed 2D images (with a resolution of 200 × 576 pixels) in a sequence is arbitrary; however, in typical cases there are 140 images. The proposed software has been written in Matlab (Version 7.11.0.584, R2010b). Methods of image analysis and processing, in particular edge detection and the fast Fourier transform, have been applied. The software allows for the fully automatic (1) acquisition of 12 new parameters previously unavailable in the original software of the Corvis tonometer. It also enables off-line (2) manual and (3) automatic browsing of images in a sequence; 3D graph visualization of (4) the corneal deformation and (5) the eyeball response; (6) change of the colour palette; (7) filtration; and (8) visualization of selected measured values on individual 2D images. In addition, the proposed software enables the user (9) to save the obtained results for further analysis and processing. The dedicated software described in this paper makes it possible to obtain additional new features of corneal deformation during intraocular pressure measurement. The software can be applied to the diagnosis of corneal deformation vibrations, glaucoma diagnosis, evaluation of measurement repeatability and other tasks. The software has no licensing restrictions and can be used both commercially and non-commercially without any limitations.
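
    A rough sketch of the kind of analysis described, assuming a hypothetical sequence file and an assumed frame rate: find the strongest edge in a central column of each frame to track the anterior corneal surface, then inspect the spectrum of the resulting trace with an FFT. This illustrates the approach, not the paper's algorithm:

      import numpy as np

      frames = np.load("corvis_sequence.npy").astype(float)  # hypothetical (140, 200, 576)
      centre_col = frames.shape[2] // 2
      apex = np.array([np.argmax(np.abs(np.diff(f[:, centre_col]))) for f in frames],
                      dtype=float)                            # strongest vertical edge

      frame_rate = 4330.0                                      # assumed frames per second
      spectrum = np.abs(np.fft.rfft(apex - apex.mean()))
      freqs = np.fft.rfftfreq(apex.size, d=1.0 / frame_rate)
      print("dominant frequency: %.1f Hz" % freqs[np.argmax(spectrum[1:]) + 1])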

  3. [Development of DICOM image viewing software for efficient image reading and evaluation of distributed server system for diagnostic environment].

    PubMed

    Ishikawa, K

    2000-12-01

    To construct an efficient diagnostic environment using computer displays, the author investigated network transmission times using clinical images. In our hospital, we introduced optical-fiber 100Base-FX Ethernet connections between 22 HIS segments and one RIS segment. Although the Ethernet architecture is inexpensive, the measured image transmission speed was 2371 KB/sec (4.6 CT slices/sec) within the RIS segment and 996 KB/sec (1.9 CT slices/sec) from the RIS segment to the HIS segments. Because one examination is transmitted in about one minute, this does not disturb image reading. In addition, a distributed server system using inexpensive personal computers helps in constructing an efficient system. This investigation showed that commercially available Digital Imaging and Communications in Medicine (DICOM) servers and RSNA Central Test Node servers do not differ greatly in transmission speed. The author programmed and developed DICOM transmission and viewing software for Macintosh computers. This viewer includes two inventions, a dynamic tiling window system (DTWS) and a window binding mode (WBM). In DTWS, windows, tiles, and images are independent objects that are movable and resizable. The tile matrix can be changed by mouse dragging, which provides suitable tile rectangles for wide-and-low or narrow-and-high images. The window arranging tool prevents windows from scattering. Using WBM, any operation affects each bound window in the same way, which means that the relationship between compared images always remains equivalent. DTWS and WBM contribute greatly to a filmless diagnostic environment.

  4. Automated Scoring of Chromogenic Media for Detection of Methicillin-Resistant Staphylococcus aureus by Use of WASPLab Image Analysis Software.

    PubMed

    Faron, Matthew L; Buchan, Blake W; Vismara, Chiara; Lacchini, Carla; Bielli, Alessandra; Gesu, Giovanni; Liebregts, Theo; van Bree, Anita; Jansz, Arjan; Soucy, Genevieve; Korver, John; Ledeboer, Nathan A

    2016-03-01

    Recently, systems have been developed to create total laboratory automation for clinical microbiology. These systems allow for the automation of specimen processing, specimen incubation, and imaging of bacterial growth. In this study, we used the WASPLab to validate software that discriminates and segregates positive and negative chromogenic methicillin-resistant Staphylococcus aureus (MRSA) plates by recognition of pigmented colonies. A total of 57,690 swabs submitted for MRSA screening were enrolled in the study. Four sites enrolled specimens following their standard of care. Chromogenic agar used at these sites included MRSASelect (Bio-Rad Laboratories, Redmond, WA), chromID MRSA (bioMérieux, Marcy l'Etoile, France), and CHROMagar MRSA (BD Diagnostics, Sparks, MD). Specimens were plated and incubated using the WASPLab. The digital camera took images at 0 and 16 to 24 h and the WASPLab software determined the presence of positive colonies based on a hue, saturation, and value (HSV) score. If the HSV score fell within a defined threshold, the plate was called positive. The performance of the digital analysis was compared to manual reading. Overall, the digital software had a sensitivity of 100% and a specificity of 90.7% with the specificity ranging between 90.0 and 96.0 across all sites. The results were similar using the three different agars with a sensitivity of 100% and specificity ranging between 90.7 and 92.4%. These data demonstrate that automated digital analysis can be used to accurately sort positive from negative chromogenic agar cultures regardless of the pigmentation produced. Copyright © 2016 Faron et al.

  5. Automated Scoring of Chromogenic Media for Detection of Methicillin-Resistant Staphylococcus aureus by Use of WASPLab Image Analysis Software

    PubMed Central

    Faron, Matthew L.; Vismara, Chiara; Lacchini, Carla; Bielli, Alessandra; Gesu, Giovanni; Liebregts, Theo; van Bree, Anita; Jansz, Arjan; Soucy, Genevieve; Korver, John

    2015-01-01

    Recently, systems have been developed to create total laboratory automation for clinical microbiology. These systems allow for the automation of specimen processing, specimen incubation, and imaging of bacterial growth. In this study, we used the WASPLab to validate software that discriminates and segregates positive and negative chromogenic methicillin-resistant Staphylococcus aureus (MRSA) plates by recognition of pigmented colonies. A total of 57,690 swabs submitted for MRSA screening were enrolled in the study. Four sites enrolled specimens following their standard of care. Chromogenic agar used at these sites included MRSASelect (Bio-Rad Laboratories, Redmond, WA), chromID MRSA (bioMérieux, Marcy l'Etoile, France), and CHROMagar MRSA (BD Diagnostics, Sparks, MD). Specimens were plated and incubated using the WASPLab. The digital camera took images at 0 and 16 to 24 h and the WASPLab software determined the presence of positive colonies based on a hue, saturation, and value (HSV) score. If the HSV score fell within a defined threshold, the plate was called positive. The performance of the digital analysis was compared to manual reading. Overall, the digital software had a sensitivity of 100% and a specificity of 90.7% with the specificity ranging between 90.0 and 96.0 across all sites. The results were similar using the three different agars with a sensitivity of 100% and specificity ranging between 90.7 and 92.4%. These data demonstrate that automated digital analysis can be used to accurately sort positive from negative chromogenic agar cultures regardless of the pigmentation produced. PMID:26719443
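
    The hue-saturation-value scoring described in these records can be illustrated with a few lines of OpenCV. The sketch below is a generic colour-window check rather than the WASPLab algorithm; the file name, colour window and decision threshold are placeholders that would need calibration for a real medium and camera:

      import cv2
      import numpy as np

      plate = cv2.imread("chromogenic_plate.jpg")      # hypothetical plate photograph
      hsv = cv2.cvtColor(plate, cv2.COLOR_BGR2HSV)

      # Illustrative window for a pink/mauve colony colour (OpenCV hue range 0-179).
      lower = np.array([140, 60, 60])
      upper = np.array([175, 255, 255])
      mask = cv2.inRange(hsv, lower, upper)

      positive_pixels = int(np.count_nonzero(mask))
      print("positive" if positive_pixels > 500 else "negative", positive_pixels)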

  6. Vascular enhancement and image quality of CT venography: comparison of standard and low kilovoltage settings.

    PubMed

    Fujikawa, Atsuko; Matsuoka, Shin; Kuramochi, Kenji; Yoshikawa, Tatsuo; Yagihashi, Kunihiro; Kurihara, Yasuyuki; Nakajima, Yasuo

    2011-10-01

    The objective of our study was to investigate the vascular enhancement and image quality of CT venography (CTV) with a lower peak kilovoltage (kVp) setting than the standard setting. In this retrospective study, the clinical records of 100 consecutive patients with suspected pulmonary embolism were analyzed. All patients underwent pulmonary CT angiography and CTV of the abdomen, pelvis, and lower extremities using 64-MDCT with automatic tube current modulation: 50 patients underwent CT at 120 kVp, the standard kVp setting, and 50 patients were scanned at 100 kVp; we refer to these groups as the "standard-kVp group" and the "low-kVp group," respectively. Vessel enhancement and image noise were assessed in the inferior vena cava (IVC), femoral vein, and popliteal vein. Two radiologists who were blinded to the kVp setting placed the regions of interest on vessels by consensus and assessed image quality using a 5-point visual scale. Effective dose was estimated using the dose-length product. The Wilcoxon rank test was used to evaluate differences between the two groups using statistics software (JMP, version 5.1). A p value of less than 0.05 was considered to indicate statistical significance. Mean vascular enhancement was significantly higher in the low-kVp group than in the standard-kVp group: IVC, 138.4 ± 12.2 (SD) HU in the standard-kVp group versus 164.5 ± 17.4 HU in the low-kVp group; femoral vein, 130.2 ± 18.0 HU versus 152.0 ± 24.5 HU; and popliteal vein, 136.7 ± 17.5 HU versus 158.3 ± 26.0 HU. Although the images of the low-kVp group had significantly higher image noise, there were no significant differences in image quality in the IVC and popliteal vein. The mean effective dose for the low-kVp protocol was significantly lower than that for the standard-kVp protocol. Lowering the kVp setting for CTV examinations improved vascular enhancement while providing sufficient image quality.

  7. Comparison of an imaging software and manual prediction of soft tissue changes after orthognathic surgery.

    PubMed

    Ahmad Akhoundi, M S; Shirani, G; Arshad, M; Heidar, H; Sodagar, A

    2012-01-01

    Accurate prediction of the surgical outcome is important in treating dentofacial deformities. Visualized treatment objectives usually involve manual surgical simulation based on tracing of cephalometric radiographs. Recent technical advancements have led to the use of computer-assisted imaging systems in treatment planning for orthognathic surgical cases. The purpose of this study was to examine and compare the ability and reliability of digitization using Dolphin Imaging Software with traditional manual techniques, and to compare orthognathic predictions with actual outcomes. Forty patients, consisting of 35 women and 5 men (32 class III and 8 class II) with no previous surgery, were evaluated by manual tracing and indirect digitization using Dolphin Imaging Software. The reliability of each method was assessed, and then the two techniques were compared using a paired t test. The nasal tip presented the smallest prediction error and the highest reliability. The least accurate regions were the subnasale and upper lip in the vertical plane, and the subnasale and pogonion in the horizontal plane. There were no statistically significant differences between the predictions for groups with and without genioplasty. Computer-generated image prediction was suitable for patient education and communication. However, efforts are still needed to improve the accuracy and reliability of the prediction program and to include changes in soft tissue tension and muscle strain.

  8. A PC-based 3D imaging system: algorithms, software, and hardware considerations.

    PubMed

    Raya, S P; Udupa, J K; Barrett, W A

    1990-01-01

    Three-dimensional (3D) imaging in medicine is known to produce easily and quickly derivable medically relevant information, especially in complex situations. We intend to demonstrate in this paper, that with an appropriate choice of approaches and a proper design of algorithms and software, it is possible to develop a low-cost 3D imaging system that can provide a level of performance sufficient to meet the daily case load in an individual or even group-practice situation. We describe hardware considerations of a generic system and give an example of a specific system we used for our implementation. Given a 3D image as a stack of slices, we generate a packed binary cubic voxel array, by combining segmentation (density thresholding), interpolation, and packing in an efficient way. Since threshold-based segmentation is very often not perfect, object-like structures and noise clutter the binary scene. We utilize an effective mechanism to isolate the object from this clutter by tracking a specified, connected surface of the object. The surface description thus obtained is rendered to create a depiction of the surface on a 2D display screen. Efficient implementation of hidden-part removal and image-space shading and a simple and fast antialiasing technique provide a level of performance which otherwise would not have been possible in a PC environment. We outline our software emphasizing some design aspects and present some clinical examples.
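
    The pipeline in this abstract (density thresholding into a binary voxel array, isolating the object of interest, producing a renderable surface) can be prototyped in a few lines today. The sketch below substitutes scikit-image's marching cubes for the paper's connected-surface tracking and shaded rendering; the file name and threshold are hypothetical:

      import numpy as np
      from skimage import measure

      volume = np.load("ct_stack.npy").astype(np.float32)  # hypothetical (Z, Y, X) stack
      binary = volume > 300.0                               # density threshold

      # Keep the largest connected component to drop noise and object-like clutter.
      labels = measure.label(binary)
      counts = np.bincount(labels.ravel())
      largest = labels == (np.argmax(counts[1:]) + 1)

      verts, faces, normals, values = measure.marching_cubes(largest.astype(np.uint8),
                                                             level=0.5)
      print(len(verts), "surface vertices,", len(faces), "triangles")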

  9. Software Development for Producing Standard Navy Surf Output from Delft3D

    DTIC Science & Technology

    2006-12-29

    We thank Mr. Ted Mettlach, formerly at Neptune Sciences Inc., for evaluating the software.

  10. Study on image processing of panoramic X-ray using deviation improvement software.

    PubMed

    Kim, Tae-Gon; Lee, Yang-Sun; Kim, Young-Pyo; Park, Yong-Pil; Cheon, Min-Woo

    2014-01-01

    The use of panoramic X-ray devices is becoming more widespread. Panoramic X-ray has lower resolution than general X-ray devices, and distortion occurs through deviations introduced during image synthesis. Due to these structural problems, it has been used mainly to identify tooth structure rather than to image the whole head. A panoramic X-ray device was therefore designed and produced that extends the diagnostic coverage, with adjustable interval control between the X-ray generator and the image processing, so that the whole maxillofacial region can be diagnosed. The produced panoramic X-ray device is based on the synthesis of short image segments. The results were confirmed using a device that applied a correction for the brightness deviation of the image, a filter to improve the positional deviation, and an interpolation method. In this study, 13 images, including the frontal view, were used. Brightness deviation, position deviation, and geometric errors occur during image synthesis, but these were resolved by the deviation improvement software and by changing the scan line of the CCD camera used for image acquisition. The results confirm that the range of application of the commonly used panoramic X-ray device can be expanded.

  11. An advanced software suite for the processing and analysis of silicon luminescence images

    NASA Astrophysics Data System (ADS)

    Payne, D. N. R.; Vargas, C.; Hameiri, Z.; Wenham, S. R.; Bagnall, D. M.

    2017-06-01

    Luminescence imaging is a versatile characterisation technique used for a broad range of research and industrial applications, particularly for the field of photovoltaics where photoluminescence and electroluminescence imaging is routinely carried out for materials analysis and quality control. Luminescence imaging can reveal a wealth of material information, as detailed in extensive literature, yet these techniques are often only used qualitatively instead of being utilised to their full potential. Part of the reason for this is the time and effort required for image processing and analysis in order to convert image data to more meaningful results. In this work, a custom built, Matlab based software suite is presented which aims to dramatically simplify luminescence image processing and analysis. The suite includes four individual programs which can be used in isolation or in conjunction to achieve a broad array of functionality, including but not limited to, point spread function determination and deconvolution, automated sample extraction, image alignment and comparison, minority carrier lifetime calibration and iron impurity concentration mapping.
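
    One representative operation such a suite automates is deconvolution of a luminescence image with a measured point spread function. The sketch below uses scikit-image's Richardson-Lucy routine as a stand-in for whatever deconvolution the suite actually implements; the input file names are placeholders:

      import numpy as np
      from skimage import restoration

      image = np.load("pl_image.npy").astype(float)   # hypothetical luminescence image
      psf = np.load("psf.npy").astype(float)          # measured point spread function
      psf /= psf.sum()

      deconvolved = restoration.richardson_lucy(image / image.max(), psf, num_iter=30)
      print(deconvolved.shape)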

  12. [Accuracy of morphological simulation for orthognathic surgery. Assessment of a 3D image fusion software].

    PubMed

    Terzic, A; Schouman, T; Scolozzi, P

    2013-08-06

    The CT/CBCT data allow for 3D reconstruction of the skeletal and untextured soft tissue volume. 3D stereophotogrammetry technology has strongly improved the quality of facial soft tissue surface texture. The combination of these two technologies allows for an accurate and complete reconstruction. The 3D virtual head may be used for orthognathic surgical planning, virtual surgery, and morphological simulation obtained with software dedicated to the fusion of 3D photogrammetric and radiological images. The imaging material includes a multi-slice CT scan or broad-field CBCT scan and a 3D photogrammetric camera. The operative image processing protocol includes the following steps: 1) pre- and postoperative CT/CBCT scan and 3D photogrammetric image acquisition; 2) 3D image segmentation and fusion of the untextured CT/CBCT skin with the preoperative textured facial soft tissue surface of the 3D photogrammetric scan; 3) image fusion of the pre- and postoperative CT/CBCT data sets, virtual osteotomies, and 3D photogrammetric soft tissue virtual simulation; 4) fusion of the virtual simulated 3D photogrammetric and real postoperative images, and assessment of accuracy using a color-coded scale to measure the differences between the two surfaces. Copyright © 2013. Published by Elsevier Masson SAS.

  13. Development of an Open Source Image-Based Flow Modeling Software - SimVascular

    NASA Astrophysics Data System (ADS)

    Updegrove, Adam; Merkow, Jameson; Schiavazzi, Daniele; Wilson, Nathan; Marsden, Alison; Shadden, Shawn

    2014-11-01

    SimVascular (www.simvascular.org) is currently the only comprehensive software package that provides a complete pipeline from medical image data segmentation to patient specific blood flow simulation. This software and its derivatives have been used in hundreds of conference abstracts and peer-reviewed journal articles, as well as the foundation of medical startups. SimVascular was initially released in August 2007, yet major challenges and deterrents for new adopters were the requirement of licensing three expensive commercial libraries utilized by the software, a complicated build process, and a lack of documentation, support and organized maintenance. In the past year, the SimVascular team has made significant progress to integrate open source alternatives for the linear solver, solid modeling, and mesh generation commercial libraries required by the original public release. In addition, the build system, available distributions, and graphical user interface have been significantly enhanced. Finally, the software has been updated to enable users to directly run simulations using models and boundary condition values, included in the Vascular Model Repository (vascularmodel.org). In this presentation we will briefly overview the capabilities of the new SimVascular 2.0 release. National Science Foundation.

  14. Consistent image presentation implemented using DICOM grayscale standard display function

    NASA Astrophysics Data System (ADS)

    Kump, Kenneth S.; Omernick, Jon; French, John

    2000-04-01

    In this paper, we evaluate our ability to achieve consistent image presentation across a wide range of output devices, focusing on digital x-ray radiography for chest applications. In particular, we focus on dry versus wet printers for hardcopy prints. In this evaluation, we review the expected theoretical variability using the DICOM grayscale standard display function (GSDF). The GSDF maps DICOM presentation values to luminance values that are perceived by a human. We present our methodology for calibrating devices as evaluated on sixteen printers. Seven devices were selected for a human observer study to determine if there are perceptible differences in the presentation of a given image, focusing on differences between wet and dry processes. It was found that wet printers were preferred; however, there may be other logistical and practical reasons why dry printers may be used.
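
    The GSDF referred to above is defined in DICOM PS3.14 as a fitted relationship between a just-noticeable-difference (JND) index and luminance. The sketch below evaluates that relationship in Python; the coefficients are copied from the published standard as the author recalls them and should be verified against the current edition before being used for an actual calibration.

```python
import numpy as np

# Coefficients of the DICOM Grayscale Standard Display Function (PS3.14);
# copied from the standard here and worth re-checking against the current
# edition before use in a real calibration.
A, B, C, D = -1.3011877, -2.5840191e-2, 8.0242636e-2, -1.0320229e-1
E, F, G, H = 1.3646699e-1, 2.8745620e-2, -2.5468404e-2, -3.1978977e-3
K, M = 1.2992634e-4, 1.3635334e-3

def gsdf_luminance(j):
    """Luminance in cd/m^2 for JND index j (valid for 1 <= j <= 1023)."""
    x = np.log(j)
    num = A + C * x + E * x**2 + G * x**3 + M * x**4
    den = 1 + B * x + D * x**2 + F * x**3 + H * x**4 + K * x**5
    return 10.0 ** (num / den)

# A display or printer is calibrated by spacing its presentation values evenly
# in JND index between the device's measured minimum and maximum luminance.
j = np.arange(1, 1024)
luminance = gsdf_luminance(j)
print(luminance.min(), luminance.max())  # roughly 0.05 to a few thousand cd/m^2
```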

  15. Software designs of image processing tasks with incremental refinement of computation.

    PubMed

    Anastasia, Davide; Andreopoulos, Yiannis

    2010-08-01

    Software realizations of computationally demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
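
    The paper's packing framework is not reproduced here, but the sketch below illustrates the underlying idea of bitplane-based incremental computation: a linear filter is evaluated bitplane by bitplane, from most to least significant bit, so that a usable (coarse) result exists after every increment and simply becomes more precise as more cycles are spent. The box filter and image size are arbitrary assumptions.

```python
import numpy as np

def incremental_mean_filter(image8, kernel=3):
    """Toy illustration of incremental refinement: a box filter computed
    bitplane by bitplane, from most to least significant bit. After each
    increment the current (coarse) result could already be returned if the
    cycle budget runs out. Not the packing scheme of the paper, just the idea."""
    pad = kernel // 2
    acc = np.zeros(image8.shape, dtype=np.float64)
    for bit in range(7, -1, -1):                      # MSB first
        plane = ((image8 >> bit) & 1).astype(np.float64) * (1 << bit)
        padded = np.pad(plane, pad, mode="edge")
        # filter this bitplane and add its contribution to the running result
        for dy in range(kernel):
            for dx in range(kernel):
                acc += padded[dy:dy + plane.shape[0], dx:dx + plane.shape[1]]
        yield bit, acc / (kernel * kernel)            # refined output so far

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
for bit, partial in incremental_mean_filter(img):
    print(f"bitplanes down to {bit}: mean output {partial.mean():.1f}")
```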

  16. CONRAD--a software framework for cone-beam imaging in radiology.

    PubMed

    Maier, Andreas; Hofmann, Hannes G; Berger, Martin; Fischer, Peter; Schwemmer, Chris; Wu, Haibo; Müller, Kerstin; Hornegger, Joachim; Choi, Jang-Hwan; Riess, Christian; Keil, Andreas; Fahrig, Rebecca

    2013-11-01

    In the community of x-ray imaging, there is a multitude of tools and applications that are used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools that are required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects. CONRAD is a Java-based state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL. There is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and respective vector motion fields. Well known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source. A total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74,000 nonblank lines of code out of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a table top system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art runtimes with 2.5 s for the 256 problem size and 12.4 s for the 512 problem size. As a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular this offers new opportunities for scientific collaboration and quantitative performance comparison
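
    CONRAD itself is a Java framework aimed at cone-beam geometries; as a modality-agnostic illustration of the filtered back-projection (FBP) algorithm mentioned above, the sketch below reconstructs a parallel-beam phantom with scikit-image. It is only a toy example and makes no attempt to model the nonlinear physical effects that CONRAD simulates.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# Parallel-beam toy example of filtered back-projection; CONRAD handles
# cone-beam geometries, which this sketch does not attempt.
phantom = resize(shepp_logan_phantom(), (256, 256))
angles = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(phantom, theta=angles)          # forward projection
reconstruction = iradon(sinogram, theta=angles)  # FBP with the default ramp filter

error = np.sqrt(np.mean((reconstruction - phantom) ** 2))
print(f"RMS reconstruction error: {error:.4f}")
```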

  17. CONRAD—A software framework for cone-beam imaging in radiology

    PubMed Central

    Maier, Andreas; Hofmann, Hannes G.; Berger, Martin; Fischer, Peter; Schwemmer, Chris; Wu, Haibo; Müller, Kerstin; Hornegger, Joachim; Choi, Jang-Hwan; Riess, Christian; Keil, Andreas; Fahrig, Rebecca

    2013-01-01

    Purpose: In the community of x-ray imaging, there is a multitude of tools and applications that are used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools that are required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects. Methods: CONRAD is a Java-based state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL. There is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and respective vector motion fields. Well known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source. Results: A total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74,000 nonblank lines of code out of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a table top system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art runtimes with 2.5 s for the 256 problem size and 12.4 s for the 512 problem size. Conclusions: As a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular this offers new opportunities for scientific collaboration and

  18. CONRAD—A software framework for cone-beam imaging in radiology

    SciTech Connect

    Maier, Andreas; Choi, Jang-Hwan; Riess, Christian; Keil, Andreas; Fahrig, Rebecca; Hofmann, Hannes G.; Berger, Martin; Fischer, Peter; Schwemmer, Chris; Wu, Haibo; Müller, Kerstin; Hornegger, Joachim

    2013-11-15

    Purpose: In the community of x-ray imaging, there is a multitude of tools and applications that are used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools that are required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects. Methods: CONRAD is a Java-based state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL. There is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and respective vector motion fields. Well known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source. Results: A total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74,000 nonblank lines of code out of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a table top system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art runtimes with 2.5 s for the 256 problem size and 12.4 s for the 512 problem size. Conclusions: As a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular this offers new opportunities for scientific collaboration and

  19. CAVASS: a computer-assisted visualization and analysis software system - image processing aspects

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Grevera, George J.; Odhner, Dewey; Zhuge, Ying; Souza, Andre; Mishra, Shipra; Iwanaga, Tad

    2007-03-01

    The development of the concepts within 3DVIEWNIX and of the software system 3DVIEWNIX itself dates back to the 1970s. Since then, a series of software packages for Computer Assisted Visualization and Analysis (CAVA) of images came out from our group, 3DVIEWNIX released in 1993, being the most recent, and all were distributed with source code. CAVASS, an open source system, is the latest in this series, and represents the next major incarnation of 3DVIEWNIX. It incorporates four groups of operations: IMAGE PROCESSING (including ROI, interpolation, filtering, segmentation, registration, morphological, and algebraic operations), VISUALIZATION (including slice display, reslicing, MIP, surface rendering, and volume rendering), MANIPULATION (for modifying structures and surgery simulation), ANALYSIS (various ways of extracting quantitative information). CAVASS is designed to work on all platforms. Its key features are: (1) most major CAVA operations incorporated; (2) very efficient algorithms and their highly efficient implementations; (3) parallelized algorithms for computationally intensive operations; (4) parallel implementation via distributed computing on a cluster of PCs; (5) interface to other systems such as CAD/CAM software, ITK, and statistical packages; (6) easy to use GUI. In this paper, we focus on the image processing operations and compare the performance of CAVASS with that of ITK. Our conclusions based on assessing performance by utilizing a regular (6 MB), large (241 MB), and a super (873 MB) 3D image data set are as follows: CAVASS is considerably more efficient than ITK, especially in those operations which are computationally intensive. It can handle considerably larger data sets than ITK. It is easy and ready to use in applications since it provides an easy to use GUI. The users can easily build a cluster from ordinary inexpensive PCs and reap the full power of CAVASS inexpensively compared to expensive multiprocessing systems which are less

  20. Intracoronary optical coherence tomography: Clinical and research applications and intravascular imaging software overview.

    PubMed

    Tenekecioglu, Erhan; Albuquerque, Felipe N; Sotomi, Yohei; Zeng, Yaping; Suwannasom, Pannipa; Tateishi, Hiroki; Cavalcante, Rafael; Ishibashi, Yuki; Nakatani, Shimpei; Abdelghani, Mohammad; Dijkstra, Jouke; Bourantas, Christos; Collet, Carlos; Karanasos, Antonios; Radu, Maria; Wang, Ancong; Muramatsu, Takashi; Landmesser, Ulf; Okamura, Takayuki; Regar, Evelyn; Räber, Lorenz; Guagliumi, Giulio; Pyo, Robert T; Onuma, Yoshinobu; Serruys, Patrick W

    2017-01-21

    By providing valuable information about the coronary artery wall and lumen, intravascular imaging may aid in optimizing interventional procedure results and thereby could improve clinical outcomes following percutaneous coronary intervention (PCI). Intravascular optical coherence tomography (OCT) is a light-based technology with a tissue penetration of approximately 1 to 3 mm and provides near histological resolution. It has emerged as a technological breakthrough in intravascular imaging with multiple clinical and research applications. OCT provides detailed visualization of the vessel following PCI and provides accurate assessment of post-procedural stent performance including detection of edge dissection, stent struts apposition, tissue prolapse, and healing parameters. Additionally, it can provide accurate characterization of plaque morphology and provides key information to optimize post-procedural outcomes. This manuscript aims to review the current clinical and research applications of intracoronary OCT and summarize the analytic OCT imaging software packages currently available. © 2017 Wiley Periodicals, Inc.

  1. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  2. The role of camera-bundled image management software in the consumer digital imaging value chain

    NASA Astrophysics Data System (ADS)

    Mueller, Milton; Mundkur, Anuradha; Balasubramanian, Ashok; Chirania, Virat

    2005-02-01

    This research was undertaken by the Convergence Center at the Syracuse University School of Information Studies (www.digital-convergence.info). Project ICONICA, the name for the research, focuses on the strategic implications of digital Images and the CONvergence of Image management and image CApture. Consumer imaging - the activity that we once called "photography" - is now recognized as in the throes of a digital transformation. At the end of 2003, market researchers estimated that about 30% of the households in the U.S. and 40% of the households in Japan owned digital cameras. In 2004, of the 86 million new cameras sold (excluding one-time use cameras), a majority (56%) were estimated to be digital cameras. Sales of photographic film, while still profitable, are declining precipitously.

  3. 76 FR 43724 - In the Matter of Certain Digital Imaging Devices and Related Software; Notice of Commission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-21

    ... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION In the Matter of Certain Digital Imaging Devices and Related Software; Notice of Commission... related software by reason of infringement of various claims of United States Patent Nos. 6,031,964 and...

  4. Comparative Evaluation of the Ostium After External and Nonendoscopic Endonasal Dacryocystorhinostomy Using Image Processing (Matlabs and Image J) Softwares.

    PubMed

    Ganguly, Anasua; Kaza, Hrishikesh; Kapoor, Aditya; Sheth, Jenil; Ali, Mohammad Hasnat; Tripathy, Devjyoti; Rath, Suryasnata

    The purpose of this study was to compare the characteristics of the ostium after external dacryocystorhinostomy and nonendoscopic endonasal dacryocystorhinostomy (NEN-DCR). This cross-sectional study included patients who underwent a successful external dacryocystorhinostomy or NEN-DCR and had ≥1 month of follow up. Pictures of the ostium were captured with a nasal endoscope (4 mm, 30°) after inserting a lacrimal probe premarked at 2 mm. Image analyses were performed using Image J and Contour software. Of the 113 patients included, the external dacryocystorhinostomy group comprised 53 patients and the NEN-DCR group 60 patients. The mean age of patients in the NEN-DCR group (38 years) was significantly (p < 0.05) lower than that in the external dacryocystorhinostomy group (50 years). There was no statistically significant difference (2 sample t test, p > 0.05) in mean follow up (6 vs. 4 months), maximum diameter of ostium (8 vs. 7 mm), perpendicular drawn to it (4 vs. 4 mm), area of ostium (43 vs. 36 mm²), and the minimum distance between common internal punctum and edge of the ostium (1 vs. 1 mm) between the external and NEN-DCR groups. Image processing software offers a simple and objective method to measure the ostium. While ostia are comparable in size, their relative position differs, being posterior in external DCR and inferior in NEN-DCR.
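
    The study used ImageJ and Contour for its measurements; the sketch below only illustrates the calibration idea behind them: the lacrimal probe premarked at 2 mm provides a pixel-to-millimetre scale, after which a traced ostium outline can be converted to an area in mm². All numeric values here are hypothetical.

```python
import numpy as np

# Hypothetical value: the distance in pixels between the probe tip and its
# 2 mm pre-marking, as measured on the endoscopic photograph.
probe_mark_px = 86.0
mm_per_px = 2.0 / probe_mark_px

# Ostium outline traced on the same photograph as an N x 2 array of pixel coordinates.
outline_px = np.array([[0, 0], [300, 0], [300, 180], [0, 180]], dtype=float)

# Shoelace formula for the enclosed area, then convert pixels^2 -> mm^2.
x, y = outline_px[:, 0], outline_px[:, 1]
area_px2 = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
print(f"ostium area ~= {area_px2 * mm_per_px**2:.1f} mm^2")
```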

  5. Usefulness of grayscale inverted images in addition to standard images in digital mammography.

    PubMed

    Altunkeser, Ayşegül; Körez, M Kazım

    2017-04-18

    Mammography is essential for early diagnosis of breast cancer, which is the most common type of cancer in females that is associated with a high mortality rate. We investigated whether evaluation of the grayscale inverted images of mammograms would aid in increasing the diagnostic sensitivity of the mammographic imaging technique. Our study included 636 mammograms of 159 women who had undergone digital mammography. Standard, grayscale inverted, and standard plus grayscale inverted images were sequentially examined three times, at 15-day intervals, for the presence or assessment of pathological changes in the skin, calcification, asymmetric density, mass lesions, structural distortions, and intramammary and axillary lymph nodes. To determine whether grayscale inverted image assessment improved detection rates, the results of the three assessment modes were compared using Cochran's Q test and the McNemar test (p < 0.05 was considered statistically significant). The average age of 159 patients was 50.4 years (range, 35-80 years). There were significant differences among the three assessment modes with respect to calcification and intramammary lymph nodes (p < 0.05); however, no significant differences were observed for the detection of other parameters. Assessment of grayscale inverted images in addition to standard images facilitates the detection of microcalcification.
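
    Grayscale inversion of a digital mammogram is simply a reflection of the stored pixel values about their maximum, so dense structures such as microcalcifications appear dark on a bright background. A minimal sketch, assuming a hypothetical DICOM file and using pydicom, is shown below; it is not the workflow used in the study.

```python
import numpy as np
import pydicom

# Hypothetical file name; any digital mammogram stored as DICOM will do.
ds = pydicom.dcmread("mammogram.dcm")
pixels = ds.pixel_array

# Grayscale inversion reflects each value about the maximum of the stored range,
# so microcalcifications and dense tissue appear dark on a white background.
inverted = pixels.max() - pixels

# Both versions can then be window/levelled identically and read side by side.
print(pixels.min(), pixels.max(), inverted.min(), inverted.max())
```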

  6. Mississippi Company Using NASA Software Program to Provide Unique Imaging Service: DATASTAR Success Story

    NASA Technical Reports Server (NTRS)

    2001-01-01

    DATASTAR, Inc., of Picayune, Miss., has taken NASA's award-winning Earth Resources Laboratory Applications (ELAS) software program and evolved it to the point that the company is now providing a unique, spatial imagery service over the Internet. ELAS was developed in the early 80's to process satellite and airborne sensor imagery data of the Earth's surface into readable and useable information. While there are several software packages on the market that allow the manipulation of spatial data into useable products, this is usually a laborious task. The new program, called the DATASTAR Image Processing Exploitation, or DIPX, Delivery Service, is a subscription service available over the Internet that takes the work out of the equation and provides normalized geo-spatial data in the form of decision products.

  7. Evaluation of cassette-based digital radiography detectors using standardized image quality metrics: AAPM TG-150 Draft Image Detector Tests.

    PubMed

    Li, Guang; Greene, Travis C; Nishino, Thomas K; Willis, Charles E

    2016-09-08

    The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG-150 tests can be
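
    As a rough illustration of the ROI-based analysis style discussed above, the sketch below tiles a flat-field image into square ROIs and reports one simple global signal nonuniformity figure and the minimum per-ROI SNR. The ROI size, layout, and exact definitions in the TG-150 draft may differ, so this is only a schematic, not the task group's prescribed procedure.

```python
import numpy as np

def roi_grid_stats(flat_field, roi=64):
    """Split a flat-field image into non-overlapping ROIs and return a simple
    global signal nonuniformity figure and the minimum per-ROI SNR (mean/std).
    The ROI size and the exact TG-150 definitions may differ from this sketch."""
    h, w = flat_field.shape
    means, snrs = [], []
    for y in range(0, h - roi + 1, roi):
        for x in range(0, w - roi + 1, roi):
            block = flat_field[y:y + roi, x:x + roi].astype(float)
            means.append(block.mean())
            snrs.append(block.mean() / block.std())
    means, snrs = np.asarray(means), np.asarray(snrs)
    # spread of ROI means relative to their average, one common nonuniformity measure
    nonuniformity = (means.max() - means.min()) / means.mean()
    return nonuniformity, snrs.min()

flat = np.random.normal(1000, 30, size=(1024, 1024))  # synthetic flat-field exposure
nu, min_snr = roi_grid_stats(flat)
print(f"global signal nonuniformity {nu:.3%}, minimum ROI SNR {min_snr:.1f}")
```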

  8. The standardization of super resolution optical microscopic images based on DICOM

    NASA Astrophysics Data System (ADS)

    Xia, Wei; Gao, Xin

    2015-03-01

    Super resolution optical microscopy allows the capture of images with a higher resolution than the diffraction limit. However, due to the lack of a standard format, the processing, visualization, transfer, and exchange of Super Resolution Optical Microscope (SROM) images are inconvenient. In this work, we present an approach to standardize the SROM images based on the Digital Imaging and Communication in Medicine (DICOM) standard. The SROM images and associated information are encapsulated and converted to DICOM images based on the Visible Light Microscopic Image Information Object Definition of DICOM. The new generated SROM images in DICOM format can be displayed, processed, transferred, and exchanged by using most medical image processing tools.
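
    A heavily simplified sketch of the wrapping step is shown below using pydicom (assuming pydicom 2.x): a single microscope frame is placed into a DICOM dataset and written to disk. The SOP Class UID and Modality value are assumptions taken from the DICOM standard and should be verified, and the full set of attributes required by the Visible Light Microscopic Image IOD is not reproduced here.

```python
import datetime
import numpy as np
import pydicom
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

# Assumed SOP Class UID for VL Microscopic Image Storage -- verify against DICOM PS3.6.
VL_MICROSCOPIC_STORAGE = "1.2.840.10008.5.1.4.1.1.77.1.2"

frame = (np.random.rand(512, 512) * 65535).astype(np.uint16)  # stands in for one SROM frame

meta = FileMetaDataset()
meta.MediaStorageSOPClassUID = VL_MICROSCOPIC_STORAGE
meta.MediaStorageSOPInstanceUID = generate_uid()
meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = Dataset()
ds.file_meta = meta
ds.SOPClassUID = meta.MediaStorageSOPClassUID
ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
ds.Modality = "GM"  # assumed value for general microscopy; check the IOD
ds.ContentDate = datetime.date.today().strftime("%Y%m%d")
ds.Rows, ds.Columns = int(frame.shape[0]), int(frame.shape[1])
ds.SamplesPerPixel = 1
ds.PhotometricInterpretation = "MONOCHROME2"
ds.BitsAllocated = 16
ds.BitsStored = 16
ds.HighBit = 15
ds.PixelRepresentation = 0
ds.PixelData = frame.tobytes()

# Encoding flags used by pydicom 2.x when writing the file.
ds.is_little_endian = True
ds.is_implicit_VR = False
ds.save_as("srom_frame.dcm")
```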

  9. Comparison of perfusion- and diffusion-weighted imaging parameters in brain tumor studies processed using different software platforms.

    PubMed

    Milchenko, Mikhail V; Rajderkar, Dhanashree; LaMontagne, Pamela; Massoumzadeh, Parinaz; Bogdasarian, Ronald; Schweitzer, Gordon; Benzinger, Tammie; Marcus, Dan; Shimony, Joshua S; Fouke, Sarah Jost

    2014-10-01

    To compare quantitative imaging parameter measures from diffusion- and perfusion-weighted imaging magnetic resonance imaging (MRI) sequences in subjects with brain tumors that have been processed with different software platforms. Scans from 20 subjects with primary brain tumors were selected from the Comprehensive Neuro-oncology Data Repository at Washington University School of Medicine (WUSM) and the Swedish Neuroscience Institute. MR images were coregistered, and each subject's data set was processed by three software packages: 1) vendor-specific scanner software, 2) research software developed at WUSM, and 3) a commercially available, Food and Drug Administration-approved, processing platform (Nordic Ice). Regions of interest (ROIs) were chosen within the brain tumor and normal nontumor tissue. The results obtained using these methods were compared. For diffusion parameters, including mean diffusivity and fractional anisotropy, concordance was high when comparing different processing methods. For perfusion-imaging parameters, a significant variance in cerebral blood volume, cerebral blood flow, and mean transit time (MTT) values was seen when comparing the same raw data processed using different software platforms. Correlation was better with larger ROIs (radii ≥ 5 mm). Greatest variance was observed in MTT. Diffusion parameter values were consistent across different software processing platforms. Perfusion parameter values were more variable and were influenced by the software used. Variation in the MTT was especially large suggesting that MTT estimation may be unreliable in tumor tissues using current MRI perfusion methods. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  10. High Performance Embedded Computing Software Initiative (HPEC-SI) Program Facilitation of VSIPL++ Standardization

    DTIC Science & Technology

    2008-04-01

    parallel VSIPL++, and other parallel computing systems. The cluster is a fifty-five node Beowulf-style cluster with 116 compute processors of varying types...consoles, which GTRI inserted into the parallel software testbed. A computer that is used as a compute node in a Beowulf-style cluster requires a... Beowulf-style cluster. GTRI also participated in technical advisory planning for the HPEC-SI program. 5. References 1. Schwartz, D. A., Judd, R. R

  11. Features of the Upgraded Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) Software

    NASA Technical Reports Server (NTRS)

    Mason, Michelle L.; Rufer, Shann J.

    2016-01-01

    The Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) software is used at the NASA Langley Research Center to analyze global aeroheating data on wind tunnel models tested in the Langley Aerothermodynamics Laboratory. One-dimensional, semi-infinite heating data derived from IHEAT are used in the design of thermal protection systems for hypersonic vehicles that are exposed to severe aeroheating loads, such as reentry vehicles during descent and landing procedures. This software program originally was written in the PV-WAVE® programming language to analyze phosphor thermography data from the two-color, relative-intensity system developed at Langley. To increase the efficiency, functionality, and reliability of IHEAT, the program was migrated to MATLAB® syntax and compiled as a stand-alone executable file labeled version 4.0. New features of IHEAT 4.0 include the options to perform diagnostic checks of the accuracy of the acquired data during a wind tunnel test, to extract data along a specified multi-segment line following a feature such as a leading edge or a streamline, and to batch process all of the temporal frame data from a wind tunnel run. Results from IHEAT 4.0 were compared on a pixel level to the output images from the legacy software to validate the program. The absolute differences between the heat transfer data output from the two programs were on the order of 10^-5 to 10^-7. IHEAT 4.0 replaces the PV-WAVE® version as the production software for aeroheating experiments conducted in the hypersonic facilities at NASA Langley.

  12. Spiked proteomic standard dataset for testing label-free quantitative software and statistical methods.

    PubMed

    Ramus, Claire; Hovasse, Agnès; Marcellin, Marlène; Hesse, Anne-Marie; Mouton-Barbosa, Emmanuelle; Bouyssié, David; Vaca, Sebastian; Carapito, Christine; Chaoui, Karima; Bruley, Christophe; Garin, Jérôme; Cianférani, Sarah; Ferro, Myriam; Dorssaeler, Alain Van; Burlet-Schiltz, Odile; Schaeffer, Christine; Couté, Yohann; Gonzalez de Peredo, Anne

    2016-03-01

    This data article describes a controlled, spiked proteomic dataset for which the "ground truth" of variant proteins is known. It is based on the LC-MS analysis of samples composed of a fixed background of yeast lysate and different spiked amounts of the UPS1 mixture of 48 recombinant proteins. It can be used to objectively evaluate bioinformatic pipelines for label-free quantitative analysis, and their ability to detect variant proteins with good sensitivity and low false discovery rate in large-scale proteomic studies. More specifically, it can be useful for tuning software tool parameters, but also testing new algorithms for label-free quantitative analysis, or for evaluation of downstream statistical methods. The raw MS files can be downloaded from ProteomeXchange with identifier PXD001819. Starting from some raw files of this dataset, we also provide here some processed data obtained through various bioinformatics tools (including MaxQuant, Skyline, MFPaQ, IRMa-hEIDI and Scaffold) in different workflows, to exemplify the use of such data in the context of software benchmarking, as discussed in detail in the accompanying manuscript [1]. The experimental design used here for data processing takes advantage of the different spike levels introduced in the samples composing the dataset, and processed data are merged in a single file to facilitate the evaluation and illustration of software tool results for the detection of variant proteins with different absolute expression levels and fold change values.

  13. Spiked proteomic standard dataset for testing label-free quantitative software and statistical methods

    PubMed Central

    Ramus, Claire; Hovasse, Agnès; Marcellin, Marlène; Hesse, Anne-Marie; Mouton-Barbosa, Emmanuelle; Bouyssié, David; Vaca, Sebastian; Carapito, Christine; Chaoui, Karima; Bruley, Christophe; Garin, Jérôme; Cianférani, Sarah; Ferro, Myriam; Dorssaeler, Alain Van; Burlet-Schiltz, Odile; Schaeffer, Christine; Couté, Yohann; Gonzalez de Peredo, Anne

    2015-01-01

    This data article describes a controlled, spiked proteomic dataset for which the “ground truth” of variant proteins is known. It is based on the LC-MS analysis of samples composed of a fixed background of yeast lysate and different spiked amounts of the UPS1 mixture of 48 recombinant proteins. It can be used to objectively evaluate bioinformatic pipelines for label-free quantitative analysis, and their ability to detect variant proteins with good sensitivity and low false discovery rate in large-scale proteomic studies. More specifically, it can be useful for tuning software tool parameters, but also testing new algorithms for label-free quantitative analysis, or for evaluation of downstream statistical methods. The raw MS files can be downloaded from ProteomeXchange with identifier PXD001819. Starting from some raw files of this dataset, we also provide here some processed data obtained through various bioinformatics tools (including MaxQuant, Skyline, MFPaQ, IRMa-hEIDI and Scaffold) in different workflows, to exemplify the use of such data in the context of software benchmarking, as discussed in detail in the accompanying manuscript [1]. The experimental design used here for data processing takes advantage of the different spike levels introduced in the samples composing the dataset, and processed data are merged in a single file to facilitate the evaluation and illustration of software tool results for the detection of variant proteins with different absolute expression levels and fold change values. PMID:26862574

  14. a New Digital Image Correlation Software for Displacements Field Measurement in Structural Applications

    NASA Astrophysics Data System (ADS)

    Ravanelli, R.; Nascetti, A.; Di Rita, M.; Belloni, V.; Mattei, D.; Nisticó, N.; Crespi, M.

    2017-07-01

    Recently, there has been a growing interest in studying non-contact techniques for strain and displacement measurement. Within photogrammetry, Digital Image Correlation (DIC) has received particular attention thanks to the recent advances in the field of low-cost, high-resolution digital cameras, computer power and memory storage. DIC is indeed an optical technique able to measure full-field displacements and strain by comparing digital images of the surface of a material sample at different stages of deformation and thus can play a major role in structural monitoring applications. For all these reasons, a free and open source 2D DIC software, named py2DIC, was developed at the Geodesy and Geomatics Division of DICEA, University of Rome La Sapienza. Completely written in Python, the software is based on the template matching method and computes the displacement and strain fields. The potentialities of Py2DIC were evaluated by processing the images captured during a tensile test performed in the Lab of Structural Engineering, where three different Glass Fiber Reinforced Polymer samples were subjected to a controlled tension by means of a universal testing machine. The results, compared with the values independently measured by several strain gauges fixed on the samples, demonstrate the possibility to successfully characterize the deformation mechanism of the investigated material. Py2DIC is indeed able to resolve displacements at the level of a few microns, in reasonable agreement with the reference measurements, both in terms of displacements (agreeing within a few microns on average) and Poisson's ratio.
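
    py2DIC is built on template matching; the sketch below shows the core of that idea in a toy form: a single square subset from the reference image is located in the deformed image by normalized cross-correlation, yielding an integer-pixel displacement. It is not the py2DIC code itself, which computes full displacement and strain fields.

```python
import numpy as np
from skimage.feature import match_template

def track_subset(reference, deformed, center, half=15, search=10):
    """Track one square subset from the reference to the deformed image by
    normalized cross-correlation; returns the (dy, dx) displacement in pixels.
    A toy version of the template-matching step inside a 2D DIC code."""
    cy, cx = center
    tmpl = reference[cy - half:cy + half + 1, cx - half:cx + half + 1]
    win = deformed[cy - half - search:cy + half + search + 1,
                   cx - half - search:cx + half + search + 1]
    ncc = match_template(win, tmpl)
    dy, dx = np.unravel_index(np.argmax(ncc), ncc.shape)
    return dy - search, dx - search

# Synthetic test: shift a random speckle image by (3, -2) pixels.
ref = np.random.rand(200, 200)
defm = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)
print(track_subset(ref, defm, center=(100, 100)))   # expected (3, -2)
```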

  15. Scalable, high-performance 3D imaging software platform: system architecture and application to virtual colonoscopy.

    PubMed

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system.
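
    The sketch below illustrates, in a much-simplified form, the block-decomposition idea described above: a volume is split into slabs, each slab is processed by a separate worker process, and the results are reassembled. The per-block operation is an arbitrary placeholder, and the platform's size-adaptive block volumes and scheduler are not reproduced.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_block(block):
    """Stand-in per-block operation (here a simple intensity threshold);
    in a real pipeline this would be a filtering or cleansing step."""
    return (block > block.mean()).astype(np.uint8)

def blockwise(volume, func, block_z=32, workers=4):
    """Split a 3-D volume into z-slabs, process them in parallel, and
    reassemble them, a much-simplified version of the block-volume idea."""
    slabs = [volume[z:z + block_z] for z in range(0, volume.shape[0], block_z)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(func, slabs))
    return np.concatenate(results, axis=0)

if __name__ == "__main__":
    vol = np.random.rand(256, 128, 128).astype(np.float32)
    out = blockwise(vol, process_block)
    print(out.shape, out.dtype)
```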

  16. [Development of software for three-dimensional reconstruction and automatic quantification of intravascular ultrasound images. Initial experience].

    PubMed

    Sanz, Roberto; Bodí, Vicente; Sanchís, Juan; Moratal, David; Núñez, Julio; Palau, Patricia; García, Diego; Rieta, José J; Sanchís, Juan M; Chorro, Francisco J; Llácer, Angel

    2006-09-01

    Quantification of intravascular ultrasound (IVUS) images is essential in ischemic heart disease and interventional cardiology. Manual analysis is very slow and expensive. We describe an automated computerized method of analysis that requires only minimal initial input from a specialist. This study was carried out by interventional cardiologists and biomedical engineers working in close collaboration. We developed software in which it was necessary only to identify the media-adventitia boundary in a few images taken from the whole sequence. A three-dimensional reconstruction was then generated from each sequence, from which measurements of areas and volumes could be derived automatically. In total, 2300 randomly selected images from video sequences of 11 patients were analyzed. Results obtained using the proposed method differed only minimally from those obtained with the manual method: for vessel area measurements, the variability was 0.08 (0.07) (mean absolute error [standard deviation] normalized to the actual value; this corresponds to an error of 0.08 mm(2) per mm(2) of vessel area); for lumen area, 0.11 (0.11) (normalized), and for plaque volume, 0.5 (0.3) (normalized). Regions with severe lesions (<4 mm(2)) were correctly identified in more than 90% of cases. Specialist time needed for each reconstruction was 10 (8) minutes (vs 60 [10] minutes for manual analysis; P< .0001). The computerized method used dramatically reduced the time and effort needed for IVUS sequence analysis, and the automated measurements obtained were very promising.

  17. A complete software application for automatic registration of x-ray mammography and magnetic resonance images

    SciTech Connect

    Solves-Llorens, J. A.; Rupérez, M. J.; Monserrat, C.; Lloret, M.

    2014-08-15

    Purpose: This work presents a complete and automatic software application to aid radiologists in breast cancer diagnosis. The application is a fully automated method that performs a complete registration of magnetic resonance (MR) images and x-ray (XR) images in both directions (from MR to XR and from XR to MR) and for both x-ray mammograms, craniocaudal (CC), and mediolateral oblique (MLO). This new approximation allows radiologists to mark points in the MR images and, without any manual intervention, it provides their corresponding points in both types of XR mammograms and vice versa. Methods: The application automatically segments magnetic resonance images and x-ray images using the C-Means method and the Otsu method, respectively. It compresses the magnetic resonance images in both directions, CC and MLO, using a biomechanical model of the breast that distinguishes the specific biomechanical behavior of each one of its three tissues (skin, fat, and glandular tissue) separately. It makes a projection of both compressions and registers them with the original XR images using affine transformations and nonrigid registration methods. Results: The application has been validated by two expert radiologists. This was carried out through a quantitative validation on 14 data sets in which the Euclidean distance between points marked by the radiologists and the corresponding points obtained by the application were measured. The results showed a mean error of 4.2 ± 1.9 mm for the MRI to CC registration, 4.8 ± 1.3 mm for the MRI to MLO registration, and 4.1 ± 1.3 mm for the CC and MLO to MRI registration. Conclusions: A complete software application that automatically registers XR and MR images of the breast has been implemented. The application permits radiologists to estimate the position of a lesion that is suspected of being a tumor in an imaging modality based on its position in another different modality with a clinically acceptable error. The results show that the

  18. A complete software application for automatic registration of x-ray mammography and magnetic resonance images.

    PubMed

    Solves-Llorens, J A; Rupérez, M J; Monserrat, C; Feliu, E; García, M; Lloret, M

    2014-08-01

    This work presents a complete and automatic software application to aid radiologists in breast cancer diagnosis. The application is a fully automated method that performs a complete registration of magnetic resonance (MR) images and x-ray (XR) images in both directions (from MR to XR and from XR to MR) and for both x-ray mammograms, craniocaudal (CC), and mediolateral oblique (MLO). This new approximation allows radiologists to mark points in the MR images and, without any manual intervention, it provides their corresponding points in both types of XR mammograms and vice versa. The application automatically segments magnetic resonance images and x-ray images using the C-Means method and the Otsu method, respectively. It compresses the magnetic resonance images in both directions, CC and MLO, using a biomechanical model of the breast that distinguishes the specific biomechanical behavior of each one of its three tissues (skin, fat, and glandular tissue) separately. It makes a projection of both compressions and registers them with the original XR images using affine transformations and nonrigid registration methods. The application has been validated by two expert radiologists. This was carried out through a quantitative validation on 14 data sets in which the Euclidean distance between points marked by the radiologists and the corresponding points obtained by the application were measured. The results showed a mean error of 4.2 ± 1.9 mm for the MRI to CC registration, 4.8 ± 1.3 mm for the MRI to MLO registration, and 4.1 ± 1.3 mm for the CC and MLO to MRI registration. A complete software application that automatically registers XR and MR images of the breast has been implemented. The application permits radiologists to estimate the position of a lesion that is suspected of being a tumor in an imaging modality based on its position in another different modality with a clinically acceptable error. The results show that the application can accelerate the
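
    Of the steps listed above, the x-ray segmentation step is the easiest to illustrate in isolation. The sketch below applies Otsu's method to a mammogram with scikit-image, assuming a hypothetical file name; the C-Means segmentation of the MR volume, the biomechanical compression model, and the registration itself are not reproduced here.

```python
import numpy as np
from skimage import io
from skimage.filters import threshold_otsu

# Hypothetical file name; any grey-level mammogram image will do.
xr = io.imread("mammogram_cc.png", as_gray=True).astype(float)

# Otsu's method picks the threshold that maximizes between-class variance,
# separating the breast tissue from the dark background in the x-ray image.
t = threshold_otsu(xr)
breast_mask = xr > t
print(f"threshold {t:.3f}, breast area fraction {breast_mask.mean():.2%}")
```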

  19. Reliability and reproducibility of macular segmentation using a custom-built optical coherence tomography retinal image analysis software

    NASA Astrophysics Data System (ADS)

    Cabrera Debuc, Delia; Somfai, Gábor Márk; Ranganathan, Sudarshan; Tátrai, Erika; Ferencz, Mária; Puliafito, Carmen A.

    2009-11-01

    We determine the reliability and reproducibility of retinal thickness measurements with custom-built OCT retinal image analysis software (OCTRIMA). Ten eyes of five healthy subjects undergo repeated standard macular thickness map scan sessions by two experienced examiners using a Stratus OCT device. Automatic/semi-automatic thickness quantification of the macula and intraretinal layers is performed using OCTRIMA software. Intraobserver, interobserver, and intervisit repeatability and reproducibility coefficients, and intraclass correlation coefficients (ICCs) per scan are calculated. Intraobserver, interobserver, and intervisit variability combined account for less than 5% of total variability for the total retinal thickness measurements and less than 7% for the intraretinal layers except the outer segment/retinal pigment epithelium (RPE) junction. There is no significant difference between scans acquired by different observers or during different visits. The ICCs obtained for the intraobserver and intervisit variability tests are greater than 0.75 for the total retina and all intraretinal layers, except for the inner nuclear layer (intraobserver and interobserver tests) and the outer plexiform layer (intraobserver, interobserver, and intervisit tests). Our results indicate that thickness measurements for the total retina and all intraretinal layers (except the outer segment/RPE junction) performed using OCTRIMA are highly repeatable and reproducible.

  20. XDesign: An open-source software package for designing X-ray imaging phantoms and experiments

    DOE PAGES

    Ching, Daniel J.; Gursoy, Doğa

    2017-02-21

    Here, the development of new methods or utilization of current X-ray computed tomography methods is impeded by the substantial amount of expertise required to design an X-ray computed tomography experiment from beginning to end. In an attempt to make material models, data acquisition schemes and reconstruction algorithms more accessible to researchers lacking expertise in some of these areas, a software package is described here which can generate complex simulated phantoms and quantitatively evaluate new or existing data acquisition schemes and image reconstruction algorithms for targeted applications.

  1. A comparison of strain calculation using digital image correlation and finite element software

    NASA Astrophysics Data System (ADS)

    Iadicola, M.; Banerjee, D.

    2016-08-01

    Digital image correlation (DIC) data are being extensively used for many forming applications and for comparisons with finite element analysis (FEA) simulated results. The most challenging comparisons are often in the area of strain localizations just prior to material failure. While qualitative comparisons can be misleading, quantitative comparisons are difficult because of insufficient information about the type of strain output. In this work, strains computed from DIC displacements from a forming limit test are compared to those from three commercial FEA software. Quantitative differences in calculated strains are assessed to determine if the scale of variations seen between FEA and DIC calculated strains constitute real behavior or just calculation differences.
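
    A typical source of the discrepancies discussed above is that each code computes strain from displacements in its own way. The sketch below shows one common choice, the Green-Lagrange strain obtained by numerically differentiating DIC displacement fields; it is offered only as an example of such a calculation, not as the procedure used in the paper.

```python
import numpy as np

def green_lagrange_strain(u, v, spacing=1.0):
    """Compute the in-plane Green-Lagrange strain components from displacement
    fields u (x-direction) and v (y-direction) sampled on a regular grid.
    Different DIC and FEA codes use different strain definitions and smoothing
    windows, which is part of why the compared values can disagree."""
    du_dy, du_dx = np.gradient(u, spacing)
    dv_dy, dv_dx = np.gradient(v, spacing)
    Exx = du_dx + 0.5 * (du_dx**2 + dv_dx**2)
    Eyy = dv_dy + 0.5 * (du_dy**2 + dv_dy**2)
    Exy = 0.5 * (du_dy + dv_dx) + 0.5 * (du_dx * du_dy + dv_dx * dv_dy)
    return Exx, Eyy, Exy

# Synthetic 1% uniaxial stretch in x with Poisson contraction in y.
y, x = np.mgrid[0:50, 0:50].astype(float)
Exx, Eyy, Exy = green_lagrange_strain(0.01 * x, -0.003 * y)
print(Exx.mean(), Eyy.mean(), Exy.mean())
```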

  2. Multiplanar transcranial ultrasound imaging: standards, landmarks and correlation with magnetic resonance imaging.

    PubMed

    Kern, Rolf; Perren, Fabienne; Kreisel, Stefan; Szabo, Kristina; Hennerici, Michael; Meairs, Stephen

    2005-03-01

    The purpose of this study was to define a standardized multiplanar approach for transcranial ultrasound (US) imaging of brain parenchyma based on matched data from 3-D US and 3-D magnetic resonance imaging (MRI). The potential and limitations of multiple insonation planes in transverse and coronal orientation were evaluated for the visualization of intracranial landmarks in 60 healthy individuals (18 to 83 years old, mean 41.4 years) with sufficient temporal bone windows. Landmarks regularly visualized even in moderate sonographic conditions with identification rates of >75% were mesencephalon, pons, third ventricle, lateral ventricles, falx, thalamus, basal ganglia, pineal gland and temporal lobe. Identification of medulla oblongata, fourth ventricle, cerebellar structures, hippocampus, insula, frontal, parietal and occipital lobes was more difficult (<75%). We hypothesize that multiplanar transcranial US images, with standardized specification of tilt angles and orientation, not only allow comparison with other neuroimaging modalities, but may also provide a more objective framework for US monitoring of cerebral disease than freehand scanning.

  3. Melanie II--a third-generation software package for analysis of two-dimensional electrophoresis images: I. Features and user interface.

    PubMed

    Appel, R D; Palagi, P M; Walther, D; Vargas, J R; Sanchez, J C; Ravier, F; Pasquali, C; Hochstrasser, D F

    1997-12-01

    Although two-dimensional electrophoresis (2-DE) computer analysis software packages have existed ever since 2-DE technology was developed, it is only now that the hardware and software technology allows large-scale studies to be performed on low-cost personal computers or workstations, and that setting up a 2-DE computer analysis system in a small laboratory is no longer considered a luxury. After a first attempt in the seventies and early eighties to develop 2-DE analysis software systems on hardware that had poor or even no graphical capabilities, followed in the late eighties by a wave of innovative software developments that were possible thanks to new graphical interface standards such as XWindows, a third generation of 2-DE analysis software packages has now come to maturity. It can be run on a variety of low-cost, general-purpose personal computers, thus making the purchase of a 2-DE analysis system easily attainable for even the smallest laboratory that is involved in proteome research. Melanie II 2-D PAGE, developed at the University Hospital of Geneva, is such a third-generation software system for 2-DE analysis. Based on unique image processing algorithms, this user-friendly object-oriented software package runs on multiple platforms, including Unix, MS-Windows 95 and NT, and Power Macintosh. It provides efficient spot detection and quantitation, state-of-the-art image comparison, statistical data analysis facilities, and is Internet-ready. Linked to proteome databases such as those available on the World Wide Web, it represents a valuable tool for the "Virtual Lab" of the post-genome area.

  4. HYPOTrace: image analysis software for measuring hypocotyl growth and shape demonstrated on Arabidopsis seedlings undergoing photomorphogenesis.

    PubMed

    Wang, Liya; Uilecan, Ioan Vlad; Assadi, Amir H; Kozmik, Christine A; Spalding, Edgar P

    2009-04-01

    Analysis of time series of images can quantify plant growth and development, including the effects of genetic mutations (phenotypes) that give information about gene function. Here is demonstrated a software application named HYPOTrace that automatically extracts growth and shape information from electronic gray-scale images of Arabidopsis (Arabidopsis thaliana) seedlings. Key to the method is the iterative application of adaptive local principal components analysis to extract a set of ordered midline points (medial axis) from images of the seedling hypocotyl. Pixel intensity is weighted to avoid the medial axis being diverted by the cotyledons in areas where the two come in contact. An intensity feature useful for terminating the midline at the hypocotyl apex was isolated in each image by subtracting the baseline with a robust local regression algorithm. Applying the algorithm to time series of images of Arabidopsis seedlings responding to light resulted in automatic quantification of hypocotyl growth rate, apical hook opening, and phototropic bending with high spatiotemporal resolution. These functions are demonstrated here on wild-type, cryptochrome1, and phototropin1 seedlings for the purpose of showing that HYPOTrace generated expected results and to show how much richer the machine-vision description is compared to methods more typical in plant biology. HYPOTrace is expected to benefit seedling development research, particularly in the photomorphogenesis field, by replacing many tedious, error-prone manual measurements with a precise, largely automated computational tool.
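
    The sketch below illustrates a single step of the kind of local principal components analysis described above: the dominant axis direction at a point is estimated from the intensity-weighted covariance of pixel coordinates in a small window. HYPOTrace applies this iteratively, with the cotyledon down-weighting described above, to trace the whole midline; this toy version performs only one estimate.

```python
import numpy as np

def local_axis_direction(gray, center, half=10):
    """Estimate the local midline direction at 'center' as the first principal
    component of intensity-weighted pixel coordinates in a small window.
    A single step of the local-PCA idea, not the HYPOTrace tracer itself."""
    cy, cx = center
    win = gray[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    w = win / win.sum()
    my, mx = (w * ys).sum(), (w * xs).sum()              # weighted centroid
    cyy = (w * (ys - my) ** 2).sum()
    cxx = (w * (xs - mx) ** 2).sum()
    cxy = (w * (ys - my) * (xs - mx)).sum()
    vals, vecs = np.linalg.eigh(np.array([[cyy, cxy], [cxy, cxx]]))
    return vecs[:, np.argmax(vals)]                      # unit vector (dy, dx)

# Synthetic bright vertical stripe: the estimated axis should be close to ±(1, 0).
img = np.zeros((101, 101))
img[:, 48:53] = 1.0
print(local_axis_direction(img, (50, 50)))
```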

  5. Review of free software tools for image analysis of fluorescence cell micrographs.

    PubMed

    Wiesmann, V; Franz, D; Held, C; Münzenmayer, C; Palmisano, R; Wittenberg, T

    2015-01-01

    An increasing number of free software tools have been made available for the evaluation of fluorescence cell micrographs. The main users are biologists and related life scientists with no or little knowledge of image processing. In this review, we give an overview of available tools and guidelines about which tools the users should use to segment fluorescence micrographs. We selected 15 free tools and divided them into stand-alone, Matlab-based, ImageJ-based, free demo versions of commercial tools and data sharing tools. The review consists of two parts: First, we developed a criteria catalogue and rated the tools regarding structural requirements, functionality (flexibility, segmentation and image processing filters) and usability (documentation, data management, usability and visualization). Second, we performed an image processing case study with four representative fluorescence micrograph segmentation tasks with figure-ground and cell separation. The tools display a wide range of functionality and usability. In the image processing case study, we were able to perform figure-ground separation in all micrographs using mainly thresholding. Cell separation was not possible with most of the tools, because cell separation methods are provided only by a subset of the tools and are difficult to parametrize and to use. Most important is that the usability matches the functionality of a tool. To be usable, specialized tools with less functionality need to fulfill less usability criteria, whereas multipurpose tools need a well-structured menu and intuitive graphical user interface.

  6. MIA - A free and open source software for gray scale medical image analysis

    PubMed Central

    2013-01-01

    Background Gray scale images make up the bulk of data in bio-medical image analysis, and hence, the main focus of many image processing tasks lies in the processing of these monochrome images. With ever improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development because they take care of otherwise complex tasks. Specifically, the management of working memory is taken care of automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for the prototyping of new algorithms is rather time-intensive and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation by using shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only a few tools exist that provide this kind of processing interface; they are usually quite task-specific and do not provide a clear approach when one wants to shape a new command line tool from a prototype shell script. Results The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell

  7. MIA - A free and open source software for gray scale medical image analysis.

    PubMed

    Wollny, Gert; Kellman, Peter; Ledesma-Carbayo, María-Jesus; Skinner, Matthew M; Hublin, Jean-Jaques; Hierl, Thomas

    2013-10-11

    Gray scale images make up the bulk of data in bio-medical image analysis, and hence, the main focus of many image processing tasks lies in the processing of these monochrome images. With ever improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is taken care of automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for the prototyping of new algorithms is rather time intensive, and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is using command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation by using shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only a few tools exist that provide this kind of processing interface; they are usually quite task specific and do not provide a clear approach when one wants to shape a new command line tool from a prototype shell script. The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype by

  8. IMART software for correction of motion artifacts in images collected in intravital microscopy

    PubMed Central

    Dunn, Kenneth W; Lorenz, Kevin S; Salama, Paul; Delp, Edward J

    2014-01-01

    Intravital microscopy is a uniquely powerful tool, providing the ability to characterize cell and organ physiology in the natural context of the intact, living animal. With the recent development of high-resolution microscopy techniques such as confocal and multiphoton microscopy, intravital microscopy can now characterize structures at subcellular resolution and capture events at sub-second temporal resolution. However, realizing the potential for high resolution requires remarkable stability in the tissue. Whereas the rigid structure of the skull facilitates high-resolution imaging of the brain, organs of the viscera are free to move with respiration and heartbeat, requiring additional apparatus for immobilization. In our experience, these methods are variably effective, so that many studies are compromised by residual motion artifacts. Here we demonstrate the use of IMART, a software tool for removing motion artifacts from intravital microscopy images collected in time series or in three dimensions. PMID:26090271

  9. User's Guide for the MapImage Reprojection Software Package, Version 1.01

    USGS Publications Warehouse

    Finn, Michael P.; Trent, Jason R.

    2004-01-01

    Scientists routinely accomplish small-scale geospatial modeling in the raster domain, using high-resolution datasets (such as 30-m data) for large parts of continents and low-resolution to high-resolution datasets for the entire globe. Recently, Usery and others (2003a) expanded on the previously limited empirical work with real geographic data by compiling and tabulating the accuracy of categorical areas in projected raster datasets of global extent. Geographers and applications programmers at the U.S. Geological Survey's (USGS) Mid-Continent Mapping Center (MCMC) undertook an effort to expand and evolve an internal USGS software package, MapImage, or mapimg, for raster map projection transformation (Usery and others, 2003a). Daniel R. Steinwand of Science Applications International Corporation, Earth Resources Observation Systems Data Center in Sioux Falls, S. Dak., originally developed mapimg for the USGS, basing it on the USGS's General Cartographic Transformation Package (GCTP). It operated as a command line program on the Unix operating system. Through efforts at MCMC, and in coordination with Mr. Steinwand, this program has been transformed from an application based on a command line into a software package based on a graphic user interface for Windows, Linux, and Unix machines. Usery and others (2003b) pointed out that many commercial software packages do not use exact projection equations and that even when exact projection equations are used, the software often results in error and sometimes does not complete the transformation for specific projections, at specific resampling resolutions, and for specific singularities. Direct implementation of point-to-point transformation with appropriate functions yields the variety of projections available in these software packages, but implementation with data other than points requires specific adaptation of the equations or prior preparation of the data to allow the transformation to succeed. Additional

  10. Comparison of software and human observers in reading images of the CDMAM test object to assess digital mammography systems

    NASA Astrophysics Data System (ADS)

    Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde

    2006-03-01

    European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However, this is time-consuming and has large inter-observer error. To overcome these problems, a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting the output is not defined. This study evaluates methods of determining threshold contrast from the program, and compares these to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding, (B) psychometric curve fitting, (C) smoothing and interpolation, and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility with a standard error in threshold contrast of 18.1 +/- 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4%, reduced to 2.9% by curve fitting. All automatic methods led to threshold contrasts that were lower than for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 +/- 0.04 (SEM) at 0.1 mm and 1.82 +/- 0.06 at 0.25 mm for method (D). There were good correlations between the threshold contrast determined by humans and the automated methods.

  11. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2003-01-01

    A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. described an approach for producing hierarchical segmentations (called HSEG) and gave a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer, implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included comparing RHSEG with classic
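
    The best-merge region growing that HSEG builds on (hierarchical stepwise optimization) is easy to illustrate. The sketch below is not Tilton's implementation; it is a minimal Python toy that assumes integer region ids, per-region mean values and sizes, and a precomputed adjacency map, and repeatedly merges the adjacent pair whose merge least increases the total within-region squared error (a Ward-style criterion).

```python
def hswo_merge(region_means, region_sizes, adjacency, target_regions=2):
    """Toy hierarchical stepwise-optimization merge.

    region_means / region_sizes: dict mapping integer region id -> mean value / pixel count.
    adjacency: dict mapping region id -> set of spatially adjacent region ids
               (assumed symmetric and connected).
    Returns the list of merge events, i.e. coarser and coarser segmentations.
    """
    means = dict(region_means)
    sizes = dict(region_sizes)
    adj = {r: set(n) for r, n in adjacency.items()}
    merges = []
    while len(means) > target_regions:
        best = None
        for a in means:
            for b in adj[a]:
                if b <= a:
                    continue  # consider each unordered pair once
                # increase in total squared error if a and b are merged
                cost = sizes[a] * sizes[b] / (sizes[a] + sizes[b]) * (means[a] - means[b]) ** 2
                if best is None or cost < best[0]:
                    best = (cost, a, b)
        _, a, b = best
        # merge region b into region a and update mean, size and adjacency
        total = sizes[a] + sizes[b]
        means[a] = (means[a] * sizes[a] + means[b] * sizes[b]) / total
        sizes[a] = total
        adj[a] = (adj[a] | adj[b]) - {a, b}
        for n in adj[b]:
            if n != a:
                adj[n].discard(b)
                adj[n].add(a)
        del means[b], sizes[b], adj[b]
        merges.append((a, b))
    return merges
```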

  12. Phantom-based standardization of CT angiography images for spot sign detection.

    PubMed

    Morotti, Andrea; Romero, Javier M; Jessel, Michael J; Hernandez, Andrew M; Vashkevich, Anastasia; Schwab, Kristin; Burns, Joseph D; Shah, Qaisar A; Bergman, Thomas A; Suri, M Fareed K; Ezzeddine, Mustapha; Kirmani, Jawad F; Agarwal, Sachin; Shapshak, Angela Hays; Messe, Steven R; Venkatasubramanian, Chitra; Palmieri, Katherine; Lewandowski, Christopher; Chang, Tiffany R; Chang, Ira; Rose, David Z; Smith, Wade; Hsu, Chung Y; Liu, Chun-Lin; Lien, Li-Ming; Hsiao, Chen-Yu; Iwama, Toru; Afzal, Mohammad Rauf; Cassarly, Christy; Greenberg, Steven M; Martin, Renee' Hebert; Qureshi, Adnan I; Rosand, Jonathan; Boone, John M; Goldstein, Joshua N

    2017-07-20

    The CT angiography (CTA) spot sign is a strong predictor of hematoma expansion in intracerebral hemorrhage (ICH). However, CTA parameters vary widely across centers and may negatively impact spot sign accuracy in predicting ICH expansion. We developed a CT iodine calibration phantom that was scanned at different institutions in a large multicenter ICH clinical trial to determine the effect of image standardization on spot sign detection and performance. A custom phantom containing known concentrations of iodine was designed and scanned using the stroke CT protocol at each institution. Custom software was developed to read the CT volume datasets and calculate the Hounsfield unit as a function of iodine concentration for each phantom scan. CTA images obtained within 8 h from symptom onset were analyzed by two trained readers comparing the calibrated vs. uncalibrated density cutoffs for spot sign identification. ICH expansion was defined as hematoma volume growth >33%. A total of 90 subjects qualified for the study, of whom 17/83 (20.5%) experienced ICH expansion. The number of spot sign positive scans was higher in the calibrated analysis (67.8 vs. 38.9%, p < 0.001). All spot signs identified in the non-calibrated analysis remained positive after calibration. Calibrated CTA images had higher sensitivity for ICH expansion (76 vs. 52%) but inferior specificity (35 vs. 63%) compared with uncalibrated images. Normalization of CTA images using phantom data is a feasible strategy to obtain consistent image quantification for spot sign analysis across different sites and may improve sensitivity for identification of ICH expansion.
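
    The calibration step described here, mapping measured Hounsfield units to known iodine concentrations for each scanner, amounts to fitting a straight line. Below is a minimal sketch with made-up phantom numbers, not the trial's actual software:

```python
import numpy as np

# Hypothetical phantom readings: known iodine concentrations (mg/ml) in the
# phantom inserts and the mean Hounsfield units (HU) measured at one site.
iodine_mg_ml = np.array([0.0, 2.0, 5.0, 10.0, 15.0])
measured_hu  = np.array([3.0, 58.0, 142.0, 280.0, 421.0])

# Fit HU = slope * concentration + intercept for this scanner/protocol.
slope, intercept = np.polyfit(iodine_mg_ml, measured_hu, 1)

def calibrated_hu_cutoff(concentration_mg_ml):
    """Convert a concentration-based spot-sign cutoff into a site-specific
    HU cutoff via the fitted calibration line."""
    return slope * concentration_mg_ml + intercept

print(f"HU cutoff for 5 mg/ml: {calibrated_hu_cutoff(5.0):.1f}")
```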

  13. SNARK09 - a software package for reconstruction of 2D images from 1D projections.

    PubMed

    Klukowska, Joanna; Davidi, Ran; Herman, Gabor T

    2013-06-01

    The problem of reconstruction of slices and volumes from 1D and 2D projections has arisen in a large number of scientific fields (including computerized tomography, electron microscopy, X-ray microscopy, radiology, radio astronomy and holography). Many different methods (algorithms) have been suggested for its solution. In this paper, we present a software package, SNARK09, for reconstruction of 2D images from their 1D projections. In the area of image reconstruction, researchers often desire to compare two or more reconstruction techniques and assess their relative merits. SNARK09 provides a uniform framework to implement algorithms and evaluate their performance. It has been designed to treat both parallel and divergent projection geometries and can either create test data (with or without noise) for use by reconstruction algorithms or use data collected by other software or a physical device. A number of frequently used classical reconstruction algorithms are incorporated. The package provides a means for easy incorporation of new algorithms for their testing, comparison and evaluation. It comes with tools for statistical analysis of the results and ten worked examples.
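
    The kind of experiment SNARK09 supports (create test projection data, reconstruct, compare) can be prototyped with off-the-shelf tools. Here is a hedged sketch using scikit-image's parallel-beam radon/iradon transforms; this illustrates the workflow, it is not SNARK09 itself:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Build a test image, simulate parallel-beam 1D projections (a sinogram),
# then reconstruct by filtered back-projection and measure the error.
image = rescale(shepp_logan_phantom(), 0.5)              # ~200x200 test phantom
angles = np.linspace(0.0, 180.0, 180, endpoint=False)    # projection angles in degrees
sinogram = radon(image, theta=angles)                     # forward projection
reconstruction = iradon(sinogram, theta=angles)           # filtered back-projection

rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {rms_error:.4f}")
```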

  14. A medical software system for volumetric analysis of cerebral pathologies in magnetic resonance imaging (MRI) data.

    PubMed

    Egger, Jan; Kappus, Christoph; Freisleben, Bernd; Nimsky, Christopher

    2012-08-01

    In this contribution, a medical software system for volumetric analysis of different cerebral pathologies in magnetic resonance imaging (MRI) data is presented. The software system is based on a semi-automatic segmentation algorithm and helps to overcome the time-consuming process of volume determination during monitoring of a patient. After imaging, the parameter settings (including a seed point) are set up in the system and an automatic segmentation is performed by a novel graph-based approach. Manually reviewing the result leads to reseeding, adding seed points or an automatic surface mesh generation. The mesh is saved for monitoring the patient and for comparisons with follow-up scans. Based on the mesh, the system performs a voxelization and volume calculation, which leads to diagnosis and therefore further treatment decisions. The overall system has been tested with different cerebral pathologies (glioblastoma multiforme, pituitary adenomas and cerebral aneurysms) and evaluated against manual expert segmentations using the Dice Similarity Coefficient (DSC). Additionally, intra-physician segmentations have been performed to provide a quality measure for the presented system.
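
    For reference, the Dice Similarity Coefficient used here to compare automatic and expert segmentations is straightforward to compute from two binary masks; a minimal sketch:

```python
import numpy as np

def dice_similarity(segmentation: np.ndarray, reference: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(segmentation, dtype=bool)
    b = np.asarray(reference, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```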

  15. A standardized evaluation of artefacts from metallic compounds during fast MR imaging.

    PubMed

    Murakami, Shumei; Verdonschot, Rinus G; Kataoka, Miyoshi; Kakimoto, Naoya; Shimamoto, Hiroaki; Kreiborg, Sven

    2016-10-01

    Metallic compounds present in the oral and maxillofacial regions (OMRs) cause large artefacts during MR scanning. We quantitatively assessed these artefacts embedded within a phantom according to standards set by the American Society for Testing and Materials (ASTM). Seven metallic dental materials (each of which was a 10-mm³ cube embedded within a phantom) were scanned [i.e. aluminium (Al), silver alloy (Ag), type IV gold alloy (Au), gold-palladium-silver alloy (Au-Pd-Ag), titanium (Ti), nickel-chromium alloy (NC) and cobalt-chromium alloy (CC)] and compared with a reference image. Sequences included gradient echo (GRE), fast spin echo (FSE), gradient recalled acquisition in steady state (GRASS), a spoiled GRASS (SPGR), a fast SPGR (FSPGR), fast imaging employing steady state (FIESTA) and echo planar imaging (EPI; axial/sagittal planes). Artefact areas were determined according to the ASTM-F2119 standard, and artefact volumes were assessed using OsiriX MD software (Pixmeo, Geneva, Switzerland). Tukey-Kramer post hoc tests were used for statistical comparisons. For most materials, scanning sequences elicited artefact volumes in the following (ascending) order: FSE-T1/FSE-T2 < FSPGR/SPGR < GRASS/GRE < FIESTA < EPI. For all scanning sequences, artefact volumes containing Au, Al, Ag and Au-Pd-Ag were significantly smaller than those of other materials (in which artefact volume size increased, respectively, from Ti < NC < CC). The artefact-specific shape (elicited by the cubic sample) depended on the scanning plane (i.e. a circular pattern for the axial plane and a "clover-like" pattern for the sagittal plane). The availability of standardized information on artefact size and configuration during MRI will enhance diagnosis when faced with metallic compounds in the OMR.
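
    A simple way to reproduce this kind of artefact quantification is to threshold the deviation from a metal-free reference scan (ASTM F2119 uses a 30% intensity-change criterion) and convert the flagged voxels to a volume. A hedged sketch, with background masking and image registration deliberately omitted:

```python
import numpy as np

def artefact_mask(image_with_metal, reference_image, fraction=0.30):
    """Flag voxels whose intensity deviates from the metal-free reference by
    more than `fraction` of the reference value (ASTM F2119 uses 30%)."""
    ref = np.asarray(reference_image, dtype=float)
    img = np.asarray(image_with_metal, dtype=float)
    return np.abs(img - ref) > fraction * ref

def artefact_volume_ml(mask, voxel_volume_mm3):
    """Artefact volume in millilitres from a boolean mask and the voxel size."""
    return mask.sum() * voxel_volume_mm3 / 1000.0
```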

  16. A standardized evaluation of artefacts from metallic compounds during fast MR imaging

    PubMed Central

    Murakami, Shumei; Kataoka, Miyoshi; Kakimoto, Naoya; Shimamoto, Hiroaki; Kreiborg, Sven

    2016-01-01

    Objectives: Metallic compounds present in the oral and maxillofacial regions (OMRs) cause large artefacts during MR scanning. We quantitatively assessed these artefacts embedded within a phantom according to standards set by the American Society for Testing and Materials (ASTM). Methods: Seven metallic dental materials (each of which was a 10-mm³ cube embedded within a phantom) were scanned [i.e. aluminium (Al), silver alloy (Ag), type IV gold alloy (Au), gold–palladium–silver alloy (Au-Pd-Ag), titanium (Ti), nickel–chromium alloy (NC) and cobalt–chromium alloy (CC)] and compared with a reference image. Sequences included gradient echo (GRE), fast spin echo (FSE), gradient recalled acquisition in steady state (GRASS), a spoiled GRASS (SPGR), a fast SPGR (FSPGR), fast imaging employing steady state (FIESTA) and echo planar imaging (EPI; axial/sagittal planes). Artefact areas were determined according to the ASTM-F2119 standard, and artefact volumes were assessed using OsiriX MD software (Pixmeo, Geneva, Switzerland). Results: Tukey–Kramer post hoc tests were used for statistical comparisons. For most materials, scanning sequences elicited artefact volumes in the following (ascending) order: FSE-T1/FSE-T2 < FSPGR/SPGR < GRASS/GRE < FIESTA < EPI. For all scanning sequences, artefact volumes containing Au, Al, Ag and Au-Pd-Ag were significantly smaller than those of other materials (in which artefact volume size increased, respectively, from Ti < NC < CC). The artefact-specific shape (elicited by the cubic sample) depended on the scanning plane (i.e. a circular pattern for the axial plane and a “clover-like” pattern for the sagittal plane). Conclusions: The availability of standardized information on artefact size and configuration during MRI will enhance diagnosis when faced with metallic compounds in the OMR. PMID:27459058

  17. A Real-Time GPP Software-Defined Radio Testbed for the Physical Layer of Wireless Standards

    NASA Astrophysics Data System (ADS)

    Schiphorst, R.; Hoeksema, F. W.; Slump, C. H.

    2005-12-01

    We present our contribution to the general-purpose-processor-(GPP)-based radio. We describe a baseband software-defined radio testbed for the physical layer of wireless LAN standards. All physical layer functions have been successfully mapped on a Pentium 4 processor that performs these functions in real time. The testbed consists of a transmitter PC with a DAC board and a receiver PC with an ADC board. In our project, we have implemented two different types of standards on this testbed, a continuous-phase-modulation-based standard, Bluetooth, and an OFDM-based standard, HiperLAN/2. However, our testbed can easily be extended to other standards, because the only limitation in our testbed is the maximal channel bandwidth of 20 MHz and of course the processing capabilities of the used PC. The transmitter functions require at most 714 M cycles per second and the receiver functions need 1225 M cycles per second on a Pentium 4 processor. In addition, baseband experiments have been carried out successfully.

  18. X-ray volumetric imaging in image-guided radiotherapy: The new standard in on-treatment imaging

    SciTech Connect

    McBain, Catherine A.; Henry, Ann M. . E-mail: catherine.mcbain@christie-tr.nwest.nhs.uk; Sykes, Jonathan; Amer, Ali; Marchant, Tom; Moore, Christopher M.; Davies, Julie; Stratford, Julia; McCarthy, Claire; Porritt, Bridget; Williams, Peter; Khoo, Vincent S.; Price, Pat

    2006-02-01

    Purpose: X-ray volumetric imaging (XVI) for the first time allows for the on-treatment acquisition of three-dimensional (3D) kV cone beam computed tomography (CT) images. Clinical imaging using the Synergy System (Elekta, Crawley, UK) commenced in July 2003. This study evaluated image quality and dose delivered and assessed clinical utility for treatment verification at a range of anatomic sites. Methods and Materials: Single XVIs were acquired from 30 patients undergoing radiotherapy for tumors at 10 different anatomic sites. Patients were imaged in their setup position. Radiation doses received were measured using TLDs on the skin surface. The utility of XVI in verifying target volume coverage was qualitatively assessed by experienced clinicians. Results: X-ray volumetric imaging acquisition was completed in the treatment position at all anatomic sites. At sites where a full gantry rotation was not possible, XVIs were reconstructed from projection images acquired from partial rotations. Soft-tissue definition of organ boundaries allowed direct assessment of 3D target volume coverage at all sites. Individual image quality depended on both imaging parameters and patient characteristics. Radiation dose ranged from 0.003 Gy in the head to 0.03 Gy in the pelvis. Conclusions: On-treatment XVI provided 3D verification images with soft-tissue definition at all anatomic sites at acceptably low radiation doses. This technology sets a new standard in treatment verification and will facilitate novel adaptive radiotherapy techniques.

  19. Developing a new software package for PSF estimation and fitting of adaptive optics images

    NASA Astrophysics Data System (ADS)

    Schreiber, Laura; Diolaiti, Emiliano; Sollima, Antonio; Arcidiacono, Carmelo; Bellazzini, Michele; Ciliegi, Paolo; Falomo, Renato; Foppiani, Italo; Greggio, Laura; Lanzoni, Barbara; Lombini, Matteo; Montegriffo, Paolo; Dalessandro, Emanuele; Massari, Davide

    2012-07-01

    Adaptive Optics (AO) images are characterized by structured Point Spread Function (PSF), with sharp core and extended halo, and by significant variations across the field of view. In order to enable the extraction of high-precision quantitative information and improve the scientific exploitation of AO data, efforts in PSF modeling and in the integration of suitable models in a code for image analysis are needed. We present the current status of a study on the modeling of AO PSFs based on observational data taken with present telescopes (VLT and LBT). The methods under development include parametric models and hybrid (i.e. analytical / numerical) models adapted to various types of PSFs that can show up in AO images. The specific features of AO data, such as the mainly radial variation of the PSF with respect to the guide star position in single-reference AO, are taken into account as much as possible. The final objective of this project is the development of a flexible software package, based on the Starfinder code (Diolaiti et al. 2000), specifically dedicated to PSF estimation and to the astrometric and photometric analysis of AO images with complex and spatially variable PSF.

  20. Clean Colon Software Program (CCSP), Proposal of a standardized Method to quantify Colon Cleansing During Colonoscopy: Preliminary Results

    PubMed Central

    Rosa-Rizzotto, Erik; Dupuis, Adrian; Guido, Ennio; Caroli, Diego; Monica, Fabio; Canova, Daniele; Cervellin, Erica; Marin, Renato; Trovato, Cristina; Crosta, Cristiano; Cocchio, Silvia; Baldo, Vincenzo; De Lazzari, Franca

    2015-01-01

    Background and study aims: Neoplastic lesions can be missed during colonoscopy, especially when cleansing is inadequate. Bowel preparation scales have significant limitations and no objective and standardized method currently exists to establish colon cleanliness during colonoscopy. The aims of our study are to create a software algorithm that is able to analyze bowel cleansing during colonoscopies and to compare it to a validated bowel preparation scale. Patients and methods: A software application (the Clean Colon Software Program, CCSP) was developed. Fifty colonoscopies were carried out and video-recorded. Each video was divided into 3 segments: cecum-hepatic flexure (1st Segment), hepatic flexure-descending colon (2nd Segment) and rectosigmoid segment (3rd Segment). Each segment was recorded twice, both before and after careful cleansing of the intestinal wall. A score from 0 (dirty) to 3 (clean) was then assigned by CCSP. All the videos were also viewed by four endoscopists and colon cleansing was established using the Boston Bowel Preparation Scale. The interclass correlation coefficient was then calculated between the endoscopists and the software. Results: The cleansing score of the prelavage colonoscopies was 1.56 ± 0.52 and the postlavage one was 2.08 ± 0.59 (P < 0.001), showing an approximate 33.3% improvement in cleansing after lavage. The right colon segment prelavage (0.99 ± 0.69) was dirtier than the left colon segment prelavage (2.07 ± 0.71). The overall interobserver agreement between the average cleansing score for the 4 endoscopists and the software pre-cleansing was 0.87 (95% CI, 0.84-0.90) and post-cleansing was 0.86 (95% CI, 0.83-0.89). Conclusions: The software is able to discriminate clean from non-clean colon tracts with high significance and is comparable to endoscopist evaluation. PMID:26528508

  1. An Effective On-line Polymer Characterization Technique by Using SALS Image Processing Software and Wavelet Analysis

    PubMed Central

    Xian, Guang-ming; Qu, Jin-ping; Zeng, Bi-qing

    2008-01-01

    This paper describes an effective on-line polymer characterization technique using small-angle light-scattering (SALS) image processing software and wavelet analysis. The phenomenon of small-angle light scattering has been applied to give information about the morphology of transparent structures. Real-time visualization of the various scattered-light images and light-intensity matrices is performed by the optical image real-time processing software for SALS. The software can measure the signal intensity of light scattering images, draw the frequency-intensity curves and the amplitude-intensity curves to indicate the variation of the intensity of scattered light under different processing conditions, and estimate the parameters. The current study utilizes a one-dimensional wavelet to remove noise from the original SALS signal and to estimate the variation trend of the maximum-intensity area of the scattered light. Thus, the system enables successful qualitative analysis of the structural information of transparent films. PMID:19229343

  2. Comprehensive, powerful, efficient, intuitive: a new software framework for clinical imaging applications

    NASA Astrophysics Data System (ADS)

    Augustine, Kurt E.; Holmes, David R., III; Hanson, Dennis P.; Robb, Richard A.

    2006-03-01

    One of the greatest challenges for a software engineer is to create a complex application that is comprehensive enough to be useful to a diverse set of users, yet focused enough for individual tasks to be carried out efficiently with minimal training. This "powerful yet simple" paradox is particularly prevalent in advanced medical imaging applications. Recent research in the Biomedical Imaging Resource (BIR) at Mayo Clinic has been directed toward development of an imaging application framework that provides powerful image visualization/analysis tools in an intuitive, easy-to-use interface. It is based on two concepts very familiar to physicians - Cases and Workflows. Each case is associated with a unique patient and a specific set of routine clinical tasks, or a workflow. Each workflow is comprised of an ordered set of general-purpose modules which can be re-used for each unique workflow. Clinicians help describe and design the workflows, and then are provided with an intuitive interface to both patient data and analysis tools. Since most of the individual steps are common to many different workflows, the use of general-purpose modules reduces development time and results in applications that are consistent, stable, and robust. While the development of individual modules may reflect years of research by imaging scientists, new customized workflows based on the new modules can be developed extremely fast. If a powerful, comprehensive application is difficult to learn and complicated to use, it will be unacceptable to most clinicians. Clinical image analysis tools must be intuitive and effective or they simply will not be used.

  3. Software-based approach toward vendor independent real-time photoacoustic imaging using ultrasound beamformed data

    NASA Astrophysics Data System (ADS)

    Zhang, Haichong K.; Huang, Howard; Lei, Chen; Kim, Younsu; Boctor, Emad M.

    2017-03-01

    Photoacoustic (PA) imaging has shown its potential for many clinical applications, but current research and usage of PA imaging are constrained by additional hardware costs to collect channel data, as the PA signals are incorrectly processed in existing clinical ultrasound systems. This problem arises from the fact that ultrasound systems beamform the PA signals as echoes from the ultrasound transducer instead of directly from illuminated sources. Consequently, conventional implementations of PA imaging rely on parallel channel acquisition from research platforms, which are not only slow and expensive, but are also mostly not approved by the FDA for clinical use. In previous studies, we have proposed the synthetic-aperture based photoacoustic re-beamformer (SPARE) that uses ultrasound beamformed radio frequency (RF) data as the input, which is readily available in clinical ultrasound scanners. The goal of this work is to implement the SPARE beamformer in a clinical ultrasound system, and to experimentally demonstrate its real-time visualization. Assuming a high pulse repetition frequency (PRF) laser is used, a PZT-based pseudo PA source transmission was synchronized with the ultrasound line trigger. As a result, the frame rate increases when limiting the image field-of-view (FOV), with 50 to 20 frames per second achieved for FOVs from 35 mm to 70 mm depth, respectively. Although in reality the maximum PRF of laser firing limits the PA image frame rate, this result indicates that the developed software is capable of displaying PA images at the maximum possible frame rate for a given laser system without acquiring channel data.

  4. Application of standard and advanced open source GIS software functionality for analysis of coordinates obtained by GNSS measurements

    NASA Astrophysics Data System (ADS)

    Ilieva, Tamara

    2016-04-01

    Currently, a wide variety of GNSS measurements is used in geodetic practice. The coordinates obtained by static, kinematic or precise point positioning GNSS measurements can be analyzed using the standard functionality of any GIS software, but open source packages give users the opportunity to build advanced functionality themselves. The coordinates obtained by measurements can be stored in a spatial geodatabase, with information on the precision and time of measurement added. The data can be visualized in different coordinate systems and projections and analyzed by applying different types of spatial analysis, and the process can be automated to a high degree. An example with test data is prepared. It includes automated loading of files with coordinates obtained by GNSS measurements and additional information on the precision and time of the measurements. Standard and advanced open source GIS software functionality is used to automate the analysis process. Graph theory is also applied to build time series of the data stored in the spatial geodatabase.

  5. Using interactive software to teach image-based clinical laboratory tests in developing countries: a pilot trial in Nepal.

    PubMed

    Mallapaty, Gabriele; Kim, Sara; Astion, Michael L

    2003-05-01

    This study explores the feasibility of using computer tutorials to train laboratory personnel in Nepal. Training incorporated three software programs that teach microscope-based laboratory tests (peripheral blood smears, urinalysis, Gram stains). Forty-seven participants attended training sessions and completed a questionnaire. The participants' overall perception was: 1) the software was superior to formal lectures for learning image-based laboratory tests (43 participants, 92%); 2) the software would enhance job performance (43 participants, 92%); 3) more subjects should be taught using software (40 participants, 85%); and 4) the software helped participants learn new materials (38 participants, 81%). Considering that 79% of the participants were novice computer users, it is noteworthy that 38 (81%) participants thought the method of instruction was easy to understand. Factors contributing to learning included: 1) the resemblance of the computer images to actual microscope images derived from patient samples (37 participants, 68%); 2) the use of multiple examples of cells and other microscopic structures (28 participants, 60%); 3) the ability to interact with images and animations (23 participants, 49%); 4) the step-by-step explanation of laboratory techniques (21 participants, 45%); and 5) the self-pacing of the tutorial (12 participants, 26%). Overall, the pilot study suggests that educational software could help train clinical laboratory personnel in developing countries.

  6. Software-based high-level synthesis design of FPGA beamformers for synthetic aperture imaging.

    PubMed

    Amaro, Joao; Yiu, Billy Y S; Falcao, Gabriel; Gomes, Marco A C; Yu, Alfred C H

    2015-05-01

    Field-programmable gate arrays (FPGAs) can potentially be configured as beamforming platforms for ultrasound imaging, but a long design time and skilled expertise in hardware programming are typically required. In this article, we present a novel approach to the efficient design of FPGA beamformers for synthetic aperture (SA) imaging via the use of software-based high-level synthesis techniques. Software kernels (coded in OpenCL) were first developed to stage-wise handle SA beamforming operations, and their corresponding FPGA logic circuitry was emulated through a high-level synthesis framework. After design space analysis, the fine-tuned OpenCL kernels were compiled into register transfer level descriptions to configure an FPGA as a beamformer module. The processing performance of this beamformer was assessed through a series of offline emulation experiments that sought to derive beamformed images from SA channel-domain raw data (40-MHz sampling rate, 12 bit resolution). With 128 channels, our FPGA-based SA beamformer can achieve 41 frames per second (fps) processing throughput (3.44 × 10^8 pixels per second for frame size of 256 × 256 pixels) at 31.5 W power consumption (1.30 fps/W power efficiency). It utilized 86.9% of the FPGA fabric and operated at a 196.5 MHz clock frequency (after optimization). Based on these findings, we anticipate that FPGA and high-level synthesis can together foster rapid prototyping of real-time ultrasound processor modules at low power consumption budgets.

  7. Technology and Technique Standards for Camera-Acquired Digital Dermatologic Images: A Systematic Review.

    PubMed

    Quigley, Elizabeth A; Tokay, Barbara A; Jewell, Sarah T; Marchetti, Michael A; Halpern, Allan C

    2015-08-01

    Photographs are invaluable dermatologic diagnostic, management, research, teaching, and documentation tools. Digital Imaging and Communications in Medicine (DICOM) standards exist for many types of digital medical images, but there are no DICOM standards for camera-acquired dermatologic images to date. To identify and describe existing or proposed technology and technique standards for camera-acquired dermatologic images in the scientific literature. Systematic searches of the PubMed, EMBASE, and Cochrane databases were performed in January 2013 using photography and digital imaging, standardization, and medical specialty and medical illustration search terms and augmented by a gray literature search of 14 websites using Google. Two reviewers independently screened titles of 7371 unique publications, followed by 3 sequential full-text reviews, leading to the selection of 49 publications with the most recent (1985-2013) or detailed description of technology or technique standards related to the acquisition or use of images of skin disease (or related conditions). No universally accepted existing technology or technique standards for camera-based digital images in dermatology were identified. Recommendations are summarized for technology imaging standards, including spatial resolution, color resolution, reproduction (magnification) ratios, postacquisition image processing, color calibration, compression, output, archiving and storage, and security during storage and transmission. Recommendations are also summarized for technique imaging standards, including environmental conditions (lighting, background, and camera position), patient pose and standard view sets, and patient consent, privacy, and confidentiality. Proposed standards for specific-use cases in total body photography, teledermatology, and dermoscopy are described. The literature is replete with descriptions of obtaining photographs of skin disease, but universal imaging standards have not been developed

  8. The wavelet/scalar quantization compression standard for digital fingerprint images

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival-quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
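
    The core of the scheme, a wavelet decomposition followed by uniform scalar quantization of the subbands, can be illustrated in a few lines. The sketch below is a toy round trip with PyWavelets, not the FBI WSQ codec: the real standard additionally uses per-subband quantization steps and Huffman entropy coding of the quantized indices.

```python
import numpy as np
import pywt  # PyWavelets

def wsq_like_round_trip(image, wavelet="bior4.4", levels=4, step=8.0):
    """Toy wavelet/scalar-quantization round trip: decompose, uniformly
    quantize every subband with one fixed step, dequantize, reconstruct."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=levels)
    quantized = [np.round(coeffs[0] / step)]
    quantized += [tuple(np.round(band / step) for band in detail) for detail in coeffs[1:]]
    dequantized = [quantized[0] * step]
    dequantized += [tuple(band * step for band in detail) for detail in quantized[1:]]
    return pywt.waverec2(dequantized, wavelet)
```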

  9. Standardizing the next generation of bioinformatics software development with BioHDF (HDF5).

    PubMed

    Mason, Christopher E; Zumbo, Paul; Sanders, Stephan; Folk, Mike; Robinson, Dana; Aydt, Ruth; Gollery, Martin; Welsh, Mark; Olson, N Eric; Smith, Todd M

    2010-01-01

    Next Generation Sequencing technologies are limited by the lack of standard bioinformatics infrastructures that can reduce data storage, increase data processing performance, and integrate diverse information. HDF technologies address these requirements and have a long history of use in data-intensive science communities. They include general data file formats, libraries, and tools for working with the data. Compared to emerging standards, such as the SAM/BAM formats, HDF5-based systems demonstrate significantly better scalability, can support multiple indexes, store multiple data types, and are self-describing. For these reasons, HDF5 and its BioHDF extension are well suited for implementing data models to support the next generation of bioinformatics applications.
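
    As an illustration of the kind of HDF5 layout such a system might use, here is a minimal h5py sketch; the group and dataset names and the read data are hypothetical, not the actual BioHDF schema:

```python
import numpy as np
import h5py

# Hypothetical layout: aligned reads stored as fixed-width byte strings plus
# per-read positions, gzip-compressed and resizable along the first axis.
reads = np.array([b"ACGTACGT", b"TTGCACGT"], dtype="S8")
positions = np.array([1024, 2048], dtype=np.int64)

with h5py.File("reads.h5", "w") as f:
    grp = f.create_group("alignment")
    grp.create_dataset("sequence", data=reads, maxshape=(None,), compression="gzip")
    grp.create_dataset("position", data=positions, maxshape=(None,), compression="gzip")
    grp.attrs["reference"] = "chr1"  # self-describing metadata travels with the data

with h5py.File("reads.h5", "r") as f:
    print(f["alignment/position"][:])  # random access without loading everything
```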

  10. Navy Military Standards for Technical Software Documentation of Embedded Tactical Systems; a Critical Review.

    DTIC Science & Technology

    1985-09-01

    ... Bradley. Co-advisor: Carl R. Jones. Approved for public release; distribution is unlimited. ... of Standards and academic/commercial publications. The conclusion reached is that DOD-STD-1679A (Navy

  11. Current status of magnetic resonance imaging (MRI) and ultrasonography fusion software platforms for guidance of prostate biopsies.

    PubMed

    Logan, Jennifer K; Rais-Bahrami, Soroush; Turkbey, Baris; Gomella, Andrew; Amalou, Hayet; Choyke, Peter L; Wood, Bradford J; Pinto, Peter A

    2014-11-01

    Prostate MRI is currently the best diagnostic imaging method for detecting PCa. Magnetic resonance imaging (MRI)/ultrasonography (US) fusion allows the sensitivity and specificity of MRI to be combined with the real-time capabilities of transrectal ultrasonography (TRUS). Multiple approaches and techniques exist for MRI/US fusion and include direct 'in bore' MRI biopsies, cognitive fusion, and MRI/US fusion via software-based image coregistration platforms.

  12. Automated software for CCD-image processing and detection of small Solar System bodies

    NASA Astrophysics Data System (ADS)

    Savanevych, V.; Bryukhovetskiy, A.; Sokovikova, N.; Bezkrovniy, M.; Khlamov, S.; Elenin, L.; Movsesian, I.; Dihtyar, M.

    2014-07-01

    Efficiency is a crucial factor in the discovery of near-Earth asteroids (NEAs) and potentially hazardous asteroids. Current asteroid surveys yield many images per night. It is no longer possible for the observer to quickly view these images in blinking mode. This causes serious difficulty for large-aperture wide-field telescopes, capturing up to several tens of asteroids in one image. To achieve better asteroid-survey efficiency, it is necessary to design and develop automated software for frame processing. Currently the CoLiTec software solves the frame-processing problem for asteroid surveys in real-time mode. The automatically detected asteroids are subject to follow-up visual confirmation. The CoLiTec software is in use for the automated detection of asteroids at the Andrushivka Astronomical Observatory, at the Russian remote observatory ISON-NM (Mayhill, New Mexico, USA), as well as at the ISON-Kislovodsk and ISON-Ussuriysk observatories starting from the fall of 2013. CoLiTec led to the first automated asteroid and comet discoveries in the CIS (Commonwealth of Independent States) and Baltic countries. In 2012 (2011), 80% (86%) of observations and 74% (75%) of asteroid discoveries in these countries were made using CoLiTec. The comet C/2010 X1 (Elenin), discovered using CoLiTec on December 10, 2010, was the first comet discovered by a CIS astronomer over the past 20 years. In total, 4 of the 7 comets recently discovered in the CIS and Baltic countries were discovered with CoLiTec, namely C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON), and P/2013 V3 (Nevski). About 500,000 measurements made with CoLiTec were reported to the MPC, including over 1,500 preliminarily discovered objects. These objects include 21 Jupiter Trojan asteroids, 4 NEAs and 1 Centaur. Three other discovered asteroids were reported via dedicated electronic MPC circulars. In 2012, the CoLiTec users were ranked as No. 10, 13, and 22 in the list of the most

  13. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

    Spacecraft telemetry rates have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms and re-configurable computing hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly by the availability of low cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable beginning with the methodology described in [1], [2], and implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the

  14. Calculation of residence times and radiation doses using the standard PC software Excel.

    PubMed

    Herzog, H; Zilken, H; Niederbremer, A; Friedrich, W; Müller-Gärtner, H W

    1997-12-01

    We developed a program which aims to facilitate the calculation of radiation doses to single organs and the whole body. IMEDOSE uses Excel to include calculations, graphical displays, and interactions with the user in a single general-purpose PC software tool. To start the procedure, the input data are copied into a spreadsheet. They must represent percentage uptake values of several organs derived from measurements in animals or humans. To extrapolate these data up to seven half-lives of the radionuclide, fitting to one or two exponential functions is included and can be checked by the user. By means of the approximate time-activity information, the cumulated activity or residence times are calculated. Finally, these data are combined with the absorbed fraction doses (S-values) given by MIRD pamphlet No. 11 to yield radiation doses, the effective dose equivalent and the effective dose. These results are presented in a final table. Interactions are realized with push-buttons and drop-down menus. Calculations use the Visual Basic tool of Excel. In order to test our program, biodistribution data of fluorine-18 fluorodeoxyglucose were taken from the literature (Meija et al., J Nucl Med 1991; 32:699-706). For a 70-kg adult, the resulting radiation doses of all target organs listed in MIRD 11 differed from the ICRP 53 values by 1% +/- 18% on average. When the residence times were introduced into MIRDOSE3 (Stabin, J Nucl Med 1996; 37:538-546), the mean difference between our results and those of MIRDOSE3 was -3% +/- 6%. Both outcomes indicate the validity of the present approach.
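
    The calculation the paper implements in Excel/Visual Basic (fit the organ time-activity data with exponentials, then integrate to a residence time) can be sketched in any language. Below is a minimal Python illustration with invented mono-exponential uptake data, not the IMEDOSE code:

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented organ uptake data: fraction of injected activity vs. time (hours),
# already including physical decay of the radionuclide.
t_h = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
fraction = np.array([0.060, 0.055, 0.046, 0.032, 0.016])

def monoexp(t, a, lam):
    return a * np.exp(-lam * t)

(a, lam), _ = curve_fit(monoexp, t_h, fraction, p0=(0.06, 0.3))

# Residence time = integral of the fractional uptake from 0 to infinity,
# which for a mono-exponential is simply a / lambda (in hours).
residence_time_h = a / lam
print(f"Residence time: {residence_time_h:.3f} h")
```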

  15. Developing an ANSI standard for image quality tools for the testing of active millimeter wave imaging systems

    NASA Astrophysics Data System (ADS)

    Barber, Jeffrey; Greca, Joseph; Yam, Kevin; Weatherall, James C.; Smith, Peter R.; Smith, Barry T.

    2017-05-01

    In 2016, the millimeter wave (MMW) imaging community initiated the formation of a standard for millimeter wave image quality metrics. This new standard, American National Standards Institute (ANSI) N42.59, will apply to active MMW systems for security screening of humans. The Electromagnetic Signatures of Explosives Laboratory at the Transportation Security Laboratory is supporting the ANSI standards process via the creation of initial prototypes for round-robin testing with MMW imaging system manufacturers and experts. Results obtained for these prototypes will be used to inform the community and lead to consensus objective standards amongst stakeholders. Images collected with laboratory systems are presented along with results of preliminary image analysis. Future directions for object design, data collection and image processing are discussed.

  16. Dose imaging with gel-dosemeter layers: optical analysis and dedicated software.

    PubMed

    Gambarini, G; Carrara, M; Gay, S; Tomatis, S

    2006-01-01

    In radiotherapy involving thermal and epithermal neutrons, the knowledge of dose distributions, with separation of the contribution of each secondary radiation component, is of utmost importance. Layers of Fricke-Xylenol-Orange-infused gel dosemeters give the possibility of achieving such requirements because, owing to the layer-geometry, enriching or depleting the gel matrix of suitable isotopes does not sensibly alter neutron transport. The dosimetry method has been critically re-examined with the aim of improving its suitability to boron neutron capture therapy (BNCT) requirements, as it applies to the protocol of measurement and analysis, the sensitivity of the method and the range of the linearity of the dosemeters. Software has been developed and studied to obtain automatically the images of the various dose components with the established separation procedure.

  17. Digital mapping of side-scan sonar data with the Woods Hole Image Processing System software

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing and digitally mosaicking high- and low-resolution sidescan sonar data. In the past, processing and digital mosaicking have been accomplished with a dedicated, shore-based computer system. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for pre-processing sidescan sonar data. To extend the capabilities of the UNIX-based programs, digital mapping techniques have been developed. This report describes the initial development of an automated digital mapping procedure. Included is a description of the programs and steps required to complete the digital mosaicking on a UNIX-based computer system, and a comparison of techniques that the user may wish to select.

  18. A comparison of five standard methods for evaluating image intensity uniformity in partially parallel imaging MRI.

    PubMed

    Goerner, Frank L; Duong, Timothy; Stafford, R Jason; Clarke, Geoffrey D

    2013-08-01

    To investigate the utility of five different standard measurement methods for determining image uniformity for partially parallel imaging (PPI) acquisitions in terms of consistency across a variety of pulse sequences and reconstruction strategies. Images were produced with a phantom using a 12-channel head matrix coil in a 3T MRI system (TIM TRIO, Siemens Medical Solutions, Erlangen, Germany). Images produced using echo-planar, fast spin echo, gradient echo, and balanced steady state free precession pulse sequences were evaluated. Two different PPI reconstruction methods were investigated, generalized autocalibrating partially parallel acquisition algorithm (GRAPPA) and modified sensitivity-encoding (mSENSE) with acceleration factors (R) of 2, 3, and 4. Additionally images were acquired with conventional, two-dimensional Fourier imaging methods (R=1). Five measurement methods of uniformity, recommended by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) were considered. The methods investigated were (1) an ACR method and a (2) NEMA method for calculating the peak deviation nonuniformity, (3) a modification of a NEMA method used to produce a gray scale uniformity map, (4) determining the normalized absolute average deviation uniformity, and (5) a NEMA method that focused on 17 areas of the image to measure uniformity. Changes in uniformity as a function of reconstruction method at the same R-value were also investigated. Two-way analysis of variance (ANOVA) was used to determine whether R-value or reconstruction method had a greater influence on signal intensity uniformity measurements for partially parallel MRI. Two of the methods studied had consistently negative slopes when signal intensity uniformity was plotted against R-value. The results obtained comparing mSENSE against GRAPPA found no consistent difference between GRAPPA and mSENSE with regard to signal intensity uniformity. The results of the two
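
    One of the measures compared here, the ACR/NEMA peak-deviation (percent integral) uniformity, reduces to a one-line formula; in practice the extrema are taken from small averaged sub-ROIs rather than single pixels. A minimal sketch:

```python
import numpy as np

def percent_integral_uniformity(roi: np.ndarray) -> float:
    """ACR/NEMA-style peak-deviation uniformity over a region of interest:
    PIU = 100 * (1 - (S_max - S_min) / (S_max + S_min))."""
    s_max = float(roi.max())
    s_min = float(roi.min())
    return 100.0 * (1.0 - (s_max - s_min) / (s_max + s_min))
```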

  19. A software tool for stitching two PET/CT body segments into a single whole-body image set.

    PubMed

    Chang, Tingting; Chang, Guoping; Clark, John W; Rohren, Eric M; Mawlawi, Osama R

    2012-05-10

    A whole-body PET/CT scan extending from the vertex of the head to the toes of the patient is not feasible on a number of commercially available PET/CT scanners due to a limitation in the extent of bed travel on these systems. In such cases, the PET scan has to be divided into two parts: one covering the upper body segment, while the other covering the lower body segment. The aim of this paper is to describe and evaluate, using phantom and patient studies, a software tool that was developed to stitch two body segments and output a single whole-body image set, thereby facilitating the interpretation of whole-body PET scans. A mathematical model was first developed to stitch images from two body segments using three landmarks. The model calculates the relative positions of the landmarks on the two segments and then generates a rigid transformation that aligns these landmarks on the two segments. A software tool was written to implement this model while correcting for radioactive decay between the two body segments, and output a single DICOM whole-body image set with all the necessary tags. One phantom, and six patient studies were conducted to evaluate the performance of the software. In these studies, six radio-opaque markers (BBs) were used as landmarks (three on each leg). All studies were acquired in two body segments with BBs placed in the overlap region of the two segments. The PET/CT images of each segment were then stitched using the software tool to create a single DICOM whole-body PET/CT image. Evaluation of the stitching tool was based on visual inspection, consistency of radiotracer uptake in the two segments, and ability to display the resultant DICOM image set on two independent workstations. The software tool successfully stitched the two segments of the phantom image, and generated a single whole-body DICOM PET/CT image set that had the correct alignment and activity concentration throughout the image. The stitched images were viewed by two independent
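
    The two ingredients of the stitching model, a landmark-based alignment of the two segments and decay correction between their acquisition times, can be sketched as follows. This is an illustration under simplified assumptions (translation only, 18F half-life), not the authors' tool:

```python
import numpy as np

def stitch_translation(landmarks_upper, landmarks_lower):
    """Translation that maps the lower-segment landmark coordinates onto the
    upper-segment coordinates (e.g. three BB markers, each given as x, y, z).
    A fuller model would also estimate rotation (e.g. a Kabsch/Procrustes fit)."""
    upper = np.asarray(landmarks_upper, dtype=float)
    lower = np.asarray(landmarks_lower, dtype=float)
    return upper.mean(axis=0) - lower.mean(axis=0)

def decay_correct_second_segment(activity, minutes_between_segments, half_life_min=109.8):
    """Scale the later (second) segment back to the acquisition time of the
    first segment, assuming an 18F tracer (half-life ~109.8 min)."""
    return activity * np.exp(np.log(2.0) * minutes_between_segments / half_life_min)
```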

  20. ProteoAnnotator--open source proteogenomics annotation software supporting PSI standards.

    PubMed

    Ghali, Fawaz; Krishna, Ritesh; Perkins, Simon; Collins, Andrew; Xia, Dong; Wastling, Jonathan; Jones, Andrew R

    2014-12-01

    The recent massive increase in capability for sequencing genomes is producing enormous advances in our understanding of biological systems. However, there is a bottleneck in genome annotation--determining the structure of all transcribed genes. Experimental data from MS studies can play a major role in confirming and correcting gene structure--proteogenomics. However, there are some technical and practical challenges to overcome, since proteogenomics requires pipelines comprising a complex set of interconnected modules as well as bespoke routines, for example in protein inference and statistics. We are introducing a complete, open source pipeline for proteogenomics, called ProteoAnnotator, which incorporates a graphical user interface and implements the Proteomics Standards Initiative mzIdentML standard for each analysis stage. All steps are included as standalone modules with the mzIdentML library, allowing other groups to re-use the whole pipeline or constituent parts within other tools. We have developed new modules for pre-processing and combining multiple search databases, for performing peptide-level statistics on mzIdentML files, for scoring grouped protein identifications matched to a given genomic locus to validate that updates to the official gene models are statistically sound and for mapping end results back onto the genome. ProteoAnnotator is available from http://www.proteoannotator.org/. All MS data have been deposited in the ProteomeXchange with identifiers PXD001042 and PXD001390 (http://proteomecentral.proteomexchange.org/dataset/PXD001042; http://proteomecentral.proteomexchange.org/dataset/PXD001390). © 2014 The Authors. PROTEOMICS Published by WILEY-VCH Verlag GmbH & Co. KGaA.

  1. SU-E-J-264: Comparison of Two Commercially Available Software Platforms for Deformable Image Registration

    SciTech Connect

    Tuohy, R; Stathakis, S; Mavroidis, P; Bosse, C; Papanikolaou, N

    2014-06-01

    Purpose: To evaluate and compare the deformable image registration algorithms available in Velocity (Velocity Medical Solutions, Atlanta, GA) and RayStation (RaySearch Americas, Inc., Garden City, NY). Methods: Cone beam CTs (CBCTs) from each treatment fraction were collected for ten consecutive patients. The CBCTs, along with the simulation CT, were exported to the Velocity and RayStation software. Each CBCT was registered to the simulation CT using deformable image registration, and the resulting deformation vector matrix was generated. Each registration was visually inspected by a physicist and the prescribing physician. The volumes of the critical organs were calculated for each deformed CT and used for comparison. Results: The resulting deformable registrations revealed differences between the two algorithms. These differences became apparent when the organs at risk were contoured on each deformed CBCT. Volume differences on the order of 10 ± 30% were observed for the bladder, 17 ± 21% for the rectum, and 16 ± 10% for the sigmoid. The prostate and PTV volume differences were on the order of 3 ± 5%. The observed volumetric differences had a corresponding impact on the DVHs of all organs at risk. Differences of 8–10% in the mean dose were observed for all of the organs above. Conclusion: Deformable registration is a powerful tool that aids in the definition of critical structures and is often used for the evaluation of the daily dose delivered to the patient. It should be noted that extended QA should be performed before clinical implementation of the software, and users should be aware of the advantages and limitations of the methods.

  2. Comparison between three methods to value lower tear meniscus measured by image software

    NASA Astrophysics Data System (ADS)

    García-Resúa, Carlos; Pena-Verdeal, Hugo; Lira, Madalena; Oliveira, M. Elisabete Real; Giráldez, María. Jesús; Yebra-Pimentel, Eva

    2013-11-01

    To measure different parameters of lower tear meniscus height (TMH) by using photography with open measurement software. TMH was measured from the lower eyelid to the top of the meniscus (absolute TMH) and to the brightest meniscus reflex (reflex TMH). 121 young healthy subjects were included in the study. The lower tear meniscus was videotaped with a digital camera attached to a slit lamp. Three videos were recorded in the central meniscus portion using three different methods: slit lamp without fluorescein instillation, slit lamp with fluorescein instillation, and TearscopeTM without fluorescein instillation. A masked observer then obtained an image from each video and measured TMH using open-source measurement software based on Java (NIH ImageJ). Absolute central (TMH-CA), absolute with fluorescein (TMH-F), and absolute using the Tearscope (TMH-Tc) were compared with each other, as were reflex central (TMH-CR) and reflex Tearscope (TMH-TcR). Mean +/- S.D. values of TMH-CA, TMH-CR, TMH-F, TMH-Tc, and TMH-TcR were 0.209 +/- 0.049, 0.139 +/- 0.031, 0.222 +/- 0.058, 0.175 +/- 0.045, and 0.109 +/- 0.029 mm, respectively. Paired t-tests were performed for the pairs TMH-CA - TMH-CR, TMH-CA - TMH-F, TMH-CA - TMH-Tc, TMH-F - TMH-Tc, TMH-Tc - TMH-TcR, and TMH-CR - TMH-TcR. In all cases, a significant difference was found between the two variables (all p < 0.008). This study presents a useful tool to objectively measure TMH by photography. Eye care professionals should maintain the same TMH parameter in follow-up visits, given the differences between them.
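
    The measurement itself reduces to converting a pixel distance to millimetres after setting a scale, as is done in ImageJ. The sketch below shows this in Python; the calibration target, pixel coordinates, and function names are hypothetical.

      def mm_per_pixel(reference_px, reference_mm):
          """Scale factor derived from a feature of known physical size."""
          return reference_mm / reference_px

      def tear_meniscus_height_mm(y_lid_px, y_upper_px, scale_mm_per_px):
          """Height from the lower-lid edge to either the meniscus top
          (absolute TMH) or the brightest reflex (reflex TMH)."""
          return abs(y_upper_px - y_lid_px) * scale_mm_per_px

      scale = mm_per_pixel(reference_px=250, reference_mm=5.0)  # hypothetical 5 mm target
      print(tear_meniscus_height_mm(y_lid_px=410, y_upper_px=400, scale_mm_per_px=scale))
      # -> 0.2 mm, on the order of the absolute TMH values reported above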

  3. Fundus image fusion in EYEPLAN software: An evaluation of a novel technique for ocular melanoma radiation treatment planning

    SciTech Connect

    Daftari, Inder K.; Mishra, Kavita K.; O'Brien, Joan M.; and others

    2010-10-15

    Purpose: The purpose of this study is to evaluate a novel approach for treatment planning using digital fundus image fusion in EYEPLAN for proton beam radiation therapy (PBRT) planning for ocular melanoma. The authors used a prototype version of EYEPLAN software, which allows for digital registration of high-resolution fundus photographs. The authors examined the improvement in tumor localization by replanning with the addition of fundus photo superimposition in patients with macular area tumors. Methods: The new version of EYEPLAN (v3.05) software allows for the registration of fundus photographs as a background image. This is then used in conjunction with clinical examination, tantalum marker clips, surgeon's mapping, and ultrasound to draw the tumor contour accurately. In order to determine if the fundus image superimposition helps in tumor delineation and treatment planning, the authors identified 79 patients with choroidal melanoma in the macular location who were treated with PBRT. All patients were treated to a dose of 56 GyE in four fractions. The authors reviewed and replanned all 79 macular melanoma cases with superimposition of pretreatment and post-treatment fundus imaging in the new EYEPLAN software. For patients with no local failure, the authors analyzed whether fundus photograph fusion accurately depicted and confirmed tumor volumes as outlined in the original treatment plan. For patients with local failure, the authors determined whether the addition of the fundus photograph might have benefited in terms of more accurate tumor volume delineation. Results: The mean follow-up of patients was 33.6 ± 23 months. Tumor growth was seen in six eyes of the 79 macular lesions. All six failures were marginal failures or tumor misses in the region of dose fall-off, including one patient with both an in-field and a marginal recurrence. Among the six recurrences, three were managed by enucleation and one underwent retreatment with proton therapy. Three

  4. Free digital image analysis software helps to resolve equivocal scores in HER2 immunohistochemistry.

    PubMed

    Helin, Henrik O; Tuominen, Vilppu J; Ylinen, Onni; Helin, Heikki J; Isola, Jorma

    2016-02-01

    Evaluation of human epidermal growth factor receptor 2 (HER2) immunohistochemistry (IHC) is subject to interobserver variation and lack of reproducibility. Digital image analysis (DIA) has been shown to improve the consistency and accuracy of the evaluation and its use is encouraged in current testing guidelines. We studied whether digital image analysis using a free software application (ImmunoMembrane) can assist in interpreting HER2 IHC in equivocal 2+ cases. We also compared digital photomicrographs with whole-slide images (WSI) as material for ImmunoMembrane DIA. We stained 750 surgical resection specimens of invasive breast cancers immunohistochemically for HER2 and analysed staining with ImmunoMembrane. The ImmunoMembrane DIA scores were compared with the originally responsible pathologists' visual scores, a researcher's visual scores and in situ hybridisation (ISH) results. The originally responsible pathologists reported 9.1 % positive 3+ IHC scores, for the researcher this was 8.4 % and for ImmunoMembrane 9.5 %. Equivocal 2+ scores were 34 % for the pathologists, 43.7 % for the researcher and 10.1 % for ImmunoMembrane. Negative 0/1+ scores were 57.6 % for the pathologists, 46.8 % for the researcher and 80.8 % for ImmunoMembrane. There were six false positive cases, which were classified as 3+ by ImmunoMembrane and negative by ISH. Six cases were false negative defined as 0/1+ by IHC and positive by ISH. ImmunoMembrane DIA using digital photomicrographs and WSI showed almost perfect agreement. In conclusion, digital image analysis by ImmunoMembrane can help to resolve a majority of equivocal 2+ cases in HER2 IHC, which reduces the need for ISH testing.

  5. Measuring the Pain Area: An Intra- and Inter-Rater Reliability Study Using Image Analysis Software.

    PubMed

    Dos Reis, Felipe Jose Jandre; de Barros E Silva, Veronica; de Lucena, Raphaela Nunes; Mendes Cardoso, Bruno Alexandre; Nogueira, Leandro Calazans

    2016-01-01

    Pain drawings have frequently been used for clinical information and research. The aim of this study was to investigate the intra- and inter-rater reliability of area measurements performed on pain drawings. Our secondary objective was to verify the reliability when using computers with different screen sizes, both with and without mouse hardware. Pain drawings were completed by patients with chronic neck pain or neck-shoulder-arm pain. Four independent examiners participated in the study. Examiners A and B used the same computer with a 16-inch screen and wired mouse hardware. Examiner C used a notebook with a 16-inch screen and no mouse hardware, and Examiner D used a computer with an 11.6-inch screen and a wireless mouse. Image measurements were obtained using the GIMP and NIH ImageJ computer programs. The length of each image was measured using GIMP software in order to set the scale in ImageJ. Each marked area was then encircled and the total surface area (cm²) was calculated for each pain drawing measurement. A total of 117 areas were identified and 52 pain drawings were analyzed. The intra-rater reliability was high for all examiners (ICC = 0.989). The inter-rater reliability was also high. No significant differences were observed when using different screen sizes or when using or not using the mouse hardware. This suggests that the precision of these measurements is acceptable for the use of this method as a measurement tool in clinical practice and research. © 2014 World Institute of Pain.
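
    A minimal sketch of the underlying area computation, assuming the traced region is available as a binary mask and that the scale has been set from a reference length as in ImageJ; all values and names are illustrative.

      import numpy as np

      def area_cm2(mask, reference_len_px, reference_len_cm):
          """Surface area of the encircled pain region: number of marked
          pixels times the squared physical size of one pixel."""
          cm_per_px = reference_len_cm / reference_len_px
          return mask.sum() * cm_per_px ** 2

      mask = np.zeros((600, 800), dtype=bool)   # hypothetical digitized body chart
      mask[200:260, 300:380] = True             # one traced pain area
      print(round(area_cm2(mask, reference_len_px=800, reference_len_cm=40.0), 1))
      # 60 x 80 pixels at 0.05 cm/pixel -> 12.0 cm²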

  6. A flexible software architecture for scalable real-time image and video processing applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route messages from publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
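
    The messaging layer's topic-based publish/subscribe routing can be sketched in a few lines. This is a simplified, single-process stand-in for the pattern the architecture describes, not the authors' implementation, and the topic names are invented.

      from collections import defaultdict
      from typing import Any, Callable

      class MessageBus:
          """Topic-based publish/subscribe: messages published to a topic are
          routed only to the callbacks subscribed to that topic."""

          def __init__(self) -> None:
              self._subscribers = defaultdict(list)  # topic -> list of callbacks

          def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
              self._subscribers[topic].append(callback)

          def publish(self, topic: str, message: Any) -> None:
              for callback in self._subscribers[topic]:
                  callback(message)

      bus = MessageBus()
      bus.subscribe("frames/raw", lambda m: print("processing", m))
      bus.subscribe("frames/raw", lambda m: print("displaying", m))
      bus.publish("frames/raw", {"id": 1})       # delivered to both subscribers
      bus.publish("frames/stats", {"mean": 42})  # no subscriber -> silently dropped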

  7. Evaluation of cassette-based digital radiography detectors using standardized image quality metrics: AAPM TG-150 Draft Image Detector Tests.

    PubMed

    Li, Guang; Greene, Travis C; Nishino, Thomas K; Willis, Charles E

    2016-09-01

    The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that a fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG-150 tests can be
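
    To make the ROI-based approach concrete, the sketch below tiles a flat-field image with square ROIs and derives simple signal and SNR nonuniformity figures plus the minimum SNR. The exact definitions in the TG-150 draft differ in detail; the ROI size, step, and max-minus-min-over-mean formulas here are assumptions.

      import numpy as np

      def roi_grid(image, roi_px, step_px):
          """Yield square ROIs tiled across a flat-field image."""
          rows, cols = image.shape
          for r in range(0, rows - roi_px + 1, step_px):
              for c in range(0, cols - roi_px + 1, step_px):
                  yield image[r:r + roi_px, c:c + roi_px]

      def uniformity_metrics(image, roi_px=128, step_px=128):
          """Per-ROI mean signal and SNR, summarized as (max - min) / mean."""
          means, snrs = [], []
          for roi in roi_grid(image.astype(float), roi_px, step_px):
              means.append(roi.mean())
              snrs.append(roi.mean() / roi.std())
          means, snrs = np.array(means), np.array(snrs)
          signal_nonuniformity = (means.max() - means.min()) / means.mean()
          snr_nonuniformity = (snrs.max() - snrs.min()) / snrs.mean()
          return signal_nonuniformity, snr_nonuniformity, snrs.min()

    Varying roi_px in such a calculation reproduces the dependence of the nonuniformity values on ROI size noted in the abstract.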

  8. WCPS: An Open Geospatial Consortium Standard Applied to Flight Hardware/Software

    NASA Astrophysics Data System (ADS)

    Cappelaere, P. G.; Mandl, D.; Stanley, J.; Frye, S.; Baumann, P.

    2009-12-01

    The Open GeoSpatial Consortium Web Coverage Processing Service (WCPS) has the potential to allow advanced users to define processing algorithms using the web environment and seamlessly provide the capability to upload them directly to the satellite for autonomous execution using smart agent technology. The Open Geospatial Consortium recently announced the adoption of a specification for a Web Coverage Processing Service on Mar 25, 2009. This effort has been spearheaded by Dr. Peter Baumann, Jacobs University, Bremen, Germany. The WCPS specifies a coverage processing language allowing clients to send processing requests for evaluation to a server. NASA has been taking the next step by wrapping the user-defined requests into dynamic agents that can be uploaded to a spacecraft for onboard processing. This could have a dramatic impact to the new decadal missions such as HyspIRI. Dynamic onboard classifiers are key to providing level 2 products in near-realtime directly to end-users on the ground. This capability, currently implemented on the Hyspiri pathfinder testbed using the NASA SpaceCube, will be demonstrated on EO-1, a NASA Hyperspectral/Multispectral imager, as the next capability for agile autonomous science experiments.

  9. Leveraging Open-Source Software and Data Standards within the Integrated Water Resources Science and Services Initiative

    NASA Astrophysics Data System (ADS)

    Clark, E. P.

    2014-12-01

    The National Oceanic and Atmospheric Administration together with the U.S. Army Corps of Engineers and the U.S. Geological Survey establish the Integrated Water Resources Science and Service (IWRSS) consortium in 2011. IWRSS is a cross cutting, multidisciplinary approach to addressing complex water problems. The IWRSS Interoperability and Data Synchronization Scoping Team was tasked with documenting requirements related to the sharing of data sets essential for monitoring, forecasting the water nation's water resources as well as informing operations and management of hydraulic structures. A number of open source software tools were identified in the team's report as well as the need to adopt open source data structures and standards. This presentation will discuss the potential applications of open-source software and development practices within the IWRSS-Interoperability and Data Synchronization construct as well as explore the underlying benefits that open-source approaches offer to the federal water resources community. Programmatically this strategy facilitates a common operating picture between the federal water enterprise that is essential for a weather and water ready nation.

  10. Dependency of cardiac rubidium-82 imaging quantitative measures on age, gender, vascular territory, and software in a cardiovascular normal population.

    PubMed

    Sunderland, John J; Pan, Xiao-Bo; Declerck, Jerome; Menda, Yusuf

    2015-02-01

    Recent technological improvements to PET imaging equipment, combined with the availability of software optimized to calculate regional myocardial blood flow (MBF) and myocardial flow reserve (MFR), create a paradigm-shifting opportunity to provide new clinically relevant quantitative information to cardiologists. However, clinical interpretation of MBF and MFR is entirely dependent upon knowledge of MBF and MFR values in normal populations and subpopulations. This work reports Rb-82-based MBF and MFR measurements for a series of 49 verified cardiovascularly normal subjects as a preliminary baseline for future clinical studies. Forty-nine subjects (24F/25M, ages 41-69) with low probability for coronary artery disease and with normal exercise stress tests were included. These subjects underwent rest/dipyridamole stress Rb-82 myocardial perfusion imaging using standard clinical techniques (40 mCi injection, 6-minute acquisition) on a Siemens Biograph 40 PET/CT scanner with the high count rate detector option. List-mode data were rehistogrammed into 26 dynamic frames (12 × 5 seconds, 6 × 10 seconds, 4 × 20 seconds, 4 × 40 seconds). Cardiac images were processed, and MBF and MFR calculated, using Siemens syngo MBF, PMOD, and FlowQuant software with a single-compartment Rb-82 model. Global myocardial blood flow under pharmacological stress, as measured by PMOD, syngo MBF, and FlowQuant, was 3.10 ± 0.72, 2.80 ± 0.66, and 2.60 ± 0.63 mL·min⁻¹·g⁻¹ for the 24 females, and 2.60 ± 0.84, 2.33 ± 0.75, and 2.15 ± 0.62 mL·min⁻¹·g⁻¹ for the 25 males, respectively. Rest flows for PMOD, syngo MBF, and FlowQuant averaged 1.32 ± 0.42, 1.20 ± 0.33, and 1.06 ± 0.38 mL·min⁻¹·g⁻¹ for the female subjects, and 1.12 ± 0.29, 0.90 ± 0.26, and 0.85 ± 0.24 mL·min⁻¹·g⁻¹ for the males. Myocardial flow reserves for PMOD, syngo MBF, and FlowQuant for the female normals were calculated to be 2.50 ± 0.80, 2.53 ± 0.67, 2.71 ± 0.90, and 2.50 ± 1.19, 2
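
    For orientation, the single-compartment kinetics underlying these MBF estimates can be written as a convolution of the arterial input with K1·exp(-k2·t); the sketch below shows that forward model and the stress/rest ratio that defines MFR. The Rb-82 extraction-fraction correction that relates K1 to MBF differs between software packages and is omitted here, and all numbers are illustrative rather than taken from the study.

      import numpy as np

      def tissue_curve(t, input_curve, K1, k2):
          """One-tissue-compartment forward model: tissue activity is the
          arterial input convolved with the impulse response K1 * exp(-k2 * t)."""
          dt = t[1] - t[0]
          impulse_response = K1 * np.exp(-k2 * t)
          return np.convolve(input_curve, impulse_response)[:len(t)] * dt

      def myocardial_flow_reserve(stress_mbf, rest_mbf):
          """MFR is the ratio of stress to rest myocardial blood flow."""
          return stress_mbf / rest_mbf

      t = np.arange(0, 360, 5.0)                        # seconds
      aif = np.exp(-((t - 30.0) / 15.0) ** 2)           # toy arterial input function
      ct = tissue_curve(t, aif, K1=0.6, k2=0.1)         # modelled tissue curve
      print(round(myocardial_flow_reserve(2.8, 1.2), 2))  # e.g. 2.33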

  11. Standardized platform for coregistration of nonconcurrent diffuse optical and magnetic resonance breast images obtained in different geometries.

    PubMed

    Azar, Fred S; Lee, Kijoon; Khamene, Ali; Choe, Regine; Corlu, Alper; Konecky, Soren D; Sauer, Frank; Yodh, Arjun G

    2007-01-01

    We present a novel methodology for combining breast image data obtained at different times, in different geometries, and by different techniques. We combine data based on diffuse optical tomography (DOT) and magnetic resonance imaging (MRI). The software platform integrates advanced multimodal registration and segmentation algorithms, requires minimal user experience, and employs computationally efficient techniques. The resulting superposed 3-D tomographs facilitate tissue analyses based on structural and functional data derived from both modalities, and readily permit enhancement of DOT data reconstruction using MRI-derived a-priori structural information. We demonstrate the multimodal registration method using a simulated phantom, and we present initial patient studies that confirm that tumorous regions in a patient breast found by both imaging modalities exhibit significantly higher total hemoglobin concentration (THC) than surrounding normal tissues. The average THC in the tumorous regions is one to three standard deviations larger than the overall breast average THC for all patients.

  12. WE-G-BRD-07: Automated MR Image Standardization and Auto-Contouring Strategy for MRI-Based Adaptive Brachytherapy for Cervix Cancer

    SciTech Connect

    Saleh, H Al; Erickson, B; Paulson, E

    2015-06-15

    Purpose: MRI-based adaptive brachytherapy (ABT) is an emerging treatment modality for patients with gynecological tumors. However, MR image intensity non-uniformities (IINU) can vary from fraction to fraction, complicating image interpretation and auto-contouring accuracy. We demonstrate here an automated MR image standardization and auto-contouring strategy for MRI-based ABT of cervix cancer. Methods: MR image standardization consisted of: 1) IINU correction using the MNI N3 algorithm, 2) noise filtering using anisotropic diffusion, and 3) signal intensity normalization using the volumetric median. This post-processing chain was implemented as a series of custom Matlab and Java extensions in MIM (v6.4.5, MIM Software) and was applied to 3D T2 SPACE images of six patients undergoing MRI-based ABT at 3T. Coefficients of variation (CV=σ/µ) were calculated for both original and standardized images and compared using Mann-Whitney tests. Patient-specific cumulative MR atlases of bladder, rectum, and sigmoid contours were constructed throughout ABT, using original and standardized MR images from all previous ABT fractions. Auto-contouring was performed in MIM two ways: 1) best-match of one atlas image to the daily MR image, 2) multi-match of all previous fraction atlas images to the daily MR image. Dice’s Similarity Coefficients (DSCs) were calculated for auto-generated contours relative to reference contours for both original and standardized MR images and compared using Mann-Whitney tests. Results: Significant improvements in CV were detected following MR image standardization (p=0.0043), demonstrating an improvement in MR image uniformity. DSCs consistently increased for auto-contoured bladder, rectum, and sigmoid following MR image standardization, with the highest DSCs detected when the combination of MR image standardization and multi-match cumulative atlas-based auto-contouring was utilized. Conclusion: MR image standardization significantly improves MR image
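
    A rough stand-in for that three-step chain, written with SimpleITK rather than the authors' MIM/Matlab/Java extensions: N4 bias-field correction substitutes for the MNI N3 algorithm, curvature anisotropic diffusion for the noise filtering, and division by the volumetric median for the normalization. The filter parameters and the Otsu-based mask are assumptions.

      import numpy as np
      import SimpleITK as sitk

      def standardize(mr_path):
          img = sitk.Cast(sitk.ReadImage(mr_path), sitk.sitkFloat32)

          # 1) intensity non-uniformity (bias field) correction
          mask = sitk.OtsuThreshold(img, 0, 1, 200)
          corrected = sitk.N4BiasFieldCorrection(img, mask)

          # 2) edge-preserving noise filtering
          smoothed = sitk.CurvatureAnisotropicDiffusion(
              corrected, timeStep=0.0625, conductanceParameter=2.0, numberOfIterations=5)

          # 3) signal intensity normalization by the volumetric median
          median = float(np.median(sitk.GetArrayFromImage(smoothed)))
          return smoothed / median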

  13. SU-E-I-13: Evaluation of Metal Artifact Reduction (MAR) Software On Computed Tomography (CT) Images

    SciTech Connect

    Huang, V; Kohli, K

    2015-06-15

    Purpose: A new commercially available metal artifact reduction (MAR) software in computed tomography (CT) imaging was evaluated with phantoms in the presence of metals. The goal was to assess the ability of the software to restore the CT number in the vicinity of the metals without impacting the image quality. Methods: A Catphan 504 was scanned with a GE Optima RT 580 CT scanner (GE Healthcare, Milwaukee, WI) and the images were reconstructed with and without the MAR software. Both datasets were analyzed with Image Owl QA software (Image Owl Inc, Greenwich, NY). CT number sensitometry, MTF, low contrast, uniformity, noise and spatial accuracy were compared for scans with and without MAR software. In addition, an in-house made phantom was scanned with and without a stainless steel insert at three different locations. The accuracy of the CT number and metal insert dimension were investigated as well. Results: Comparisons between scans with and without the MAR algorithm on the Catphan phantom demonstrated similar image quality results. However, noise was slightly higher with the MAR algorithm. Evaluation of the CT number at various locations of the in-house made phantom was also performed. The baseline HU, obtained from the scan without the metal insert, was compared to scans with the stainless steel insert at 3 different locations. The HU difference between the baseline scan and the metal scan was reduced when the MAR algorithm was applied. In addition, the physical diameter of the stainless steel rod was over-estimated by the MAR algorithm by 0.9 mm. Conclusion: This work indicates that, in the presence of metal in CT scans, the MAR algorithm is capable of providing a more accurate CT number without compromising the overall image quality. Future work will include the dosimetric impact of the MAR algorithm.

  14. Development of imaging mass spectrometry (IMS) dataset extractor software, IMS convolution.

    PubMed

    Hayasaka, Takahiro; Goto-Inoue, Naoko; Ushijima, Masaru; Yao, Ikuko; Yuba-Kubo, Akiko; Wakui, Masatoshi; Kajihara, Shigeki; Matsuura, Masaaki; Setou, Mitsutoshi

    2011-07-01

    Imaging mass spectrometry (IMS) is a powerful tool for detecting and visualizing biomolecules in tissue sections. The technology has been applied to several fields, and many researchers have started to apply it to pathological samples. However, it is very difficult for inexperienced users to extract meaningful signals from enormous IMS datasets, and the procedure is time-consuming. We have developed software, called IMS Convolution with regions of interest (ROI), to automatically extract meaningful signals from IMS datasets. The processing is based on the detection of common peaks within an ordered area of the IMS dataset. In this study, the IMS dataset from a mouse eyeball section was acquired with a mass microscope that we recently developed, and the peaks extracted by the manual and automatic procedures were compared. The manual procedure extracted 16 peaks with higher intensities from the mass spectra averaged over all measurement points. In contrast, the automatic procedure using IMS Convolution extracted equivalent peaks easily and without manual effort. Moreover, the use of ROIs with IMS Convolution enabled us to extract the peaks in each ROI area, and all 16 ion images of the mouse eyeball tissue were from phosphatidylcholine species. Therefore, we believe that IMS Convolution with ROIs can automatically extract meaningful peaks from large-volume IMS datasets for inexperienced users as well as for experienced researchers.
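
    The core idea of detecting peaks that recur across the pixels of an ordered region can be sketched as below; the peak-picking parameters, data layout, and voting threshold are assumptions, not the published algorithm.

      import numpy as np
      from scipy.signal import find_peaks

      def common_roi_peaks(spectra, prominence_frac=0.05, min_fraction=0.8):
          """Return the m/z bin indices that appear as peaks in at least
          `min_fraction` of the spectra belonging to one ROI.

          `spectra` is an (n_pixels, n_mz_bins) array of intensities."""
          n_pixels, n_bins = spectra.shape
          votes = np.zeros(n_bins, dtype=int)
          for spectrum in spectra:
              peaks, _ = find_peaks(spectrum, prominence=prominence_frac * spectrum.max())
              votes[peaks] += 1
          return np.nonzero(votes >= min_fraction * n_pixels)[0]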

  15. A Monte Carlo software bench for simulation of spectral k-edge CT imaging: Initial results.

    PubMed

    Nasirudin, Radin A; Penchev, Petar; Mei, Kai; Rummeny, Ernst J; Fiebich, Martin; Noël, Peter B

    2015-06-01

    Spectral Computed Tomography (SCT) systems equipped with photon counting detectors (PCD) are clinically desired, since such systems provide not only additional diagnostic information but also radiation dose reductions by a factor of two or more. The current unavailability of clinical PCDs makes a simulation of such systems necessary. In this paper, we present a Monte Carlo-based simulation of a SCT equipped with a PCD. The aim of this development is to facilitate research on potential clinical applications. Our MC simulator takes into account scattering interactions within the scanned object and has the ability to simulate scans with and without scatter and a wide variety of imaging parameters. To demonstrate the usefulness of such a MC simulator for development of SCT applications, a phantom with contrast targets covering a wide range of clinically significant iodine concentrations is simulated. With those simulations the impact of scatter and exposure on image quality and material decomposition results is investigated. Our results illustrate that scatter radiation plays a significant role in visual as well as quantitative results. Scatter radiation can reduce the accuracy of contrast agent concentration by up to 15%. We present a reliable and robust software bench for simulation of SCTs equipped with PCDs. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  16. Cloud computing geospatial application for water resources based on free and open source software and open standards - a prototype

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj

    2016-04-01

    Presently, most existing software is desktop-based and designed to work on a single computer, which is a major limitation in many respects: limited processing and storage capacity, accessibility, availability, and so on. The only feasible solution lies in the web and the cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first (VM1) runs on Amazon Web Services (AWS) and the second (VM2) runs on a Xen cloud platform. The presented cloud application is developed using free and open source software, open standards, and prototype code. The cloud application presents a framework for developing specialized cloud geospatial applications that need only a web browser to be used. This cloud application is the ultimate collaborative geospatial platform, because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The presented cloud application is available all the time, is accessible from everywhere, is scalable, works in a distributed computing environment, creates a real-time multiuser collaboration platform, uses interoperable programming-language code and components, and is flexible in including additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services for 1) data infrastructure (DI), 2) support for water resources modelling (WRM), and 3) user management. The web services run on the two VMs, which communicate over the internet to provide services to users. The application was tested on the Zletovica river basin case study with concurrent multiple users. The application is a state

  17. Pluri-IQ: Quantification of Embryonic Stem Cell Pluripotency through an Image-Based Analysis Software.

    PubMed

    Perestrelo, Tânia; Chen, Weitong; Correia, Marcelo; Le, Christopher; Pereira, Sandro; Rodrigues, Ana S; Sousa, Maria I; Ramalho-Santos, João; Wirtz, Denis

    2017-08-08

    Image-based assays, such as alkaline phosphatase staining or immunocytochemistry for pluripotent markers, are common methods used in the stem cell field to assess pluripotency. Although an increasing number of image-analysis approaches have been described, there is still a lack of software available to automatically quantify pluripotency in large images after pluripotency staining. To address this need, we developed a robust and rapid image processing software, Pluri-IQ, which allows the automatic evaluation of pluripotency in large low-magnification images. Using mouse embryonic stem cells (mESC) as a model, we combined an automated segmentation algorithm with a supervised machine-learning platform to classify colonies as pluripotent, mixed, or differentiated. In addition, Pluri-IQ allows the automatic comparison between different culture conditions. This efficient, user-friendly open-source software can be easily applied to images derived from pluripotent cells or cells that express pluripotent markers (e.g., OCT4-GFP) and can be used routinely, decreasing image assessment bias. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
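
    The colony classification step can be approximated with any supervised learner once per-colony features have been extracted from the segmentation; the toy features, labels, and classifier choice below are assumptions that stand in for the platform's own machine-learning stage.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      # hypothetical per-colony features: area (px), mean marker intensity, circularity
      features = np.array([[1200, 0.91, 0.88],
                           [3400, 0.42, 0.61],
                           [2100, 0.75, 0.79],
                           [4100, 0.18, 0.55]])
      labels = ["pluripotent", "differentiated", "mixed", "differentiated"]

      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)
      print(clf.predict([[1500, 0.85, 0.90]]))  # classify a newly segmented colony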

  18. 3.0 Tesla magnetic resonance imaging: A new standard in liver imaging?

    PubMed

    Girometti, Rossano

    2015-07-28

    An ever-increasing number of 3.0 Tesla (T) magnets are installed worldwide. Moving from the standard of 1.5 T to higher field strengths implies a number of potential advantages and drawbacks, requiring careful optimization of imaging protocols or the implementation of novel hardware components. Clinical practice and literature review suggest that state-of-the-art 3.0 T is equivalent to 1.5 T in the assessment of focal liver lesions and diffuse liver disease. Therefore, further technical improvements are needed in order to fully exploit the potential of higher field strength.

  19. 3.0 Tesla magnetic resonance imaging: A new standard in liver imaging?

    PubMed Central

    Girometti, Rossano

    2015-01-01

    An ever-increasing number of 3.0 Tesla (T) magnets are installed worldwide. Moving from the standard of 1.5 T to higher field strengths implies a number of potential advantages and drawbacks, requiring careful optimization of imaging protocols or the implementation of novel hardware components. Clinical practice and literature review suggest that state-of-the-art 3.0 T is equivalent to 1.5 T in the assessment of focal liver lesions and diffuse liver disease. Therefore, further technical improvements are needed in order to fully exploit the potential of higher field strength. PMID:26244063

  20. Proof-of-Principle of rTLC, an Open-Source Software Developed for Image Evaluation and Multivariate Analysis of Planar Chromatograms.

    PubMed

    Fichou, Dimitri; Ristivojević, Petar; Morlock, Gertrud E

    2016-12-20

    High-performance thin-layer chromatography (HPTLC) is an advantageous analytical technique for the analysis of complex samples. Combined with multivariate data analysis, it turns out to be a powerful tool for profiling many samples in parallel. So far, chromatogram analysis has been time-consuming and has required the application of at least two software packages to convert HPTLC chromatograms into a numerical data matrix. Hence, this study aimed to develop a powerful, all-in-one, open-source software for user-friendly image processing and multivariate analysis of HPTLC chromatograms. Using the caret package for machine learning, the software was set up in the R programming language with an HTML user interface created with the shiny package. The newly developed software, called rTLC, is deployed online, and instructions for direct use as a web application and for local installation, if required, are available on GitHub. rTLC was created especially for routine use in planar chromatography. It provides the necessary tools to guide the user in a fast protocol to the statistical data output (e.g., data extraction, preprocessing techniques, variable selection, and data analysis). rTLC offers a standardized procedure and informative visualization tools that allow the user to explore the data in a reproducible and comprehensive way. As proof-of-principle of rTLC, German propolis samples were analyzed using pattern recognition techniques, principal component analysis, hierarchical cluster analysis, and predictive techniques such as random forest and support vector machines.

  1. 5D CNS+ Software for Automatically Imaging Axial, Sagittal, and Coronal Planes of Normal and Abnormal Second-Trimester Fetal Brains.

    PubMed

    Rizzo, Giuseppe; Capponi, Alessandra; Persico, Nicola; Ghi, Tullio; Nazzaro, Giovanni; Boito, Simona; Pietrolucci, Maria Elena; Arduini, Domenico

    2016-10-01

    The purpose of this study was to test the new 5D CNS+ software (Samsung Medison Co, Ltd, Seoul, Korea), which is designed to image axial, sagittal, and coronal planes of the fetal brain from volumes obtained by 3-dimensional sonography. The study consisted of 2 different steps. First, in a prospective study, 3-dimensional fetal brain volumes were acquired in 183 consecutive normal singleton pregnancies undergoing routine sonographic examinations at 18 to 24 weeks' gestation. The 5D CNS+ software was applied, and the percentage of adequate visualization of brain diagnostic planes was evaluated by 2 independent observers. In the second step, the software was also tested in 22 fetuses with cerebral anomalies. In 180 of 183 fetuses (98.4%), 5D CNS+ successfully reconstructed all of the diagnostic planes. Using the software on healthy fetuses, the observers confirmed the presence of diagnostic images with visualization rates ranging from 97.7% to 99.4% for axial planes, 94.4% to 97.7% for sagittal planes, and 92.2% to 97.2% for coronal planes. The Cohen κ coefficient was calculated to evaluate the agreement between the observers and yielded values of 0.96 or greater for axial planes, 0.90 or greater for sagittal planes, and 0.89 or greater for coronal planes. All 22 fetuses with brain anomalies were identified among a series that also included healthy fetuses, and in 21 of the 22 cases, a correct diagnosis was made. 5D CNS+ was efficient in successfully imaging standard axial, sagittal, and coronal planes of the fetal brain. This approach may simplify the examination of the fetal central nervous system and reduce operator dependency.
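
    Inter-observer agreement of the kind reported here is a plain Cohen's κ over paired categorical ratings; the sketch below uses scikit-learn with invented example ratings, not the study's data.

      from sklearn.metrics import cohen_kappa_score

      # hypothetical plane-visualization ratings by the two independent observers
      observer_a = ["adequate", "adequate", "inadequate", "adequate", "adequate"]
      observer_b = ["adequate", "adequate", "inadequate", "inadequate", "adequate"]

      print(round(cohen_kappa_score(observer_a, observer_b), 2))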

  2. Looking to 2050: The USGS Integrated Software for Imagers and Spectrometers (ISIS)

    NASA Astrophysics Data System (ADS)

    Becker, T. L.; Edmundson, K. L.; Sides, S.; Hare, T. M.; Laura, J. R.

    2017-02-01

    The Astrogeology Science Center develops and maintains the ISIS software in support of planetary data for a diverse set of missions. We plan to continue providing this support into the future while adapting to changes in hardware, software, and science requirements.

  3. Caltech/JPL Conference on Image Processing Technology, Data Sources and Software for Commercial and Scientific Applications

    NASA Technical Reports Server (NTRS)

    Redmann, G. H.

    1976-01-01

    Recent advances in image processing and new applications are presented to the user community to stimulate the development and transfer of this technology to industrial and commercial applications. The Proceedings contains 37 papers and abstracts, including many illustrations (some in color) and provides a single reference source for the user community regarding the ordering and obtaining of NASA-developed image-processing software and science data.

  4. Accuracy and feasibility of three different methods for software-based image fusion in whole-body PET and CT.

    PubMed

    Putzer, Daniel; Henninger, Benjamin; Kovacs, Peter; Uprimny, Christian; Kendler, Dorota; Jaschke, Werner; Bale, Reto J

    2016-06-01

    Although PET/CT provides valuable diagnostic information in a great number of clinical indications, the availability of hybrid PET/CT scanners is mainly limited to clinical centers. Software-based image fusion would facilitate combined reading of CT and PET data sets when hardware image fusion is not available. To analyze the relevance of retrospective image fusion of separately acquired PET and CT data sets, we studied the accuracy, practicability, and reproducibility of three different image registration techniques. We evaluated whole-body 18F-FDG-PET and CT data sets of 71 oncologic patients. Images were fused retrospectively using the Stealth Station System, Treon (Medtronic Inc., Louisville, CO, USA) equipped with Cranial4 software. External markers fixed to a vacuum mattress were used as the reference for exact repositioning. Registration was repeated using internal anatomic landmarks and the Automerge software; accuracy was assessed for all three methods by measuring distances between the liver representations in CT and PET with reference to a common coordinate system. On the first measurement of image fusions with external markers, 53 were successful, 16 feasible, and 2 not successful. Using anatomic landmarks, 42 were successful, 26 feasible, and 3 not successful. Using the Automerge software, only 13 were successful. The mean distance between center points in PET and CT was 7.69±4.96 mm on the first measurement and 7.65±4.2 mm on the second. Results with external markers correlate very well, and inaccuracies are significantly lower (P<0.001) than those obtained using anatomical landmarks (10.38±6.13 mm and 10.83±6.23 mm). The analysis also revealed significantly faster alignment using external markers (P<0.001). External fiducials, in combination with immobilization devices and breathing protocols, allow for highly accurate image fusion cost-effectively and in significantly less time, making this an attractive alternative for PET/CT interpretation when a hybrid scanner is not available.

  5. eWaterCycle: Building an operational global Hydrological forecasting system based on standards and open source software

    NASA Astrophysics Data System (ADS)

    Drost, Niels; Bierkens, Marc; Donchyts, Gennadii; van de Giesen, Nick; Hummel, Stef; Hut, Rolf; Kockx, Arno; van Meersbergen, Maarten; Sutanudjaja, Edwin; Verlaan, Martin; Weerts, Albrecht; Winsemius, Hessel

    2015-04-01

    At EGU 2015, the eWaterCycle project (www.ewatercycle.org) will launch an operational high-resolution global hydrological model, including 14-day ensemble forecasts. Within the eWaterCycle project we aim to use standards and open source software as much as possible. This ensures the sustainability of the software created and the ability to swap out components as newer technologies and solutions become available. It also allows us to build the system much faster than would otherwise be the case. At the heart of the eWaterCycle system is the PCRGLOB-WB global hydrological model (www.globalhydrology.nl) developed at Utrecht University. Version 2.0 of this model is implemented in Python and models a wide range of hydrological processes at 10 x 10 km (and potentially higher) resolution. To assimilate near-real-time satellite data into the model and run an ensemble forecast, we use the OpenDA system (www.openda.org). This allows us to make use of different data assimilation techniques without the need to implement them from scratch. As the data assimilation technique we currently use a variant of an Ensemble Kalman Filter, specifically optimized for high-performance computing environments. Coupling of the model with the data assimilation is done with the Basic Model Interface (BMI), developed in the framework of the Community Surface Dynamics Modeling System (CSDMS) (csdms.colorado.edu). We have added support for BMI to PCRGLOB-WB and developed a BMI adapter for OpenDA, allowing OpenDA to use any BMI-compatible model. We currently use multiple different BMI models with OpenDA, already showing the benefits of using this standard. Throughout the system, all file-based input and output is done via NetCDF files. We use several standard tools for pre- and post-processing of the data. Finally, we use ncWMS, a NetCDF-based implementation of the Web Map Service (WMS) protocol, to serve the forecasting results. We have built a 3D web application based on Cesium.js to visualize the output. In
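
    The BMI coupling mentioned above boils down to a small, fixed set of functions that any model exposes so a framework such as OpenDA can step it and read or overwrite its state. A minimal Python skeleton is sketched below; the method names follow the CSDMS BMI convention, while the toy state, dynamics, and configuration file name are placeholders.

      class BmiHydroModel:
          """Minimal BMI-style wrapper that a data-assimilation framework can drive."""

          def initialize(self, config_file: str) -> None:
              self.time = 0.0
              self.state = {"soil_moisture": 0.30}   # toy model state

          def update(self) -> None:
              self.time += 86400.0                   # advance one daily time step
              self.state["soil_moisture"] *= 0.99    # placeholder dynamics

          def get_value(self, name: str) -> float:
              return self.state[name]

          def set_value(self, name: str, value: float) -> None:
              self.state[name] = value               # e.g. write back an EnKF analysis

          def finalize(self) -> None:
              pass

      model = BmiHydroModel()
      model.initialize("pcrglobwb_setup.ini")        # hypothetical configuration file
      model.update()
      model.set_value("soil_moisture", 0.35)         # assimilation overwrites the state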