Science.gov

Sample records for standard imaging software

  1. Software Formal Inspections Standard

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This Software Formal Inspections Standard (hereinafter referred to as Standard) is applicable to NASA software. This Standard defines the requirements that shall be fulfilled by the software formal inspections process whenever this process is specified for NASA software. The objective of this Standard is to define the requirements for a process that inspects software products to detect and eliminate defects as early as possible in the software life cycle. The process also provides for the collection and analysis of inspection data to improve the inspection process as well as the quality of the software.

  2. Software assurance standard

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This standard specifies the software assurance program for the provider of software. It also delineates the assurance activities for the provider and the assurance data that are to be furnished by the provider to the acquirer. In any software development effort, the provider is the entity or individual that actually designs, develops, and implements the software product, while the acquirer is the entity or individual who specifies the requirements and accepts the resulting products. This standard specifies at a high level an overall software assurance program for software developed for and by NASA. Assurance includes the disciplines of quality assurance, quality engineering, verification and validation, nonconformance reporting and corrective action, safety assurance, and security assurance. The application of these disciplines during a software development life cycle is called software assurance. Subsequent lower-level standards will specify the specific processes within these disciplines.

  3. Standard Annunciator Software overview

    SciTech Connect

    Anspach, D.A.; Fox, E.T.; Kissock, P.S.

    1990-01-01

    The Standard Annunciator Software is responsible for maintaining a current display of system status conditions. The software interfaces with other systems -- IACS, CCTV, UPS, and portable PC -- to determine their status and then displays this information at the operator's console. This manual describes the software organization, operation, and generation mechanisms for development and target environments. 6 figs.

  4. NASA Software Documentation Standard

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The NASA Software Documentation Standard (hereinafter referred to as "Standard") is designed to support the documentation of all software developed for NASA; its goal is to provide a framework and model for recording the essential information needed throughout the development life cycle and maintenance of a software system. The NASA Software Documentation Standard can be applied to the documentation of all NASA software. The Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. The basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.

  5. Standard Annunciator software overview

    SciTech Connect

    Anspach, D.A.; Fox, E.T.; Kissock, P.S.

    1992-10-01

    The Standard Annunciator Software is responsible for controlling the AN/GSS-41 and AN/GSS-44 Annunciator Systems. The software interfaces with other systems -- ACS, ECS, CCTV, UPS -- to determine current alarm, tamper, and hardware status. Current system status conditions are displayed at the operator's console and on display maps. This manual describes the organization and functionality of the software as well as the generation mechanisms for development and target environments.

  6. Standard Annunciator software overview

    SciTech Connect

    Anspach, D.A.; Fox, E.T.; Kissock, P.S.

    1992-10-01

    The Standard Annunciator Software is responsible for controlling the AN/GSS-41 and AN/GSS-44 Annunciator Systems. The software interfaces with other systems -- ACS, ECS, CCTV, UPS -- to determine current alarm, tamper, and hardware status. Current system status conditions are displayed at the operator's console and on display maps. This manual describes the organization and functionality of the software as well as the generation mechanisms for development and target environments.

  7. NASA Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda

    1997-01-01

    If software is a critical element in a safety critical system, it is imperative to implement a systematic approach to software safety as an integral part of the overall system safety program. The NASA-STD-8719.13A, "NASA Software Safety Standard", describes the activities necessary to ensure that safety is designed into software that is acquired or developed by NASA, and that safety is maintained throughout the software life cycle. A PDF version is available on the WWW from Lewis. A Guidebook that will assist in the implementation of the requirements in the Safety Standard is under development at the Lewis Research Center (LeRC). After completion, it will also be available on the WWW from Lewis.

  8. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2007-01-01

    NASA relies more and more on software to control, monitor, and verify its safety critical systems, facilities and operations. Since the 1960s there has hardly been a spacecraft launched that does not have a computer on board that will provide command and control services. There have been recent incidents where software has played a role in high-profile mission failures and hazardous incidents. For example, the Mars Orbiter, Mars Polar Lander, the DART (Demonstration of Autonomous Rendezvous Technology), and MER (Mars Exploration Rover) Spirit anomalies were all caused or contributed to by software. The Mission Control Centers for the Shuttle, ISS, and unmanned programs are highly dependent on software for data displays, analysis, and mission planning. Despite this growing dependence on software control and monitoring, there has been little to no consistent application of software safety practices and methodology to NASA's projects with safety critical software. Meanwhile, academia and private industry have been stepping forward with procedures and standards for safety critical systems and software, for example Dr. Nancy Leveson's book Safeware: System Safety and Computers. The NASA Software Safety Standard, originally published in 1997, was widely ignored due to its complexity and poor organization. It also focused on concepts rather than definite procedural requirements organized around a software project lifecycle. Led by NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard has recently undergone a significant update. This new standard provides the procedures and guidelines for evaluating a project for safety criticality and then lays out the minimum project lifecycle requirements to assure the software is created, operated, and maintained in the safest possible manner. This update of the standard clearly delineates the minimum set of software safety requirements for a project without detailing the implementation for those requirements.

  9. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2005-01-01

    NASA (National Aeronautics and Space Administration) relies more and more on software to control, monitor, and verify its safety critical systems, facilities and operations. Since the 1960s there has hardly been a spacecraft (manned or unmanned) launched that did not have a computer on board that provided vital command and control services. Despite this growing dependence on software control and monitoring, there has been no consistent application of software safety practices and methodology to NASA's projects with safety critical software. Led by the NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard (NASA-STD-8719.13B) has recently undergone a significant update in an attempt to provide that consistency. This paper will discuss the key features of the new NASA Software Safety Standard. It will start with a brief history of the use and development of software in safety critical applications at NASA. It will then give a brief overview of the NASA Software Working Group and the approach it took to revise the software engineering process across the Agency.

  10. Development of a viability standard curve for microencapsulated probiotic bacteria using confocal microscopy and image analysis software.

    PubMed

    Moore, Sarah; Kailasapathy, Kasipathy; Phillips, Michael; Jones, Mark R

    2015-07-01

    Microencapsulation is proposed to protect probiotic strains from food processing procedures and to maintain probiotic viability. Little research has described the in situ viability of microencapsulated probiotics. This study successfully developed a real-time viability standard curve for microencapsulated bacteria using confocal microscopy, fluorescent dyes and image analysis software. PMID:25887694
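
    A viability standard curve of this kind is essentially a calibration fit between an image-derived fluorescence measure and independently determined viability. As a rough illustration only (the calibration values and variable names below are invented, not taken from the paper), a linear fit in Python could look like:

      # Hypothetical sketch: fit a standard curve relating the live/dead
      # fluorescence ratio measured from confocal images to viability known
      # from plate counts, then predict viability for a new capsule batch.
      import numpy as np

      # Assumed example calibration points (not data from the study).
      viable_fraction = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
      fluorescence_ratio = np.array([0.08, 0.27, 0.49, 0.73, 0.92])

      # Least-squares line: viability = a * ratio + b
      a, b = np.polyfit(fluorescence_ratio, viable_fraction, 1)

      def predict_viability(ratio):
          """Estimate viable fraction from an image-derived fluorescence ratio."""
          return a * ratio + b

      print(predict_viability(0.6))   # about 0.61 for this toy calibration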

  11. Design and evaluation of a THz time domain imaging system using standard optical design software.

    PubMed

    Brückner, Claudia; Pradarutti, Boris; Müller, Ralf; Riehemann, Stefan; Notni, Gunther; Tünnermann, Andreas

    2008-09-20

    A terahertz (THz) time domain imaging system is analyzed and optimized with standard optical design software (ZEMAX). Special requirements for the illumination optics and imaging optics are presented. In the optimized system, off-axis parabolic mirrors and lenses are combined. The system has a numerical aperture of 0.4 and is diffraction limited for field points up to 4 mm and wavelengths down to 750 µm. ZEONEX is used as the lens material. Higher aspherical coefficients are used for correction of spherical aberration and reduction of lens thickness. The lenses were manufactured by ultraprecision machining. For optimization of the system, ray tracing and wave-optical methods were combined. We show how the ZEMAX Gaussian beam analysis tool can be used to evaluate illumination optics. The resolution of the THz system was tested with a wire and a slit target, line gratings of different period, and a Siemens star. The behavior of the temporal line spread function can be modeled with the polychromatic coherent line spread function feature in ZEMAX. The spectral and temporal resolutions of the line gratings are compared with the respective modulation transfer function of ZEMAX. For maximum resolution, the system has to be diffraction limited down to the smallest wavelength of the spectrum of the THz pulse. Then, the resolution on time domain analysis of the pulse maximum can be estimated with the spectral resolution of the center of gravity wavelength. The system resolution near the optical axis on time domain analysis of the pulse maximum is 1 line pair/mm with an intensity contrast of 0.22. The Siemens star is used for estimation of the resolution of the whole system. An eight-channel electro-optic sampling system was used for detection. The resolution on time domain analysis of the pulse maximum of all eight channels could be determined with the Siemens star to be 0.7 line pairs/mm. PMID:18806862
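
    The quoted figures are consistent with simple diffraction-limit estimates. As a sanity check (standard textbook formulas, not the paper's ZEMAX analysis), the incoherent cutoff frequency 2*NA/wavelength for NA = 0.4 at 750 µm comes out close to the reported 1 line pair/mm:

      # Back-of-the-envelope check: for a diffraction-limited system with
      # numerical aperture NA at wavelength lam, the incoherent cutoff
      # frequency is 2*NA/lam and the Rayleigh spot radius is 0.61*lam/NA.
      NA = 0.4            # numerical aperture quoted in the abstract
      lam_mm = 0.75       # 750 micrometres, the shortest wavelength considered

      cutoff_lp_per_mm = 2.0 * NA / lam_mm        # ~1.07 line pairs / mm
      rayleigh_radius_mm = 0.61 * lam_mm / NA     # ~1.14 mm

      print(f"cutoff = {cutoff_lp_per_mm:.2f} lp/mm, "
            f"Rayleigh radius = {rayleigh_radius_mm:.2f} mm")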

  12. libdrdc: software standards library

    NASA Astrophysics Data System (ADS)

    Erickson, David; Peng, Tie

    2008-04-01

    This paper presents the libdrdc software standards library including internal nomenclature, definitions, units of measure, coordinate reference frames, and representations for use in autonomous systems research. This library is a configurable, portable C-function wrapped C++ / Object Oriented C library developed to be independent of software middleware, system architecture, processor, or operating system. It is designed to use the Automatically Tuned Linear Algebra Software (ATLAS) and Basic Linear Algebra Subprograms (BLAS) and port to firmware and software. The library goal is to unify data collection and representation for various microcontrollers and Central Processing Unit (CPU) cores and to provide a common Application Binary Interface (ABI) for research projects at all scales. The library supports multi-platform development and currently works on Windows, Unix, GNU/Linux, and Real-Time Executive for Multiprocessor Systems (RTEMS). This library is made available under the LGPL version 2.1 license.

  13. NASA software documentation standard software engineering program

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The NASA Software Documentation Standard (hereinafter referred to as Standard) can be applied to the documentation of all NASA software. This Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. This basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.

  14. Cathodoluminescence Spectrum Imaging Software

    Energy Science and Technology Software Center (ESTSC)

    2011-04-07

    The software developed for spectrum imaging is applied to the analysis of the spectrum series generated by our cathodoluminescence instrumentation. This software provides advanced processing capabilities such as: reconstruction of photon intensity (resolved in energy) and photon energy maps, extraction of the spectrum from selected areas, quantitative imaging mode, pixel-to-pixel correlation spectrum line scans, ASCII output, filling routines, drift correction, etc.
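
    The listed operations are reductions over a three-dimensional spectrum-image data cube. A minimal sketch of the idea follows (the array layout and axis names are assumptions, not taken from the ESTSC package):

      # Toy spectrum-imaging operations on a cathodoluminescence data cube
      # shaped (rows, cols, nE): intensity map, centroid-energy map, ROI
      # spectrum, and a pixel-by-pixel spectrum line scan.
      import numpy as np

      rows, cols, nE = 64, 64, 256
      energy_eV = np.linspace(1.5, 3.5, nE)            # photon energy axis
      cube = np.random.rand(rows, cols, nE)            # stand-in for measured data

      intensity_map = cube.sum(axis=2)                 # total photon intensity map
      energy_map = (cube * energy_eV).sum(axis=2) / intensity_map   # centroid energy

      # Spectrum extracted from a selected rectangular area (ROI).
      roi_spectrum = cube[10:20, 30:40, :].sum(axis=(0, 1))

      # Spectrum line scan along one row, one spectrum per pixel.
      line_scan = cube[32, :, :]                       # shape (cols, nE)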

  15. Software engineering standards and practices

    NASA Technical Reports Server (NTRS)

    Durachka, R. W.

    1981-01-01

    Guidelines are presented for the preparation of a software development plan. The various phases of a software development project are discussed throughout its life cycle including a general description of the software engineering standards and practices to be followed during each phase.

  16. Biological Imaging Software Tools

    PubMed Central

    Eliceiri, Kevin W.; Berthold, Michael R.; Goldberg, Ilya G.; Ibáñez, Luis; Manjunath, B.S.; Martone, Maryann E.; Murphy, Robert F.; Peng, Hanchuan; Plant, Anne L.; Roysam, Badrinath; Stuurman, Nico; Swedlow, Jason R.; Tomancak, Pavel; Carpenter, Anne E.

    2013-01-01

    Few technologies are more widespread in modern biological laboratories than imaging. Recent advances in optical technologies and instrumentation are providing hitherto unimagined capabilities. Almost all these advances have required the development of software to enable the acquisition, management, analysis, and visualization of the imaging data. We review each computational step that biologists encounter when dealing with digital images, the challenges in that domain, and the overall status of available software for bioimage informatics, focusing on open source options. PMID:22743775

  17. Future of Software Engineering Standards

    NASA Technical Reports Server (NTRS)

    Poon, Peter T.

    1997-01-01

    In the new millennium, software engineering standards are expected to continue to influence the process of producing software-intensive systems which are cost-effective and of high quality. These systems may range from ground and flight systems used for planetary exploration to educational support systems used in schools, as well as consumer-oriented systems.

  18. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  19. Software Development Standard Processes (SDSP)

    NASA Technical Reports Server (NTRS)

    Lavin, Milton L.; Wang, James J.; Morillo, Ronald; Mayer, John T.; Jamshidian, Barzia; Shimizu, Kenneth J.; Wilkinson, Belinda M.; Hihn, Jairus M.; Borgen, Rosana B.; Meyer, Kenneth N.; Crean, Kathleen A.; Rinker, George C.; Smith, Thomas P.; Lum, Karen T.; Hanna, Robert A.; Erickson, Daniel E.; Gamble, Edward B., Jr.; Morgan, Scott C.; Kelsay, Michael G.; Newport, Brian J.; Lewicki, Scott A.; Stipanuk, Jeane G.; Cooper, Tonja M.; Meshkat, Leila

    2011-01-01

    A JPL-created set of standard processes is to be used throughout the lifecycle of software development. These SDSPs cover a range of activities, from management and engineering activities, to assurance and support activities. These processes must be applied to software tasks per a prescribed set of procedures. JPL's Software Quality Improvement Project is currently working at the behest of the JPL Software Process Owner to ensure that all applicable software tasks follow these procedures. The SDSPs are captured as a set of 22 standards in JPL's software process domain. They were developed in-house at JPL by a number of Subject Matter Experts (SMEs) residing primarily within the Engineering and Science Directorate, but also from the Business Operations Directorate and Safety and Mission Success Directorate. These practices include not only currently performed best practices, but also JPL-desired future practices in key thrust areas like software architecting and software reuse analysis. Additionally, these SDSPs conform to many standards and requirements to which JPL projects are beholden.

  20. NASA space station software standards issues

    NASA Technical Reports Server (NTRS)

    Tice, G. D., Jr.

    1985-01-01

    The selection and application of software standards present the NASA Space Station Program with the opportunity to serve as a pacesetter for United States software in the area of software standards. The strengths and weaknesses of each of the NASA-defined software standards issues are summarized and discussed. Several significant standards issues are offered for NASA consideration. A challenge is presented for the NASA Space Station Program to serve as a pacesetter for the U.S. Software Industry through: (1) Management commitment to software standards; (2) Overall program participation in software standards; and (3) Employment of the best available technology to support software standards.

  1. Computed Tomography software and standards

    SciTech Connect

    Azevedo, S.G.; Martz, H.E.; Skeate, M.F.; Schneberk, D.J.; Roberson, G.P.

    1990-02-20

    This document establishes the software design, nomenclature, and conventions for industrial Computed Tomography (CT) used in the Nondestructive Evaluation Section at Lawrence Livermore National Laboratory. It is mainly a users guide to the technical use of the CT computer codes, but also presents a proposed standard for describing CT experiments and reconstructions. Each part of this document specifies different aspects of the CT software organization. A set of tables at the end describes the CT parameters of interest in our project. 4 refs., 6 figs., 1 tab.

  2. An overview of software safety standards

    SciTech Connect

    Lawrence, J.D.

    1995-10-01

    The writing of standards for software safety is an increasingly important activity. This essay briefly describes the two primary standards-writing organizations, IEEE and IEC, and provides a discussion of some of the more interesting software safety standards.

  3. Confined Space Imager (CSI) Software

    SciTech Connect

    Karelitz, David

    2013-07-03

    The software provides real-time image capture, enhancement, and display, and sensor control for the Confined Space Imager (CSI) sensor system. The software captures images over a Cameralink connection and provides the following image enhancements: camera pixel-to-pixel non-uniformity correction, optical distortion correction, image registration and averaging, and illumination non-uniformity correction. The software communicates with the custom CSI hardware over USB to control sensor parameters and is capable of saving enhanced sensor images to an external USB drive. The software provides sensor control, image capture, enhancement, and display for the CSI sensor system. It is designed to work with the custom hardware.
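
    The enhancement steps named in the record are standard operations. A schematic sketch of how they might be composed (assumed formulations; the actual CSI code is not public here):

      # Rough sketch of typical enhancement steps: two-point non-uniformity
      # correction, averaging of registered frames, illumination correction.
      import numpy as np

      def nonuniformity_correct(raw, dark, flat):
          # Subtract per-pixel offset (dark frame) and divide by per-pixel
          # gain (flat minus dark), avoiding division by zero.
          gain = (flat - dark).astype(float)
          gain[gain == 0] = 1.0
          return (raw - dark) / gain

      def average_registered(frames):
          # Average a stack of already-registered frames to suppress noise.
          return np.mean(np.stack(frames, axis=0), axis=0)

      def illumination_correct(img, illum):
          # Divide out a smooth illumination field (e.g. a uniform-target image).
          return img / np.maximum(illum, 1e-6)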

  4. IMAGE Software Suite

    NASA Technical Reports Server (NTRS)

    Gallagher, Dennis L.; Rose, M. Franklin (Technical Monitor)

    2000-01-01

    The IMAGE Mission is generating a truly unique set of magnetospheric measurements through a first-of-its-kind complement of remote, global observations. These data are being distributed in the Universal Data Format (UDF), which consists of data, calibration, and documentation. This is an open dataset, available to all by request to the National Space Science Data Center (NSSDC) at NASA Goddard Space Flight Center. Browse data, which consists of summary observations, is also available through the NSSDC in the Common Data Format (CDF), along with graphic representations of the browse data. Access to the browse data can be achieved through the NSSDC CDAWeb services or by use of NSSDC-provided software tools. This presentation documents the software tools, being provided by the IMAGE team, for use in viewing and analyzing the UDF telemetry data. Like the IMAGE data, these tools are openly available. What these tools can do, how they can be obtained, and how they are expected to evolve will be discussed.

  5. Software thermal imager simulator

    NASA Astrophysics Data System (ADS)

    Le Noc, Loic; Pancrati, Ovidiu; Doucet, Michel; Dufour, Denis; Debaque, Benoit; Turbide, Simon; Berthiaume, Francois; Saint-Laurent, Louis; Marchese, Linda; Bolduc, Martin; Bergeron, Alain

    2014-10-01

    A software application, SIST, has been developed for the simulation of the video at the output of a thermal imager. The approach offers a more suitable representation than current identification (ID) range predictors do: the end user can evaluate the adequacy of a virtual camera as if he were using it in real operating conditions. In particular, the ambiguity in the interpretation of ID range is cancelled. The application also allows for a cost-efficient determination of the optimal design of an imager and of its subsystems without over- or under-specification: the performances are known early in the development cycle, for targets, scene and environmental conditions of interest. The simulated image is also a powerful method for testing processing algorithms. Finally, the display, which can be a severe system limitation, is also fully considered in the system by the use of real hardware components. The application consists of MATLAB routines that simulate the effects of the subsystems: atmosphere, optical lens, detector, and image processing algorithms. Calls to MODTRAN® for the atmosphere modeling and to Zemax for the optical modeling have been implemented. The realism of the simulation depends on the adequacy of the input scene for the application and on the accuracy of the subsystem parameters. For high accuracy results, measured imager characteristics such as noise can be used with SIST instead of less accurate models. The ID ranges of potential imagers were assessed for various targets, backgrounds and atmospheric conditions. The optimal specifications for an optical design were determined by varying the Seidel aberration coefficients to find the worst MTF that still respects the desired ID range.
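
    At its core, such a simulator degrades an ideal scene through an optics blur, detector sampling, and a noise model. The fragment below is a heavily simplified stand-in (Gaussian blur and additive Gaussian noise are assumptions; SIST itself couples MODTRAN and Zemax models and measured camera data):

      # Minimal imager-simulation chain: optics blur, detector sampling, noise.
      import numpy as np
      from scipy.ndimage import gaussian_filter, zoom

      def simulate_imager(scene, optics_sigma_px=1.5, downsample=2, netd=0.05):
          blurred = gaussian_filter(scene, optics_sigma_px)          # optics blur (MTF)
          sampled = zoom(blurred, 1.0 / downsample, order=1)         # detector sampling
          noisy = sampled + np.random.normal(0.0, netd, sampled.shape)  # temporal noise
          return noisy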

  6. Confined Space Imager (CSI) Software

    Energy Science and Technology Software Center (ESTSC)

    2013-07-03

    The software provides real-time image capture, enhancement, and display, and sensor control for the Confined Space Imager (CSI) sensor system. The software captures images over a Cameralink connection and provides the following image enhancements: camera pixel-to-pixel non-uniformity correction, optical distortion correction, image registration and averaging, and illumination non-uniformity correction. The software communicates with the custom CSI hardware over USB to control sensor parameters and is capable of saving enhanced sensor images to an external USB drive. The software provides sensor control, image capture, enhancement, and display for the CSI sensor system. It is designed to work with the custom hardware.

  7. Standardized development of computer software. Part 2: Standards

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1978-01-01

    This monograph contains standards for software development and engineering. The book sets forth rules for design, specification, coding, testing, documentation, and quality assurance audits of software; it also contains detailed outlines for the documentation to be produced.

  8. Diversification and Challenges of Software Engineering Standards

    NASA Technical Reports Server (NTRS)

    Poon, Peter T.

    1994-01-01

    The author poses certain questions in this paper: 'In the future, should there be just one software engineering standards set? If so, how can we work towards that goal? What are the challenges of internationalizing standards?' Based on the author's personal view, the statement of his position is as follows: 'There should NOT be just one set of software engineering standards in the future. At the same time, there should NOT be the proliferation of standards, and the number of sets of standards should be kept to a minimum. It is important to understand the diversification of the areas which are spanned by the software engineering standards.' The author goes on to describe the diversification of processes, the diversification in the national and international character of standards organizations, the diversification of the professional organizations producing standards, the diversification of the types of businesses and industries, and the challenges of internationalizing standards.

  9. Standard classification of software documentation

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    General conceptual requirements are presented for standard levels of documentation and for the application of these requirements to intended usages. These standards encourage the policy of producing only those forms of documentation that are needed and adequate for the purpose. Documentation standards are defined with respect to detail and format quality. Classes A through D range, in order, from the most definitive down to the least definitive, and categories 1 through 4 range, in order, from high-quality typeset down to handwritten material. Criteria for each of the classes and categories, as well as suggested selection guidelines for each, are given.

  10. The IEEE Software Engineering Standards Process

    PubMed Central

    Buckley, Fletcher J.

    1984-01-01

    Software Engineering has emerged as a field in recent years, and those involved increasingly recognize the need for standards. As a result, members of the Institute of Electrical and Electronics Engineers (IEEE) formed a subcommittee to develop these standards. This paper discusses the ongoing standards development, and associated efforts.

  11. Image processing software for imaging spectrometry

    NASA Technical Reports Server (NTRS)

    Mazer, Alan S.; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    The paper presents a software system, Spectral Analysis Manager (SPAM), which has been specifically designed and implemented to provide the exploratory analysis tools necessary for imaging spectrometer data, using only modest computational resources. The basic design objectives are described as well as the major algorithms designed or adapted for high-dimensional images. Included in a discussion of system implementation are interactive data display, statistical analysis, image segmentation and spectral matching, and mixture analysis.
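
    One widely used spectral-matching measure for imaging-spectrometer cubes is the spectral angle between each pixel spectrum and a reference spectrum. The sketch below illustrates that operation generically; it is not claimed to be SPAM's own matching algorithm:

      # Spectral angle between every pixel spectrum and a reference spectrum.
      import numpy as np

      def spectral_angle(cube, reference):
          """cube: (rows, cols, bands); reference: (bands,). Angle in radians."""
          dot = np.tensordot(cube, reference, axes=([2], [0]))
          norms = np.linalg.norm(cube, axis=2) * np.linalg.norm(reference)
          cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
          return np.arccos(cos)   # small angles indicate a close spectral match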

  12. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.
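
    In outline, automated analysis of an image sequence amounts to applying the same region-of-interest measurement to every frame and logging the results. A toy sketch of that loop (thresholding is chosen here purely as an example operation; Spotlight offers many more):

      # Per-frame ROI measurement over an image sequence, results collected
      # as rows that could be written to a text file for later graphing.
      import numpy as np

      def analyze_sequence(frames, roi, threshold):
          """frames: iterable of 2-D arrays; roi: (row_slice, col_slice)."""
          results = []
          for i, frame in enumerate(frames):
              region = frame[roi]
              mask = region > threshold
              mean_val = region[mask].mean() if mask.any() else 0.0
              results.append((i, int(mask.sum()), mean_val))
          return results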

  13. Non-standard analysis and embedded software

    NASA Technical Reports Server (NTRS)

    Platek, Richard

    1995-01-01

    One model for computing in the future is ubiquitous, embedded computational devices analogous to embedded electrical motors. Many of these computers will control physical objects and processes. Such hidden computerized environments introduce new safety and correctness concerns whose treatment goes beyond present Formal Methods. In particular, one has to begin to speak about Real Space software in analogy with Real Time software. By this we mean computerized systems which have to meet requirements expressed in the real geometry of space. How to translate such requirements into ordinary software specifications and how to carry out proofs is a major challenge. In this talk we propose a research program based on the use of non-standard analysis. Much detail remains to be carried out. The purpose of the talk is to inform the Formal Methods community that Non-Standard Analysis provides a possible avenue of attack which we believe will be fruitful.

  14. Objective facial photograph analysis using imaging software.

    PubMed

    Pham, Annette M; Tollefson, Travis T

    2010-05-01

    Facial analysis is an integral part of the surgical planning process. Clinical photography has long been an invaluable tool in the surgeon's practice not only for accurate facial analysis but also for enhancing communication between the patient and surgeon, for evaluating postoperative results, for medicolegal documentation, and for educational and teaching opportunities. From 35-mm slide film to the digital technology of today, clinical photography has benefited greatly from technological advances. With the development of computer imaging software, objective facial analysis becomes easier to perform and less time consuming. Thus, while the original purpose of facial analysis remains the same, the process becomes much more efficient and allows for some objectivity. Although clinical judgment and artistry of technique is never compromised, the ability to perform objective facial photograph analysis using imaging software may become the standard in facial plastic surgery practices in the future. PMID:20511080

  15. Automatic AVHRR image navigation software

    NASA Technical Reports Server (NTRS)

    Baldwin, Dan; Emery, William

    1992-01-01

    This is the final report describing the work done on the project entitled Automatic AVHRR Image Navigation Software funded through NASA-Washington, award NAGW-3224, Account 153-7529. At the onset of this project, we had developed image navigation software capable of producing geo-registered images from AVHRR data. The registrations were highly accurate but required a priori knowledge of the spacecraft's axes alignment deviations, commonly known as attitude. The three angles needed to describe the attitude are called roll, pitch, and yaw, and are the components of the deviations in the along scan, along track and about center directions. The inclusion of the attitude corrections in the navigation software results in highly accurate georegistrations, however, the computation of the angles is very tedious and involves human interpretation for several steps. The technique also requires easily identifiable ground features which may not be available due to cloud cover or for ocean data. The current project was motivated by the need for a navigation system which was automatic and did not require human intervention or ground control points. The first step in creating such a system must be the ability to parameterize the spacecraft's attitude. The immediate goal of this project was to study the attitude fluctuations and determine if they displayed any systematic behavior which could be modeled or parameterized. We chose a period in 1991-1992 to study the attitude of the NOAA 11 spacecraft using data from the Tiros receiving station at the Colorado Center for Astrodynamic Research (CCAR) at the University of Colorado.
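
    Conceptually, the attitude angles enter the navigation as a small rotation applied to the nominal instrument line of sight before geolocation. A sketch of that step follows (the rotation order and axis conventions are assumptions, not NOAA's definitions):

      # Build a rotation matrix from roll, pitch, and yaw (radians) and apply
      # it to the nominal line-of-sight vector before geolocation.
      import numpy as np

      def attitude_matrix(roll, pitch, yaw):
          cr, sr = np.cos(roll), np.sin(roll)
          cp, sp = np.cos(pitch), np.sin(pitch)
          cy, sy = np.cos(yaw), np.sin(yaw)
          Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])    # roll
          Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])    # pitch
          Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])    # yaw
          return Rz @ Ry @ Rx

      los = np.array([0.0, 0.0, 1.0])                 # nominal nadir-pointing vector
      corrected = attitude_matrix(1e-3, -2e-3, 5e-4) @ los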

  16. Automated computer software development standards enforcement

    SciTech Connect

    Yule, H.P.; Formento, J.W.

    1991-01-01

    The Uniform Development Environment (UDE) is being investigated as a means of enforcing software engineering standards. For the programmer, it provides an environment containing the tools and utilities necessary for orderly and controlled development and maintenance of code according to requirements. In addition, it provides DoD management and developer management the tools needed for all phases of software life cycle management and control, from project planning and management, to code development, configuration management, version control, and change control. This paper reports the status of UDE development and field testing. 5 refs.

  17. Salvo: Seismic imaging software for complex geologies

    SciTech Connect

    OBER,CURTIS C.; GJERTSEN,ROB; WOMBLE,DAVID E.

    2000-03-01

    This report describes Salvo, a three-dimensional seismic-imaging software for complex geologies. Regions of complex geology, such as overthrusts and salt structures, can cause difficulties for many seismic-imaging algorithms used in production today. The paraxial wave equation and finite-difference methods used within Salvo can produce high-quality seismic images in these difficult regions. However this approach comes with higher computational costs which have been too expensive for standard production. Salvo uses improved numerical algorithms and methods, along with parallel computing, to produce high-quality images and to reduce the computational and the data input/output (I/O) costs. This report documents the numerical algorithms implemented for the paraxial wave equation, including absorbing boundary conditions, phase corrections, imaging conditions, phase encoding, and reduced-source migration. This report also describes I/O algorithms for large seismic data sets and images and parallelization methods used to obtain high efficiencies for both the computations and the I/O of seismic data sets. Finally, this report describes the required steps to compile, port and optimize the Salvo software, and describes the validation data sets used to help verify a working copy of Salvo.

  18. Easy and Accessible Imaging Software

    NASA Technical Reports Server (NTRS)

    2003-01-01

    DATASTAR, Inc., of Picayune, Mississippi, has taken NASA's award-winning Earth Resources Laboratory Applications Software (ELAS) program and evolved it into a user-friendly desktop application and Internet service to perform processing, analysis, and manipulation of remotely sensed imagery data. NASA's Stennis Space Center developed ELAS in the early 1980s to process satellite and airborne sensor imagery data of the Earth's surface into readable and accessible information. Since then, ELAS information has been applied worldwide to determine soil content, rainfall levels, and numerous other variances of topographical information. However, end-users customarily had to depend on scientific or computer experts to provide the results, because the image processing system was intricate and labor intensive.

  19. Software for Automated Image-to-Image Co-registration

    NASA Technical Reports Server (NTRS)

    Benkelman, Cody A.; Hughes, Heidi

    2007-01-01

    The project objectives are: a) Develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) Create a reusable software development kit (SDK) to enable incorporation of these tools into other software; d) Provide automated testing for quantitative analysis; and e) Develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.

  20. Sandia software guidelines. Volume 3. Standards, practices, and conventions

    SciTech Connect

    Not Available

    1986-07-01

    This volume is one in a series of Sandia Software Guidelines intended for use in producing quality software within Sandia National Laboratories. In consonance with the IEEE Standard for Software Quality Assurance Plans, this volume identifies software standards, conventions, and practices. These guidelines are the result of a collective effort within Sandia National Laboratories to define recommended deliverables and to document standards, practices, and conventions which will help ensure quality software. 66 refs., 5 figs., 6 tabs.

  1. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  2. Software Operates On Bit-Map Images

    NASA Technical Reports Server (NTRS)

    Choi, Diana

    1992-01-01

    PIXTOOLS is software for the Silicon Graphics IRIS consisting of thirteen programs plus a library for operating on bit-map images. Enables user to create, edit, and save high-resolution images in the forms in which they are displayed on video screens, resize them, and capture them. Eleven programs print information and read and write files. Two offer graphical interfaces. Menus enable manipulation of images and background color and saving of an image screen to file. Written in C.

  3. Infrared Imaging Data Reduction Software and Techniques

    NASA Astrophysics Data System (ADS)

    Sabbey, C. N.; McMahon, R. G.; Lewis, J. R.; Irwin, M. J.

    Developed to satisfy certain design requirements not met in existing packages (e.g., full weight map handling) and to optimize the software for large data sets (non-interactive tasks that are CPU and disk efficient), the InfraRed Data Reduction software package is a small ANSI C library of fast image processing routines for automated pipeline reduction of infrared (dithered) observations. The software includes stand-alone C programs for tasks such as running sky frame subtraction with object masking, image registration and co-addition with weight maps, dither offset measurement using cross-correlation, and object mask dilation. Although currently used for near-IR mosaic images, the modular software is concise and readily adaptable for reuse in other work. IRDR, available via anonymous ftp at ftp.ast.cam.ac.uk in pub/sabbey
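
    Running sky subtraction with object masking, one of the tasks listed above, can be summarized as: estimate the sky for each frame from a masked median of its neighbouring dithered frames, then subtract it. A simplified Python sketch of the idea (the real IRDR code is ANSI C and also handles weight maps):

      # Running sky subtraction: for each frame, median-combine neighbouring
      # frames with object pixels masked out, then subtract the sky estimate.
      import numpy as np

      def running_sky_subtract(frames, masks, half_window=2):
          """frames: list of float 2-D arrays; masks: True where objects are."""
          out = []
          for i, frame in enumerate(frames):
              lo, hi = max(0, i - half_window), min(len(frames), i + half_window + 1)
              neighbours = [np.where(masks[j], np.nan, frames[j])
                            for j in range(lo, hi) if j != i]
              sky = np.nanmedian(np.stack(neighbours), axis=0)   # masked pixels ignored
              out.append(frame - sky)
          return out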

  4. Analyzing huge pathology images with open source software

    PubMed Central

    2013-01-01

    Background: Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights into systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer’s memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail to handle them, and the others require expensive hardware while still being prohibitively slow. Results: We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Conclusions: Our open source software enables dealing with huge images with standard software on average computers. They are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster. Virtual slides: The virtual slide(s) for this article can be found here: http
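
    The mosaic step described above is conceptually simple: cover the huge image with small tiles, optionally overlapping. The sketch below shows only the tiling logic on an in-memory array (tile size and overlap are placeholders; the actual NDPITools work directly on NDPI/TIFF files that do not fit in RAM):

      # Yield (row, col, tile_array) pieces covering a 2-D or RGB image array.
      def make_mosaic(image, tile=1024, overlap=0):
          step = max(1, tile - overlap)          # guard against overlap >= tile
          for r in range(0, image.shape[0], step):
              for c in range(0, image.shape[1], step):
                  yield r, c, image[r:r + tile, c:c + tile]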

  5. Standard practices for the implementation of computer software

    NASA Technical Reports Server (NTRS)

    Irvine, A. P. (Editor)

    1978-01-01

    A standard approach to the development of computer programs is provided that covers the life cycle of software development from the planning and requirements phase through the software acceptance testing phase. All documents necessary to provide the required visibility into the software life cycle process are discussed in detail.

  6. Software for Simulation of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Richtsmeier, Steven C.; Singer-Berk, Alexander; Bernstein, Lawrence S.

    2002-01-01

    A package of software generates simulated hyperspectral images for use in validating algorithms that generate estimates of Earth-surface spectral reflectance from hyperspectral images acquired by airborne and spaceborne instruments. This software is based on a direct simulation Monte Carlo approach for modeling three-dimensional atmospheric radiative transport as well as surfaces characterized by spatially inhomogeneous bidirectional reflectance distribution functions. In this approach, 'ground truth' is accurately known through input specification of surface and atmospheric properties, and it is practical to consider wide variations of these properties. The software can treat both land and ocean surfaces and the effects of finite clouds with surface shadowing. The spectral/spatial data cubes computed by use of this software can serve both as a substitute for and a supplement to field validation data.

  7. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  8. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  9. A study of software standards used in the avionics industry

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1994-01-01

    Within the past decade, software has become an increasingly common element in computing systems. In particular, the role of software used in the aerospace industry, especially in life- or safety-critical applications, is rapidly expanding. This intensifies the need to use effective techniques for achieving and verifying the reliability of avionics software. Although certain software development processes and techniques are mandated by government regulating agencies, no one methodology has been shown to consistently produce reliable software. The knowledge base for designing reliable software simply has not reached the maturity of its hardware counterpart. In an effort to increase our understanding of software, the Langley Research Center conducted a series of experiments over 15 years with the goal of understanding why and how software fails. As part of this program, the effectiveness of current industry standards for the development of avionics is being investigated. This study involves the generation of a controlled environment to conduct scientific experiments on software processes.

  10. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
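
    The "add information from multiple frames" step is essentially registration followed by co-addition. A rough sketch of that idea using phase correlation (a generic technique used here for illustration, not the actual VISAR algorithm):

      # Register frames to the first one via phase correlation, then average
      # the aligned stack to pull detail out of noisy video.
      import numpy as np
      from scipy.ndimage import shift as nd_shift

      def phase_offset(ref, img):
          # Integer-pixel (dy, dx) shift that aligns img to ref.
          cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
          corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12))
          dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
          if dy > ref.shape[0] // 2:
              dy -= ref.shape[0]
          if dx > ref.shape[1] // 2:
              dx -= ref.shape[1]
          return dy, dx

      def coadd(frames):
          ref = frames[0].astype(float)
          aligned = [ref]
          for f in frames[1:]:
              dy, dx = phase_offset(ref, f)
              aligned.append(nd_shift(f.astype(float), (dy, dx), order=1))
          return np.mean(aligned, axis=0)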

  11. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies to catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. It would be especially useful for tornadoes, tracking whirling objects and helping to determine the tornado's wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  12. Contracting for Computer Software in Standardized Computer Languages

    PubMed Central

    Brannigan, Vincent M.; Dayhoff, Ruth E.

    1982-01-01

    The interaction between standardized computer languages and contracts for programs which use these languages is important to the buyer or seller of software. The rationale for standardization, the problems in standardizing computer languages, and the difficulties of determining whether the product conforms to the standard are issues which must be understood. The contract law processes of delivery, acceptance testing, acceptance, rejection, and revocation of acceptance are applicable to the contracting process for standard language software. Appropriate contract language is suggested for requiring strict compliance with a standard, and an overview of remedies is given for failure to comply.

  13. Product review: lucis image processing software.

    PubMed

    Johnson, J E

    1999-04-01

    Lucis is a software program that allows the manipulation of images through the process of selective contrast pattern emphasis. Using an image-processing algorithm called Differential Hysteresis Processing (DHP), Lucis extracts and highlights patterns based on variations in image intensity (luminance). The result is that details can be seen that would otherwise be hidden in deep shadow or excessive brightness. The software is contained on a single floppy disk, is easy to install on a PC, simple to use, and runs on Windows 95, Windows 98, and Windows NT operating systems. The cost is $8,500 for a license, but is estimated to save a great deal of money in photographic materials, time, and labor that would have otherwise been spent in the darkroom. Superb images are easily obtained from unstained (no lead or uranium) sections, and stored image files sent to laser printers are of publication quality. The software can be used not only for all types of microscopy, including color fluorescence light microscopy, biological and materials science electron microscopy (TEM and SEM), but will be beneficial in medicine, such as X-ray films (pending approval by the FDA), and in the arts. PMID:10206154

  14. Standardization from below: Science and Technology Standards and Educational Software

    ERIC Educational Resources Information Center

    Fleischmann, Kenneth R.

    2007-01-01

    Education in the United States is becoming increasingly standardized, with the standards being initiated at the national level and then trickling down to the state level and finally the local level. Yet, this top-down approach to educational standards carries with it significant limitations, such as loss of local autonomy and restrictions on the…

  15. SUPRIM: easily modified image processing software.

    PubMed

    Schroeter, J P; Bretaudiere, J P

    1996-01-01

    A flexible, modular software package intended for the processing of electron microscopy images is presented. The system consists of a set of image processing tools or filters, written in the C programming language, and a command line style user interface based on the UNIX shell. The pipe and filter structure of UNIX and the availability of command files in the form of shell scripts eases the construction of complex image processing procedures from the simpler tools. Implementation of a new image processing algorithm in SUPRIM may often be performed by construction of a new shell script, using already existing tools. Currently, the package has been used for two- and three-dimensional image processing and reconstruction of macromolecules and other structures of biological interest. PMID:8742734

  16. Open environment for image processing and software development

    NASA Astrophysics Data System (ADS)

    Rasure, John R.; Young, Mark

    1992-04-01

    The main goal of the Khoros software project is to create and provide an integrated software development environment for information processing and data visualization. The Khoros software system is now being used as a foundation to improve productivity and promote software reuse in a wide variety of application domains. A powerful feature of the Khoros system is the high-level, abstract visual language that can be employed to significantly boost the productivity of the researcher. Central to the Khoros system is the need for a consistent yet flexible user interface development system that provides cohesiveness to the vast number of programs that make up the Khoros system. Automated tools assist in maintenance as well as development of programs. The software structure that embodies this system provides for extensibility and portability, and allows for easy tailoring to target specific application domains and processing environments. First, an overview of the Khoros software environment is given. Then this paper presents the abstract applications programmer interface (API), the data services that are provided in Khoros to support it, and the Khoros visualization and image file format. The authors contend that Khoros is an excellent environment for the exploration and implementation of imaging standards.

  17. A general check standard measurement and database software program

    SciTech Connect

    Duda, L.E.

    1998-04-01

    One way to verify that a measurement system remains under control and is functioning as expected is to use check standards. To aid in the measurement assurance process using check standards, a software program was developed that allows the user to enter measurements for a check standard and compare, by control charts plotted on the computer monitor, the new measurements with a historical database of measurements of the same device. The program is especially suited for check standards which are measured as a function of another parameter such as frequency, voltage, temperature, etc. This paper describes the software function and discusses its capabilities and applications.
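
    The control-chart comparison such a program performs can be reduced to a few lines. A sketch assuming conventional 3-sigma limits (the paper does not spell out its exact charting rules):

      # Compute control limits from historical check-standard measurements and
      # flag whether a new measurement falls inside them.
      import numpy as np

      def control_limits(history):
          mu, sigma = np.mean(history), np.std(history, ddof=1)
          return mu, mu - 3 * sigma, mu + 3 * sigma

      def in_control(new_value, history):
          mu, lcl, ucl = control_limits(history)
          return lcl <= new_value <= ucl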

  18. Imaging Sensor Flight and Test Equipment Software

    NASA Technical Reports Server (NTRS)

    Freestone, Kathleen; Simeone, Louis; Robertson, Byran; Frankford, Maytha; Trice, David; Wallace, Kevin; Wilkerson, DeLisa

    2007-01-01

    The Lightning Imaging Sensor (LIS) is one of the components onboard the Tropical Rainfall Measuring Mission (TRMM) satellite, and was designed to detect and locate lightning over the tropics. The LIS flight code was developed to run on a single onboard digital signal processor, and has operated the LIS instrument since 1997 when the TRMM satellite was launched. The software provides controller functions to the LIS Real-Time Event Processor (RTEP) and onboard heaters, collects the lightning event data from the RTEP, compresses and formats the data for downlink to the satellite, collects housekeeping data and formats the data for downlink to the satellite, provides command processing and interface to the spacecraft communications and data bus, and provides watchdog functions for error detection. The Special Test Equipment (STE) software was designed to operate specific test equipment used to support the LIS hardware through development, calibration, qualification, and integration with the TRMM spacecraft. The STE software provides the capability to control instrument activation, commanding (including both data formatting and user interfacing), data collection, decompression, and display and image simulation. The LIS STE code was developed for the DOS operating system in the C programming language. Because of the many unique data formats implemented by the flight instrument, the STE software was required to comprehend the same formats, and translate them for the test operator. The hardware interfaces to the LIS instrument using both commercial and custom computer boards, requiring that the STE code integrate this variety into a working system. In addition, the requirement to provide RTEP test capability dictated the need to provide simulations of background image data with short-duration lightning transients superimposed. This led to the development of unique code used to control the location, intensity, and variation above background for simulated lightning strikes

  19. Sine-Fitting Software for IEEE Standard 1057

    SciTech Connect

    Blair, Jerome

    1999-05-01

    Software application that performs the calculations related to the sine-fit tests of IEEE Standard 1057-1994. Example outputs are provided, together with explanations of how to use them to determine the important characteristics of the device under test. The application uses the 4-parameter sine fit from IEEE Standard 1057-1994.
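
    A minimal sketch of the related three-parameter sine fit (frequency known), which reduces to a linear least-squares problem; the four-parameter fit used by the application additionally iterates on frequency and is omitted here.

```python
import numpy as np

def sine_fit_3param(t, y, freq):
    """IEEE 1057 three-parameter sine fit: least-squares fit of
    y ~ a*cos(w t) + b*sin(w t) + c with the frequency known."""
    w = 2 * np.pi * freq
    D = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(D, y, rcond=None)
    a, b, c = coeffs
    amplitude = np.hypot(a, b)
    phase = np.arctan2(a, b)        # convention: y ~ amplitude*sin(w t + phase) + c
    residual_rms = np.sqrt(np.mean((y - D @ coeffs) ** 2))
    return amplitude, phase, c, residual_rms

# Example: a noisy 1 kHz record sampled at 1 MS/s
t = np.arange(4096) / 1e6
y = 0.8 * np.sin(2 * np.pi * 1e3 * t + 0.3) + 0.01 * np.random.randn(t.size)
print(sine_fit_3param(t, y, 1e3))
```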

  20. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
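
    To illustrate the kind of 3D wavelet decomposition referred to above, the sketch below applies a single level of a separable 3D Haar transform to a hyperspectral-like cube; ICER-3D itself uses different filters and a specific decomposition structure, so this is illustrative only.

```python
import numpy as np

def haar3d_level(cube):
    """One level of a separable 3D Haar transform on a cube with even dimensions.
    Returns the all-lowpass ("LLL") band and a dict of the seven detail bands."""
    def split(a, axis):
        even = np.take(a, range(0, a.shape[axis], 2), axis=axis)
        odd = np.take(a, range(1, a.shape[axis], 2), axis=axis)
        return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

    bands = {"": cube.astype(float)}
    for axis in range(3):                      # transform each dimension in turn
        new = {}
        for key, data in bands.items():
            lo, hi = split(data, axis)
            new[key + "L"] = lo
            new[key + "H"] = hi
        bands = new
    return bands["LLL"], {k: v for k, v in bands.items() if k != "LLL"}

cube = np.random.rand(8, 16, 16)               # (bands, rows, cols) hyperspectral-like cube
lll, details = haar3d_level(cube)
print(lll.shape, sorted(details))
```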

  1. Standardized development of computer software. Part 1: Methods

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.

  2. Image processing software for imaging spectrometry data analysis

    NASA Technical Reports Server (NTRS)

    Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

  3. IMCAT: Image and Catalogue Manipulation Software

    NASA Astrophysics Data System (ADS)

    Kaiser, Nick

    2011-08-01

    The IMCAT software was developed initially to do faint galaxy photometry for weak lensing studies, and provides a fairly complete set of tools for this kind of work. Unlike most packages for doing data analysis, the tools are standalone unix commands which you can invoke from the shell, via shell scripts or from perl scripts. The tools are arranged in a tree of directories. One main branch is the 'imtools'. These deal only with fits files. The most important imtool is the 'image calculator' 'ic' which allows one to do rather general operations on fits images. A second branch is the 'catools' which operate only on catalogues. The key cattool is 'lc'; this effectively defines the format of IMCAT catalogues, and allows one to do very general operations on and filtering of such catalogues. A third branch is the 'imcattools'. These tend to be much more specialised than the imtools and cattools and are focussed on faint galaxy photometry.

  4. Software for Viewing Landsat Mosaic Images

    NASA Technical Reports Server (NTRS)

    Watts, Jack; Farve, Catherine L.; Harvey, Craig

    2002-01-01

    A Windows-based computer program has been written to enable novice users (especially educators and students) to view images of large areas of the Earth (e.g., the continental United States) generated from image data acquired in the Landsat observations performed circa the year 1990. The large-area images are constructed as mosaics from the original Landsat images, which were acquired in several wavelength bands and each of which spans an area (in effect, one tile of a mosaic) of 5 degrees in latitude by approximately 6 degrees in longitude. Whereas the original Landsat data are registered on a universal transverse Mercator (UTM) grid, the program converts the UTM coordinates of a mouse pointer in the image to latitude and longitude, which are continuously updated and displayed as the pointer is moved. The mosaic image currently on display can be exported as a Windows bit-map file. Other images (e.g., of state boundaries or interstate highways) can be overlaid on Landsat mosaics. The program interacts with the user via standard toolbar, keyboard, and mouse user interfaces. The program is supplied on a compact disk along with tutorial and educational information.
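
    A minimal sketch of the UTM-to-latitude/longitude conversion the viewer performs for the mouse pointer, here using the pyproj library rather than the program's own code; the UTM zone (EPSG:32617) is an assumed example.

```python
from pyproj import Transformer

# Assumed example: WGS84 / UTM zone 17N (EPSG:32617) -> geographic WGS84 (EPSG:4326)
to_latlon = Transformer.from_crs("EPSG:32617", "EPSG:4326", always_xy=True)

def pointer_to_latlon(easting_m, northing_m):
    """Convert a pointer position in UTM metres to (latitude, longitude) in degrees."""
    lon, lat = to_latlon.transform(easting_m, northing_m)
    return lat, lon

print(pointer_to_latlon(500000.0, 4649776.0))   # a point near the zone's central meridian
```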

  5. Software for Viewing Landsat Mosaic Images

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A Windows-based computer program has been written to enable novice users (especially educators and students) to view images of large areas of the Earth (e.g., the continental United States) generated from image data acquired in the Landsat observations performed circa the year 1990. The large-area images are constructed as mosaics from the original Landsat images, which were acquired in several wavelength bands and each of which spans an area (in effect, one tile of a mosaic) of approx. 5 deg in latitude by approx. 6 deg in longitude. Whereas the original Landsat data are registered on a universal transverse Mercator (UTM) grid, the program converts the UTM coordinates of a mouse pointer in the image to latitude and longitude, which are continuously updated and displayed as the pointer is moved. The mosaic image currently on display can be exported as a Windows bit-map file. Other images (e.g., of state boundaries or interstate highways) can be overlaid on Landsat mosaics. The program interacts with the user via standard toolbar, keyboard, and mouse user interfaces. The program is supplied on a compact disk along with tutorial and educational information.

  6. Software for Viewing Landsat Mosaic Images

    NASA Technical Reports Server (NTRS)

    Watts, Zack; Farve, Catharine L.; Harvey, Craig

    2003-01-01

    A Windows-based computer program has been written to enable novice users (especially educators and students) to view images of large areas of the Earth (e.g., the continental United States) generated from image data acquired in the Landsat observations performed circa the year 1990. The large-area images are constructed as mosaics from the original Landsat images, which were acquired in several wavelength bands and each of which spans an area (in effect, one tile of a mosaic) of approximately 5 deg in latitude by approximately 6 deg in longitude. Whereas the original Landsat data are registered on a universal transverse Mercator (UTM) grid, the program converts the UTM coordinates of a mouse pointer in the image to latitude and longitude, which are continuously updated and displayed as the pointer is moved. The mosaic image currently on display can be exported as a Windows bitmap file. Other images (e.g., of state boundaries or interstate highways) can be overlaid on Landsat mosaics. The program interacts with the user via standard toolbar, keyboard, and mouse user interfaces. The program is supplied on a compact disk along with tutorial and educational information.

  7. Standards guide for space and earth sciences computer software

    NASA Technical Reports Server (NTRS)

    Mason, G.; Chapman, R.; Klinglesmith, D.; Linnekin, J.; Putney, W.; Shaffer, F.; Dapice, R.

    1972-01-01

    Guidelines for the preparation of systems analysis and programming work statements are presented. The data is geared toward the efficient administration of available monetary and equipment resources. Language standards and the application of good management techniques to software development are emphasized.

  8. Introduction to color facsimile: hardware, software, and standards

    NASA Astrophysics Data System (ADS)

    Lee, Daniel T. L.

    1996-03-01

    The design of a color facsimile machine presents a number of unique challenges. From the technical side it requires a very efficient, seamless integration of algorithms and architectures in image scanning, compression, color processing, communications and printing. From the standardization side, it requires that agreements on the color representation space, negotiation protocols and coding methods must be reached through formal international standardization process. This paper presents an introduction to the overall development of color facsimile. An overview of the recent development of the international Color Facsimile Standard is first presented. The standard enables the transmission of continuous-tone colors and gray-scale images in Group 3 (over conventional telephone lines) and Group 4 (over digital lines) facsimile services, with backwards compatibility to current black and white facsimile. The standard provides specifications on color representation and color image encoding methods as well as extensions to current facsimile protocols to enable the transmission of color images. The technical challenges in implementing the color facsimile standard on existing facsimile machines are described next. The integration of algorithms and architectures in color scanning, compression, color processing, transmission and rendering of received hardcopy facsimile in a color imaging pipeline is described. Lastly, the current status on softcopy color facsimile standardization is reported.

  9. Advanced software design and standards for traffic signal control

    SciTech Connect

    Bullock, D.; Hendrickson, C. )

    1992-05-01

    Improved traffic management and control systems are widely reported to be cost-effective investments. Simply retiming signals can provide significant benefits by reducing vehicle stops, travel times, and fuel consumption. The installation of advanced traffic management systems (ATMS) can provide even greater savings. However, many hardware and software obstacles have impeded the actual implementation of advanced traffic management systems. The general hardware and software limitations of current traffic signal control technology are reviewed in this paper. The impact of these deficiencies is discussed in the context of three example applications. Based on this discussion, the paper identifies several computing issues that should be addressed in order to reduce the effort involved with integrating existing traffic control devices. Adoption of standard industrial control computing platforms and development of new communication and software engineering models are recommended.

  10. International standards activities in image data compression

    NASA Technical Reports Server (NTRS)

    Haskell, Barry

    1989-01-01

    Integrated Services Digital Network (ISDN); coding for color TV, video conferencing, video conferencing/telephone, and still color images; ISO color image coding standard; and ISO still picture standard are briefly discussed. This presentation is represented by viewgraphs only.

  11. Open source software projects of the caBIG In Vivo Imaging Workspace Software special interest group.

    PubMed

    Prior, Fred W; Erickson, Bradley J; Tarbox, Lawrence

    2007-11-01

    The Cancer Bioinformatics Grid (caBIG) program was created by the National Cancer Institute to facilitate sharing of IT infrastructure, data, and applications among the National Cancer Institute-sponsored cancer research centers. The program was launched in February 2004 and now links more than 50 cancer centers. In April 2005, the In Vivo Imaging Workspace was added to promote the use of imaging in cancer clinical trials. At the inaugural meeting, four special interest groups (SIGs) were established. The Software SIG was charged with identifying projects that focus on open-source software for image visualization and analysis. To date, two projects have been defined by the Software SIG. The eXtensible Imaging Platform project has produced a rapid application development environment that researchers may use to create targeted workflows customized for specific research projects. The Algorithm Validation Tools project will provide a set of tools and data structures that will be used to capture measurement information and the associated information needed to allow a gold standard to be defined for a given database against which change analysis algorithms can be tested. Through these and future efforts, the caBIG In Vivo Imaging Workspace Software SIG endeavors to advance imaging informatics and provide new open-source software tools to advance cancer research. PMID:17846835

  12. Software components for medical image visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Starreveld, Yves P.; Gobbi, David G.; Finnis, Kirk; Peters, Terence M.

    2001-05-01

    Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy to understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments, make it desirable to have a hardware- and operating-system-independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL based Visualization Toolkit as a base, we have developed a set of components that implement the above mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics Iris, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system and a frame-based stereotaxy neurosurgery planning tool. The frame-based stereotaxy module has been
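
    A minimal sketch of the VTK-from-Python pattern the abstract describes, showing the generic source-mapper-actor-renderer pipeline rather than any of the project's own components; the sphere source is a stand-in for a medical image or surface source.

```python
import vtk

# Generic VTK visualization pipeline: source -> mapper -> actor -> renderer -> window
source = vtk.vtkSphereSource()               # stand-in for an image/surface data source
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(source.GetOutputPort())

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

interactor.Initialize()
window.Render()
interactor.Start()
```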

  13. Standardizing Activation Analysis: New Software for Photon Activation Analysis

    SciTech Connect

    Sun, Z. J.; Wells, D.; Green, J.; Segebade, C.

    2011-06-01

    Photon Activation Analysis (PAA) of environmental, archaeological and industrial samples requires extensive data analysis that is susceptible to error. For the purpose of saving time and manpower and minimizing error, a computer program was designed, built and implemented using SQL, Access 2007 and asp.net technology to automate this process. Based on the peak information of the spectrum and assisted by its PAA library, the program automatically identifies elements in the samples and calculates their concentrations and respective uncertainties. The software can also be operated in browser/server mode, which makes it possible to use it anywhere the internet is accessible. By switching the underlying nuclide library and the related formulas, the new software can be easily extended to neutron activation analysis (NAA), charged particle activation analysis (CPAA) or proton-induced X-ray emission (PIXE). Implementation of this would standardize the analysis of nuclear activation data. Results from this software were compared to standard PAA analysis with excellent agreement. With minimum input from the user, the software has proven to be fast, user-friendly and reliable.

  14. Software considerations in the design of an image archive

    NASA Astrophysics Data System (ADS)

    Seshadri, Sridhar B.; Kishore, Sheel; Khalsa, Satjeet S.; Stevens, John F.; Arenson, Ronald L.

    1990-08-01

    The Radiology Department at the Hospital of the University of Pennsylvania is currently expanding its prototype Picture Archiving and Communications System (PACS) into a fully functional clinical system. The first phase of this expansion involves three major efforts: the upgrade of the 10-Mbit token-ring to an 80-Mbit backbone with associated sub-nets, the implementation of a large-scale image archive, and, an interface between the PACS and the Department's Radiology Information System. Upon the completion of this phase, the PACS will serve the storage and display needs of four MRI scanners and four of the Hospital's Intensive Care Units. This paper addresses the implementation of a software suite designed to duplicate and enhance conventional Film Library functions on a PACS. The structure of an electronic 'folder' based upon the ACR/NEMA Digital Imaging and Communication Standard is also introduced.

  15. Automatic Image Registration Using Free and Open Source Software

    NASA Astrophysics Data System (ADS)

    Giri Babu, D.; Raja Shekhar, S. S.; Chandrasekar, K.; Sesha Sai, M. V. R.; Diwakar, P. G.; Dadhwal, V. K.

    2014-11-01

    Image registration is the most critical operation in remote sensing applications to enable location based referencing and analysis of earth features. This is the first step for any process involving identification, time series analysis or change detection using a large set of imagery over a region. Most of the reliable procedures involve time consuming and laborious manual methods of finding the corresponding matching features of the input image with respect to reference. Moreover, because the process involves human interaction, repeated operations at different times do not converge to the same result. Automated procedures rely on accurately determining the matching locations or points from both the images under comparison, and such procedures are robust and consistent over time. Different algorithms are available to achieve this, based on pattern recognition, feature based detection, similarity techniques etc. In the present study and implementation, correlation-based methods have been used, with an improvement in the form of a newly developed technique for identifying and pruning false match points. Free and Open Source Software (FOSS) have been used to develop the methodology to reach a wider audience, without any dependency on COTS (Commercially off the shelf) software. Standard deviation from the foci of the ellipse of correlated points is a statistical means of ensuring the best match of the points of interest based on both intensity values and location correspondence. The methodology is developed and standardised by enhancements to meet the registration requirements of remote sensing imagery. Results have shown a performance improvement, nearly matching the visual techniques, and have been implemented in remote sensing operational projects. The main advantage of the proposed methodology is its viability in production mode environment. This paper also shows that the visualization capabilities of MapWinGIS, GDAL's image handling abilities and OSSIM's correlation facility can be efficiently
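
    A minimal sketch of the correlation-based matching idea described above: a small template is slid over a reference image and the location of the normalized cross-correlation peak is taken as the matching point. The statistical pruning of false matches using the ellipse-foci criterion is not reproduced here.

```python
import numpy as np

def ncc_match(image, template):
    """Return ((row, col), score) of the best normalized cross-correlation match."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0:
                continue
            score = (p * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

# Example: find a 16x16 chip inside a 128x128 reference image
ref = np.random.rand(128, 128)
chip = ref[40:56, 70:86]
print(ncc_match(ref, chip))   # expected position (40, 70), score close to 1.0
```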

  16. Integration of CMM software standards for nanopositioning and nanomeasuring machines

    NASA Astrophysics Data System (ADS)

    Sparrer, E.; Machleidt, T.; Hausotte, T.; Manske, E.; Franke, K.-H.

    2011-06-01

    The paper focuses on the utilization of nanopositioning and nanomeasuring machines as a three dimensional coordinate measuring machine by means of the international harmonized communication protocol Inspection plus plus for Dimensional Measurement Equipment (abbreviated I++DME). I++DME was designed in 1999 to enable the interoperability of different measuring hardware, like coordinate measuring machines, form testers, camshaft or crankshaft measuring machines, with a priori unknown third party controlling and analyzing software. Our recent work was focused on the implementation of a modular, standard-conforming command interpreter server for the Inspection plus plus protocol. This communication protocol enables the application of I++DME compliant graphical controlling software, which is easy to operate and less error prone than the currently used textual programming via MathWorks MATLAB. The function and architecture of the I++DME command interpreter is discussed and the principle of operation is demonstrated by means of an example controlling a nanopositioning and nanomeasuring machine with Hexagon Metrology's controlling and analyzing software QUINDOS 7 via the I++DME command interpreter server.

  17. IMAGE information monitoring and applied graphics software environment. Volume 2. Software description

    SciTech Connect

    Hallam, J.W.; Ng, K.B.; Upham, G.L.

    1986-09-01

    The EPRI Information Monitoring and Applied Graphics Environment (IMAGE) system is designed for 'fast proto-typing' of advanced concepts for computer-aided plant operations tools. It is a flexible software system which can be used for rapidly creating, dynamically driving and evaluating advanced operator aid displays. The software is written to be both host computer and graphic device independent.

  18. Software to model AXAF image quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees

    1993-01-01

    This draft final report describes the work performed under this delivery order from May 1992 through June 1993. The purpose of this contract was to enhance and develop an integrated optical performance modeling software for complex x-ray optical systems such as AXAF. The GRAZTRACE program developed by the MSFC Optical Systems Branch for modeling VETA-I was used as the starting baseline program. The original program was a large single file program and, therefore, could not be modified very efficiently. The original source code has been reorganized, and a 'Make Utility' has been written to update the original program. The new version of the source code consists of 36 small source files to make it easier for the code developer to manage and modify the program. A user library has also been built and a 'Makelib' utility has been furnished to update the library. With the user library, the users can easily access the GRAZTRACE source files and build a custom library. A user manual for the new version of GRAZTRACE has been compiled. The plotting capability for the 3-D point spread functions and contour plots has been provided in the GRAZTRACE using the graphics package DISPLAY. The Graphics emulator over the network has been set up for programming the graphics routine. The point spread function and the contour plot routines have also been modified to display the plot centroid, and to allow the user to specify the plot range, and the viewing angle options. A Command Mode version of GRAZTRACE has also been developed. More than 60 commands have been implemented in a Code-V like format. The functions covered in this version include data manipulation, performance evaluation, and inquiry and setting of internal parameters. The user manual for these commands has been formatted as in Code-V, showing the command syntax, synopsis, and options. An interactive on-line help system for the command mode has also been accomplished to allow the user to find valid commands, command syntax

  19. DICOM: a standard for medical imaging

    NASA Astrophysics Data System (ADS)

    Horii, Steven C.; Bidgood, W. Dean

    1993-01-01

    Since 1983, the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) have been engaged in developing standards related to medical imaging. This alliance of users and manufacturers was formed to meet the needs of the medical imaging community as its use of digital imaging technology increased. The development of electronic picture archiving and communications systems (PACS), which could connect a number of medical imaging devices together in a network, led to the need for a standard interface and data structure for use on imaging equipment. Since medical image files tend to be very large and include much text information along with the image, the need for a fast, flexible, and extensible standard was quickly established. The ACR-NEMA Digital Imaging and Communications Standards Committee developed a standard which met these needs. The standard (ACR-NEMA 300-1988) was first published in 1985 and revised in 1988. It is increasingly available from equipment manufacturers. The current work of the ACR- NEMA Committee has been to extend the standard to incorporate direct network connection features, and build on standards work done by the International Standards Organization in its Open Systems Interconnection series. This new standard, called Digital Imaging and Communication in Medicine (DICOM), follows an object-oriented design methodology and makes use of as many existing internationally accepted standards as possible. This paper gives a brief overview of the requirements for communications standards in medical imaging, a history of the ACR-NEMA effort and what it has produced, and a description of the DICOM standard.
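
    As a small present-day illustration of reading a file that follows the DICOM standard, the sketch below uses the pydicom library (not part of the standards work described above); the file name and the attributes accessed are placeholders that depend on the data set.

```python
import pydicom

ds = pydicom.dcmread("image.dcm")                      # placeholder file name
print(ds.PatientID, ds.Modality, ds.Rows, ds.Columns)  # standard DICOM attributes
pixels = ds.pixel_array                                # decoded pixel data as a numpy array
print(pixels.shape, pixels.dtype)
```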

  20. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  1. Software Helps Extract Information From Astronomical Images

    NASA Technical Reports Server (NTRS)

    Hartley, Booth; Ebert, Rick; Laughlin, Gaylin

    1995-01-01

    PAC Skyview 2.0 is interactive program for display and analysis of astronomical images. Includes large set of functions for display, analysis and manipulation of images. "Man" pages with descriptions of functions and examples of usage included. Skyview used interactively or in "server" mode, in which another program calls Skyview and executes commands itself. Skyview capable of reading image data files of four types, including those in FITS, S, IRAF, and Z formats. Written in C.

  2. Development and implementation of software systems for imaging spectroscopy

    USGS Publications Warehouse

    Boardman, J.W.; Clark, R.N.; Mazer, A.S.; Biehl, L.L.; Kruse, F.A.; Torson, J.; Staenz, K.

    2006-01-01

    Specialized software systems have played a crucial role throughout the twenty-five year course of the development of the new technology of imaging spectroscopy, or hyperspectral remote sensing. By their very nature, hyperspectral data place unique and demanding requirements on the computer software used to visualize, analyze, process and interpret them. Often described as a marriage of the two technologies of reflectance spectroscopy and airborne/spaceborne remote sensing, imaging spectroscopy, in fact, produces data sets with unique qualities, unlike previous remote sensing or spectrometer data. Because of these unique spatial and spectral properties hyperspectral data are not readily processed or exploited with legacy software systems inherited from either of the two parent fields of study. This paper provides brief reviews of seven important software systems developed specifically for imaging spectroscopy.

  3. MaZda--a software package for image texture analysis.

    PubMed

    Szczypiński, Piotr M; Strzelecki, Michał; Materka, Andrzej; Klepaczko, Artur

    2009-04-01

    MaZda, a software package for 2D and 3D image texture analysis, is presented. It provides a complete path for quantitative analysis of image textures, including computation of texture features, procedures for feature selection and extraction, algorithms for data classification, and various data visualization and image segmentation tools. Initially, MaZda was aimed at analysis of magnetic resonance image textures. However, it revealed its effectiveness in analysis of other types of textured images, including X-ray and camera images. The software was utilized by numerous researchers in diverse applications. It has proven to be an efficient and reliable tool for quantitative image analysis, supporting more accurate and objective medical diagnosis. MaZda was also successfully used in the food industry to assess food product quality. MaZda can be downloaded for public use from the Institute of Electronics, Technical University of Lodz webpage. PMID:18922598
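
    A minimal sketch of one family of texture features MaZda computes, gray-level co-occurrence (GLCM) statistics, written directly in numpy; MaZda's own feature set, offsets and normalizations are far more extensive.

```python
import numpy as np

def glcm_features(img, levels=8, dr=0, dc=1):
    """Contrast and energy from a gray-level co-occurrence matrix for one pixel offset."""
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return contrast, energy

texture = np.random.randint(0, 256, (64, 64))
print(glcm_features(texture))
```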

  4. The image related services of the HELIOS software engineering environment.

    PubMed

    Engelmann, U; Meinzer, H P; Schröter, A; Günnel, U; Demiris, A M; Makabe, M; Evers, H; Jean, F C; Degoulet, P

    1995-01-01

    This paper describes the approach of the European HELIOS project to integrate image processing tools into ward information systems. The image processing tools are the result of the basic research in image analysis in the Department of Medical and Biological Informatics at the German Cancer Research Center. These tools for the analysis of two-dimensional images and three-dimensional data volumes, with 3D reconstruction and visualization, are part of the Image Related Services of HELIOS. The HELIOS software engineering environment allows the image processing functionality to be used in integrated applications. PMID:7743775

  5. Software Graphical User Interface For Analysis Of Images

    NASA Technical Reports Server (NTRS)

    Leonard, Desiree M.; Nolf, Scott R.; Avis, Elizabeth L.; Stacy, Kathryn

    1992-01-01

    CAMTOOL software provides graphical interface between Sun Microsystems workstation and Eikonix Model 1412 digitizing camera system. Camera scans and digitizes images, halftones, reflectives, transmissives, rigid or flexible flat material, or three-dimensional objects. Users digitize images and select from three destinations: work-station display screen, magnetic-tape drive, or hard disk. Written in C.

  6. FITSH: Software Package for Image Processing

    NASA Astrophysics Data System (ADS)

    Pál, András

    2011-11-01

    FITSH provides a standalone environment for analysis of data acquired by imaging astronomical detectors. The package provides utilities both for the full pipeline of subsequent related data processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple image combinations, spatial transformations and interpolations, etc.) and for aiding the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry, and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The utilities in the package are built on top of the commonly used UNIX/POSIX shells (hence the name of the package), therefore both frequently used and well-documented tools for such environments can be exploited and managing massive amounts of data is rather convenient.
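
    A minimal sketch of the aperture photometry step mentioned above: flux is summed inside a circular aperture and the sky is estimated from the median of a surrounding annulus. FITSH's own photometry handles sub-pixel apertures and error propagation, so this is illustrative only.

```python
import numpy as np

def aperture_photometry(image, x, y, r_ap=5.0, r_in=8.0, r_out=12.0):
    """Background-subtracted flux inside a circular aperture centred on (x, y)."""
    yy, xx = np.indices(image.shape)
    dist = np.hypot(xx - x, yy - y)
    aperture = dist <= r_ap
    annulus = (dist >= r_in) & (dist <= r_out)
    sky_per_pixel = np.median(image[annulus])
    return image[aperture].sum() - sky_per_pixel * aperture.sum()

# Example: a synthetic star on a flat sky background
img = np.full((64, 64), 100.0)
img[30:33, 40:43] += 500.0           # crude "star"
print(aperture_photometry(img, x=41, y=31))
```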

  7. gr-MRI: A software package for magnetic resonance imaging using software defined radios

    NASA Astrophysics Data System (ADS)

    Hasselwander, Christopher J.; Cao, Zhipeng; Grissom, William A.

    2016-09-01

    The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events was also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.
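
    A minimal sketch of the reconstruction step for fully sampled Cartesian data such as the spin-echo acquisitions mentioned above: the image is obtained from the k-space matrix with a centred inverse 2D FFT. This is the generic reconstruction, not gr-MRI's actual code.

```python
import numpy as np

def recon_cartesian(kspace):
    """Magnitude image from a fully sampled 2D Cartesian k-space matrix."""
    img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
    return np.abs(img)

# Example with synthetic k-space data (in practice: complex samples from the SDR)
kspace = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
image = recon_cartesian(kspace)
print(image.shape, image.dtype)
```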

  8. gr-MRI: A software package for magnetic resonance imaging using software defined radios.

    PubMed

    Hasselwander, Christopher J; Cao, Zhipeng; Grissom, William A

    2016-09-01

    The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events was also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs. PMID:27394165

  9. MOSAIC: Software for creating mosaics from collections of images

    NASA Technical Reports Server (NTRS)

    Varosi, F.; Gezari, D. Y.

    1992-01-01

    We have developed a powerful, versatile image processing and analysis software package called MOSAIC, designed specifically for the manipulation of digital astronomical image data obtained with (but not limited to) two-dimensional array detectors. The software package is implemented using the Interactive Data Language (IDL), and incorporates new methods for processing, calibration, analysis, and visualization of astronomical image data, stressing effective methods for the creation of mosaic images from collections of individual exposures, while at the same time preserving the photometric integrity of the original data. Since IDL is available on many computers, the MOSAIC software runs on most UNIX and VAX workstations with the X-Windows or Sun View graphics interface.
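
    A minimal sketch of the mosaic accumulation idea: each exposure is added into an output grid at its offset and divided by a coverage map, so overlapping regions are averaged and the mean flux is preserved. MOSAIC itself is IDL-based and also handles calibration and registration, so this numpy version is illustrative only.

```python
import numpy as np

def build_mosaic(shape, exposures):
    """Average overlapping exposures into one mosaic.

    exposures: list of (image, (row_offset, col_offset)) tuples."""
    total = np.zeros(shape)
    coverage = np.zeros(shape)
    for img, (r0, c0) in exposures:
        h, w = img.shape
        total[r0:r0 + h, c0:c0 + w] += img
        coverage[r0:r0 + h, c0:c0 + w] += 1
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(coverage > 0, total / coverage, np.nan)

tiles = [(np.ones((64, 64)), (0, 0)), (np.ones((64, 64)) * 2, (0, 48))]
mosaic = build_mosaic((64, 112), tiles)
print(np.nanmin(mosaic), np.nanmax(mosaic))   # the overlap region averages to 1.5
```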

  10. Increasing software testability with standard access and control interfaces

    NASA Technical Reports Server (NTRS)

    Nikora, Allen P; Some, Raphael R.; Tamir, Yuval

    2003-01-01

    We describe an approach to improving the testability of complex software systems with software constructs modeled after the hardware JTAG bus, used to provide visibility and controllability in testing digital circuits.

  11. MOPEX: a software package for astronomical image processing and visualization

    NASA Astrophysics Data System (ADS)

    Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley

    2006-06-01

    We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, point source extraction, as well as a number of minor image processing tasks. The combination of the image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows for the control over the image processing and display to be closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback of the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though originally designed for the Spitzer Space Telescope mission, many of its functions are of general usefulness and can be used for working with existing astronomical data and for new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. The software

  12. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.

  13. Issues and relationships among software standards for nuclear safety applications. Version 2.0

    SciTech Connect

    Scott, J.A.; Preckshot, G.G.; Lawrence, J.D.; Johnson, G.L.

    1996-03-26

    Lawrence Livermore National Laboratory is assisting the Nuclear Regulatory Commission with the development of draft regulatory guides for selected software engineering standards. This report describes the results of the initial task in this work. The selected software standards and a set of related software engineering standards were reviewed, and the resulting preliminary elements of the regulatory positions are identified in this report. The importance of a thorough understanding of the relationships among standards useful for developing safety-related software is emphasized. The relationship of this work to the update of the Standard Review Plan is also discussed.

  14. Uses of software in digital image analysis: a forensic report

    NASA Astrophysics Data System (ADS)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. It has wide applications in forensic science, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. In this paper the authors explain these tasks, grouped into three categories: Image Compression, Image Enhancement & Restoration, and Measurement Extraction, with the help of examples such as signature comparison, counterfeit currency comparison and footwear sole impressions using the software Canvas and Corel Draw.

  15. Image processing software for providing radiometric inputs to land surface climatology models

    NASA Technical Reports Server (NTRS)

    Newcomer, Jeffrey A.; Goetz, Scott J.; Strebel, Donald E.; Hall, Forrest G.

    1989-01-01

    During the First International Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE), 80 gigabytes of image data were generated from a variety of satellite and airborne sensors in a multidisciplinary attempt to study energy and mass exchange between the land surface and the atmosphere. To make these data readily available to researchers with a range of image data handling experience and capabilities, unique image-processing software was designed to perform a variety of nonstandard image-processing manipulations and to derive a set of standard-format image products. The nonconventional features of the software include: (1) adding new layers of geographic coordinates, and solar and viewing conditions to existing data; (2) providing image polygon extraction and calibration of data to at-sensor radiances; and, (3) generating standard-format derived image products that can be easily incorporated into radiometric or climatology models. The derived image products consist of easily handled ASCII descriptor files, byte image data files, and additional per-pixel integer data files (e.g., geographic coordinates, and sun and viewing conditions). Details of the solutions to the image-processing problems, the conventions adopted for handling a variety of satellite and aircraft image data, and the applicability of the output products to quantitative modeling are presented. They should be of general interest to future experiment and data-handling design considerations.
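
    A minimal sketch of the calibration-to-at-sensor-radiance step mentioned above, using the common linear gain/offset form; the coefficient values are placeholders, not FIFE values.

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Convert raw digital numbers to at-sensor radiance with a linear model:
    L = gain * DN + offset  (e.g., in W m^-2 sr^-1 um^-1)."""
    return gain * dn.astype(float) + offset

band = np.random.randint(0, 256, (512, 512), dtype=np.uint8)   # stand-in image band
radiance = dn_to_radiance(band, gain=0.5, offset=-1.2)          # placeholder coefficients
print(radiance.min(), radiance.max())
```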

  16. Single-molecule localization software applied to photon counting imaging.

    PubMed

    Hirvonen, Liisa M; Kilfeather, Tiffany; Suhling, Klaus

    2015-06-01

    Centroiding in photon counting imaging has traditionally been accomplished by a single-step, noniterative algorithm, often implemented in hardware. Single-molecule localization techniques in superresolution fluorescence microscopy are conceptually similar, but use more sophisticated iterative software-based fitting algorithms to localize the fluorophore. Here, we discuss common features and differences between single-molecule localization and photon counting imaging and investigate the suitability of single-molecule localization software for photon event localization. We find that single-molecule localization software packages designed for superresolution microscopy-QuickPALM, rapidSTORM, and ThunderSTORM-can work well when applied to photon counting imaging with a microchannel-plate-based intensified camera system: photon event recognition can be excellent, fixed pattern noise can be low, and the microchannel plate pores can easily be resolved. PMID:26192667
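
    A minimal sketch of the classical single-step centroiding that the paper contrasts with iterative fitting: the photon event position is estimated as the intensity-weighted centroid of the pixels around a detected local maximum.

```python
import numpy as np

def centroid_event(frame, peak_row, peak_col, half=2):
    """Intensity-weighted centroid of a (2*half+1)^2 window around a detected peak."""
    win = frame[peak_row - half:peak_row + half + 1,
                peak_col - half:peak_col + half + 1].astype(float)
    rows, cols = np.indices(win.shape)
    total = win.sum()
    r = peak_row - half + (rows * win).sum() / total
    c = peak_col - half + (cols * win).sum() / total
    return r, c

frame = np.zeros((32, 32))
frame[10:13, 20:23] = [[1, 2, 1], [2, 8, 2], [1, 2, 1]]   # synthetic photon splash
print(centroid_event(frame, 11, 21))                      # approximately (11.0, 21.0)
```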

  17. Modified control software for imaging ultracold atomic clouds

    NASA Astrophysics Data System (ADS)

    Whitaker, D. L.; Sharma, A.; Brown, J. M.

    2006-12-01

    A charge-coupled device (CCD) camera capable of taking high-quality images of ultracold atomic samples can often represent a significant portion of the equipment costs in an atom trapping experiment. We have modified the commercial control software of a CCD camera designed for astronomical imaging to take absorption images of ultracold rubidium clouds. This camera is sensitive at 780 nm and has been modified to take three successive 16-bit images at full resolution. The control software can be integrated into a Matlab graphical user interface with fitting routines written as Matlab functions. This camera is capable of recording high-quality images at a fraction of the cost of similar cameras typically used in atom trapping experiments.

  18. Computer Software Configuration Item-Specific Flight Software Image Transfer Script Generator

    NASA Technical Reports Server (NTRS)

    Bolen, Kenny; Greenlaw, Ronald

    2010-01-01

    A K-shell UNIX script enables the International Space Station (ISS) Flight Control Team (FCT) operators in NASA s Mission Control Center (MCC) in Houston to transfer an entire or partial computer software configuration item (CSCI) from a flight software compact disk (CD) to the onboard Portable Computer System (PCS). The tool is designed to read the content stored on a flight software CD and generate individual CSCI transfer scripts that are capable of transferring the flight software content in a given subdirectory on the CD to the scratch directory on the PCS. The flight control team can then transfer the flight software from the PCS scratch directory to the Electronically Erasable Programmable Read Only Memory (EEPROM) of an ISS Multiplexer/ Demultiplexer (MDM) via the Indirect File Transfer capability. The individual CSCI scripts and the CSCI Specific Flight Software Image Transfer Script Generator (CFITSG), when executed a second time, will remove all components from their original execution. The tool will identify errors in the transfer process and create logs of the transferred software for the purposes of configuration management.

  19. Backhoe 3D "gold standard" image

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy; Naidu, Kiranmai D.; Majumder, Uttam; Minardi, Michael A.

    2005-05-01

    ViSUAl-D (VIsual Sar Using ALl Dimensions), a 2004 DARPA/IXO seedling effort, is developing a capability for reliable high confidence ID from standoff ranges. Recent conflicts have demonstrated that the warfighter would greatly benefit from the ability to ID targets beyond visual and electro-optical ranges[1]. Forming optical-quality SAR images while exploiting full polarization, wide angles, and large bandwidth would be key evidence that such a capability is achievable. Using data generated by the Xpatch EM scattering code, ViSUAl-D investigates all degrees of freedom available to the radar designer, including 6 GHz bandwidth, full polarization and angle sampling over 2π steradians (upper hemisphere), in order to produce a "literal" image or representation of the target. This effort includes the generation of a "Gold Standard" image that can be produced at X-band utilizing all available target data. This "Gold Standard" image of the backhoe will serve as a test bed for future, more relevant military targets and their image development. The seedling team produced a public release data set, which was released at the 2004 SPIE conference, as well as a 3D "Gold Standard" backhoe image using a 3D image formation algorithm. This paper describes the full backhoe data set, the image formation algorithm, the visualization process and the resulting image.

  20. Software development for a Ring Imaging Detector

    NASA Astrophysics Data System (ADS)

    Torisky, Benjamin; Benmokhtar, Fatiha

    2015-04-01

    Jefferson Lab (Jlab) is performing a large-scale upgrade to their Continuous Electron Beam Accelerator Facility (CEBAF) up to 12 GeV beam. The Large Acceptance Spectrometer (CLAS12) in Hall B is being upgraded and a new Ring Imaging CHerenkov (RICH) detector is being developed to provide better kaon - pion separation throughout the 3 to 12 GeV range. With this addition, when the electron beam hits the target, the resulting pions, kaons, and other particles will pass through a wall of translucent aerogel tiles and create Cherenkov radiation. This light can then be accurately detected by a large array of Multi-Anode PhotoMultiplier Tubes (MA-PMT). I am presenting my work on the implementation of Java based reconstruction programs for the RICH in the CLAS12 main analysis package.

  1. Software Development for Ring Imaging Detector

    NASA Astrophysics Data System (ADS)

    Torisky, Benjamin

    2016-03-01

    Jefferson Lab (Jlab) is performing a large-scale upgrade to their Continuous Electron Beam Accelerator Facility (CEBAF) up to 12 GeV beam. The Large Acceptance Spectrometer (CLAS12) in Hall B is being upgraded and a new Ring Imaging Cherenkov (RICH) detector is being developed to provide better kaon - pion separation throughout the 3 to 12 GeV range. With this addition, when the electron beam hits the target, the resulting pions, kaons, and other particles will pass through a wall of translucent aerogel tiles and create Cherenkov radiation. This light can then be accurately detected by a large array of Multi-Anode PhotoMultiplier Tubes (MA-PMT). I am presenting an update on my work on the implementation of Java based reconstruction programs for the RICH in the CLAS12 main analysis package.

  2. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
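
    A minimal sketch of the uniform scalar quantization at the heart of the wavelet/scalar quantization method: each wavelet subband is quantized with its own bin width and later dequantized to the bin centre. The actual WSQ specification defines the bin widths per subband, a dead zone around zero, and Huffman coding, none of which are reproduced here.

```python
import numpy as np

def quantize(subband, bin_width):
    """Uniform scalar quantization of a wavelet subband to integer indices."""
    return np.round(subband / bin_width).astype(int)

def dequantize(indices, bin_width):
    """Reconstruct approximate coefficients from quantization indices."""
    return indices * bin_width

coeffs = np.random.randn(8, 8) * 10          # stand-in wavelet subband
q = quantize(coeffs, bin_width=2.0)
rec = dequantize(q, bin_width=2.0)
print(np.abs(coeffs - rec).max())             # error bounded by bin_width / 2
```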

  3. Non-Imaging Software/Data Analysis Requirements

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The analysis software needs of the non-imaging planetary data user are discussed. Assumptions as to the nature of the planetary science data centers where the data are physically stored are advanced, the scope of the non-imaging data is outlined, and facilities that users are likely to need to define and access data are identified. Data manipulation and analysis needs and display graphics are discussed.

  4. Comparison of ISO 9000 and recent software life cycle standards to nuclear regulatory review guidance

    SciTech Connect

    Preckshot, G.G.; Scott, J.A.

    1998-01-20

    Lawrence Livermore National Laboratory is assisting the Nuclear Regulatory Commission with the assessment of certain quality and software life cycle standards to determine whether additional guidance for the U.S. nuclear regulatory context should be derived from the standards. This report describes the nature of the standards and compares the guidance of the standards to that of the recently updated Standard Review Plan.

  5. Software to model AXAF-I image quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Feng, Chen

    1995-01-01

    A modular, user-friendly computer program for the modeling of grazing-incidence x-ray optical systems has been developed. This comprehensive software package, GRAZTRACE, covers the manipulation of input data, ray tracing with reflectivity and surface deformation effects, convolution with the x-ray source shape, and x-ray scattering. The program also includes capabilities for image analysis, detector scan modeling, and graphical presentation of the results. A number of utilities have been developed to interface the predicted Advanced X-ray Astrophysics Facility-Imaging (AXAF-I) mirror structural and thermal distortions with the ray trace. The software is written in FORTRAN 77 and runs on a SUN/SPARC station. An interactive command-mode version and a batch-mode version of the software have been developed.

  6. SIMA: Python software for analysis of dynamic fluorescence imaging data

    PubMed Central

    Kaifosh, Patrick; Zaremba, Jeffrey D.; Danielson, Nathan B.; Losonczy, Attila

    2014-01-01

    Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/. PMID:25295002
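
    As a generic illustration of the last step mentioned above -- extracting a signal from a segmented ROI -- a minimal numpy sketch is shown below; it does not reproduce SIMA's own API, and the movie array and boolean ROI mask are assumed inputs.

        import numpy as np

        def extract_roi_signal(movie, roi_mask):
            """movie: (frames, height, width) array; roi_mask: boolean (height, width) mask.
            Returns the mean fluorescence inside the ROI for each frame."""
            return movie[:, roi_mask].mean(axis=1)

        # Example: signal = extract_roi_signal(np.random.rand(100, 64, 64), np.ones((64, 64), bool))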

  7. Development of Standard Digital Images for Pneumoconiosis

    PubMed Central

    Lee, Won-Jeong; Kim, Sung Jin; Park, Choong-Ki; Park, Jai-Soung; Tae, Seok; Hering, Kurt Georg

    2011-01-01

    We developed a set of standard digital images (SDIs) to be used in the classification and recognition of pneumoconiosis. From July 3, 2006 through August 31, 2007, 531 retired male workers exposed to inorganic dust were examined by digital radiography (DR) and analog radiography (AR) on the same day, after the study was approved by our institutional review board and informed consent was obtained from all participants. All images were classified twice according to the International Labour Office (ILO) 2000 guidelines with reference to the ILO standard analog radiographs (SARs) by four chest radiologists. After a consensus reading of 349 digital images matched with the initially selected analog images, 120 digital images were selected as the SDIs, taking into account the distribution of pneumoconiosis findings. Images with profusion category 0/1, 1, 2, and 3 numbered 12, 50, 40, and 15, respectively, and large opacities were present in 43 images (A = 20, B = 22, C = 1). Among pleural abnormalities, costophrenic angle obliteration, pleural plaque, and pleural thickening were present in 11 (9.2%), 31 (25.8%), and 9 (7.5%) images, respectively. Twenty-one of 29 symbols were present; cp, ef, ho, id, me, pa, ra, and rp were absent. The set of 120 SDIs, developed using appropriate methods, contains a wider variety of pneumoconiosis findings than the ILO SARs and can be used as digital reference images for the recognition and classification of pneumoconiosis. PMID:22065894

  8. Standardizing PhenoCam Image Processing and Data Products

    NASA Astrophysics Data System (ADS)

    Milliman, T. E.; Richardson, A. D.; Klosterman, S.; Gray, J. M.; Hufkens, K.; Aubrecht, D.; Chen, M.; Friedl, M. A.

    2014-12-01

    The PhenoCam Network (http://phenocam.unh.edu) contains an archive of imagery from digital webcams to be used for scientific studies of phenological processes of vegetation. The image archive continues to grow and currently has over 4.8 million images representing 850 site-years of data. Time series of broadband reflectance (e.g., red, green, blue, infrared bands) and derivative vegetation indices (e.g., the green chromatic coordinate, or GCC) are calculated for regions of interest (ROIs) within each image series. These time series form the basis for subsequent analysis, such as spring and autumn transition date extraction (using curvature analysis techniques) and modeling the climate-phenology relationship. Processing is relatively straightforward but time consuming, with some sites having more than 100,000 images available. While the PhenoCam Network distributes the original image data, it is our goal to provide higher-level vegetation phenology products, generated in a standardized way, to encourage use of the data without the need to download and analyze individual images. We describe here the details of the standard image processing procedures, and also provide a description of the products that will be available for download. Products currently in development include an "all-image" file, which contains a statistical summary of the red, green and blue bands over the pixels in predefined ROIs for each image from a site. This product is used to generate 1-day and 3-day temporal aggregates with 90th percentile values of GCC for the specified time period, with standard image selection/filtering criteria applied. Sample software (in Python, R, and MATLAB) that can be used to read in and plot these products will also be described.
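
    To make the product definition concrete, a minimal Python sketch of the green chromatic coordinate and a 3-day 90th-percentile aggregate is given below; the formula GCC = G / (R + G + B) is standard, but the function names, ROI mask, and use of pandas resampling are illustrative assumptions rather than the PhenoCam processing code.

        import numpy as np
        import pandas as pd

        def gcc_for_image(rgb, roi_mask):
            """Mean GCC = G / (R + G + B) over the ROI pixels of one image."""
            r, g, b = (rgb[..., i][roi_mask].astype(float) for i in range(3))
            return np.mean(g / (r + g + b))

        def three_day_aggregate(timestamps, gcc_values):
            """90th percentile of per-image GCC within consecutive 3-day windows."""
            series = pd.Series(gcc_values, index=pd.to_datetime(timestamps))
            return series.resample("3D").quantile(0.9)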

  9. Open source tools for standardized privacy protection of medical images

    NASA Astrophysics Data System (ADS)

    Lien, Chung-Yueh; Onken, Michael; Eichelberg, Marco; Kao, Tsair; Hein, Andreas

    2011-03-01

    In addition to the primary care context, medical images are often useful for research projects and community healthcare networks, so-called "secondary use". Patient privacy becomes an issue in such scenarios, since the disclosure of personal health information (PHI) has to be prevented in a sharing environment. In general, most PHI should be completely removed from the images according to the respective privacy regulations, but some basic data are usually required for accurate image interpretation. Our objective is to utilize and enhance these specifications in order to provide reliable software implementations for de- and re-identification of medical images suitable for online and offline delivery. DICOM (Digital Imaging and Communications in Medicine) images are de-identified by replacing PHI-specific information with values that are still reasonable for imaging diagnosis and patient indexing. In this paper, this approach is evaluated based on a prototype implementation built on top of the open source framework DCMTK (DICOM Toolkit) utilizing standardized de- and re-identification mechanisms. A set of tools has been developed for DICOM de-identification that meets the privacy requirements of offline and online sharing environments and fully relies on standard-based methods.
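
    The tools described here are built on DCMTK; purely to make the replace-rather-than-remove idea concrete, a hypothetical pydicom sketch is shown below. The attribute list and the pseudonym scheme are simplified placeholders, not the standardized de-identification profiles used by the authors.

        import uuid
        import pydicom

        def deidentify(path_in, path_out, pseudonym=None):
            """Replace a few PHI attributes with values still usable for indexing."""
            ds = pydicom.dcmread(path_in)
            pseudonym = pseudonym or ("ANON-" + uuid.uuid4().hex[:8])
            ds.PatientName = pseudonym      # a consistent pseudonym keeps patient indexing possible
            ds.PatientID = pseudonym
            ds.PatientBirthDate = ""        # blank out direct identifiers
            for keyword in ("OtherPatientIDs", "PatientAddress", "ReferringPhysicianName"):
                if keyword in ds:
                    setattr(ds, keyword, "")
            ds.save_as(path_out)
            return pseudonym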

  10. Open Architecture Standard for NASA's Software-Defined Space Telecommunications Radio Systems

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Johnson, Sandra K.; Kacpura, Thomas J.; Hall, Charles S.; Smith, Carl R.; Liebetreu, John

    2008-01-01

    NASA is developing an architecture standard for software-defined radios used in space- and ground-based platforms to enable commonality among radio developments to enhance capability and services while reducing mission and programmatic risk. Transceivers (or transponders) with functionality primarily defined in software (e.g., firmware) have the ability to change their functional behavior through software alone. This radio architecture standard offers value by employing common waveform software interfaces, method of instantiation, operation, and testing among different compliant hardware and software products. These common interfaces within the architecture abstract application software from the underlying hardware to enable technology insertion independently at either the software or hardware layer. This paper presents the initial Space Telecommunications Radio System (STRS) Architecture for NASA missions to provide the desired software abstraction and flexibility while minimizing the resources necessary to support the architecture.

  11. MMX-I: data-processing software for multimodal X-ray imaging and tomography

    PubMed Central

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-01-01

    A new multi-platform freeware package has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred gigabytes) collected during a typical multi-technique fast scan at the Nanoscopium beamline, even on a standard PC. To the authors' knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments. PMID:27140159

  12. MMX-I: data-processing software for multimodal X-ray imaging and tomography.

    PubMed

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-05-01

    A new multi-platform freeware package has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred gigabytes) collected during a typical multi-technique fast scan at the Nanoscopium beamline, even on a standard PC. To the authors' knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments. PMID:27140159

  13. The Khoros software development environment for image and signal processing.

    PubMed

    Konstantinides, K; Rasure, J R

    1994-01-01

    Data flow visual language systems allow users to graphically create a block diagram of their applications and interactively control input, output, and system variables. Khoros is an integrated software development environment for information processing and visualization. It is particularly attractive for image processing because of its rich collection of tools for image and digital signal processing. This paper presents a general overview of Khoros with emphasis on its image processing and DSP tools. Various examples are presented and the future direction of Khoros is discussed. PMID:18291923

  14. Stromatoporoid biometrics using image analysis software: A first order approach

    NASA Astrophysics Data System (ADS)

    Wolniewicz, Pawel

    2010-04-01

    Strommetric is a new image analysis computer program that performs morphometric measurements of stromatoporoid sponges. The program measures 15 features of skeletal elements (pillars and laminae) visible in both longitudinal and transverse thin sections. The software is implemented in C++, using the Open Computer Vision (OpenCV) library. The image analysis system distinguishes skeletal elements from sparry calcite using Otsu's method for image thresholding. More than 150 photos of thin sections were used as a test set, from which 36,159 measurements were obtained. The software provides about one hundred times more data than the manual methods applied until now. The data obtained are reproducible, even if the work is repeated by different workers. The method thus makes biometric studies of stromatoporoids objective.
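
    The thresholding step described above can be illustrated with a few lines of OpenCV code; the sketch below is in Python rather than the program's C++, and the file name and smoothing kernel are placeholders.

        import cv2

        img = cv2.imread("thin_section.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
        blur = cv2.GaussianBlur(img, (5, 5), 0)                      # reduce speckle before thresholding
        thresh_value, binary = cv2.threshold(
            blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)       # Otsu picks the threshold automatically
        print("Otsu threshold:", thresh_value)
        cv2.imwrite("skeletal_mask.png", binary)                     # binary mask separating the two phases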

  15. [Utility of noise addition image made by using water phantom and image addition and subtraction software].

    PubMed

    Watanabe, Ryo; Ogawa, Masato; Mituzono, Hiroki; Aoki, Takahiro; Hayano, Mizuho; Watanabe, Yuka

    2010-08-20

    In optimizing exposure, it is very important to evaluate the impact of image noise on image quality. To do this, one needs to evaluate how much image noise makes the disease of interest invisible. In general, however, it is very difficult to acquire images of different quality during a clinical examination. A method for creating a noise addition image by adding image noise to raw data has been reported, but this approach requires a special system, so it is difficult to implement in many facilities. We have devised a method to easily create a noise addition image using a water phantom and the image addition and subtraction software that accompanies the scanner. To create a noise addition image, we first make a noise image by subtracting two water phantom images with different SD (standard deviation). The noise addition image is then created by adding this noise image to the original image. With this method, a simulated image with a graded SD can be created from the original image. Moreover, the noise frequency component of the created noise addition image is the same as that of a real image, so the relationship between image quality and SD in clinical images can be evaluated. Because this method of LDSI creation requires only image addition and subtraction software and a water phantom, noise addition images can be easily created, and the approach can be implemented in many facilities. PMID:20953102
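
    Under the assumption that two water-phantom scans with different noise levels are available as arrays, the procedure reduces to a few array operations; the following numpy sketch is a rough illustration, not the vendor's addition/subtraction software.

        import numpy as np

        def make_noise_addition_image(original, phantom_a, phantom_b):
            """Subtract two water-phantom images to isolate noise, then add it to the original."""
            noise = phantom_a.astype(float) - phantom_b.astype(float)   # uniform structure cancels, noise remains
            noise -= noise.mean()                                        # keep the mean pixel value unchanged
            return original.astype(float) + noise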

  16. Image Fusion Software in the Clearpem-Sonic Project

    NASA Astrophysics Data System (ADS)

    Pizzichemi, M.; di Vara, N.; Cucciati, G.; Ghezzi, A.; Paganoni, M.; Farina, F.; Frisch, B.; Bugalho, R.

    2012-08-01

    ClearPEM-Sonic is a mammography scanner that combines Positron Emission Tomography with 3D ultrasound echographic and elastographic imaging. It has been developed to improve early stage detection of breast cancer by combining metabolic and anatomical information. The PET system has been developed by the Crystal Clear Collaboration, while the 3D ultrasound probe has been provided by SuperSonic Imagine. In this framework, the visualization and fusion software is an essential tool for the radiologists in the diagnostic process. This contribution discusses the design choices, the issues faced during the implementation, and the commissioning of the software tools developed for ClearPEM-Sonic.

  17. Software Defined Radio Standard Architecture and its Application to NASA Space Missions

    NASA Technical Reports Server (NTRS)

    Andro, Monty; Reinhart, Richard C.

    2006-01-01

    A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development, such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture, and other aspects is considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing, and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass, power consumption, and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, industry, academia, and standards bodies are key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies, properly adapted to space. This talk describes NASA's current effort to investigate SDR applications to space missions and gives a brief overview of a candidate architecture under consideration for space-based platforms.

  18. SIVIC: Open-Source, Standards-Based Software for DICOM MR Spectroscopy Workflows.

    PubMed

    Crane, Jason C; Olson, Marram P; Nelson, Sarah J

    2013-01-01

    Quantitative analysis of magnetic resonance spectroscopic imaging (MRSI) data provides maps of metabolic parameters that show promise for improving medical diagnosis and therapeutic monitoring. While anatomical images are routinely reconstructed on the scanner, formatted using the DICOM standard, and interpreted using PACS workstations, this is not the case for MRSI data. The evaluation of MRSI data is made more complex because files are typically encoded with vendor-specific file formats and there is a lack of standardized tools for reconstruction, processing, and visualization. SIVIC is a flexible open-source software framework and application suite that enables a complete scanner-to-PACS workflow for evaluation and interpretation of MRSI data. It supports conversion of vendor-specific formats into the DICOM MR spectroscopy (MRS) standard, provides modular and extensible reconstruction and analysis pipelines, and provides tools to support the unique visualization requirements associated with such data. Workflows are presented which demonstrate the routine use of SIVIC to support the acquisition, analysis, and delivery to PACS of clinical (1)H MRSI datasets at UCSF. PMID:23970895

  19. The application of image processing software: Photoshop in environmental design

    NASA Astrophysics Data System (ADS)

    Dong, Baohua; Zhang, Chunmi; Zhuo, Chen

    2011-02-01

    In the process of environmental design and creation, the design sketch holds a very important position: it not only illuminates the design's idea and concept but also shows the design's visual effects to the client. In the field of environmental design, computer-aided design has made significant improvements, and many types of specialized software for rendering environmental drawings and for artistic post-processing have been implemented. With the use of this software, working efficiency has greatly increased and drawings have become more specific and more specialized. By analyzing the application of the Photoshop image processing software in environmental design and by comparing traditional hand drawing with drawing supported by modern technology, this essay explores how computer technology can play a bigger role in environmental design.

  20. Toward clinically relevant standardization of image quality.

    PubMed

    Samei, Ehsan; Rowberg, Alan; Avraham, Ellie; Cornelius, Craig

    2004-12-01

    In recent years, notable progress has been made on the standardization of medical image presentation with the definition and implementation of the Digital Imaging and Communications in Medicine (DICOM) Grayscale Standard Display Function (GSDF). In parallel, the American Association of Physicists in Medicine (AAPM) Task Group 18 has provided much-needed guidelines and tools for visual and quantitative assessment of medical display quality. In spite of these advances, however, there are still notable gaps in the effectiveness of the DICOM GSDF in assuring consistent and high-quality display of medical images. In addition, the degree of correlation between display technical data and the diagnostic usability and performance of displays remains unclear. This article proposes three specific steps that DICOM, AAPM, and ACR may collectively take to bridge the gap between technical performance and clinical use: (1) DICOM does not provide means and acceptance criteria to evaluate the conformance of a display device to the GSDF or to address other image quality characteristics; DICOM can expand beyond luminance response, extending the measurable, quantifiable elements of TG18 such as reflection and resolution. (2) In a large picture archiving and communication system (PACS) installation, it is critical to continually track the appropriate use and performance of multiple display devices; DICOM may help with this task by adding a Device Service Class to the standard to provide for communication and control of image quality parameters between applications and devices. (3) The question of the clinical significance of image quality metrics has rarely been addressed by prior efforts; in cooperation with the AAPM, the American College of Radiology (ACR), and the Society for Computer Applications in Radiology (SCAR), DICOM may help to initiate research that will determine the clinical consequence of variations in image quality metrics (e.g., GSDF conformance) and to define what constitutes image quality from a

  1. The quest for standards in medical imaging.

    PubMed

    Gibaud, Bernard

    2011-05-01

    This article focuses on standards supporting interoperability and system integration in the medical imaging domain. We introduce the basic concepts and actors and we review the most salient achievements in this domain, especially with the DICOM standard, and the definition of IHE integration profiles. We analyze and discuss what was successful, and what could still be more widely adopted by industry. We then sketch out a perspective of what should be done next, based on our vision of new requirements for the next decade. In particular, we discuss the challenges of a more explicit sharing of image and image processing semantics, and we discuss the help that semantic web technologies (and especially ontologies) may bring to achieving this goal. PMID:20605693

  2. Web-based interactive 2D/3D medical image processing and visualization software.

    PubMed

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

    There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a purely web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, a web-user-interface layer, a server communication layer, and a wrapper layer. To match the extendibility of current local medical image processing software, each layer is highly independent of the other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web user interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources on the client side. The user interface is designed such that users can select appropriate parameters for practical research and clinical studies. PMID:20022133

  3. Software for visualization, analysis, and manipulation of laser scan images

    NASA Astrophysics Data System (ADS)

    Burnsides, Dennis B.

    1997-03-01

    The recent introduction of laser surface scanning to scientific applications presents a challenge to computer scientists and engineers. Full utilization of this two- dimensional (2-D) and three-dimensional (3-D) data requires advances in techniques and methods for data processing and visualization. This paper explores the development of software to support the visualization, analysis and manipulation of laser scan images. Specific examples presented are from on-going efforts at the Air Force Computerized Anthropometric Research and Design (CARD) Laboratory.

  4. GILDAS: Grenoble Image and Line Data Analysis Software

    NASA Astrophysics Data System (ADS)

    Gildas Team

    2013-05-01

    GILDAS is a collection of software oriented toward (sub-)millimeter radioastronomical applications (either single-dish or interferometer). It has been adopted as the IRAM standard data reduction package and is jointly maintained by IRAM & CNRS. GILDAS contains many facilities, most of which are oriented towards spectral line mapping and many kinds of 3-dimensional data. The code, written in Fortran-90 with a few parts in C/C++ (mainly keyboard interaction, plotting, widgets), is easily extensible.

  5. Parallel-Processing Software for Creating Mosaic Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric

    2008-01-01

    A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
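
    A highly simplified sketch of the slice-per-CPU idea is shown below: the output mosaic is divided into horizontal slices, each worker fills its slice, and the results are reassembled. The warping itself (camera model and pixel correlation) is omitted, and the mosaic size, worker count, and function names are placeholders rather than the program's actual implementation.

        import numpy as np
        from multiprocessing import Pool

        MOSAIC_SHAPE = (1024, 4096)        # placeholder output size

        def fill_slice(row_range):
            start, stop = row_range
            piece = np.zeros((stop - start, MOSAIC_SHAPE[1]))
            # ... for each output pixel, find the corresponding point in a source image
            # using the camera model and copy/interpolate it (omitted) ...
            return start, piece

        if __name__ == "__main__":
            n_workers = 4
            bounds = np.linspace(0, MOSAIC_SHAPE[0], n_workers + 1, dtype=int)
            mosaic = np.zeros(MOSAIC_SHAPE)
            with Pool(n_workers) as pool:
                for start, piece in pool.map(fill_slice, list(zip(bounds[:-1], bounds[1:]))):
                    mosaic[start:start + piece.shape[0]] = piece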

  6. Designing multistatic ultrasound imaging systems using software analysis

    NASA Astrophysics Data System (ADS)

    Lee, Michael; Singh, Rahul S.; Culjat, Martin O.; Stubbs, Scott; Natarajan, Shyam; Brown, Elliott R.; Grundfest, Warren S.; Lee, Hua

    2010-03-01

    This paper describes the method of using the finite-element analysis software, PZFlex, to direct the design of a novel ultrasound imaging system which uses conformal transducer arrays. Current challenges in ultrasound array technology, including 2D array processing, have motivated exploration into new data acquisition and reconstruction techniques. Ultimately, these efforts encourage a broader examination of the processes used to effectively validate new array configurations and image formation procedures. Commercial software available today is capable of efficiently and accurately modeling detailed operational aspects of customized arrays. Combining quality simulated data with prototyped reconstruction techniques presents a valuable tool for testing novel schemes before committing more costly resources. To investigate this practice, we modeled three 1D ultrasound arrays operating multistatically instead of by the conventional phased-array approach. They are: a simple linear array, a half-circle array with 180-degree coverage, and a full circular array for inward imaging. We present the process used to create unique array models in PZFlex, simulate operation and obtain data, and subsequently generate images by inputting data into a reconstruction algorithm in MATLAB. Further discussion describes the tested reconstruction algorithm and includes resulting images.

  7. Integration of HIS components through open standards: an American HIS and a European Image Processing System.

    PubMed Central

    London, J. W.; Engelmann, U.; Morton, D. E.; Meinzer, H. P.; Degoulet, P.

    1993-01-01

    This paper describes the integration of an existing American Hospital Information System with a European Image Processing System. Both systems were built independently (with no knowledge of each other), but on open systems standards. The easy integration of these systems demonstrates the major benefit of open standards-based software design. PMID:8130452

  8. A standard interface between simulation programs and systems analysis software.

    PubMed

    Reichert, P

    2006-01-01

    A simple interface between simulation programs and systems analytical software is proposed. This interface is designed to facilitate linkage of environmental simulation programs with systems analytical software and thus can contribute to remedying the deficiency in applying systems analytical techniques to environmental modelling studies. The proposed concept, consisting of a text file interface combined with a batch mode simulation program call, is independent of model structure, operating system and programming language. It is open for implementation by academic and commercial simulation and systems analytical software developers and is very simple to implement. Its practicability is demonstrated by implementations for three environmental simulation packages (AQUASIM, SWAT and LEACHM) and two systems analytical program packages (UNCSIM, SUFI). The properties listed above and the demonstration of the ease of implementation of the approach are prerequisites for the stimulation of a widespread implementation of the proposed interface that would be beneficial for the dissemination of systems analytical techniques in the environmental and engineering sciences. Furthermore, such a development could stimulate the transfer of systems analytical techniques between different fields of application. PMID:16532757
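
    The proposed coupling pattern -- the systems-analysis tool writes a parameter text file, invokes the simulator in batch mode, and reads back a results file -- can be sketched in a few lines of Python; the program name and file formats below are placeholders, not the formats defined in the paper.

        import subprocess

        def run_simulation(params, param_file="params.txt", result_file="results.txt"):
            with open(param_file, "w") as f:
                for name, value in params.items():
                    f.write(f"{name}\t{value}\n")                 # one parameter per line
            subprocess.run(["./simulator", param_file, result_file], check=True)
            results = {}
            with open(result_file) as f:
                for line in f:
                    name, value = line.split()
                    results[name] = float(value)
            return results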

  9. Woods Hole Image Processing System Software implementation; using NetCDF as a software interface for image processing

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    The Branch of Atlantic Marine Geology has been involved in the collection, processing, and digital mosaicking of high-, medium-, and low-resolution side-scan sonar data during the past 6 years. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. With the increased power and reduced cost of major workstations, and the need to process side-scan data in the field, the Branch identified the need for an image processing package on a UNIX-based computer system that could be used in the field as well as be more generally available to Branch personnel. This report describes the initial development of that package, referred to as the Woods Hole Image Processing System (WHIPS). The software was developed using the Unidata NetCDF software interface to allow data to be more readily portable between different computer operating systems.

  10. Image Evaluation For Sensor Performance Standards

    NASA Astrophysics Data System (ADS)

    Peck, Lorin C.

    1989-02-01

    The subject of imagery evaluation as it applies to electro-optical (EO) sensor performance testing standards is discussed. Some of the difficulties encountered in the development of these standards for the various aircraft Line Replaceable Units (LRUs) are listed. The use of system performance testing is regarded as a requirement for the depot maintenance program to ensure the integrity of total system performance requirements for EO imaging systems such as the Advanced Tactical Air Reconnaissance System (ATARS). The necessity of tying NATO Essential Elements of Information (EEIs) together with Imagery Interpretation Rating Scale (IIRS) numbers is explained. The requirements for a field target suitable for EO imagery evaluation are also explained.

  11. Development of Software to Model AXAF-I Image Quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Hawkins, Lamar

    1996-01-01

    This draft final report describes the work performed under delivery order number 145 from May 1995 through August 1996. The scope of work included a number of software development tasks for the performance modeling of AXAF-I. A number of new capabilities and functions have been added to the GT software, which is the command-mode version of the GRAZTRACE software originally developed by MSFC. A structural data interface has been developed for the EAL (formerly SPAR) finite element analysis (FEA) program, which is being used by the MSFC Structural Analysis group for the analysis of AXAF-I. This interface utility can read the structural deformation file from EAL and from other finite element analysis programs such as NASTRAN and COSMOS/M, and convert the data to a format suitable for deformation ray tracing to predict the image quality for a distorted mirror. There is a provision in this utility to expand the data from finite element models assuming 180-degree symmetry. This utility has been used to predict image characteristics for the AXAF-I HRMA when subjected to gravity effects in the horizontal x-ray ground test configuration. The development of the metrology data processing interface software has also been completed. It can read the HDOS FITS-format surface map files, manipulate and filter the metrology data, and produce a deformation file that can be used by GT for ray tracing of the mirror surface figure errors. This utility has been used to determine the optimum alignment (axial spacing and clocking) for the four pairs of AXAF-I mirrors. Based on this optimized alignment, the geometric images and effective focal lengths for the as-built mirrors were predicted to cross-check the results obtained by Kodak.

  12. Development of Automatic Testing Tool for `Design & Coding Standard' for Railway Signaling Software

    NASA Astrophysics Data System (ADS)

    Hwang, Jong-gyu; Jo, Hyun-jeong

    2009-08-01

    With recent developments in computer technology, the dependency of railway signaling systems on computer software continues to increase, and accordingly, testing for the safety and reliability of railway signaling system software has become more important. This paper proposes an automated testing tool for coding rules for railway signaling system software and presents the results of its implementation. The testing items in the implemented tool refer to the international standards for railway system software and the MISRA-C standard. This automated testing tool can be used at the assessment stage for railway signaling system software, and it is anticipated that it will also be useful at the software development stage.

  13. Planning the Unplanned Experiment: Assessing the Efficacy of Standards for Safety Critical Software

    NASA Technical Reports Server (NTRS)

    Graydon, Patrick J.; Holloway, C. Michael

    2015-01-01

    We need well-founded means of determining whether software is fit for use in safety-critical applications. While software in industries such as aviation has an excellent safety record, the fact that software flaws have contributed to deaths illustrates the need for justifiably high confidence in software. It is often argued that software is fit for safety-critical use because it conforms to a standard for software in safety-critical systems. But little is known about whether such standards 'work.' Reliance upon a standard without knowing whether it works is an experiment; without collecting data to assess the standard, this experiment is unplanned. This paper reports on a workshop intended to explore how standards could practicably be assessed. Planning the Unplanned Experiment: Assessing the Efficacy of Standards for Safety Critical Software (AESSCS) was held on 13 May 2014 in conjunction with the European Dependable Computing Conference (EDCC). We summarize and elaborate on the workshop's discussion of the topic, including both the presented positions and the dialogue that ensued.

  14. Standardized system for multispectral imaging of palimpsests

    NASA Astrophysics Data System (ADS)

    Easton, Roger L., Jr.; Knox, Keith T.; Christens-Barry, William A.; Boydston, Kenneth; Toth, Michael B.; Emery, Doug; Noel, William

    2010-02-01

    The Archimedes Palimpsest imaging team has developed a spectral imaging system and associated processing techniques for general use with palimpsests and other artifacts. It includes an illumination system of light-emitting diodes (LEDs) in 13 narrow bands from the near ultraviolet through the near infrared (Δλ ≤ 40 nm), blue and infrared LEDs at raking angles, high-resolution monochrome and color sensors, a variety of image collection techniques (including spectral imaging of emitted fluorescence), standard metadata records, and image processing algorithms, including pseudocolor renderings and principal component analysis (PCA). This paper addresses the development and optimization of these techniques for the study of parchment palimpsests and the adaptation of these techniques to allow flexibility for new technologies and processing capabilities. The system has proven useful for extracting text from several palimpsests, including all original manuscripts in the Archimedes Palimpsest, the undertext in a privately owned 9th-century Syriac palimpsest, and selected palimpsested leaves surveyed at St. Catherine's Monastery in Egypt. In addition, the system is being used at the U.S. Library of Congress for spectral imaging of historical manuscripts and other documents.
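
    The principal component analysis step mentioned above can be sketched compactly: the registered band images are stacked into a cube, reshaped to a pixels-by-bands matrix, and decomposed so that faint undertext may separate into its own component. The sketch below assumes scikit-learn and a multi-band cube already loaded into memory; it is an illustration, not the team's processing pipeline.

        import numpy as np
        from sklearn.decomposition import PCA

        def pca_bands(cube, n_components=5):
            """cube: (height, width, bands) array of registered spectral images."""
            h, w, bands = cube.shape
            flat = cube.reshape(-1, bands).astype(float)
            components = PCA(n_components=n_components).fit_transform(flat)
            return components.reshape(h, w, n_components)   # each slice is one principal-component image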

  15. Software architecture standard for simulation virtual machine, version 2.0

    NASA Technical Reports Server (NTRS)

    Sturtevant, Robert; Wessale, William

    1994-01-01

    The Simulation Virtual Machine (SVM) is an Ada architecture which eases the effort involved in real-time software maintenance and sustaining engineering. The Software Architecture Standard defines the infrastructure from which all the simulation models are built. SVM was developed for and used in the Space Station Verification and Training Facility.

  16. Software for Verifying Image-Correlation Tie Points

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Yagi, Gary

    2008-01-01

    A computer program enables assessment of the quality of tie points in the image-correlation processes of the software described in the immediately preceding article. Tie points are computed in mappings between corresponding pixels in the left and right images of a stereoscopic pair. The mappings are sometimes not perfect because image data can be noisy and parallax can cause some points to appear in one image but not the other. The present computer program relies on the availability of a left-right correlation map in addition to the usual right-left correlation map. The additional map must be generated, which doubles the processing time. Such increased time can now be afforded in the data-processing pipeline, since the time for map generation has been reduced from about 60 to 3 minutes by the parallelization discussed in the previous article; the parallel cluster processing therefore enabled this better science result. The first mapping is typically from a point (denoted by coordinates x,y) in the left image to a point (x',y') in the right image. The second mapping is from (x',y') in the right image to some point (x",y") in the left image. If (x,y) and (x",y") are identical, then the mapping is considered perfect. The perfect-match criterion can be relaxed by introducing an error window that admits round-off error and a small amount of noise. The mapping procedure can be repeated until all points in each image not connected to points in the other image are eliminated, so that what remains are verified correlation data.
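
    The round-trip test described above lends itself to a compact vectorized sketch; the following Python illustration assumes the two correlation maps are stored as (height, width, 2) coordinate arrays and uses an arbitrary error-window tolerance, so it shows the idea rather than the program itself.

        import numpy as np

        def verify_tie_points(left_to_right, right_to_left, tol=1.5):
            """Keep left-image pixels whose left->right->left round trip closes within tol pixels."""
            h, w, _ = left_to_right.shape
            ys, xs = np.mgrid[0:h, 0:w]
            xr = np.clip(np.round(left_to_right[..., 0]).astype(int), 0, w - 1)
            yr = np.clip(np.round(left_to_right[..., 1]).astype(int), 0, h - 1)
            xb = right_to_left[yr, xr, 0]              # back-mapped x" coordinate
            yb = right_to_left[yr, xr, 1]              # back-mapped y" coordinate
            return np.hypot(xb - xs, yb - ys) <= tol   # boolean mask of verified tie points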

  17. Software and Algorithms for Biomedical Image Data Processing and Visualization

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Lambert, James; Lam, Raymond

    2004-01-01

    New software equipped with novel image processing algorithms and graphical-user-interface (GUI) tools has been designed for automated analysis and processing of large amounts of biomedical image data. The software, called PlaqTrak, has been specifically used for analysis of plaque on teeth of patients. New algorithms have been developed and implemented to segment teeth of interest from surrounding gum, and a real-time image-based morphing procedure is used to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The PlaqTrak system integrates these components into a single software suite with an easy-to-use GUI (see Figure 1) that allows users to do an end-to-end run of a patient's record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image. The automated and accurate processing of the captured images to segment each tooth [see Figure 2(a)] and then detect plaque on a tooth-by-tooth basis is a critical component of the PlaqTrak system to do clinical trials and analysis with minimal human intervention. These features offer distinct advantages over other competing systems that analyze groups of teeth or synthetic teeth. PlaqTrak divides each segmented tooth into eight regions using an advanced graphics morphing procedure [see results on a chipped tooth in Figure 2(b)], and a pattern recognition classifier is then used to locate plaque [red regions in Figure 2(d)] and enamel regions. The morphing allows analysis within regions of teeth, thereby facilitating detailed statistical analysis such as the amount of plaque present on the biting surfaces on teeth. This software system is applicable to a host of biomedical applications, such as cell analysis and life detection, or robotic applications, such

  18. Planning the Unplanned Experiment: Towards Assessing the Efficacy of Standards for Safety-Critical Software

    NASA Technical Reports Server (NTRS)

    Graydon, Patrick J.; Holloway, C. M.

    2015-01-01

    Safe use of software in safety-critical applications requires well-founded means of determining whether software is fit for such use. While software in industries such as aviation has a good safety record, little is known about whether standards for software in safety-critical applications 'work' (or even what that means). It is often (implicitly) argued that software is fit for safety-critical use because it conforms to an appropriate standard. Without knowing whether a standard works, such reliance is an experiment; without carefully collecting assessment data, that experiment is unplanned. To help plan the experiment, we organized a workshop to develop practical ideas for assessing software safety standards. In this paper, we relate and elaborate on the workshop discussion, which revealed subtle but important study design considerations and practical barriers to collecting appropriate historical data and recruiting appropriate experimental subjects. We discuss assessing standards as written and as applied, several candidate definitions for what it means for a standard to 'work,' and key assessment strategies and study techniques and the pros and cons of each. Finally, we conclude with thoughts about the kinds of research that will be required and how academia, industry, and regulators might collaborate to overcome the noted barriers.

  19. OsiriX: an open-source software for navigating in multidimensional DICOM images.

    PubMed

    Rosset, Antoine; Spadola, Luca; Ratib, Osman

    2004-09-01

    Multidimensional image navigation and display software was designed for display and interpretation of large sets of multidimensional and multimodality images such as combined PET-CT studies. The software was developed in Objective-C on a Macintosh platform under the MacOS X operating system using the GNUstep development environment. It also benefits from the extremely fast and optimized 3D graphic capabilities of the OpenGL graphic standard, which is widely used for computer games and optimized to take advantage of any available hardware graphic accelerator boards. In the design of the software, special attention was given to adapting the user interface to the specific and complex tasks of navigating through large sets of image data. An interactive jog-wheel device widely used in the video and movie industry was implemented to allow users to navigate in the different dimensions of an image set much faster than with a traditional mouse or on-screen cursors and sliders. The program can easily be adapted for very specific tasks that require a limited number of functions, by adding and removing tools from the program's toolbar and avoiding an overwhelming number of unnecessary tools and functions. The processing and image rendering tools of the software are based on the open-source libraries ITK and VTK. This ensures that all new developments in image processing that could emerge from other academic institutions using these libraries can be directly ported to the OsiriX program. OsiriX is provided free of charge under the GNU open-source licensing agreement at http://homepage.mac.com/rossetantoine/osirix. PMID:15534753

  20. Special Software for Planetary Image Processing and Research

    NASA Astrophysics Data System (ADS)

    Zubarev, A. E.; Nadezhdina, I. E.; Kozlova, N. A.; Brusnikin, E. S.; Karachevtseva, I. P.

    2016-06-01

    Special modules for the photogrammetric processing of remote sensing data have been developed that provide the opportunity to effectively organize and optimize planetary studies. The commercial software package PHOTOMOD™ is used as the basic application. Special modules were created to perform various types of data processing: calculation of preliminary navigation parameters, calculation of the shape parameters of a celestial body, global-view image orthorectification, and estimation of Sun illumination and Earth visibility from the planetary surface. Different types of data have been used for photogrammetric processing, including images of the Moon, Mars, Mercury, Phobos, the Galilean satellites, and Enceladus obtained by frame or push-broom cameras. We used modern planetary data and images taken over the years from orbital flight paths, with various illumination and resolution, as well as images obtained by planetary rovers from the surface. Planetary image processing is a complex task that can usually take from a few months to years. We present an efficient pipeline procedure that provides the possibility to obtain different data products and supports the long path from planetary images to celestial body maps. The obtained data - new three-dimensional control point networks, elevation models, and orthomosaics - enabled accurate map production: a new Phobos atlas (Karachevtseva et al., 2015) and various thematic maps derived from studies of the planetary surface (Karachevtseva et al., 2016a).

  1. The accuracy of a designed software for automated localization of craniofacial landmarks on CBCT images

    PubMed Central

    2014-01-01

    Background Two-dimensional projection radiographs have been traditionally considered the modality of choice for cephalometric analysis. To overcome the shortcomings of two-dimensional images, three-dimensional computed tomography (CT) has been used to evaluate craniofacial structures. However, manual landmark detection depends on medical expertise, and the process is time-consuming. The present study was designed to produce software capable of automated localization of craniofacial landmarks on cone beam (CB) CT images based on image registration and to evaluate its accuracy. Methods The software was designed using MATLAB programming language. The technique was a combination of feature-based (principal axes registration) and voxel similarity-based methods for image registration. A total of 8 CBCT images were selected as our reference images for creating a head atlas. Then, 20 CBCT images were randomly selected as the test images for evaluating the method. Three experts twice located 14 landmarks in all 28 CBCT images during two examinations set 6 weeks apart. The differences in the distances of coordinates of each landmark on each image between manual and automated detection methods were calculated and reported as mean errors. Results The combined intraclass correlation coefficient for intraobserver reliability was 0.89 and for interobserver reliability 0.87 (95% confidence interval, 0.82 to 0.93). The mean errors of all 14 landmarks were <4 mm. Additionally, 63.57% of landmarks had a mean error of <3 mm compared with manual detection (gold standard method). Conclusion The accuracy of our approach for automated localization of craniofacial landmarks, which was based on combining feature-based and voxel similarity-based methods for image registration, was acceptable. Nevertheless we recommend repetition of this study using other techniques, such as intensity-based methods. PMID:25223399
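
    The feature-based stage (principal axes registration) can be illustrated with a small sketch: the centroid and the eigenvectors of the coordinate covariance of a binary head mask give a rough alignment between two volumes. The code below is a generic numpy illustration under that assumption, not the authors' MATLAB implementation.

        import numpy as np

        def principal_axes(mask):
            """Centroid and principal axes of the foreground voxels of a 3D binary mask."""
            coords = np.argwhere(mask)                    # (n_voxels, 3) voxel coordinates
            centroid = coords.mean(axis=0)
            cov = np.cov((coords - centroid).T)           # 3x3 covariance of the voxel cloud
            eigvals, eigvecs = np.linalg.eigh(cov)
            order = np.argsort(eigvals)[::-1]             # largest spread first
            return centroid, eigvecs[:, order]            # columns are the principal axes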

  2. 'Face value': new medical imaging software in commercial view.

    PubMed

    Coopmans, Catelijne

    2011-04-01

    Based on three ethnographic vignettes describing the engagements of a small start-up company with prospective competitors, partners and customers, this paper shows how commercial considerations are folded into the ways visual images become 'seeable'. When company members mount demonstrations of prototype mammography software, they seek to generate interest but also to protect their intellectual property. Pivotal to these efforts to manage revelation and concealment is the visual interface, which is variously performed as obstacle and ally in the development of a profitable product. Using the concept of 'face value', the paper seeks to develop further insight into contemporary dynamics of seeing and showing by tracing the way techno-visual presentations and commercial considerations become entangled in practice. It also draws attention to the salience and significance of enactments of surface and depth in image-based practices. PMID:21998921

  3. Image Settings Affecting Nuchal Translucency Measurement Using Volume NT™ Software

    PubMed Central

    Cho, Hee Young; Kim, Young Han; Park, Yong Won; Kim, Sung Yoon; Lee, Kwang Hee; Yoo, Joon Sang

    2015-01-01

    Purpose To evaluate the effects of the deviation from the mid-sagittal plane, fetal image size, tissue harmonic imaging (THI), and speckle reduction filter (SRF) on the measurement of the nuchal translucency (NT) thickness using Volume NT™ software. Materials and Methods In 79 pregnant women, NT was measured using Volume NT™. Firstly, the three-dimensional volumes were categorized based on the angle of deviation in 10° intervals from the mid-sagittal plane. Secondly, the operator downsized the fetal image to less than 50% of the screen (Method A) and by magnifying the image (Method B). Next, the image was magnified until the fetal head and thorax occupied 75% of the screen, and the NT was measured (Method C). Lastly, NT values were acquired with THI and SRF functions on, with each function alternately on, and with both functions off. Results The mean differences in NT measurements were -0.09 mm (p<0.01) between two-dimensional (2D) and a deviation of 31-40° and -0.10 mm (p<0.01) between 2D and 41-50°. The intraclass correlation coefficients (ICC) for 2D-NT and NT according to image size were 0.858, 0.923, and 0.928 for methods A, B, and C, respectively. The ICC for 2D-NT and NT with respect to the THI and SRF were 0.786, 0.761, 0.740, and 0.731 with both functions on, THI only, SRF only, and with both functions off, respectively. Conclusion NT measurements made using Volume NT™ are affected by angle deviation from the mid-sagittal plane and fetal image size. Additionally, the highest correlation with 2D-NT was achieved when THI and SRF functions were used. PMID:26256978

  4. Software development for ACR-approved phantom-based nuclear medicine tomographic image quality control with cross-platform compatibility

    NASA Astrophysics Data System (ADS)

    Oh, Jungsu S.; Choi, Jae Min; Nam, Ki Pyo; Chae, Sun Young; Ryu, Jin-Sook; Moon, Dae Hyuk; Kim, Jae Seung

    2015-07-01

    Quality control and quality assurance (QC/QA) have been two of the most important issues in modern nuclear medicine (NM) imaging for both clinical practice and academic research. Whereas quantitative QC analysis software is common to modern positron emission tomography (PET) scanners, the QC of gamma cameras and/or single-photon-emission computed tomography (SPECT) scanners has not been sufficiently addressed. Although a thorough standard operating procedure (SOP) for mechanical and software maintenance may help the QC/QA of a gamma camera and SPECT-computed tomography (CT), no previous study has addressed a unified platform or process to decipher or analyze SPECT phantom images acquired from various scanners. In addition, only a few approaches have established cross-platform software to enable technologists and physicists to assess the variety of SPECT scanners from different manufacturers. To resolve these issues, we have developed Interactive Data Language (IDL)-based in-house software for cross-platform (in terms of not only operating systems (OS) but also manufacturers) analyses of QC data on an ACR SPECT phantom, which is essential for assessing and assuring the tomographic image quality of SPECT. We applied our software to our routine quarterly QC of ACR SPECT phantom images acquired from a number of platforms (OS/manufacturers). Based on our experience, we suggest that our software can offer a unified platform that allows images acquired from various types of scanners to be analyzed with great precision and accuracy.

  5. Vobi One: a data processing software package for functional optical imaging

    PubMed Central

    Takerkart, Sylvain; Katz, Philippe; Garcia, Flavien; Roux, Sébastien; Reynaud, Alexandre; Chavane, Frédéric

    2014-01-01

    Optical imaging is the only technique that allows recording of the activity of a neuronal population at the mesoscopic scale. A large region of the cortex (10–20 mm diameter) is directly imaged with a CCD camera while the animal performs a behavioral task, producing spatio-temporal data with an unprecedented combination of spatial and temporal resolutions (respectively, tens of micrometers and milliseconds). However, researchers who have developed and used this technique have relied on heterogeneous software and methods to analyze their data. In this paper, we introduce Vobi One, a software package entirely dedicated to the processing of functional optical imaging data. It has been designed to facilitate the processing of data and the comparison of different analysis methods. Moreover, it should help bring good analysis practices to the community because it relies on a database and a standard format for data handling and it provides tools that allow the production of reproducible research. Vobi One is an extension of the BrainVISA software platform, entirely written with the Python programming language, open source and freely available for download at https://trac.int.univ-amu.fr/vobi_one. PMID:24478623

  6. Vobi One: a data processing software package for functional optical imaging.

    PubMed

    Takerkart, Sylvain; Katz, Philippe; Garcia, Flavien; Roux, Sébastien; Reynaud, Alexandre; Chavane, Frédéric

    2014-01-01

    Optical imaging is the only technique that makes it possible to record the activity of a neuronal population at the mesoscopic scale. A large region of the cortex (10-20 mm diameter) is directly imaged with a CCD camera while the animal performs a behavioral task, producing spatio-temporal data with an unprecedented combination of spatial and temporal resolutions (respectively, tens of micrometers and milliseconds). However, researchers who have developed and used this technique have relied on heterogeneous software and methods to analyze their data. In this paper, we introduce Vobi One, a software package entirely dedicated to the processing of functional optical imaging data. It has been designed to facilitate the processing of data and the comparison of different analysis methods. Moreover, it should help bring good analysis practices to the community because it relies on a database and a standard format for data handling and it provides tools that support reproducible research. Vobi One is an extension of the BrainVISA software platform, entirely written with the Python programming language, open source and freely available for download at https://trac.int.univ-amu.fr/vobi_one. PMID:24478623

  7. Development of a Consensus Standard for Verification and Validation of Nuclear System Thermal-Fluids Software

    SciTech Connect

    Edwin A. Harvego; Richard R. Schultz; Ryan L. Crane

    2011-12-01

    With the resurgence of nuclear power and increased interest in advanced nuclear reactors as an option to supply abundant energy without the associated greenhouse gas emissions of the more conventional fossil fuel energy sources, there is a need to establish internationally recognized standards for the verification and validation (V&V) of software used to calculate the thermal-hydraulic behavior of advanced reactor designs for both normal operation and hypothetical accident conditions. To address this need, ASME (American Society of Mechanical Engineers) Standards and Certification has established the V&V 30 Committee, under the jurisdiction of the V&V Standards Committee, to develop a consensus standard for verification and validation of software used for design and analysis of advanced reactor systems. The initial focus of this committee will be on the V&V of system analysis and computational fluid dynamics (CFD) software for nuclear applications. To limit the scope of the effort, the committee will further limit its focus to software to be used in the licensing of High-Temperature Gas-Cooled Reactors. In this framework, the Standard should conform to Nuclear Regulatory Commission (NRC) and other regulatory practices, procedures and methods for licensing of nuclear power plants as embodied in the United States (U.S.) Code of Federal Regulations and other pertinent documents such as Regulatory Guide 1.203, 'Transient and Accident Analysis Methods' and NUREG-0800, 'NRC Standard Review Plan'. In addition, the Standard should be consistent with applicable sections of ASME NQA-1-2008 'Quality Assurance Requirements for Nuclear Facility Applications (QA)'. This paper describes the general requirements for the proposed V&V 30 Standard, which include: (a) applicable NRC and other regulatory requirements for defining the operational and accident domain of a nuclear system that must be considered if the system is to be licensed, (b) the corresponding calculation domain of

  8. Development of image-processing software for automatic segmentation of brain tumors in MR images

    PubMed Central

    Vijayakumar, C.; Gharpure, Damayanti Chandrashekhar

    2011-01-01

    Most of the commercially available software for brain tumor segmentation has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have developed an image-analysis software package called ‘Prometheus,’ which performs neural system–based segmentation operations on MR images using pre-trained information. The software also has the capability to improve its segmentation performance by using the training module of the neural system. The aim of this article is to present the design and modules of this software. The segmentation module of Prometheus can be used primarily for image analysis in MR images. Prometheus was validated against manual segmentation by a radiologist and its mean sensitivity and specificity were found to be 85.71±4.89% and 93.2±2.87%, respectively. Similarly, the mean segmentation accuracy and mean correspondence ratio were found to be 92.35±3.37% and 0.78±0.046, respectively. PMID:21897560
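
    The validation metrics reported above can be computed directly from binary masks. The following is a minimal sketch assuming the automated and manual (radiologist) segmentations are boolean NumPy arrays of the same shape; it illustrates the metric definitions only and is not part of Prometheus itself.

```python
import numpy as np

def sensitivity_specificity(auto_mask, manual_mask):
    """Voxel-wise sensitivity and specificity of an automated mask against a manual reference."""
    tp = np.logical_and(auto_mask, manual_mask).sum()
    tn = np.logical_and(~auto_mask, ~manual_mask).sum()
    fp = np.logical_and(auto_mask, ~manual_mask).sum()
    fn = np.logical_and(~auto_mask, manual_mask).sum()
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical tumour masks
auto = np.zeros((64, 64), dtype=bool);   auto[20:40, 20:40] = True
manual = np.zeros((64, 64), dtype=bool); manual[22:42, 20:40] = True
sens, spec = sensitivity_specificity(auto, manual)
print(f"sensitivity={sens:.2%}, specificity={spec:.2%}")
```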

  9. A comprehensive software system for image processing and programming. Final report

    SciTech Connect

    Rasure, J.; Hallett, S.; Jordan, R.

    1994-12-31

    XVision is an example of a comprehensive software system dedicated to the processing of multidimensional scientific data. Because it is comprehensive it is necessarily complex. This design complexity is dealt with by considering XVision as nine overlapping software systems, their components and the required standards. The complexity seen by a user of XVision is minimized by the different interfaces providing access to the image processing routines as well as an interface to ease the incorporation of new routines. The XVision project has stressed the importance of having: (1) interfaces to accommodate users with differing preferences and backgrounds and (2) tools to support the programmer and the scientist. The result is a system that provides a framework for building a powerful research, education and development tool.

  10. Standardization of (59)Fe by 4π(PC)β-γ software coincidence system.

    PubMed

    Koskinas, M F; Polillo, G; Brancaccio, F; Yamazaki, I M; Dias, M S

    2016-03-01

    The procedure for the standardization of (59)Fe using a 4π(PC)β-γ software coincidence system is described. The standardization was performed with an experimental setup consisting of a thin window gas-flow proportional counter (PC) in 4π geometry coupled to a NaI(Tl) scintillator and to a HPGe detector. The data acquisition was carried out by means of a Software Coincidence System (SCS). The beta efficiency was changed by using Collodion films and aluminum foils as external absorbers. PMID:26688361

  11. Collaboration using open standards and open source software (examples of DIAS/CEOS Water Portal)

    NASA Astrophysics Data System (ADS)

    Miura, S.; Sekioka, S.; Kuroiwa, K.; Kudo, Y.

    2015-12-01

    The DIAS/CEOS Water Portal is a part of the DIAS (Data Integration and Analysis System, http://www.editoria.u-tokyo.ac.jp/projects/dias/?locale=en_US) systems for data distribution for users including, but not limited to, scientists, decision makers and officers such as river administrators. One of the functions of this portal is to enable one-stop search of, and access to, various water-related data archived at multiple data centers located all over the world. This portal itself does not store data. Instead, according to requests made by users on the web page, it retrieves data from distributed data centers on-the-fly and lets users download the data and view rendered images/plots. Our system mainly relies on the open source software GI-cat (http://essi-lab.eu/do/view/GIcat) and open standards such as OGC-CSW, OpenSearch and the OPeNDAP protocol to enable the above functions. Details on how it works will be introduced during the presentation. Although some data centers have unique metadata formats and/or data search protocols, our portal's brokering function enables users to search across various data centers at one time. This portal is also connected to other data brokering systems, including the GEOSS DAB (Discovery and Access Broker). As a result, users can search over thousands of datasets and millions of files at one time. Users can access the DIAS/CEOS Water Portal system at http://waterportal.ceos.org/.
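
    The brokered search described above is exposed through standard protocols such as OpenSearch. The sketch below shows the kind of client-side request involved using the `requests` library; the endpoint URL and the spatial-filter parameter are hypothetical placeholders, not the portal's documented API (only `searchTerms`, `startIndex` and `count` are standard OpenSearch parameter names).

```python
import requests

# Hypothetical OpenSearch-style endpoint and parameters -- illustrative only.
ENDPOINT = "https://example.org/opensearch"   # placeholder, not the real portal URL
params = {
    "searchTerms": "river discharge",    # free-text query (standard OpenSearch parameter)
    "startIndex": 1,
    "count": 10,
    "bbox": "135.0,30.0,145.0,45.0",     # hypothetical spatial filter (west,south,east,north)
}

response = requests.get(ENDPOINT, params=params, timeout=30)
response.raise_for_status()
print(response.text[:500])   # e.g. an Atom/XML feed describing matching datasets
```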

  12. A Survey of DICOM Viewer Software to Integrate Clinical Research and Medical Imaging.

    PubMed

    Haak, Daniel; Page, Charles-E; Deserno, Thomas M

    2016-04-01

    The digital imaging and communications in medicine (DICOM) protocol is the leading standard for image data management in healthcare. Imaging biomarkers and image-based surrogate endpoints in clinical trials and medical registries require DICOM viewer software with advanced functionality for visualization and interfaces for integration. In this paper, a comprehensive evaluation of 28 DICOM viewers is performed. The evaluation criteria are obtained from application scenarios in clinical research rather than patient care. They include (i) platform, (ii) interface, (iii) support, (iv) two-dimensional (2D), and (v) three-dimensional (3D) viewing. On average, 4.48 of the 8 2D and 1.43 of the 5 3D image viewing criteria are satisfied. Suitable DICOM interfaces for central viewing in hospitals are provided by GingkoCADx, MIPAV, and OsiriX Lite. The viewers ImageJ, MicroView, MIPAV, and OsiriX Lite offer all included 3D-rendering features for advanced viewing. Interfaces needed for decentral viewing in web-based systems are offered by Oviyam, Weasis, and Xero. Focusing on open source components, MIPAV is the best candidate for 3D imaging as well as DICOM communication. Weasis is superior for workflow optimization in clinical trials. Our evaluation shows that advanced visualization and suitable interfaces can also be found in the open source field and not only in commercial products. PMID:26482912
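
    The 2D viewing criteria evaluated in this survey ultimately rest on reading DICOM pixel data and presenting it with a window/level applied. A minimal sketch of that core step with `pydicom` and `matplotlib` follows; the file name is a placeholder, and the window is derived from pixel statistics for simplicity (real viewers honour header values and add overlays, measurements and hanging protocols on top of this).

```python
import pydicom
import matplotlib.pyplot as plt

ds = pydicom.dcmread("example.dcm")      # placeholder path to a DICOM file
img = ds.pixel_array.astype(float)

# crude window/level chosen from the pixel statistics
center, width = img.mean(), max(img.std() * 4, 1.0)
lo, hi = center - width / 2, center + width / 2

plt.imshow(img.clip(lo, hi), cmap="gray")
plt.title(getattr(ds, "Modality", "DICOM image"))
plt.axis("off")
plt.show()
```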

  13. Towards an Improvement of Software Development Processes through Standard Business Rules

    NASA Astrophysics Data System (ADS)

    Martínez-Fernández, José L.; Martínez, Paloma; González-Cristóbal, José C.

    The automation of software development processes is a desirable goal of current software companies which would lead to a cost reduction in software production. This automation is the backbone of approaches such as Model Driven Architecture (MDA) or Software Factories. This paper proposes the use of standard Business Rules (using Rules Interchange Format, RIF) to specify application functionality along with a platform to produce automatic implementations for them. The novelty of this proposal is to introduce Business Rules at all levels of MDA architecture in a software development process, providing a supporting tool where production Business Rules are considered at every abstraction level. Production Business Rules are represented through standard languages, rule engine vendor independence is assured via automatic transformation between rule languages, and Business Rules reuse is made possible. The objective is to get the development of production Business Rules closer to non-technical people involved in the software development process through the use of natural language processing approaches, automatic transformations among models and semantic web languages such as Ontology Web Language (OWL).

  14. Application of industry-standard guidelines for the validation of avionics software

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.; Shagnea, Anita M.

    1990-01-01

    The application of industry standards to the development of avionics software is discussed, focusing on verification and validation activities. It is pointed out that the procedures that guide the avionics software development and testing process are under increased scrutiny. The DO-178A guidelines, Software Considerations in Airborne Systems and Equipment Certification, are used by the FAA for certifying avionics software. To investigate the effectiveness of the DO-178A guidelines for improving the quality of avionics software, guidance and control software (GCS) is being developed according to the DO-178A development method. It is noted that, due to the extent of the data collection and configuration management procedures, any phase in the life cycle of a GCS implementation can be reconstructed. Hence, a fundamental development and testing platform has been established that is suitable for investigating the adequacy of various software development processes. In particular, the overall effectiveness and efficiency of the development method recommended by the DO-178A guidelines are being closely examined.

  15. The family of standard hydrogen monitoring system computer software design description: Revision 2

    SciTech Connect

    Bender, R.M.

    1994-11-16

    In March 1990, 23 waste tanks at the Hanford Nuclear Reservation were identified as having the potential for the buildup of gas to a flammable or explosive level. As a result of the potential for hydrogen gas buildup, a project was initiated to design a standard hydrogen monitoring system (SHMS) for use at any waste tank to analyze gas samples for hydrogen content. Since it was originally deployed three years ago, two variations of the original system have been developed: the SHMS-B and SHMS-C. All three are currently in operation at the tank farms and will be discussed in this document. To avoid confusion in this document, when a feature is common to all three of the SHMS variants, it will be referred to as "the family of SHMS." When it is specific to only one or two, they will be identified. The purpose of this computer software design document is to provide the following: the computer software requirements specification that documents the essential requirements of the computer software and its external interfaces; the computer software design description; the computer software user documentation for using and maintaining the computer software and any dedicated hardware; and the requirements for computer software design verification and validation.

  16. A new gold-standard dataset for 2D/3D image registration evaluation

    NASA Astrophysics Data System (ADS)

    Pawiro, Supriyanto; Markelj, Primoz; Gendrin, Christelle; Figl, Michael; Stock, Markus; Bloch, Christoph; Weber, Christoph; Unger, Ewald; Nöbauer, Iris; Kainberger, Franz; Bergmeister, Helga; Georg, Dietmar; Bergmann, Helmar; Birkfellner, Wolfgang

    2010-02-01

    In this paper, we propose a new gold standard data set for the validation of 2D/3D image registration algorithms for image guided radiotherapy. A gold standard data set was calculated using a pig head with attached fiducial markers. We used several imaging modalities common in diagnostic imaging or radiotherapy which include 64-slice computed tomography (CT), magnetic resonance imaging (MRI) using T1, T2 and proton density (PD) sequences, and cone beam CT (CBCT) imaging data. Radiographic data were acquired using kilovoltage (kV) and megavoltage (MV) imaging techniques. The image information reflects both anatomy and reliable fiducial marker information, and improves over existing data sets by the level of anatomical detail and image data quality. The markers of three dimensional (3D) and two dimensional (2D) images were segmented using Analyze 9.0 (AnalyzeDirect, Inc) and an in-house software. The projection distance errors (PDE) and the expected target registration errors (TRE) over all the image data sets were found to be less than 1.7 mm and 1.3 mm, respectively. The gold standard data set, obtained with state-of-the-art imaging technology, has the potential to improve the validation of 2D/3D registration algorithms for image guided therapy.
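
    Target registration error of the kind reported above is computed from fiducial target positions mapped through the estimated transform and through the gold-standard transform. The sketch below assumes rigid 4×4 homogeneous transforms and uses hypothetical marker coordinates; it illustrates the error definition only, not the authors' registration pipeline.

```python
import numpy as np

def target_registration_error(points_mm, estimated_T, gold_T):
    """Mean distance (mm) between target points mapped by the estimated and gold-standard transforms."""
    pts_h = np.hstack([points_mm, np.ones((len(points_mm), 1))])     # homogeneous coordinates
    diff = (pts_h @ estimated_T.T - pts_h @ gold_T.T)[:, :3]
    return np.linalg.norm(diff, axis=1).mean()

# hypothetical fiducial targets (mm) and transforms
targets = np.array([[10.0, 20.0, 30.0], [-15.0, 5.0, 12.0], [0.0, -8.0, 25.0]])
gold = np.eye(4)
estimated = np.eye(4); estimated[:3, 3] = [0.5, -0.3, 0.8]           # small translational error
print(f"TRE = {target_registration_error(targets, estimated, gold):.2f} mm")
```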

  17. The Effects of Personalized Practice Software on Learning Math Standards in the Third through Fifth Grades

    ERIC Educational Resources Information Center

    Gomez, Angela Nicole

    2012-01-01

    The purpose of this study was to investigate the effectiveness of "MathFacts in a Flash" software in helping students learn math standards. In each of their classes, the third-, fourth-, and fifth-grade students in a small private Roman Catholic school from the Pacific Northwest were randomly assigned either to a control group that used…

  18. WorkstationJ: workstation emulation software for medical image perception and technology evaluation research

    NASA Astrophysics Data System (ADS)

    Schartz, Kevin M.; Berbaum, Kevin S.; Caldwell, Robert T.; Madsen, Mark T.

    2007-03-01

    We developed image presentation software that mimics the functionality available in the clinic, but also records time-stamped, observer-display interactions and is readily deployable on diverse workstations making it possible to collect comparable observer data at multiple sites. Commercial image presentation software for clinical use has limited application for research on image perception, ergonomics, computer-aids and informatics because it does not collect observer responses, or other information on observer-display interactions, in real time. It is also very difficult to collect observer data from multiple institutions unless the same commercial software is available at different sites. Our software not only records observer reports of abnormalities and their locations, but also inspection time until report, inspection time for each computed radiograph and for each slice of tomographic studies, window/level, and magnification settings used by the observer. The software is a modified version of the open source ImageJ software available from the National Institutes of Health. Our software involves changes to the base code and extensive new plugin code. Our free software is currently capable of displaying computed tomography and computed radiography images. The software is packaged as Java class files and can be used on Windows, Linux, or Mac systems. By deploying our software together with experiment-specific script files that administer experimental procedures and image file handling, multi-institutional studies can be conducted that increase reader and/or case sample sizes or add experimental conditions.

  19. Computer systems and software description for Standard-E+ Hydrogen Monitoring System (SHMS-E+)

    SciTech Connect

    Tate, D.D.

    1997-05-01

    The primary function of the Standard-E+ Hydrogen Monitoring System (SHMS-E+) is to determine tank vapor space gas composition and gas release rate, and to detect gas release events. Characterization of the gas composition is needed for safety analyses. The lower flammability limit, as well as the peak burn temperature and pressure, are dependent upon the gas composition. If there is little or no knowledge about the gas composition, safety analyses utilize compositions that yield the worst case in a deflagration or detonation. Knowledge of the true composition could lead to reductions in the assumptions and therefore there may be a potential for a reduction in controls and work restrictions. Also, knowledge of the actual composition will be required information for the analysis that is needed to remove tanks from the Watch List. Similarly, the rate of generation and release of gases is required information for performing safety analyses, developing controls, designing equipment, and closing safety issues. This report outlines the computer system design layout description for the Standard-E+ Hydrogen Monitoring System.

  20. Standard and fenestrated endograft sizing in EVAR planning: Description and validation of a semi-automated 3D software.

    PubMed

    Macía, Iván; de Blas, Mariano; Legarreta, Jon Haitz; Kabongo, Luis; Hernández, Óscar; Egaña, José María; Emparanza, José Ignacio; García-Familiar, Ainhoa; Graña, Manuel

    2016-06-01

    An abdominal aortic aneurysm (AAA) is a pathological dilation of the abdominal aorta that may lead to a rupture with fatal consequences. Endovascular aneurysm repair (EVAR) is a minimally invasive surgical procedure consisting of the deployment and fixation of a stent-graft that isolates the damaged vessel wall from blood circulation. The technique requires adequate endovascular device sizing, which may be performed by vascular analysis and quantification on Computerized Tomography Angiography (CTA) scans. This paper presents a novel 3D CTA image-based software for AAA inspection and EVAR sizing, eVida Vascular, which allows fast and accurate 3D endograft sizing for standard and fenestrated endografts. We provide a description of the system and its innovations, including the underlying vascular image analysis and visualization technology, functional modules and user interaction. Furthermore, an experimental validation of the tool is described, assessing the degree of agreement with a commercial, clinically validated software, when comparing measurements obtained for standard endograft sizing in a group of 14 patients. PMID:25747803
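
    Agreement between two sizing tools, as in the validation described above, is commonly summarised with a Bland-Altman analysis (mean bias and 95% limits of agreement). The sketch below uses hypothetical paired diameter measurements and is a generic agreement calculation, not the study's actual statistical plan.

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two paired measurement series."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# hypothetical paired aortic-neck diameters (mm) from two sizing tools
new_tool = [22.1, 24.8, 26.3, 21.5, 28.0, 23.7]
reference = [22.4, 24.5, 26.8, 21.3, 27.6, 24.0]
bias, lo, hi = bland_altman(new_tool, reference)
print(f"bias={bias:.2f} mm, 95% limits of agreement [{lo:.2f}, {hi:.2f}] mm")
```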

  1. Design and validation of Segment - freely available software for cardiovascular image analysis

    PubMed Central

    2010-01-01

    Background Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Results Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http

  2. Automated facial coding software outperforms people in recognizing neutral faces as neutral from standardized datasets

    PubMed Central

    Lewinski, Peter

    2015-01-01

    Little is known about people’s accuracy of recognizing neutral faces as neutral. In this paper, I demonstrate the importance of knowing how well people recognize neutral faces. I contrasted human recognition scores of 100 typical, neutral front-up facial images with scores of an arguably objective judge – automated facial coding (AFC) software. I hypothesized that the software would outperform humans in recognizing neutral faces because of the inherently objective nature of computer algorithms. Results confirmed this hypothesis. I provided the first-ever evidence that computer software (90%) was more accurate in recognizing neutral faces than people were (59%). I posited two theoretical mechanisms, i.e., smile-as-a-baseline and false recognition of emotion, as possible explanations for my findings. PMID:26441761

  3. BioBrick assembly standards and techniques and associated software tools.

    PubMed

    Røkke, Gunvor; Korvald, Eirin; Pahr, Jarle; Oyås, Ove; Lale, Rahmi

    2014-01-01

    The BioBrick idea was developed to introduce the engineering principles of abstraction and standardization into synthetic biology. BioBricks are DNA sequences that serve a defined biological function and can be readily assembled with any other BioBrick parts to create new BioBricks with novel properties. In order to achieve this, several assembly standards can be used. Which assembly standards a BioBrick is compatible with depends on the prefix and suffix sequences surrounding the part. In this chapter, five of the most common assembly standards will be described, as well as some of the most used assembly techniques and cloning procedures, together with a presentation of the available software tools that can be used for deciding on the best method for assembling different BioBricks and for searching for BioBrick parts in the Registry of Standard Biological Parts database. PMID:24395353
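
    Part compatibility with an assembly standard largely reduces to which restriction sites must be absent from the part body. The sketch below checks a candidate sequence for internal sites; the four recognition sequences correspond to EcoRI, XbaI, SpeI and PstI, and treating exactly this set as forbidden is an illustrative assumption rather than a statement of any particular assembly RFC.

```python
# Recognition sites assumed (for illustration) to be forbidden inside the part body
FORBIDDEN_SITES = {
    "EcoRI": "GAATTC",
    "XbaI": "TCTAGA",
    "SpeI": "ACTAGT",
    "PstI": "CTGCAG",
}

def incompatible_sites(part_sequence: str) -> list:
    """Return the names of forbidden restriction sites found inside a candidate part."""
    seq = part_sequence.upper()
    return [name for name, site in FORBIDDEN_SITES.items() if site in seq]

# hypothetical part sequence containing an internal PstI site
part = "ATGGCTAGCTGCAGTTAAGGAGGTAA"
clashes = incompatible_sites(part)
print("compatible" if not clashes else f"internal sites found: {', '.join(clashes)}")
```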

  4. Software-Assisted Depth Analysis of Optic Nerve Stereoscopic Images in Telemedicine.

    PubMed

    Xia, Tian; Patel, Shriji N; Szirth, Ben C; Kolomeyer, Anton M; Khouri, Albert S

    2016-01-01

    Background. Software guided optic nerve assessment can assist in process automation and reduce interobserver disagreement. We tested depth analysis software (DAS) in assessing optic nerve cup-to-disc ratio (VCD) from stereoscopic optic nerve images (SONI) of normal eyes. Methods. In a prospective study, simultaneous SONI from normal subjects were collected during telemedicine screenings using a Kowa 3Wx nonmydriatic simultaneous stereoscopic retinal camera (Tokyo, Japan). VCD was determined from SONI pairs and proprietary pixel DAS (Kowa Inc., Tokyo, Japan) after disc and cup contour line placement. A nonstereoscopic VCD was determined using the right channel of a stereo pair. Mean, standard deviation, t-test, and the intraclass correlation coefficient (ICCC) were calculated. Results. 32 patients had mean age of 40 ± 14 years. Mean VCD on SONI was 0.36 ± 0.09, with DAS 0.38 ± 0.08, and with nonstereoscopic 0.29 ± 0.12. The difference between stereoscopic and DAS assisted was not significant (p = 0.45). ICCC showed agreement between stereoscopic and software VCD assessment. Mean VCD difference was significant between nonstereoscopic and stereoscopic (p < 0.05) and nonstereoscopic and DAS (p < 0.005) recordings. Conclusions. DAS successfully assessed SONI and showed a high degree of correlation to physician-determined stereoscopic VCD. PMID:27190507

  5. Software-Assisted Depth Analysis of Optic Nerve Stereoscopic Images in Telemedicine

    PubMed Central

    Xia, Tian; Patel, Shriji N.; Szirth, Ben C.

    2016-01-01

    Background. Software guided optic nerve assessment can assist in process automation and reduce interobserver disagreement. We tested depth analysis software (DAS) in assessing optic nerve cup-to-disc ratio (VCD) from stereoscopic optic nerve images (SONI) of normal eyes. Methods. In a prospective study, simultaneous SONI from normal subjects were collected during telemedicine screenings using a Kowa 3Wx nonmydriatic simultaneous stereoscopic retinal camera (Tokyo, Japan). VCD was determined from SONI pairs and proprietary pixel DAS (Kowa Inc., Tokyo, Japan) after disc and cup contour line placement. A nonstereoscopic VCD was determined using the right channel of a stereo pair. Mean, standard deviation, t-test, and the intraclass correlation coefficient (ICCC) were calculated. Results. 32 patients had mean age of 40 ± 14 years. Mean VCD on SONI was 0.36 ± 0.09, with DAS 0.38 ± 0.08, and with nonstereoscopic 0.29 ± 0.12. The difference between stereoscopic and DAS assisted was not significant (p = 0.45). ICCC showed agreement between stereoscopic and software VCD assessment. Mean VCD difference was significant between nonstereoscopic and stereoscopic (p < 0.05) and nonstereoscopic and DAS (p < 0.005) recordings. Conclusions. DAS successfully assessed SONI and showed a high degree of correlation to physician-determined stereoscopic VCD. PMID:27190507

  6. 25 CFR 547.8 - What are the minimum technical software standards applicable to Class II gaming systems?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum technical software standards... EQUIPMENT § 547.8 What are the minimum technical software standards applicable to Class II gaming systems... adopted by the tribe or TGRA; (ii) Display player interface identification; and (iii) Display...

  7. AnaSP: a software suite for automatic image analysis of multicellular spheroids.

    PubMed

    Piccinini, Filippo

    2015-04-01

    Today, more and more biological laboratories use 3D cell cultures and tissues grown in vitro as a 3D model of in vivo tumours and metastases. In the last decades, it has been extensively established that multicellular spheroids represent an efficient model to validate effects of drugs and treatments for human care applications. However, a lack of methods for quantitative analysis limits the usage of spheroids as models for routine experiments. Several methods have been proposed in the literature to perform high throughput experiments employing spheroids by automatically computing different morphological parameters, such as diameter, volume and sphericity. Nevertheless, these systems are typically grounded on expensive automated technologies that make the suggested solutions affordable only for a limited subset of laboratories, frequently those performing high content screening analysis. In this work we propose AnaSP, an open source software suitable for automatically estimating several morphological parameters of spheroids by simply analyzing brightfield images acquired with a standard widefield microscope, even one not equipped with a motorized stage. The experiments performed demonstrated the sensitivity and precision of the proposed segmentation method, and the excellent reliability of AnaSP in computing several morphological parameters of spheroids imaged in different conditions. AnaSP is distributed as an open source software tool. Its modular architecture and graphical user interface make it attractive also for researchers who do not work in areas of computer vision and suitable for both high content screenings and occasional spheroid-based experiments. PMID:25737369
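
    Morphological descriptors of the kind reported by such tools can be computed from a segmented spheroid mask with scikit-image. The sketch below uses the 2D circularity 4πA/P² as a simple proxy for sphericity on a synthetic mask; it is illustrative only and does not reproduce AnaSP's actual estimators.

```python
import numpy as np
from skimage import measure
from skimage.draw import ellipse

# hypothetical binary segmentation of one spheroid in a brightfield image
mask = np.zeros((200, 200), dtype=np.uint8)
rr, cc = ellipse(100, 100, 60, 45)
mask[rr, cc] = 1

props = measure.regionprops(measure.label(mask))[0]
area, perimeter = props.area, props.perimeter
equivalent_diameter = props.equivalent_diameter          # diameter of the circle with equal area
circularity = 4 * np.pi * area / perimeter ** 2          # 1.0 for a perfect circle

print(f"diameter ~ {equivalent_diameter:.1f} px, circularity = {circularity:.3f}")
```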

  8. ASAP (Automatic Software for ASL Processing): A toolbox for processing Arterial Spin Labeling images.

    PubMed

    Mato Abad, Virginia; García-Polo, Pablo; O'Daly, Owen; Hernández-Tamames, Juan Antonio; Zelaya, Fernando

    2016-04-01

    The method of Arterial Spin Labeling (ASL) has experienced a significant rise in its application to functional imaging, since it is the only technique capable of measuring blood perfusion in a truly non-invasive manner. Currently, there are no commercial packages for processing ASL data and there is no recognized standard for normalizing ASL data to a common frame of reference. This work describes a new Automated Software for ASL Processing (ASAP) that can automatically process several ASL datasets. ASAP includes functions for all stages of image pre-processing: quantification, skull-stripping, co-registration, partial volume correction and normalization. To assess the applicability and validity of the toolbox, this work shows its application in the study of hypoperfusion in a sample of healthy subjects at risk of progressing to Alzheimer's disease. ASAP requires limited user intervention, minimizing the possibility of random and systematic errors, and produces cerebral blood flow maps that are ready for statistical group analysis. The software is easy to operate and results in excellent quality of spatial normalization. The results found in this evaluation study are consistent with previous studies that find decreased perfusion in Alzheimer's patients in similar regions and demonstrate the applicability of ASAP. PMID:26612079

  9. GRO/EGRET data analysis software: An integrated system of custom and commercial software using standard interfaces

    NASA Technical Reports Server (NTRS)

    Laubenthal, N. A.; Bertsch, D.; Lal, N.; Etienne, A.; Mcdonald, L.; Mattox, J.; Sreekumar, P.; Nolan, P.; Fierro, J.

    1992-01-01

    The Energetic Gamma Ray Telescope Experiment (EGRET) on the Compton Gamma Ray Observatory has been in orbit for more than a year and is being used to map the full sky for gamma rays in a wide energy range from 30 to 20,000 MeV. Already these measurements have resulted in a wide range of exciting new information on quasars, pulsars, galactic sources, and diffuse gamma ray emission. The central part of the analysis is done with sky maps that typically cover an 80 x 80 degree section of the sky for an exposure time of several days. Specific software developed for this program generates the counts, exposure, and intensity maps. The analysis is done on a network of UNIX based workstations and takes full advantage of a custom-built user interface called X-dialog. The maps that are generated are stored in the FITS format for a collection of energies. These, along with similar diffuse emission background maps generated from a model calculation, serve as input to a maximum likelihood program that produces maps of likelihood with optional contours that are used to evaluate regions for sources. Likelihood also evaluates the background corrected intensity at each location for each energy interval from which spectra can be generated. Being in a standard FITS format permits all of the maps to be easily accessed by the full complement of tools available in several commercial astronomical analysis systems. In the EGRET case, IDL is used to produce graphics plots in two and three dimensions and to quickly implement any special evaluation that might be desired. Other custom-built software, such as the spectral and pulsar analyses, take advantage of the XView toolkit for display and Postscript output for the color hard copy. This poster paper outlines the data flow and provides examples of the user interfaces and output products. It stresses the advantages that are derived from the integration of the specific instrument-unique software and powerful commercial tools for graphics and
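
    The map arithmetic described above (intensity derived from counts and exposure maps stored as FITS files) is straightforward once the maps are loaded. A minimal sketch with `astropy.io.fits` follows; the file names are placeholders rather than actual EGRET products.

```python
import numpy as np
from astropy.io import fits

# placeholder file names for a counts map and the matching exposure map
with fits.open("counts_map.fits") as hdul:
    counts = hdul[0].data.astype(float)
with fits.open("exposure_map.fits") as hdul:
    exposure = hdul[0].data.astype(float)

# intensity map: counts per unit exposure, guarding against unexposed pixels
intensity = np.divide(counts, exposure, out=np.zeros_like(counts), where=exposure > 0)

fits.writeto("intensity_map.fits", intensity, overwrite=True)
print("intensity range:", intensity.min(), intensity.max())
```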

  10. Software interface for high-speed readout of particle detectors based on the CoaXPress communication standard

    NASA Astrophysics Data System (ADS)

    Hejtmánek, M.; Neue, G.; Voleš, P.

    2015-06-01

    This article is devoted to the software design and development of a high-speed readout application used for interfacing particle detectors via the CoaXPress communication standard. The CoaXPress provides an asymmetric high-speed serial connection over a single coaxial cable. It uses a widely available 75 Ω BNC standard and can operate in various modes with a data throughput ranging from 1.25 Gbps up to 25 Gbps. Moreover, it supports a low speed uplink with a fixed bit rate of 20.833 Mbps, which can be used to control and upload configuration data to the particle detector. The CoaXPress interface is an upcoming standard in medical imaging, therefore its usage promises long-term compatibility and versatility. This work presents an example of how to develop DAQ system for a pixel detector. For this purpose, a flexible DAQ card was developed using the XILINX Spartan 6 FPGA. The DAQ card is connected to the framegrabber FireBird CXP6 Quad, which is plugged in the PCI Express bus of the standard PC. The data transmission was performed between the FPGA and framegrabber card via the standard coaxial cable in communication mode with a bit rate of 3.125 Gbps. Using the Medipix2 Quad pixel detector, the framerate of 100 fps was achieved. The front-end application makes use of the FireBird framegrabber software development kit and is suitable for data acquisition as well as control of the detector through the registers implemented in the FPGA.

  11. NEIGHBOUR-IN: Image processing software for spatial analysis of animal grouping

    PubMed Central

    Caubet, Yves; Richard, Freddie-Jeanne

    2015-01-01

    Abstract Animal grouping is a very complex process that occurs in many species, involving many individuals under the influence of different mechanisms. To investigate this process, we have created image processing software, called NEIGHBOUR-IN, designed to analyse individuals’ coordinates belonging to up to three different groups. The software also includes statistical analysis and indexes to discriminate aggregates based on spatial localisation of individuals and their neighbours. After the description of the software, the indexes computed by the software are illustrated using both artificial patterns and case studies using the spatial distribution of woodlice. The added strengths of this software and methods are also discussed. PMID:26261448
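
    Aggregation indexes of this sort typically start from nearest-neighbour distances. The sketch below computes one classical example, the Clark-Evans index (observed mean nearest-neighbour distance divided by the value expected under complete spatial randomness over the same area); it is illustrative and not necessarily one of the indexes implemented in NEIGHBOUR-IN.

```python
import numpy as np
from scipy.spatial import cKDTree

def clark_evans(points, area):
    """Clark-Evans aggregation index: <1 clustered, ~1 random, >1 overdispersed."""
    tree = cKDTree(points)
    dist, _ = tree.query(points, k=2)              # k=2: nearest neighbour other than the point itself
    observed = dist[:, 1].mean()
    expected = 0.5 / np.sqrt(len(points) / area)   # expected mean NN distance under randomness
    return observed / expected

# hypothetical woodlouse coordinates (cm) inside a 50 x 50 cm arena
rng = np.random.default_rng(0)
points = rng.uniform(0, 50, size=(40, 2))
print(f"R = {clark_evans(points, area=50 * 50):.2f}")
```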

  12. Metrology Standards for Quantitative Imaging Biomarkers.

    PubMed

    Sullivan, Daniel C; Obuchowski, Nancy A; Kessler, Larry G; Raunig, David L; Gatsonis, Constantine; Huang, Erich P; Kondratovich, Marina; McShane, Lisa M; Reeves, Anthony P; Barboriak, Daniel P; Guimaraes, Alexander R; Wahl, Richard L

    2015-12-01

    Although investigators in the imaging community have been active in developing and evaluating quantitative imaging biomarkers (QIBs), the development and implementation of QIBs have been hampered by the inconsistent or incorrect use of terminology or methods for technical performance and statistical concepts. Technical performance is an assessment of how a test performs in reference objects or subjects under controlled conditions. In this article, some of the relevant statistical concepts are reviewed, methods that can be used for evaluating and comparing QIBs are described, and some of the technical performance issues related to imaging biomarkers are discussed. More consistent and correct use of terminology and study design principles will improve clinical research, advance regulatory science, and foster better care for patients who undergo imaging studies. PMID:26267831

  13. Quantitative Neuroimaging Software for Clinical Assessment of Hippocampal Volumes on MR Imaging

    PubMed Central

    Ahdidan, Jamila; Raji, Cyrus A.; DeYoe, Edgar A.; Mathis, Jedidiah; Noe, Karsten Ø.; Rimestad, Jens; Kjeldsen, Thomas K.; Mosegaard, Jesper; Becker, James T.; Lopez, Oscar

    2015-01-01

    Background: Multiple neurological disorders including Alzheimer’s disease (AD), mesial temporal sclerosis, and mild traumatic brain injury manifest with volume loss on brain MRI. Subtle volume loss is particularly seen early in AD. While prior research has demonstrated the value of this additional information from quantitative neuroimaging, very few applications have been approved for clinical use. Here we describe a US FDA cleared software program, NeuroreaderTM, for assessment of clinical hippocampal volume on brain MRI. Objective: To present the validation of hippocampal volumetrics on a clinical software program. Method: Subjects were drawn (n = 99) from the Alzheimer Disease Neuroimaging Initiative study. Volumetric brain MR imaging was acquired in both 1.5 T (n = 59) and 3.0 T (n = 40) scanners in participants with manual hippocampal segmentation. Fully automated hippocampal segmentation and measurement was done using a multiple atlas approach. The Dice Similarity Coefficient (DSC) measured the level of spatial overlap between NeuroreaderTM and gold standard manual segmentation from 0 to 1 with 0 denoting no overlap and 1 representing complete agreement. DSC comparisons between 1.5 T and 3.0 T scanners were done using standard independent samples T-tests. Results: In the bilateral hippocampus, mean DSC was 0.87 with a range of 0.78–0.91 (right hippocampus) and 0.76–0.91 (left hippocampus). Automated segmentation agreement with manual segmentation was essentially equivalent at 1.5 T (DSC = 0.879) versus 3.0 T (DSC = 0.872). Conclusion: This work provides a description and validation of a software program that can be applied in measuring hippocampal volume, a biomarker that is frequently abnormal in AD and other neurological disorders. PMID:26484924
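
    The DSC reported above has a one-line definition. The sketch below computes it from two binary hippocampal masks; the shapes and values are hypothetical, and this simply restates the overlap metric rather than anything specific to NeuroreaderTM.

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two boolean masks (1 = perfect overlap, 0 = none)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# hypothetical automated and manual masks
auto = np.zeros((40, 40, 40), bool);   auto[10:25, 12:26, 15:30] = True
manual = np.zeros((40, 40, 40), bool); manual[11:26, 12:26, 15:30] = True
print(f"DSC = {dice(auto, manual):.3f}")
```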

  14. Platform-independent software for medical image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Mancuso, Michael E.; Pathak, Sayan D.; Kim, Yongmin

    1997-05-01

    We have developed a software tool for image processing over the Internet. The tool is a general-purpose, easy-to-use, flexible, platform-independent image processing software package with functions most commonly used in medical image processing. It provides for processing of medical images located either remotely on the Internet or locally. The software was written in Java - the new programming language developed by Sun Microsystems. It was compiled and tested using Microsoft's Visual Java 1.0 and Microsoft's Just in Time Compiler 1.00.6211. The software is simple and easy to use. In order to use the tool, the user needs to download the software from our site before he/she runs it using any Java interpreter, such as those supplied by Sun, Symantec, Borland or Microsoft. Future versions of the operating systems supplied by Sun, Microsoft, Apple, IBM, and others will include Java interpreters. The software is then able to access and process any image on the Internet or on the local computer. Using a 512 X 512 X 8-bit image, a 3 X 3 convolution took 0.88 seconds on an Intel Pentium Pro PC running at 200 MHz with 64 Mbytes of memory. A window/level operation took 0.38 seconds while a 3 X 3 median filter took 0.71 seconds. These performance numbers demonstrate the feasibility of using this software interactively on desktop computers. Our software tool supports various image processing techniques commonly used in medical image processing and can run without the need of any specialized hardware. It can become an easily accessible resource over the Internet to promote the learning and understanding of image processing algorithms. Also, it could facilitate sharing of medical image databases and collaboration amongst researchers and clinicians, regardless of location.
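
    The operations timed above (3×3 convolution, window/level mapping, 3×3 median filtering) are standard building blocks. The sketch below shows equivalent operations with NumPy/SciPy, purely to make the operations concrete; it does not reproduce the Java tool described in the record.

```python
import numpy as np
from scipy import ndimage

image = np.random.randint(0, 256, (512, 512)).astype(float)    # hypothetical 512x512 8-bit image

smoothed = ndimage.convolve(image, np.full((3, 3), 1.0 / 9))   # 3x3 mean-kernel convolution
medianed = ndimage.median_filter(image, size=3)                # 3x3 median filter

# window/level: linearly map [level - window/2, level + window/2] to the display range [0, 255]
level, window = 128.0, 64.0
display = np.clip((image - (level - window / 2)) / window, 0, 1) * 255
```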

  15. Integration of XNAT/PACS, DICOM, and research software for automated multi-modal image analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yurui; Burns, Scott S.; Lauzon, Carolyn B.; Fong, Andrew E.; James, Terry A.; Lubar, Joel F.; Thatcher, Robert W.; Twillie, David A.; Wirt, Michael D.; Zola, Marc A.; Logan, Bret W.; Anderson, Adam W.; Landman, Bennett A.

    2013-03-01

    Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.

  16. Wavelet/scalar quantization compression standard for fingerprint images

    SciTech Connect

    Brislawn, C.M.

    1996-06-12

    US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
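
    The core of the method described above, a wavelet subband decomposition followed by uniform scalar quantization, can be sketched with PyWavelets. This is a didactic illustration of the principle only; the wavelet, bin widths, subband structure and entropy coding of the actual FBI WSQ specification are not reproduced here.

```python
import numpy as np
import pywt

image = np.random.randint(0, 256, (512, 512)).astype(float)   # stand-in for a 500 dpi fingerprint scan

# multi-level 2D wavelet decomposition (the specific wavelet and level are illustrative choices)
coeffs = pywt.wavedec2(image, wavelet="bior4.4", level=4)

def quantize(c, step=8.0):
    """Uniform scalar quantization: map coefficients to integer bin indices."""
    return np.round(c / step)

def dequantize(q, step=8.0):
    return q * step

q_coeffs = [quantize(coeffs[0])] + [tuple(quantize(d) for d in detail) for detail in coeffs[1:]]
r_coeffs = [dequantize(q_coeffs[0])] + [tuple(dequantize(d) for d in detail) for detail in q_coeffs[1:]]

reconstructed = pywt.waverec2(r_coeffs, wavelet="bior4.4")
rms = np.sqrt(np.mean((image - reconstructed[:512, :512]) ** 2))
print("RMS error after quantization:", rms)
```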

  17. Software engineering methods and standards used in the Sloan Digital Sky Survey

    NASA Astrophysics Data System (ADS)

    Petravick, Don; Berman, Eileen; Gurbani, Vijay; Nicinski, Tom; Pordes, Ruth; Rechenmacher, Ron; Sergey, Gary; Lupton, Robert H.

    1995-04-01

    We present an integrated science software development environment, code maintenance and support system for the Sloan Digital Sky Survey (SDSS) now being actively used throughout the collaboration. The SDSS is a collaboration between the Fermi National Accelerator Laboratory, the Institute for Advanced Study, The Japan Promotion Group, Johns Hopkins University, Princeton University, The United States Naval Observatory, the University of Chicago, and the University of Washington. The SDSS will produce a five-color imaging survey of 1/4 of the sky about the north galactic cap and image 10^8 stars, 10^8 galaxies, and 10^5 quasars. Spectra will be obtained for 10^6 galaxies and 10^5 quasars as well. The survey will utilize a dedicated 2.5 meter telescope at the Apache Point Observatory in New Mexico. Its imaging camera will hold 54 Charge-Coupled Devices (CCDs). The SDSS will take five years to complete, acquiring well over 12 TB of data.

  18. Xmipp 3.0: an improved software suite for image processing in electron microscopy.

    PubMed

    de la Rosa-Trevín, J M; Otón, J; Marabini, R; Zaldívar, A; Vargas, J; Carazo, J M; Sorzano, C O S

    2013-11-01

    Xmipp is a specialized software package for image processing in electron microscopy, mainly focused on 3D reconstruction of macromolecules through single-particle analysis. In this article we present Xmipp 3.0, a major release which introduces several improvements and new developments over the previous version. A central improvement is the concept of a project that stores the entire processing workflow from data import to final results. It is now possible to monitor, reproduce and restart all computing tasks as well as graphically explore the complete set of interrelated tasks associated to a given project. Other graphical tools have also been improved such as data visualization, particle picking and parameter "wizards" that allow the visual selection of some key parameters. Many standard image formats are transparently supported for input/output from all programs. Additionally, results have been standardized, facilitating the interoperation between different Xmipp programs. Finally, as a result of a large code refactoring, the underlying C++ libraries are better suited for future developments and all code has been optimized. Xmipp is an open-source package that is freely available for download from: http://xmipp.cnb.csic.es. PMID:24075951

  19. The -mdoc macro package: A software tool to support computer documentation standards

    SciTech Connect

    Sanders, C.E.

    1987-09-16

    At Los Alamos National Laboratory a small staff of writers and word processors in the Computer Documentation Group is responsible for producing computer documentation for the over 8000 users of the Laboratory's computer network. The -mdoc macro package was developed as a software tool to support that effort. The -mdoc macro package is used with the NROFF/TROFF document preparation system on the UNIX operating system. The -mdoc macro package incorporates the standards for computer documentation at Los Alamos that were established by the writers. Use of the -mdoc macro package has freed the staff of programming format details, allowing writers to concentrate on content of documents and word processors to produce documents in a timely manner. It is an easy-to-use software tool that adapts to changing skills, needs, and technology. 5 refs.

  20. A European de facto standard for image folders applied to telepathology and teaching.

    PubMed

    Klossa, J; Cordier, J C; Flandrin, G; Got, C; Hémet, J

    1998-02-01

    Since 1980, French pathologists at ADICAP (Association pour le Développement de l'Informatique en Cytologie et en Anatomie Pathologique) have created a common language code allowing the use of computers for routine applications. This code permitted the production of an associated exhaustive image bank of approximately 30,000 images. This task involved many specialists, necessitating the definition of specific processes for security and simplicity of data handling. In particular, it has been necessary to develop image communication. To achieve that goal, it was necessary to define a folder associating textual information with images. That was done through the contribution of several industrial software providers. Consequently, this folder, using a common packaging standard, allowed any pathologist access to images, codified data and clinical information. Accessing folders has been made easy by launching a Web server at CRIHAN under the supervision of ADICAP. An ADICAP software user may not only browse through the folders but may also import them into their own system and produce new folders. Today more than a hundred users in France and in foreign countries are able to provide diagnostic advice and also referential products useful for further education and quality control. The next challenge is the development of this preliminary de facto approach toward an internationally accepted standard suited for morphological image exchange. PMID:9600422

  1. JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age (Invited)

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Dimitoglou, G.; Langenberg, M.; Pagel, S.; Dau, A.; Nuhn, M.; Garcia Ortiz, J. P.; Dietert, H.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2010-12-01

    The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data is bound to be accessible only from a few repositories and users will have to deal with data sets effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community.

  2. An image-based software tool for screening retinal fundus images using vascular morphology and network transport analysis

    NASA Astrophysics Data System (ADS)

    Clark, Richard D.; Dickrell, Daniel J.; Meadows, David L.

    2014-03-01

    As the number of digital retinal fundus images taken each year grows at an increasing rate, there exists a similarly increasing need for automatic eye disease detection through image-based analysis. A new method has been developed for classifying standard color fundus photographs into both healthy and diseased categories. This classification was based on the calculated network fluid conductance, a function of the geometry and connectivity of the vascular segments. To evaluate the network resistance, the retinal vasculature was first manually separated from the background to ensure an accurate representation of the geometry and connectivity. The arterial and venous networks were then semi-automatically separated into two separate binary images. The connectivity of the arterial network was then determined through a series of morphological image operations. The network comprised of segments of vasculature and points of bifurcation, with each segment having a characteristic geometric and fluid properties. Based on the connectivity and fluid resistance of each vascular segment, an arterial network flow conductance was calculated, which described the ease with which blood can pass through a vascular system. In this work, 27 eyes (13 healthy and 14 diabetic) from patients roughly 65 years in age were evaluated using this methodology. Healthy arterial networks exhibited an average fluid conductance of 419 ± 89 μm3/mPa-s while the average network fluid conductance of the diabetic set was 165 ± 87 μm3/mPa-s (p < 0.001). The results of this new image-based software demonstrated an ability to automatically, quantitatively and efficiently screen diseased eyes from color fundus imagery.
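
    The network conductance used above treats each vessel segment as a Poiseuille element defined by its geometry. The sketch below shows that per-segment calculation and one way of combining segments along a single flow path; the viscosity value and the series combination are simplifying assumptions for illustration, not the paper's full network solution.

```python
import numpy as np

MU = 3.5e-3   # assumed blood viscosity, Pa*s (illustrative value)

def segment_conductance(radius_um, length_um):
    """Poiseuille conductance g = pi*r^4 / (8*mu*L) of one vessel segment, in um^3/(mPa*s)."""
    mu_mpa_s = MU * 1e3            # Pa*s -> mPa*s
    return np.pi * radius_um ** 4 / (8.0 * mu_mpa_s * length_um)

def series_conductance(conductances):
    """Segments in series combine like resistors in series: G = 1 / sum(1/g_i)."""
    return 1.0 / np.sum(1.0 / np.asarray(conductances, float))

# hypothetical arterial path: (radius um, length um) per segment
path = [(45.0, 800.0), (38.0, 650.0), (30.0, 900.0)]
g_segments = [segment_conductance(r, l) for r, l in path]
print(f"path conductance ~ {series_conductance(g_segments):.1f} um^3/(mPa*s)")
```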

  3. Self-contained off-line media for exchanging medical images using DICOM-compliant standard

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Ligier, Yves; Rosset, Antoine; Staub, Jean-Christophe; Logean, Marianne; Girard, Christian

    2000-05-01

    The goal of this project is to develop and implement off-line DICOM-compliant CD ROMs that contain the necessary software tools for displaying the images and related data on any personal computer. We implemented a hybrid recording technique allowing CD-ROMs for Macintosh and Windows platforms to be fully DICOM compliant. A public domain image viewing program (OSIRIS) is recorded on the CD for display and manipulation of sequences of images. The content of the disk is summarized in a standard HTML file that can be displayed on any web-browser. This allows the images to be easily accessible on any desktop computer, while being also readable on high-end commercial DICOM workstations. The HTML index page contains a set of thumbnails and full-size JPEG images that are directly linked to the original high-resolution DICOM images through an activation of the OSIRIS program. Reports and associated text document are also converted to HTML format to be easily displayable directly within the web browser. This portable solution provides a convenient and low cost alternative to hard copy images for exchange and transmission of images to referring physicians and external care providers without the need for any specialized software or hardware.

  4. SPRECware: software tools for Standard PREanalytical Code (SPREC) labeling - effective exchange and search of stored biospecimens.

    PubMed

    Nanni, Umberto; Betsou, Fotini; Riondino, Silvia; Rossetti, Luisa; Spila, Antonella; Valente, Maria Giovanna; Della-Morte, David; Palmirotta, Raffaele; Roselli, Mario; Ferroni, Patrizia; Guadagni, Fiorella

    2012-01-01

    Biobanks provide stored material to basic, translational, and epidemiological research and this material should be transferred without institute-dependent intrinsic bias. The ISBER Biospecimen Science Working Group has released a "Standard PREanalytical Code" (SPREC), which is a proposal for a standard coding of the preanalytical options that have been adopted in order to track and make explicit the preanalytical variations in the collection, preparation, and storage of specimens. In this paper we address 2 issues arising in any biobank or biolaboratory aiming at adopting SPREC: (i) reducing the burden required to adopt this standard coding, and (ii) maximize the immediate benefits of this adoption by providing a free, dedicated software tool. We propose SPRECware, a vision encompassing tools and solutions for the best exploitation of SPREC based on information technology (www.sprecware.org). As a first step, we make available SPRECbase, a software tool useful for generating, storing, managing, and exchanging SPREC-related information associated to specimens. Adopting SPREC is useful both for internal purposes (such as finding the samples having some given preanalytical features), and for exchanging the preanalytical information associated to biological samples between Laboratory Information Systems. In case of a common adoption of this coding, it would be easy to find out whether and where, among the participating Biological Resource Centers, the specimens for a given study are available in order to carry out a planned experiment. PMID:23032579
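
    A SPREC code is a short delimited string in which each position records one preanalytical option. The sketch below splits such a code into labelled fields; the field labels are illustrative paraphrases of the fluid-sample elements and the example code is hypothetical, so both should be checked against the published SPREC tables rather than taken as definitive.

```python
# Field labels are illustrative paraphrases of the fluid-sample SPREC elements, not the official wording.
FLUID_FIELDS = [
    "sample type",
    "primary container",
    "pre-centrifugation delay",
    "first centrifugation",
    "second centrifugation",
    "post-centrifugation delay",
    "long-term storage",
]

def parse_sprec(code: str) -> dict:
    """Split a hyphen-delimited SPREC code into a field -> value mapping."""
    parts = code.strip().upper().split("-")
    if len(parts) != len(FLUID_FIELDS):
        raise ValueError(f"expected {len(FLUID_FIELDS)} elements, got {len(parts)}")
    return dict(zip(FLUID_FIELDS, parts))

# hypothetical code string used only to exercise the parser
for field, value in parse_sprec("SER-SST-B-C-N-B-A").items():
    print(f"{field:28s}{value}")
```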

  5. Implementation of a real-time software-only image smoothing filter for a block-transform video codec

    NASA Astrophysics Data System (ADS)

    Miaw, Wesley F.; Rowe, Lawrence A.

    2003-05-01

    The JPEG compression standard is a popular image format. However, at high compression ratios JPEG compression, which uses block-transform coding, can produce blocking artifacts, or artificially introduced edges within the image. Several post-processing algorithms have been developed to remove these artifacts. This paper describes an implementation of a post-processing algorithm developed by Ramchandran, Chou, and Crouse (RCC) which is fast enough for real-time software-only video applications. The original implementation of the RCC algorithm involved calculating thresholds to identify artificial edges. These calculations proved too expensive for use in real-time software-only applications. We replaced these calculations with a linear scale approximating ideal threshold values based on a combination of peak signal-to-noise ratio calculations and subjective visual quality. The resulting filter implementation is available in the widely-deployed Open Mash streaming media toolkit.
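
    The sketch below illustrates the general idea of threshold-based deblocking with a linear quality-to-threshold scale; it is a simplified stand-in, not the RCC algorithm or the Open Mash implementation, and the block size, threshold range and function names are assumptions.

```python
# Minimal deblocking sketch (NOT the RCC filter): smooth 8x8 block
# boundaries only where the step across the boundary falls below a
# threshold taken from a simple linear scale of a "quality" parameter.
# Operates on a 2-D grayscale image and only on vertical boundaries.
import numpy as np

def threshold_from_quality(quality, t_min=2.0, t_max=20.0):
    """Linear scale: higher quality -> smaller threshold (less smoothing)."""
    quality = float(np.clip(quality, 0.0, 1.0))
    return t_max - quality * (t_max - t_min)

def deblock(image, quality=0.5, block=8):
    """Average pixel pairs straddling vertical block boundaries when the
    step between them is small enough to be an artificial edge."""
    out = image.astype(np.float64).copy()
    t = threshold_from_quality(quality)
    for col in range(block, out.shape[1], block):
        left, right = out[:, col - 1], out[:, col]
        step = np.abs(left - right)
        mask = step < t                        # likely blocking artifact
        mean = 0.5 * (left + right)
        out[:, col - 1] = np.where(mask, mean, left)
        out[:, col] = np.where(mask, mean, right)
    return out.astype(image.dtype)
```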

  6. Upgrade and standardization of real-time software for telescope systems at the Gemini telescopes

    NASA Astrophysics Data System (ADS)

    Rambold, William N.; Gigoux, Pedro; Urrutia, Cristian; Ebbers, Angelic; Taylor, Philip; Rippa, Mathew J.; Rojas, Roberto; Cumming, Tom

    2014-07-01

    The real-time control systems for the Gemini Telescopes were designed and built in the 1990s using state-of-the-art software tools and operating systems of that time. Because these systems are in use every night, they have not been kept up to date and are now obsolete and very labor intensive to support. Gemini is currently engaged in a major upgrade of its telescope control systems. This paper reviews the studies performed to select and develop a new standard operating environment for Gemini real-time systems and the work performed so far in implementing it.

  7. IMAGE information monitoring and applied graphics software environment. Volume 1. Executive overview

    SciTech Connect

    Hallam, J.W.; Ng, K.B.; Upham, G.L.

    1986-09-01

    The EPRI Information Monitoring and Applied Graphics Environment (IMAGE) system is designed for 'fast proto-typing' of advanced concepts for computer-aided plant operations tools. It is a flexible software system which can be used for rapidly creating, dynamically driving and evaluating advanced operator aid displays. The software is written to be both host computer and graphic device independent. This four volume report includes an Executive Overview of the IMAGE package (Volume 1), followed by Software Description (Volume II), User's Guide (Volume III), and Description of Example Applications (Volume IV).

  8. IMAGE information monitoring and applied graphics software environment. Volume 4. Applications description

    SciTech Connect

    Hallam, J.W.; Ng, K.B.; Upham, G.L.

    1986-09-01

    The EPRI Information Monitoring and Applied Graphics Environment (IMAGE) system is designed for 'fast proto-typing' of advanced concepts for computer-aided plant operations tools. It is a flexible software system which can be used for rapidly creating, dynamically driving and evaluating advanced operator aid displays. The software is written to be both host computer and graphic device independent. This four volume report includes an Executive Overview of the IMAGE package (Volume 1), followed by Software Description (Volume II), User's Guide (Volume III), and Description of Example Applications (Volume IV).

  9. IMAGE information monitoring and applied graphics software environment. Volume 3. User's guide

    SciTech Connect

    Hallam, J.W.; Ng, K.B.; Upham, G.L.

    1986-09-01

    The EPRI Information Monitoring and Applied Graphics Environment (IMAGE) system is designed for 'fast proto-typing' of advanced concepts for computer-aided plant operations tools. It is a flexible software system which can be used for rapidly creating, dynamically driving and evaluating advanced operator aid displays. The software is written to be host computer and graphic device independent. This four volume report includes an Executive Overview of the IMAGE package (Volume 1), followed by Software Description (Volume II), User's Guide (Volume III), and Description of Example Applications (Volume IV).

  10. On the use of standards for microarray lossless image compression.

    PubMed

    Pinho, Armando J; Paiva, António R C; Neves, António J R

    2006-03-01

    The interest in methods that are able to efficiently compress microarray images is relatively new. This is not surprising, since the appearance and fast growth of the technology responsible for producing these images is also quite recent. In this paper, we present a set of compression results obtained with 49 publicly available images, using three image coding standards: lossless JPEG2000, JBIG, and JPEG-LS. We concluded that the compression technology behind JBIG seems to be the one that offers the best combination of compression efficiency and flexibility for microarray image compression. PMID:16532784

  11. A tutorial for software development in quantitative proteomics using PSI standard formats.

    PubMed

    Gonzalez-Galarza, Faviel F; Qi, Da; Fan, Jun; Bessant, Conrad; Jones, Andrew R

    2014-01-01

    The Human Proteome Organisation - Proteomics Standards Initiative (HUPO-PSI) has been working for ten years on the development of standardised formats that facilitate data sharing and public database deposition. In this article, we review three HUPO-PSI data standards - mzML, mzIdentML and mzQuantML, which can be used to design a complete quantitative analysis pipeline in mass spectrometry (MS)-based proteomics. In this tutorial, we briefly describe the content of each data model, sufficient for bioinformaticians to devise proteomics software. We also provide guidance on the use of recently released application programming interfaces (APIs) developed in Java for each of these standards, which makes it straightforward to read and write files of any size. We have produced a set of example Java classes and a basic graphical user interface to demonstrate how to use the most important parts of the PSI standards, available from http://code.google.com/p/psi-standard-formats-tutorial. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. PMID:23584085
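
    The APIs described in the tutorial are Java libraries; as a language-neutral illustration of working with one of the formats, the sketch below streams an mzML document with Python's standard library and counts its spectra. It assumes only that spectra appear as namespaced <spectrum> elements, and the file name is hypothetical.

```python
# Illustrative only (not the PSI Java APIs described in the tutorial):
# stream an mzML file with the standard library and count its spectra.
import xml.etree.ElementTree as ET

def count_spectra(mzml_path):
    count = 0
    for _, elem in ET.iterparse(mzml_path, events=("end",)):
        # mzML is namespaced, so compare only the local element name.
        if elem.tag.rsplit("}", 1)[-1] == "spectrum":
            count += 1
        elem.clear()  # keep memory use flat for large files
    return count

if __name__ == "__main__":
    print(count_spectra("example.mzML"))  # hypothetical file name
```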

  12. Development of software standards for advanced transportation control systems. Final report. Volume 1. A model for roadway traffic control software

    SciTech Connect

    Bullock, D.; Hendrickson, C.

    1993-06-01

    A systematic approach to traffic engineering software development could provide significant advantages with regard to software capability, flexibility and maintenance. Improved traffic controllers will likely be essential for many of the proposed intelligent vehicle highway systems (IVHS) applications. The report proposes a computable language, called TCBLKS (Traffic Control BLocKS), that could provide the foundation for constructing real time traffic engineering software. This computable language is designed to be configured by a graphical user interface that does not require extensive software engineering training to use, yet provides much more flexibility and capability than possible by simply changing program parameters. The model is based upon the function block metaphor commonly used for constructing robust and efficient real time industrial control systems.

  13. ESO C Library for an Image Processing Software Environment (eclipse)

    NASA Astrophysics Data System (ADS)

    Devillard, N.

    Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry, image-space resampling, image combination, and cube stacking. It also contains support for mathematical tools like random number generation, FFT, curve fitting, matrices, fast median computation, and point-pattern matching. The main feature of this library is its ability to handle large amounts of input data (up to 2 GB in the current version) regardless of the amount of memory and swap available on the local machine. Another feature is the very high speed allowed by optimized C, making it an ideal base tool for programming efficient number-crunching applications, e.g., on parallel (Beowulf) systems. Running on all Unix-like platforms, eclipse is portable. A high-level interface to Python is foreseen that would allow programmers to prototype their applications much faster than through C programs.

  14. Eclipse: ESO C Library for an Image Processing Software Environment

    NASA Astrophysics Data System (ADS)

    Devillard, Nicolas

    2011-12-01

    Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry, image-space resampling, image combination, and cube stacking. It also contains support for mathematical tools like random number generation, FFT, curve fitting, matrices, fast median computation, and point-pattern matching. The main feature of this library is its ability to handle large amounts of input data (up to 2GB in the current version) regardless of the amount of memory and swap available on the local machine. Another feature is the very high speed allowed by optimized C, making it an ideal base tool for programming efficient number-crunching applications, e.g., on parallel (Beowulf) systems.

  15. Polarization information processing and software system design for simultaneously imaging polarimetry

    NASA Astrophysics Data System (ADS)

    Wang, Yahui; Liu, Jing; Jin, Weiqi; Wen, Renjie

    2015-08-01

    Simultaneous imaging polarimetry can realize real-time polarization imaging of a dynamic scene and has wide application prospects. This paper first briefly describes the design of a double-separate Wollaston prism simultaneous imaging polarimeter, and then focuses on the polarization information processing methods and software system design for that instrument. The polarization information processing methods consist of adaptive image segmentation, high-accuracy image registration, and instrument matrix calibration. Morphological image processing (dilation) was used for image segmentation; the accuracy of image registration can reach 0.1 pixel based on spatial- and frequency-domain cross-correlation; and instrument matrix calibration adopted a four-point calibration method. The software system was implemented in C++ under Windows and provides synchronous polarization image acquisition and storage, image processing, and polarization information extraction and display. Polarization data obtained with the designed polarimeter show that the processing methods and software system effectively perform real-time measurement of the four Stokes parameters of a scene and improve the polarization detection accuracy.
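
    For reference, the textbook estimate of the linear Stokes parameters from four registered analyzer images (0°, 45°, 90°, 135°) is sketched below; it assumes ideal polarizers and therefore stands in for, rather than reproduces, the instrument-matrix calibration described in the paper.

```python
# Textbook estimate of the linear Stokes parameters from four registered
# intensity images at analyzer angles 0, 45, 90 and 135 degrees.  This is
# the ideal-polarizer case; an instrument-matrix calibration would replace
# these fixed coefficients.
import numpy as np

def stokes_from_four(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)                 # total intensity
    s1 = i0 - i90                                      # horizontal vs vertical
    s2 = i45 - i135                                    # +45 vs -45 degrees
    dolp = np.hypot(s1, s2) / np.maximum(s0, 1e-12)    # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)                     # angle of polarization (rad)
    return s0, s1, s2, dolp, aop
```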

  16. Do we really need standards in digital image management?

    PubMed Central

    Ho, ELM

    2008-01-01

    Convention dictates that standards are a necessity rather than a luxury. Standards are supposed to improve the exchange of health and image information, resulting in improved quality and efficiency of patient care. True standardisation is some time away yet, as barriers exist with evolving equipment, storage formats and even the standards themselves. The explosive growth in the size and complexity of images, such as those generated by multislice computed tomography, has driven the need for digital image management, created problems of storage space and costs, and created the challenge of achieving adequate speed for transmitting, accessing and retrieving image data. The search for a suitable and practical format for storing the data without loss of information or medico-legal implications has become a necessity and a matter of ‘urgency’. Existing standards are either open or proprietary and must comply with local, regional or national laws. Current examples include the Picture Archiving and Communications System (PACS); Digital Imaging and Communications in Medicine (DICOM); Health Level 7 (HL7) and Integrating the Healthcare Enterprise (IHE). Issues in digital image management can be categorised as operational, procedural, technical and administrative. Standards must stay focussed on the ultimate goal – that is, improved patient care worldwide. PMID:21611012

  17. The FBI compression standard for digitized fingerprint images

    SciTech Connect

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.; Hopper, T.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
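
    The sketch below illustrates only the underlying idea of the wavelet/scalar quantization approach (subband decomposition followed by uniform scalar quantization and reconstruction); it is not the FBI WSQ codec, and the wavelet choice, decomposition level and quantization step are assumptions. It relies on the third-party PyWavelets package.

```python
# Toy illustration of wavelet/scalar quantization (NOT the FBI WSQ codec):
# decompose an image into wavelet subbands, quantize each subband with a
# uniform step, then reconstruct.  Requires the PyWavelets package.
import numpy as np
import pywt

def wsq_like_roundtrip(image, wavelet="bior4.4", levels=3, step=8.0):
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=levels)
    quantized = [np.round(coeffs[0] / step)]             # approximation band
    for (ch, cv, cd) in coeffs[1:]:                      # detail bands
        quantized.append(tuple(np.round(c / step) for c in (ch, cv, cd)))
    # Dequantize and invert the transform (the lossy reconstruction).
    dequant = [quantized[0] * step]
    dequant += [tuple(c * step for c in band) for band in quantized[1:]]
    return pywt.waverec2(dequant, wavelet)
```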

  18. BIRP: Software for interactive search and retrieval of image engineering data

    NASA Technical Reports Server (NTRS)

    Arvidson, R. E.; Bolef, L. K.; Guinness, E. A.; Norberg, P.

    1980-01-01

    Better Image Retrieval Programs (BIRP), a set of programs for interactively sorting through and displaying a database, such as engineering data for images acquired by spacecraft, is described. An overview of the philosophy of BIRP design, the structure of BIRP data files, and examples that illustrate the capabilities of the software are provided.

  19. Software for Analyzing Sequences of Flow-Related Images

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2004-01-01

    Spotlight is a computer program for analysis of sequences of images generated in combustion and fluid physics experiments. Spotlight can perform analysis of a single image in an interactive mode or a sequence of images in an automated fashion. The primary type of analysis is tracking of positions of objects over sequences of frames. Features and objects that are typically tracked include flame fronts, particles, droplets, and fluid interfaces. Spotlight automates the analysis of object parameters, such as centroid position, velocity, acceleration, size, shape, intensity, and color. Images can be processed to enhance them before statistical and measurement operations are performed. An unlimited number of objects can be analyzed simultaneously. Spotlight saves results of analyses in a text file that can be exported to other programs for graphing or further analysis. Spotlight is a graphical-user-interface-based program that at present can be executed on Microsoft Windows and Linux operating systems. A version that runs on Macintosh computers is being considered.

  20. A Review of Diffusion Tensor Magnetic Resonance Imaging Computational Methods and Software Tools

    PubMed Central

    Hasan, Khader M.; Walimuni, Indika S.; Abid, Humaira; Hahn, Klaus R.

    2010-01-01

    In this work we provide an up-to-date short review of computational magnetic resonance imaging (MRI) and software tools that are widely used to process and analyze diffusion-weighted MRI data. A review of different methods used to acquire, model and analyze diffusion-weighted imaging data (DWI) is first provided with focus on diffusion tensor imaging (DTI). The major preprocessing, processing and post-processing procedures applied to DTI data are discussed. A list of freely available software packages to analyze diffusion MRI data is also provided. PMID:21087766

  1. A review of diffusion tensor magnetic resonance imaging computational methods and software tools.

    PubMed

    Hasan, Khader M; Walimuni, Indika S; Abid, Humaira; Hahn, Klaus R

    2011-12-01

    In this work we provide an up-to-date short review of computational magnetic resonance imaging (MRI) and software tools that are widely used to process and analyze diffusion-weighted MRI data. A review of different methods used to acquire, model and analyze diffusion-weighted imaging data (DWI) is first provided with focus on diffusion tensor imaging (DTI). The major preprocessing, processing and post-processing procedures applied to DTI data are discussed. A list of freely available software packages to analyze diffusion MRI data is also provided. PMID:21087766

  2. Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing

    NASA Technical Reports Server (NTRS)

    Logan, Thomas L.; Bryant, Nevin A.

    1987-01-01

    The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Imaged Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of the VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between the VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.

  3. The design of real time infrared image generation software based on Creator and Vega

    NASA Astrophysics Data System (ADS)

    Wang, Rui-feng; Wu, Wei-dong; Huo, Jun-xiu

    2013-09-01

    Considering the requirements for high realism and real-time performance of dynamic infrared imagery in infrared image simulation, a method for designing a real-time infrared image simulation application on the VC++ platform is proposed, based on the visual simulation software Creator and Vega. The functions of Creator are briefly introduced, and the main features of the Vega development environment are analyzed. Methods for infrared modeling of targets and backgrounds are presented; the design flow chart of the real-time IR image generation software and the functions of the TMM Tool, the MAT Tool, and the sensor module are explained; and the real-time performance of the software is addressed.

  4. Reliability evaluation of I-123 ADAM SPECT imaging using SPM software and AAL ROI methods

    NASA Astrophysics Data System (ADS)

    Yang, Bang-Hung; Tsai, Sung-Yi; Wang, Shyh-Jen; Su, Tung-Ping; Chou, Yuan-Hwa; Chen, Chia-Chieh; Chen, Jyh-Cheng

    2011-08-01

    Serotonin levels are regulated by the serotonin transporter (SERT), a decisive protein in the regulation of the serotonin neurotransmission system. Many psychiatric disorders and therapies are also related to the concentration of cerebral serotonin. I-123 ADAM is a novel radiopharmaceutical for imaging SERT in the brain. The aim of this study was to measure the reliability of SERT densities in healthy volunteers using the automated anatomical labeling (AAL) method. Furthermore, we also used statistical parametric mapping (SPM) in a voxel-by-voxel analysis to find differences in cortex between test and retest I-123 ADAM single photon emission computed tomography (SPECT) images. Twenty-one healthy volunteers were scanned twice with SPECT at 4 h after intravenous administration of 185 MBq of 123I-ADAM. The image matrix size was 128×128 and the pixel size was 3.9 mm. All images were reconstructed with a filtered back-projection (FBP) algorithm. Region of interest (ROI) definition was performed based on the AAL brain template in the PMOD version 2.95 software package. ROI demarcations were placed on the midbrain, pons, striatum, and cerebellum. All images were spatially normalized to the SPECT MNI (Montreal Neurological Institute) templates supplied with SPM2, and each image was transformed into standard stereotactic space matched to the Talairach and Tournoux atlas. Differences across scans were then statistically estimated in a voxel-by-voxel analysis using a paired t-test (population main effect: 2 cond's, 1 scan/cond.), which was applied to compare the concentration of SERT between the test and retest cerebral scans. The average specific uptake ratio (SUR: target/cerebellum-1) of 123I-ADAM binding to SERT was 1.78±0.27 in the midbrain, 1.21±0.53 in the pons, and 0.79±0.13 in the striatum. The Cronbach's α of the intra-class correlation coefficient (ICC) was 0.92. In addition, no statistically significant difference in any cerebral area was found using the SPM2 analysis. This finding might help us
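
    The specific uptake ratio quoted above is a simple ratio measure; a minimal sketch of its computation from a SPECT volume and two ROI masks is given below, with the mask variables assumed to come from the AAL template.

```python
# Minimal sketch of the specific uptake ratio reported in the study:
# SUR = mean counts in target ROI / mean counts in cerebellum ROI - 1.
# The ROI masks are assumed to be boolean arrays derived from the AAL template.
import numpy as np

def specific_uptake_ratio(spect_volume, target_mask, cerebellum_mask):
    target = spect_volume[target_mask].mean()
    reference = spect_volume[cerebellum_mask].mean()
    return target / reference - 1.0
```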

  5. Software optimization for electrical conductivity imaging in polycrystalline diamond cutters

    SciTech Connect

    Bogdanov, G.; Ludwig, R.; Wiggins, J.; Bertagnolli, K.

    2014-02-18

    We previously reported on an electrical conductivity imaging instrument developed for measurements on polycrystalline diamond cutters. These cylindrical cutters for oil and gas drilling feature a thick polycrystalline diamond layer on a tungsten carbide substrate. The instrument uses electrical impedance tomography to profile the conductivity in the diamond table. Conductivity images must be acquired quickly, on the order of 5 sec per cutter, to be useful in the manufacturing process. This paper reports on successful efforts to optimize the conductivity reconstruction routine, porting major portions of it to NVIDIA GPUs, including a custom CUDA kernel for Jacobian computation.

  6. IDP: Image and data processing (software) in C++

    SciTech Connect

    Lehman, S.

    1994-11-15

    IDP++ (Image and Data Processing in C++) is a compiled, multidimensional, multi-data-type signal processing environment written in C++. It is being developed within the Radar Ocean Imaging group and is intended as a partial replacement for View. IDP++ takes advantage of the latest object-oriented compiler technology to provide 'information hiding.' Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions used in an intuitive manner. IDP++ is being designed for real-time environments where interpreted signal processing packages are less efficient.

  7. Image compression software for the SOHO LASCO and EIT experiments

    NASA Technical Reports Server (NTRS)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronograph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for the use of SOHO investigators who need to understand the results of SOHO compression in order to better allocate the transmission bits which they have been allocated.

  8. Software engineering methods and standards used in the Sloan Digital Sky Survey

    SciTech Connect

    Petravick, D.; Berman, E.; Gurbani, V.; Nicinski, T.; Pordes, R.; Rechenmacher, R.; Sergey, G.; Lupton, R.H.

    1995-04-01

    We present an integrated science software development environment, code maintenance and support system for the Sloan Digital Sky Survey (SDSS) now being actively used throughout the collaboration. The SDSS is a collaboration between the Fermi National Accelerator Laboratory, the Institute for Advanced Study, The Japan Promotion Group, Johns Hopkins University, Princeton University, The United States Naval Observatory, the University of Chicago, and the University of Washington. The SDSS will produce a five-color imaging survey of 1/4 of the sky about the north galactic cap and image 10{sup 8} stars, 10{sup 8} galaxies, and 10{sup 5} quasars. Spectra will be obtained for 10{sup 6} galaxies and 10{sup 5} quasars as well. The survey will utilize a dedicated 2.5 meter telescope at the Apache Point Observatory in New Mexico. Its imaging camera will hold 54 Charge-Coupled Devices (CCDs). The SDSS will take five years to complete, acquiring well over 12 TB of data.

  9. Image analysis software for following progression of peripheral neuropathy

    NASA Astrophysics Data System (ADS)

    Epplin-Zapf, Thomas; Miller, Clayton; Larkin, Sean; Hermesmeyer, Eduardo; Macy, Jenny; Pellegrini, Marco; Luccarelli, Saverio; Staurenghi, Giovanni; Holmes, Timothy

    2009-02-01

    A relationship has been reported by several research groups [1 - 4] between the density and shapes of nerve fibers in the cornea and the existence and severity of peripheral neuropathy. Peripheral neuropathy is a complication of several prevalent diseases or conditions, which include diabetes, HIV, prolonged alcohol overconsumption and aging. A common clinical technique for confirming the condition is intramuscular electromyography (EMG), which is invasive, so a noninvasive technique like the one proposed here carries important potential advantages for the physician and patient. A software program that automatically detects the nerve fibers, counts them and measures their shapes is being developed and tested. Tests were carried out with a database of subjects with levels of severity of diabetic neuropathy as determined by EMG testing. Results from this testing, that include a linear regression analysis are shown.

  10. Development of Software to Model AXAF-I Image Quality

    NASA Technical Reports Server (NTRS)

    Geary, Joseph; Hawkins, Lamar; Ahmad, Anees; Gong, Qian

    1997-01-01

    This report describes work conducted on Delivery Order 181 between October 1996 through June 1997. During this period software was written to: compute axial PSD's from RDOS AXAF-I mirror surface maps; plot axial surface errors and compute PSD's from HDOS "Big 8" axial scans; plot PSD's from FITS format PSD files; plot band-limited RMS vs axial and azimuthal position for multiple PSD files; combine and organize PSD's from multiple mirror surface measurements formatted as input to GRAZTRACE; modify GRAZTRACE to read FITS formatted PSD files; evaluate AXAF-I test results; improve and expand the capabilities of the GT x-ray mirror analysis package. During this period work began on a more user-friendly manual for the GT program, and improvements were made to the on-line help manual.
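
    As a generic illustration of one of the listed tasks, the sketch below computes a one-dimensional power spectral density from an axial surface-height profile and a band-limited RMS from it, using scipy; the units, sampling assumptions and function names are illustrative, and this is not the GRAZTRACE/RDOS tool chain.

```python
# Generic illustration of computing a 1-D power spectral density from an
# axial surface-height profile (not the GRAZTRACE/RDOS tool chain).
# Assumes heights in nm sampled at a uniform spacing dx in mm.
import numpy as np
from scipy.signal import welch

def axial_psd(heights_nm, dx_mm):
    fs = 1.0 / dx_mm                        # spatial sampling frequency, 1/mm
    freq, psd = welch(heights_nm, fs=fs, nperseg=min(256, len(heights_nm)))
    return freq, psd                        # cycles/mm, nm^2 * mm

def band_limited_rms(freq, psd, f_lo, f_hi):
    band = (freq >= f_lo) & (freq <= f_hi)
    return np.sqrt(np.trapz(psd[band], freq[band]))
```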

  11. Robust Intensity Standardization in Brain Magnetic Resonance Images.

    PubMed

    De Nunzio, Giorgio; Cataldo, Rosella; Carlà, Alessandra

    2015-12-01

    The paper is focused on a tiSsue-Based Standardization Technique (SBST) of magnetic resonance (MR) brain images. Magnetic Resonance Imaging intensities have no fixed tissue-specific numeric meaning, even within the same MRI protocol, for the same body region, or even for images of the same patient obtained on the same scanner at different times. This affects postprocessing tasks such as automatic segmentation or unsupervised/supervised classification methods, which strictly depend on the observed image intensities, compromising the accuracy and efficiency of many image analysis algorithms. A large number of MR images from public databases, belonging to healthy people and to patients with different degrees of neurodegenerative pathology, were employed together with synthetic MRIs. Combining both histogram and tissue-specific intensity information, a correspondence is obtained for each tissue across images. The novelty consists of computing three standardizing transformations for the three main brain tissues, one for each tissue class separately. In order to create a continuous intensity mapping, spline smoothing of the overall slightly discontinuous piecewise-linear intensity transformation is performed. The robustness of the technique is assessed in a post hoc manner, by verifying that automatic segmentation of images before and after standardization gives a high overlap (Dice index >0.9) for each tissue class, even across images coming from different sources. Furthermore, SBST efficacy is tested by evaluating if and how much it increases intertissue discrimination and by assessing the Gaussianity of tissue gray-level distributions before and after standardization. Some quantitative comparisons with existing approaches from the literature are performed. PMID:25708893
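
    A minimal landmark-based sketch of piecewise-linear intensity standardization, in the spirit of (but not identical to) the SBST method, is shown below; the percentile landmarks and reference values are illustrative assumptions.

```python
# Minimal landmark-based intensity standardization sketch (not the SBST
# method itself): map percentile landmarks of an input brain MR image onto
# reference landmarks with a piecewise-linear transform.
import numpy as np

def standardize(image, reference_landmarks, percentiles=(1, 25, 50, 75, 99),
                mask=None):
    voxels = image[mask] if mask is not None else image.ravel()
    src = np.percentile(voxels, percentiles)        # landmarks of this scan
    # np.interp applies the piecewise-linear mapping src -> reference.
    return np.interp(image, src, reference_landmarks)

# Reference landmarks would typically be averaged percentiles from a
# training set; the values below are illustrative only.
ref = np.array([50.0, 300.0, 600.0, 900.0, 1200.0])
```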

  12. Performance of a Method to Standardize Breast Ultrasound Interpretation Using Image Processing and Case-Based Reasoning

    NASA Astrophysics Data System (ADS)

    André, M. P.; Galperin, M.; Berry, A.; Ojeda-Fournier, H.; O'Boyle, M.; Olson, L.; Comstock, C.; Taylor, A.; Ledgerwood, M.

    Our computer-aided diagnostic (CADx) tool uses advanced image processing and artificial intelligence to analyze findings on breast sonography images. The goal is to standardize reporting of such findings using well-defined descriptors and to improve accuracy and reproducibility of interpretation of breast ultrasound by radiologists. This study examined several factors that may impact accuracy and reproducibility of the CADx software, which proved to be highly accurate and stable over several operating conditions.

  13. Starworld: Preparing Accountants for the Future: A Case-Based Approach to Teach International Financial Reporting Standards Using ERP Software

    ERIC Educational Resources Information Center

    Ragan, Joseph M.; Savino, Christopher J.; Parashac, Paul; Hosler, Jonathan C.

    2010-01-01

    International Financial Reporting Standards now constitute an important part of educating young professional accountants. This paper looks at a case based process to teach International Financial Reporting Standards using integrated Enterprise Resource Planning software. The case contained within the paper can be used within a variety of courses…

  14. Validated novel software to measure the conspicuity index of lesions in DICOM images

    NASA Astrophysics Data System (ADS)

    Szczepura, K. R.; Manning, D. J.

    2016-03-01

    A novel software programme and associated Excel spreadsheet have been developed to provide an objective measure of the expected visual detectability of focal abnormalities within DICOM images. ROIs are drawn around the abnormality; the software then fits the lesion using a least-squares method to recognize its edges based on the full width at half maximum. 180 line profiles are then plotted around the lesion, giving 360 edge profiles.
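
    A sketch of the general procedure, not the authors' programme, is given below: radial line profiles are sampled around an assumed lesion centre and the radius at which each profile first falls to half its maximum is recorded; the sampling density and function names are assumptions.

```python
# Sketch of the general idea only (not the authors' programme): sample
# radial line profiles around a lesion centre and find, for each profile,
# the radius at which intensity first drops to half its maximum.
import numpy as np
from scipy.ndimage import map_coordinates

def radial_half_max_radii(image, center_rc, max_radius, n_angles=360,
                          samples_per_pixel=2):
    r = np.linspace(0, max_radius, int(max_radius * samples_per_pixel) + 1)
    radii = []
    for angle in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        rows = center_rc[0] + r * np.sin(angle)
        cols = center_rc[1] + r * np.cos(angle)
        profile = map_coordinates(image.astype(float), [rows, cols], order=1)
        half = 0.5 * profile.max()
        below = np.nonzero(profile < half)[0]
        radii.append(r[below[0]] if below.size else np.nan)
    return np.array(radii)   # one edge radius per profile
```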

  15. Assessment of using Imaging software Image J to determine percentage woody cover from half meter resolution satellite images

    NASA Astrophysics Data System (ADS)

    Mace, W. D.; Cerling, T. E.

    2010-12-01

    The percentage of woody cover over a landscape has been shown to be related to the d13C in soil organic matter because of the difference in carbon isotope discrimination between plants using C3 and C4 photosynthetic pathways. Woody plants, such as those found in dense forests, predominantly use the C3 pathway, whereas plants that grow in arid grasslands predominantly use the C4 pathway. Therefore, it has also been shown that it is possible to determine the vegetation of current and past ecosystems using d13C in soil organic matter. With the introduction of very high resolution remote sensing, it is becoming possible to make detailed maps based on d13C and to estimate percentage woody cover. Using these maps, it may be possible to create large-scale representations of prehistoric ecosystems. Here we assess the use of the widely available imaging software ImageJ to survey the percentage of woody cover of tropical ecosystems in East Africa. These results are compared with the canopy gap fraction calculated from in-situ, ground-up circular fisheye images. We find that in areas where the percentage woody cover is less than 0.5, ImageJ is an effective method of analysis; however, as the percentage cover becomes greater than 0.5, it becomes difficult to distinguish between true canopy and shadows.
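
    The underlying measurement can be illustrated independently of ImageJ; the sketch below thresholds a grayscale image and reports the fraction of pixels classified as woody cover, with the threshold value left as a per-scene assumption (for example, chosen by Otsu's method).

```python
# Simple illustration of the underlying measurement (not an ImageJ macro):
# threshold a grayscale satellite image and report the fraction of pixels
# classified as woody cover.  The threshold is an assumption that would
# normally be chosen per scene (e.g. by Otsu's method).
import numpy as np

def woody_cover_fraction(gray_image, threshold):
    woody = gray_image < threshold          # darker pixels taken as canopy
    return woody.mean()                     # fraction of woody pixels
```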

  16. Parallel-Processing Software for Correlating Stereo Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric

    2007-01-01

    A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.
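
    A minimal sketch of this decomposition is shown below: the images are split into tiles and the tiles are processed by a pool of worker processes. The per-tile correlator here is a toy one-dimensional disparity search, not the actual rover stereo correlator, and the tile size and disparity range are assumptions.

```python
# Minimal sketch of the decomposition described above: split a stereo pair
# into subimage tiles and correlate the tiles on multiple CPUs.  The
# per-tile "correlator" is a toy 1-D disparity search, not the real
# rover stereo correlator.
import numpy as np
from multiprocessing import Pool

def tile_disparity(args):
    left_tile, right_tile, max_disp = args
    best_d, best_err = 0, np.inf
    for d in range(max_disp + 1):
        if d:
            err = np.mean(np.abs(left_tile[:, d:] - right_tile[:, :-d]))
        else:
            err = np.mean(np.abs(left_tile - right_tile))
        if err < best_err:
            best_d, best_err = d, err
    return best_d

def correlate_pair(left, right, tile=64, max_disp=32, workers=4):
    jobs = []
    for r in range(0, left.shape[0] - tile + 1, tile):
        for c in range(0, left.shape[1] - tile + 1, tile):
            jobs.append((left[r:r+tile, c:c+tile].astype(float),
                         right[r:r+tile, c:c+tile].astype(float), max_disp))
    with Pool(workers) as pool:
        return pool.map(tile_disparity, jobs)   # one disparity per tile

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((256, 256))
    print(correlate_pair(img, np.roll(img, -5, axis=1))[:4])
```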

  17. JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Langenberg, M.; Nuhn, M.; Dau, A.; Pagel, S.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2011-12-01

    The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such a staggering data volume, the data are accessible only from a few repositories, and users have to deal with data sets that are effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the ESA/NASA Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community. In addition, the easy-to-use graphical user interface enables the general public and educators to access, enjoy and reuse data from space missions without barriers.

  18. DEIReconstructor: a software for diffraction enhanced imaging processing and tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Yuan, Qing-Xi; Huang, Wan-Xia; Zhu, Pei-Ping; Wu, Zi-Yu

    2014-10-01

    Diffraction enhanced imaging (DEI) has been widely applied in many fields, especially when imaging low-Z samples or when the difference in the attenuation coefficient between different regions in the sample is too small to be detected. Recent developments of this technique have presented a need for a new software package for data analysis. Here, the Diffraction Enhanced Image Reconstructor (DEIReconstructor), developed in Matlab, is presented. DEIReconstructor has a user-friendly graphical user interface and runs under any of the 32-bit or 64-bit Microsoft Windows operating systems, including XP and Win7. Its integrated features support image preprocessing, extraction of the absorption, refraction and scattering information of diffraction enhanced imaging, and parallel-beam tomographic reconstruction for DEI-CT. Furthermore, many other useful functions are also implemented in order to simplify the data analysis and the presentation of results. The compiled software package is freely available.

  19. Web-based spatial analysis with the ILWIS open source GIS software and satellite images from GEONETCast

    NASA Astrophysics Data System (ADS)

    Lemmens, R.; Maathuis, B.; Mannaerts, C.; Foerster, T.; Schaeffer, B.; Wytzisk, A.

    2009-12-01

    fingertips of users around the globe. This user-friendly and low-cost information dissemination provides global information as a basis for decision-making in a number of critical areas, including public health, energy, agriculture, weather, water, climate, natural disasters and ecosystems. GEONETCast makes available satellite images via Digital Video Broadcast (DVB) technology. An OGC WMS interface and plug-ins which convert GEONETCast data streams allow an ILWIS user to integrate various distributed data sources with data locally stored on his machine. Our paper describes a use case in which ILWIS is used with GEONETCast satellite imagery for decision making processes in Ghana. We also explain how the ILWIS software can be extended with additional functionality by means of building plug-ins and unfold our plans to implement other OGC standards, such as WCS and WPS in the same context. Especially, the latter one can be seen as a major step forward in terms of moving well-proven desktop based processing functionality to the web. This enables the embedding of ILWIS functionality in Spatial Data Infrastructures or even the execution in scalable and on-demand cloud computing environments.

  20. Development of a Standard for Verification and Validation of Software Used to Calculate Nuclear System Thermal Fluids Behavior

    SciTech Connect

    Richard R. Schultz; Edwin A. Harvego; Ryan L. Crane

    2010-05-01

    With the resurgence of nuclear power and increased interest in advanced nuclear reactors as an option to supply abundant energy without the associated greenhouse gas emissions of the more conventional fossil fuel energy sources, there is a need to establish internationally recognized standards for the verification and validation (V&V) of software used to calculate the thermal-hydraulic behavior of advanced reactor designs for both normal operation and hypothetical accident conditions. To address this need, ASME (American Society of Mechanical Engineers) Standards and Certification has established the V&V 30 Committee, under the responsibility of the V&V Standards Committee, to develop a consensus Standard for verification and validation of software used for design and analysis of advanced reactor systems. The initial focus of this committee will be on the V&V of system analysis and computational fluid dynamics (CFD) software for nuclear applications. To limit the scope of the effort, the committee will further limit its focus to software to be used in the licensing of High-Temperature Gas-Cooled Reactors. In this framework, the standard should conform to Nuclear Regulatory Commission (NRC) practices, procedures and methods for licensing of nuclear power plants as embodied in the United States (U.S.) Code of Federal Regulations and other pertinent documents such as Regulatory Guide 1.203, “Transient and Accident Analysis Methods” and NUREG-0800, “NRC Standard Review Plan”. In addition, the standard should be consistent with applicable sections of ASME Standard NQA-1 (“Quality Assurance Requirements for Nuclear Facility Applications (QA)”). This paper describes the general requirements for the V&V Standard, which includes; (a) the definition of the operational and accident domain of a nuclear system that must be considered if the system is to licensed, (b) the corresponding calculational domain of the software that should encompass the nuclear operational

  1. Assessing angulation on digital images of radiographs of fractures of the distal radius: visual estimation versus computer software measurement.

    PubMed

    Robertson, G A J; Robertson, B F M; Thomas, B; McEachan, J; Davidson, D M

    2011-03-01

    We assessed the reliability of visual estimation of angles on computer images of radiographs, and compared their accuracy with the measurement of angles using computer software for ten distal radius fractures. We asked 73 clinicians to visually estimate the dorsal angulation on ten computerized radiographs of fractures of the distal radius. The reliability of these estimations was calculated. Their accuracy was compared to a 'gold standard' obtained by consensus agreement between three consultants measuring these angles using the software. Inter-observer reliability was calculated as ICC = 0.51 and intra-observer reliability as r = 0.76. The visual estimations were less accurate with a mean percentage error of 31% (range, 7-83%). As angulation increased the estimation accuracy improved. Although reliability and accuracy of such estimation was better for clinicians with greater experience, actual measurement was more reliable and accurate. PMID:21169298

  2. Informatics in radiology: automated structured reporting of imaging findings using the AIM standard and XML.

    PubMed

    Zimmerman, Stefan L; Kim, Woojin; Boonn, William W

    2011-01-01

    Quantitative and descriptive imaging data are a vital component of the radiology report and are frequently of paramount importance to the ordering physician. Unfortunately, current methods of recording these data in the report are both inefficient and error prone. In addition, the free-text, unstructured format of a radiology report makes aggregate analysis of data from multiple reports difficult or even impossible without manual intervention. A structured reporting work flow has been developed that allows quantitative data created at an advanced imaging workstation to be seamlessly integrated into the radiology report with minimal radiologist intervention. As an intermediary step between the workstation and the reporting software, quantitative and descriptive data are converted into an extensible markup language (XML) file in a standardized format specified by the Annotation and Image Markup (AIM) project of the National Institutes of Health Cancer Biomedical Informatics Grid. The AIM standard was created to allow image annotation data to be stored in a uniform machine-readable format. These XML files containing imaging data can also be stored on a local database for data mining and analysis. This structured work flow solution has the potential to improve radiologist efficiency, reduce errors, and facilitate storage of quantitative and descriptive imaging data for research. PMID:21357413

  3. Sharing Images Intelligently: The Astronomical Visualization Metadata Standard

    NASA Astrophysics Data System (ADS)

    Hurt, Robert L.; Christensen, L.; Gauthier, A.

    2006-12-01

    The astronomical education and public outreach (EPO) community plays a key role in conveying the results of scientific research to the general public. A key product of EPO development is a variety of non-scientific public image resources, both derived from scientific observations and created as artistic visualizations of scientific results. This refers to general image formats such as JPEG, TIFF, PNG, GIF, not scientific FITS datasets. Such resources are currently scattered across the internet in a variety of galleries and archives, but are not searchable in any coherent or unified way. Just as Virtual Observatory standards open up all data archives to a common query engine, the EPO community will benefit greatly from a similar mechanism for image search and retrieval. A new standard has been developed for astronomical imagery defining a common set of content fields suited for the needs of astronomical visualizations. This encompasses images derived from data, artist's conceptions, simulations, photography, and can be ultimately extensible to video products. The first generation of tools are now available to tag images with this metadata, which can be embedded with the image file using an XML-based format that functions similarly to a FITS header. As image collections are processed to include astronomy visualization metadata tags, extensive information providing educational context, credits, data sources, and even coordinate information will be readily accessible for uses spanning casual browsing, publication, and interactive media systems.

  4. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

    Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier Transform [2], digital image correlation [3], camera calibration [4], etc, in which image processing is a critical and indispensable component. While it is not too difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package: UU (data processing) and Fig (data rendering) that incorporates many useful functions to process optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets. At the time of writing, it has been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, fitting of data, 3D image processing, vector image processing, precision device control (rotary stage, PZT stage, etc), point cloud to surface reconstruction, volume rendering, batch processing, etc. The software package is currently used in a number of universities for teaching and research.

  5. Development of recognition software of heart to find the standard cross section on echocardiography.

    PubMed

    Masuda, Kohji; Matsuura, Hirotaka; Imai, Takao; Inoue, Hiroto

    2007-01-01

    We have developed an algorithm to find standard cross sections (the long-axis view and the short-axis view) of the heart from successive echograms. We first divided each echogram into small spatial regions and detected the typical motion of the mitral valve by analyzing the brightness variation and the correlation coefficient among the regions. We obtained 95% accuracy in the position of the valve on time-series echograms of 25 normal volunteers. The recognized valve was visualized as a mark on the video stream. Furthermore, by combining this technique with an optical flow method, we elucidated the regional velocity of the wall motion of the left ventricle after centering the valve on the echogram. By analyzing the symmetry of the regional velocities, we confirmed that the long- and short-axis views of the heart can be distinguished. This algorithm is applicable to instructional software for finding the standard cross sections of the heart as an aid to echocardiography. We plan to apply it to more subjects with heart disease and to contribute to automatic diagnosis in the future. PMID:18001962
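
    The first step described above, locating the region with the strongest temporal brightness variation, can be sketched as follows; this is an illustrative reconstruction, not the authors' code, and the region size is an assumption.

```python
# Sketch of the first step described above (not the authors' code): divide
# each echogram frame into small regions and rank regions by temporal
# brightness variation; the mitral valve region is expected to score high.
import numpy as np

def region_brightness_variation(frames, region=16):
    """frames: array of shape (n_frames, H, W), with H and W multiples of region."""
    n, h, w = frames.shape
    blocks = frames.reshape(n, h // region, region, w // region, region)
    region_mean = blocks.mean(axis=(2, 4))            # (n, H/region, W/region)
    return region_mean.std(axis=0)                    # temporal std per region

def most_active_region(frames, region=16):
    variation = region_brightness_variation(frames, region)
    idx = np.unravel_index(np.argmax(variation), variation.shape)
    return idx, variation[idx]                        # (row, col) in the region grid
```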

  6. Creation of three-dimensional craniofacial standards from CBCT images

    NASA Astrophysics Data System (ADS)

    Subramanyan, Krishna; Palomo, Martin; Hans, Mark

    2006-03-01

    Low-dose three-dimensional Cone Beam Computed Tomography (CBCT) is becoming increasingly popular in the clinical practice of dental medicine. Two-dimensional Bolton Standards of dentofacial development are routinely used to identify deviations from normal craniofacial anatomy. With the advent of CBCT three-dimensional imaging, we propose a set of methods to extend these 2D Bolton Standards to anatomically correct surface-based 3D standards to allow analysis of morphometric changes seen in the craniofacial complex. To create 3D surface standards, we have implemented a series of steps: 1) converting bi-plane 2D tracings into a set of splines; 2) converting the 2D spline curves from bi-plane projection into 3D space curves; 3) creating a labeled template of facial and skeletal shapes; and 4) creating 3D average surface Bolton standards. We used datasets from patients scanned with a Hitachi MercuRay CBCT scanner, which provides high-resolution, isotropic CT volume images, together with digitized Bolton Standards from ages 3 to 18 years (lateral and frontal male, female and average tracings), and converted them into facial and skeletal 3D space curves. This new 3D standard will help in assessing shape variations due to aging in the young population and provide a reference for correcting facial anomalies in dental medicine.

  7. Designing Tracking Software for Image-Guided Surgery Applications: IGSTK Experience

    PubMed Central

    Enquobahrie, Andinet; Gobbi, David; Turek, Matt; Cheng, Patrick; Yaniv, Ziv; Lindseth, Frank; Cleary, Kevin

    2009-01-01

    Objective Many image-guided surgery applications require tracking devices as part of their core functionality. The Image-Guided Surgery Toolkit (IGSTK) was designed and developed to interface tracking devices with software applications incorporating medical images. Methods IGSTK was designed as an open source C++ library that provides the basic components needed for fast prototyping and development of image-guided surgery applications. This library follows a component-based architecture with several components designed for specific sets of image-guided surgery functions. At the core of the toolkit is the tracker component that handles communication between a control computer and navigation device to gather pose measurements of surgical instruments present in the surgical scene. The representations of the tracked instruments are superimposed on anatomical images to provide visual feedback to the clinician during surgical procedures. Results The initial version of the IGSTK toolkit has been released in the public domain and several trackers are supported. The toolkit and related information are available at www.igstk.org. Conclusion With the increased popularity of minimally invasive procedures in health care, several tracking devices have been developed for medical applications. Designing and implementing high-quality and safe software to handle these different types of trackers in a common framework is a challenging task. It requires establishing key software design principles that emphasize abstraction, extensibility, reusability, fault-tolerance, and portability. IGSTK is an open source library that satisfies these needs for the image-guided surgery community. PMID:20037671

  8. Pore Size Distribution Estimates Compared: Available software applied to soil CT and synthetic images.

    NASA Astrophysics Data System (ADS)

    Houston, Alasdair N.; Falconer, Ruth E.; Otten, Wilfred; Hapca, Simona M.

    2015-04-01

    The Pore Size Distribution (PSD) has been widely used as a means of characterising porous media and, in conjunction with knowledge of pore space connectivity, has been used to infer hydrological properties. There exist various strategies to estimate PSD from a segmented image, and each strategy typically involves a sequence of algorithms that transform image information. Some of these algorithms may be explicitly parameterised, requiring decisions by a knowledgeable operator. As a result, PSD estimates may be quite variable between software applications and operators. In order to better understand these differences, a constrained Boolean model was used to construct synthetic images whose pore structure is without ambiguity and whose properties can be analytically determined. Applying a selection of analysis procedures, in the form of readily available software applications, to such images reveals differences between the PSD estimates and the analytic information. In some cases it is possible to attribute these differences to artifacts visible within map images generated by the analysis procedures, permitting correction procedures to be devised. In the case of soil CT images, which exhibit complex interconnected pore structure, differences in the PSD estimate between analysis procedures are very great in some cases. Inspection of map images can again help in identifying the cause of such problems, but this may result from a fundamental property of the procedure with respect to complex pore structure. Based on the evidence presented, we conclude that some readily available software will produce PSD estimates that can usefully characterise geomaterials.

  9. Monte Carlo PENRADIO software for dose calculation in medical imaging

    NASA Astrophysics Data System (ADS)

    Adrien, Camille; Lòpez Noriega, Mercedes; Bonniaud, Guillaume; Bordy, Jean-Marc; Le Loirec, Cindy; Poumarede, Bénédicte

    2014-06-01

    The increase on the collective radiation dose due to the large number of medical imaging exams has led the medical physics community to deeply consider the amount of dose delivered and its associated risks in these exams. For this purpose we have developed a Monte Carlo tool, PENRADIO, based on a modified version of PENELOPE code 2006 release, to obtain an accurate individualized radiation dose in conventional and interventional radiography and in computed tomography (CT). This tool has been validated showing excellent agreement between the measured and simulated organ doses in the case of a hip conventional radiography and a coronography. We expect the same accuracy in further results for other localizations and CT examinations.

  10. New image processing software for analyzing object size-frequency distributions, geometry, orientation, and spatial distribution

    NASA Astrophysics Data System (ADS)

    Beggan, Ciarán; Hamilton, Christopher W.

    2010-04-01

    Geological Image Analysis Software (GIAS) combines basic tools for calculating object area, abundance, radius, perimeter, eccentricity, orientation, and centroid location, with the first automated method for characterizing the areal distribution of objects using sample-size-dependent nearest neighbor (NN) statistics. The NN analyses include tests for (1) Poisson, (2) Normalized Poisson, (3) Scavenged k=1, and (4) Scavenged k=2 NN distributions. GIAS is implemented in MATLAB with a Graphical User Interface (GUI) that is available as pre-parsed pseudocode for use with MATLAB, or as a stand-alone application that runs on Windows and Unix systems. GIAS can process raster data (e.g., satellite imagery, photomicrographs, etc.) and tables of object coordinates to characterize the size, geometry, orientation, and spatial organization of a wide range of geological features. This information expedites quantitative measurements of 2D object properties, provides criteria for validating the use of stereology to transform 2D object sections into 3D models, and establishes a standardized NN methodology that can be used to compare the results of different geospatial studies and identify objects using non-morphological parameters.
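
    The nearest-neighbor comparison against a Poisson expectation can be illustrated with the sketch below (not the GIAS MATLAB code); it computes the mean first-NN distance of a point set and the Clark-Evans ratio against the value expected under complete spatial randomness.

```python
# Illustration of the nearest-neighbour measurement (not the GIAS MATLAB
# code): mean first-NN distance of point objects compared with the value
# expected for a Poisson (complete spatial randomness) process of the same
# density, following the Clark-Evans ratio.
import numpy as np
from scipy.spatial import cKDTree

def clark_evans_ratio(xy, study_area):
    """xy: (n, 2) array of object centroids; study_area: area of the region."""
    tree = cKDTree(xy)
    # k=2 because the first neighbour returned is the point itself (distance 0).
    dists, _ = tree.query(xy, k=2)
    observed_mean = dists[:, 1].mean()
    density = len(xy) / study_area
    expected_mean = 1.0 / (2.0 * np.sqrt(density))   # Poisson expectation
    return observed_mean / expected_mean             # <1 clustered, >1 dispersed

rng = np.random.default_rng(1)
points = rng.random((500, 2)) * 100.0                # synthetic 100x100 field
print(clark_evans_ratio(points, study_area=100.0 * 100.0))
```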

  11. Comparison of two academic software packages for analyzing two-dimensional gel images.

    PubMed

    Wu, Yukun; Zhang, Le

    2011-12-01

    One of the key limitations for proteomic studies using two-dimensional (2D) gels is the lack of automatic, fast, robust, and reliable methods for detecting, matching, and quantifying protein spots. Although there are commercial software packages for 2D gel image analysis, extensive human intervention is still needed for spot detection and matching, which is time-consuming and error-prone. Moreover, the commercial software packages are usually expensive and non-open source. Thus, it is very beneficial for researchers to have free software that is fast, fully automatic, and robust. In this paper, we review and compare two recently developed and publicly available software packages, RegStatGel and Pinnacle, for analyzing 2D gel images. These two software packages share some common features but also have some fundamental differences in spot detection and quantification. Based on our experience, RegStatGel is much better in terms of spot detection and matching. It also contains more advanced statistical tools and is more user-friendly. In contrast, Pinnacle is quite sensitive to background noise and relies on external statistical software packages for statistical analysis. PMID:22084013

  12. Development of HydroImage, A User Friendly Hydrogeophysical Characterization Software

    SciTech Connect

    Mok, Chin Man; Hubbard, Susan; Chen, Jinsong; Suribhatla, Raghu; Kaback, Dawn Samara

    2014-01-29

    HydroImage, a user-friendly software package that utilizes high-resolution geophysical data to estimate hydrogeological parameters in subsurface strata, was developed under this grant. HydroImage runs on a personal computer platform to promote broad use by hydrogeologists to further understanding of subsurface processes that govern contaminant fate, transport, and remediation. The software provides estimates of hydrogeological properties over continuous volumes of the subsurface, whereas previous approaches only allowed estimation at point locations. Thus, this unique tool can be used to significantly enhance site conceptual models and improve the design and operation of remediation systems. The HydroImage technical approach uses statistical models to integrate geophysical data with borehole geological data and hydrological measurements to produce hydrogeological parameter estimates as 2-D or 3-D images.

  13. The image-guided surgery toolkit IGSTK: an open source C++ software toolkit.

    PubMed

    Enquobahrie, Andinet; Cheng, Patrick; Gary, Kevin; Ibanez, Luis; Gobbi, David; Lindseth, Frank; Yaniv, Ziv; Aylward, Stephen; Jomier, Julien; Cleary, Kevin

    2007-11-01

    This paper presents an overview of the image-guided surgery toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. It is intended for fast prototyping and development of image-guided surgery applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team adopted lightweight software processes that emphasize safety and robustness while, at the same time, supporting geographically separated developers. A software process philosophically similar to agile methods was adopted, emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depend on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining users' and developers' mailing lists, providing documentation (an application programming interface reference document and a book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences. PMID:17703338

  14. Digital processing of side-scan sonar data with the Woods Hole image processing system software

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing and digitally mosaicking high- and low-resolution side-scan sonar data. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for processing side-scan sonar data. This report describes the steps required to process the collected data and to produce an image that has equal along- and across-track resolution.

  15. 3D thermography imaging standardization technique for inflammation diagnosis

    NASA Astrophysics Data System (ADS)

    Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

    2005-01-01

    We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical Digital Infrared Thermal Imaging is a very sensitive and reliable means of graphically mapping and displaying skin surface temperature. It allows doctors to visualise and quantify changes in skin surface temperature in colour. The spectrum of colours indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Our technique is based on the combination of a visual 3D imaging technique and a thermal imaging technique, mapping the 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.

  16. Digital image measurement of specimen deformation based on CCD cameras and Image J software: an application to human pelvic biomechanics

    NASA Astrophysics Data System (ADS)

    Jia, Yongwei; Cheng, Liming; Yu, Guangrong; Lou, Yongjian; Yu, Yan; Chen, Bo; Ding, Zuquan

    2008-03-01

    A method of digital image measurement of specimen deformation based on CCD cameras and ImageJ software was developed. This method was used to measure the biomechanical behavior of the human pelvis. Six cadaveric specimens, from the third lumbar vertebra to the proximal 1/3 of the femur, were tested. The specimens, free of structural abnormalities, were dissected of all soft tissue, sparing the hip joint capsules and the ligaments of the pelvic ring and floor. Markers with a black dot on a white background were affixed to the key regions of the pelvis. Axial loading from the proximal lumbar spine was applied with an MTS machine in increments from 0 N to 500 N, simulating a double-leg standing stance. Anterior and lateral images of the specimen were obtained through two CCD cameras. The digital 8-bit images were processed with ImageJ, a digital image processing program freely available from the National Institutes of Health. The procedure included marker recognition, image inversion, sub-pixel reconstruction, image segmentation, and a center-of-mass algorithm based on the weighted average of pixel gray values. Vertical displacements of S1 (the first sacral vertebra) in the anterior view and micro-angular rotations of the sacroiliac joint in the lateral view were calculated from the marker movement. The digital image measurements showed the following: marker image correlation before and after deformation was excellent, with an average correlation coefficient of about 0.983. For the 768 × 576 pixel images (pixel size 0.68 mm × 0.68 mm), the precision of the displacement detection in our experiment was about 0.018 pixels and the relative error was about 1.11‰. The average vertical displacement of S1 of the pelvis was 0.8356 ± 0.2830 mm under a vertical load of 500 N, and the average micro-angular rotation of the sacroiliac joint in the lateral view was 0.584 ± 0.221°. The load-displacement curves obtained from our optical measurement system
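
    The center-of-mass step described above can be reproduced in a few lines. The sketch below uses Python/NumPy rather than ImageJ and assumes a small region of interest roi containing one dark-on-bright marker; it returns the gray-value-weighted centroid that yields sub-pixel marker positions:

        import numpy as np

        def weighted_centroid(roi):
            """Sub-pixel position of a dark marker on a bright background as
            the gray-value-weighted center of mass of a small ROI."""
            roi = roi.astype(float)
            weights = roi.max() - roi                      # invert: dark marker gets large weight
            weights[weights < 0.2 * weights.max()] = 0.0   # suppress background
            rows, cols = np.indices(roi.shape)
            total = weights.sum()
            return ((rows * weights).sum() / total,
                    (cols * weights).sum() / total)

        # Example: synthetic 11 x 11 patch with a dark blob centered near (5.3, 4.8)
        yy, xx = np.indices((11, 11))
        patch = 255.0 - 200.0 * np.exp(-((yy - 5.3) ** 2 + (xx - 4.8) ** 2) / 4.0)
        print(weighted_centroid(patch))

    Tracking each marker centroid frame by frame under increasing load then gives the displacements from which the S1 translations and sacroiliac rotations were derived.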

  17. 3-dimensional root phenotyping with a novel imaging and software platform

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A novel imaging and software platform was developed for the high-throughput phenotyping of 3-dimensional root traits during seedling development. To demonstrate the platform’s capacity, plants of two rice (Oryza sativa) genotypes, Azucena and IR64, were grown in a transparent gellan gum system and ...

  18. Onboard utilization of ground control points for image correction. Volume 4: Correlation analysis software design

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The software utilized for image correction accuracy measurement is described. The correlation analysis program is written to allow the user various tools to analyze different correlation algorithms. The algorithms were tested using LANDSAT imagery in two different spectral bands. Three classification algorithms are implemented.

  19. Image contrast enhancement based on a local standard deviation model

    SciTech Connect

    Chang, Dah-Chung; Wu, Wen-Rong

    1996-12-31

    The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method that requires a contrast gain to adjust the high-frequency components of an image. In the literature, the gain is usually either inversely proportional to the local standard deviation (LSD) or constant. Both choices cause problems in practical applications, namely noise overenhancement and ringing artifacts. In this paper a new gain is developed, based on Hunt's Gaussian image model, to prevent these two defects. The new gain is a nonlinear function of the LSD and has the desired characteristic of emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest x-ray images, and the simulations show the effectiveness of the proposed algorithm.
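
    A minimal sketch of the ACE scheme described above: the output is the local mean plus a gain-weighted high-frequency component, with a bounded, bell-shaped gain that is largest at intermediate LSD values. The functional form of the gain, the window size and the limits are illustrative assumptions, not the gain derived in the paper:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def adaptive_contrast_enhance(img, win=15, g_max=4.0, s0=None):
            """ACE-style enhancement: x' = m + g(LSD) * (x - m), where m is the
            local mean and the gain g is a bounded nonlinear function of the
            local standard deviation (LSD), not 1/LSD and not a constant."""
            img = img.astype(float)
            mean = uniform_filter(img, win)
            sq_mean = uniform_filter(img ** 2, win)
            lsd = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
            if s0 is None:
                s0 = lsd.mean() + 1e-9
            # Bell-shaped gain: ~1 in flat regions (limits noise amplification),
            # peaks at g_max where LSD ~ s0, and falls back toward 1 at strong
            # edges (limits ringing and overshoot).
            gain = 1.0 + (g_max - 1.0) * (lsd / s0) * np.exp(1.0 - lsd / s0)
            return np.clip(mean + gain * (img - mean), 0, 255)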

  20. Stain Specific Standardization of Whole-Slide Histopathological Images.

    PubMed

    Bejnordi, Babak Ehteshami; Litjens, Geert; Timofeeva, Nadya; Otte-Höller, Irene; Homeyer, André; Karssemeijer, Nico; van der Laak, Jeroen A W M

    2016-02-01

    Variations in the color and intensity of hematoxylin and eosin (H&E) stained histological slides can potentially hamper the effectiveness of quantitative image analysis. This paper presents a fully automated algorithm for standardization of whole-slide histopathological images to reduce the effect of these variations. The proposed algorithm, called whole-slide image color standardizer (WSICS), utilizes color and spatial information to classify the image pixels into different stain components. The chromatic and density distributions for each of the stain components in the hue-saturation-density color model are aligned to match the corresponding distributions from a template whole-slide image (WSI). The performance of the WSICS algorithm was evaluated on two datasets. The first originated from 125 H&E stained WSIs of lymph nodes, sampled from 3 patients, and stained in 5 different laboratories on different days of the week. The second comprised 30 H&E stained WSIs of rat liver sections. The results of qualitative and quantitative evaluations using the first dataset demonstrate that the WSICS algorithm outperforms competing methods in terms of achieving color constancy. The WSICS algorithm consistently yields the smallest standard deviation and coefficient of variation of the normalized median intensity measure. Using the second dataset, we evaluated the impact of our algorithm on the performance of an already published necrosis quantification system. The performance of this system was significantly improved by utilizing the WSICS algorithm. The results of the empirical evaluations collectively demonstrate the potential contribution of the proposed standardization algorithm to improved diagnostic accuracy and consistency in computer-aided diagnosis for histopathology data. PMID:26353368

  1. WHIPPET: a collaborative software environment for medical image processing and analysis

    NASA Astrophysics Data System (ADS)

    Hu, Yangqiu; Haynor, David R.; Maravilla, Kenneth R.

    2007-03-01

    While there are many publicly available software packages for medical image processing, making them available to end users in clinical and research labs remains non-trivial. An even more challenging task is to mix these packages to form pipelines that meet specific needs seamlessly, because each piece of software usually has its own input/output formats, parameter sets, and so on. To address these issues, we are building WHIPPET (Washington Heterogeneous Image Processing Pipeline EnvironmenT), a collaborative platform for integrating image analysis tools from different sources. The central idea is to develop a set of Python scripts which glue the different packages together and make it possible to connect them in processing pipelines. To achieve this, an analysis is carried out for each candidate package for WHIPPET, describing input/output formats, parameters, ROI description methods, scripting and extensibility, and classifying its compatibility with other WHIPPET components as image file level, scripting level, function extension level, or source code level. We then identify components that can be connected in a pipeline directly via image format conversion. We set up a TWiki server for web-based collaboration so that component analysis and task requests can be performed online, as well as project tracking, knowledge base management, and technical support. Currently WHIPPET includes the FSL, MIPAV, FreeSurfer, BrainSuite, Measure, DTIQuery, and 3D Slicer software packages, and is expanding. Users have identified several needed task modules and we report on their implementation.
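
    The glue-script pattern described above, in which each package is driven as an external command and images are converted between formats so the next stage can read them, looks roughly like the sketch below. The tool names, file formats and stages are hypothetical placeholders, not WHIPPET's actual components:

        import subprocess
        from pathlib import Path

        def run(cmd):
            """Run one external pipeline stage, failing loudly on error."""
            print(">>", " ".join(cmd))
            subprocess.run(cmd, check=True)

        def pipeline(t1_image: Path, workdir: Path) -> Path:
            """Hypothetical three-stage pipeline: skull-strip, segment, then
            convert the result so a third package can display it."""
            workdir.mkdir(parents=True, exist_ok=True)
            stripped = workdir / "stripped.nii.gz"
            segmented = workdir / "segmented.nii.gz"
            viewable = workdir / "segmented.nrrd"
            run(["bet_tool", str(t1_image), str(stripped)])                        # stage 1
            run(["segment_tool", "--in", str(stripped), "--out", str(segmented)])  # stage 2
            run(["convert_tool", str(segmented), str(viewable)])                   # stage 3: format conversion
            return viewable

        if __name__ == "__main__":
            pipeline(Path("subject01_T1.nii.gz"), Path("out"))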

  2. CellProfiler Analyst: data exploration and analysis software for complex image-based screens

    PubMed Central

    Jones, Thouis R; Kang, In Han; Wheeler, Douglas B; Lindquist, Robert A; Papallo, Adam; Sabatini, David M; Golland, Polina; Carpenter, Anne E

    2008-01-01

    Background: Image-based screens can produce hundreds of measured features for each of hundreds of millions of individual cells in a single experiment. Results: Here, we describe CellProfiler Analyst, open-source software for the interactive exploration and analysis of multidimensional data, particularly data from high-throughput, image-based experiments. Conclusion: The system enables interactive data exploration for image-based screens and automated scoring of complex phenotypes that require combinations of multiple measured features per cell. PMID:19014601

  3. New Software Developments for Quality Mesh Generation and Optimization from Biomedical Imaging Data

    PubMed Central

    Yu, Zeyun; Wang, Jun; Gao, Zhanheng; Xu, Ming; Hoshijima, Masahiko

    2013-01-01

    In this paper we present a new software toolkit for generating and optimizing surface and volumetric meshes from three-dimensional (3D) biomedical imaging data, targeted at image-based finite element analysis of some biomedical activities in a single material domain. Our toolkit includes a series of geometric processing algorithms including surface re-meshing and quality-guaranteed tetrahedral mesh generation and optimization. All methods described have been encapsulated into a user-friendly graphical interface for easy manipulation and informative visualization of biomedical images and mesh models. Numerous examples are presented to demonstrate the effectiveness and efficiency of the described methods and toolkit. PMID:24252469

  4. Software for MR image overlay guided needle insertions: the clinical translation process

    NASA Astrophysics Data System (ADS)

    Ungi, Tamas; U-Thainual, Paweena; Fritz, Jan; Iordachita, Iulian I.; Flammang, Aaron J.; Carrino, John A.; Fichtinger, Gabor

    2013-03-01

    PURPOSE: Needle guidance software using augmented reality image overlay was translated from the experimental phase to support preclinical and clinical studies. Major functional and structural changes were needed to meet clinical requirements. We present the process applied to fulfill these requirements, and selected features that may be applied in the translational phase of other image-guided surgical navigation systems. METHODS: We used an agile software development process for rapid adaptation to unforeseen clinical requests. The process is based on iterations of operating room test sessions, feedback discussions, and software development sprints. The open-source application framework of 3D Slicer and the NA-MIC kit provided sufficient flexibility and stable software foundations for this work. RESULTS: All requirements were addressed in a process with 19 operating room test iterations. Most features developed in this phase were related to workflow simplification and operator feedback. CONCLUSION: Efficient and affordable modifications were facilitated by an open source application framework and frequent clinical feedback sessions. Results of cadaver experiments show that the software requirements were successfully met after a limited number of operating room tests.

  5. IHE cross-enterprise document sharing for imaging: interoperability testing software

    PubMed Central

    2010-01-01

    Background: With the deployment of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results: In this paper we describe software that is used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the elected design solutions. Conclusions: EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for an easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specification ambiguities, or to resolve implementation difficulties. PMID:20858241

  6. Capturing a failure of an ASIC in-situ, using infrared radiometry and image processing software

    NASA Technical Reports Server (NTRS)

    Ruiz, Ronald P.

    2003-01-01

    Failures in electronic devices can sometimes be tricky to locate, especially if they are buried inside radiation-shielded containers designed to work in outer space. Such was the case with a malfunctioning ASIC (Application Specific Integrated Circuit) that was drawing excessive power at a specific temperature during temperature cycle testing. To analyze the failure, infrared radiometry (thermography) was used in combination with image processing software to locate precisely where the power was being dissipated at the moment the failure took place. The IR imaging software was used to make the image of the target and the background appear as unity. As testing proceeded and the failure mode was reached, temperature changes revealed the precise location of the fault. The results gave the design engineers the information they needed to fix the problem. This paper describes the techniques and equipment used to accomplish this failure analysis.

  7. A software to digital image processing to be used in the voxel phantom development.

    PubMed

    Vieira, J W; Lima, F R A

    2009-01-01

    Anthropomorphic models used in computational dosimetry, also called phantoms, are based on digital images recorded from scans of real people by Computed Tomography (CT) or Magnetic Resonance Imaging (MRI). Voxel phantom construction requires computational processing for transformation of image formats, compaction of two-dimensional (2-D) images into three-dimensional (3-D) matrices, image sampling and quantization, image enhancement, restoration and segmentation, among others. A researcher in computational dosimetry will rarely find all these capabilities in a single software package, and this difficulty almost always slows the pace of research or forces the use, sometimes inadequate, of alternative tools. The need to integrate the several tasks mentioned above to obtain an image that can be used in a computational exposure model motivated the development of the Digital Image Processing (DIP) software, mainly to solve particular problems in dissertations and theses developed by members of the Grupo de Pesquisa em Dosimetria Numérica (GDN/CNPq). Because of this particular objective, the software uses the Portuguese language in its implementation and interfaces. This paper presents the second version of DIP, whose main changes are a more formal organization of menus and menu items and a menu for digital image segmentation. Currently, DIP contains the menus Fundamentos, Visualizações, Domínio Espacial, Domínio de Frequências, Segmentações and Estudos. Each menu contains items and sub-items with functionalities that usually take an image as input and produce an image or an attribute as output. DIP reads, edits, and writes binary files containing the 3-D matrix corresponding to a stack of axial images of a given geometry, which can be a human body or another volume of interest. It can also read any type of computational image and perform format conversions. When the task involves only an output image

  8. HOLON/CADSE: integrating open software standards and formal methods to generate guideline-based decision support agents.

    PubMed Central

    Silverman, B. G.; Sokolsky, O.; Tannen, V.; Wong, A.; Lang, L.; Khoury, A.; Campbell, K.; Qiang, C.; Sahuguet, A.

    1999-01-01

    This paper describes the efforts of a consortium that is trying to develop and validate formal methods and a meta-environment for authoring, checking, and maintaining a large repository of machine executable practice guidelines. The goal is to integrate and extend a number of open software standards so that guidelines in the meta-environment become a resource that any vendor can plug their applications into and run in their proprietary environment provided they conform to the interface standards. PMID:10566502

  9. A near-infrared fluorescence-based surgical navigation system imaging software for sentinel lymph node detection

    NASA Astrophysics Data System (ADS)

    Ye, Jinzuo; Chi, Chongwei; Zhang, Shuang; Ma, Xibo; Tian, Jie

    2014-02-01

    Sentinel lymph node (SLN) detection in vivo is vital in breast cancer surgery. This paper presents new imaging software for a near-infrared fluorescence-based surgical navigation system (SNS), developed by our research group for SLN detection surgery. The software is based on the fluorescence-based surgical navigation hardware system (SNHS) developed in our lab and is designed specifically for intraoperative imaging and postoperative data analysis. The surgical navigation imaging software consists of several modules, mainly the control module, the image grabbing module, the real-time display module, the data saving module and the image processing module. Several algorithms have been designed to achieve the required performance, for example an image registration algorithm based on correlation matching. Key features of the software include: setting the control parameters of the SNS; automatically acquiring, displaying and storing the intraoperative imaging data in real time; and analysing and processing the saved image data. The developed software has been used to successfully detect SLNs in 21 breast cancer patients. In the near future, we plan to improve the software performance, and it will then be used extensively for clinical purposes.
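
    Registration by correlation matching, as mentioned above, amounts to scanning a template over a search image and keeping the offset with the highest normalized cross-correlation. The sketch below is a generic, brute-force NumPy version for illustration, not the SNS implementation:

        import numpy as np

        def ncc(a, b):
            """Normalized cross-correlation of two equally sized patches, in [-1, 1]."""
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
            return float((a * b).sum() / denom) if denom > 0 else 0.0

        def register_by_correlation(search_img, template):
            """Return the (row, col) offset where the template best matches."""
            search_img = search_img.astype(float)
            template = template.astype(float)
            H, W = search_img.shape
            h, w = template.shape
            best_score, best_offset = -np.inf, (0, 0)
            for r in range(H - h + 1):
                for c in range(W - w + 1):
                    score = ncc(search_img[r:r + h, c:c + w], template)
                    if score > best_score:
                        best_score, best_offset = score, (r, c)
            return best_offset, best_score

    In practice the search is usually restricted to a small window around the expected position (or carried out in the Fourier domain) to keep the matching fast enough for real-time display.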

  10. White Paper: Access to Standard Computers, Software, and Information Systems by Persons with Disabilities. Revised, Version 2.0.

    ERIC Educational Resources Information Center

    Vanderheiden, Gregg C.

    The paper focuses on low cost and no cost methods to allow access and use (via specialized interface and display aids) by the disabled of standard unmodified computers and of microcomputer software systems becoming increasingly common in daily life. First, relevant characteristics of persons with movement, sensory, hearing, or cognitive…

  11. 25 CFR 547.8 - What are the minimum technical software standards applicable to Class II gaming systems?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... OF CLASS II GAMES § 547.8 What are the minimum technical software standards applicable to Class II... of Class II games. (a) Player interface displays. (1) If not otherwise provided to the player, the player interface shall display the following: (i) The purchase or wager amount; (ii) Game results;...

  12. 25 CFR 547.8 - What are the minimum technical software standards applicable to Class II gaming systems?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... OF CLASS II GAMES § 547.8 What are the minimum technical software standards applicable to Class II... of Class II games. (a) Player interface displays. (1) If not otherwise provided to the player, the player interface shall display the following: (i) The purchase or wager amount; (ii) Game results;...

  13. 25 CFR 547.8 - What are the minimum technical software standards applicable to Class II gaming systems?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... OF CLASS II GAMES § 547.8 What are the minimum technical software standards applicable to Class II... of Class II games. (a) Player interface displays. (1) If not otherwise provided to the player, the player interface shall display the following: (i) The purchase or wager amount; (ii) Game results;...

  14. Grid-less imaging with antiscatter correction software in 2D mammography: the effects on image quality and MGD under a partial virtual clinical validation study

    NASA Astrophysics Data System (ADS)

    Van Peteghem, Nelis; Bemelmans, Frédéric; Bramaje Adversalo, Xenia; Salvagnini, Elena; Marshall, Nicholas; Bosmans, Hilde; Van Ongeval, Chantal

    2016-03-01

    This work investigated the effect of the grid-less acquisition mode with scatter correction software developed by Siemens Healthcare (PRIME mode) on image quality and mean glandular dose (MGD) in a comparative study against a standard mammography system with grid. Image quality was technically quantified with contrast-detail (c-d) analysis and by calculating detectability indices (d') using a non-prewhitening with eye filter model observer (NPWE). MGD was estimated technically using slabs of PMMA and clinically on a set of 11439 patient images. The c-d analysis gave similar results for all mammographic systems examined, although the d' values were slightly lower for the system with PRIME mode when compared to the same system in standard mode (-2.8% to -5.7%, depending on the PMMA thickness). The MGD values corresponding to the PMMA measurements with automatic exposure control indicated a dose reduction from 11.0% to 20.8% for the system with PRIME mode compared to the same system without PRIME mode. The largest dose reductions corresponded to the thinnest PMMA thicknesses. The results from the clinical dosimetry study showed an overall population-averaged dose reduction of 11.6% (up to 27.7% for thinner breasts) for PRIME mode compared to standard mode for breast thicknesses from 20 to 69 mm. These technical image quality measures were then supported using a clinically oriented study whereby simulated clusters of microcalcifications and masses were inserted into patient images and read by radiologists in an AFROC study to quantify their detectability. In line with the technical investigation, no significant difference was found between the two imaging modes (p-value 0.95).

  15. Preliminary studies for a CBCT imaging protocol for offline organ motion analysis: registration software validation and CTDI measurements.

    PubMed

    Falco, Maria Daniela; Fontanarosa, Davide; Miceli, Roberto; Carosi, Alessandra; Santoni, Riccardo; D'Andrea, Marco

    2011-01-01

    Cone-beam X-ray volumetric imaging in the treatment room allows online correction of set-up errors and offline assessment of residual set-up errors and organ motion. In this study the registration algorithm of the X-ray volume imaging software (XVI, Elekta, Crawley, United Kingdom), which manages a commercial cone-beam computed tomography (CBCT)-based positioning system, has been tested using a homemade and an anthropomorphic phantom to: (1) assess its performance in detecting known translational and rotational set-up errors and (2) transfer the transformation matrix of its registrations into a commercial treatment planning system (TPS) for offline organ motion analysis. Furthermore, the CBCT dose index has been measured for a particular site (prostate: 120 kV, 1028.8 mAs, approximately 640 frames) using a standard Perspex cylindrical body phantom (diameter 32 cm, length 15 cm) and a 10-cm-long pencil ionization chamber. We have found that known displacements were correctly calculated by the registration software to within 1.3 mm and 0.4°. For the anthropomorphic phantom, only translational displacements have been considered. Both studies have shown errors within the intrinsic uncertainty of our system for translational displacements (estimated as 0.87 mm) and rotational displacements (estimated as 0.22°). The resulting table translations proposed by the system to correct the displacements were also checked with portal images and found to place the isocenter of the plan on the linac isocenter within an error of 1 mm, which is the dimension of the spherical lead marker inserted at the center of the homemade phantom. The registration matrix translated into the TPS image fusion module correctly reproduced the alignment between planning CT scans and CBCT scans. Finally, measurements of the CBCT dose index indicate that CBCT acquisition delivers less dose than conventional CT scans and electronic portal imaging device portals. The registration software was found to be

  16. Preliminary Studies for a CBCT Imaging Protocol for Offline Organ Motion Analysis: Registration Software Validation and CTDI Measurements

    SciTech Connect

    Falco, Maria Daniela; Fontanarosa, Davide; Miceli, Roberto; Carosi, Alessandra; Santoni, Riccardo; D'Andrea, Marco

    2011-04-01

    Cone-beam X-ray volumetric imaging in the treatment room allows online correction of set-up errors and offline assessment of residual set-up errors and organ motion. In this study the registration algorithm of the X-ray volume imaging software (XVI, Elekta, Crawley, United Kingdom), which manages a commercial cone-beam computed tomography (CBCT)-based positioning system, has been tested using a homemade and an anthropomorphic phantom to: (1) assess its performance in detecting known translational and rotational set-up errors and (2) transfer the transformation matrix of its registrations into a commercial treatment planning system (TPS) for offline organ motion analysis. Furthermore, the CBCT dose index has been measured for a particular site (prostate: 120 kV, 1028.8 mAs, approximately 640 frames) using a standard Perspex cylindrical body phantom (diameter 32 cm, length 15 cm) and a 10-cm-long pencil ionization chamber. We have found that known displacements were correctly calculated by the registration software to within 1.3 mm and 0.4°. For the anthropomorphic phantom, only translational displacements have been considered. Both studies have shown errors within the intrinsic uncertainty of our system for translational displacements (estimated as 0.87 mm) and rotational displacements (estimated as 0.22°). The resulting table translations proposed by the system to correct the displacements were also checked with portal images and found to place the isocenter of the plan on the linac isocenter within an error of 1 mm, which is the dimension of the spherical lead marker inserted at the center of the homemade phantom. The registration matrix translated into the TPS image fusion module correctly reproduced the alignment between planning CT scans and CBCT scans. Finally, measurements of the CBCT dose index indicate that CBCT acquisition delivers less dose than conventional CT scans and electronic portal imaging device portals. The registration software was

  17. Luminosity and contrast normalization in color retinal images based on standard reference image

    NASA Astrophysics Data System (ADS)

    S. Varnousfaderani, Ehsan; Yousefi, Siamak; Belghith, Akram; Goldbaum, Michael H.

    2016-03-01

    Color retinal images are used, manually or automatically, for diagnosis and for monitoring the progression of retinal diseases. Color retinal images show large luminosity and contrast variability within and across images due to the large natural variation in retinal pigmentation and to complex imaging setups. Because the quality of retinal images may affect the performance of automatic screening tools, different normalization methods have been developed to make the data uniform before any further analysis or processing. In this paper we propose a new, reliable method to remove non-uniform illumination in retinal images and to improve their contrast based on the contrast of a reference image. The non-uniform illumination is removed by normalizing the luminance image using the local mean and standard deviation. The contrast is then enhanced by shifting the histograms of the uniformly illuminated retinal image toward the histograms of the reference image so that their peaks align. This process improves the contrast without changing the inter-correlation of pixels in the different color channels. In compliance with the way humans perceive color, the uniform LUV color space is used for normalization. The proposed method was tested extensively on a large dataset of retinal images exhibiting different pathologies, such as exudates, lesions, hemorrhages and cotton-wool spots, under different illumination conditions and imaging setups. The results show that the proposed method successfully equalizes illumination and enhances the contrast of retinal images without adding artifacts.
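
    The illumination-removal step described above, normalizing the luminance by its local mean and local standard deviation, can be sketched as follows. This is a simplified single-channel version with an assumed window size; the full method works on the L channel of LUV and additionally shifts the histograms toward those of a reference image, which is not reproduced here:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def remove_nonuniform_illumination(lum, win=65, eps=1e-6):
            """Normalize a luminance image by its local mean and local standard
            deviation, then rescale to [0, 1]. A large window tracks the slowly
            varying illumination field rather than retinal detail."""
            lum = lum.astype(float)
            local_mean = uniform_filter(lum, win)
            local_sq = uniform_filter(lum ** 2, win)
            local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))
            normalized = (lum - local_mean) / (local_std + eps)
            normalized -= normalized.min()
            return normalized / (normalized.max() + eps)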

  18. An open-source deconvolution software package for 3-D quantitative fluorescence microscopy imaging

    PubMed Central

    SUN, Y.; DAVIS, P.; KOSMACEK, E. A.; IANZINI, F.; MACKEY, M. A.

    2010-01-01

    Summary: Deconvolution techniques have been widely used for restoring the 3-D quantitative information of an unknown specimen observed using a wide-field fluorescence microscope. Deconv, an open-source deconvolution software package, was developed for 3-D quantitative fluorescence microscopy imaging and was released under the GNU Public License. Deconv provides numerical routines for simulation of a 3-D point spread function and deconvolution routines implementing three constrained iterative deconvolution algorithms: one based on a Poisson noise model and two based on a Gaussian noise model. These algorithms are presented and evaluated using synthetic images and experimentally obtained microscope images, and the use of the library is explained. Deconv allows users to assess the utility of these deconvolution algorithms and to determine which are suited for a particular imaging application. The design of Deconv makes it easy for deconvolution capabilities to be incorporated into existing imaging applications. PMID:19941558
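
    The classical constrained iterative scheme for the Poisson noise model is the Richardson-Lucy iteration, and the sketch below shows a generic FFT-based version for illustration (it is not the Deconv code; the PSF is assumed to be centered in an array of the same shape as the image):

        import numpy as np

        def richardson_lucy(image, psf, iterations=30, eps=1e-12):
            """Richardson-Lucy deconvolution: the maximum-likelihood estimate
            under a Poisson noise model, with a non-negativity constraint."""
            image = image.astype(float)
            psf = psf / psf.sum()
            otf = np.fft.rfftn(np.fft.ifftshift(psf))     # transfer function
            estimate = np.full_like(image, image.mean())  # flat starting guess
            for _ in range(iterations):
                blurred = np.fft.irfftn(np.fft.rfftn(estimate) * otf, s=image.shape)
                ratio = image / (blurred + eps)
                correction = np.fft.irfftn(np.fft.rfftn(ratio) * np.conj(otf),
                                           s=image.shape)
                estimate *= np.maximum(correction, 0.0)   # keep estimate >= 0
            return estimate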

  19. MedXViewer: an extensible web-enabled software package for medical imaging

    NASA Astrophysics Data System (ADS)

    Looney, P. T.; Young, K. C.; Mackenzie, Alistair; Halling-Brown, Mark D.

    2014-03-01

    MedXViewer (Medical eXtensible Viewer) is an application designed to allow workstation-independent, PACS-less viewing and interaction with anonymised medical images (e.g. observer studies). The application was initially implemented for use in digital mammography and tomosynthesis but the flexible software design allows it to be easily extended to other imaging modalities. Regions of interest can be identified by a user and any associated information about a mark, an image or a study can be added. The questions and settings can be easily configured depending on the need of the research allowing both ROC and FROC studies to be performed. The extensible nature of the design allows for other functionality and hanging protocols to be available for each study. Panning, windowing, zooming and moving through slices are all available while modality-specific features can be easily enabled e.g. quadrant zooming in mammographic studies. MedXViewer can integrate with a web-based image database allowing results and images to be stored centrally. The software and images can be downloaded remotely from this centralised data-store. Alternatively, the software can run without a network connection where the images and results can be encrypted and stored locally on a machine or external drive. Due to the advanced workstation-style functionality, the simple deployment on heterogeneous systems over the internet without a requirement for administrative access and the ability to utilise a centralised database, MedXViewer has been used for running remote paper-less observer studies and is capable of providing a training infrastructure and co-ordinating remote collaborative viewing sessions (e.g. cancer reviews, interesting cases).

  20. Geoscience data standards, software implementations, and the Internet. Where we came from and where we might be going.

    NASA Astrophysics Data System (ADS)

    Blodgett, D. L.

    2014-12-01

    Geographic information science and the coupled database and software systems that have grown from it have been evolving since the early 1990s. The multi-file shapefile package, invented early in this evolution, is an example of a highly generalized file format that can be used as an archival, interchange, and run-time format. There are other formats, such as GeoTIFF and NetCDF, that have similar characteristics. These de-facto standard formats (in contrast to formally defined and published standards), while not initially designed for machine-readable web services, are used in them extensively. Relying on these formats allows legacy software to be adapted to web services, but may require complicated software development to handle dynamic introspection of these legacy file formats' metadata. A generalized system of web-service types that offer archive, interchange, and run-time capabilities based on commonly implemented file formats and established web-service specifications has emerged from exemplar implementations. For example, an Open Geospatial Consortium (OGC) Web Feature Service is used to serve sites or model polygons and an OGC Sensor Observation Service provides time series data for the sites. The broad system of data formats, web-service types, and freely available software that implements the system will be described. The presentation will include a perspective on the future of this basic system and how it relates to scientific domain specific information models such as the Open Geospatial Consortium standards for geographic, hydrologic, and hydrogeologic data.

  1. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    NASA Astrophysics Data System (ADS)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

    Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in photogrammetry and computer vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured-light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they constitute an attractive 3D digitization approach; consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can easily be used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open source software, and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Given the availability of mobile sensors to the public, the popularity of professional sensors, and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method for generating three-dimensional models. Much research has been carried out to identify suitable software and algorithms for achieving an accurate and complete model, but little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is to investigate and introduce an appropriate combination of sensor and software to provide a complete model with the highest accuracy. To do this, different software, used in previous studies, were compared and

  2. Standardized methods for assessing the imaging quality of intraocular lenses

    NASA Astrophysics Data System (ADS)

    Norrby, N. E. Sverker

    1995-11-01

    The relative merits of three standardized methods for assessing the imaging quality of intraocular lenses are discussed based on theoretical modulation-transfer-function calculations. The standards are ANSI Z80.7 1984 from the American National Standards Institute, now superseded by ANSI Z80.7 1994, and the proposed ISO 11979-2 from the International Organization for Standardization. They entail different test configurations and approval limits, respectively: 60% resolution efficiency in air, 70% resolution efficiency in aqueous humor, and 0.43 modulation at 100 line pairs/mm in a model eye. The ISO working group found that the latter corresponds to 60% resolution efficiency in air in a ring test among eight laboratories on a sample of 39 poly(methyl) methacrylate lenses and four silicone lenses spanning the power (in aqueous humor) range of 10-30 D. In both ANSI Z80.7 1994 and ISO 11979-2, a 60% resolution efficiency in air remains an optional approval limit. It is concluded that the ISO configuration is preferred, because it puts the intraocular lens into the context of the optics of the eye. Note that the ISO standard is tentative and is currently being voted on.

  3. Standardized methods for assessing the imaging quality of intraocular lenses.

    PubMed

    Norrby, N E

    1995-11-01

    The relative merits of three standardized methods for assessing the imaging quality of intraocular lenses are discussed based on theoretical modulation-transfer-function calculations. The standards are ANSI Z80.7 1984 from the American National Standards Institute, now superseded by ANSI Z80.7 1994, and the proposed ISO 11979-2 from the International Organization for Standardization. They entail different test configurations and approval limits, respectively: 60% resolution efficiency in air, 70% resolution efficiency in aqueous humor, and 0.43 modulation at 100 line pairs/mm in a model eye. The ISO working group found that the latter corresponds to 60% resolution efficiency in air in a ring test among eight laboratories on a sample of 39 poly(methyl) methacrylate lenses and four silicone lenses spanning the power (in aqueous humor) range of 10-30 D. In both ANSI Z80.7 1994 and ISO 11979-2, a 60% resolution efficiency in air remains an optional approval limit. It is concluded that the ISO configuration is preferred, because it puts the intraocular lens into the context of the optics of the eye. Note that the ISO standard is tentative and is currently being voted on. PMID:21060604

  4. Despeckle filtering software toolbox for ultrasound imaging of the common carotid artery.

    PubMed

    Loizou, Christos P; Theofanous, Charoula; Pantziaris, Marios; Kasparis, Takis

    2014-04-01

    Ultrasound imaging of the common carotid artery (CCA) is a non-invasive tool used in medicine to assess the severity of atherosclerosis and monitor its progression through time. It is also used in border detection and texture characterization of the atherosclerotic carotid plaque in the CCA, and in the identification and measurement of the intima-media thickness (IMT) and the lumen diameter, all of which are very important in the assessment of cardiovascular disease (CVD). Visual perception, however, is hindered by speckle, a multiplicative noise that degrades the quality of ultrasound B-mode imaging. Noise reduction is therefore essential for improving the visual observation quality or as a pre-processing step for further automated analysis, such as image segmentation of the IMT and the atherosclerotic carotid plaque in ultrasound images. In order to facilitate this preprocessing step, we have developed in MATLAB® a unified toolbox that integrates image despeckle filtering (IDF), texture analysis and image quality evaluation techniques to automate the pre-processing and complement the disease evaluation in ultrasound CCA images. The proposed software is based on a graphical user interface (GUI) and incorporates image normalization, 10 different despeckle filtering techniques (DsFlsmv, DsFwiener, DsFlsminsc, DsFkuwahara, DsFgf, DsFmedian, DsFhmedian, DsFad, DsFnldif, DsFsrad), image intensity normalization, 65 texture features, 15 quantitative image quality metrics and objective image quality evaluation. The software is publicly available in an executable form, which can be downloaded from http://www.cs.ucy.ac.cy/medinfo/. It was validated on 100 ultrasound images of the CCA by comparing its results with quantitative visual analysis performed by a medical expert. It was observed that the despeckle filters DsFlsmv and DsFhmedian improved image quality perception (based on the expert's assessment and the image texture and quality metrics). It is anticipated that the
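
    One of the simplest members of the local-statistics despeckle family listed above (the DsFlsmv type) is a Lee-style filter, sketched below for a single image with an assumed window size and a crude noise estimate; the toolbox's own parameterization and the other nine filters are not reproduced:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lee_despeckle(img, win=7, eps=1e-12):
            """Lee-type filter for multiplicative speckle:
            x' = m + k * (x - m), with k from local signal/noise variance."""
            img = img.astype(float)
            mean = uniform_filter(img, win)
            sq_mean = uniform_filter(img ** 2, win)
            var = np.maximum(sq_mean - mean ** 2, 0.0)
            # Crude estimate of the speckle (multiplicative noise) variance as
            # the median of the local (std/mean)^2 ratio over the image.
            noise_var = np.median(var / (mean ** 2 + eps))
            signal_var = np.maximum(var - noise_var * mean ** 2, 0.0)
            k = signal_var / (var + eps)
            return mean + k * (img - mean)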

  5. The Spectral Image Processing System (SIPS): Software for integrated analysis of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.

    1992-01-01

    The Spectral Image Processing System (SIPS) is a software package developed by the Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, in response to a perceived need to provide integrated tools for analysis of imaging spectrometer data both spectrally and spatially. SIPS was specifically designed to deal with data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the High Resolution Imaging Spectrometer (HIRIS), but was tested with other datasets including the Geophysical and Environmental Research Imaging Spectrometer (GERIS), GEOSCAN images, and Landsat TM. SIPS was developed using the 'Interactive Data Language' (IDL). It takes advantage of high speed disk access and fast processors running under the UNIX operating system to provide rapid analysis of entire imaging spectrometer datasets. SIPS allows analysis of single or multiple imaging spectrometer data segments at full spatial and spectral resolution. It also allows visualization and interactive analysis of image cubes derived from quantitative analysis procedures such as absorption band characterization and spectral unmixing. SIPS consists of three modules: SIPS Utilities, SIPS_View, and SIPS Analysis. SIPS version 1.1 is described below.

  6. Oxygen octahedra picker: A software tool to extract quantitative information from STEM images.

    PubMed

    Wang, Yi; Salzberger, Ute; Sigle, Wilfried; Eren Suyolcu, Y; van Aken, Peter A

    2016-09-01

    In perovskite-oxide-based materials and heterostructures there are often strong correlations between oxygen octahedral distortions and functionality. Thus, atomistic understanding of the octahedral distortion, which requires accurate measurements of atomic column positions, will greatly help to engineer their properties. Here, we report the development of a software tool to extract quantitative information on the lattice and on BO6 octahedral distortions from STEM images. Center-of-mass and 2D Gaussian fitting methods are implemented to locate positions of individual atom columns. The precision of atomic column distance measurements is evaluated on both simulated and experimental images. The application of the software tool is demonstrated using practical examples. PMID:27344044
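
    The 2D Gaussian fitting option mentioned above can be illustrated with a short routine that refines an initial integer column position to sub-pixel precision inside a small window; the symmetric Gaussian model, window radius and starting values are assumptions, and the tool's own implementation (and its center-of-mass alternative) may differ:

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian_2d(coords, amp, r0, c0, sigma, offset):
            """Symmetric 2D Gaussian evaluated on (row, col) grids, flattened."""
            r, c = coords
            g = amp * np.exp(-((r - r0) ** 2 + (c - c0) ** 2) / (2.0 * sigma ** 2))
            return (g + offset).ravel()

        def fit_column_position(image, row0, col0, radius=5):
            """Sub-pixel atomic-column position from a 2D Gaussian fit around
            an initial integer estimate (bright columns assumed)."""
            win = image[row0 - radius: row0 + radius + 1,
                        col0 - radius: col0 + radius + 1].astype(float)
            rows, cols = np.indices(win.shape)
            p0 = (win.max() - win.min(), radius, radius, radius / 2.0, win.min())
            popt, _ = curve_fit(gaussian_2d, (rows, cols), win.ravel(), p0=p0)
            _, r_fit, c_fit, _, _ = popt
            return row0 - radius + r_fit, col0 - radius + c_fit

    Distances and angles between the refined B-site and oxygen column positions then quantify the octahedral distortions of interest.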

  7. The Image-Guided Surgery ToolKit IGSTK: an open source C++ software toolkit

    NASA Astrophysics Data System (ADS)

    Cheng, Peng; Ibanez, Luis; Gobbi, David; Gary, Kevin; Aylward, Stephen; Jomier, Julien; Enquobahrie, Andinet; Zhang, Hui; Kim, Hee-su; Blake, M. Brian; Cleary, Kevin

    2007-03-01

    The Image-Guided Surgery Toolkit (IGSTK) is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. The focus of the toolkit is on robustness using a state machine architecture. This paper presents an overview of the project based on a recent book which can be downloaded from igstk.org. The paper includes an introduction to open source projects, a discussion of our software development process and the best practices that were developed, and an overview of requirements. The paper also presents the architecture framework and main components. This presentation is followed by a discussion of the state machine model that was incorporated and the associated rationale. The paper concludes with an example application.

  8. Using MATLAB software with Tomcat server and Java platform for remote image analysis in pathology

    PubMed Central

    2011-01-01

    Background: The Matlab software is one of the most advanced development tools for applications in engineering practice. From our point of view the most important component is the Image Processing Toolbox, which offers many built-in functions, including mathematical morphology, as well as implementations of many artificial neural networks. It is a very popular platform for creating specialized programs for image analysis, including in pathology. Based on the latest version of the Matlab Builder Java toolbox, it is possible to create software serving as a remote system for image analysis in pathology via internet communication. The internet platform can be realized using JavaServer Pages (JSP) with a Tomcat server as the servlet container. Methods: In the presented software implementation we propose remote image analysis realized by Matlab algorithms. These algorithms can be compiled to an executable jar file with the help of the Matlab Builder Java toolbox. Each Matlab function must be declared with a set of input data, an output structure with numerical results, and a Matlab web figure. Any function prepared in this manner can be used as a Java function in JSP. The graphical user interface providing the input data and displaying the results (also in graphical form) must be implemented in JSP. Additionally, data storage to a database can be implemented within the Matlab algorithm, with the help of the Matlab Database Toolbox, directly alongside the image processing. The complete JSP page can be run by the Tomcat server. Results: The proposed tool for remote image analysis was tested on the Computerized Analysis of Medical Images (CAMI) software developed by the author. The user provides the image and case information (diagnosis, staining, image parameters, etc.). When an analysis is initialized, the input data and image are sent to the servlet on Tomcat. When the analysis is done, the client obtains the graphical results as an image with the recognized cells marked, along with the quantitative output. Additionally, the

  9. Development of a software based automatic exposure control system for use in image guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Morton, Daniel R.

    Modern image-guided radiation therapy involves the use of an isocentrically mounted imaging system to take radiographs of a patient's position before the start of each treatment. Image guidance helps to minimize errors associated with patient setup, but the radiation dose received by patients from imaging must be managed to ensure that it introduces no additional risk. The Varian On-Board Imager (OBI) (Varian Medical Systems, Inc., Palo Alto, CA) does not have an automatic exposure control system and therefore requires exposure factors to be selected manually. Without patient-specific exposure factors, images may become saturated and require multiple unnecessary exposures. A software-based automatic exposure control system has been developed to predict optimal, patient-specific exposure factors. The OBI system was modelled in terms of the x-ray tube output and detector response in order to calculate the level of detector saturation for any exposure situation. Digitally reconstructed radiographs are produced via ray-tracing through the patient's volumetric datasets acquired for treatment planning. The ray-trace determines the attenuation of the patient and the subsequent x-ray spectra incident on the imaging detector. The resulting spectra are used in the detector response model to determine the exposure levels required to minimize detector saturation. Images calculated for various phantoms showed good agreement with the images acquired on the OBI. Overall, regions of detector saturation were accurately predicted, and the detector response for non-saturated regions in images of an anthropomorphic phantom was generally calculated to within 5 to 10% of the measured values. Calculations performed on patient data gave results similar to those for the phantom images, with the calculated images able to predict detector saturation in close agreement with images acquired during treatment. Overall, it was shown that the system model and calculation
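
    The attenuation step in such a calculation can be illustrated with a parallel-beam Beer-Lambert line integral through a voxelized attenuation map. The real system traces divergent rays through the planning CT and folds the result through a spectral detector-response model, so the monoenergetic, axis-aligned sketch below (with an assumed attenuation coefficient) is a strong simplification:

        import numpy as np

        def parallel_beam_transmission(mu_volume, voxel_size_mm, axis=0):
            """Fraction of primary photons transmitted along axis-aligned rays
            through a linear-attenuation-coefficient volume (Beer-Lambert),
            at a single photon energy. Returns a 2D transmission image."""
            path_integral = mu_volume.sum(axis=axis) * voxel_size_mm   # sum of mu * dl
            return np.exp(-path_integral)

        # Example: 200 mm of water-like material (mu ~ 0.02 /mm assumed) -> ~ e^-4
        mu = np.full((200, 64, 64), 0.02)
        print(parallel_beam_transmission(mu, voxel_size_mm=1.0).mean())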

  10. Performing Quantitative Imaging Acquisition, Analysis and Visualization Using the Best of Open Source and Commercial Software Solutions

    PubMed Central

    Shenoy, Shailesh M.

    2016-01-01

    A challenge in any imaging laboratory, especially one that uses modern techniques, is to achieve a sustainable and productive balance between using open source and commercial software to perform quantitative image acquisition, analysis and visualization. In addition to considering the expense of software licensing, one must consider factors such as the quality and usefulness of the software’s support, training and documentation. Also, one must consider the reproducibility with which multiple people generate results using the same software to perform the same analysis, how one may distribute their methods to the community using the software and the potential for achieving automation to improve productivity. PMID:27516727

  11. TiLIA: a software package for image analysis of firefly flash patterns.

    PubMed

    Konno, Junsuke; Hatta-Ohashi, Yoko; Akiyoshi, Ryutaro; Thancharoen, Anchana; Silalom, Somyot; Sakchoowong, Watana; Yiu, Vor; Ohba, Nobuyoshi; Suzuki, Hirobumi

    2016-05-01

    As flash signaling patterns of fireflies are species specific, signal-pattern analysis is important for understanding this system of communication. Here, we present time-lapse image analysis (TiLIA), a free open-source software package for signal and flight pattern analyses of fireflies that uses video-recorded image data. TiLIA enables flight path tracing of individual fireflies and provides frame-by-frame coordinates and light intensity data. As an example of TiLIA capabilities, we demonstrate flash pattern analysis of the fireflies Luciola cruciata and L. lateralis during courtship behavior. PMID:27069594

  12. 2D-CELL: image processing software for extraction and analysis of 2-dimensional cellular structures

    NASA Astrophysics Data System (ADS)

    Righetti, F.; Telley, H.; Leibling, Th. M.; Mocellin, A.

    1992-01-01

    2D-CELL is a software package for processing and analyzing photographic images of cellular structures in a largely interactive way. Starting from a binary digitized image, the programs extract the line network (skeleton) of the structure and determine the graph representation that best models it. Provision is made for manually correcting defects such as incorrect node positions or dangling bonds. Then a suitable algorithm retrieves polygonal contours which define individual cells; local boundary curvatures are neglected for simplicity. Using elementary analytical geometry relations, a range of metric and topological parameters describing the population is then computed, organized into statistical distributions and graphically displayed.
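
    A minimal Python sketch of the skeleton-extraction and cell-labelling step described above, using scikit-image on a toy binary wall image; the graph construction, manual defect correction, and statistical analysis performed by 2D-CELL are not reproduced here, and the toy image is invented.

        # Toy wall network -> skeleton and per-cell measurements (scikit-image assumed available)
        import numpy as np
        from skimage.morphology import skeletonize
        from skimage.measure import label, regionprops

        walls = np.zeros((61, 61), dtype=bool)        # 2 x 2 grid of cells with thick walls
        for r in (0, 30, 60):
            walls[max(r - 1, 0):r + 2, :] = True
            walls[:, max(r - 1, 0):r + 2] = True

        skeleton = skeletonize(walls)                 # one-pixel-wide line network
        cells = label(~walls)                         # individual cells = enclosed regions
        for region in regionprops(cells):
            print(f"cell {region.label}: area={region.area}, perimeter={region.perimeter:.1f}")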

  13. A software tool for automatic classification and segmentation of 2D/3D medical images

    NASA Astrophysics Data System (ADS)

    Strzelecki, Michal; Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur

    2013-02-01

    Modern medical diagnosis utilizes techniques of visualization of human internal organs (CT, MRI) or of their metabolism (PET). However, evaluation of the acquired images by human experts is usually subjective and qualitative only. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform, e.g., attenuation compensation, motion detection, and correction of the partial volume effect in PET images acquired with PET/MR scanners. This article briefly presents the MaZda software package, which supports 2D and 3D medical image analysis aimed at quantification of image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes combined with various classification, visualization and segmentation tools. Examples of MaZda application in medical studies are also provided.
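
    Texture attributes of the kind MaZda computes can be illustrated with a grey-level co-occurrence matrix, assuming a recent scikit-image is available; MaZda's own, much larger feature set and its selection and classification tools are not shown, and the region of interest below is synthetic.

        # GLCM-based texture attributes on a synthetic ROI (requires a recent scikit-image)
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        rng = np.random.default_rng(0)
        roi = rng.integers(0, 64, size=(32, 32), dtype=np.uint8)     # toy 6-bit region of interest

        glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                            levels=64, symmetric=True, normed=True)
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            print(prop, graycoprops(glcm, prop).ravel())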

  14. SOFI Simulation Tool: A Software Package for Simulating and Testing Super-Resolution Optical Fluctuation Imaging.

    PubMed

    Girsault, Arik; Lukes, Tomas; Sharipov, Azat; Geissbuehler, Stefan; Leutenegger, Marcel; Vandenberg, Wim; Dedecker, Peter; Hofkens, Johan; Lasser, Theo

    2016-01-01

    Super-resolution optical fluctuation imaging (SOFI) allows one to perform sub-diffraction fluorescence microscopy of living cells. By analyzing the acquired image sequence with an advanced correlation method, i.e. a high-order cross-cumulant analysis, super-resolution in all three spatial dimensions can be achieved. Here we introduce a software tool for a simple qualitative comparison of SOFI images under simulated conditions considering parameters of the microscope setup and essential properties of the biological sample. This tool incorporates SOFI and STORM algorithms, displays and describes the SOFI image processing steps in a tutorial-like fashion. Fast testing of various parameters simplifies the parameter optimization prior to experimental work. The performance of the simulation tool is demonstrated by comparing simulated results with experimentally acquired data. PMID:27583365
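
    The core idea behind a second-order SOFI image, a temporal cumulant of the per-pixel intensity fluctuations, can be sketched in a few lines of NumPy; this is a simplified stand-in rather than the tool's full cross-cumulant implementation, and the image stack below is simulated.

        # Second-order auto-cumulant (variance of fluctuations) per pixel, on a simulated stack
        import numpy as np

        def sofi2(stack):
            """stack: (frames, ny, nx) fluorescence image sequence."""
            fluctuations = stack - stack.mean(axis=0)        # remove the mean image
            return np.mean(fluctuations ** 2, axis=0)        # 2nd-order temporal cumulant per pixel

        rng = np.random.default_rng(1)
        stack = rng.poisson(lam=50, size=(500, 64, 64)).astype(float)
        print(sofi2(stack).shape)                            # -> (64, 64)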

  15. AIRS: The Medical Imaging Software for Segmentation and Registration in SPECT/CT

    NASA Astrophysics Data System (ADS)

    Widita, R.; Kurniadi, R.; Haryanto, F.; Darma, Y.; Perkasa, Y. S.; Zasneda, S. S.

    2010-06-01

    We have successfully developed new software, Automated Image Registration and Segmentation (AIRS), to fuse CT and SPECT images. It is designed to solve the different registration and segmentation problems that arise in tomographic data sets. AIRS is intended to obtain anatomic information to be applied to the NanoSpect system, which images nano-tissues or small animals. It will be demonstrated that the information obtained by SPECT/CT is more accurate in evaluating patients/objects than that obtained from either SPECT or CT alone. The registration methods developed here cover both two-dimensional and three-dimensional registration. We used normalized mutual information (NMI), which is well suited to images produced by different modalities and having unclear boundaries between tissues. The segmentation components used in this software are region growing algorithms, which have proven to be an effective approach to image segmentation. The implementations of region growing developed here are connected threshold and neighborhood connected. Our method is designed to perform with clinically acceptable speed, using accelerated techniques (multiresolution).
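
    A connected-threshold region-growing step of the kind mentioned above can be sketched with SciPy; this is not the AIRS implementation, and the seed position, thresholds, and toy image are invented for illustration.

        # Connected-threshold region growing on a toy image (not the AIRS implementation)
        import numpy as np
        from scipy import ndimage

        def connected_threshold(image, seed, lower, upper):
            """Keep the connected component containing `seed` whose values lie in [lower, upper]."""
            mask = (image >= lower) & (image <= upper)
            labels, _ = ndimage.label(mask)
            return labels == labels[seed]

        image = np.zeros((64, 64))
        image[20:40, 20:40] = 100 + np.random.default_rng(2).normal(0, 3, (20, 20))
        region = connected_threshold(image, seed=(30, 30), lower=90, upper=110)
        print(region.sum(), "pixels in the grown region")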

  16. Image 100 procedures manual development: Applications system library definition and Image 100 software definition

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Decell, H. P., Jr.

    1975-01-01

    An outline for an Image 100 procedures manual for Earth Resources Program image analysis was developed which sets forth guidelines that provide a basis for the preparation and updating of an Image 100 Procedures Manual. The scope of the outline was limited to definition of general features of a procedures manual together with special features of an interactive system. Computer programs were identified which should be implemented as part of an applications oriented library for the system.

  17. Space Station Software Issues

    NASA Technical Reports Server (NTRS)

    Voigt, S. (Editor); Beskenis, S. (Editor)

    1985-01-01

    Issues in the development of software for the Space Station are discussed. Software acquisition and management, software development environment, standards, information system support for software developers, and a future software advisory board are addressed.

  18. [Development of DICOM image viewing software for efficient image reading and evaluation of distributed server system for diagnostic environment].

    PubMed

    Ishikawa, K

    2000-12-01

    To construct an efficient diagnostic environment using computer displays, the author investigated the time of network transmission using clinical images. In our hospital, we introduced optical-fiber 100Base-Fx Ethernet connections between 22 HIS-segments and one RIS-segment. Although Ethernet architecture is inexpensive, the speed of image transmission becomes 2371 KB/sec. (4.6 CT-slice/sec.) in the RIS-segment and 996 KB/sec. (1.9 CT-slice/sec.) from the RIS-segment to HIS-segments. Because one examination is transmitted in one minute, it does not disturb image reading. Otherwise, a distributed server system using inexpensive personal computers helps in constructing an efficient system. This investigation showed that commercially based Digital Imaging and Communications in Medicine (DICOM) servers and RSNA Central Test Node servers are not so different in transmission speed. The author programmed and developed DICOM transmission and viewing software for Macintosh computers. This viewer includes two inventions, dynamic tiling window system (DTWS) and window binding mode (WBM). On DTWS, windows, tiles, and images are independent objects, which are movable and resizable. The tile-matrix is changeable by mouse dragging, which realizes suitable tile rectangles for wide-low or narrow-high images. The arranging window tool prevents windows from scattering. Using WBM, any operation affects each window similarly. This means that the relationship of compared images is always equivalent. DTWS and WBM contribute greatly to a filmless diagnostic environment. PMID:11197836

  19. Automated Scoring of Chromogenic Media for Detection of Methicillin-Resistant Staphylococcus aureus by Use of WASPLab Image Analysis Software

    PubMed Central

    Faron, Matthew L.; Vismara, Chiara; Lacchini, Carla; Bielli, Alessandra; Gesu, Giovanni; Liebregts, Theo; van Bree, Anita; Jansz, Arjan; Soucy, Genevieve; Korver, John

    2015-01-01

    Recently, systems have been developed to create total laboratory automation for clinical microbiology. These systems allow for the automation of specimen processing, specimen incubation, and imaging of bacterial growth. In this study, we used the WASPLab to validate software that discriminates and segregates positive and negative chromogenic methicillin-resistant Staphylococcus aureus (MRSA) plates by recognition of pigmented colonies. A total of 57,690 swabs submitted for MRSA screening were enrolled in the study. Four sites enrolled specimens following their standard of care. Chromogenic agar used at these sites included MRSASelect (Bio-Rad Laboratories, Redmond, WA), chromID MRSA (bioMérieux, Marcy l'Etoile, France), and CHROMagar MRSA (BD Diagnostics, Sparks, MD). Specimens were plated and incubated using the WASPLab. The digital camera took images at 0 and 16 to 24 h, and the WASPLab software determined the presence of positive colonies based on a hue, saturation, and value (HSV) score. If the HSV score fell within a defined threshold, the plate was called positive. The performance of the digital analysis was compared to manual reading. Overall, the digital software had a sensitivity of 100% and a specificity of 90.7%, with the specificity ranging between 90.0% and 96.0% across all sites. The results were similar using the three different agars, with a sensitivity of 100% and specificity ranging between 90.7% and 92.4%. These data demonstrate that automated digital analysis can be used to accurately sort positive from negative chromogenic agar cultures regardless of the pigmentation produced. PMID:26719443
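
    A rough Python sketch of HSV-window scoring as described above: the plate image is converted to HSV and called positive if enough pixels fall inside a pigment window. The actual WASPLab thresholds are not public, so the hue window, saturation and value cut-offs, and pixel count below are invented for illustration.

        # HSV-window colony scoring on a synthetic plate image (thresholds are invented)
        import numpy as np
        from skimage.color import rgb2hsv

        def plate_is_positive(rgb_image, hue_window=(0.80, 0.95),
                              min_saturation=0.4, min_value=0.3, min_pixels=50):
            hsv = rgb2hsv(rgb_image)
            in_window = ((hsv[..., 0] >= hue_window[0]) & (hsv[..., 0] <= hue_window[1]) &
                         (hsv[..., 1] >= min_saturation) & (hsv[..., 2] >= min_value))
            return int(in_window.sum()) >= min_pixels

        plate = np.zeros((100, 100, 3))
        plate[40:50, 40:50] = (0.6, 0.1, 0.5)        # a small pigmented "colony"
        print("positive" if plate_is_positive(plate) else "negative")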

  20. Automated Scoring of Chromogenic Media for Detection of Methicillin-Resistant Staphylococcus aureus by Use of WASPLab Image Analysis Software.

    PubMed

    Faron, Matthew L; Buchan, Blake W; Vismara, Chiara; Lacchini, Carla; Bielli, Alessandra; Gesu, Giovanni; Liebregts, Theo; van Bree, Anita; Jansz, Arjan; Soucy, Genevieve; Korver, John; Ledeboer, Nathan A

    2016-03-01

    Recently, systems have been developed to create total laboratory automation for clinical microbiology. These systems allow for the automation of specimen processing, specimen incubation, and imaging of bacterial growth. In this study, we used the WASPLab to validate software that discriminates and segregates positive and negative chromogenic methicillin-resistant Staphylococcus aureus (MRSA) plates by recognition of pigmented colonies. A total of 57,690 swabs submitted for MRSA screening were enrolled in the study. Four sites enrolled specimens following their standard of care. Chromogenic agar used at these sites included MRSASelect (Bio-Rad Laboratories, Redmond, WA), chromID MRSA (bioMérieux, Marcy l'Etoile, France), and CHROMagar MRSA (BD Diagnostics, Sparks, MD). Specimens were plated and incubated using the WASPLab. The digital camera took images at 0 and 16 to 24 h, and the WASPLab software determined the presence of positive colonies based on a hue, saturation, and value (HSV) score. If the HSV score fell within a defined threshold, the plate was called positive. The performance of the digital analysis was compared to manual reading. Overall, the digital software had a sensitivity of 100% and a specificity of 90.7%, with the specificity ranging between 90.0% and 96.0% across all sites. The results were similar using the three different agars, with a sensitivity of 100% and specificity ranging between 90.7% and 92.4%. These data demonstrate that automated digital analysis can be used to accurately sort positive from negative chromogenic agar cultures regardless of the pigmentation produced. PMID:26719443

  1. The Human Physiome: how standards, software and innovative service infrastructures are providing the building blocks to make it achievable.

    PubMed

    Nickerson, David; Atalag, Koray; de Bono, Bernard; Geiger, Jörg; Goble, Carole; Hollmann, Susanne; Lonien, Joachim; Müller, Wolfgang; Regierer, Babette; Stanford, Natalie J; Golebiewski, Martin; Hunter, Peter

    2016-04-01

    Reconstructing and understanding the Human Physiome virtually is a complex mathematical problem, and a highly demanding computational challenge. Mathematical models spanning from the molecular level through to whole populations of individuals must be integrated, then personalized. This requires interoperability with multiple disparate and geographically separated data sources, and myriad computational software tools. Extracting and producing knowledge from such sources, even when the databases and software are readily available, is a challenging task. Despite the difficulties, researchers must frequently perform these tasks so that available knowledge can be continually integrated into the common framework required to realize the Human Physiome. Software and infrastructures that support the communities that generate these, together with their underlying standards to format, describe and interlink the corresponding data and computer models, are pivotal to the Human Physiome being realized. They provide the foundations for integrating, exchanging and re-using data and models efficiently, and correctly, while also supporting the dissemination of growing knowledge in these forms. In this paper, we explore the standards, software tooling, repositories and infrastructures that support this work, and detail what makes them vital to realizing the Human Physiome. PMID:27051515

  2. The standardization of super resolution optical microscopic images based on DICOM

    NASA Astrophysics Data System (ADS)

    Xia, Wei; Gao, Xin

    2015-03-01

    Super resolution optical microscopy allows the capture of images with a resolution higher than the diffraction limit. However, due to the lack of a standard format, the processing, visualization, transfer, and exchange of Super Resolution Optical Microscope (SROM) images are inconvenient. In this work, we present an approach to standardize SROM images based on the Digital Imaging and Communication in Medicine (DICOM) standard. The SROM images and associated information are encapsulated and converted to DICOM images based on the Visible Light Microscopic Image Information Object Definition of DICOM. The newly generated SROM images in DICOM format can be displayed, processed, transferred, and exchanged using most medical image processing tools.

  3. Software and hardware integration of a microprogrammable state machine for NMR imaging.

    PubMed

    Stewart, B K; Pratt, R G; Thomas, S R; Dieckman, S L; Ridgway, T H

    1991-01-01

    We have integrated a commercially available microprogrammable state machine (Tecmag PULSkit) for use as a magnetic resonance pulse programmer. Providing the capability for active research environment imaging protocols, it features timing resolution of 100 nsec, ten 16-bit loop counters, and individually addressable look-up tables. This integration involved hardware and software integration with a VAX 11/750 at several levels. Hardware: Each of the three gradient channels employs three digital-to-analog converters (DACs). An 8-bit, 4-quadrant, multiplying DAC generates the gradient waveform shape. A 12-bit DAC generates the multiplying DAC scaling voltage, controlling gradient amplitude and sign. A third 12-bit DAC produces a gradient offset (shim) voltage. An eddy current compensation network is present for each gradient channel. Software: The software design philosophy was to create a flexible interface (interactive window environment), while not constraining complex manipulation of the hardware (direct use of the pulse-sequence compiler primitives and microprogramming). The software levels include (a) pulse-sequence microprogramming, (b) pulse-sequence compiler, (c) interactive parameter specification, and (d) canned pulse-sequence microcode library. PMID:1779734

  4. Development of an Open Source Image-Based Flow Modeling Software - SimVascular

    NASA Astrophysics Data System (ADS)

    Updegrove, Adam; Merkow, Jameson; Schiavazzi, Daniele; Wilson, Nathan; Marsden, Alison; Shadden, Shawn

    2014-11-01

    SimVascular (www.simvascular.org) is currently the only comprehensive software package that provides a complete pipeline from medical image data segmentation to patient-specific blood flow simulation. This software and its derivatives have been used in hundreds of conference abstracts and peer-reviewed journal articles, as well as serving as the foundation of medical startups. SimVascular was initially released in August 2007, yet major challenges and deterrents for new adopters were the requirement of licensing three expensive commercial libraries utilized by the software, a complicated build process, and a lack of documentation, support and organized maintenance. In the past year, the SimVascular team has made significant progress in integrating open source alternatives for the linear solver, solid modeling, and mesh generation commercial libraries required by the original public release. In addition, the build system, available distributions, and graphical user interface have been significantly enhanced. Finally, the software has been updated to enable users to directly run simulations using models and boundary condition values included in the Vascular Model Repository (vascularmodel.org). In this presentation we will briefly overview the capabilities of the new SimVascular 2.0 release. National Science Foundation.

  5. Simplified preparation of TO14 and Title III air toxic standards using a Windows software package and dynamic dilution schemes

    SciTech Connect

    Cardin, D.B.; Galoustian, E.A.

    1994-12-31

    The preparation of Air Toxic standards in the laboratory can be performed using several methods. These include injection of purge and trap standards, static dilution from pure compounds, and dynamic dilution from NIST traceable standards. A software package running under Windows has been developed that makes calculating dilution parameters for even complex mixtures fast and simple. Compound parameters such as name, molecular weight, boiling point, and density are saved in a database for later access. Gas and liquid mixtures can be easily defined and saved as inventory items, with preparation screens that calculate appropriate transfer volumes of each analyte. These mixtures can be utilized by both the static and dynamic dilution analysis windows to calculate proper flow rates and injection volumes for obtaining requested concentrations. A particularly useful approach for making accurate polar VOC standards will be presented.
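
    The basic arithmetic such a package automates can be illustrated for the dynamic dilution case, where the outlet concentration scales with the ratio of the standard flow to the total flow; the function name and the example flows and concentration below are illustrative only, not values from the described package.

        # Dynamic dilution: outlet concentration from standard and diluent flows (illustrative numbers)
        def dynamic_dilution_ppb(stock_ppb, standard_flow_sccm, diluent_flow_sccm):
            return stock_ppb * standard_flow_sccm / (standard_flow_sccm + diluent_flow_sccm)

        # e.g. a 1000 ppb stock diluted at 10 sccm into 990 sccm of diluent gas
        print(dynamic_dilution_ppb(1000.0, 10.0, 990.0), "ppb at the outlet")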

  6. CONRAD—A software framework for cone-beam imaging in radiology

    SciTech Connect

    Maier, Andreas; Choi, Jang-Hwan; Riess, Christian; Keil, Andreas; Fahrig, Rebecca; Hofmann, Hannes G.; Berger, Martin; Fischer, Peter; Schwemmer, Chris; Wu, Haibo; Müller, Kerstin; Hornegger, Joachim

    2013-11-15

    Purpose: In the community of x-ray imaging, there is a multitude of tools and applications that are used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools that are required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects. Methods: CONRAD is a Java-based state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL. There is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and respective vector motion fields. Well known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source. Results: A total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74,000 nonblank lines of code out of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a table top system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art runtimes with 2.5 s for the 256 problem size and 12.4 s for the 512 problem size. Conclusions: As a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular this offers new opportunities for scientific collaboration and
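
    CONRAD itself is a Java framework, but the filtered back-projection (FBP) reconstruction it includes can be illustrated compactly with the parallel-beam radon/iradon pair in a recent scikit-image; the Python sketch below is a conceptual stand-in, not CONRAD code, and covers only FBP, not DBP or ART.

        # Parallel-beam FBP round trip on the Shepp-Logan phantom (requires a recent scikit-image)
        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon

        phantom = shepp_logan_phantom()
        angles = np.linspace(0.0, 180.0, 180, endpoint=False)
        sinogram = radon(phantom, theta=angles)                              # forward projection
        reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")  # filtered back-projection
        print("RMS error:", np.sqrt(np.mean((reconstruction - phantom) ** 2)))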

  7. CAVASS: a computer-assisted visualization and analysis software system - image processing aspects

    NASA Astrophysics Data System (ADS)

    Udupa, Jayaram K.; Grevera, George J.; Odhner, Dewey; Zhuge, Ying; Souza, Andre; Mishra, Shipra; Iwanaga, Tad

    2007-03-01

    The development of the concepts within 3DVIEWNIX and of the software system 3DVIEWNIX itself dates back to the 1970s. Since then, a series of software packages for Computer Assisted Visualization and Analysis (CAVA) of images came out from our group, 3DVIEWNIX released in 1993, being the most recent, and all were distributed with source code. CAVASS, an open source system, is the latest in this series, and represents the next major incarnation of 3DVIEWNIX. It incorporates four groups of operations: IMAGE PROCESSING (including ROI, interpolation, filtering, segmentation, registration, morphological, and algebraic operations), VISUALIZATION (including slice display, reslicing, MIP, surface rendering, and volume rendering), MANIPULATION (for modifying structures and surgery simulation), ANALYSIS (various ways of extracting quantitative information). CAVASS is designed to work on all platforms. Its key features are: (1) most major CAVA operations incorporated; (2) very efficient algorithms and their highly efficient implementations; (3) parallelized algorithms for computationally intensive operations; (4) parallel implementation via distributed computing on a cluster of PCs; (5) interface to other systems such as CAD/CAM software, ITK, and statistical packages; (6) easy to use GUI. In this paper, we focus on the image processing operations and compare the performance of CAVASS with that of ITK. Our conclusions based on assessing performance by utilizing a regular (6 MB), large (241 MB), and a super (873 MB) 3D image data set are as follows: CAVASS is considerably more efficient than ITK, especially in those operations which are computationally intensive. It can handle considerably larger data sets than ITK. It is easy and ready to use in applications since it provides an easy to use GUI. The users can easily build a cluster from ordinary inexpensive PCs and reap the full power of CAVASS inexpensively compared to expensive multiprocessing systems which are less

  8. 76 FR 43724 - In the Matter of Certain Digital Imaging Devices and Related Software; Notice of Commission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-21

    ... Cupertino, California (``Apple''). 75 FR 28058 (May 19, 2010). The complaint alleged ] violations of section... COMMISSION In the Matter of Certain Digital Imaging Devices and Related Software; Notice of Commission... related software by reason of infringement of various claims of United States Patent Nos. 6,031,964 and...

  9. The role of camera-bundled image management software in the consumer digital imaging value chain

    NASA Astrophysics Data System (ADS)

    Mueller, Milton; Mundkur, Anuradha; Balasubramanian, Ashok; Chirania, Virat

    2005-02-01

    This research was undertaken by the Convergence Center at the Syracuse University School of Information Studies (www.digital-convergence.info). Project ICONICA, the name for the research, focuses on the strategic implications of digital Images and the CONvergence of Image management and image CApture. Consumer imaging - the activity that we once called "photography" - is now recognized as in the throes of a digital transformation. At the end of 2003, market researchers estimated that about 30% of the households in the U.S. and 40% of the households in Japan owned digital cameras. In 2004, of the 86 million new cameras sold (excluding one-time use cameras), a majority (56%) were estimated to be digital cameras. Sales of photographic film, while still profitable, are declining precipitously.

  10. Creation of 4D imaging data using open source image registration software

    NASA Astrophysics Data System (ADS)

    Wong, Kenneth H.; Ibanez, Luis; Popa, Teo; Cleary, Kevin

    2006-03-01

    4D images (3 spatial dimensions plus time) using CT or MRI will play a key role in radiation medicine as techniques for respiratory motion compensation become more widely available. Advance knowledge of the motion of a tumor and its surrounding anatomy will allow the creation of highly conformal dose distributions in organs such as the lung, liver, and pancreas. However, many of the current investigations into 4D imaging rely on synchronizing the image acquisition with an external respiratory signal such as skin motion, tidal flow, or lung volume, which typically requires specialized hardware and modifications to the scanner. We propose a novel method for 4D image acquisition that does not require any specific gating equipment and is based solely on open source image registration algorithms. Specifically, we use the Insight Toolkit (ITK) to compute the normalized mutual information (NMI) between images taken at different times and use that value as an index of respiratory phase. This method has the advantages of (1) being able to be implemented without any hardware modification to the scanner, and (2) basing the respiratory phase on changes in internal anatomy rather than external signal. We have demonstrated the capabilities of this method with CT fluoroscopy data acquired from a swine model.
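
    The phase index described above reduces to a normalized mutual information (NMI) score between a reference frame and each subsequent frame; the original work used ITK, but a compact NumPy version of Studholme's NMI conveys the idea. The images below are synthetic, and the shifted frame merely stands in for a frame at a different respiratory phase.

        # Normalized mutual information between a reference frame and another frame (synthetic data)
        import numpy as np

        def nmi(a, b, bins=64):
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
            hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
            hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
            return (hx + hy) / hxy                    # Studholme's normalised mutual information

        rng = np.random.default_rng(3)
        reference = rng.random((128, 128))
        shifted = np.roll(reference, 4, axis=0)       # stand-in for a frame at a different phase
        print(nmi(reference, reference), nmi(reference, shifted))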

  11. Comparison of Perfusion- and Diffusion-weighted Imaging Parameters in Brain Tumor Studies Processed Using Different Software Platforms

    PubMed Central

    Milchenko, Mikhail V.; Rajderkar, Dhanashree; LaMontagne, Pamela; Massoumzadeh, Parinaz; Bogdasarian, Ronald; Schweitzer, Gordon; Benzinger, Tammie; Marcus, Dan; Shimony, Joshua S.; Fouke, Sarah Jost

    2015-01-01

    Rationale and Objectives To compare quantitative imaging parameter measures from diffusion- and perfusion-weighted imaging magnetic resonance imaging (MRI) sequences in subjects with brain tumors that have been processed with different software platforms. Materials and Methods Scans from 20 subjects with primary brain tumors were selected from the Comprehensive Neuro-oncology Data Repository at Washington University School of Medicine (WUSM) and the Swedish Neuroscience Institute. MR images were coregistered, and each subject's data set was processed by three software packages: 1) vendor-specific scanner software, 2) research software developed at WUSM, and 3) a commercially available, Food and Drug Administration–approved, processing platform (Nordic Ice). Regions of interest (ROIs) were chosen within the brain tumor and normal nontumor tissue. The results obtained using these methods were compared. Results For diffusion parameters, including mean diffusivity and fractional anisotropy, concordance was high when comparing different processing methods. For perfusion-imaging parameters, a significant variance in cerebral blood volume, cerebral blood flow, and mean transit time (MTT) values was seen when comparing the same raw data processed using different software platforms. Correlation was better with larger ROIs (radii ≥ 5 mm). Greatest variance was observed in MTT. Conclusions Diffusion parameter values were consistent across different software processing platforms. Perfusion parameter values were more variable and were influenced by the software used. Variation in the MTT was especially large suggesting that MTT estimation may be unreliable in tumor tissues using current MRI perfusion methods. PMID:25088833

  12. Mississippi Company Using NASA Software Program to Provide Unique Imaging Service: DATASTAR Success Story

    NASA Technical Reports Server (NTRS)

    2001-01-01

    DATASTAR, Inc., of Picayune, Miss., has taken NASA's award-winning Earth Resources Laboratory Applications (ELAS) software program and evolved it to the point that the company is now providing a unique, spatial imagery service over the Internet. ELAS was developed in the early 80's to process satellite and airborne sensor imagery data of the Earth's surface into readable and useable information. While there are several software packages on the market that allow the manipulation of spatial data into useable products, this is usually a laborious task. The new program, called the DATASTAR Image Processing Exploitation, or DIPX, Delivery Service, is a subscription service available over the Internet that takes the work out of the equation and provides normalized geo-spatial data in the form of decision products.

  13. Advances in hardware, software, and automation for 193nm aerial image measurement systems

    NASA Astrophysics Data System (ADS)

    Zibold, Axel M.; Schmid, R.; Seyfarth, A.; Waechter, M.; Harnisch, W.; Doornmalen, H. v.

    2005-05-01

    A new, second generation AIMS fab 193 system has been developed which is capable of emulating lithographic imaging of any type of reticles such as binary and phase shift masks (PSM) including resolution enhancement technologies (RET) such as optical proximity correction (OPC) or scatter bars. The system emulates the imaging process by adjustment of the lithography equivalent illumination and imaging conditions of 193nm wafer steppers including circular, annular, dipole and quadrupole type illumination modes. The AIMS fab 193 allows a rapid prediction of wafer printability of critical mask features, including dense patterns and contacts, defects or repairs by acquiring through-focus image stacks by means of a CCD camera followed by quantitative image analysis. Moreover the technology can be readily applied to directly determine the process window of a given mask under stepper imaging conditions. Since data acquisition is performed electronically, AIMS in many applications replaces the need for costly and time consuming wafer prints using a wafer stepper/ scanner followed by CD SEM resist or wafer analysis. The AIMS fab 193 second generation system is designed for 193nm lithography mask printing predictability down to the 65nm node. In addition to hardware improvements a new modular AIMS software is introduced allowing for a fully automated operation mode. Multiple pre-defined points can be visited and through-focus AIMS measurements can be executed automatically in a recipe based mode. To increase the effectiveness of the automated operation mode, the throughput of the system to locate the area of interest, and to acquire the through-focus images is increased by almost a factor of two in comparison with the first generation AIMS systems. In addition a new software plug-in concept is realised for the tools. One new feature has been successfully introduced as "Global CD Map", enabling automated investigation of global mask quality based on the local determination of

  14. Features of the Upgraded Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) Software

    NASA Technical Reports Server (NTRS)

    Mason, Michelle L.; Rufer, Shann J.

    2016-01-01

    The Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) software is used at the NASA Langley Research Center to analyze global aeroheating data on wind tunnel models tested in the Langley Aerothermodynamics Laboratory. One-dimensional, semi-infinite heating data derived from IHEAT are used in the design of thermal protection systems for hypersonic vehicles that are exposed to severe aeroheating loads, such as reentry vehicles during descent and landing procedures. This software program originally was written in the PV-WAVE® programming language to analyze phosphor thermography data from the two-color, relative-intensity system developed at Langley. To increase the efficiency, functionality, and reliability of IHEAT, the program was migrated to MATLAB® syntax and compiled as a stand-alone executable file labeled version 4.0. New features of IHEAT 4.0 include the options to perform diagnostic checks of the accuracy of the acquired data during a wind tunnel test, to extract data along a specified multi-segment line following a feature such as a leading edge or a streamline, and to batch process all of the temporal frame data from a wind tunnel run. Results from IHEAT 4.0 were compared on a pixel level to the output images from the legacy software to validate the program. The absolute differences between the heat transfer data output from the two programs were on the order of 10^-5 to 10^-7. IHEAT 4.0 replaces the PV-WAVE® version as the production software for aeroheating experiments conducted in the hypersonic facilities at NASA Langley.
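
    The "one-dimensional, semi-infinite" data reduction mentioned above is commonly done with the classical constant-h step-heating relation (T_w - T_i)/(T_aw - T_i) = 1 - exp(b^2) erfc(b), with b = h sqrt(t) / sqrt(rho c k). The Python sketch below inverts that relation for the heat-transfer coefficient h; it is a generic illustration of the relation, not IHEAT's implementation, and the material properties and temperatures are invented.

        # Invert the 1-D semi-infinite step-heating relation for h (generic illustration, not IHEAT code)
        import numpy as np
        from scipy.special import erfcx               # erfcx(b) = exp(b**2) * erfc(b)
        from scipy.optimize import brentq

        def heat_transfer_coefficient(T_wall, T_initial, T_aw, t, rho, c, k):
            theta = (T_wall - T_initial) / (T_aw - T_initial)
            f = lambda h: (1.0 - erfcx(h * np.sqrt(t) / np.sqrt(rho * c * k))) - theta
            return brentq(f, 1e-3, 1e5)               # W/(m^2 K)

        # invented fused-silica-like substrate properties, 0.5 s into a run
        h = heat_transfer_coefficient(T_wall=350.0, T_initial=300.0, T_aw=600.0,
                                      t=0.5, rho=2200.0, c=750.0, k=1.4)
        print(f"h = {h:.0f} W/(m^2 K)")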

  15. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable fast turn-around times, which are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803

  16. Quantitative phase-flow MR imaging in dogs by using standard sequences: comparison with in vivo flow-meter measurements.

    PubMed

    Pettigrew, R I; Dannels, W; Galloway, J R; Pearson, T; Millikan, W; Henderson, J M; Peterson, J; Bernardino, M E

    1987-02-01

    For evaluation of the feasibility and clinical potential of using the phase data from standard MR imaging sequences to measure blood flow, 11 vessels with diameters of 4 to 7 mm were imaged in seven dogs. The flow in either the superior mesenteric vein or the inferior vena cava was measured first at laparotomy (in ml/min) with electromagnetic flow meters. Immediately thereafter, these vessels were imaged by MR in 25-mm thick sections by using a standard spin echo (SE) 750/30 sequence with a Philips 0.5-T imager. Previous phase-flow calibration of the imager and sequence allowed calculation of the blood flow rates from the phase images that were used to measure the vessels' cross-sectional areas and blood phase values. Comparison of the measurements obtained with each technique showed a significant correlation (r = .977, p less than .05) between MR-imaging values and flow-meter measurements when the blood velocity was less than approximately 40 cm/sec, the known upper limit of the flow dynamic range for the MR hardware and sequence used. There was no correlation for blood velocities greater than 40 cm/sec. However, the range of blood flow velocities in dogs and man extends to more than 100 cm/sec. Thus, these results suggest that this technique might yield valuable adjunctive flow data in routine clinical imaging provided that improvements in hardware and software permit a larger dynamic range. PMID:2948376

  17. Reliability and reproducibility of macular segmentation using a custom-built optical coherence tomography retinal image analysis software

    NASA Astrophysics Data System (ADS)

    Cabrera Debuc, Delia; Somfai, Gábor Márk; Ranganathan, Sudarshan; Tátrai, Erika; Ferencz, Mária; Puliafito, Carmen A.

    2009-11-01

    We determine the reliability and reproducibility of retinal thickness measurements with custom-built OCT retinal image analysis software (OCTRIMA). Ten eyes of five healthy subjects undergo repeated standard macular thickness map scan sessions by two experienced examiners using a Stratus OCT device. Automatic/semi-automatic thickness quantification of the macula and intraretinal layers is performed using OCTRIMA software. Intraobserver, interobserver, and intervisit repeatability and reproducibility coefficients, and intraclass correlation coefficients (ICCs) per scan are calculated. Intraobserver, interobserver, and intervisit variability combined account for less than 5% of total variability for the total retinal thickness measurements and less than 7% for the intraretinal layers except the outer segment/retinal pigment epithelium (RPE) junction. There is no significant difference between scans acquired by different observers or during different visits. The ICCs obtained for the intraobserver and intervisit variability tests are greater than 0.75 for the total retina and all intraretinal layers, except the inner nuclear layer intraobserver and interobserver test and the outer plexiform layer, intraobserver, interobserver, and intervisit test. Our results indicate that thickness measurements for the total retina and all intraretinal layers (except the outer segment/RPE junction) performed using OCTRIMA are highly repeatable and reproducible.

  18. A complete software application for automatic registration of x-ray mammography and magnetic resonance images

    SciTech Connect

    Solves-Llorens, J. A.; Rupérez, M. J. Monserrat, C.; Lloret, M.

    2014-08-15

    Purpose: This work presents a complete and automatic software application to aid radiologists in breast cancer diagnosis. The application is a fully automated method that performs a complete registration of magnetic resonance (MR) images and x-ray (XR) images in both directions (from MR to XR and from XR to MR) and for both x-ray mammogram views, craniocaudal (CC) and mediolateral oblique (MLO). This new approach allows radiologists to mark points in the MR images and, without any manual intervention, it provides their corresponding points in both types of XR mammograms and vice versa. Methods: The application automatically segments magnetic resonance images and x-ray images using the C-Means method and the Otsu method, respectively. It compresses the magnetic resonance images in both directions, CC and MLO, using a biomechanical model of the breast that distinguishes the specific biomechanical behavior of each one of its three tissues (skin, fat, and glandular tissue) separately. It makes a projection of both compressions and registers them with the original XR images using affine transformations and nonrigid registration methods. Results: The application has been validated by two expert radiologists. This was carried out through a quantitative validation on 14 data sets in which the Euclidean distance between points marked by the radiologists and the corresponding points obtained by the application was measured. The results showed a mean error of 4.2 ± 1.9 mm for the MRI to CC registration, 4.8 ± 1.3 mm for the MRI to MLO registration, and 4.1 ± 1.3 mm for the CC and MLO to MRI registration. Conclusions: A complete software application that automatically registers XR and MR images of the breast has been implemented. The application permits radiologists to estimate the position of a lesion that is suspected of being a tumor in an imaging modality based on its position in another modality with a clinically acceptable error. The results show that the
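
    The Otsu thresholding step used for the x-ray images above is available directly in scikit-image; the Python sketch below thresholds a synthetic bimodal image, while the C-Means segmentation, the biomechanical compression model, and the registration steps are far more involved and are not shown.

        # Otsu thresholding of a synthetic bimodal image (scikit-image)
        import numpy as np
        from skimage.filters import threshold_otsu

        rng = np.random.default_rng(4)
        image = np.concatenate([rng.normal(40, 5, 5000), rng.normal(160, 10, 5000)]).reshape(100, 100)
        t = threshold_otsu(image)
        mask = image > t
        print(f"Otsu threshold = {t:.1f}, foreground fraction = {mask.mean():.2f}")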

  19. Gemini planet imager integration to the Gemini South telescope software environment

    NASA Astrophysics Data System (ADS)

    Rantakyrö, Fredrik T.; Cardwell, Andrew; Chilcote, Jeffrey; Dunn, Jennifer; Goodsell, Stephen; Hibon, Pascale; Macintosh, Bruce; Quiroz, Carlos; Perrin, Marshall D.; Sadakuni, Naru; Saddlemyer, Leslie; Savransky, Dmitry; Serio, Andrew; Winge, Claudia; Galvez, Ramon; Gausachs, Gaston; Hardie, Kayla; Hartung, Markus; Luhrs, Javier; Poyneer, Lisa; Thomas, Sandrine

    2014-08-01

    The Gemini Planet Imager is an extreme AO instrument with an integral field spectrograph (IFS) operating in Y, J, H, and K bands. Both the Gemini telescope and the GPI instrument are very complex systems. Our goal is that the combined telescope and instrument system may be run by one observer operating the instrument, and one operator controlling the telescope and the acquisition of light to the instrument. This requires a smooth integration between the two systems and easily operated control interfaces. We discuss the definition of the software and hardware interfaces, their implementation and testing, and the integration of the instrument with the telescope environment.

  20. X-ray volumetric imaging in image-guided radiotherapy: The new standard in on-treatment imaging

    SciTech Connect

    McBain, Catherine A.; Henry, Ann M. . E-mail: catherine.mcbain@christie-tr.nwest.nhs.uk; Sykes, Jonathan; Amer, Ali; Marchant, Tom; Moore, Christopher M.; Davies, Julie; Stratford, Julia; McCarthy, Claire; Porritt, Bridget; Williams, Peter; Khoo, Vincent S.; Price, Pat

    2006-02-01

    Purpose: X-ray volumetric imaging (XVI) for the first time allows for the on-treatment acquisition of three-dimensional (3D) kV cone beam computed tomography (CT) images. Clinical imaging using the Synergy System (Elekta, Crawley, UK) commenced in July 2003. This study evaluated image quality and dose delivered and assessed clinical utility for treatment verification at a range of anatomic sites. Methods and Materials: Single XVIs were acquired from 30 patients undergoing radiotherapy for tumors at 10 different anatomic sites. Patients were imaged in their setup position. Radiation doses received were measured using TLDs on the skin surface. The utility of XVI in verifying target volume coverage was qualitatively assessed by experienced clinicians. Results: X-ray volumetric imaging acquisition was completed in the treatment position at all anatomic sites. At sites where a full gantry rotation was not possible, XVIs were reconstructed from projection images acquired from partial rotations. Soft-tissue definition of organ boundaries allowed direct assessment of 3D target volume coverage at all sites. Individual image quality depended on both imaging parameters and patient characteristics. Radiation dose ranged from 0.003 Gy in the head to 0.03 Gy in the pelvis. Conclusions: On-treatment XVI provided 3D verification images with soft-tissue definition at all anatomic sites at acceptably low radiation doses. This technology sets a new standard in treatment verification and will facilitate novel adaptive radiotherapy techniques.

  1. Multithreaded real-time 3D image processing software architecture and implementation

    NASA Astrophysics Data System (ADS)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. The player provides user interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located on the right image through the use of block matching. The difference in the positions between the corresponding regions in the left and right images is then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting is performed in response to user zoom and pan. The player also includes a CPU display thread, which uses OpenGL rendering (quad buffers). This also gathers user input for digital zoom and pan and sends them to the processing thread.
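
    A stripped-down Python version of the disparity estimation outlined above: a patch around a keypoint in the left image is matched against horizontally shifted candidates in the right image by sum of absolute differences, and the disparity range is read off the histogram of the per-keypoint disparities. The CUDA/OpenGL machinery is omitted, and the stereo pair below is simulated with a simple horizontal shift.

        # Horizontal SAD block matching and disparity-histogram range on a simulated stereo pair
        import numpy as np

        def match_disparity(left, right, y, x, block=8, max_disp=32):
            patch = left[y:y + block, x:x + block].astype(float)
            costs = [np.abs(patch - right[y:y + block, x - d:x - d + block]).sum()
                     for d in range(max_disp)]
            return int(np.argmin(costs))

        rng = np.random.default_rng(5)
        right = rng.random((120, 160))
        left = np.roll(right, 7, axis=1)              # left image = right image shifted by 7 px
        disparities = [match_disparity(left, right, y, x)
                       for y in range(20, 100, 10) for x in range(40, 140, 10)]
        hist, _ = np.histogram(disparities, bins=np.arange(0, 34))
        occupied = np.flatnonzero(hist)
        print("estimated disparity range:", occupied.min(), "-", occupied.max())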

  2. Spiked proteomic standard dataset for testing label-free quantitative software and statistical methods

    PubMed Central

    Ramus, Claire; Hovasse, Agnès; Marcellin, Marlène; Hesse, Anne-Marie; Mouton-Barbosa, Emmanuelle; Bouyssié, David; Vaca, Sebastian; Carapito, Christine; Chaoui, Karima; Bruley, Christophe; Garin, Jérôme; Cianférani, Sarah; Ferro, Myriam; Dorssaeler, Alain Van; Burlet-Schiltz, Odile; Schaeffer, Christine; Couté, Yohann; Gonzalez de Peredo, Anne

    2015-01-01

    This data article describes a controlled, spiked proteomic dataset for which the “ground truth” of variant proteins is known. It is based on the LC-MS analysis of samples composed of a fixed background of yeast lysate and different spiked amounts of the UPS1 mixture of 48 recombinant proteins. It can be used to objectively evaluate bioinformatic pipelines for label-free quantitative analysis, and their ability to detect variant proteins with good sensitivity and low false discovery rate in large-scale proteomic studies. More specifically, it can be useful for tuning software tools parameters, but also testing new algorithms for label-free quantitative analysis, or for evaluation of downstream statistical methods. The raw MS files can be downloaded from ProteomeXchange with identifier PXD001819. Starting from some raw files of this dataset, we also provide here some processed data obtained through various bioinformatics tools (including MaxQuant, Skyline, MFPaQ, IRMa-hEIDI and Scaffold) in different workflows, to exemplify the use of such data in the context of software benchmarking, as discussed in details in the accompanying manuscript [1]. The experimental design used here for data processing takes advantage of the different spike levels introduced in the samples composing the dataset, and processed data are merged in a single file to facilitate the evaluation and illustration of software tools results for the detection of variant proteins with different absolute expression levels and fold change values. PMID:26862574

  3. Review of free software tools for image analysis of fluorescence cell micrographs.

    PubMed

    Wiesmann, V; Franz, D; Held, C; Münzenmayer, C; Palmisano, R; Wittenberg, T

    2015-01-01

    An increasing number of free software tools have been made available for the evaluation of fluorescence cell micrographs. The main users are biologists and related life scientists with no or little knowledge of image processing. In this review, we give an overview of available tools and guidelines about which tools the users should use to segment fluorescence micrographs. We selected 15 free tools and divided them into stand-alone, Matlab-based, ImageJ-based, free demo versions of commercial tools and data sharing tools. The review consists of two parts: First, we developed a criteria catalogue and rated the tools regarding structural requirements, functionality (flexibility, segmentation and image processing filters) and usability (documentation, data management, usability and visualization). Second, we performed an image processing case study with four representative fluorescence micrograph segmentation tasks with figure-ground and cell separation. The tools display a wide range of functionality and usability. In the image processing case study, we were able to perform figure-ground separation in all micrographs using mainly thresholding. Cell separation was not possible with most of the tools, because cell separation methods are provided only by a subset of the tools and are difficult to parametrize and to use. Most important is that the usability matches the functionality of a tool. To be usable, specialized tools with less functionality need to fulfill less usability criteria, whereas multipurpose tools need a well-structured menu and intuitive graphical user interface. PMID:25359577

  4. MIA - A free and open source software for gray scale medical image analysis

    PubMed Central

    2013-01-01

    Background Gray scale images make up the bulk of the data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is taken care of automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time intensive and not appropriate for a researcher with little or no knowledge of software development. Another alternative is using command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only a few tools exist that provide this kind of processing interface; they are usually quite task specific and do not provide a clear approach when one wants to shape a new command line tool from a prototype shell script. Results The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell

  5. Standard Reticle Slide To Objectively Evaluate Spatial Resolution and Instrument Performance in Imaging Mass Spectrometry.

    PubMed

    Zubair, Faizan; Prentice, Boone M; Norris, Jeremy L; Laibinis, Paul E; Caprioli, Richard M

    2016-07-19

    Spatial resolution is a key parameter in imaging mass spectrometry (IMS). Aside from being a primary determinant of overall image quality, spatial resolution has important consequences for the acquisition time of the IMS experiment and the resulting file size. Hardware and software modifications during instrumentation development can dramatically affect the spatial resolution achievable using a given imaging mass spectrometer. As such, an accurate and objective method to determine the working spatial resolution is needed to guide instrument development and ensure quality IMS results. We have used lithographic and self-assembly techniques to fabricate a pattern of crystal violet as a standard reticle slide for assessing spatial resolution in matrix-assisted laser desorption/ionization (MALDI) IMS experiments. The reticle is used to evaluate spatial resolution under user-defined instrumental conditions. Edge-spread analysis measures the beam diameter for a Gaussian profile, and line scans measure an "effective" spatial resolution that is a convolution of beam optics and sampling frequency. The patterned crystal violet reticle was also used to diagnose issues with IMS instrumentation such as intermittent losses of pixel data. PMID:27299987
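
    Edge-spread analysis of the kind used above amounts to fitting an error-function profile to the signal recorded across the edge of the patterned region; the sigma of the underlying Gaussian then gives the beam size. The Python sketch below fits such a profile to simulated data, not instrument measurements, and the model parameters are illustrative.

        # Fit an error-function profile across a simulated edge scan to estimate the beam sigma
        import numpy as np
        from scipy.special import erf
        from scipy.optimize import curve_fit

        def edge_model(x, amplitude, center, sigma, offset):
            return offset + 0.5 * amplitude * (1 + erf((x - center) / (sigma * np.sqrt(2))))

        x = np.arange(0, 200, 5.0)                                   # position across the edge (um)
        rng = np.random.default_rng(6)
        y = edge_model(x, 1000.0, 100.0, 12.0, 50.0) + rng.normal(0, 10, x.size)
        popt, _ = curve_fit(edge_model, x, y, p0=(800, 90, 20, 0))
        print(f"fitted sigma = {popt[2]:.1f} um, FWHM = {2.355 * popt[2]:.1f} um")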

  6. Implementation of multi-vendor DICOM standard image transfer in hospital wide ATM network.

    PubMed

    Kimura, M; Tani, S; Baatar, S; Naito, Y; Kanno, T; Sakusabe, T; Aizawa, M

    1998-01-01

    At Hamamatsu University Hospital, an ATM + FDDI network was installed in January 1995, when the hospital information system was upgraded. With its unique 'wheel'-shaped configuration, FDDI automatically backs up in case of an ATM switch failure. The authors implemented a DICOM image database server and DICOM viewer in the Hamamatsu University Hospital ATM + FDDI network. The DICOM standard worked well between different vendor products. In sending 512 x 512, 2 byte CT images, 40% of the transfer time was spent on the network data transfer, which is 70% of the theoretical value of the 10 Mb peripheral transfer rate. Meanwhile, the ATM load factor increased by less than 0.5%. As we have a very fast data transfer network, we must check display speed, hard disc access time, PC bus speed, and display software, in order to enjoy the high speed network transfer fully. The sequence of image transmission within a study is not stated in the DICOM document and depends on the server. Therefore, there should be an agreement between server and clients, going beyond DICOM itself, in order to build a better PACS. PMID:9804003

  7. The wavelet/scalar quantization compression standard for digital fingerprint images

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
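
    The two core ingredients of the WSQ scheme, a wavelet decomposition followed by uniform scalar quantization of the subbands, can be sketched with PyWavelets; the actual standard prescribes a specific 9/7 filter bank, subband structure, bit allocation, and entropy coder, so the wavelet, step size, and synthetic "fingerprint" below are placeholders only.

        # Wavelet decomposition + uniform scalar quantization round trip (PyWavelets; not actual WSQ)
        import numpy as np
        import pywt

        rng = np.random.default_rng(7)
        fingerprint = rng.normal(128, 30, (256, 256))        # stand-in for a grey-scale print

        coeffs = pywt.wavedec2(fingerprint, "bior4.4", level=3)
        step = 8.0
        quantized = [np.round(coeffs[0] / step)]
        quantized += [tuple(np.round(band / step) for band in detail) for detail in coeffs[1:]]

        dequantized = [quantized[0] * step] + [tuple(b * step for b in d) for d in quantized[1:]]
        restored = pywt.waverec2(dequantized, "bior4.4")
        print("RMS error after quantization:", np.sqrt(np.mean((restored - fingerprint) ** 2)))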

  8. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2003-01-01

    A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. described an approach for producing hierarchical segmentations (called HSEG) and gave a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included comparing RHSEG with classic

  9. A Multimodality Imaging and Software System for Combining an Anatomical and Physiological Assessment of Skin and Underlying Tissue Conditions

    PubMed Central

    Langemo, Diane; Spahn, James G.

    2016-01-01

    ABSTRACT OBJECTIVE: The timely and accurate assessment of skin and underlying tissue is crucial for making informed decisions relating to wound development and existing wounds. The study objective was to determine within- and between-reader agreement of Scout Visual-to-Thermal Overlay (WoundVision LLC, Indianapolis, Indiana) placement (moving the wound edge trace from the visual image onto the wound edge signature of the infrared image). MATERIALS AND METHODS: For establishing within- and between-reader agreement of the Scout Visual-to-Thermal Overlay feature, 5 different readers overlaid a wound edge trace from the visual image and placed it onto the congruent thermal representation of the wound on a thermal image 3 independent times. Forty different wound image pairs were evaluated by each reader. All readers were trained by the same trainer on the operation of the Scout prior to using the software features. The Scout Visual-to-Thermal Overlay feature allows clinicians to use an anatomical measurement of the wound on the visual image (area and perimeter) to extract a congruent physiological measurement of the wound on the thermal image (thermal intensity variation data) by taking the wound edge trace from the visual image and overlaying it onto the corresponding thermal signature of the same wound edge. RESULTS: The results are very similar both within- and between-readers. The coefficient of variation (CV) for the mean PV both within- and between-readers averages less than 1% (0.89% and 0.77%, respectively). When converted into degrees Celsius across all 5 readers and all 3 wound replicates, the average temperature differential is 0.28 °C (Table 2). The largest difference observed was 0.63 °C and the smallest difference observed was 0.04 °C. CONCLUSIONS: The Scout software's Visual-to-Thermal Overlay procedure, as implemented in this study, is very precise. This study demonstrates that the thermal signature of wounds may be delineated repeatedly by the same

  10. A medical software system for volumetric analysis of cerebral pathologies in magnetic resonance imaging (MRI) data.

    PubMed

    Egger, Jan; Kappus, Christoph; Freisleben, Bernd; Nimsky, Christopher

    2012-08-01

    In this contribution, a medical software system for volumetric analysis of different cerebral pathologies in magnetic resonance imaging (MRI) data is presented. The software system is based on a semi-automatic segmentation algorithm and helps to overcome the time-consuming process of volume determination during monitoring of a patient. After imaging, the parameter settings, including a seed point, are set up in the system and an automatic segmentation is performed by a novel graph-based approach. Manually reviewing the result leads to reseeding, adding seed points or an automatic surface mesh generation. The mesh is saved for monitoring the patient and for comparisons with follow-up scans. Based on the mesh, the system performs a voxelization and volume calculation, which leads to diagnosis and therefore further treatment decisions. The overall system has been tested with different cerebral pathologies (glioblastoma multiforme, pituitary adenomas and cerebral aneurysms) and evaluated against manual expert segmentations using the Dice Similarity Coefficient (DSC). Additionally, intra-physician segmentations have been performed to provide a quality measure for the presented system. PMID:21384268
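
    A minimal sketch of the Dice Similarity Coefficient used for the evaluation is shown below, assuming the automatic and manual segmentations are available as boolean voxel masks of the same shape; the masks here are synthetic placeholders.

```python
# Dice Similarity Coefficient between two segmentations given as boolean
# voxel masks of equal shape; the example masks are synthetic.
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2 |A intersect B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((64, 64, 64), dtype=bool);   auto[20:40, 20:40, 20:40] = True
manual = np.zeros((64, 64, 64), dtype=bool); manual[22:42, 20:40, 20:40] = True
print(f"DSC = {dice(auto, manual):.3f}")
```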

  11. SNARK09 - a software package for reconstruction of 2D images from 1D projections.

    PubMed

    Klukowska, Joanna; Davidi, Ran; Herman, Gabor T

    2013-06-01

    The problem of reconstruction of slices and volumes from 1D and 2D projections has arisen in a large number of scientific fields (including computerized tomography, electron microscopy, X-ray microscopy, radiology, radio astronomy and holography). Many different methods (algorithms) have been suggested for its solution. In this paper we present a software package, SNARK09, for reconstruction of 2D images from their 1D projections. In the area of image reconstruction, researchers often desire to compare two or more reconstruction techniques and assess their relative merits. SNARK09 provides a uniform framework to implement algorithms and evaluate their performance. It has been designed to treat both parallel and divergent projection geometries and can either create test data (with or without noise) for use by reconstruction algorithms or use data collected by other software or a physical device. A number of frequently used classical reconstruction algorithms are incorporated. The package provides a means for easy incorporation of new algorithms for their testing, comparison and evaluation. It comes with tools for statistical analysis of the results and ten worked examples. PMID:23414602
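
    For readers unfamiliar with the reconstruction-from-projections problem that SNARK09 addresses, the sketch below simulates 1D parallel-beam projections of a 2D phantom and reconstructs the image by filtered back projection. It uses scikit-image (assuming a recent version with the filter_name argument) rather than SNARK09 itself.

```python
# Parallel-beam projection and filtered back-projection reconstruction of a
# standard test phantom, using scikit-image as a stand-in for SNARK09.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.5)           # 2-D test image
angles = np.linspace(0.0, 180.0, 180, endpoint=False)   # projection angles (degrees)

sinogram = radon(phantom, theta=angles)                  # simulated 1-D projections
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

error = np.sqrt(np.mean((reconstruction - phantom) ** 2))
print(f"RMS reconstruction error: {error:.4f}")
```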

  12. Functional magnetic resonance imaging for cranial neuronavigation: methods for automated and standardized data processing and management. A technical note.

    PubMed

    Nennig, E; Heiland, S; Rasche, D; Sartor, K; Stippich, C

    2007-04-30

    Preoperative fMRI is one of the best established clinical fMRI applications. Because of the difficulties in recording and coregistering functional image data, we present methods to standardize and automate these procedures. We used a self-made interactive software package (AFI - Automated Functional Imaging) to automate the time-consuming and complex analysis of fMRI data. AFI controls the BrainVoyager program, a postprocessing software package, and furthermore facilitates data management, anonymization of patient data, storage, documentation, data export to neuronavigation systems, and spatial transformation of image data for use in group studies. By the end of 2006, we had used this method on 123 patients with brain tumors and 47 patients with trigeminal neuralgia. The fundamental basis of multimodal neuronavigation is precise coregistration. EPI images contain spatial distortions of 5-15 mm. We were able to reduce the misregistration of EPI and FLASH images in a selectable region of interest to 1-2 mm. Furthermore, AFI reduces the average evaluation time for a standard clinical fMRI study (four functional measurements, one anatomical data set) by approx. 50%, from 140 minutes to about 70 minutes, in comparison to manual evaluation by an expert. More importantly, the personal attendance time required for the evaluation decreases by 84% to 23 minutes, as the remainder of the program runs automatically. In comparison to currently available online postprocessing software tools, which are more limited in use, BrainVoyager can be used for coregistration, data export to neuronavigation systems and spatial transformation. PMID:24299636

  13. Measuring the Pain Area: An Intra- and Inter-Rater Reliability Study Using Image Analysis Software.

    PubMed

    Dos Reis, Felipe Jose Jandre; de Barros E Silva, Veronica; de Lucena, Raphaela Nunes; Mendes Cardoso, Bruno Alexandre; Nogueira, Leandro Calazans

    2016-01-01

    Pain drawings have frequently been used for clinical information and research. The aim of this study was to investigate intra- and inter-rater reliability of area measurements performed on pain drawings. Our secondary objective was to verify the reliability when using computers with different screen sizes, both with and without mouse hardware. Pain drawings were completed by patients with chronic neck pain or neck-shoulder-arm pain. Four independent examiners participated in the study. Examiners A and B used the same computer with a 16-inch screen and wired mouse hardware. Examiner C used a notebook with a 16-inch screen and no mouse hardware, and Examiner D used a computer with an 11.6-inch screen and a wireless mouse. Image measurements were obtained using the GIMP and NIH ImageJ computer programs. The length of each image was measured using the GIMP software in order to set the scale in ImageJ. Each marked area was then encircled and the total surface area (cm²) was calculated for each pain drawing measurement. A total of 117 areas were identified and 52 pain drawings were analyzed. The intra-rater reliability for all examiners was high (ICC = 0.989). The inter-rater reliability was also high. No significant differences were observed when using different screen sizes or when using or not using the mouse hardware. This suggests that the precision of these measurements is acceptable for the use of this method as a measurement tool in clinical practice and research. PMID:25490926
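
    The pixel-counting measurement described above can be sketched as follows, assuming the marked pain area is available as a binary mask and the scale has been set from a known length, as the authors did with GIMP and ImageJ; the mask and scale values here are hypothetical.

```python
# Converting a count of marked pixels to cm^2 with a set scale, as in the
# ImageJ-based procedure described above; mask and scale are hypothetical.
import numpy as np

def area_cm2(mask: np.ndarray, known_length_cm: float, known_length_px: float) -> float:
    """Area of the marked region, using a linear scale set from a known length."""
    cm_per_px = known_length_cm / known_length_px
    return mask.sum() * cm_per_px ** 2

drawing = np.zeros((600, 400), dtype=bool)
drawing[100:180, 50:120] = True  # hypothetical encircled pain area
print(f"area = {area_cm2(drawing, known_length_cm=30.0, known_length_px=600):.2f} cm^2")
```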

  14. Imaging C. elegans Embryos using an Epifluorescent Microscope and Open Source Software

    PubMed Central

    Verbrugghe, Koen J. C.; Chan, Raymond C.

    2011-01-01

    Cellular processes, such as chromosome assembly, segregation and cytokinesis, are inherently dynamic. Time-lapse imaging of living cells, using fluorescent-labeled reporter proteins or differential interference contrast (DIC) microscopy, allows for the examination of the temporal progression of these dynamic events, which is otherwise inferred from analysis of fixed samples [1,2]. Moreover, the study of the developmental regulation of cellular processes necessitates conducting time-lapse experiments on an intact organism during development. The Caenorhabditis elegans embryo is light-transparent and has a rapid, invariant developmental program with a known cell lineage [3], thus providing an ideal experimental model for studying questions in cell biology [4,5] and development [6-9]. C. elegans is amenable to genetic manipulation by forward genetics (based on random mutagenesis [10,11]) and reverse genetics to target specific genes (based on RNAi-mediated interference and targeted mutagenesis [12-15]). In addition, transgenic animals can be readily created to express fluorescently tagged proteins or reporters [16,17]. These traits combine to make it easy to identify the genetic pathways regulating fundamental cellular and developmental processes in vivo [18-21]. In this protocol we present methods for live imaging of C. elegans embryos using DIC optics or GFP fluorescence on a compound epifluorescent microscope. We demonstrate the ease with which readily available microscopes, typically used for fixed sample imaging, can also be applied for time-lapse analysis using open-source software to automate the imaging process. PMID:21490567

  15. Image pixel guided tours: a software platform for non-destructive x-ray imaging

    NASA Astrophysics Data System (ADS)

    Lam, K. P.; Emery, R.

    2009-02-01

    Multivariate analysis seeks to describe the relationship between an arbitrary number of variables. To explore high-dimensional data sets, projections are often used for data visualisation to aid in discovering structure or patterns that lead to the formation of statistical hypotheses. The basic concept necessitates a systematic search for lower-dimensional representations of the data that might show interesting structure(s). Motivated by the recent research on the Image Grand Tour (IGT), which can be adapted to view guided projections by using objective indexes that are capable of revealing latent structures of the data, this paper presents a signal processing perspective on constructing such indexes under the unifying exploratory frameworks of Independent Component Analysis (ICA) and Projection Pursuit (PP). Our investigation begins with an overview of dimension reduction techniques by means of orthogonal transforms, including the classical procedure of Principal Component Analysis (PCA), and extends to an application of the more powerful techniques of ICA in the context of our recent work on non-destructive testing technology by element-specific x-ray imaging.
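
    A minimal sketch of the dimension-reduction step discussed above is given below, projecting flattened multi-channel pixel data onto a few PCA and FastICA components with scikit-learn. The synthetic "element map" sources and the 8-channel mixing are invented stand-ins for element-specific x-ray data; this is not the IGT index code.

```python
# PCA and ICA projections of flattened multi-channel pixel data, assuming
# scikit-learn; the data are synthetic stand-ins for element-specific images.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
n_pixels = 128 * 128
# Hypothetical non-Gaussian latent "element maps" mixed into 8 channels.
sources = rng.laplace(size=(n_pixels, 3))
mixing = rng.normal(size=(3, 8))
pixels = sources @ mixing + 0.05 * rng.normal(size=(n_pixels, 8))

pca = PCA(n_components=3).fit(pixels)
scores = pca.transform(pixels)                 # orthogonal projections
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))

ica = FastICA(n_components=3, random_state=0)
estimated_sources = ica.fit_transform(pixels)  # statistically independent projections
print("ICA source matrix shape:", estimated_sources.shape)
```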

  16. New AIRS: The medical imaging software for segmentation and registration of elastic organs in SPECT/CT

    NASA Astrophysics Data System (ADS)

    Widita, R.; Kurniadi, R.; Darma, Y.; Perkasa, Y. S.; Trianti, N.

    2012-06-01

    We have successfully improved our software, Automated Image Registration and Segmentation (AIRS), to fuse CT and SPECT images of elastic organs. Segmentation and registration of elastic organs present many challenges. Many artifacts can arise in SPECT/CT scans. Also, different organs and tissues have very similar gray levels, which limits the utility of thresholding. We have developed new software to solve the different registration and segmentation problems that arise in tomographic data sets. It will be demonstrated that the information obtained by SPECT/CT is more accurate in evaluating patients/objects than that obtained from either SPECT or CT alone. We used multi-modality registration, which is suitable for images produced by different modalities and having unclear boundaries between tissues. The segmentation component used in this software is a region growing algorithm, an approach which has proven effective for image segmentation. Our method is designed to perform with clinically acceptable speed, using accelerated techniques (multiresolution).

  17. RegStatGel: proteomic software for identifying differentially expressed proteins based on 2D gel images

    PubMed Central

    Li, Feng; Seillier-Moiseiwitsch, Françoise

    2011-01-01

    Image analysis of two-dimensional gel electrophoresis is a key step in the proteomic workflow for identifying proteins that change under different experimental conditions. Since gel images usually show a large number of proteins and considerable variation, the use of software for analysis of 2D gel images is inevitable. We developed open-source software with a graphical user interface for differential analysis of 2D gel images. The user-friendly software, RegStatGel, contains fully automated as well as interactive procedures. It was developed and has been tested under Matlab 7.01. Availability: The software is available for free at http://www.mediafire.com/FengLi/2DGelsoftware PMID:21904427

  18. An Effective On-line Polymer Characterization Technique by Using SALS Image Processing Software and Wavelet Analysis

    PubMed Central

    Xian, Guang-ming; Qu, Jin-ping; Zeng, Bi-qing

    2008-01-01

    This paper describes an effective on-line polymer characterization technique based on small-angle light-scattering (SALS) image processing software and wavelet analysis. The phenomenon of small-angle light scattering has been applied to give information about the morphology of transparent structures. Real-time visualization of the scattered light images and light intensity matrices is performed by the optical image real-time processing software for SALS. The software can measure the signal intensity of light scattering images, draw the frequency-intensity curves and the amplitude-intensity curves to indicate the variation of the intensity of scattered light under different processing conditions, and estimate the relevant parameters. The current study utilizes a one-dimensional wavelet to remove noise from the original SALS signal and estimate the variation trend of the maximum-intensity area of the scattered light. As a result, the system successfully supports qualitative analysis of the structural information of transparent films. PMID:19229343
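
    One-dimensional wavelet denoising of the kind described above can be sketched with PyWavelets as follows; the soft-thresholding rule (universal threshold from a finest-level noise estimate) is a common textbook choice and is assumed here, not taken from the paper.

```python
# 1-D wavelet denoising of a synthetic scattered-light intensity profile,
# assuming PyWavelets; wavelet choice and threshold rule are illustrative.
import numpy as np
import pywt

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
clean = np.exp(-((t - 0.5) ** 2) / 0.01)          # stand-in intensity profile
noisy = clean + 0.05 * rng.normal(size=t.size)

coeffs = pywt.wavedec(noisy, wavelet="db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate from finest level
thr = sigma * np.sqrt(2 * np.log(noisy.size))      # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, wavelet="db4")[: t.size]

print("estimated peak location:", t[np.argmax(denoised)])
```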

  19. NeuronMetrics: Software for Semi-Automated Processing of Cultured-Neuron Images

    PubMed Central

    Narro, Martha L.; Yang, Fan; Kraft, Robert; Wenk, Carola; Efrat, Alon; Restifo, Linda L.

    2007-01-01

    Using primary cell culture to screen for changes in neuronal morphology requires specialized analysis software. We developed NeuronMetrics™ for semi-automated, quantitative analysis of two-dimensional (2D) images of fluorescently labeled cultured neurons. It skeletonizes the neuron image using two complementary image-processing techniques, capturing fine terminal neurites with high fidelity. An algorithm was devised to span wide gaps in the skeleton. NeuronMetrics uses a novel strategy based on geometric features called faces to extract a branch-number estimate from complex arbors with numerous neurite-to-neurite contacts, without creating a precise, contact-free representation of the neurite arbor. It estimates total neurite length, branch number, primary neurite number, territory (the area of the convex polygon bounding the skeleton and cell body), and Polarity Index (a measure of neuronal polarity). These parameters provide fundamental information about the size and shape of neurite arbors, which are critical factors for neuronal function. NeuronMetrics streamlines optional manual tasks such as removing noise, isolating the largest primary neurite, and correcting length for self-fasciculating neurites. Numeric data are output in a single text file, readily imported into other applications for further analysis. Written as modules for ImageJ, NeuronMetrics provides practical analysis tools that are easy to use and support batch processing. Depending on the need for manual intervention, processing time for a batch of ~60 2D images is 1.0–2.5 hours, from a folder of images to a table of numeric data. NeuronMetrics’ output accelerates the quantitative detection of mutations and chemical compounds that alter neurite morphology in vitro, and will contribute to the use of cultured neurons for drug discovery. PMID:17270152

  20. A comparison of five standard methods for evaluating image intensity uniformity in partially parallel imaging MRI

    PubMed Central

    Goerner, Frank L.; Duong, Timothy; Stafford, R. Jason; Clarke, Geoffrey D.

    2013-01-01

    Purpose: To investigate the utility of five different standard measurement methods for determining image uniformity for partially parallel imaging (PPI) acquisitions in terms of consistency across a variety of pulse sequences and reconstruction strategies. Methods: Images were produced with a phantom using a 12-channel head matrix coil in a 3T MRI system (TIM TRIO, Siemens Medical Solutions, Erlangen, Germany). Images produced using echo-planar, fast spin echo, gradient echo, and balanced steady state free precession pulse sequences were evaluated. Two different PPI reconstruction methods were investigated, the generalized autocalibrating partially parallel acquisition algorithm (GRAPPA) and modified sensitivity-encoding (mSENSE), with acceleration factors (R) of 2, 3, and 4. Additionally, images were acquired with conventional two-dimensional Fourier imaging methods (R = 1). Five measurement methods of uniformity, recommended by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA), were considered. The methods investigated were (1) an ACR method and (2) a NEMA method for calculating the peak deviation nonuniformity, (3) a modification of a NEMA method used to produce a gray scale uniformity map, (4) determining the normalized absolute average deviation uniformity, and (5) a NEMA method that focused on 17 areas of the image to measure uniformity. Changes in uniformity as a function of reconstruction method at the same R-value were also investigated. Two-way analysis of variance (ANOVA) was used to determine whether R-value or reconstruction method had a greater influence on signal intensity uniformity measurements for partially parallel MRI. Results: Two of the methods studied had consistently negative slopes when signal intensity uniformity was plotted against R-value. The results obtained comparing mSENSE against GRAPPA found no consistent difference between GRAPPA and mSENSE with regard to signal intensity uniformity
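
    The peak-deviation (percent integral) uniformity measure is commonly quoted in the form sketched below; the exact ROI definition and handling differ among the five ACR/NEMA methods compared in the paper, so this is an illustrative form only, applied to a synthetic region of interest.

```python
# Percent integral uniformity in the commonly quoted ACR/NEMA-style form,
# applied to a synthetic region of interest (ROI) over a phantom.
import numpy as np

def percent_integral_uniformity(roi: np.ndarray) -> float:
    """PIU = 100 * (1 - (Smax - Smin) / (Smax + Smin)) over the ROI."""
    s_max, s_min = float(roi.max()), float(roi.min())
    return 100.0 * (1.0 - (s_max - s_min) / (s_max + s_min))

rng = np.random.default_rng(0)
roi = 1000.0 + 25.0 * rng.normal(size=(64, 64))  # hypothetical phantom signal
print(f"PIU = {percent_integral_uniformity(roi):.1f}%")
```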

  1. Software-based high-level synthesis design of FPGA beamformers for synthetic aperture imaging.

    PubMed

    Amaro, Joao; Yiu, Billy Y S; Falcao, Gabriel; Gomes, Marco A C; Yu, Alfred C H

    2015-05-01

    Field-programmable gate arrays (FPGAs) can potentially be configured as beamforming platforms for ultrasound imaging, but a long design time and skilled expertise in hardware programming are typically required. In this article, we present a novel approach to the efficient design of FPGA beamformers for synthetic aperture (SA) imaging via the use of software-based high-level synthesis techniques. Software kernels (coded in OpenCL) were first developed to stage-wise handle SA beamforming operations, and their corresponding FPGA logic circuitry was emulated through a high-level synthesis framework. After design space analysis, the fine-tuned OpenCL kernels were compiled into register transfer level descriptions to configure an FPGA as a beamformer module. The processing performance of this beamformer was assessed through a series of offline emulation experiments that sought to derive beamformed images from SA channel-domain raw data (40-MHz sampling rate, 12-bit resolution). With 128 channels, our FPGA-based SA beamformer can achieve 41 frames per second (fps) processing throughput (3.44 × 10^8 pixels per second for frame size of 256 × 256 pixels) at 31.5 W power consumption (1.30 fps/W power efficiency). It utilized 86.9% of the FPGA fabric and operated at a 196.5 MHz clock frequency (after optimization). Based on these findings, we anticipate that FPGA and high-level synthesis can together foster rapid prototyping of real-time ultrasound processor modules at low power consumption budgets. PMID:25965680
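
    The per-pixel operation that an SA beamformer implements is, at its core, delay-and-sum. The NumPy sketch below shows that operation for a single pixel; the element pitch, speed of sound, source position, and random RF data are assumed values for illustration, and the paper's OpenCL kernels and FPGA mapping are not reproduced here.

```python
# Delay-and-sum beamforming for one image pixel, written in NumPy as an
# illustration of what the OpenCL/FPGA kernels compute; geometry and data
# are assumed values, not taken from the paper.
import numpy as np

fs = 40e6                  # sampling rate (Hz), as in the paper
c = 1540.0                 # assumed speed of sound in tissue (m/s)
n_ch, n_samp = 128, 2048   # channel count and samples per channel

rng = np.random.default_rng(0)
rf = rng.normal(size=(n_ch, n_samp))            # stand-in channel-domain RF data
elem_x = (np.arange(n_ch) - n_ch / 2) * 0.3e-3  # element positions, 0.3 mm pitch
src_x, src_z = 0.0, 0.0                         # assumed transmit source position

def das_pixel(x, z):
    """Delay-and-sum value for one pixel at lateral x, depth z (metres)."""
    tx = np.hypot(x - src_x, z - src_z) / c     # transmit path delay
    rx = np.hypot(x - elem_x, z) / c            # receive path delay per channel
    idx = np.clip(np.round((tx + rx) * fs).astype(int), 0, n_samp - 1)
    return rf[np.arange(n_ch), idx].sum()

print("beamformed sample at (0 mm, 20 mm):", das_pixel(0.0, 20e-3))
```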

  2. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

    Spacecraft telemetry rates have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Satellite (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms and re-configurable computing hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly by the availability of low cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable beginning with the methodology described in [1], [2], and implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the

  3. HydroImage: A New Software for HydroGeophysical and BioGeophysical Data Integration

    NASA Astrophysics Data System (ADS)

    Suribhatla, R. M.; Mok, C. M.; Kaback, D.; Chen, J.; Hubbard, S. S.

    2011-12-01

    Hydrogeophysical and biogeophysical data integration have recently emerged as cost-effective and rapid techniques for improving subsurface characterization and monitoring. In a Bayesian framework for integration, borehole-based data provide the prior distribution, and geophysical information serves as data to update the prior through likelihood functions obtained from petrophysical models relating borehole and cross-well data. We present the application of a Windows-based software package called HydroImage that uses this Bayesian framework for data integration and visualization. HydroImage can be used for geostatistical estimation, geophysical tomographic inversion, petrophysical model development, and Bayesian integration. We demonstrate HydroImage using three different field datasets to estimate different subsurface states or parameters. The first example combines wellbore flowmeter test data and crosshole seismic and ground penetrating radar (GPR) data to estimate hydraulic conductivity at the DOE Bacterial Transport Site in Oyster, Virginia. The second example focuses on using time-lapse radar data to estimate moisture content dynamics associated with a desiccation test performed to remediate the deep vadose zone in Hanford, Washington. The third example demonstrates the use of spectral induced polarization data to estimate the spatial and temporal distribution of geochemical parameters that are indicative of the redox state of a contaminated aquifer.

  4. Measuring the area of tear film break-up by image analysis software

    NASA Astrophysics Data System (ADS)

    Pena-Verdeal, Hugo; García-Resúa, Carlos; Ramos, Lucía.; Mosquera, Antonio; Yebra-Pimentel, Eva; Giráldez, María. Jesús

    2013-11-01

    The tear film breakup time (BUT) test examines only the first break in the tear film; subsequent tear film events are not monitored. We present a method of measuring the area of breakup after the appearance of the first breakup by using open source software. Furthermore, the speed of the rupture was determined. 84 subjects participated in the study. A 2 μl volume of 2% sodium fluorescein was instilled using a micropipette. The subject was seated behind a slit lamp using a cobalt blue filter together with a Wratten 12 yellow filter. The tear film was then recorded by a camera attached to the slit lamp. Four frames of each video were extracted: the first rupture (BUT_0), breakup after 1 second (BUT_1), breakup after 2 seconds (BUT_2) and breakup before the last blink (BUT_F). Open-source Java-based measurement software (NIH ImageJ) was used to measure the number of pixels in areas of breakup. These areas were divided by the area of exposed cornea to obtain the percentage of rupture. The instantaneous breakup speed for second 1 was calculated as the difference BUT_1 - BUT_0, whereas the instantaneous speed for second 2 was BUT_2 - BUT_1. The mean areas of breakup obtained were: BUT_0 = 0.26%, BUT_1 = 0.48%, BUT_2 = 0.79% and BUT_F = 1.61%. Breakup speed was 0.22 %area/sec for second 1 and 0.31 %area/sec for second 2, showing a statistically significant difference between them (p = 0.007). Post-BUT analysis may be easily monitored with the aid of this software.
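
    The reported breakup speeds follow directly from the mean area percentages; the short snippet below simply re-derives them as a consistency check.

```python
# Re-deriving the reported breakup speeds from the mean area percentages
# given above (a simple consistency check, not new data).
but_0, but_1, but_2 = 0.26, 0.48, 0.79  # mean breakup area (% of exposed cornea)

speed_s1 = but_1 - but_0                # instantaneous speed over second 1
speed_s2 = but_2 - but_1                # instantaneous speed over second 2
print(f"speed, second 1: {speed_s1:.2f} %area/s")  # 0.22, as reported
print(f"speed, second 2: {speed_s2:.2f} %area/s")  # 0.31, as reported
```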

  5. Digital mapping of side-scan sonar data with the Woods Hole Image Processing System software

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing and digitally mosaicking high- and low-resolution sidescan sonar data. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for pre-processing sidescan sonar data. To extend the capabilities of the UNIX-based programs, digital mapping techniques have been developed. This report describes the initial development of an automated digital mapping procedure. Included is a description of the programs and steps required to complete the digital mosaicking on a UNIX-based computer system, and a comparison of techniques that the user may wish to select.

  6. SU-E-J-264: Comparison of Two Commercially Available Software Platforms for Deformable Image Registration

    SciTech Connect

    Tuohy, R; Stathakis, S; Mavroidis, P; Bosse, C; Papanikolaou, N

    2014-06-01

    Purpose: To evaluate and compare the deformable image registration algorithms available in Velocity (Velocity Medical Solutions, Atlanta, GA) and RayStation (RaySearch Americas, Inc., Garden City, NY). Methods: Cone beam CTs (CBCTs) for each fraction were collected for ten consecutive patients. The CBCTs, along with the simulation CT, were exported to the Velocity and RayStation software. Each CBCT was registered to the simulation CT using deformable image registration and the resulting deformation vector matrix was generated. Each registration was visually inspected by a physicist and the prescribing physician. The volumes of the critical organs were calculated for each deformed CT and used for comparison. Results: The resulting deformable registrations revealed differences between the two algorithms. These differences were realized when the organs at risk were contoured on each deformed CBCT. Differences on the order of 10% ± 30% in volume were observed for the bladder, 17% ± 21% for the rectum and 16% ± 10% for the sigmoid. The prostate and PTV volume differences were on the order of 3% ± 5%. The volumetric differences observed had a corresponding impact on the DVHs of all organs at risk. Differences of 8-10% in the mean dose were observed for all of the above organs. Conclusion: Deformable registration is a powerful tool that aids in the definition of critical structures and is often used for the evaluation of daily dose delivered to the patient. It should be noted that extended QA should be performed before clinical implementation of the software, and users should be aware of the advantages and limitations of the methods.

  7. Comparison between three methods to value lower tear meniscus measured by image software

    NASA Astrophysics Data System (ADS)

    García-Resúa, Carlos; Pena-Verdeal, Hugo; Lira, Madalena; Oliveira, M. Elisabete Real; Giráldez, María. Jesús; Yebra-Pimentel, Eva

    2013-11-01

    To measure different parameters of the lower tear meniscus height (TMH) by using photography with open source measurement software. TMH was measured from the lower eyelid to the top of the meniscus (absolute TMH) and to the brightest meniscus reflex (reflex TMH). 121 young healthy subjects were included in the study. The lower tear meniscus was videotaped by a digital camera attached to a slit lamp. Three videos were recorded in the central meniscus portion under three different methods: slit lamp without fluorescein instillation, slit lamp with fluorescein instillation, and Tearscope(TM) without fluorescein instillation. Then, a masked observer obtained an image from each video and measured TMH by using open-source Java-based measurement software (NIH ImageJ). Absolute central (TMH-CA), absolute with fluorescein (TMH-F) and absolute using the Tearscope (TMH-Tc) were compared with each other, as were reflex central (TMH-CR) and reflex Tearscope (TMH-TcR). Mean +/- S.D. values of TMH-CA, TMH-CR, TMH-F, TMH-Tc and TMH-TcR were 0.209 +/- 0.049, 0.139 +/- 0.031, 0.222 +/- 0.058, 0.175 +/- 0.045 and 0.109 +/- 0.029 mm, respectively. Paired t-tests were performed for the comparisons TMH-CA vs. TMH-CR, TMH-CA vs. TMH-F, TMH-CA vs. TMH-Tc, TMH-F vs. TMH-Tc, TMH-Tc vs. TMH-TcR and TMH-CR vs. TMH-TcR. In all cases, a significant difference was found between the two variables (all p < 0.008). This study demonstrated a useful tool to objectively measure TMH by photography. Because the parameters differ, eye care professionals should maintain the same TMH parameter across follow-up visits.

  8. Fundus image fusion in EYEPLAN software: An evaluation of a novel technique for ocular melanoma radiation treatment planning

    SciTech Connect

    Daftari, Inder K.; Mishra, Kavita K.; O'Brien, Joan M.; and others

    2010-10-15

    Purpose: The purpose of this study is to evaluate a novel approach for treatment planning using digital fundus image fusion in EYEPLAN for proton beam radiation therapy (PBRT) planning for ocular melanoma. The authors used a prototype version of EYEPLAN software, which allows for digital registration of high-resolution fundus photographs. The authors examined the improvement in tumor localization by replanning with the addition of fundus photo superimposition in patients with macular area tumors. Methods: The new version of EYEPLAN (v3.05) software allows for the registration of fundus photographs as a background image. This is then used in conjunction with clinical examination, tantalum marker clips, the surgeon's mapping, and ultrasound to draw the tumor contour accurately. In order to determine whether the fundus image superimposition helps in tumor delineation and treatment planning, the authors identified 79 patients with choroidal melanoma in the macular location that were treated with PBRT. All patients were treated to a dose of 56 GyE in four fractions. The authors reviewed and replanned all 79 macular melanoma cases with superimposition of pretreatment and post-treatment fundus imaging in the new EYEPLAN software. For patients with no local failure, the authors analyzed whether fundus photograph fusion accurately depicted and confirmed tumor volumes as outlined in the original treatment plan. For patients with local failure, the authors determined whether the addition of the fundus photograph might have benefited planning in terms of more accurate tumor volume delineation. Results: The mean follow-up of patients was 33.6 ± 23 months. Tumor growth was seen in six eyes of the 79 macular lesions. All six were marginal failures or tumor misses in the region of dose fall-off, including one patient with both in-field and marginal recurrence. Among the six recurrences, three were managed by enucleation and one underwent retreatment with proton therapy. Three

  9. 3.0 Tesla magnetic resonance imaging: A new standard in liver imaging?

    PubMed Central

    Girometti, Rossano

    2015-01-01

    An ever-increasing number of 3.0 Tesla (T) magnets are installed worldwide. Moving from the standard of 1.5 T to a higher field strength implies a number of potential advantages and drawbacks, requiring careful optimization of imaging protocols or implementation of novel hardware components. Clinical practice and literature review suggest that state-of-the-art 3.0 T is equivalent to 1.5 T in the assessment of focal liver lesions and diffuse liver disease. Therefore, further technical improvements are needed in order to fully exploit the potential of the higher field strength. PMID:26244063

  10. ProteoAnnotator--open source proteogenomics annotation software supporting PSI standards.

    PubMed

    Ghali, Fawaz; Krishna, Ritesh; Perkins, Simon; Collins, Andrew; Xia, Dong; Wastling, Jonathan; Jones, Andrew R

    2014-12-01

    The recent massive increase in capability for sequencing genomes is producing enormous advances in our understanding of biological systems. However, there is a bottleneck in genome annotation--determining the structure of all transcribed genes. Experimental data from MS studies can play a major role in confirming and correcting gene structure--proteogenomics. However, there are some technical and practical challenges to overcome, since proteogenomics requires pipelines comprising a complex set of interconnected modules as well as bespoke routines, for example in protein inference and statistics. We are introducing a complete, open source pipeline for proteogenomics, called ProteoAnnotator, which incorporates a graphical user interface and implements the Proteomics Standards Initiative mzIdentML standard for each analysis stage. All steps are included as standalone modules with the mzIdentML library, allowing other groups to re-use the whole pipeline or constituent parts within other tools. We have developed new modules for pre-processing and combining multiple search databases, for performing peptide-level statistics on mzIdentML files, for scoring grouped protein identifications matched to a given genomic locus to validate that updates to the official gene models are statistically sound and for mapping end results back onto the genome. ProteoAnnotator is available from http://www.proteoannotator.org/. All MS data have been deposited in the ProteomeXchange with identifiers PXD001042 and PXD001390 (http://proteomecentral.proteomexchange.org/dataset/PXD001042; http://proteomecentral.proteomexchange.org/dataset/PXD001390). PMID:25297486

  11. A flexible software architecture for scalable real-time image and video processing applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in hardware platforms make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented towards particular types of applications or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route the messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
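
    The topic-based publish/subscribe routing described for the messaging layer can be sketched in a few lines; the class name, topic strings, and handlers below are invented for illustration and are not taken from the paper.

```python
# Minimal topic-based publish/subscribe bus: messages are routed only to the
# handlers subscribed to the matching topic. Names are illustrative only.
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: Any) -> None:
        # Route the message only to handlers subscribed to this topic.
        for handler in self._subscribers.get(topic, []):
            handler(message)

bus = MessageBus()
bus.subscribe("frames/acquired", lambda m: print("processing module got", m))
bus.subscribe("frames/processed", lambda m: print("visualization module got", m))
bus.publish("frames/acquired", {"frame_id": 1, "timestamp": 0.033})
```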

  12. SU-E-I-13: Evaluation of Metal Artifact Reduction (MAR) Software On Computed Tomography (CT) Images

    SciTech Connect

    Huang, V; Kohli, K

    2015-06-15

    Purpose: A new commercially available metal artifact reduction (MAR) software package for computed tomography (CT) imaging was evaluated with phantoms in the presence of metals. The goal was to assess the ability of the software to restore the CT number in the vicinity of the metals without impacting the image quality. Methods: A Catphan 504 was scanned with a GE Optima RT 580 CT scanner (GE Healthcare, Milwaukee, WI) and the images were reconstructed with and without the MAR software. Both datasets were analyzed with Image Owl QA software (Image Owl Inc, Greenwich, NY). CT number sensitometry, MTF, low contrast, uniformity, noise and spatial accuracy were compared for scans with and without MAR software. In addition, an in-house made phantom was scanned with and without a stainless steel insert at three different locations. The accuracy of the CT number and the metal insert dimension were investigated as well. Results: Comparisons between scans with and without the MAR algorithm on the Catphan phantom demonstrate similar image quality. However, noise was slightly higher with the MAR algorithm. Evaluation of the CT number at various locations of the in-house made phantom was also performed. The baseline HU, obtained from the scan without the metal insert, was compared to scans with the stainless steel insert at 3 different locations. The difference between the baseline HU and the metal-scan HU was reduced when the MAR algorithm was applied. In addition, the physical diameter of the stainless steel rod was over-estimated by the MAR algorithm by 0.9 mm. Conclusion: This work indicates that, in the presence of metal in CT scans, the MAR algorithm is capable of providing a more accurate CT number without compromising the overall image quality. Future work will include the dosimetric impact of the MAR algorithm.

  13. Leveraging Open-Source Software and Data Standards within the Integrated Water Resources Science and Services Initiative

    NASA Astrophysics Data System (ADS)

    Clark, E. P.

    2014-12-01

    The National Oceanic and Atmospheric Administration, together with the U.S. Army Corps of Engineers and the U.S. Geological Survey, established the Integrated Water Resources Science and Services (IWRSS) consortium in 2011. IWRSS is a cross-cutting, multidisciplinary approach to addressing complex water problems. The IWRSS Interoperability and Data Synchronization Scoping Team was tasked with documenting requirements related to the sharing of data sets essential for monitoring and forecasting the nation's water resources, as well as informing the operation and management of hydraulic structures. The team's report identified a number of open source software tools, as well as the need to adopt open source data structures and standards. This presentation will discuss the potential applications of open-source software and development practices within the IWRSS Interoperability and Data Synchronization construct, as well as explore the underlying benefits that open-source approaches offer to the federal water resources community. Programmatically, this strategy facilitates a common operating picture across the federal water enterprise that is essential for a weather- and water-ready nation.

  14. Integration of intraoperative and model-updated images into an industry-standard neuronavigation system: initial results

    NASA Astrophysics Data System (ADS)

    Schaewe, Timothy J.; Fan, Xiaoyao; Ji, Songbai; Hartov, Alex; Hiemenz Holton, Leslie; Roberts, David W.; Paulsen, Keith D.; Simon, David A.

    2013-03-01

    Dartmouth and Medtronic have established an academic-industrial partnership to develop, validate, and evaluate a multimodality neurosurgical image-guidance platform for brain tumor resection surgery that is capable of updating the spatial relationships between preoperative images and the current surgical field. Previous studies have shown that brain shift compensation, through a modeling framework using intraoperative ultrasound and/or visible light stereovision to update preoperative MRI, appears to result in improved navigation accuracy. However, image updates have thus far only been produced retrospectively, after surgery, in large part because of gaps in the software integration and information flow among the co-registration and tracking, image acquisition and processing, and image warping tasks that are required during a case. This paper reports the first demonstration of integration of a deformation-based image updating process for brain shift modeling with an industry-standard image guided surgery platform. Specifically, we have completed the first and most critical data transfer operation to transmit volumetric image data generated by the Dartmouth brain shift modeling process to the Medtronic StealthStation® system. StealthStation® comparison views, which allow the surgeon to verify the correspondence of the received updated image volume relative to the preoperative MRI, are presented, along with other displays of image data such as the intraoperative 3D ultrasound used to update the model. These views and data represent the first time that externally acquired and manipulated image data have been imported into the StealthStation® system through the StealthLink® portal and visualized on the StealthStation® display.

  15. WCPS: An Open Geospatial Consortium Standard Applied to Flight Hardware/Software

    NASA Astrophysics Data System (ADS)

    Cappelaere, P. G.; Mandl, D.; Stanley, J.; Frye, S.; Baumann, P.

    2009-12-01

    The Open Geospatial Consortium (OGC) Web Coverage Processing Service (WCPS) has the potential to allow advanced users to define processing algorithms using the web environment and to seamlessly provide the capability to upload them directly to the satellite for autonomous execution using smart agent technology. The Open Geospatial Consortium recently announced the adoption of a specification for a Web Coverage Processing Service on Mar 25, 2009. This effort has been spearheaded by Dr. Peter Baumann, Jacobs University, Bremen, Germany. The WCPS specifies a coverage processing language allowing clients to send processing requests to a server for evaluation. NASA has been taking the next step by wrapping the user-defined requests into dynamic agents that can be uploaded to a spacecraft for onboard processing. This could have a dramatic impact on the new decadal missions such as HyspIRI. Dynamic onboard classifiers are key to providing level 2 products in near-realtime directly to end-users on the ground. This capability, currently implemented on the HyspIRI pathfinder testbed using the NASA SpaceCube, will be demonstrated on EO-1, a NASA hyperspectral/multispectral imager, as the next capability for agile autonomous science experiments.

  16. Caltech/JPL Conference on Image Processing Technology, Data Sources and Software for Commercial and Scientific Applications

    NASA Technical Reports Server (NTRS)

    Redmann, G. H.

    1976-01-01

    Recent advances in image processing and new applications are presented to the user community to stimulate the development and transfer of this technology to industrial and commercial applications. The Proceedings contains 37 papers and abstracts, including many illustrations (some in color) and provides a single reference source for the user community regarding the ordering and obtaining of NASA-developed image-processing software and science data.

  17. Cloud computing geospatial application for water resources based on free and open source software and open standards - a prototype

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj

    2016-04-01

    Presently, most existing software is desktop-based and designed to work on a single computer, which represents a major limitation in many ways, from limited processing power and storage to limited accessibility and availability. The only feasible solution lies in the web and the cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first one (VM1) is running on Amazon Web Services (AWS) and the second one (VM2) is running on a Xen cloud platform. The presented cloud application is developed using free and open source software, open standards and prototype code. The cloud application provides a framework for developing specialized cloud geospatial applications that need only a web browser to be used. This cloud application is the ultimate collaborative geospatial platform, because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The presented cloud application is available all the time, accessible from everywhere, scalable, works in a distributed computing environment, creates a real-time multi-user collaboration platform, uses interoperable programming-language code and components, and is flexible in including additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services for 1) data infrastructure (DI), 2) support for water resources modelling (WRM), and 3) user management. The web services run on two VMs that communicate over the internet, providing services to users. The application was tested on the Zletovica river basin case study with multiple concurrent users. The application is a state

  18. Ten years of medical imaging standardization and prototypical implementation: the DICOM standard and the OFFIS DICOM toolkit (DCMTK)

    NASA Astrophysics Data System (ADS)

    Eichelberg, Marco; Riesmeier, Joerg; Wilkens, Thomas; Hewett, Andrew J.; Barth, Andreas; Jensch, Peter

    2004-04-01

    In 2003, the DICOM standard celebrated its 10th anniversary. Aside from the standard itself, OFFIS' open source DICOM toolkit DCMTK, which has continuously followed the development of DICOM, also turned 10 years old. On this occasion, this article looks back at the main standardization efforts in DICOM and illustrates related developments in DCMTK. Considering the development of the DICOM standard, it is possible to distinguish several phases of progress. Within the first phase, at the beginning of the 1990s, basic network services for image transfer and retrieval were introduced. The second phase, in the mid 1990s, was characterized by advances in the specification of a file format and of regulations for media interchange. In the later, but partly parallel, third phase, DICOM predominantly dealt with the problem of optimizing the workflow within imaging departments. As a result of the fact that it was now possible to exchange images between different systems, efforts concerning image display consistency followed in a fourth phase at the end of the 1990s. In the current fifth phase, security enhancements are being integrated into the standard. In another phase of progress, which took place over a relatively long time period concurrently with the other mentioned phases, DICOM Structured Reporting was developed.

  19. Evaluation of linear array human papillomavirus genotyping using automatic optical imaging software.

    PubMed

    Jeronimo, J; Wentzensen, N; Long, R; Schiffman, M; Dunn, S T; Allen, R A; Walker, J L; Gold, M A; Zuna, R E; Sherman, M E; Wacholder, S; Wang, S S

    2008-08-01

    Variations in biological behavior suggest that each carcinogenic human papillomavirus (HPV) type should be considered individually in etiologic studies. HPV genotyping assays might have clinical applications if they are approved for use by the FDA. A widely used genotyping assay is the Roche Linear Array HPV genotyping test (LA). We used LA to genotype the HPV isolates from cervical specimens from women with the full spectrum of cervical disease: cervical cancer, cervical intraepithelial neoplasia (CIN), and HPV infections. To explore the feasibility and value of the automated reading of LA results, we custom-designed novel optical imaging software that provides optical density measurements of LA bands. We compared unmagnified visual examination with the automated measurements. The two measurements were highly associated. By either method, the threshold between a negative and a positive result was fairly sharp, with a clear bimodal distribution. Visually, most positive results were judged to be strong or medium, with fewer equivocal results categorized as weak (9.5% of positive samples), very weak (6.5% of positive samples), or extremely weak (7.7% of positive samples). The automated measurements of the intensities were significantly associated with the strength of the visual categories (P < 0.001). At the extremes of the automated signal intensities (≤20 units or ≥120 units), the bands were almost always categorized visually as negative and positive, respectively. In the equivocal zone (20 to 119 units), specimens were increasingly likely to be judged visually positive as the number of other, definite infections on the same strip increased (P for trend < 0.001). Multiple, concurrent infections comprise ≥25% of HPV infections; thus, any systematic visual tendency that influences their evaluation when the result is equivocal should be minimized. Therefore, automated reading is probably worth development if easy-to-calibrate hardware
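
    The reported intensity distribution suggests a simple three-way reading rule, sketched below; the cut-offs (20 and 120 units) come from the abstract, while the function name and the 'equivocal' label for the middle band are illustrative.

```python
# Three-way reading rule implied by the reported signal-intensity
# distribution; cut-offs from the abstract, labels illustrative.
def classify_band(intensity_units: float) -> str:
    if intensity_units <= 20:
        return "negative"
    if intensity_units >= 120:
        return "positive"
    return "equivocal"

for value in (5, 20, 60, 119, 120, 200):
    print(value, "->", classify_band(value))
```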

  20. funcLAB/G-service-oriented architecture for standards-based analysis of functional magnetic resonance imaging in HealthGrids.

    PubMed

    Erberich, Stephan G; Bhandekar, Manasee; Chervenak, Ann; Kesselman, Carl; Nelson, Marvin D

    2007-01-01

    Functional MRI is successfully being used in clinical and research applications including preoperative planning, language mapping, and outcome monitoring. However, clinical use of fMRI is less widespread due to the complexity of imaging, image workflow and post-processing, and the lack of algorithmic standards, which hinders result comparability. As a consequence, widespread adoption of fMRI as a clinical tool is low, contributing to uncertainty among community physicians about how to integrate fMRI into practice. In addition, training of physicians in fMRI is in its infancy and requires clinical and technical understanding. Therefore, many institutions which perform fMRI maintain a team of basic researchers and physicians in order to run fMRI as a routine imaging tool. In order to provide fMRI as an advanced diagnostic tool for the benefit of a larger patient population, image acquisition and image post-processing must be streamlined, standardized, and made available at institutions which do not have these resources in house. Here we describe a software architecture, the functional imaging laboratory (funcLAB/G), which addresses (i) standardized image processing using Statistical Parametric Mapping and (ii) its extension to secure sharing and availability for the community using standards-based Grid technology (Globus Toolkit). funcLAB/G carries the potential to overcome the limitations of fMRI in clinical use and thus make standardized fMRI available to the broader healthcare enterprise utilizing the Internet and HealthGrid Web Services technology. PMID:17707204

  1. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M. ); Hopper, T. )

    1993-01-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  2. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-05-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  3. FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    NASA Astrophysics Data System (ADS)

    Bradley, Jonathan N.; Brislawn, Christopher M.; Hopper, Thomas

    1993-08-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  4. Standards of ultrasound imaging of the adrenal glands

    PubMed Central

    Jakubowski, Wiesław S.; Dobruch-Sobczak, Katarzyna; Kasperlik-Załuska, Anna A.

    2015-01-01

    Adrenal glands are paired endocrine glands located over the upper renal poles. Adrenal pathologies have various clinical presentations. They can coexist with hyperfunction of individual cortical zones or the medulla, insufficiency of the adrenal cortex, or retained normal hormonal function. The most common adrenal masses are tumors incidentally detected in imaging examinations (ultrasound, computed tomography, magnetic resonance imaging), referred to as incidentalomas. They include a range of histopathological entities, but cortical adenomas without hormonal hyperfunction are the most common. Each abdominal ultrasound scan of a child or adult should include an assessment of the suprarenal areas. If a previously unreported, incidental solid focal lesion exceeding 1 cm (an incidentaloma) is detected in the suprarenal area, computed tomography or magnetic resonance imaging should be performed to confirm its presence and characterize it, and the functional status of the tumor should be determined. Ultrasound imaging is also used to monitor adrenal incidentalomas that are not eligible for surgery. The paper presents recommendations concerning the performance and assessment of ultrasound examinations of the adrenal glands and their pathological lesions. The article covers new ultrasound techniques, such as tissue harmonic imaging, spatial compound imaging, three-dimensional ultrasound, elastography, contrast-enhanced ultrasound and parametric imaging. The guidelines presented here are consistent with the recommendations of the Polish Ultrasound Society. PMID:26807295

  5. eWaterCycle: Building an operational global Hydrological forecasting system based on standards and open source software

    NASA Astrophysics Data System (ADS)

    Drost, Niels; Bierkens, Marc; Donchyts, Gennadii; van de Giesen, Nick; Hummel, Stef; Hut, Rolf; Kockx, Arno; van Meersbergen, Maarten; Sutanudjaja, Edwin; Verlaan, Martin; Weerts, Albrecht; Winsemius, Hessel

    2015-04-01

    At EGU 2015, the eWaterCycle project (www.ewatercycle.org) will launch an operational high-resolution global hydrological model, including 14-day ensemble forecasts. Within the eWaterCycle project we aim to use standards and open-source software as much as possible. This ensures the sustainability of the software created and the ability to swap out components as newer technologies and solutions become available. It also allows us to build the system much faster than would otherwise be the case. At the heart of the eWaterCycle system is the PCRGLOB-WB global hydrological model (www.globalhydrology.nl) developed at Utrecht University. Version 2.0 of this model is implemented in Python and models a wide range of hydrological processes at 10 x 10 km (and potentially higher) resolution. To assimilate near-real-time satellite data into the model and run an ensemble forecast, we use the OpenDA system (www.openda.org). This allows us to make use of different data assimilation techniques without the need to implement these from scratch. As the data assimilation technique we currently use a variant of the Ensemble Kalman Filter, specifically optimized for high-performance computing environments. Coupling of the model with the data assimilation is done with the Basic Model Interface (BMI), developed in the framework of the Community Surface Dynamics Modeling System (CSDMS) (csdms.colorado.edu). We have added support for BMI to PCRGLOB-WB and developed a BMI adapter for OpenDA, allowing OpenDA to use any BMI-compatible model. We currently use multiple different BMI models with OpenDA, already showing the benefits of using this standard. Throughout the system, all file-based input and output is done via NetCDF files, and we use several standard tools for pre- and post-processing of the data. Finally, we use ncWMS, a NetCDF-based implementation of the Web Map Service (WMS) protocol, to serve the forecasting results. We have built a 3D web application based on Cesium.js to visualize the output. In
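
    To illustrate the coupling pattern mentioned above, the following is a minimal sketch of a BMI-style model wrapper in Python; the class, state variable, and time stepping are invented for the example and are not PCRGLOB-WB or eWaterCycle code. A BMI-compliant model exposes initialize/update/get_value/set_value/finalize, which is what lets a data-assimilation driver such as OpenDA (through its BMI adapter) advance the model and overwrite its state.

    ```python
    # Toy BMI-style model wrapper (illustrative only, not PCRGLOB-WB).
    import numpy as np

    class ToyBmiModel:
        def initialize(self, config_file: str) -> None:
            self.time = 0.0
            self.dt = 86400.0                       # one-day time step, in seconds
            self.storage = np.zeros((10, 10))       # toy state variable

        def update(self) -> None:
            self.storage += 1.0                     # stand-in for the hydrological physics
            self.time += self.dt

        def get_value(self, name: str):
            return self.storage.copy() if name == "soil_water_storage" else None

        def set_value(self, name: str, values) -> None:
            if name == "soil_water_storage":
                self.storage[:] = values            # e.g. an EnKF analysis update

        def get_current_time(self) -> float:
            return self.time

        def finalize(self) -> None:
            pass

    # A driver (such as OpenDA via its BMI adapter) can then step any BMI-compliant model:
    model = ToyBmiModel()
    model.initialize("pcrglobwb_config.ini")        # hypothetical configuration file name
    for _ in range(14):                             # e.g. a 14-day forecast window
        model.update()
    ```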

  6. 76 FR 51993 - Draft Guidance for Industry on Standards for Clinical Trial Imaging Endpoints; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-19

    ... HUMAN SERVICES Food and Drug Administration Draft Guidance for Industry on Standards for Clinical Trial... entitled ``Standards for Clinical Trial Imaging Endpoints.'' The purpose of this draft guidance is to... products. The draft guidance describes standards sponsors can use to ensure that clinical trial...

  7. Experiments with a novel content-based image retrieval software: can we eliminate classification systems in adolescent idiopathic scoliosis?

    PubMed

    Menon, K Venugopal; Kumar, Dinesh; Thomas, Tessamma

    2014-02-01

    Study Design Preliminary evaluation of a new tool. Objective To ascertain whether the newly developed content-based image retrieval (CBIR) software can be used successfully to retrieve images of similar cases of adolescent idiopathic scoliosis (AIS) from a database to help plan treatment without adhering to a classification scheme. Methods Sixty-two operated cases of AIS were entered into the newly developed CBIR database. Five new cases with different curve patterns were used as query images. The images were fed into the CBIR database, which retrieved similar images from the existing cases. These were analyzed by a senior surgeon for conformity to the query image. Results Within the limits of variability set for the query system, all the resultant images conformed to the query image. One case had no similar match in the series. The other four retrieved several images that matched the query, and no matching case in the series was left out. The postoperative images were then analyzed to check for surgical strategies. Broad guidelines for treatment could be derived from the results. More precise query settings, inclusion of bending films, and a larger database will enhance accurate retrieval and better decision making. Conclusion The CBIR system is an effective tool for accurate documentation and retrieval of scoliosis images. Broad guidelines for surgical strategies can be made from the postoperative images of the existing cases without adhering to any classification scheme. PMID:24494177
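
    The retrieval step described above, finding the stored cases whose image descriptors lie closest to those of a query image, can be sketched generically as a nearest-neighbour search over precomputed feature vectors. The descriptors, dimensionality, and data below are placeholders; the actual features used by this CBIR software are not reproduced here.

    ```python
    # Generic content-based retrieval sketch: nearest neighbours in a feature space.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    case_features = rng.random((62, 128))     # 62 stored AIS cases, 128-D descriptors (toy data)
    query_features = rng.random((1, 128))     # descriptors extracted from one new case

    index = NearestNeighbors(n_neighbors=5, metric="euclidean").fit(case_features)
    distances, case_ids = index.kneighbors(query_features)
    print("most similar stored cases:", case_ids[0], "distances:", distances[0])
    ```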

  8. A user-friendly LabVIEW software platform for grating based X-ray phase-contrast imaging.

    PubMed

    Wang, Shenghao; Han, Huajie; Gao, Kun; Wang, Zhili; Zhang, Can; Yang, Meng; Wu, Zhao; Wu, Ziyu

    2015-01-01

    X-ray phase-contrast imaging can provide greatly improved contrast over conventional absorption-based imaging for weakly absorbing samples, such as biological soft tissues and fibre composites. In this study, we introduce an easy and fast way to develop a user-friendly software platform dedicated to the new grating-based X-ray phase-contrast imaging setup at the National Synchrotron Radiation Laboratory of the University of Science and Technology of China. Control of 21 motorized stages, a piezoelectric stage and an X-ray tube is achieved with this software, which also covers image acquisition with a flat-panel detector for automatic phase-stepping scans. Moreover, a data post-processing module for signal retrieval and other custom features are in principle available. With a seamless integration of all the necessary functions in one software package, this platform greatly facilitates users' activities during experimental runs with this grating-based X-ray phase-contrast imaging setup. PMID:25882730
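
    Signal retrieval in grating interferometry is commonly performed by Fourier analysis of the phase-stepping curve recorded at each detector pixel: the zeroth harmonic gives the transmission (absorption) image, the phase of the first harmonic gives the differential-phase image, and the first-to-zeroth harmonic ratio gives the fringe visibility, from which the dark-field signal is derived. The sketch below shows this standard analysis; it is not necessarily the implementation used in the LabVIEW platform described here.

    ```python
    # Standard Fourier retrieval of absorption, differential phase and visibility
    # from a phase-stepping stack (n_steps frames covering one grating period).
    import numpy as np

    def phase_stepping_retrieval(stack):
        """stack: array of shape (n_steps, ny, nx) with intensities over one period."""
        f = np.fft.fft(stack, axis=0)
        n = stack.shape[0]
        a0 = np.abs(f[0]) / n          # mean intensity -> absorption image
        a1 = 2.0 * np.abs(f[1]) / n    # first-harmonic amplitude
        phi = np.angle(f[1])           # stepping-curve phase -> differential-phase image
        visibility = a1 / a0           # fringe visibility -> dark-field related signal
        return a0, phi, visibility
    ```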

  9. Analyses of requirements for computer control and data processing experiment subsystems: Image data processing system (IDAPS) software description (7094 version), volume 2

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A description of each of the software modules of the Image Data Processing System (IDAPS) is presented. The changes in the software modules are the result of additions to the application software of the system and an upgrade of the IBM 7094 Mod(1) computer to a 1301 disk storage configuration. Necessary information about IDAPS software is supplied to the computer programmer who desires to make changes in the software system or who desires to use portions of the software outside of the IDAPS system. Each software module is documented with: module name, purpose, usage, common block(s) description, method (algorithm of subroutine), flow diagram (if needed), subroutines called, and storage requirements.

  10. I-SPINE: a software package for advances in image-guided and minimally invasive spine procedures

    NASA Astrophysics Data System (ADS)

    Choi, Jae Jeong; Cleary, Kevin R.; Zeng, Jianchao; Gary, Kevin A.; Freedman, Matthew T.; Watson, Vance; Lindisch, David; Mun, Seong K.

    2000-05-01

    While image guidance is now routinely used in the brain in the form of frameless stereotaxy, it is beginning to be more widely used in other clinical areas such as the spine. At Georgetown University Medical Center, we are developing a program to provide advanced visualization and image guidance for minimally invasive spine procedures. This is a collaboration between an engineering-based research group and physicians from the radiology, neurosurgery, and orthopaedics departments. A major component of this work is the ISIS Center Spine Procedures Imaging and Navigation Engine, which is a software package under development as the base platform for technical advances.

  11. Army technology development. IBIS query. Software to support the Image Based Information System (IBIS) expansion for mapping, charting and geodesy

    NASA Technical Reports Server (NTRS)

    Friedman, S. Z.; Walker, R. E.; Aitken, R. B.

    1986-01-01

    The Image Based Information System (IBIS) has been under development at the Jet Propulsion Laboratory (JPL) since 1975. It is a collection of more than 90 programs that enable processing of image, graphical, and tabular data for spatial analysis. IBIS can be utilized to create comprehensive geographic data bases. From these data, an analyst can study various attributes describing characteristics of a given study area. Even complex combinations of disparate data types can be synthesized to obtain a new perspective on spatial phenomena. In 1984, new query software was developed enabling direct Boolean queries of IBIS data bases through the submission of easily understood expressions. An improved syntax methodology, a data dictionary, and display software simplified the analysts' tasks associated with building, executing, and subsequently displaying the results of a query. The primary purpose of this report is to describe the features and capabilities of the new query software. A secondary purpose of this report is to compare this new query software to the query software developed previously (Friedman, 1982). With respect to this topic, the relative merits and drawbacks of both approaches are covered.
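
    The kind of attribute query described above, a Boolean expression evaluated directly against a tabular geographic data base, can be illustrated with a small modern analogue; the column names and values below are invented and bear no relation to actual IBIS data bases.

    ```python
    # Toy Boolean attribute query over a tabular "geographic data base".
    import pandas as pd

    parcels = pd.DataFrame({
        "land_cover":  ["forest", "urban", "water", "forest"],
        "slope_deg":   [12.0, 3.5, 0.0, 28.0],
        "elevation_m": [420, 210, 180, 910],
    })

    # An easily understood Boolean expression selects the matching records.
    selected = parcels.query("land_cover == 'forest' and slope_deg < 20 and elevation_m > 300")
    print(selected)
    ```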

  12. ORBS: A data reduction software for the imaging Fourier transform spectrometers SpIOMM and SITELLE

    NASA Astrophysics Data System (ADS)

    Martin, T.; Drissen, L.; Joncas, G.

    2012-09-01

    SpIOMM (Spectromètre-Imageur de l'Observatoire du Mont Mégantic) is still the only operational astronomical Imaging Fourier Transform Spectrometer (IFTS) capable of obtaining the visible spectrum of every source of light in a field of view of 12 arc-minutes. Although it was designed to work with both outputs of the Michelson interferometer, up to now only one output has been used. Here we present ORBS (Outils de Réduction Binoculaire pour SpIOMM/SITELLE), the reduction software we designed in order to take advantage of the data from both outputs. ORBS will also be used to reduce the data of SITELLE (Spectromètre-Imageur pour l'Étude en Long et en Large des raies d'Émissions), the direct successor of SpIOMM, which will be in operation at the Canada-France-Hawaii Telescope (CFHT) in early 2013. SITELLE will deliver larger data cubes than SpIOMM (up to 2 cubes of 34 GB each). We have therefore made a strong effort to optimize its performance in terms of speed and memory usage in order to ensure the best compliance with the quality characteristics discussed with the CFHT team. As a result, ORBS is now capable of reducing 68 GB of data in less than 20 hours using only 5 GB of random-access memory (RAM).
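
    At the core of any IFTS reduction, once the frames have been aligned and calibrated, each pixel's interferogram is Fourier transformed to recover its spectrum. The sketch below shows only that core step; the actual ORBS pipeline also handles alignment, detector corrections, phase correction and flux calibration, which are not reproduced here.

    ```python
    # Per-pixel Fourier transform of an interferogram cube into a spectral cube.
    import numpy as np

    def cube_to_spectra(interferogram_cube, opd_step_cm):
        """interferogram_cube: (ny, nx, n_steps) sampled at a constant OPD step (cm)."""
        cube = interferogram_cube - interferogram_cube.mean(axis=2, keepdims=True)  # remove DC level
        spectra = np.abs(np.fft.rfft(cube, axis=2))                  # magnitude spectrum per pixel
        wavenumber = np.fft.rfftfreq(cube.shape[2], d=opd_step_cm)   # wavenumber axis in cm^-1
        return wavenumber, spectra
    ```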

  13. 3D reconstruction of SEM images by use of optical photogrammetry software.

    PubMed

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

    Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science, and many biological questions require information about the true 3D structure of the specimen. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a 3D stereoscopic reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close-range photogrammetry, in which a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special SEM requirements: instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using the software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction and suitable for various applications in research and teaching. PMID:26073969

  14. ViewDEX: A java-based software for presentation and evaluation of medical images in observer performance studies

    NASA Astrophysics Data System (ADS)

    Håkansson, Markus; Svensson, Sune; Båth, Magnus; Månsson, Lars Gunnar

    2007-03-01

    Observer performance studies are time-consuming tasks, both for the participating observers and for the scientists collecting and analyzing the data. A possible way to optimize such studies is to perform the study in a completely digital environment. A software tool, ViewDEX (Viewer for Digital Evaluation of X-ray images), has been developed in Java, enabling it to function on almost any computer. ViewDEX is a DICOM-compatible software tool that can be used to display medical images with simultaneous registration of the observer's response. ViewDEX is designed so that the user can easily alter the types of questions and images presented to the observers, enabling ROC, MAFC and visual grading studies to be conducted in a fast and efficient way. The software can also be used for benchmarking and for educational purposes. The results from each observer are saved in a log file, which can be exported for further analysis. The software is freely available for non-commercial purposes.

  15. Parameter-based estimation of CT dose index and image quality using an in-house android™-based software

    NASA Astrophysics Data System (ADS)

    Mubarok, S.; Lubis, L. E.; Pawiro, S. A.

    2016-03-01

    A compromise between radiation dose and image quality is essential in the use of CT imaging. The CT dose index (CTDI) is currently the primary dosimetric formalism in CT scanning, while low- and high-contrast resolution are aspects indicating image quality. This study aimed to estimate CTDIvol and image quality measures over a range of exposure parameter variations. CTDI measurements were performed using a PMMA (polymethyl methacrylate) phantom of 16 cm diameter, while the image quality test was conducted using a Catphan® 600 phantom. CTDI measurements were carried out according to the IAEA TRS 457 protocol using axial scan mode, under varied tube voltage, collimation or slice thickness, and tube current. The image quality test was conducted under the same exposure parameters as the CTDI measurements. An Android™-based software application was also a result of this study. The software was designed to estimate CTDIvol, with a maximum difference from the measured CTDIvol of 8.97%. Image quality can also be estimated through the CNR parameter, with a maximum difference from the measured CNR of 21.65%.
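
    For context, the dosimetric quantities mentioned above are related by standard formulas: the weighted CTDI combines centre and peripheral CTDI100 measurements in the phantom, and the volume CTDI accounts for the scan pitch (or, for axial scans, the table increment relative to the total collimated width). The helper below simply encodes those textbook relations; it is not the parameter-based estimation model implemented in the Android™ application.

    ```python
    # Standard CTDI relations (IAEA/IEC formalism), not the paper's parameter-based estimator.
    def ctdi_w(ctdi100_center_mgy, ctdi100_periphery_mgy):
        """Weighted CTDI from centre and (average) peripheral CTDI100 measurements, in mGy."""
        return ctdi100_center_mgy / 3.0 + 2.0 * ctdi100_periphery_mgy / 3.0

    def ctdi_vol(ctdi_w_mgy, pitch):
        """Volume CTDI; for axial scans use pitch = table increment / (N x T)."""
        return ctdi_w_mgy / pitch

    # Example: centre 20 mGy, periphery 24 mGy, pitch 1.0 -> about 22.7 mGy
    print(ctdi_vol(ctdi_w(20.0, 24.0), 1.0))
    ```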

  16. Image-based tracking: a new emerging standard

    NASA Astrophysics Data System (ADS)

    Antonisse, Jim; Randall, Scott

    2012-06-01

    Automated moving object detection and tracking are increasingly viewed as solutions to the enormous data volumes resulting from emerging wide-area persistent surveillance systems. In a previous paper we described a Motion Imagery Standards Board (MISB) initiative to help address this problem: the specification of a micro-architecture for the automatic extraction of motion indicators and tracks. This paper reports on the development of an extended specification of the plug-and-play tracking micro-architecture, on its status as an emerging standard across DoD, the Intelligence Community, and NATO.

  17. ImageMiner: a software system for comparative analysis of tissue microarrays using content-based image retrieval, high-performance computing, and grid technology

    PubMed Central

    Foran, David J; Yang, Lin; Hu, Jun; Goodell, Lauri A; Reiss, Michael; Wang, Fusheng; Kurc, Tahsin; Pan, Tony; Sharma, Ashish; Saltz, Joel H

    2011-01-01

    Objective and design The design and implementation of ImageMiner, a software platform for performing comparative analysis of expression patterns in imaged microscopy specimens such as tissue microarrays (TMAs), is described. ImageMiner is a federated system of services that provides a reliable set of analytical and data management capabilities for investigative research applications in pathology. It provides a library of image processing methods, including automated registration, segmentation, feature extraction, and classification, all of which have been tailored, in these studies, to support TMA analysis. The system is designed to leverage high-performance computing machines so that investigators can rapidly analyze large ensembles of imaged TMA specimens. To support deployment in collaborative, multi-institutional projects, ImageMiner features grid-enabled, service-based components so that multiple instances of ImageMiner can be accessed remotely and federated. Results The experimental evaluation shows that: (1) ImageMiner is able to support reliable detection and feature extraction of tumor regions within imaged tissues; (2) images and analysis results managed in ImageMiner can be searched for and retrieved on the basis of image-based features, classification information, and any correlated clinical data, including any metadata that have been generated to describe the specified tissue and TMA; and (3) the system is able to reduce computation time of analyses by exploiting computing clusters, which facilitates analysis of larger sets of tissue samples. PMID:21606133

  18. Application of adaptive optics in retinal imaging: a quantitative and clinical comparison with standard cameras

    NASA Astrophysics Data System (ADS)

    Barriga, E. S.; Erry, G.; Yang, S.; Russell, S.; Raman, B.; Soliz, P.

    2005-04-01

    Aim: The objective of this project was to evaluate high-resolution images from an adaptive optics retinal imager through comparisons with standard film-based and standard digital fundus imagers. Methods: A clinical prototype adaptive optics fundus imager (AOFI) was used to collect retinal images from subjects with various forms of retinopathy to determine whether improved visibility into the disease could be provided to the clinician. The AOFI achieves low-order correction of aberrations through a closed-loop wavefront sensor and an adaptive optics system. The remaining high-order aberrations are removed by direct deconvolution using the point spread function (PSF), or by blind deconvolution when the PSF is not available. An ophthalmologist compared the AOFI images with standard fundus images and provided a clinical evaluation of all the modalities and processing techniques. All images were also analyzed using a quantitative image quality index. Results: This system has been tested on three human subjects (one normal and two with retinopathy). In the diabetic patient, vascular abnormalities were detected with the AOFI that could not be resolved with the standard fundus camera. Very small features, such as the fine vascular structures on the optic disc and the individual nerve fiber bundles, are easily resolved by the AOFI. Conclusion: This project demonstrated that adaptive optics images have great potential in providing clinically significant detail of anatomical and pathological structures to the ophthalmologist.
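
    When the PSF has been measured, the direct deconvolution step mentioned above can be performed with a regularized Wiener filter; a minimal illustration using scikit-image is shown below. The regularization weight is an arbitrary example value, and blind deconvolution (PSF unknown) requires different algorithms not shown here.

    ```python
    # Direct (Wiener) deconvolution of a retinal image with a known PSF.
    import numpy as np
    from skimage import restoration

    def deconvolve_with_psf(image, psf, balance=0.05):
        """image: float image in [0, 1]; psf: point spread function array."""
        psf = psf / psf.sum()                       # ensure a unit-sum PSF
        return restoration.wiener(image, psf, balance)
    ```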

  19. DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research.

    PubMed

    Fedorov, Andriy; Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R

    2016-01-01

    Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM(®)) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions
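
    One of the processing steps listed above, Standardized Uptake Value (SUV) normalization, converts PET voxel activity concentrations into body-weight-normalized values using the injected dose decay-corrected to scan time. The helper below encodes that widely used definition; in the work described here the relevant quantities are read from DICOM attributes and the results stored via DICOM Real World Value Mapping, which is not reproduced in this sketch.

    ```python
    # Body-weight SUV normalization of a PET activity-concentration value (or array).
    def suv_bw(activity_bq_per_ml, injected_dose_bq, patient_weight_kg,
               time_since_injection_s, half_life_s=6586.2):   # default: F-18 half-life in seconds
        """SUVbw = activity [Bq/ml] * weight [g] / dose decay-corrected to scan time [Bq]."""
        decayed_dose_bq = injected_dose_bq * 0.5 ** (time_since_injection_s / half_life_s)
        return activity_bq_per_ml * patient_weight_kg * 1000.0 / decayed_dose_bq

    # Example: 5000 Bq/ml voxel, 370 MBq injected, 70 kg patient, scan 60 min post-injection.
    print(suv_bw(5000.0, 370e6, 70.0, 3600.0))
    ```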

  20. DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research

    PubMed Central

    Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R.

    2016-01-01

    Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions

  1. ScanSAR interferometric processing using existing standard InSAR software for measuring large scale land deformation

    NASA Astrophysics Data System (ADS)

    Liang, Cunren; Zeng, Qiming; Jia, Jianying; Jiao, Jian; Cui, Xi'ai

    2013-02-01

    Scanning synthetic aperture radar (ScanSAR) mode is an efficient way to map large-scale geophysical phenomena at low cost. The work presented in this paper is dedicated to ScanSAR interferometric processing and its implementation by making full use of existing standard interferometric synthetic aperture radar (InSAR) software. We first discuss the properties of the ScanSAR signal and its phase-preserved focusing using the full-aperture algorithm in terms of interferometry. Then a complete interferometric processing flow is proposed. The standard ScanSAR product is decoded subswath by subswath with burst gaps padded with zero-pulses, followed by a Doppler centroid frequency estimation for each subswath and a polynomial fit over all of the subswaths for the whole scene. The burst synchronization of the interferometric pair is then calculated, and only the synchronized pulses are kept for further interferometric processing. After the complex conjugate multiplication of the interferometric pair, the residual non-integer pulse repetition interval (PRI) part between adjacent bursts caused by zero padding is compensated by resampling with a sinc kernel. The subswath interferograms are then mosaicked, using a proposed method to remove the subswath discontinuities in the overlap area. From this point the interferometric processing follows the traditional stripmap processing flow. A processor written in C and Fortran and controlled by Perl scripts was developed to implement these algorithms and the processing flow based on the JPL/Caltech Repeat Orbit Interferometry PACkage (ROI_PAC). Finally, we use the processor to process ScanSAR data from the Envisat and ALOS satellites and obtain large-scale deformation maps in the radar line-of-sight (LOS) direction.
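
    The central interferometric step named above, complex conjugate multiplication of the co-registered master and slave single-look complex (SLC) images, can be sketched as follows; the multilook factors are arbitrary, and the surrounding steps (burst synchronization, resampling, mosaicking, unwrapping) handled by the ROI_PAC-based processor are not shown.

    ```python
    # Interferogram formation from two co-registered SLC images, with simple multilooking.
    import numpy as np

    def form_interferogram(master_slc, slave_slc, looks_az=4, looks_rg=1):
        """Return interferometric phase and amplitude after conjugate multiplication."""
        ifg = master_slc * np.conj(slave_slc)
        ny = (ifg.shape[0] // looks_az) * looks_az
        nx = (ifg.shape[1] // looks_rg) * looks_rg
        ifg = ifg[:ny, :nx].reshape(ny // looks_az, looks_az,
                                    nx // looks_rg, looks_rg).mean(axis=(1, 3))
        return np.angle(ifg), np.abs(ifg)
    ```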

  2. Standardization in the field of medical image management: the contribution of the MIMOSA model.

    PubMed

    Gibaud, B; Garfagni, H; Aubry, F; Pokropek, A T; Chameroy, V; Bizais, Y; Di Paola, R

    1998-02-01

    This paper deals with the development of standards in the field of medical imaging and picture archiving and communication systems (PACS's), notably concerning the interworking between PACS's and hospital information systems (HIS). It explains, in detail, how a conceptual model of the management of medical images, such as the medical image management in an open system architecture (MIMOSA) model, can contribute to the development of standards for medical image management and PACS's. This contribution is twofold: 1) since the model lists and structures the concepts and resources involved in making the images available to the users when and where they are required, and describes the interactions between PACS components and the HIS, the MIMOSA work helps by defining a reference architecture which includes an external description of the various components of a PACS and a logical structure for assembling them. 2) The model and the implementation of a demonstrator based on this model allow the relevance of the Digital Imaging and Communications in Medicine (DICOM) standard with respect to image management issues to be assessed, highlighting some current limitations of this standard and proposing extensions. Such a twofold action is necessary both to bring solutions, even partial ones, in the short term, and to allow for the convergence, in the long term, of the standards developed by independent standardization groups in medical informatics (e.g., those within Technical Committee 251 of CEN: Comité Européen de Normalisation). PMID:9617908

  3. Acceptance test of a commercially available software for automatic image registration of computed tomography (CT), magnetic resonance imaging (MRI) and 99mTc-methoxyisobutylisonitrile (MIBI) single-photon emission computed tomography (SPECT) brain images.

    PubMed

    Loi, Gianfranco; Dominietto, Marco; Manfredda, Irene; Mones, Eleonora; Carriero, Alessandro; Inglese, Eugenio; Krengli, Marco; Brambilla, Marco

    2008-09-01

    This note describes a method to characterize the performance of image fusion software (Syntegra) with respect to accuracy and robustness. Computed tomography (CT), magnetic resonance imaging (MRI), and single-photon emission computed tomography (SPECT) studies were acquired from two phantoms and 10 patients. Image registration was performed independently by two teams, each composed of one radiotherapist and one physicist, by means of superposition of anatomic landmarks. Each team jointly performed and saved the registration, and the two solutions were averaged to obtain the gold-standard registration. A new set of estimators was defined to identify translation and rotation errors along the coordinate axes, independently of point position in the image field of view (FOV). The algorithms evaluated were local correlation (LC) for CT-MRI registrations and normalized mutual information (MI) for CT-MRI and CT-SPECT registrations. To evaluate accuracy, estimator values were compared to limiting values for the algorithms employed, both in phantoms and in patients. To evaluate robustness, different alignments between images taken from a sample patient were produced and the registration errors determined. The LC algorithm proved accurate for CT-MRI registrations in phantoms but exceeded the limiting values in 3 of 10 patients. The MI algorithm proved accurate for CT-MRI and CT-SPECT registrations in phantoms; the limiting values were exceeded in one CT-MRI case and never reached in CT-SPECT registrations. Thus, the evaluation of robustness was restricted to the MI algorithm for both CT-MRI and CT-SPECT registrations. The MI algorithm proved to be robust: limiting values were not exceeded with translation perturbations up to 2.5 cm, rotation perturbations up to 10 degrees, and roto-translational perturbations up to 3 cm and 5 degrees. PMID:17549564
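
    The mutual-information similarity measure referred to above can be computed from the joint intensity histogram of the two images; a minimal sketch of the normalized form is given below. This is the generic textbook formulation, not the proprietary implementation in Syntegra.

    ```python
    # Normalized mutual information of two images from their joint intensity histogram.
    import numpy as np

    def normalized_mutual_information(image_a, image_b, bins=64):
        """NMI = (H(A) + H(B)) / H(A, B); higher values indicate better alignment."""
        joint, _, _ = np.histogram2d(image_a.ravel(), image_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        entropy = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))   # Shannon entropy in nats
        return (entropy(px) + entropy(py)) / entropy(pxy)
    ```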

  4. First experiences with the implementation of the European standard EN 62304 on medical device software for the quality assurance of a radiotherapy unit

    PubMed Central

    2014-01-01

    Background According to the latest amendment of the Medical Device Directive standalone software qualifies as a medical device when intended by the manufacturer to be used for medical purposes. In this context, the EN 62304 standard is applicable which defines the life-cycle requirements for the development and maintenance of medical device software. A pilot project was launched to acquire skills in implementing this standard in a hospital-based environment (in-house manufacture). Methods The EN 62304 standard outlines minimum requirements for each stage of the software life-cycle, defines the activities and tasks to be performed and scales documentation and testing according to its criticality. The required processes were established for the pre-existent decision-support software FlashDumpComparator (FDC) used during the quality assurance of treatment-relevant beam parameters. As the EN 62304 standard implicates compliance with the EN ISO 14971 standard on the application of risk management to medical devices, a risk analysis was carried out to identify potential hazards and reduce the associated risks to acceptable levels. Results The EN 62304 standard is difficult to implement without proper tools, thus open-source software was selected and integrated into a dedicated development platform. The control measures yielded by the risk analysis were independently implemented and verified, and a script-based test automation was retrofitted to reduce the associated test effort. After all documents facilitating the traceability of the specified requirements to the corresponding tests and of the control measures to the proof of execution were generated, the FDC was released as an accessory to the HIT facility. Conclusions The implementation of the EN 62304 standard was time-consuming, and a learning curve had to be overcome during the first iterations of the associated processes, but many process descriptions and all software tools can be re-utilized in follow-up projects

  5. Software safety

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy

    1987-01-01

    Software safety and its relationship to other qualities are discussed. It is shown that standard reliability and fault tolerance techniques will not solve the safety problem for the present. A new attitude is required: looking at what you do NOT want the software to do along with what you want it to do, and assuming that things will go wrong. New procedures and changes to the entire software development process are necessary: special software safety analysis techniques are needed, and design techniques, especially eliminating complexity, can be very helpful.

  6. Quantitative comparison and evaluation of software packages for assessment of abdominal adipose tissue distribution by magnetic resonance imaging

    PubMed Central

    Bonekamp, S; Ghosh, P; Crawford, S; Solga, SF; Horska, A; Brancati, FL; Diehl, AM; Smith, S; Clark, JM

    2009-01-01

    Objective To examine five available software packages for the assessment of abdominal adipose tissue with magnetic resonance imaging, compare their features and assess the reliability of measurement results. Design Feature evaluation and test–retest reliability of software packages (NIHImage, SliceOmatic, Analyze, HippoFat and EasyVision) used in manual, semi-automated or automated segmentation of abdominal adipose tissue. Subjects A random sample of 15 obese adults with type 2 diabetes. Measurements Axial T1-weighted spin echo images centered at vertebral bodies of L2–L3 were acquired at 1.5 T. The five software packages were evaluated comparing manual, semi-automated and automated segmentation approaches. Images were segmented into cross-sectional area (CSA) and the areas of visceral (VAT) and subcutaneous adipose tissue (SAT). Ease of learning and use and the design of the graphical user interface (GUI) were rated. Intra-observer accuracy and agreement between the software packages were calculated using intra-class correlation, and the intra-class correlation coefficient was used to obtain test–retest reliability. Results Three of the five evaluated programs offered a semi-automated technique to segment the images based on histogram values or a user-defined threshold. One software package allowed manual delineation only. One fully automated program demonstrated the drawbacks of uncritical automated processing. The semi-automated approaches reduced variability and measurement error, and improved reproducibility. There was no significant difference in the intra-observer agreement in SAT and CSA. The VAT measurements showed significantly lower test–retest reliability. There were some differences between the software packages in qualitative aspects, such as user friendliness. Conclusion Four out of five packages provided essentially the same results with respect to the inter- and intra-rater reproducibility. Our

  7. Osteolytica: An automated image analysis software package that rapidly measures cancer-induced osteolytic lesions in in vivo models with greater reproducibility compared to other commonly used methods

    PubMed Central

    Evans, H.R.; Karmakharm, T.; Lawson, M.A.; Walker, R.E.; Harris, W.; Fellows, C.; Huggins, I.D.; Richmond, P.; Chantry, A.D.

    2016-01-01

    Methods currently used to analyse osteolytic lesions caused by malignancies such as multiple myeloma and metastatic breast cancer vary from basic 2-D X-ray analysis to 2-D images of micro-CT datasets analysed with non-specialised image software such as ImageJ. However, these methods have significant limitations. They do not capture 3-D data, they are time-consuming and they often suffer from inter-user variability. We therefore sought to develop a rapid and reproducible method to analyse 3-D osteolytic lesions in mice with cancer-induced bone disease. To this end, we have developed Osteolytica, an image analysis software method featuring an easy to use, step-by-step interface to measure lytic bone lesions. Osteolytica utilises novel graphics card acceleration (parallel computing) and 3-D rendering to provide rapid reconstruction and analysis of osteolytic lesions. To evaluate the use of Osteolytica we analysed tibial micro-CT datasets from murine models of cancer-induced bone disease and compared the results to those obtained using a standard ImageJ analysis method. Firstly, to assess inter-user variability we deployed four independent researchers to analyse tibial datasets from the U266-NSG murine model of myeloma. Using ImageJ, inter-user variability between the bones was substantial (± 19.6%), in contrast to using Osteolytica, which demonstrated minimal variability (± 0.5%). Secondly, tibial datasets from U266-bearing NSG mice or BALB/c mice injected with the metastatic breast cancer cell line 4T1 were compared to tibial datasets from aged and sex-matched non-tumour control mice. Analyses by both Osteolytica and ImageJ showed significant increases in bone lesion area in tumour-bearing mice compared to control mice. These results confirm that Osteolytica performs as well as the current 2-D ImageJ osteolytic lesion analysis method. However, Osteolytica is advantageous in that it analyses over the entirety of the bone volume (as opposed to selected 2-D images

  8. Osteolytica: An automated image analysis software package that rapidly measures cancer-induced osteolytic lesions in in vivo models with greater reproducibility compared to other commonly used methods.

    PubMed

    Evans, H R; Karmakharm, T; Lawson, M A; Walker, R E; Harris, W; Fellows, C; Huggins, I D; Richmond, P; Chantry, A D

    2016-02-01

    Methods currently used to analyse osteolytic lesions caused by malignancies such as multiple myeloma and metastatic breast cancer vary from basic 2-D X-ray analysis to 2-D images of micro-CT datasets analysed with non-specialised image software such as ImageJ. However, these methods have significant limitations. They do not capture 3-D data, they are time-consuming and they often suffer from inter-user variability. We therefore sought to develop a rapid and reproducible method to analyse 3-D osteolytic lesions in mice with cancer-induced bone disease. To this end, we have developed Osteolytica, an image analysis software method featuring an easy to use, step-by-step interface to measure lytic bone lesions. Osteolytica utilises novel graphics card acceleration (parallel computing) and 3-D rendering to provide rapid reconstruction and analysis of osteolytic lesions. To evaluate the use of Osteolytica we analysed tibial micro-CT datasets from murine models of cancer-induced bone disease and compared the results to those obtained using a standard ImageJ analysis method. Firstly, to assess inter-user variability we deployed four independent researchers to analyse tibial datasets from the U266-NSG murine model of myeloma. Using ImageJ, inter-user variability between the bones was substantial (±19.6%), in contrast to using Osteolytica, which demonstrated minimal variability (±0.5%). Secondly, tibial datasets from U266-bearing NSG mice or BALB/c mice injected with the metastatic breast cancer cell line 4T1 were compared to tibial datasets from aged and sex-matched non-tumour control mice. Analyses by both Osteolytica and ImageJ showed significant increases in bone lesion area in tumour-bearing mice compared to control mice. These results confirm that Osteolytica performs as well as the current 2-D ImageJ osteolytic lesion analysis method. However, Osteolytica is advantageous in that it analyses over the entirety of the bone volume (as opposed to selected 2-D images), it

  9. 3DVIEWNIX-AVS: a software package for the separate visualization of arteries and veins in CE-MRA images.

    PubMed

    Lei, Tianhu; Udupa, Jayaram K; Odhner, Dewey; Nyúl, László G; Saha, Punam K

    2003-01-01

    Our earlier study developed a computerized method, based on fuzzy connected object delineation principles and algorithms, for artery and vein separation in contrast-enhanced Magnetic Resonance Angiography (CE-MRA) images. This paper reports on its further development into a software package for routine clinical use. The software package, termed 3DVIEWNIX-AVS, consists of the following major operational parts: (1) converting data from DICOM3 to 3DVIEWNIX format, (2) previewing slices and creating the VOI and MIP shell, (3) segmenting vessels, (4) separating arteries and veins, and (5) shell rendering vascular structures and creating animations. This package has been applied to EPIX Medical Inc's CE-MRA data (AngioMark MS-325). One hundred and thirty-five original CE-MRA data sets (of 52 patients) from 6 hospitals have been processed. In all case studies, unified parameter settings produced correct artery-vein separation. The current package runs on a Pentium PC under Linux and the total computation time per study is about 3 min. The strengths of this software package are (1) minimal user interaction, (2) minimal requirements for anatomic knowledge of the human vascular system, (3) clinically required speed, (4) free entry to any operational stage, (5) reproducible, reliable, high-quality results, and (6) cost-effective computer implementation. To date, it seems to be the only software package (using an image processing approach) available for artery and vein separation of the human vascular system for routine use in a clinical setting. PMID:12821028

  10. EVALUATION OF DOSE REDUCTION POTENTIALS OF A NOVEL SCATTER CORRECTION SOFTWARE FOR BEDSIDE CHEST X-RAY IMAGING.

    PubMed

    Renger, Bernhard; Brieskorn, Carina; Toth, Vivien; Mentrup, Detlef; Jockel, Sascha; Lohöfer, Fabian; Schwarz, Martin; Rummeny, Ernst J; Noël, Peter B

    2016-06-01

    Bedside chest X-rays (CXR) for catheter position control may add up to a considerable radiation dose for patients in the intensive care unit (ICU). In this study, image quality and dose reduction potentials of a novel X-ray scatter correction software (SkyFlow, Philips Healthcare, Hamburg, Germany) were evaluated. CXRs of a 'LUNGMAN' (Kyoto Kagaku Co., LTD, Kyoto, Japan) thoracic phantom with a portacath system, a central venous line and a dialysis catheter were performed in an experimental set-up with multiple tube voltage and tube current settings, without and with an antiscatter grid. Images with a diagnostic exposure index (EI) of 250-500 were evaluated for the difference in applied mAs with and without the antiscatter grid. Three radiologists subjectively assessed the diagnostic image quality of grid and non-grid images. Compared with non-grid imaging, use of an antiscatter grid required twice the mAs to reach a diagnostic EI. SkyFlow significantly improved the quality of images acquired without a grid. CXR with a grid provided better image contrast than grid-less imaging with scatter correction. PMID:26977074

  11. Image processing of standard grading scales for objective assessment of contact lens wear complications

    NASA Astrophysics Data System (ADS)

    Perez-Cabre, Elisabet; Millan, Maria S.; Abril, Hector C.; Otxoa, E.

    2004-10-01

    Ocular complications in contact lens wearers are usually graded by specialists using visual inspection and comparison with established standards. The standard grading scales consist of either a set of illustrations or photographs ordered from a normal situation to a severe complication. In this work, we aim at an objective assessment of contact lens wear complications by applying different image processing techniques to two standard grading scales (the Efron and CCLRU grading scales). In particular, conjunctival hyperemia and papillary conjunctivitis are considered. Given a set of standard illustrations or pictures for each considered ocular disorder, image preprocessing is needed to compare equivalent areas. Histogram analysis allows segmenting the vessel and background pixel populations, which are used to determine the most relevant features in the measurement of contact lens effects. Features such as color, total vessel area and vessel length are used to evaluate bulbar and lid redness. The procedure to obtain an automatic grading method by digital image analysis of standard grading scales is described.
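
    As a rough illustration of the histogram-based segmentation and redness features described above, the snippet below thresholds a per-pixel relative-redness map with Otsu's method and reports two simple features. The feature definitions are invented for the example and are not the ones derived in the paper.

    ```python
    # Toy bulbar-redness features: histogram (Otsu) threshold on a relative-redness map.
    import numpy as np
    from skimage import filters

    def bulbar_redness_features(rgb):
        """rgb: (ny, nx, 3) image of the conjunctival region."""
        rgb = rgb.astype(np.float64)
        redness = rgb[..., 0] / (rgb.sum(axis=-1) + 1e-6)     # red fraction per pixel
        vessels = redness > filters.threshold_otsu(redness)   # vessel vs background split
        return {
            "vessel_area_fraction": float(vessels.mean()),
            "mean_vessel_redness": float(redness[vessels].mean()) if vessels.any() else 0.0,
        }
    ```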

  12. ACR/NEMA Digital Image Interface Standard (An Illustrated Protocol Overview)

    NASA Astrophysics Data System (ADS)

    Lawrence, G. Robert

    1985-09-01

    The American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) have sponsored a joint standards committee mandated to develop a universal interface standard for the transfer of radiology images among a variety of PACS imaging devices. The resulting standard interface conforms to the ISO/OSI reference model for network protocol layering. It specifies the lower layers of the reference model (Physical, Data Link, Transport and Session) and implies a requirement for the Network layer should a network be present. The message content has also been considered, and a flexible message and image format is specified. The following imaging equipment modalities are supported by the standard interface: CT (Computed Tomography), DS (Digital Subtraction), NM (Nuclear Medicine), US (Ultrasound), MR (Magnetic Resonance) and DR (Digital Radiology). The following data types are standardized over the transmission interface: image data, digitized voice, header data, raw data, text reports, graphics and others. This paper consists of text supporting the illustrated protocol data flow. Each layer is treated individually, with particular emphasis on the Data Link layer (frames) and the Transport layer (packets). The discussion utilizes a finite-state sequential machine model for the protocol layers.

  13. Three-Dimensional Root Phenotyping with a Novel Imaging and Software Platform

    PubMed Central

    Clark, Randy T.; MacCurdy, Robert B.; Jung, Janelle K.; Shaff, Jon E.; McCouch, Susan R.; Aneshansley, Daniel J.; Kochian, Leon V.

    2011-01-01

    A novel imaging and software platform was developed for the high-throughput phenotyping of three-dimensional root traits during seedling development. To demonstrate the platform’s capacity, plants of two rice (Oryza sativa) genotypes, Azucena and IR64, were grown in a transparent gellan gum system and imaged daily for 10 d. Rotational image sequences consisting of 40 two-dimensional images were captured using an optically corrected digital imaging system. Three-dimensional root reconstructions were generated and analyzed using a custom-designed software, RootReader3D. Using the automated and interactive capabilities of RootReader3D, five rice root types were classified and 27 phenotypic root traits were measured to characterize these two genotypes. Where possible, measurements from the three-dimensional platform were validated and were highly correlated with conventional two-dimensional measurements. When comparing gellan gum-grown plants with those grown under hydroponic and sand culture, significant differences were detected in morphological root traits (P < 0.05). This highly flexible platform provides the capacity to measure root traits with a high degree of spatial and temporal resolution and will facilitate novel investigations into the development of entire root systems or selected components of root systems. In combination with the extensive genetic resources that are now available, this platform will be a powerful resource to further explore the molecular and genetic determinants of root system architecture. PMID:21454799

  14. Sandia software guidelines: Software quality planning

    SciTech Connect

    Not Available

    1987-08-01

    This volume is one in a series of Sandia Software Guidelines intended for use in producing quality software within Sandia National Laboratories. In consonance with the IEEE Standard for Software Quality Assurance Plans, this volume identifies procedures to follow in producing a Software Quality Assurance Plan for an organization or a project, and provides an example project SQA plan. 2 figs., 4 tabs.

  15. Interference-free ultrasound imaging during HIFU therapy, using software tools

    NASA Technical Reports Server (NTRS)

    Vaezy, Shahram (Inventor); Held, Robert (Inventor); Sikdar, Siddhartha (Inventor); Managuli, Ravi (Inventor); Zderic, Vesna (Inventor)

    2010-01-01

    Disclosed herein is a method for obtaining a composite interference-free ultrasound image when non-imaging ultrasound waves would otherwise interfere with ultrasound imaging. A conventional ultrasound imaging system is used to collect frames of ultrasound image data in the presence of non-imaging ultrasound waves, such as high-intensity focused ultrasound (HIFU). The frames are directed to a processor that analyzes the frames to identify portions of the frame that are interference-free. Interference-free portions of a plurality of different ultrasound image frames are combined to generate a single composite interference-free ultrasound image that is displayed to a user. In this approach, a frequency of the non-imaging ultrasound waves is offset relative to a frequency of the ultrasound imaging waves, such that the interference introduced by the non-imaging ultrasound waves appears in a different portion of the frames.
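
    To give a feel for the compositing idea described in this abstract, the toy function below picks, for each image block, the block from whichever frame shows the least high-intensity interference and assembles a composite from those blocks. The block size and the interference metric (block maximum) are invented for the illustration and do not represent the patented method.

    ```python
    # Toy composite builder: per block, keep the least-interfered frame (heuristic metric).
    import numpy as np

    def composite_interference_free(frames, block=16):
        """frames: (n_frames, ny, nx) ultrasound frames with interference in different regions."""
        n, ny, nx = frames.shape
        out = np.empty((ny, nx), dtype=frames.dtype)
        for y in range(0, ny, block):
            for x in range(0, nx, block):
                tiles = frames[:, y:y + block, x:x + block]
                best = int(np.argmin(tiles.reshape(n, -1).max(axis=1)))  # lowest peak intensity
                out[y:y + block, x:x + block] = tiles[best]
        return out
    ```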

  16. SU-E-J-42: Customized Deformable Image Registration Using Open-Source Software SlicerRT

    SciTech Connect

    Gaitan, J Cifuentes; Chin, L; Pignol, J; Kirby, N; Pouliot, J; Lasso, A; Pinter, C; Fichtinger, G

    2014-06-01

    Purpose: SlicerRT is a flexible platform that allows the user to incorporate the necessary image registration and processing tools to improve clinical workflow. This work validates the accuracy and versatility of the deformable image registration (DIR) algorithm of the free open-source software SlicerRT, using a deformable physical pelvic phantom, against available commercial image fusion algorithms. Methods: Optical camera images of nonradiopaque markers implanted in an anatomical pelvic phantom were used to measure the ground-truth deformation and evaluate the theoretical deformations for several DIR algorithms. To perform the registration, full- and empty-bladder computed tomography (CT) images of the phantom were obtained and used as the fixed and moving images, respectively. The DIR module found in SlicerRT used a B-spline deformable image registration with multiple optimization parameters that allowed customization of the registration, including a regularization term that controlled the amount of local voxel displacement. The virtual deformation field at the center of the phantom was obtained and compared to the experimental ground-truth values. The parameters of SlicerRT were then varied to improve spatial accuracy. To quantify image similarity, the mean absolute difference (MAD) in Hounsfield units was calculated. In addition, the Dice coefficient of the contoured rectum was evaluated to validate the ability of the algorithm to transfer anatomical contours. Results: Overall, SlicerRT achieved one of the lowest MAD values across the algorithm spectrum and slightly smaller mean spatial errors in comparison to MIM software (MIM). On the other hand, SlicerRT produced higher mean spatial errors than Velocity Medical Solutions (VEL), although it achieved an improved Dice coefficient of 0.91. The large spatial errors were attributed to the poor contrast at the prostate-bladder interface of the phantom. Conclusion: Based on phantom validation, SlicerRT is capable of
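
    The two evaluation metrics used above are simple to state: the mean absolute difference (MAD) compares voxel intensities in Hounsfield units between the fixed and the deformed image, and the Dice coefficient measures the overlap of two binary contours. A minimal sketch follows; the inputs are assumed to be pre-aligned NumPy volumes, and this is not the SlicerRT evaluation code itself.

    ```python
    # Mean absolute HU difference and Dice overlap for evaluating a deformable registration.
    import numpy as np

    def mean_absolute_difference(fixed_hu, warped_hu):
        """MAD between the fixed image and the deformed (warped) moving image."""
        return float(np.mean(np.abs(fixed_hu - warped_hu)))

    def dice_coefficient(mask_a, mask_b):
        """Dice = 2|A intersect B| / (|A| + |B|) for two binary structure masks."""
        mask_a, mask_b = mask_a.astype(bool), mask_b.astype(bool)
        denom = mask_a.sum() + mask_b.sum()
        return 2.0 * np.logical_and(mask_a, mask_b).sum() / denom if denom else 1.0
    ```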

  17. Nuquantus: Machine learning software for the characterization and quantification of cell nuclei in complex immunofluorescent tissue images

    PubMed Central

    Gross, Polina; Honnorat, Nicolas; Varol, Erdem; Wallner, Markus; Trappanese, Danielle M.; Sharp, Thomas E.; Starosta, Timothy; Duran, Jason M.; Koller, Sarah; Davatzikos, Christos; Houser, Steven R.

    2016-01-01

    Determination of fundamental mechanisms of disease often hinges on histopathology visualization and quantitative image analysis. Currently, the analysis of multi-channel fluorescence tissue images is primarily achieved by manual measurements of tissue cellular content and sub-cellular compartments. Since the current manual methodology for image analysis is a tedious and subjective approach, there is clearly a need for an automated analytical technique to process large-scale image datasets. Here, we introduce Nuquantus (Nuclei quantification utility software) - a novel machine learning-based analytical method, which identifies, quantifies and classifies nuclei based on cells of interest in composite fluorescent tissue images, in which cell borders are not visible. Nuquantus is an adaptive framework that learns the morphological attributes of intact tissue in the presence of anatomical variability and pathological processes. Nuquantus allowed us to robustly perform quantitative image analysis on remodeling cardiac tissue after myocardial infarction. Nuquantus reliably classifies cardiomyocyte versus non-cardiomyocyte nuclei and detects cell proliferation, as well as cell death in different cell classes. Broadly, Nuquantus provides innovative computerized methodology to analyze complex tissue images that significantly facilitates image analysis and minimizes human bias. PMID:27005843

  18. Nuquantus: Machine learning software for the characterization and quantification of cell nuclei in complex immunofluorescent tissue images

    NASA Astrophysics Data System (ADS)

    Gross, Polina; Honnorat, Nicolas; Varol, Erdem; Wallner, Markus; Trappanese, Danielle M.; Sharp, Thomas E.; Starosta, Timothy; Duran, Jason M.; Koller, Sarah; Davatzikos, Christos; Houser, Steven R.

    2016-03-01

    Determination of fundamental mechanisms of disease often hinges on histopathology visualization and quantitative image analysis. Currently, the analysis of multi-channel fluorescence tissue images is primarily achieved by manual measurements of tissue cellular content and sub-cellular compartments. Since the current manual methodology for image analysis is a tedious and subjective approach, there is clearly a need for an automated analytical technique to process large-scale image datasets. Here, we introduce Nuquantus (Nuclei quantification utility software) - a novel machine learning-based analytical method, which identifies, quantifies and classifies nuclei based on cells of interest in composite fluorescent tissue images, in which cell borders are not visible. Nuquantus is an adaptive framework that learns the morphological attributes of intact tissue in the presence of anatomical variability and pathological processes. Nuquantus allowed us to robustly perform quantitative image analysis on remodeling cardiac tissue after myocardial infarction. Nuquantus reliably classifies cardiomyocyte versus non-cardiomyocyte nuclei and detects cell proliferation, as well as cell death in different cell classes. Broadly, Nuquantus provides innovative computerized methodology to analyze complex tissue images that significantly facilitates image analysis and minimizes human bias.

  19. Nuquantus: Machine learning software for the characterization and quantification of cell nuclei in complex immunofluorescent tissue images.

    PubMed

    Gross, Polina; Honnorat, Nicolas; Varol, Erdem; Wallner, Markus; Trappanese, Danielle M; Sharp, Thomas E; Starosta, Timothy; Duran, Jason M; Koller, Sarah; Davatzikos, Christos; Houser, Steven R

    2016-01-01

    Determination of fundamental mechanisms of disease often hinges on histopathology visualization and quantitative image analysis. Currently, the analysis of multi-channel fluorescence tissue images is primarily achieved by manual measurements of tissue cellular content and sub-cellular compartments. Since the current manual methodology for image analysis is a tedious and subjective approach, there is clearly a need for an automated analytical technique to process large-scale image datasets. Here, we introduce Nuquantus (Nuclei quantification utility software) - a novel machine learning-based analytical method, which identifies, quantifies and classifies nuclei based on cells of interest in composite fluorescent tissue images, in which cell borders are not visible. Nuquantus is an adaptive framework that learns the morphological attributes of intact tissue in the presence of anatomical variability and pathological processes. Nuquantus allowed us to robustly perform quantitative image analysis on remodeling cardiac tissue after myocardial infarction. Nuquantus reliably classifies cardiomyocyte versus non-cardiomyocyte nuclei and detects cell proliferation, as well as cell death in different cell classes. Broadly, Nuquantus provides innovative computerized methodology to analyze complex tissue images that significantly facilitates image analysis and minimizes human bias. PMID:27005843

  20. New solutions for standardization, monitoring and quality management of fluorescence-based imaging systems (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Royon, Arnaud; Papon, Gautier

    2016-03-01

    Fluorescence microscopes have become ubiquitous in life sciences laboratories, including those focused on pharmaceuticals, diagnosis, and forensics. For the past few years, the need for both performance guarantees and quantifiable results has driven development in this area. However, the lack of appropriate standards and reference materials makes it difficult or impossible to compare the results of two fluorescence microscopes, or to measure performance fluctuations of one microscope over time. Therefore, the operation of fluorescence microscopes is not monitored as often as their use warrants - an issue that is recognized by both system manufacturers and national metrology institutes. We have developed a new process that enables the etching of long-term stable fluorescent patterns with sub-micrometer sizes in three dimensions inside glass. In this paper, we present, based on this new process, a fluorescent multi-dimensional ruler and dedicated software that are suitable for monitoring and quality management of fluorescence-based imaging systems (wide-field, confocal, multiphoton, high-content machines). In addition to fluorescence, the same patterns exhibit bright- and dark-field contrast, DIC, and phase contrast, which also makes them relevant for monitoring these types of microscopes. Non-exhaustively, this new solution enables the measurement of: the stage repositioning accuracy; the illumination and detection homogeneities; the field flatness; the detectors' characteristics; the lateral and axial spatial resolutions; and the spectral response (spectrum, intensity and lifetime) of the system. Thanks to the stability of the patterns, microscope performance assessment can be carried out on a daily basis as well as over the long term.

  1. Development of image and information management system for Korean standard brain

    NASA Astrophysics Data System (ADS)

    Chung, Soon Cheol; Choi, Do Young; Tack, Gye Rae; Sohn, Jin Hun

    2004-04-01

    The purpose of this study is to establish a reference for image acquisition toward a standard brain for a diverse Korean population, and to develop a database management system that saves and manages the acquired brain images and personal information of subjects. The 3D MP-RAGE (Magnetization Prepared Rapid Gradient Echo) technique, which offers excellent Signal to Noise Ratio (SNR) and Contrast to Noise Ratio (CNR) as well as reduced image acquisition time, was selected for anatomical image acquisition, and parameter values were determined for optimal image acquisition. Using these standards, image data of 121 young adults (in their early twenties) were obtained and stored in the system. The system was designed to obtain, save, and manage not only anatomical image data but also subjects' basic demographic factors, medical history, handedness inventory, state-trait anxiety inventory, A-type personality inventory, self-assessment depression inventory, mini-mental state examination, intelligence test, and the results of a personality test administered via a survey questionnaire. Additionally, the system was designed to provide functions for saving, inserting, deleting, searching, and printing subjects' image data and personal information, with access to them as well as automatic connection setup via ODBC. This newly developed system may make a major contribution to the completion of a standard brain for a diverse Korean population, since it can save and manage their image data and personal information.

  2. Nonrigid registration of joint histograms for intensity standardization in magnetic resonance imaging.

    PubMed

    Jäger, Florian; Hornegger, Joachim

    2009-01-01

    A major disadvantage of magnetic resonance imaging (MRI) compared to other imaging modalities like computed tomography is the fact that its intensities are not standardized. Our contribution is a novel method for MRI signal intensity standardization of arbitrary MRI scans, so as to create a pulse sequence dependent standard intensity scale. The proposed method is the first approach that uses the properties of all acquired images jointly (e.g., T1- and T2-weighted images). The image properties are stored in multidimensional joint histograms. In order to normalize the probability density function (pdf) of a newly acquired data set, a nonrigid image registration is performed between a reference and the joint histogram of the acquired images. From this matching a nonparametric transformation is obtained, which describes a mapping between the corresponding intensity spaces and subsequently adapts the image properties of the newly acquired series to a given standard. As the proposed intensity standardization is based on the probability density functions of the data sets only, it is independent of spatial coherence or prior segmentations of the reference and current images. Furthermore, it is not designed for a particular application, body region or acquisition protocol. The evaluation was done using two different settings. First, MRI head images were used, hence the approach can be compared to state-of-the-art methods. Second, whole body MRI scans were used. For this modality no other normalization algorithm is known in literature. The Jeffrey divergence of the pdfs of the whole body scans was reduced by 45%. All used data sets were acquired during clinical routine and thus included pathologies. PMID:19116196
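
    The joint-histogram representation at the heart of the method above can be illustrated with a minimal sketch: the code below merely builds the 2D joint intensity histogram of two co-registered weighted volumes with NumPy, the structure on which the described nonrigid histogram registration would then operate. Variable names and the bin count are assumptions, not the authors' implementation.

        import numpy as np

        def joint_histogram(t1, t2, bins=256, mask=None):
            """Build the 2D joint intensity histogram of two co-registered MR volumes.

            This is the multidimensional pdf estimate on which a histogram-space
            registration (as described above) would act; only its construction is
            sketched here.
            """
            if mask is None:
                mask = np.ones(t1.shape, dtype=bool)
            hist, t1_edges, t2_edges = np.histogram2d(
                t1[mask].ravel(), t2[mask].ravel(), bins=bins, density=True
            )
            return hist, t1_edges, t2_edges

        # Hypothetical usage with two co-registered volumes loaded elsewhere:
        # h, e1, e2 = joint_histogram(t1_volume, t2_volume)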

  3. Software workflow for the automatic tagging of medieval manuscript images (SWATI)

    NASA Astrophysics Data System (ADS)

    Chandna, Swati; Tonne, Danah; Jejkal, Thomas; Stotzka, Rainer; Krause, Celia; Vanscheidt, Philipp; Busch, Hannah; Prabhune, Ajinkya

    2015-01-01

    Digital methods, tools and algorithms are gaining in importance for the analysis of digitized manuscript collections in the arts and humanities. One example is the BMBF-funded research project "eCodicology", which aims to design, evaluate and optimize algorithms for the automatic identification of macro- and micro-structural layout features of medieval manuscripts. The main goal of this research project is to provide better insights into high-dimensional datasets of medieval manuscripts for humanities scholars. The heterogeneous nature and size of the humanities data and the need to create a database of automatically extracted, reproducible features for better statistical and visual analysis are the main challenges in designing a workflow for the arts and humanities. This paper presents a concept of a workflow for the automatic tagging of medieval manuscripts. As a starting point, the workflow uses medieval manuscripts digitized within the scope of the project "Virtual Scriptorium St. Matthias". Firstly, these digitized manuscripts are ingested into a data repository. Secondly, specific algorithms are adapted or designed for the identification of macro- and micro-structural layout elements like page size, writing space, number of lines, etc. Lastly, a statistical analysis and scientific evaluation of the manuscript groups are performed. The workflow is designed generically to process large amounts of data automatically with any desired algorithm for feature extraction. As a result, a database of objectified and reproducible features is created which helps to analyze and visualize hidden relationships of around 170,000 pages. The workflow shows the potential of automatic image analysis by enabling the processing of a single page in less than a minute. Furthermore, accuracy tests of the workflow on a small set of manuscripts with respect to features like page size and text areas show that automatic and manual analysis are comparable. The usage of a computer

  4. Standardized way for imaging of the sagittal spinal balance.

    PubMed

    Morvan, Gérard; Mathieu, Philippe; Vuillemin, Valérie; Guerini, Henri; Bossard, Philippe; Zeitoun, Frédéric; Wybier, Marc

    2011-09-01

    Nowadays, conventional or digitized teleradiography remains the most commonly used tool for the study of sagittal balance, sometimes with secondary digitization. The radiation dose delivered by this technique is high and the photographic results are often poor. Some radiographic tables allow digitized spinal radiographs to be produced by simultaneous translation of the X-ray tube and receptor. The EOS system is a new, very low dose system which gives good quality images, permits simultaneous acquisition of upright frontal and sagittal views, is able to cover the spine and the lower limbs at the same time, and allows the axial plane to be studied on 3D envelope reconstructions. In the future, this low dose system should assume a prominent role in the study of pelvispinal balance. On the lateral view, several pelvic (incidence, pelvic tilt, sacral slope) and spinal (lumbar lordosis, thoracic kyphosis, Th9 sagittal offset, C7 plumb line) parameters are drawn to define the pelvispinal balance. All are interdependent. Pelvic incidence is an individual anatomic characteristic that corresponds to the "thickness" of the pelvis and governs the spinal balance. The pelvis and spine, as a harmonious whole, can be compared to an accordion, more or less compressed or stretched. PMID:21830081

  5. Development of image quality assurance measures of the ExacTrac localization system using commercially available image evaluation software and hardware for image-guided radiotherapy.

    PubMed

    Stanley, Dennis N; Papanikolaou, Nikos; Gutiérrez, Alonso N

    2014-01-01

    Quality assurance (QA) of the image quality for image-guided localization systems is crucial to ensure accurate visualization and localization of target volumes. In this study, a methodology was developed to assess and evaluate the constancy of the high-contrast spatial resolution, dose, energy, contrast, and geometrical accuracy of the BrainLAB ExacTrac system. An in-house fixation device was constructed to hold the QCkV-1 phantom firmly and reproducibly against the face of the flat panel detectors. Two image sets per detector were acquired using ExacTrac preset console settings over a period of three months. The image sets were analyzed in PIPSpro and the following metrics were recorded: high-contrast spatial resolution (f30, f40, f50 (lp/mm)), noise, and contrast-to-noise ratio. Geometrical image accuracy was evaluated by assessing the length between two predetermined points of the QCkV-1 phantom. Dose and kVp were recorded using the Unfors RaySafe Xi R/F Detector. The kVp and dose were evaluated for the following: Cranial Standard (CS) (80 kV,80 mA,80 ms), Thorax Standard (TS) (120 kV,160 mA,160 ms), Abdomen Standard (AS) (120 kV,160 mA,130 ms), and Pelvis Standard (PS) (120 kV,160 mA,160 ms). With regard to high-contrast spatial resolution, the mean values of the f30 (lp/mm), f40 (lp/mm) and f50 (lp/mm) for the left detector were 1.39 ± 0.04, 1.24 ± 0.05, and 1.09 ± 0.04, respectively, while for the right detector they were 1.38 ± 0.04, 1.22 ± 0.05, and 1.09 ± 0.05, respectively. Mean CNRs for the left and right detectors were 148 ± 3 and 143 ± 4, respectively. For geometrical accuracy, both detectors had a measured image length of the QCkV-1 of 57.9 ± 0.5 mm. The left detector showed dose measurements of 20.4 ± 0.2 μGy (CS), 191.8 ± 0.7 μGy (TS), 154.2 ± 0.7 μGy (AS), and 192.2 ± 0.6 μGy (PS), while the right detector showed 20.3 ± 0.3 μGy (CS), 189.7 ± 0.8 μGy (TS), 151.0 ± 0.7 μGy (AS), and 189.7 ± 0.8 μGy (PS), respectively. For X
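
    As an illustration of one of the constancy metrics above, the sketch below computes a contrast-to-noise ratio from two rectangular regions of interest in a flat-panel QA image. The ROI coordinates and the exact CNR definition used by PIPSpro are not specified in the record, so this is a generic formulation only.

        import numpy as np

        def contrast_to_noise(image, signal_roi, background_roi):
            """Generic CNR estimate from two ROIs given as (row_slice, col_slice) tuples.

            CNR = |mean_signal - mean_background| / std_background. Commercial QA
            packages may use a different definition; this is illustrative only.
            """
            signal = image[signal_roi]
            background = image[background_roi]
            return abs(signal.mean() - background.mean()) / background.std()

        # Hypothetical usage on a phantom image loaded as a NumPy array:
        # cnr = contrast_to_noise(img, (slice(100, 150), slice(100, 150)),
        #                              (slice(200, 250), slice(200, 250)))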

  6. NiftyFit: a Software Package for Multi-parametric Model-Fitting of 4D Magnetic Resonance Imaging Data.

    PubMed

    Melbourne, Andrew; Toussaint, Nicolas; Owen, David; Simpson, Ivor; Anthopoulos, Thanasis; De Vita, Enrico; Atkinson, David; Ourselin, Sebastien

    2016-07-01

    Multi-modal, multi-parametric Magnetic Resonance (MR) Imaging is becoming an increasingly sophisticated tool for neuroimaging. The relationships between parameters estimated from different individual MR modalities have the potential to transform our understanding of brain function, structure, development and disease. This article describes a new software package for such multi-contrast Magnetic Resonance Imaging that provides a unified model-fitting framework. We describe model-fitting functionality for Arterial Spin Labeled MRI, T1 Relaxometry, T2 relaxometry and Diffusion Weighted imaging, providing command line documentation to generate the figures in the manuscript. Software and data (using the nifti file format) used in this article are simultaneously provided for download. We also present some extended applications of the joint model fitting framework applied to diffusion weighted imaging and T2 relaxometry, in order to both improve parameter estimation in these models and generate new parameters that link different MR modalities. NiftyFit is intended as a clear and open-source educational release so that the user may adapt and develop their own functionality as they require. PMID:26972806

  7. [Attempt to objectify of coronary vessels course variability on the standard arteriograms by using original image processing algorithm].

    PubMed

    Syrycki, Marek; Stachurska, Aneta; Mysiak, Andrzej; Kacała, Ryszard

    2014-01-01

    The aim of this paper was to analyze standard angiograms of the left coronary artery in order to produce a uniform mathematical description of the course of the coronary branches (both proximal and distal). The changes the coronary branches undergo depending on the phase of the cardiac cycle (diastole, isovolumic systole and tonic systole) were examined as well. The examined material consisted of sequences of standard angiograms of the left coronary artery (LCA) obtained from 10 patients (5 male and 5 female) undergoing the standard diagnostic procedure in the course of suspected unstable cardiac ischemia. The coronarograms were acquired with the INNOVA 2000 GE digital angiography system. The average age of the patients was 51 years. The method was based on an original image processing algorithm allowing automatic, real-time vessel edge detection and mathematical description of the vessel course. The ImageJ software, a public-domain tool from the US National Institutes of Health, was used for image analysis, and Statistica for Windows version 5.5 for statistical analysis. The results of the examined dependences, and the polynomial equations describing them mathematically, were presented in diagrams. Among the examined parameters, the Feret diameter, area and perimeter of the vessel outlines (of both proximal and distal branches) were the most reliable. Their changes in relation to the phase of the cardiac cycle were very close to the level of statistical significance. In conclusion, the performed analysis makes it possible to objectify the description of coronary vessel course and variability. It also makes it possible to identify abnormal vessel outlines that may be suspected of structural disorders even in the absence of significant coronary stenosis. PMID:25782214

  8. Evaluation of three methods for retrospective correction of vignetting on medical microscopy images utilizing two open source software tools.

    PubMed

    Babaloukas, Georgios; Tentolouris, Nicholas; Liatis, Stavros; Sklavounou, Alexandra; Perrea, Despoina

    2011-12-01

    Correction of vignetting on images obtained by a digital camera mounted on a microscope is essential before applying image analysis. The aim of this study is to evaluate three methods for retrospective correction of vignetting on medical microscopy images and compare them with a prospective correction method. One digital image from each of four different tissues was used, and a vignetting effect was applied to each of these images. The resulting vignetted image was replicated four times, and in each replica a different method for vignetting correction was applied with the Fiji and GIMP software tools. The highest peak signal-to-noise ratio from the comparison of each method to the original image was obtained from the prospective method in all tissues. The morphological filtering method provided the highest peak signal-to-noise ratio value amongst the retrospective methods. The prospective method is suggested as the method of choice for correction of vignetting; if it is not applicable, morphological filtering may be suggested as the alternative retrospective method. PMID:21950542
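
    The morphological-filtering idea that performed best among the retrospective methods can be sketched roughly as follows: estimate the slowly varying illumination field with a large grayscale opening and divide it out, flat-field style. This is a generic sketch using SciPy with illustrative parameter values, not the exact Fiji/GIMP workflow evaluated in the study.

        import numpy as np
        from scipy.ndimage import grey_opening, gaussian_filter

        def correct_vignetting(image, structure_size=51):
            """Rough retrospective vignetting correction by morphological filtering.

            A large grayscale opening suppresses small foreground structures and
            leaves an estimate of the smooth illumination field, which is then
            blurred and divided out. Parameter values are illustrative only.
            """
            img = image.astype(np.float64)
            background = grey_opening(img, size=(structure_size, structure_size))
            background = gaussian_filter(background, sigma=structure_size / 4)
            background /= background.mean()          # preserve overall brightness
            return img / np.maximum(background, 1e-6)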

  9. Standardization of size, shape and internal structure of spinal cord images: comparison of three transformation methods.

    PubMed

    Fujiki, Yasuhisa; Yokota, Shigefumi; Okada, Yasumasa; Oku, Yoshitaka; Tamura, Yoshiyasu; Ishiguro, Makio; Miwakeichi, Fumikazu

    2013-01-01

    Functional fluorescence imaging has been widely applied to analyze spatio-temporal patterns of cellular dynamics in the brain and spinal cord. However, it is difficult to integrate spatial information obtained from imaging data in specific regions of interest across multiple samples, due to large variability in the size, shape and internal structure of samples. To solve this problem, we attempted to standardize transversely sectioned spinal cord images focusing on the laminar structure in the gray matter. We employed three standardization methods, the affine transformation (AT), the angle-dependent transformation (ADT) and the combination of these two methods (AT+ADT). The ADT is a novel non-linear transformation method developed in this study to adjust an individual image onto the template image in the polar coordinate system. We next compared the accuracy of these three standardization methods. We evaluated two indices, i.e., the spatial distribution of pixels that are not categorized to any layer and the error ratio by the leave-one-out cross validation method. In this study, we used neuron-specific marker (NeuN)-stained histological images of transversely sectioned cervical spinal cord slices (21 images obtained from 4 rats) to create the standard atlas and also to serve for benchmark tests. We found that the AT+ADT outperformed other two methods, though the accuracy of each method varied depending on the layer. This novel image standardization technique would be applicable to optical recording such as voltage-sensitive dye imaging, and will enable statistical evaluations of neural activation across multiple samples. PMID:24223702
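
    A rough sketch of the polar-coordinate resampling on which an angle-dependent transformation can be built is given below; the section center and sampling grid are assumptions, and the per-angle radial rescaling toward a template (the actual ADT step) is not shown.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def to_polar(image, center, n_radii=200, n_angles=360):
            """Resample a transverse section into polar coordinates (r, theta).

            In this representation an angle-dependent transformation reduces to a
            per-angle 1D rescaling of each radial profile toward the template.
            """
            cy, cx = center
            max_r = min(cy, cx, image.shape[0] - 1 - cy, image.shape[1] - 1 - cx)
            radii = np.linspace(0, max_r, n_radii)
            angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
            r, a = np.meshgrid(radii, angles, indexing="ij")
            rows = cy + r * np.sin(a)
            cols = cx + r * np.cos(a)
            return map_coordinates(image, [rows, cols], order=1)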

  10. Standardized quantitative measurements of wrist cartilage in healthy humans using 3T magnetic resonance imaging

    PubMed Central

    Zink, Jean-Vincent; Souteyrand, Philippe; Guis, Sandrine; Chagnaud, Christophe; Fur, Yann Le; Militianu, Daniela; Mattei, Jean-Pierre; Rozenbaum, Michael; Rosner, Itzhak; Guye, Maxime; Bernard, Monique; Bendahan, David

    2015-01-01

    AIM: To quantify the wrist cartilage cross-sectional area in humans from a 3D magnetic resonance imaging (MRI) dataset and to assess the corresponding reproducibility. METHODS: The study was conducted in 14 healthy volunteers (6 females and 8 males) between 30 and 58 years old and devoid of articular pain. Subjects were asked to lie down in the supine position with the right hand positioned above the pelvic region on top of a home-built rigid platform attached to the scanner bed. The wrist was wrapped with a flexible surface coil. MRI investigations were performed at 3T (Verio-Siemens) using volume interpolated breath hold examination (VIBE) and dual echo steady state (DESS) MRI sequences. Cartilage cross sectional area (CSA) was measured on a slice of interest selected from a 3D dataset of the entire carpus and metacarpal-phalangeal areas on the basis of anatomical criteria using conventional image processing radiology software. Cartilage cross-sectional areas between opposite bones in the carpal region were manually selected and quantified using a thresholding method. RESULTS: Cartilage CSA measurements performed on a selected predefined slice were 292.4 ± 39 mm2 using the VIBE sequence and slightly lower, 270.4 ± 50.6 mm2, with the DESS sequence. The inter (14.1%) and intra (2.4%) subject variability was similar for both MRI methods. The coefficients of variation computed for the repeated measurements were also comparable for the VIBE (2.4%) and the DESS (4.8%) sequences. The carpus length averaged over the group was 37.5 ± 2.8 mm with a 7.45% between-subjects coefficient of variation. Of note, wrist cartilage CSA measured with either the VIBE or the DESS sequences was linearly related to the carpal bone length. The variability between subjects was significantly reduced to 8.4% when the CSA was normalized with respect to the carpal bone length. CONCLUSION: The ratio between wrist cartilage CSA and carpal bone length is a highly reproducible standardized

  11. Filtering Chromatic Aberration for Wide Acceptance Angle Electrostatic Lenses II--Experimental Evaluation and Software-Based Imaging Energy Analyzer.

    PubMed

    Fazekas, Ádám; Daimon, Hiroshi; Matsuda, Hiroyuki; Tóth, László

    2016-03-01

    Here, the experimental results of the method of filtering the effect of chromatic aberration for a wide acceptance angle electrostatic lens-based system are described. This method can eliminate the effect of chromatic aberration from the images of a measured spectral image sequence by determining and removing the effect of higher and lower kinetic energy electrons on each different energy image, which leads to significant improvement of image and spectral quality. The method is based on the numerical solution of a large system of linear equations and is equivalent to a multivariate, strongly nonlinear deconvolution method. A matrix whose elements describe the strongly nonlinear chromatic aberration-related transmission function of the lens system acts on the vector of the ordered pixels of the distortion-free spectral image sequence, and produces the vector of the ordered pixels of the measured spectral image sequence. Since the method can be applied not only to 2D real- and k-space diffraction images, but also along a third dimension of the image sequence, that is, along the optical axis or, in the 3D parameter space, the energy axis, it functions as a software-based imaging energy analyzer (SBIEA). It can also be applied in the case of light or other types of optics for different optical aberrations and distortions. In the case of electron optics, the SBIEA method makes spectral imaging possible without the application of any other energy filter. Notably, the method also significantly reduces the disturbing background in the presently investigated case of reflection electron energy loss spectra. It eliminates instrumental effects and makes it possible to measure the real physical processes more accurately. PMID:26863662

  12. Astronomy Software

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Software Bisque's TheSky, SkyPro and Remote Astronomy Software incorporate technology developed for the Hubble Space Telescope. TheSky and SkyPro work together to orchestrate locating, identifying and acquiring images of deep sky objects. With all three systems, the user can directly control computer-driven telescopes and charge coupled device (CCD) cameras through serial ports. Through the systems, astronomers and students can remotely operate a telescope at the Mount Wilson Observatory Institute.

  13. SU-E-J-104: Evaluation of Accuracy for Various Deformable Image Registrations with Virtual Deformation QA Software

    SciTech Connect

    Han, S; Kim, K; Kim, M; Jung, H; Ji, Y; Choi, S; Park, S

    2015-06-15

    Purpose: The accuracy of deformable image registration (DIR) has a significant dosimetric impact in radiation treatment planning. We evaluated the accuracy of various DIR algorithms using virtual deformation QA software (ImSimQA, Oncology System Limited, UK). Methods: The reference image (Iref) and volume (Vref) were first generated with the ImSimQA software. We deformed Iref by axial movement of a deformation point and deformed Vref according to the type of deformation: deformation 1 increased Vref (relaxation) and deformation 2 decreased Vref (contraction). The deformed image (Idef) and volume (Vdef) were then inversely deformed back to Iref and Vref using the DIR algorithms, yielding the restored image (Iid) and volume (Vid). The DIR algorithms were the optical flow (HS, IOF) and demons (MD, FD) algorithms of DIRART. The image similarity between Iref and Iid was calculated with Normalized Mutual Information (NMI) and Normalized Cross Correlation (NCC). The Dice Similarity Coefficient (DSC) was used to evaluate volume similarity. Results: When the moving distance of the deformation point was 4 mm, the NMI was above 1.81 and the NCC was above 0.99 for all DIR algorithms. As the degree of deformation increased, the image similarity decreased. When Vref increased or decreased by about 12%, the difference between Vref and Vid was within ±5% regardless of the type of deformation. The DSC was above 0.95 in deformation 1 except for the MD algorithm. In the case of deformation 2, the DSC was above 0.95 for all DIR algorithms. Conclusion: Idef and Vdef were not completely restored to Iref and Vref, and the accuracy of the DIR algorithms differed depending on the degree of deformation. Hence, the performance of DIR algorithms should be verified for the desired applications.
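
    The three similarity measures used in this evaluation can be sketched compactly; the histogram bin count and the particular NMI normalization, (H(A)+H(B))/H(A,B), are assumptions (with that convention an NMI above 1.81 is plausible, since values range from 1 to 2), so results are not directly comparable across tools.

        import numpy as np

        def ncc(a, b):
            """Normalized cross-correlation of two images."""
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            return float((a * b).mean())

        def nmi(a, b, bins=64):
            """Normalized mutual information, here (H(A)+H(B))/H(A,B)."""
            hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = hist / hist.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            def entropy(p):
                p = p[p > 0]
                return -np.sum(p * np.log(p))
            return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

        def dsc(mask_a, mask_b):
            """Dice similarity coefficient of two binary masks."""
            intersection = np.logical_and(mask_a, mask_b).sum()
            return 2.0 * intersection / (mask_a.sum() + mask_b.sum())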

  14. Mission planning for Shuttle Imaging Radar-C (SIR-C) with a real-time interactive planning software

    NASA Astrophysics Data System (ADS)

    Potts, Su K.

    1993-03-01

    The Shuttle Imaging Radar-C (SIR-C) mission will operate from the payload bay of the space shuttle for 8 days, gathering Synthetic Aperture Radar (SAR) data over specific sites on the Earth. The short duration of the mission and the requirement for real-time planning offer challenges in mission planning and in the design of the Planning and Analysis Subsystem (PAS). The PAS generates shuttle ephemerides and mission planning data and provides an interactive real-time tool for quick mission replanning. It offers a multi-user and multiprocessing environment, and it is able to keep multiple versions of the mission timeline data while maintaining data integrity and security. Its flexible design allows a single software system to provide different menu options based on the user's operational function, and makes it easy to tailor the software for other Earth orbiting missions.

  15. Mission planning for Shuttle Imaging Radar-C (SIR-C) with a real-time interactive planning software

    NASA Technical Reports Server (NTRS)

    Potts, Su K.

    1993-01-01

    The Shuttle Imaging Radar-C (SIR-C) mission will operate from the payload bay of the space shuttle for 8 days, gathering Synthetic Aperture Radar (SAR) data over specific sites on the Earth. The short duration of the mission and the requirement for real-time planning offer challenges in mission planning and in the design of the Planning and Analysis Subsystem (PAS). The PAS generates shuttle ephemerides and mission planning data and provides an interactive real-time tool for quick mission replanning. It offers a multi-user and multiprocessing environment, and it is able to keep multiple versions of the mission timeline data while maintaining data integrity and security. Its flexible design allows a single software system to provide different menu options based on the user's operational function, and makes it easy to tailor the software for other Earth orbiting missions.

  16. Evaluation of the Image-Pro Plus 4.5 software for automatic counting of labeled nuclei by PCNA immunohistochemistry.

    PubMed

    Francisco, Jairo Silva; Moraes, Heleno Pinto de; Dias, Eliane Pedra

    2004-01-01

    The objective of this study was to create and evaluate a routine (macro) using Image-Pro Plus 4.5 software (Media Cybernetics, Silver Spring, USA) for automatic counting of labeled nuclei by proliferating cell nuclear antigen (PCNA) immunohistochemistry. A total of 154 digital color images were obtained from eleven sections of reticular oral lichen planus stained by PCNA immunohistochemistry. Mean density (gray-level), red density, green density, blue density, area, minor axis, perimeter rate and roundness were parameters used for PCNA labeled nuclei discrimination, followed by their outlined presentation and counting in each image by the macro. Mean density and area thresholds were automatically defined based, respectively, on mean density and mean area of PCNA labeled nuclei in the assessed image. The reference method consisted in visual counting of manually outlined labeled nuclei. Statistical analysis of macro results versus reference countings showed a very significant correlation (rs = 0.964, p < 0.001) for general results and a high level (89.8 +/- 3.8%) of correctly counted labeled nuclei. We conclude that the main parameters associated with a high correlation between macro and reference results were mean density (gray-level) and area thresholds based on image profiles; and that Image-Pro Plus 4.5 using a routine with automatic definition of mean density and area thresholds can be considered a valid alternative to visual counting of PCNA labeled nuclei. PMID:15311310
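
    The core of such a counting macro, thresholding on intensity and filtering connected components by area, can be sketched in a few lines with scikit-image; the threshold arguments below stand in for the image-profile-based mean-density and area thresholds described above and are therefore assumptions.

        import numpy as np
        from skimage.measure import label, regionprops

        def count_labeled_nuclei(gray, density_threshold, min_area, max_area):
            """Count immunolabeled nuclei by intensity thresholding plus area filtering.

            Labeled nuclei are assumed darker than `density_threshold` in the
            grayscale image; `min_area` and `max_area` discard debris and clumps.
            """
            binary = gray < density_threshold
            labeled = label(binary)
            regions = [r for r in regionprops(labeled)
                       if min_area <= r.area <= max_area]
            return len(regions)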

  17. Software Based Supernova Recognition

    NASA Astrophysics Data System (ADS)

    Walters, Stephen M.

    2014-05-01

    This paper describes software for detecting Supernova (SN) in images. The software can operate in real-time to discover SN while data is being collected so the instrumentation can immediately be re-tasked to perform spectroscopy or photometry of a discovery. Because the instrumentation captures two images per minute, the real-time budget is constrained to 30 seconds per target, a challenging goal. Using a set of two to four images, the program creates a "Reference" (REF) image and a "New" (NEW) image where all images are used in both NEW and REF but any SN survives the combination process only in the NEW image. This process produces good quality images having similar noise characteristics but without artifacts that might be interpreted as SN. The images are then adjusted for seeing and brightness differences using a variant of Tomaney and Crotts' method of Point Spread Function (PSF) matching, after which REF is subtracted from NEW to produce a Difference (DIF) image. A Classifier is then trained on a grid of artificial SN to estimate the statistical properties of four attributes and used in a process to mask false positives that can be clearly identified as such. Further training to avoid any remaining false positives sets the range, in standard deviations for each attribute, that the Classifier will accept as a valid SN. This training enables the Classifier to discriminate between SN and most subtraction residue. Lastly, the DIF image is scanned and measured by the Classifier to find locations where all four properties fall within their acceptance ranges. If multiple locations are found, the one best conforming to the training estimates is chosen. This location is then declared as a Candidate SN, the instrumentation re-tasked and the operator notified.
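
    A heavily simplified sketch of the REF/NEW differencing and candidate flagging steps is shown below; it omits the PSF matching that the described pipeline performs before subtraction, and the thresholds and helper names are illustrative assumptions.

        import numpy as np

        def difference_image(new, ref):
            """Scale REF to NEW by a robust flux ratio and subtract.

            The real pipeline also matches the point spread functions of the two
            images before subtracting; that step is omitted in this sketch.
            """
            scale = np.median(new) / np.median(ref)
            return new - scale * ref

        def candidate_pixels(diff, n_sigma=5.0):
            """Flag pixels deviating by more than n_sigma robust standard deviations."""
            sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))  # MAD estimate
            return np.argwhere(diff > n_sigma * sigma)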

  18. Robotic 3D scanner as an alternative to standard modalities of medical imaging.

    PubMed

    Chromy, Adam; Zalud, Ludek

    2014-01-01

    There are special medical cases where standard medical imaging modalities offer sufficient results, but not in an optimal way: the desired results are produced with unnecessarily high expense, with redundant information, or with needless demands on the patient. This paper deals with one such case, in which the only information needed for the examination is the body surface; imaging of the body interior is unnecessary. A new specialized medical imaging device was developed for this situation. In the Introduction section, an analysis of currently used medical imaging modalities is presented, which shows that no available imaging device is well suited for these purposes. In the next section, the development of the new specialized medical imaging device is presented, and its principles and functions are described. Then, the parameters of the new device are compared with those of present devices. It brings significant advantages compared to present imaging systems. PMID:25694857

  19. Integration of bio- and geoscience data with the ODM2 standards and software ecosystem for the CZOData and BiG CZ Data projects

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Mayorga, E.; Horsburgh, J. S.; Lehnert, K. A.; Zaslavsky, I.

    2015-12-01

    We have developed a family of solutions to the challenges of integrating diverse data from the biological and geological (BiG) disciplines for Critical Zone (CZ) science. These standards and software solutions have been developed around the new Observations Data Model version 2.0 (ODM2, http://ODM2.org), which was designed as a profile of the Open Geospatial Consortium's (OGC) Observations and Measurements (O&M) standard. The ODM2 standards and software ecosystem has at its core an information model that balances specificity with flexibility to powerfully and equally serve the needs of multiple dataset types, from multivariate sensor-generated time series to geochemical measurements of specimen hierarchies to multi-dimensional spectral data to biodiversity observations. ODM2 has been adopted as the information model guiding the next generation of cyberinfrastructure development for the Interdisciplinary Earth Data Alliance (http://www.iedadata.org/) and the CUAHSI Water Data Center (https://www.cuahsi.org/wdc). Here we present several components of the ODM2 standards and software ecosystem that were developed specifically to help CZ scientists and their data managers to share and manage data through the national Critical Zone Observatory data integration project (CZOData, http://criticalzone.org/national/data/) and the bio integration with geo for critical zone science data project (BiG CZ Data, http://bigcz.org/). These include the ODM2 Controlled Vocabulary system (http://vocabulary.odm2.org), the YAML Observation Data Archive & exchange (YODA) File Format (https://github.com/ODM2/YODA-File) and the BiG CZ Toolbox, which will combine easy-to-install ODM2 databases (https://github.com/ODM2/ODM2) with a variety of graphical software packages for data management such as ODMTools (https://github.com/ODM2/ODMToolsPython) and the ODM2 Streaming Data Loader (https://github.com/ODM2/ODM2StreamingDataLoader).

  20. Prostate tumour volumes: evaluation of the agreement between magnetic resonance imaging and histology using novel co-registration software

    PubMed Central

    Le Nobin, Julien; Orczyk, Clément; Deng, Fang-Ming; Melamed, Jonathan; Rusinek, Henry; Taneja, Samir S.; Rosenkrantz, Andrew B.

    2016-01-01

    Objective To evaluate the agreement between prostate tumour volume determined using multiparametric magnetic resonance imaging (MRI) and that determined by histological assessment, using detailed software-assisted co-registration. Materials and Methods A total of 37 patients who underwent 3T multiparametric MRI (T2-weighted imaging [T2WI], diffusion-weighted imaging [DWI]/apparent diffusion coefficient [ADC], dynamic contrast-enhanced [DCE] imaging) were included. A radiologist traced the borders of suspicious lesions on T2WI and ADC and assigned a suspicion score of between 2 and 5, while a uropathologist traced the borders of tumours on histopathological photographs. Software was used to co-register MRI and three-dimensional digital reconstructions of radical prostatectomy specimens and to compute imaging and histopathological volumes. Agreement in volumes between MRI and histology was assessed using Bland–Altman plots and stratified by tumour characteristics. Results Among 50 tumours, the mean differences (95% limits of agreement) in MRI relative to histology were −32% (−128 to +65%) on T2WI and −47% (−143 to +49%) on ADC. For all tumour subsets, volume underestimation was more marked on ADC maps (mean difference ranging from −57 to −16%) than on T2WI (mean difference ranging from −45 to +2%). The 95% limits of agreement were wide for all comparisons, with the lower 95% limit ranging between −77 and −143% across assessments. Volume underestimation was more marked for tumours with a Gleason score ≥7 or a MRI suspicion score 4 or 5. Conclusion Volume estimates of prostate cancer using MRI tended to substantially underestimate histopathological volumes, with a wide variability in extent of underestimation across cases. These findings have implications for efforts to use MRI to guide risk assessment. PMID:24673731
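
    The agreement statistics quoted above (mean percentage difference with 95% limits of agreement) follow the usual Bland-Altman construction; a minimal sketch, assuming paired volumes and differences expressed relative to histology as in the study:

        import numpy as np

        def bland_altman_percent(mri_volumes, histology_volumes):
            """Mean percentage difference and 95% limits of agreement (Bland-Altman).

            Differences are expressed relative to the histological volume; the
            input arrays are hypothetical paired measurements.
            """
            mri = np.asarray(mri_volumes, dtype=float)
            histology = np.asarray(histology_volumes, dtype=float)
            pct_diff = 100.0 * (mri - histology) / histology
            mean_diff = pct_diff.mean()
            sd = pct_diff.std(ddof=1)
            return mean_diff, (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)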

  1. Review and Implementation of the Emerging CCSDS Recommended Standard for Multispectral and Hyperspectral Lossless Image Coding

    NASA Technical Reports Server (NTRS)

    Sanchez, Jose Enrique; Auge, Estanislau; Santalo, Josep; Blanes, Ian; Serra-Sagrista, Joan; Kiely, Aaron

    2011-01-01

    A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor, and is adapted to better overcome issues of onboard scenarios. In this paper, we present a review of the state of the art in this field, and provide an experimental comparison of the coding performance of the emerging standard in relation to other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.

  2. Reduction of blocking effects for the JPEG baseline image compression standard

    NASA Technical Reports Server (NTRS)

    Zweigle, Gregary C.; Bamberger, Roberto H.

    1992-01-01

    Transform coding has been chosen for still image compression in the Joint Photographic Experts Group (JPEG) standard. Although transform coding performs superior to many other image compression methods and has fast algorithms for implementation, it is limited by a blocking effect at low bit rates. The blocking effect is inherent in all nonoverlapping transforms. This paper presents a technique for reducing blocking while remaining compatible with the JPEG standard. Simulations show that the system results in subjective performance improvements, sacrificing only a marginal increase in bit rate.
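
    For context, a naive post-processing illustration of blocking reduction is sketched below: pixels on either side of each 8x8 block boundary of the decoded image are blended toward their average. This is not the technique proposed in the record above, which operates within the coding pipeline; parameters are illustrative.

        import numpy as np

        def deblock(decoded, block=8, strength=0.5):
            """Naive deblocking: smooth across 8x8 block boundaries of a decoded image."""
            out = decoded.astype(np.float64).copy()
            for b in range(block, out.shape[1], block):      # vertical boundaries
                avg = 0.5 * (out[:, b - 1] + out[:, b])
                out[:, b - 1] += strength * (avg - out[:, b - 1])
                out[:, b] += strength * (avg - out[:, b])
            for b in range(block, out.shape[0], block):      # horizontal boundaries
                avg = 0.5 * (out[b - 1, :] + out[b, :])
                out[b - 1, :] += strength * (avg - out[b - 1, :])
                out[b, :] += strength * (avg - out[b, :])
            return out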

  3. Analysis of a marine phototrophic biofilm by confocal laser scanning microscopy using the new image quantification software PHLIP

    PubMed Central

    Mueller, Lukas N; de Brouwer, Jody FC; Almeida, Jonas S; Stal, Lucas J; Xavier, João B

    2006-01-01

    Background Confocal laser scanning microscopy (CLSM) is the method of choice to study interfacial biofilms and acquires time-resolved three-dimensional data of the biofilm structure. CLSM can be used in a multi-channel mode where the different channels map individual biofilm components. This communication presents a novel image quantification tool, PHLIP, for the quantitative analysis of large amounts of multichannel CLSM data in an automated way. PHLIP can be freely downloaded from Results PHLIP is an open source public license Matlab toolbox that includes functions for CLSM imaging data handling and ten image analysis operations describing various aspects of biofilm morphology. The use of PHLIP is here demonstrated by a study of the development of a natural marine phototrophic biofilm. It is shown how the examination of the individual biofilm components using the multi-channel capability of PHLIP allowed the description of the dynamic spatial and temporal separation of diatoms, bacteria and organic and inorganic matter during the shift from a bacteria-dominated to a diatom-dominated phototrophic biofilm. Reflection images and weight measurements complementing the PHLIP analyses suggest that a large part of the biofilm mass consisted of inorganic mineral material. Conclusion The presented case study reveals new insight into the temporal development of a phototrophic biofilm, where multi-channel imaging allowed the dynamics of the individual biofilm components to be monitored in parallel over time. This application demonstrates the power of multi-channel CLSM biofilm image analysis and the importance of PHLIP for the scientific community as a flexible and extendable image analysis platform for automated image processing. PMID:16412253

  4. ImaSim, a software tool for basic education of medical x-ray imaging in radiotherapy and radiology

    NASA Astrophysics Data System (ADS)

    Landry, Guillaume; deBlois, François; Verhaegen, Frank

    2013-11-01

    Introduction: X-ray imaging is an important part of medicine and plays a crucial role in radiotherapy. Education in this field is mostly limited to textbook teaching due to equipment restrictions. A novel simulation tool, ImaSim, for teaching the fundamentals of the x-ray imaging process based on ray-tracing is presented in this work. ImaSim is used interactively via a graphical user interface (GUI). Materials and methods: The software package covers the main x-ray based medical modalities: planar kilovoltage (kV), planar (portal) megavoltage (MV), fan beam computed tomography (CT) and cone beam CT (CBCT) imaging. The user can modify the photon source, object to be imaged and imaging setup with three-dimensional editors. Objects are currently obtained by combining blocks with variable shapes. The imaging of three-dimensional voxelized geometries is currently not implemented, but can be added in a later release. The program follows a ray-tracing approach, ignoring photon scatter in its current implementation. Simulations of a phantom CT scan were generated in ImaSim and were compared to measured data in terms of CT number accuracy. Spatial variations in the photon fluence and mean energy from an x-ray tube caused by the heel effect were estimated from ImaSim and Monte Carlo simulations and compared. Results: In this paper we describe ImaSim and provide two examples of its capabilities. CT numbers were found to agree within 36 Hounsfield Units (HU) for bone, which corresponds to a 2% attenuation coefficient difference. ImaSim reproduced the heel effect reasonably well when compared to Monte Carlo simulations. Discussion: An x-ray imaging simulation tool is made available for teaching and research purposes. ImaSim provides a means to facilitate the teaching of medical x-ray imaging.
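
    Two relations used implicitly in the record above are the scatter-free ray-tracing attenuation model and the CT-number definition; a minimal monoenergetic sketch with assumed variable names is given below.

        import numpy as np

        def transmitted_intensity(i0, mu_values, path_lengths):
            """Beer-Lambert attenuation along one ray through piecewise-uniform media:
            I = I0 * exp(-sum(mu_i * l_i)); photon scatter is ignored."""
            return i0 * np.exp(-np.dot(mu_values, path_lengths))

        def hounsfield(mu, mu_water):
            """CT number in Hounsfield units, HU = 1000 * (mu - mu_water) / mu_water.
            A 36 HU discrepancy for bone is consistent with the roughly 2% attenuation
            coefficient difference quoted above."""
            return 1000.0 * (mu - mu_water) / mu_water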

  5. JJ1017 image examination order codes: standardized codes supplementary to DICOM for imaging modality, region, and direction

    NASA Astrophysics Data System (ADS)

    Kimura, Michio; Kuranishi, Makoto; Sukenobu, Yoshiharu; Watanabe, Hiroki; Nakajima, Takashi; Morimura, Shinya; Kabata, Shun

    2002-05-01

    The DICOM standard includes non-image data information such as image study ordering data and performed procedure data, which are used for sharing information between HIS/RIS/PACS/modalities and which are essential for IHE. In order to bring such parts of the DICOM standard into force in Japan, a joint committee of JIRA and JAHIS (vendor associations) established the JJ1017 management guideline. It specifies, for example, which items are legally required in Japan while remaining optional in the DICOM standard. Then, what should be used for the examination type, region, and direction codes? Our investigation revealed that the DICOM tables do not include items that are sufficiently detailed for use in Japan. This is because radiology departments (radiologists) in the US exercise greater discretion in image examination than in Japan, and the contents of orders from requesting physicians do not include the extra details used in Japan. Therefore, we have generated JJ1017 codes for these three code types based on the JJ1017 guidelines. The stem part of the JJ1017 code partially employs the DICOM codes in order to remain in line with the DICOM standard. JJ1017 codes are to be included not only in the IHE-J specifications but also in Ministry recommendations for health data exchange.

  6. PCID and ASPIRE 2.0 - The Next Generation of AMOS Image Processing Software

    NASA Astrophysics Data System (ADS)

    Matson, C.; Soo Hoo, T.; Murphy, M.; Calef, B.; Beckner, C.; You, S.

    One of the missions of the Air Force Maui Optical and Supercomputing (AMOS) site is to generate high-resolution images of space objects using the Air Force telescopes located on Haleakala. Because atmospheric turbulence greatly reduces the resolution of space object images collected with ground-based telescopes, methods for overcoming atmospheric blurring are necessary. One such method is the use of adaptive optics systems to measure and compensate for atmospheric blurring in real time. A second method is to use image restoration algorithms on one or more short-exposure images of the space object under consideration. At AMOS, both methods are used routinely. In the case of adaptive optics, rarely can all atmospheric turbulence effects be removed from the imagery, so image restoration algorithms are useful even for adaptive-optics-corrected images. Historically, the bispectrum algorithm has been the primary image restoration algorithm used at AMOS. It has the advantages of being extremely fast (processing times of less than one second) and insensitive to atmospheric phase distortions. In addition, multi-frame blind deconvolution (MFBD) algorithms have also been used for image restoration. It has been observed empirically and with the use of computer simulation studies that MFBD algorithms produce higher-resolution image restorations than does the bispectrum algorithm. MFBD algorithms also do not need separate measurements of a star in order to work. However, in the past, MFBD algorithms have been factors of one hundred or more slower than the bispectrum algorithm, limiting their use to non-time-critical image restorations. Recently, with the financial support of AMOS and the High-Performance Computing Modernization Office, an MFBD algorithm called Physically-Constrained Iterative Deconvolution (PCID) has been efficiently parallelized and is able to produce image restorations in only a few seconds. In addition, with the financial support of AFOSR, it has been shown

  7. Software-based turbulence mitigation of short exposure image data with motion detection and background segmentation

    NASA Astrophysics Data System (ADS)

    Huebner, Claudia S.

    2011-11-01

    The degree of image degradation due to atmospheric turbulence is particularly severe when imaging over long horizontal paths, since the turbulence is strongest close to the ground. The most pronounced effects include image blurring and image dancing and, in the case of strong turbulence, image distortion as well. To mitigate these effects, a number of methods from the field of image processing have been proposed, most of which aim exclusively at the restoration of static scenes. But there is also an increasing interest in advancing turbulence mitigation to encompass moving objects as well. Therefore, in this paper a procedure is described that employs block-matching for the segmentation of static scene elements and moving objects, such that image restoration can be carried out for both separately. In this way motion blurring is taken into account in addition to atmospheric blurring, effectively reducing motion artefacts and improving the overall restoration result. Motion-compensated averaging with subsequent blind deconvolution is used for the actual image restoration.

  8. Software-based mitigation of image degradation due to atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Huebner, Claudia S.; Scheifling, Corinne

    2010-10-01

    Motion-Compensated Averaging (MCA) with blind deconvolution has proven successful in mitigating turbulence effects like image dancing and blurring. In this paper an image quality control according to the "Lucky Imaging" principle is combined with the MCA procedure, weighting good frames more heavily than bad ones and skipping a given percentage of extremely degraded frames entirely. To account for local isoplanatism, where image dancing causes local displacements between consecutive frames rather than only global shifts, a locally operating MCA variant with block matching, proposed in earlier work, is employed. In order to reduce the loss of detail due to normal averaging, various combinations of temporal mode, median and mean are tested as the reference image. The respective restoration results obtained by means of a weighted blind deconvolution algorithm are presented and evaluated.
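
    The "Lucky Imaging" weighting described above can be sketched as a quality-weighted temporal average over motion-compensated frames; the gradient-energy sharpness score and the percentile cut below are assumed quality criteria, not the authors' exact choices.

        import numpy as np

        def sharpness(frame):
            """Simple gradient-energy sharpness score (assumed quality metric)."""
            gy, gx = np.gradient(frame.astype(np.float64))
            return float(np.mean(gx ** 2 + gy ** 2))

        def weighted_average(frames, skip_fraction=0.2):
            """Quality-weighted temporal average over motion-compensated frames.

            The worst `skip_fraction` of frames is skipped entirely and the rest
            are weighted by their sharpness score; block-matching registration is
            assumed to have been applied beforehand.
            """
            scores = np.array([sharpness(f) for f in frames])
            cutoff = np.quantile(scores, skip_fraction)
            kept = [(f, s) for f, s in zip(frames, scores) if s >= cutoff]
            weights = np.array([s for _, s in kept])
            stack = np.stack([f for f, _ in kept]).astype(np.float64)
            return np.tensordot(weights / weights.sum(), stack, axes=1)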

  9. Leveraging Open Standards and Technologies to Search and Display Planetary Image Data

    NASA Astrophysics Data System (ADS)

    Rose, M.; Schauer, C.; Quinol, M.; Trimble, J.

    2011-12-01

    Mars and the Moon have both been visited by multiple NASA spacecraft. A large number of images and other data have been gathered by the spacecraft and are publicly available in NASA's Planetary Data System. Through a collaboration with Google, Inc., the User Centered Technologies group at NASA Ames Research Center has developed a tool for searching and browsing among images from multiple Mars and Moon missions. Development of this tool was facilitated by the use of several open technologies and standards. First, an open-source full-text search engine is used both to search place names on the target and to find images matching a geographic region. Second, the published API of the Google Earth browser plugin is used to geolocate the images on a virtual globe and allow the user to navigate on the globe to see related images. The structure of the application also employs standard protocols and services. The back-end is exposed as RESTful APIs, which could be reused by other client systems in the future. Further, the communication between the front- and back-end portions of the system utilizes open data standards including XML and KML (Keyhole Markup Language) for representation of textual and geographic data. The creation of the search index was facilitated by reuse of existing, publicly available metadata, including the Gazetteer of Planetary Nomenclature from the USGS, available in KML format. The image metadata was reused from standards-compliant archives in the Planetary Data System. The system also supports collaboration with other tools by allowing export of search results in KML, and the ability to display those results in the Google Earth desktop application. We will demonstrate the search and visualization capabilities of the system, with emphasis on how the system facilitates reuse of data and services through the adoption of open standards.
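
    Exporting search results as KML, as mentioned above, amounts to writing simple Placemark elements; a minimal hand-rolled sketch with hypothetical field names is shown below (a production system would use a KML library and the full schema).

        def results_to_kml(results):
            """Serialize a list of {'name', 'lon', 'lat', 'url'} dicts as minimal KML.

            Field names are hypothetical; the actual service exposes richer metadata.
            """
            placemarks = "".join(
                "<Placemark><name>{name}</name>"
                "<description><![CDATA[<a href=\"{url}\">image</a>]]></description>"
                "<Point><coordinates>{lon},{lat},0</coordinates></Point>"
                "</Placemark>".format(**r)
                for r in results
            )
            return ('<?xml version="1.0" encoding="UTF-8"?>'
                    '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
                    + placemarks + '</Document></kml>')

        # Hypothetical usage:
        # print(results_to_kml([{"name": "Gale Crater", "lon": 137.4, "lat": -5.4,
        #                        "url": "https://example.org/image/123"}]))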

  10. [Standards for safe data exchange in teleradiology exemplified by image and report distribution].

    PubMed

    Eichelberg, M; Riesmeier, J; Thiel, A; Jensch, P; Emmel, D; Haderer, A; Ricke, J; Stohlmann, L; Bernarding, J

    2002-02-01

    The use of telemedicine is becoming indispensable for a continuous and economical delivery of a high quality of care. However, data protection requirements have to be considered. For the selection of solutions, vendor-independent components based on standards are a prerequisite for a seamless integration into the existing, often heterogeneous, IT infrastructure. The "Internet protocol" TCP/IP and the DICOM standard with its new security extensions form the basis for an internationally standardized and accepted procedure for a secure interchange of radiological images beyond platform boundaries. PMID:11963254

  11. Updated standards and processes for accreditation of echocardiographic laboratories from The European Association of Cardiovascular Imaging.

    PubMed

    Popescu, Bogdan A; Stefanidis, Alexandros; Nihoyannopoulos, Petros; Fox, Kevin F; Ray, Simon; Cardim, Nuno; Rigo, Fausto; Badano, Luigi P; Fraser, Alan G; Pinto, Fausto; Zamorano, Jose Luis; Habib, Gilbert; Maurer, Gerald; Lancellotti, Patrizio; Andrade, Maria Joao; Donal, Erwan; Edvardsen, Thor; Varga, Albert

    2014-07-01

    Standards for echocardiographic laboratories were proposed by the European Association of Echocardiography (now the European Association of Cardiovascular Imaging) 7 years ago in order to raise standards of practice and improve the quality of care. Criteria and requirements were published at that time for transthoracic, transoesophageal, and stress echocardiography. This paper reassesses and updates the quality standards to take account of experience and the technical developments of modern echocardiographic practice. It also discusses quality control, the incentives for laboratories to apply for accreditation, the reaccreditation criteria, and the current status and future prospects of the laboratory accreditation process. PMID:24662444

  12. Role of Magnetic Resonance Imaging in Primary Rectal Cancer-Standard Protocol and Beyond.

    PubMed

    Gourtsoyianni, Sofia; Papanikolaou, Nickolas

    2016-08-01

    New-generation magnetic resonance imaging (MRI) scanners with optimal phased-array body coils have contributed to the acquisition of high-resolution T2-weighted turbo spin echo images in which visualization of anatomical details such as the mesorectal fascia and the bowel wall layers is feasible. Preoperative locoregional staging of rectal cancer with MRI, considered the standard of care nowadays, relies on these images for stratification of high-risk patients for local recurrence, patients most likely to benefit from neoadjuvant therapy, as well as patients who exhibit imaging features indicative of a high risk of metastatic disease. Functional imaging, including diffusion-weighted imaging optimized for rectal cancer and, more recently, dynamic contrast-enhanced MRI, combined with radiologists' rising level of familiarity with the assessment of reactive changes after chemoradiation treatment, has been shown to increase MRI staging accuracy after neoadjuvant treatment. Our intention is to review the already established standard protocols for primary rectal cancer and to go through potentially promising additional imaging tools. PMID:27342896

  13. Development of software for digital image processing for analysis of neuroangiogenesis

    NASA Astrophysics Data System (ADS)

    Gonzalez, M. A.; Ballarin, V. L.; Celín, A. R.; Rapacioli, M.; López-Costa, J. J.; Flores, V.

    2011-12-01

    The process of formation, growth and distribution of vessels within the developing central nervous system is difficult to analyze due to the complexity of the paths and branches within the system. The study of images of this area poses particular problems because high levels of noise, blurring and poor contrast often prevent the objects of interest from being detected correctly. The design of digital image processing algorithms suitable for this type of imagery remains a constant challenge. The aim of this work is to develop a computer tool to assist the specialist in processing these images. This paper proposes the use of morphological grayscale reconstruction and other morphological operators in order to segment the images properly. The results show that the algorithms allow a suitable segmentation of the objects of interest. Moreover, the interface developed for processing enables specialists to analyze them easily and simply.

  14. Standard resolution spectral domain optical coherence tomography in clinical ophthalmic imaging

    NASA Astrophysics Data System (ADS)

    Szkulmowska, Anna; Cyganek, Marta; Targowski, Piotr; Kowalczyk, Andrzej; Kaluzny, Jakub J.; Wojtkowski, Maciej; Fujimoto, James G.

    2005-04-01

    In this study we show a clinical application of Spectral Optical Coherence Tomography (SOCT), which operates 40 times faster than the commercial Stratus OCT instrument. Using a high-speed SOCT instrument it is possible to collect more information and increase the quality of reconstructed cross-sectional retinal images. Two generations of compact and portable clinical SOCT instruments were constructed by the Medical Physics Group at Nicolaus Copernicus University in Poland. The first SOCT instrument is a low-cost system operating with a standard 12 micrometer axial resolution, and the second is a high-resolution system using a combined superluminescent diode light source, which enables imaging with 4.8 micrometer axial resolution. Both instruments have been operated in the Ophthalmology Clinic of Collegium Medicum in Bydgoszcz. During the study we examined 44 patients with different pathologies of the retina including: Central Serous Chorioretinopathy (CSC), Choroidal Neovascularization (CNV), Pigment Epithelial Detachment (PED), Macular Hole, Epiretinal Membrane, Outer Retinal Infarction, etc. All these pathologies were first diagnosed by classical methods (such as fundus camera imaging and angiography) and then examined with the aid of the SOCT system. In this contribution we present examples of SOCT cross-sectional retinal imaging of pathologic eyes measured with standard resolution. We also compare cross-sectional images of pathology obtained by the standard and high resolution systems.

  15. Application of Technical Measures and Software in Constructing Photorealistic 3D Models of Historical Building Using Ground-Based and Aerial (UAV) Digital Images

    NASA Astrophysics Data System (ADS)

    Zarnowski, Aleksander; Banaszek, Anna; Banaszek, Sebastian

    2015-12-01

    Preparing digital documentation of historical buildings is a form of protecting cultural heritage. Recently there have been several intensive studies using non-metric digital images to construct realistic 3D models of historical buildings. Increasingly often, non-metric digital images are obtained with unmanned aerial vehicles (UAV). Technologies and methods of UAV flights are quite different from traditional photogrammetric approaches. The lack of technical guidelines for using drones inhibits the process of implementing new methods of data acquisition. This paper presents the results of experiments in the use of digital images in the construction of a photo-realistic 3D model of a historical building (Raphaelsohns' Sawmill in Olsztyn). The aim of the study at the first stage was to determine the meteorological and technical conditions for the acquisition of aerial and ground-based photographs. At the next stage, the technology of 3D modelling was developed using only ground-based or only aerial non-metric digital images. At the last stage of the study, an experiment was conducted to assess the possibility of 3D modelling with the comprehensive use of aerial (UAV) and ground-based digital photographs in terms of their labour intensity and precision of development. Data integration and automatic photo-realistic 3D construction of the models was done with Pix4Dmapper and Agisoft PhotoScan software. Analyses have shown that when certain parameters established in an experiment are kept, the process of developing the stock-taking documentation for a historical building moves from the standards of analogue to digital technology with considerably reduced cost.

  16. Cognitive Factors in the Study of Visual Images: Moving Image Recognition Standards.

    ERIC Educational Resources Information Center

    Metallinos, Nikos

    This paper argues that a completed study pertaining to the various factors involved in the proper recognition and aesthetic application of moving images (primarily television pictures) should consider: (1) the individual viewer's general self-awareness, knowledge, expertise, confidence, values, beliefs, and motivation; (2) the viewer's…

  17. TGS_FIT: Image reconstruction software for quantitative, low-resolution tomographic assays

    SciTech Connect

    Estep, R J

    1993-01-01

    We developed the computer program TGS_FIT to aid in researching the tomographic gamma scanner method of nondestructive assay. This software, written in the C programming language, implements a full Beer's Law attenuation correction in reconstructing low-resolution emission tomograms. The attenuation coefficients for the corrections are obtained by reconstructing a transmission tomogram of the same resolution. The command-driven interface, combined with (crude) simulation capabilities and command file control, allows design studies to be performed in a semi-automated manner.
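
    The core correction named in the abstract is an inversion of Beer's Law along each ray. A minimal sketch of that idea is shown below; the function name, attenuation coefficient and path length are illustrative values only and do not reproduce the TGS_FIT reconstruction itself.

```python
import numpy as np

def attenuation_corrected_counts(measured, mu, path_length_cm):
    """Invert Beer's Law, I = I0 * exp(-mu * L), to recover the
    unattenuated emission counts I0 along a single ray."""
    return measured * np.exp(mu * path_length_cm)

# Toy example: 500 counts observed through 4 cm of material with
# mu = 0.2 / cm (values are illustrative, not taken from the report).
print(attenuation_corrected_counts(500.0, 0.2, 4.0))
```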

  18. Analysis of image sharpness reproducibility on a novel engineered micro-CT scanner with variable geometry and embedded recalibration software.

    PubMed

    Panetta, D; Belcari, N; Del Guerra, A; Bartolomei, A; Salvadori, P A

    2012-04-01

    This study investigates the reproducibility of the reconstructed image sharpness, after modifications of the geometry setup, for a variable-magnification micro-CT (μCT) scanner. All the measurements were performed on a novel engineered μCT scanner for in vivo imaging of small animals (Xalt), which was recently built at the Institute of Clinical Physiology of the National Research Council (IFC-CNR, Pisa, Italy), in partnership with the University of Pisa. The Xalt scanner is equipped with integrated software for on-line geometric recalibration, which was used throughout the experiments. In order to evaluate the loss of image quality due to modifications of the geometry setup, we made 22 consecutive acquisitions, alternating the system geometry between two different setups (Large FoV - LF, and High Resolution - HR). For each acquisition, the tomographic images were reconstructed before and after the on-line geometric recalibration. For each reconstruction, the image sharpness was evaluated using two different figures of merit: (i) the percentage contrast on a small bar pattern of fixed frequency (f = 5.5 lp/mm for the LF setup and f = 10 lp/mm for the HR setup) and (ii) the image entropy. We found that, due to the small-scale mechanical uncertainty (on the order of the voxel size), a recalibration is necessary for each geometric setup after repositioning of the system's components; the resolution losses due to the lack of recalibration are worse for the HR setup (voxel size = 18.4 μm). The integrated on-line recalibration algorithm of the Xalt scanner allowed the recalibration to be performed quickly, restoring the spatial resolution of the system to the reference resolution obtained after the initial (off-line) calibration. PMID:21501966
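
    The two figures of merit quoted above are straightforward to compute. The sketch below shows one plausible formulation of each - percentage contrast as 100·(max - min)/(max + min) over a bar-pattern profile, and Shannon entropy of the grey-level histogram - using NumPy; the exact definitions used in the Xalt analysis may differ.

```python
import numpy as np

def percent_contrast(profile):
    """Percentage contrast of a bar-pattern line profile,
    100 * (max - min) / (max + min)."""
    p = np.asarray(profile, dtype=float)
    return 100.0 * (p.max() - p.min()) / (p.max() + p.min())

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram, tracked
    here as a global sharpness figure of merit."""
    hist, _ = np.histogram(np.ravel(img), bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

# Toy data standing in for a reconstructed slice and a line profile
# across a bar pattern.
slice_ = np.random.rand(128, 128)
profile = 0.5 + 0.3 * np.sign(np.sin(np.linspace(0, 20 * np.pi, 200)))
print(percent_contrast(profile), image_entropy(slice_))
```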

  19. Consensus recommendations for a standardized Brain Tumor Imaging Protocol in clinical trials.

    PubMed

    Ellingson, Benjamin M; Bendszus, Martin; Boxerman, Jerrold; Barboriak, Daniel; Erickson, Bradley J; Smits, Marion; Nelson, Sarah J; Gerstner, Elizabeth; Alexander, Brian; Goldmacher, Gregory; Wick, Wolfgang; Vogelbaum, Michael; Weller, Michael; Galanis, Evanthia; Kalpathy-Cramer, Jayashree; Shankar, Lalitha; Jacobs, Paula; Pope, Whitney B; Yang, Dewen; Chung, Caroline; Knopp, Michael V; Cha, Soonme; van den Bent, Martin J; Chang, Susan; Yung, W K Al; Cloughesy, Timothy F; Wen, Patrick Y; Gilbert, Mark R

    2015-09-01

    A recent joint meeting was held on January 30, 2014, with the US Food and Drug Administration (FDA), National Cancer Institute (NCI), clinical scientists, imaging experts, pharmaceutical and biotech companies, clinical trials cooperative groups, and patient advocate groups to discuss imaging endpoints for clinical trials in glioblastoma. This workshop developed a set of priorities and action items including the creation of a standardized MRI protocol for multicenter studies. The current document outlines consensus recommendations for a standardized Brain Tumor Imaging Protocol (BTIP), along with the scientific and practical justifications for these recommendations, resulting from a series of discussions between various experts involved in aspects of neuro-oncology neuroimaging for clinical trials. The minimum recommended sequences include: (i) parameter-matched precontrast and postcontrast inversion recovery-prepared, isotropic 3D T1-weighted gradient-recalled echo; (ii) axial 2D T2-weighted turbo spin-echo acquired after contrast injection and before postcontrast 3D T1-weighted images to control timing of images after contrast administration; (iii) precontrast, axial 2D T2-weighted fluid-attenuated inversion recovery; and (iv) precontrast, axial 2D, 3-directional diffusion-weighted images. Recommended ranges of sequence parameters are provided for both 1.5 T and 3 T MR systems. PMID:26250565

  20. Familiarity effects in the construction of facial-composite images using modern software systems.

    PubMed

    Frowd, Charlie D; Skelton, Faye C; Butt, Neelam; Hassan, Amal; Fields, Stephen; Hancock, Peter J B

    2011-12-01

    We investigate the effect of target familiarity on the construction of facial composites, as used by law enforcement to locate criminal suspects. Two popular software construction methods were investigated. Participants were shown a target face that was either familiar or unfamiliar to them and constructed a composite of it from memory using a typical 'feature' system, involving selection of individual facial features, or one of the newer 'holistic' types, involving repeated selection and breeding from arrays of whole faces. This study found that composites constructed of a familiar face were named more successfully than composites of an unfamiliar face; also, naming of composites of internal and external features was equivalent for construction of unfamiliar targets, but internal features were better named than external features for familiar targets. These findings applied to both systems, although a benefit emerged for the holistic type due to more accurate construction of internal features and evidence for a whole-face advantage. STATEMENT OF RELEVANCE: This work is of relevance to practitioners who construct facial composites with witnesses to and victims of crime, as well as to software designers seeking to improve the effectiveness of their composite systems. PMID:22103723

  1. Seismic reflection imaging of underground cavities using open-source software

    SciTech Connect

    Mellors, R J

    2011-12-20

    The Comprehensive Nuclear Test Ban Treaty (CTBT) includes provisions for an on-site inspection (OSI), which allows the use of specific techniques to detect underground anomalies including cavities and rubble zones. One permitted technique is active seismic surveying, such as seismic refraction or reflection. The purpose of this report is to conduct some simple modeling to evaluate the potential use of seismic reflection in detecting cavities and to test the use of open-source software in modeling possible scenarios. It should be noted that OSI inspections are conducted under specific constraints regarding duration and logistics. These constraints are likely to significantly impact active seismic surveying, as a seismic survey typically requires considerable equipment, effort, and expertise. For the purposes of this study, which is a first-order feasibility study, these issues will not be considered. This report provides a brief description of the seismic reflection method along with some commonly used software packages. This is followed by an outline of a simple processing stream based on a synthetic model, along with results from a set of models representing underground cavities. A set of scripts used to generate the models is presented in an appendix. We do not consider detection of underground facilities in this work, and the geologic setting used in these tests is an extremely simple one.
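
    For readers unfamiliar with reflection modeling, a convolutional synthetic seismogram is the simplest sketch of what such a processing stream starts from: a reflectivity series convolved with a source wavelet. The example below (a Ricker wavelet and illustrative reflection coefficients for a low-velocity cavity) is a generic toy model, not the scripts from the report's appendix.

```python
import numpy as np

def ricker(f0, dt, length=0.128):
    """Ricker wavelet with peak frequency f0 (Hz), sampled at dt (s)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# One-trace toy model: a low-velocity cavity embedded in rock gives a
# strong negative reflection coefficient at its top and a positive one
# at its base (values are illustrative).
dt, f0 = 0.001, 60.0                 # 1 ms sampling, 60 Hz wavelet
reflectivity = np.zeros(500)
reflectivity[150] = -0.35            # top of cavity
reflectivity[170] = +0.35            # bottom of cavity

trace = np.convolve(reflectivity, ricker(f0, dt), mode="same")
print("peak amplitude:", trace.max(), "trough:", trace.min())
```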

  2. Space Station Software Recommendations

    NASA Technical Reports Server (NTRS)

    Voigt, S. (Editor)

    1985-01-01

    Four panels of invited experts and NASA representatives focused on the following topics: software management, software development environment, languages, and software standards. Each panel deliberated in private, held two open sessions with audience participation, and developed recommendations for the NASA Space Station Program. The major thrusts of the recommendations were as follows: (1) The software management plan should establish policies, responsibilities, and decision points for software acquisition; (2) NASA should furnish a uniform modular software support environment and require its use for all space station software acquired (or developed); (3) The language Ada should be selected for space station software, and NASA should begin to address issues related to the effective use of Ada; and (4) The space station software standards should be selected (based upon existing standards where possible), and an organization should be identified to promulgate and enforce them. These and related recommendations are described in detail in the conference proceedings.

  3. Integration of instrumentation and processing software of a laser speckle contrast imaging system

    NASA Astrophysics Data System (ADS)

    Carrick, Jacob J.

    Laser speckle contrast imaging (LSCI) has the potential to be a powerful tool in medicine, but more research in the field is required so it can be used properly. To help in the progression of Michigan Tech's research in the field, a graphical user interface (GUI) was designed in Matlab to control the instrumentation of the experiments as well as process the raw speckle images into contrast images while they are being acquired. The design of the system was successful and is currently being used by Michigan Tech's Biomedical Engineering department. This thesis describes the development of the LSCI GUI as well as offering a full introduction into the history, theory and applications of LSCI.
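
    Laser speckle contrast imaging typically converts each raw frame into a contrast map K = σ/μ computed over a small sliding window. The sketch below shows one common formulation using SciPy; the window size and the test frame are assumptions, and this is not the Matlab GUI described in the thesis.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, window=7):
    """Local speckle contrast K = sigma / mean over a sliding window,
    the quantity LSCI systems derive from raw speckle frames (a common
    formulation, not necessarily the exact one used in this work)."""
    raw = raw.astype(float)
    mean = uniform_filter(raw, window)
    mean_sq = uniform_filter(raw ** 2, window)
    var = np.clip(mean_sq - mean ** 2, 0, None)
    return np.sqrt(var) / (mean + 1e-12)

# Stand-in raw frame with Poisson noise.
frame = np.random.poisson(50, size=(480, 640)).astype(float)
K = speckle_contrast(frame)
print(K.mean())
```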

  4. Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Zhang, Pei; An, Le; Ma, Guangkai; Kang, Jiayin; Shi, Feng; Wu, Xi; Zhou, Jiliu; Lalush, David S.; Lin, Weili; Shen, Dinggang

    2016-01-01

    Positron emission tomography (PET) has been widely used in clinical diagnosis for diseases and disorders. Obtaining high-quality PET images requires a standard-dose radionuclide (tracer) injection into the human body, which inevitably increases the risk of radiation exposure. One possible solution to this problem is to predict the standard-dose PET image from its low-dose counterpart and its corresponding multimodal magnetic resonance (MR) images. Inspired by the success of patch-based sparse representation (SR) in super-resolution image reconstruction, we propose a mapping-based SR (m-SR) framework for standard-dose PET image prediction. Compared with conventional patch-based SR, our method uses a mapping strategy to ensure that the sparse coefficients, estimated from the multimodal MR images and the low-dose PET image, can be applied directly to the prediction of the standard-dose PET image. As the mapping between multimodal MR images (or the low-dose PET image) and standard-dose PET images can be particularly complex, one step of mapping is often insufficient. To this end, an incremental refinement framework is proposed. Specifically, the predicted standard-dose PET image is further mapped to the target standard-dose PET image, and the SR is then performed again to predict a new standard-dose PET image. This procedure can be repeated over several iterations to progressively refine the prediction. Also, a patch-selection-based dictionary construction method is used to speed up the prediction process. The proposed method is validated on a human brain dataset. The experimental results show that our method can outperform benchmark methods in both qualitative and quantitative measures.
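
    The coupled-dictionary idea behind patch-based prediction can be illustrated compactly: estimate a sparse code for a low-dose patch on one dictionary and reuse the same code with a paired standard-dose dictionary. The toy sketch below uses orthogonal matching pursuit from scikit-learn with random dictionaries; it illustrates plain patch-based SR under those assumptions, not the authors' mapping-based m-SR algorithm or its incremental refinement.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_atoms, patch_dim = 64, 27            # e.g. 3x3x3 patches, flattened
D_low = rng.standard_normal((patch_dim, n_atoms))   # low-dose atoms
D_std = rng.standard_normal((patch_dim, n_atoms))   # paired standard-dose atoms

low_dose_patch = rng.standard_normal(patch_dim)     # stand-in patch

# Sparse code estimated on the low-dose dictionary...
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
omp.fit(D_low, low_dose_patch)
alpha = omp.coef_

# ...then reused with the standard-dose dictionary to synthesise the
# corresponding standard-dose patch.
predicted_std_patch = D_std @ alpha
print(predicted_std_patch.shape)
```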

  5. Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation.

    PubMed

    Wang, Yan; Zhang, Pei; An, Le; Ma, Guangkai; Kang, Jiayin; Shi, Feng; Wu, Xi; Zhou, Jiliu; Lalush, David S; Lin, Weili; Shen, Dinggang

    2016-01-21

    Positron emission tomography (PET) has been widely used in clinical diagnosis for diseases and disorders. Obtaining high-quality PET images requires a standard-dose radionuclide (tracer) injection into the human body, which inevitably increases the risk of radiation exposure. One possible solution to this problem is to predict the standard-dose PET image from its low-dose counterpart and its corresponding multimodal magnetic resonance (MR) images. Inspired by the success of patch-based sparse representation (SR) in super-resolution image reconstruction, we propose a mapping-based SR (m-SR) framework for standard-dose PET image prediction. Compared with conventional patch-based SR, our method uses a mapping strategy to ensure that the sparse coefficients, estimated from the multimodal MR images and the low-dose PET image, can be applied directly to the prediction of the standard-dose PET image. As the mapping between multimodal MR images (or the low-dose PET image) and standard-dose PET images can be particularly complex, one step of mapping is often insufficient. To this end, an incremental refinement framework is proposed. Specifically, the predicted standard-dose PET image is further mapped to the target standard-dose PET image, and the SR is then performed again to predict a new standard-dose PET image. This procedure can be repeated over several iterations to progressively refine the prediction. Also, a patch-selection-based dictionary construction method is used to speed up the prediction process. The proposed method is validated on a human brain dataset. The experimental results show that our method can outperform benchmark methods in both qualitative and quantitative measures. PMID:26732849

  6. Improved modified pressure imaging and software for egg micro-crack detection and egg quality grading

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Cracks in the egg shell pose a food safety risk. In particular, eggs with very fine hairline cracks (micro-cracks) often go undetected during the grading process because they are almost impossible to detect visually. A modified pressure imaging system was developed to detect eggs with micro-crack...

  7. 3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies

    NASA Astrophysics Data System (ADS)

    Periverzov, Frol; Ilieş, Horea T.

    2012-09-01

    Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

  8. Parallel software requirements to the design of a general architecture: application to the image processing

    NASA Astrophysics Data System (ADS)

    Bonnin, Patrick J.; Hoeltzener-Douarin, Brigitte; Aubin, N.; Cartier, S.; Porcher, Thierry; Fiorini, P.; Zavidovique, Bertrand

    1993-10-01

    A great number of parallel computer architectures have been proposed, whether they are SIMD machines (Single Instruction Multiple Data) with many quite simple processors, or MIMD machines (Multiple Instruction Multiple Data) containing a few, but powerful, processors. Each one claims to offer some kind of optimality at the hardware level. But implementing parallel image processing algorithms so that they run in real time will remain a real challenge; it addresses rather the control of communication networks between processors (message passing, circuit switching, etc.) or the computing model (e.g. the data parallel model). In that respect, our goal here is to point out some algorithmic needs for distributing image processing operators. They will be translated first in terms of programming models, more general than image processing applications, and then as hardware properties of the processor network. In that way, we do not design yet another parallel machine dedicated to image processing, but a more general parallel architecture on which one will be able to efficiently implement different kinds of programming models.

  9. Software component quality evaluation

    NASA Technical Reports Server (NTRS)

    Clough, A. J.

    1991-01-01

    The paper describes a software inspection process that can be used to evaluate the quality of software components. Quality criteria, process application, independent testing of the process and proposed associated tool support are covered. Early results indicate that this technique is well suited for assessing software component quality in a standardized fashion. With automated machine assistance to facilitate both the evaluation and selection of software components, such a technique should promote effective reuse of software components.

  10. Standards and specifications in pathology: image management, report management and terminology.

    PubMed

    Daniel, Christel; Booker, David; Beckwith, Bruce; Della Mea, Vincenzo; García-Rojo, Marcial; Havener, Lori; Kennedy, Mary; Klossa, Jacques; Laurinavicius, Arvydas; Macary, François; Punys, Vytenis; Scharber, Wendy; Schrader, Thomas

    2012-01-01

    For making medical decisions, healthcare professionals require that all necessary information is both correct and easily available. Collaborative Digital Anatomic Pathology refers to the use of information technology that supports the creation and sharing or exchange of information, including data and images, during the complex workflow performed in an Anatomic Pathology department from specimen reception to report transmission and exploitation. Collaborative Digital Anatomic Pathology is supported by standardization efforts toward knowledge representation for sharable and computable clinical information. The goal of the international Integrating the Healthcare Enterprise (IHE) initiative is precisely to specify how medical informatics standards should be implemented to meet specific healthcare needs and to make systems integration more efficient and less expensive. The IHE Anatomic Pathology initiative was launched to implement the best use of medical informatics standards in order to produce, share and exchange machine-readable structured reports and their evidence (including whole slide images) within hospitals and across healthcare facilities. DICOM supplements 122 and 145 provide flexible object information definitions dedicated, respectively, to specimen description and to WSI acquisition, storage and display. The profiles "Anatomic Pathology Reporting for Public Health" (ARPH) and "Anatomic Pathology Structured Report" (APSR) provide standard templates and transactions for sharing or exchanging structured reports in which textual observations - encoded using PathLex, an international controlled vocabulary currently being mapped to SNOMED CT concepts - may be bound to digital images or regions of interest in images. Current implementations of IHE Anatomic Pathology profiles in North America, France and Spain demonstrate the applicability of recent advances in standards for Collaborative Digital Anatomic Pathology. The use of machine-readable format of Anatomic

  11. Full-sun synchronic EUV and coronal hole mapping using multi-instrument images: Data and software made available

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Downs, C.; Linker, J.

    2015-12-01

    A method for the automatic generation of EUV and coronal hole (CH) maps using simultaneous multi-instrument imaging data is described. Synchronized EUV images from STEREO/EUVI A&B 195Å and SDO/AIA 193Å undergo preprocessing steps that include PSF-deconvolution and the application of nonlinear data-derived intensity corrections that account for center-to-limb variations (limb-brightening) and inter-instrument intensity normalization. The latter two corrections are derived using a robust, systematic approach that takes advantage of unbiased long-term averages of data and serves to flatten the images by converting all pixel intensities to a unified disk-center equivalent. While the range of applications is broad, we demonstrate how this technique is very useful for CH detection, as it enables the use of a fast and simplified image segmentation algorithm to obtain consistent detection results. The multi-instrument nature of the technique also allows one to track evolving features consistently for longer periods than is possible with a single instrument, and preliminary results quantifying CH area and shape evolution are shown. Most importantly, several data and software products are made available to the community for use. For the ~4 year period of 6/10/2010 to 8/18/2014, we provide synchronic EUV and coronal hole maps at 6-hour cadence as well as the data-derived limb-brightening and inter-instrument correction factors that we applied. We also make available a ready-to-use MATLAB script, EUV2CHM, used to generate the maps, which loads EUV images, applies our preprocessing steps, and then uses our GPU-accelerated/CPU-multithreaded segmentation algorithm EZSEG to detect coronal holes.
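
    A minimal sketch of the flattening step is shown below: each pixel is divided by a radius-dependent correction curve so that intensities become disk-center equivalent, after which a simple global threshold can flag coronal-hole candidates. The correction table, threshold and image are placeholders; the real correction factors are the data-derived products released with the paper, and the segmentation there is done by EZSEG.

```python
import numpy as np

def flatten_limb_brightening(img, cx, cy, r_sun, correction):
    """Divide each pixel by a radius-dependent correction curve so that
    intensities become disk-centre equivalent (the general idea behind
    the limb-brightening correction; the actual curves in the paper are
    derived from long-term data averages)."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - cx, yy - cy) / r_sun          # fractional solar radius
    return img / np.interp(r, correction[:, 0], correction[:, 1])

# Toy correction table: column 0 = r/R_sun, column 1 = relative brightening.
corr = np.array([[0.0, 1.0], [0.8, 1.1], [1.0, 1.4], [1.2, 1.4]])
img = np.random.rand(1024, 1024) + 1.0              # stand-in EUV image
flat = flatten_limb_brightening(img, 512, 512, 400, corr)

# With a flattened image, coronal-hole candidates can be picked out
# with a simple global intensity threshold.
ch_mask = flat < 0.35 * np.median(flat)
print(ch_mask.sum(), "candidate coronal-hole pixels")
```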

  12. MO-G-9A-01: Imaging Refresher for Standard of Care Radiation Therapy

    SciTech Connect

    Labby, Z; Sensakovic, W; Hipp, E; Altman, M

    2014-06-15

    Imaging techniques and technology which were previously the domain of diagnostic medicine are becoming increasingly integrated and utilized in radiation therapy (RT) clinical practice. As such, there are a number of specific imaging topics that are highly applicable to modern radiation therapy physics. As imaging becomes more widely integrated into standard clinical radiation oncology practice, the impetus is on RT physicists to be informed and up-to-date on those imaging modalities relevant to the design and delivery of therapeutic radiation treatments. For example, knowing that, for a given situation, a fluid attenuated inversion recovery (FLAIR) image set is most likely what the physician would like to import and contour is helpful, but may not be sufficient to provide the best quality of care. Understanding the physics of how that pulse sequence works and why it is used could help assess its utility and determine if it is the optimal sequence for aiding in that specific clinical situation. It is thus important that clinical medical physicists be able to understand and explain the physics behind the imaging techniques used in all aspects of clinical radiation oncology practice. This session will provide the basic physics for a variety of imaging modalities for applications that are highly relevant to radiation oncology practice: computed tomography (CT) (including kV, MV, cone beam CT [CBCT], and 4DCT), positron emission tomography (PET)/CT, magnetic resonance imaging (MRI), and imaging specific to brachytherapy (including ultrasound and some brachytherapy-specific topics in MR). For each unique modality, the image formation process will be reviewed, trade-offs between image quality and other factors (e.g. imaging time or radiation dose) will be clarified, and typical use cases will be introduced. The current and near-future uses of these modalities and techniques in radiation oncology clinical practice will also be discussed. Learning

  13. Gamma-H2AX foci counting: image processing and control software for high-content screening

    NASA Astrophysics Data System (ADS)

    Barber, P. R.; Locke, R. J.; Pierce, G. P.; Rothkamm, K.; Vojnovic, B.

    2007-02-01

    Phosphorylation of the chromatin protein H2AX (forming γH2AX) is implicated in the repair of DNA double strand breaks (DSBs); a large number of H2AX molecules become phosphorylated at the sites of DSBs. Fluorescent staining of the cell nuclei for γH2AX, via an antibody, visualises the formation of these foci, allowing the quantification of DNA DSBs and forming the basis for a sensitive biological dosimeter of ionising radiation. We describe an automated fluorescence microscopy system, including automated image processing, to count γH2AX foci. The image processing is performed by a Hough transform based algorithm, CHARM, which has wide applicability for the detection and analysis of cells and cell colonies. This algorithm and its applications for cell nucleus and foci detection will be described. The system also relies heavily on robust control software, written using multi-threaded C-based modules in LabWindows/CVI, that adapts to the timing requirements of a particular experiment for optimised slide/plate scanning and mosaicing, making use of modern multi-core processors. The system forms the basis of a general purpose high-content screening platform with wide-ranging applications in live and fixed cell imaging and tissue microarrays that, in future, can incorporate spectrally and time-resolved information.
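
    For orientation, the sketch below counts bright foci in a single nucleus with a plain background-subtraction/threshold/label approach using SciPy. It is not the Hough-transform-based CHARM algorithm described above; the smoothing scale, threshold rule and minimum focus size are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def count_foci(nucleus_img, background_sigma=8, min_size=4):
    """Count bright gamma-H2AX-like foci inside a single nucleus image
    using background subtraction, a global threshold and connected-
    component labelling (a simple stand-in, not CHARM)."""
    img = nucleus_img.astype(float)
    background = ndi.gaussian_filter(img, background_sigma)
    enhanced = np.clip(img - background, 0, None)   # suppress nucleus glow
    mask = enhanced > enhanced.mean() + 3 * enhanced.std()
    labels, n = ndi.label(mask)
    sizes = ndi.sum(mask, labels, range(1, n + 1))
    return int(np.sum(np.asarray(sizes) >= min_size))

# Stand-in nucleus image with Poisson noise.
nucleus = np.random.poisson(20, (128, 128)).astype(float)
print(count_foci(nucleus))
```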

  14. Anatomic standardization: Linear scaling and nonlinear warping of functional brain images

    SciTech Connect

    Minoshima, S.; Koeppe, R.A.; Frey, K.A.

    1994-09-01

    An automated method was proposed for anatomic standardization of PET scans in three dimensions, which enabled objective intersubject and cross-group comparisons of functional brain images. The method involved linear scaling to correct for individual brain size and nonlinear warping to minimize regional anatomic variations among subjects. In the linear-scaling step, the anteroposterior length and width of the brain were measured on the PET images, and the brain height was estimated by a contour-matching procedure using the midsagittal plane. In the nonlinear warping step, individual gray matter locations were matched with those of a standard brain by maximizing correlation coefficients of regional profile curves determined between predefined stretching centers (predominantly in white matter) and the gray matter landmarks. The accuracy of the brain height estimation was compared with skull x-ray estimations, showing comparable accuracy and better reproducibility. Linear-scaling and nonlinear warping methods were validated using [18F]fluorodeoxyglucose and [15O]water images. Regional anatomic variability on the glucose images was reduced markedly. The statistical significance of activation foci in paired water images was improved in both vibratory and visual activation paradigms. A group versus group comparison following the proposed anatomic standardization revealed highly significant glucose metabolic alterations in the brains of patients with Alzheimer's disease compared with those of a normal control group. These results suggested that the method is well suited to both research and clinical settings and can facilitate pixel-by-pixel comparisons of PET images. 26 refs., 9 figs., 1 tab.
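
    The linear-scaling step described above amounts to resampling each brain so that its measured extents match a template. A rough sketch with SciPy is shown below; the axis ordering and the millimetre extents are illustrative assumptions, and the nonlinear warping step is not attempted here.

```python
import numpy as np
from scipy.ndimage import zoom

def linear_scale_to_template(volume, brain_size_mm, template_size_mm):
    """Scale an individual PET volume along each axis so that the
    measured brain length/width/height matches a template brain
    (the first, linear step of the anatomic standardization above).
    Sizes are (x, y, z) extents in millimetres."""
    factors = [t / b for t, b in zip(template_size_mm, brain_size_mm)]
    return zoom(volume, factors, order=1)

pet = np.random.rand(128, 128, 63)                    # stand-in PET volume
scaled = linear_scale_to_template(pet,
                                  (165.0, 140.0, 120.0),   # measured brain
                                  (172.0, 136.0, 118.0))   # template brain
print(scaled.shape)
```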

  15. Development of fast patient position verification software using 2D-3D image registration and its clinical experience.

    PubMed

    Mori, Shinichiro; Kumagai, Motoki; Miki, Kentaro; Fukuhara, Riki; Haneishi, Hideaki

    2015-09-01

    To improve treatment workflow, we developed a graphic processing unit (GPU)-based patient positional verification software application and integrated it into carbon-ion scanning beam treatment. Here, we evaluated the basic performance of the software. The algorithm provides 2D/3D registration matching using CT and orthogonal X-ray flat panel detector (FPD) images. The participants were 53 patients with tumors of the head and neck, prostate or lung receiving carbon-ion beam treatment. 2D/3D-ITchi-Gime (ITG) calculation accuracy was evaluated in terms of computation time and registration accuracy. Registration calculation was determined using the similarity measurement metrics gradient difference (GD), normalized mutual information (NMI), zero-mean normalized cross-correlation (ZNCC), and their combination. Registration accuracy was dependent on the particular metric used. Representative examples were determined to have target registration error (TRE) = 0.45 ± 0.23 mm and angular error (AE) = 0.35 ± 0.18° with ZNCC + GD for a head and neck tumor; TRE = 0.12 ± 0.07 mm and AE = 0.16 ± 0.07° with ZNCC for a pelvic tumor; and TRE = 1.19 ± 0.78 mm and AE = 0.83 ± 0.61° with ZNCC for a lung tumor. Calculation time was less than 7.26 s. The new registration software has been successfully installed and implemented in our treatment process. We expect that it will improve both treatment workflow and treatment accuracy. PMID:26081313
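
    Of the similarity metrics listed, zero-mean normalized cross-correlation is the simplest to write down. The sketch below is a generic ZNCC between two images, not the GPU implementation used in the 2D/3D-ITG software.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two images
    (e.g. a digitally reconstructed radiograph and an FPD image);
    +1 indicates a perfect linear match."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

x = np.random.rand(256, 256)
print(zncc(x, 0.5 * x + 0.1))   # linearly related images score ~1
```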

  16. Development of fast patient position verification software using 2D-3D image registration and its clinical experience

    PubMed Central

    Mori, Shinichiro; Kumagai, Motoki; Miki, Kentaro; Fukuhara, Riki; Haneishi, Hideaki

    2015-01-01

    To improve treatment workflow, we developed a graphic processing unit (GPU)-based patient positional verification software application and integrated it into carbon-ion scanning beam treatment. Here, we evaluated the basic performance of the software. The algorithm provides 2D/3D registration matching using CT and orthogonal X-ray flat panel detector (FPD) images. The participants were 53 patients with tumors of the head and neck, prostate or lung receiving carbon-ion beam treatment. 2D/3D-ITchi-Gime (ITG) calculation accuracy was evaluated in terms of computation time and registration accuracy. Registration calculation was determined using the similarity measurement metrics gradient difference (GD), normalized mutual information (NMI), zero-mean normalized cross-correlation (ZNCC), and their combination. Registration accuracy was dependent on the particular metric used. Representative examples were determined to have target registration error (TRE) = 0.45 ± 0.23 mm and angular error (AE) = 0.35 ± 0.18° with ZNCC + GD for a head and neck tumor; TRE = 0.12 ± 0.07 mm and AE = 0.16 ± 0.07° with ZNCC for a pelvic tumor; and TRE = 1.19 ± 0.78 mm and AE = 0.83 ± 0.61° with ZNCC for a lung tumor. Calculation time was less than 7.26 s. The new registration software has been successfully installed and implemented in our treatment process. We expect that it will improve both treatment workflow and treatment accuracy. PMID:26081313

  17. Fire service and first responder thermal imaging camera (TIC) advances and standards

    NASA Astrophysics Data System (ADS)

    Konsin, Lawrence S.; Nixdorff, Stuart

    2007-04-01

    Fire Service and First Responder Thermal Imaging Camera (TIC) applications are growing, saving lives and preventing injury and property damage. Firefighters face a wide range of serious hazards. TICs help mitigate the risks by protecting Firefighters and preventing injury, while reducing time spent fighting the fire and resources needed to do so. Most fire safety equipment is covered by performance standards. Fire TICs, however, are not covered by such standards and are also subject to inadequate operational performance and insufficient user training. Meanwhile, advancements in Fire TICs and lower costs are driving product demand. The need for a Fire TIC Standard was spurred in late 2004 through a Government sponsored Workshop where experts from the First Responder community, component manufacturers, firefighter training, and those doing research on TICs discussed strategies, technologies, procedures, best practices and R&D that could improve Fire TICs. The workshop identified pressing image quality, performance metrics, and standards issues. Durability and ruggedness metrics and standard testing methods were also seen as important, as was TIC training and certification of end-users. A progress report on several efforts in these areas and their impact on the IR sensor industry will be given. This paper is a follow up to the SPIE Orlando 2004 paper on Fire TIC usage (entitled Emergency Responders' Critical Infrared) which explored the technological development of this IR industry segment from the viewpoint of the end user, in light of the studies and reports that had established TICs as a mission critical tool for firefighters.

  18. Programmable vision processor/controller for flexible implementation of current and future image compression standards

    SciTech Connect

    Bailey, D.; Cressa, M.; Fandrianto, J.; Neubauer, D.; Rainnie, H.K.J.; Chi-Shin Wang

    1992-10-01

    The image compression algorithm standardization process has been in motion for over five years. Due to the broad range of interests that gave input at the national and international levels, the three products of this effort, px64, JPEG, and MPEG, combine flexibility and quality. The standardization process also included a number of semiconductor companies interested in creating supporting products, which are now nearing completion. One of the first highly integrated products dedicated to video compression available from an IC manufacturer is IIT's Vision Processor/Controller. 5 figs., 1 tab.

  19. Photon counting imaging and centroiding with an electron-bombarded CCD using single molecule localisation software

    NASA Astrophysics Data System (ADS)

    Hirvonen, Liisa M.; Barber, Matthew J.; Suhling, Klaus

    2016-06-01

    Photon event centroiding in photon counting imaging and single-molecule localisation in super-resolution fluorescence microscopy share many traits. Although photon event centroiding has traditionally been performed with simple single-iteration algorithms, we recently reported that iterative fitting algorithms originally developed for single-molecule localisation fluorescence microscopy work very well when applied to centroiding photon events imaged with an MCP-intensified CMOS camera. Here, we have applied these algorithms for centroiding of photon events from an electron-bombarded CCD (EBCCD). We find that centroiding algorithms based on iterative fitting of the photon events yield excellent results and allow fitting of overlapping photon events, a feature not reported before and an important aspect to facilitate an increased count rate and shorter acquisition times.
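
    The contrast between the two approaches mentioned above, a single-pass centre of mass versus an iterative Gaussian fit of the kind used in single-molecule localisation, can be sketched on a single photon-event patch as below; the patch, starting guesses and Gaussian model are illustrative assumptions rather than the authors' fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    """Isotropic 2D Gaussian, flattened for curve_fit."""
    x, y = coords
    g = amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset
    return g.ravel()

def centroid_event(patch):
    """Locate a single photon event: first a plain centre of mass,
    then an iterative 2D Gaussian fit seeded from it."""
    yy, xx = np.indices(patch.shape)
    total = patch.sum()
    com = (float((xx * patch).sum() / total), float((yy * patch).sum() / total))
    p0 = (patch.max(), com[0], com[1], 1.5, patch.min())
    popt, _ = curve_fit(gauss2d, (xx, yy), patch.ravel(), p0=p0)
    return com, (popt[1], popt[2])

# Toy event: a Gaussian spot plus noise on a 9x9 patch.
yy, xx = np.indices((9, 9))
patch = 80 * np.exp(-((xx - 4.3) ** 2 + (yy - 3.7) ** 2) / 3.0) + np.random.rand(9, 9)
print(centroid_event(patch))
```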

  20. Photon counting imaging and centroiding with an electron-bombarded CCD using single molecule localisation software

    PubMed Central

    Hirvonen, Liisa M.; Barber, Matthew J.; Suhling, Klaus

    2016-01-01

    Photon event centroiding in photon counting imaging and single-molecule localisation in super-resolution fluorescence microscopy share many traits. Although photon event centroiding has traditionally been performed with simple single-iteration algorithms, we recently reported that iterative fitting algorithms originally developed for single-molecule localisation fluorescence microscopy work very well when applied to centroiding photon events imaged with an MCP-intensified CMOS camera. Here, we have applied these algorithms for centroiding of photon events from an electron-bombarded CCD (EBCCD). We find that centroiding algorithms based on iterative fitting of the photon events yield excellent results and allow fitting of overlapping photon events, a feature not reported before and an important aspect to facilitate an increased count rate and shorter acquisition times. PMID:27274604

  1. Improved structure, function and compatibility for CellProfiler: modular high-throughput image analysis software

    PubMed Central

    Kamentsky, Lee; Jones, Thouis R.; Fraser, Adam; Bray, Mark-Anthony; Logan, David J.; Madden, Katherine L.; Ljosa, Vebjorn; Rueden, Curtis; Eliceiri, Kevin W.; Carpenter, Anne E.

    2011-01-01

    Summary: There is a strong and growing need in the biology research community for accurate, automated image analysis. Here, we describe CellProfiler 2.0, which has been engineered to meet the needs of its growing user base. It is more robust and user friendly, with new algorithms and features to facilitate high-throughput work. ImageJ plugins can now be run within a CellProfiler pipeline. Availability and Implementation: CellProfiler 2.0 is free and open source, available at http://www.cellprofiler.org under the GPL v. 2 license. It is available as a packaged application for Macintosh OS X and Microsoft Windows and can be compiled for Linux. Contact: anne@broadinstitute.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21349861

  2. ORBS, ORCS, OACS, a Software Suite for Data Reduction and Analysis of the Hyperspectral Imagers SITELLE and SpIOMM

    NASA Astrophysics Data System (ADS)

    Martin, T.; Drissen, L.; Joncas, G.

    2015-09-01

    SITELLE (installed in 2015 at the Canada-France-Hawaii Telescope) and SpIOMM (a prototype attached to the Observatoire du Mont-Mégantic) are the first Imaging Fourier Transform Spectrometers (IFTS) capable of obtaining a hyperspectral data cube which samples a 12 arcminute field of view into four million visible spectra. The result of each observation is made up of two interferometric data cubes which need to be merged, corrected, transformed and calibrated in order to get a spectral cube of the observed region ready to be analysed. ORBS is a fully automatic data reduction software package that has been designed entirely for this purpose. The data size (up to 68 Gb for larger science cases) and the computational needs have been challenging, and the highly parallelized object-oriented architecture of ORBS reflects the solutions adopted, which made it possible to process 68 Gb of raw data in less than 11 hours using 8 cores and 22.6 Gb of RAM. It is based on a core framework (ORB) that has been designed to support the whole software suite for data analysis (ORCS and OACS), data simulation (ORUS) and data acquisition (IRIS). They all aim to provide a strong basis for the creation and development of specialized analysis modules that could benefit the scientific community working with SITELLE and SpIOMM.

  3. Software for Sunspots Automatic Detection, Heliographic Location and Area Measurement for Soho Images

    NASA Astrophysics Data System (ADS)

    Rivero Gavilán, H.; Guevara Day, W.

    2006-06-01

    Active regions (ARs) are the manifestation of magnetic flux tubes which, because of buoyancy, emerge in the typical shape of the Greek letter Ω. Tracking and studying ARs allows us to study the global properties of the flux tubes that form the active regions and provides important information about their origin (formation and transport in the convective zone) and about how magnetic helicity is carried into the corona by photospheric motions. To begin a study of this behaviour we are developing a program in IDL which, together with routines from SOLARSOFT, will allow us to track active regions of interest. The program uses the year-2005 magnetogram database provided by MDI-SOHO, from which we select the ARs of interest and determine their heliographic coordinates. Once an image is selected, the intensity level of the field of interest is chosen and the program computes the positions of the different polarities and their geometric areas (given in arcsec); these values are stored in a text file together with a supporting image showing the contour lines of the magnetic field intensities chosen by the user. As a test of the algorithm we processed several MDI-SOHO images of NOAA region 10715 from 01 to 03 January of the present year, using up to 43 images. These results are part of a study of the evolution of active zones aimed at determining the origin of AR formation.
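
    A rough Python equivalent of the thresholding step is sketched below: pixels above a chosen |B| level are labelled per polarity, and each patch's centroid and area are reported. The field level, plate scale and synthetic magnetogram are assumptions; the actual program is written in IDL with SOLARSOFT routines.

```python
import numpy as np
from scipy import ndimage as ndi

def active_region_areas(magnetogram, level_gauss=100.0, arcsec_per_px=1.98):
    """Label connected patches of each polarity whose |B_los| exceeds a
    chosen level and report centroids and areas in arcsec^2 (an
    illustrative reimplementation of the thresholding step only)."""
    results = []
    for sign in (+1, -1):
        mask = sign * magnetogram > level_gauss
        labels, n = ndi.label(mask)
        centroids = ndi.center_of_mass(mask, labels, range(1, n + 1))
        areas_px = ndi.sum(mask, labels, range(1, n + 1))
        for c, a in zip(centroids, areas_px):
            results.append({"polarity": sign,
                            "centroid_px": c,
                            "area_arcsec2": float(a) * arcsec_per_px ** 2})
    return results

mag = 50 * np.random.standard_normal((512, 512))   # stand-in magnetogram
mag[200:220, 300:330] = 400                        # synthetic positive patch
print(active_region_areas(mag)[:1])
```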

  4. The Java Image Science Toolkit (JIST) for Rapid Prototyping and Publishing of Neuroimaging Software

    PubMed Central

    Lucas, Blake C.; Bogovic, John A.; Carass, Aaron; Bazin, Pierre-Louis; Prince, Jerry L.; Pham, Dzung

    2010-01-01

    Non-invasive neuroimaging techniques enable extraordinarily sensitive and specific in vivo study of the structure, functional response and connectivity of biological mechanisms. With these advanced methods comes a heavy reliance on computer-based processing, analysis and interpretation. While the neuroimaging community has produced many excellent academic and commercial tool packages, new tools are often required to interpret new modalities and paradigms. Developing custom tools and ensuring interoperability with existing tools is a significant hurdle. To address these limitations, we present a new framework for algorithm development that implicitly ensures tool interoperability, generates graphical user interfaces, provides advanced batch processing tools, and, most importantly, requires minimal additional programming or computational overhead. Java-based rapid prototyping with this system is an efficient and practical approach to evaluate new algorithms since the proposed system ensures that rapidly constructed prototypes are actually fully-functional processing modules with support for multiple GUIs, a broad range of file formats, and distributed computation. Herein, we demonstrate MRI image processing with the proposed system for cortical surface extraction in large cross-sectional cohorts, provide a system for fully automated diffusion tensor image analysis, and illustrate how the system can be used as a simulation framework for the development of a new image analysis method. The system is released as open source under the Lesser GNU Public License (LGPL) through the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC). PMID:20077162

  5. Design of a hardware/software FPGA-based driver system for a large area high resolution CCD image sensor

    NASA Astrophysics Data System (ADS)

    Chen, Ying; Xu, Wanpeng; Zhao, Rongsheng; Chen, Xiangning

    2014-09-01

    A hardware/software field programmable gate array (FPGA)-based driver system was proposed and demonstrated for the KAF-39000 large area high resolution charge coupled device (CCD). The requirements of the KAF-39000 driver system were analyzed. The structure of "microprocessor with application specific integrated circuit (ASIC) chips" was implemented to design the driver system. The system test results showed that dual channels of imaging analog data were obtained with a frame rate of 0.87 frame/s. The frequencies of horizontal timing and vertical timing were 22.9 MHz and 28.7 kHz, respectively, which almost reached the theoretical value of 24 MHz and 30 kHz, respectively.

  6. Development and Evaluation of Reference Standards for Image-based Telemedicine Diagnosis and Clinical Research Studies in Ophthalmology

    PubMed Central

    Ryan, Michael C.; Ostmo, Susan; Jonas, Karyn; Berrocal, Audina; Drenser, Kimberly; Horowitz, Jason; Lee, Thomas C.; Simmons, Charles; Martinez-Castellanos, Maria-Ana; Chan, R.V. Paul; Chiang, Michael F.

    2014-01-01

    Information systems managing image-based data for telemedicine or clinical research applications require a reference standard representing the correct diagnosis. Accurate reference standards are difficult to establish because of imperfect agreement among physicians, and discrepancies between clinical vs. image-based diagnosis. This study is designed to describe the development and evaluation of reference standards for image-based diagnosis, which combine diagnostic impressions of multiple image readers with the actual clinical diagnoses. We show that agreement between image reading and clinical examinations was imperfect (689 [32%] discrepancies in 2148 image readings), as was inter-reader agreement (kappa 0.490-0.652). This was improved by establishing an image-based reference standard defined as the majority diagnosis given by three readers (13% discrepancies with image readers). It was further improved by establishing an overall reference standard that incorporated the clinical diagnosis (10% discrepancies with image readers). These principles of establishing reference standards may be applied to improve robustness of real-world systems supporting image-based diagnosis. PMID:25954463
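
    The image-based reference standard described above is, at its core, a majority vote over three readers. A minimal sketch is shown below; the tie-handling rule is an assumption, since the paper does not state how ties were adjudicated.

```python
from collections import Counter

def image_based_reference_standard(reader_diagnoses):
    """Majority diagnosis of independent image readers, the image-based
    reference standard described above. Ties return None so they can be
    adjudicated separately (an assumption, not the paper's stated rule)."""
    counts = Counter(reader_diagnoses).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None
    return counts[0][0]

# Example with three readers' diagnoses for one image set.
print(image_based_reference_standard(["plus", "pre-plus", "plus"]))
```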

  7. SU-E-I-63: Quantitative Evaluation of the Effects of Orthopedic Metal Artifact Reduction (OMAR) Software On CT Images for Radiotherapy Simulation

    SciTech Connect

    Jani, S

    2014-06-01

    Purpose: CT simulation for patients with metal implants can often be challenging due to artifacts that obscure tumor/target delineation and normal organ definition. Our objective was to evaluate the effectiveness of Orthopedic Metal Artifact Reduction (OMAR), a commercially available software package, in reducing metal-induced artifacts and its effect on computed dose during treatment planning. Methods: CT images of water surrounding metallic cylindrical rods made of aluminum, copper and iron were studied in terms of Hounsfield Units (HU) spread. Metal-induced artifacts were characterized in terms of HU/Volume Histogram (HVH) using the Pinnacle treatment planning system. Effects of OMAR on enhancing our ability to delineate organs on CT and subsequent dose computation were examined in nine (9) patients with hip implants and two (2) patients with breast tissue expanders. Results: Our study characterized water at 1000 HU with a standard deviation (SD) of about 20 HU. The HVHs allowed us to evaluate how the presence of metal changed the HU spread. For example, introducing a 2.54 cm diameter copper rod in water increased the SD in HU of the surrounding water from 20 to 209, representing an increase in artifacts. Subsequent use of OMAR brought the SD down to 78. Aluminum produced the least artifacts, whereas iron showed the largest amount of artifacts. In general, an increase in kVp and mA during CT scanning showed better effectiveness of OMAR in reducing artifacts. Our dose analysis showed that some isodose contours shifted by several mm with OMAR, but only infrequently, and the shifts were not significant in the planning process. Computed volumes of various dose levels showed <2% change. Conclusions: In our experience, OMAR software greatly reduced the metal-induced CT artifacts for the majority of patients with implants, thereby improving our ability to delineate tumor and surrounding organs. OMAR had a clinically negligible effect on computed dose within tissues. Partially funded by unrestricted

  8. [A solution to add digital signatures to medical images according to the DICOM standard: embedded systems].

    PubMed

    Schütze, B; Kroll, M; Filler, T J

    2005-01-01

    Radiology departments often underestimate the importance of protecting medical data during transmission and the precautions needed to ensure data protection. In teleradiology, transmitted as well as stored patient data have to be signed digitally according to the currently valid regulation (Röntgenverordnung, RöV). The DICOM standard provides for a digital signature. So far, manufacturers of medical imaging systems have only announced support for this security feature. We introduce a solution that extends the capability of digital signing to older modalities. PMID:15657831

  9. Control Software

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Real-Time Innovations, Inc. (RTI) collaborated with Ames Research Center, the Jet Propulsion Laboratory and Stanford University to leverage NASA research to produce ControlShell software. RTI is the first "graduate" of Ames Research Center's Technology Commercialization Center. The ControlShell system was used extensively on a cooperative project to enhance the capabilities of a Russian-built Marsokhod rover being evaluated for eventual flight to Mars. RTI's ControlShell is complex, real-time command and control software, capable of processing information and controlling mechanical devices. One ControlShell tool is StethoScope. As a real-time data collection and display tool, StethoScope allows a user to see how a program is running without changing its execution. RTI has successfully applied its software savvy in other arenas, such as telecommunications, networking, video editing, semiconductor manufacturing, automobile systems, and medical imaging.

  10. Orbit Determination and Gravity Field Estimation of the Dawn spacecraft at Vesta Using Radiometric and Image Constraints with GEODYN Software

    NASA Astrophysics Data System (ADS)

    Centinello, F. J.; Zuber, M. T.; Mazarico, E.

    2013-12-01

    The Dawn spacecraft orbited the protoplanet Vesta from May 3, 2011 to July 25, 2012. Precise orbit determination was critical for the geophysical investigation, as well as the definition of the Vesta-fixed reference frame and the subsequent registration of datasets to the surface. GEODYN, the orbit determination and geodetic parameter estimation software of NASA Goddard Spaceflight Center, was used to compute the orbit of the Dawn spacecraft and estimate the gravity field of Vesta. GEODYN utilizes radiometric Doppler and range measurements, and was modified to process image data from Dawn's cameras. X-band radiometric measurements were acquired by the NASA Deep Space Network (DSN). The addition of the capability to process image constraints decreases position uncertainty in the along- and cross-orbit track directions because of their geometric strengths compared with radiometric measurements. This capability becomes critical for planetary missions such as Dawn due to the weak gravity environment, where non-conservative forces affect the orbit more than typical of orbits at larger planetary bodies. Radiometric measurements were fit to less than 0.1 mm/s and 5 m for Doppler and range during the Survey orbit phase (compared with measurement noise RMS of about 0.05 mm/s and 2 m for Doppler and range). Image constraint RMS was fit to less than 100 m (resolution is 5 - 150 m/pixel, depending on the spacecraft altitude). Orbits computed using GEODYN were used to estimate a 20th degree and order gravity field of Vesta. The quality of the orbit determination and estimated gravity field with and without image constraints was assessed through comparison with the spacecraft trajectory and gravity model provided by the Dawn Science Team.

  11. Software Archive Related Issues

    NASA Technical Reports Server (NTRS)

    Angelini, Lorella

    2008-01-01

    With the archive opening of the major X-ray and gamma-ray missions, the school is intended to provide information on the resources available in the data archive and the public software. This talk reviews the archive content, the data formats for the major active missions Chandra, XMM-Newton, Swift, RXTE, Integral and Suzaku, and the available software for each of these missions. It will explain the FITS format in general and the specific layout for the most popular missions, explaining the role of keywords and how they fit into the multimission standard approach embraced by the high-energy community. Specifically, it reviews: the different data levels and the software applicable to each; the popular/standard methods of analysis for high-level products such as spectra, timing and images; the role of calibration in the multimission approach; and how to navigate the archive query databases. It will also present how the school is organized and how the information provided will be relevant to each of the afternoon science projects that will be proposed to the students and led by a project leader.
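
    As a small illustration of the FITS layout and keyword roles the talk covers, the sketch below builds a tiny file in memory with astropy and inspects its header-data units; the keyword values are placeholders, and with a real archive product one would simply open the downloaded file instead.

```python
import numpy as np
from astropy.io import fits

# Build a tiny FITS structure in memory so the example is self-contained.
primary = fits.PrimaryHDU()
primary.header["TELESCOP"] = "SWIFT"     # illustrative keyword values
primary.header["INSTRUME"] = "XRT"
image = fits.ImageHDU(data=np.zeros((64, 64)), name="IMAGE")
hdul = fits.HDUList([primary, image])

hdul.info()                              # list all header-data units (HDUs)
print(hdul[0].header["TELESCOP"], hdul["IMAGE"].data.shape)
```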

  12. On Software Compatibility.

    ERIC Educational Resources Information Center

    Ershov, Andrei P.

    The problem of software compatibility hampers the development of computer applications. One solution lies in the standardization of languages, terms, peripherals, operating systems, and computer characteristics. (AB)

  13. C++ software integration for a high-throughput phase imaging platform

    NASA Astrophysics Data System (ADS)

    Kandel, Mikhail E.; Luo, Zelun; Han, Kevin; Popescu, Gabriel

    2015-03-01

    The multi-shot approach in SLIM requires reliable, synchronous, and parallel operation of three independent hardware devices - not meeting these challenges results in degraded phase measurements and slow acquisition speeds, narrowing applications to holistic statements about complex phenomena. The relative youth of quantitative imaging and the lack of ready-made commercial hardware and tools further compound the problem, as higher-level programming languages result in inflexible, experiment-specific instruments limited by ill-fitting computational modules, leaving a palpable chasm between promised and realized hardware performance. Furthermore, general unfamiliarity with intricacies such as background calibration, objective lens attenuation, and spatial light modulator alignment makes successful measurements difficult for the inattentive or uninitiated. This poses an immediate challenge for moving our techniques beyond the lab to biologically oriented collaborators and clinical practitioners. To meet these challenges, we present our new Quantitative Phase Imaging pipeline, with improved instrument performance, a friendly user interface and robust data processing features, enabling us to acquire and catalog clinical datasets hundreds of gigapixels in size.

  14. Should software hold data hostage?

    SciTech Connect

    Wiley, H S.; Michaels, George S.

    2004-08-01

    Software tools have become an indispensable part of modern biology, but issues surrounding proprietary file formats and closed software architectures threaten to stunt the growth of this rapidly expanding area of research. In an effort to ensure continuous software upgrades and a continuous income stream, some software companies have resorted to holding the user's data hostage by locking them into proprietary file and data formats. Although this might make sense from a business perspective, it violates fundamental principles of data ownership and control. Such tactics should not be tolerated by the scientific community. The future of data-intensive biology depends on ensuring open data standards and freely exchangeable file formats. Compared to the engineering and chemistry fields, computers are a relatively recent addition to the arsenal of biological tools. Thus the pool of potential users of biology-oriented software is comparatively small. Biology itself is a broad field with many sub-disciplines, such as neurobiology, biochemistry, genomics and cell biology. This creates the need for task-oriented software tools that necessarily have a small user base. Simultaneously, the task of developing software has become more complex with the need for multi-platform software and increasing user expectations of sophisticated interfaces and a high degree of usability. Writing successful software in such an environment is very challenging, but progress in biology will increasingly depend on the success of companies and individuals in creating powerful new software tools. The trend to open source software could have an enormous impact on biology by providing the large number of specialized analysis tools that are required. Indeed, in the field of bioinformatics, open source software has become pervasive, largely because of the high degree of computer skill necessary for workers in this field. For these tools to be usable by non-specialists, however, requires the

  15. Image-guided Tumor Ablation: Standardization of Terminology and Reporting Criteria—A 10-Year Update

    PubMed Central

    Solbiati, Luigi; Brace, Christopher L.; Breen, David J.; Callstrom, Matthew R.; Charboneau, J. William; Chen, Min-Hua; Choi, Byung Ihn; de Baère, Thierry; Dodd, Gerald D.; Dupuy, Damian E.; Gervais, Debra A.; Gianfelice, David; Gillams, Alice R.; Lee, Fred T.; Leen, Edward; Lencioni, Riccardo; Littrup, Peter J.; Livraghi, Tito; Lu, David S.; McGahan, John P.; Meloni, Maria Franca; Nikolic, Boris; Pereira, Philippe L.; Liang, Ping; Rhim, Hyunchul; Rose, Steven C.; Salem, Riad; Sofocleous, Constantinos T.; Solomon, Stephen B.; Soulen, Michael C.; Tanaka, Masatoshi; Vogl, Thomas J.; Wood, Bradford J.; Goldberg, S. Nahum

    2014-01-01

    Image-guided tumor ablation has become a well-established hallmark of local cancer therapy. The breadth of options available in this growing field increases the need for standardization of terminology and reporting criteria to facilitate effective communication of ideas and appropriate comparison among treatments that use different technologies, such as chemical (eg, ethanol or acetic acid) ablation, thermal therapies (eg, radiofrequency, laser, microwave, focused ultrasound, and cryoablation) and newer ablative modalities such as irreversible electroporation. This updated consensus document provides a framework that will facilitate the clearest communication among investigators regarding ablative technologies. An appropriate vehicle is proposed for reporting the various aspects of image-guided ablation therapy including classification of therapies, procedure terms, descriptors of imaging guidance, and terminology for imaging and pathologic findings. Methods are addressed for standardizing reporting of technique, follow-up, complications, and clinical results. As noted in the original document from 2003, adherence to the recommendations will improve the precision of communications in this field, leading to more accurate comparison of technologies and results, and ultimately to improved patient outcomes. © RSNA, 2014. Online supplemental material is available for this article. PMID:24927329

  16. Image-guided tumor ablation: standardization of terminology and reporting criteria--a 10-year update.

    PubMed

    Ahmed, Muneeb; Solbiati, Luigi; Brace, Christopher L; Breen, David J; Callstrom, Matthew R; Charboneau, J William; Chen, Min-Hua; Choi, Byung Ihn; de Baère, Thierry; Dodd, Gerald D; Dupuy, Damian E; Gervais, Debra A; Gianfelice, David; Gillams, Alice R; Lee, Fred T; Leen, Edward; Lencioni, Riccardo; Littrup, Peter J; Livraghi, Tito; Lu, David S; McGahan, John P; Meloni, Maria Franca; Nikolic, Boris; Pereira, Philippe L; Liang, Ping; Rhim, Hyunchul; Rose, Steven C; Salem, Riad; Sofocleous, Constantinos T; Solomon, Stephen B; Soulen, Michael C; Tanaka, Masatoshi; Vogl, Thomas J; Wood, Bradford J; Goldberg, S Nahum

    2014-10-01

    Image-guided tumor ablation has become a well-established hallmark of local cancer therapy. The breadth of options available in this growing field increases the need for standardization of terminology and reporting criteria to facilitate effective communication of ideas and appropriate comparison among treatments that use different technologies, such as chemical (eg, ethanol or acetic acid) ablation, thermal therapies (eg, radiofrequency, laser, microwave, focused ultrasound, and cryoablation) and newer ablative modalities such as irreversible electroporation. This updated consensus document provides a framework that will facilitate the clearest communication among investigators regarding ablative technologies. An appropriate vehicle is proposed for reporting the various aspects of image-guided ablation therapy including classification of therapies, procedure terms, descriptors of imaging guidance, and terminology for imaging and pathologic findings. Methods are addressed for standardizing reporting of technique, follow-up, complications, and clinical results. As noted in the original document from 2003, adherence to the recommendations will improve the precision of communications in this field, leading to more accurate comparison of technologies and results, and ultimately to improved patient outcomes. Online supplemental material is available for this article. PMID:24927329

  17. Image-guided tumor ablation: standardization of terminology and reporting criteria--a 10-year update.

    PubMed

    Ahmed, Muneeb; Solbiati, Luigi; Brace, Christopher L; Breen, David J; Callstrom, Matthew R; Charboneau, J William; Chen, Min-Hua; Choi, Byung Ihn; de Baère, Thierry; Dodd, Gerald D; Dupuy, Damian E; Gervais, Debra A; Gianfelice, David; Gillams, Alice R; Lee, Fred T; Leen, Edward; Lencioni, Riccardo; Littrup, Peter J; Livraghi, Tito; Lu, David S; McGahan, John P; Meloni, Maria Franca; Nikolic, Boris; Pereira, Philippe L; Liang, Ping; Rhim, Hyunchul; Rose, Steven C; Salem, Riad; Sofocleous, Constantinos T; Solomon, Stephen B; Soulen, Michael C; Tanaka, Masatoshi; Vogl, Thomas J; Wood, Bradford J; Goldberg, S Nahum

    2014-11-01

    Image-guided tumor ablation has become a well-established hallmark of local cancer therapy. The breadth of options available in this growing field increases the need for standardization of terminology and reporting criteria to facilitate effective communication of ideas and appropriate comparison among treatments that use different technologies, such as chemical (eg, ethanol or acetic acid) ablation, thermal therapies (eg, radiofrequency, laser, microwave, focused ultrasound, and cryoablation) and newer ablative modalities such as irreversible electroporation. This updated consensus document provides a framework that will facilitate the clearest communication among investigators regarding ablative technologies. An appropriate vehicle is proposed for reporting the various aspects of image-guided ablation therapy including classification of therapies, procedure terms, descriptors of imaging guidance, and terminology for imaging and pathologic findings. Methods are addressed for standardizing reporting of technique, follow-up, complications, and clinical results. As noted in the original document from 2003, adherence to the recommendations will improve the precision of communications in this field, leading to more accurate comparison of technologies and results, and ultimately to improved patient outcomes. PMID:25442132

  18. FIRE: an open-software suite for real-time 2D/3D image registration for image guided radiotherapy research

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.

    2016-03-01

    Radiotherapy treatments have changed at a tremendously rapid pace. Dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of tumor motion during dose delivery has become greater due to very steep dose gradients. Intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion, such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image-intensity-based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom-designed software platforms which take too much time and effort to develop. To address this challenge we have developed an open software platform especially focused on tumor motion management. FLIRT is a freely available open-source software platform. The core method for tumor tracking is purely intensity-based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance-critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90 ms (11 Hz). This is suitable to track tumors moving due to respiration (~0.3 Hz) or heartbeat (~1 Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases include tracking feasibility studies, patient positioning and method validation. Such a framework has the potential of enabling the research community to rapidly perform patient studies or try new methods.
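
    The platform itself is a C++/CUDA implementation; purely as a conceptual illustration of intensity-based registration, the toy sketch below searches integer translations that maximize normalized cross-correlation between a fixed and a moving image. It omits the 2D/3D geometry, GPU acceleration, and speed of the actual system.

      # Toy intensity-based registration: exhaustive search over integer shifts
      # that maximize normalized cross-correlation between a fixed and a moving
      # image (no 2D/3D geometry, no GPU acceleration).
      import numpy as np

      def ncc(a, b):
          a = (a - a.mean()) / (a.std() + 1e-12)
          b = (b - b.mean()) / (b.std() + 1e-12)
          return float(np.mean(a * b))

      def register_translation(fixed, moving, max_shift=5):
          best = (0, 0, -np.inf)
          for dy in range(-max_shift, max_shift + 1):
              for dx in range(-max_shift, max_shift + 1):
                  shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
                  score = ncc(fixed, shifted)
                  if score > best[2]:
                      best = (dy, dx, score)
          return best   # (dy, dx, similarity)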

  19. The analysis and rationale behind the upgrading of existing standard definition thermal imagers to high definition

    NASA Astrophysics Data System (ADS)

    Goss, Tristan M.

    2016-05-01

    With 640x512 pixel format IR detector arrays having been on the market for the past decade, Standard Definition (SD) thermal imaging sensors have been developed and deployed across the world. Now, with 1280x1024 pixel format IR detector arrays becoming readily available, designers of thermal imager systems face new challenges as pixel sizes shrink and the demand and applications for High Definition (HD) thermal imaging sensors increase. In many instances, upgrading an existing under-sampled SD thermal imaging sensor into a more optimally sampled or oversampled HD thermal imaging sensor provides a more cost-effective and faster-to-market option than designing and developing a completely new sensor. This paper presents the analysis and rationale behind the selection of the best-suited HD pixel format MWIR detector for the upgrade of an existing SD thermal imaging sensor to a higher-performing HD thermal imaging sensor. Several commercially available and "soon to be" commercially available HD small-pixel IR detector options are included in the analysis and considered for this upgrade. The impact the proposed detectors have on the sensor's overall sensitivity, noise and resolution is analyzed, and the improved range performance is predicted. Furthermore, with reduced dark currents due to the smaller pixel sizes, the candidate HD MWIR detectors can be operated at higher temperatures than their SD predecessors. Therefore, as an additional constraint and design goal, the feasibility of achieving upgraded performance without any increase in the size, weight and power consumption of the thermal imager is discussed herein.

  20. Leap Motion Gesture Control With Carestream Software in the Operating Room to Control Imaging: Installation Guide and Discussion.

    PubMed

    Pauchot, Julien; Di Tommaso, Laetitia; Lounis, Ahmed; Benassarou, Mourad; Mathieu, Pierre; Bernot, Dominique; Aubry, Sébastien

    2015-12-01

    Nowadays, routine viewing of cross-sectional imaging during a surgical procedure requires physical contact with an interface (mouse or touch-sensitive screen). Such contact risks compromising aseptic conditions and causes loss of time. Devices such as the recently introduced Leap Motion (Leap Motion Society, San Francisco, CA), which enables interaction with the computer without any physical contact, are of wide interest in the field of surgery, but configuration and ergonomics are key challenges for the practitioner, imaging software, and surgical environment. This article suggests an easy configuration of Leap Motion on a PC for optimized use with Carestream Vue PACS v11.3.4 (Carestream Health, Inc, Rochester, NY) using a plug-in (to download at https://drive.google.com/open?id=0B_F4eBeBQc3yNENvTXlnY09qS00&authuser=0) and a video tutorial (https://www.youtube.com/watch?v=yVPTgxg-SIk). Videos of the surgical procedure and a discussion of innovative gesture control technology and its various configurations are provided in this article. PMID:26002115

  1. Software reengineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III

    1991-01-01

    Programs in use today generally have all of the function and information processing capabilities required to do their specified job. However, older programs usually use obsolete technology, are not integrated properly with other programs, and are difficult to maintain. Reengineering is becoming a prominent discipline as organizations try to move their systems to more modern and maintainable technologies. The Johnson Space Center (JSC) Software Technology Branch (STB) is researching and developing a system to support reengineering older FORTRAN programs into more maintainable forms that can also be more readily translated to modern languages such as FORTRAN 8x, Ada, or C. This activity has led to the development of maintenance strategies for design recovery and reengineering. These strategies include a set of standards, methodologies, and the concepts for a software environment to support design recovery and reengineering. A brief description of the problem being addressed and the approach that is being taken by the STB toward providing an economic solution to the problem is provided. A statement of the maintenance problems, the benefits and drawbacks of three alternative solutions, and a brief history of the STB experience in software reengineering are followed by the STB's new FORTRAN standards, methodology, and the concepts for a software environment.

  2. Using Image Pro Plus Software to Develop Particle Mapping on Genesis Solar Wind Collector Surfaces

    NASA Technical Reports Server (NTRS)

    Rodriquez, Melissa C.; Allton, J. H.; Burkett, P. J.

    2012-01-01

    The continued success of the Genesis mission science team in analyzing solar wind collector array samples is partially based on close collaboration of the JSC curation team with science team members who develop cleaning techniques and those who assess elemental cleanliness at the levels of detection. The goal of this collaboration is to develop a reservoir of solar wind collectors of known cleanliness to be available to investigators. The heart and driving force behind this effort is Genesis mission PI Don Burnett. While JSC contributes characterization, safe clean storage, and benign collector cleaning with ultrapure water (UPW) and UV ozone, Burnett has coordinated more exotic and rigorous cleaning, which is contributed by science team members. He also coordinates cleanliness assessment requiring expertise and instruments not available in curation, such as XPS, TRXRF [1,2] and synchrotron TRXRF. JSC participates by optically documenting the particle distributions as cleaning steps progress. Thus, optical documentation supplements SEM imaging and analysis, and elemental assessment by TRXRF.

  3. Cytopathology whole slide images and virtual microscopy adaptive tutorials: A software pilot

    PubMed Central

    Van Es, Simone L.; Pryor, Wendy M.; Belinson, Zack; Salisbury, Elizabeth L.; Velan, Gary M.

    2015-01-01

    Background: The constant growth in the body of knowledge in medicine requires pathologists and pathology trainees to engage in continuing education. Providing them with equitable access to efficient and effective forms of education in pathology (especially in remote and rural settings) is important, but challenging. Methods: We developed three pilot cytopathology virtual microscopy adaptive tutorials (VMATs) to explore a novel adaptive E-learning platform (AeLP) which can incorporate whole slide images for pathology education. We collected user feedback to further develop this educational material and to subsequently deploy randomized trials in both pathology specialist trainee and medical student cohorts. Cytopathology whole slide images were first acquired, then novel VMATs teaching cytopathology were created using the AeLP, an intelligent tutoring system developed by Smart Sparrow. The pilot was run for Australian pathologists and trainees through the education section of the Royal College of Pathologists of Australasia website over a period of 9 months. Feedback on the usability, impact on learning and any technical issues was obtained using 5-point Likert scale items and open-ended feedback in online questionnaires. Results: A total of 181 pathologists and pathology trainees anonymously attempted the three adaptive tutorials, a smaller proportion of whom went on to provide feedback at the end of each tutorial. VMATs were perceived as effective and efficient E-learning tools for pathology education. User feedback was positive. There were no significant technical issues. Conclusion: During this pilot, the user feedback on the educational content and interface and the lack of technical issues were helpful. Large-scale trials of similar online cytopathology adaptive tutorials were planned for the future. PMID:26605119

  4. An Upgrade of the Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) Software

    NASA Technical Reports Server (NTRS)

    Mason, Michelle L.; Rufer, Shann J.

    2015-01-01

    The Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) code is used at NASA Langley Research Center to analyze global aeroheating data on wind tunnel models tested in the Langley Aerothermodynamics Laboratory. One-dimensional, semi-infinite heating data derived from IHEAT are used to design thermal protection systems to mitigate the risks due to the aeroheating loads on hypersonic vehicles, such as re-entry vehicles during descent and landing procedures. This code was originally written in the PV-WAVE programming language to analyze phosphor thermography data from the two-color, relative-intensity system developed at Langley. To increase the efficiency, functionality, and reliability of IHEAT, the code was migrated to MATLAB syntax and compiled as a stand-alone executable file labeled version 4.0. New features of IHEAT 4.0 include the options to batch process all of the data from a wind tunnel run, to map the two-dimensional heating distribution to a three-dimensional computer-aided design model of the vehicle to be viewed in Tecplot, and to extract data from a segmented line that follows an interesting feature in the data. Results from IHEAT 4.0 were compared on a pixel level to the output images from the legacy code to validate the program. The differences between the two codes were on the order of 10^-5 to 10^-7. IHEAT 4.0 replaces the PV-WAVE version as the production code for aeroheating experiments conducted in the hypersonic facilities at NASA Langley.
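
    As a hedged sketch of the kind of pixel-level comparison described for the validation, the code below reports difference statistics between two versions of a processed image; the arrays are synthetic placeholders, not IHEAT output.

      # Sketch of a pixel-level comparison between a legacy and an updated code
      # version; the arrays below are synthetic placeholders, not IHEAT output.
      import numpy as np

      def compare_images(legacy, new):
          diff = np.abs(np.asarray(new, dtype=float) - np.asarray(legacy, dtype=float))
          return {"max_abs_diff": float(diff.max()),
                  "mean_abs_diff": float(diff.mean()),
                  "pixels_above_1e-5": int(np.count_nonzero(diff > 1e-5))}

      legacy_map = np.random.default_rng(0).random((480, 640))
      new_map = legacy_map + 1e-6 * np.random.default_rng(1).standard_normal((480, 640))
      print(compare_images(legacy_map, new_map))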

  5. Software engineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III; Hiott, Jim; Golej, Jim; Plumb, Allan

    1993-01-01

    Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. The Johnson Space Center (JSC) created a significant set of tools to develop and maintain FORTRAN and C code during development of the space shuttle. This tool set forms the basis for an integrated environment to reengineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools are passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. The latest release of the environment was in Feb. 1992.

  6. Software reengineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III

    1991-01-01

    Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. JSC created a significant set of tools to develop and maintain FORTRAN and C code during development of the Space Shuttle. This tool set forms the basis for an integrated environment to re-engineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools are passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. A beta version of the environment was released in Mar. 1991. The commercial potential for such re-engineering tools is very great. CASE TRENDS magazine reported it to be the primary concern of over four hundred of the top MIS executives.

  7. Towards a repository for standardized medical image and signal case data annotated with ground truth.

    PubMed

    Deserno, Thomas M; Welter, Petra; Horsch, Alexander

    2012-04-01

    Validation of medical signal and image processing systems requires quality-assured, representative and generally acknowledged databases accompanied by appropriate reference (ground truth) and clinical metadata, which are composed laboriously for each project and are not shared with the scientific community. In our vision, such data will be stored centrally in an open repository. We propose an architecture for a standardized case data and ground truth information repository supporting the evaluation and analysis of computer-aided diagnosis based on (a) the Reference Model for an Open Archival Information System (OAIS) provided by the NASA Consultative Committee for Space Data Systems (ISO 14721:2003), (b) the Dublin Core Metadata Initiative (DCMI) Element Set (ISO 15836:2009), (c) the Open Archive Initiative (OAI) Protocol for Metadata Harvesting, and (d) the Image Retrieval in Medical Applications (IRMA) framework. In our implementation, a portal bunches all of the functionalities that are needed for data submission and retrieval. The complete life cycle of the data (define, create, store, sustain, share, use, and improve) is managed. Sophisticated search tools make it easier to use the datasets, which may be merged from different providers. An integrated history record guarantees reproducibility. A standardized creation report is generated with a permanent digital object identifier. This creation report must be referenced by all of the data users. Peer-reviewed e-publishing of these reports will create a reputation for the data contributors and will form de-facto standards regarding image and signal datasets. Good practice guidelines for validation methodology complement the concept of the case repository. This procedure will increase the comparability of evaluation studies for medical signal and image processing methods and applications. PMID:22075810
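
    As a hedged illustration of the harvesting protocol the proposed architecture builds on, the sketch below issues an OAI-PMH ListRecords request for Dublin Core metadata and prints record titles; the endpoint URL is a placeholder, not the proposed repository.

      # Minimal sketch of an OAI-PMH harvest of Dublin Core metadata, the exchange
      # protocol the proposed repository architecture builds on. The endpoint URL
      # is a placeholder, not an existing service.
      import urllib.parse
      import urllib.request
      import xml.etree.ElementTree as ET

      BASE_URL = "https://example.org/oai"   # hypothetical repository endpoint
      query = urllib.parse.urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})

      with urllib.request.urlopen(BASE_URL + "?" + query) as response:
          tree = ET.parse(response)

      # Print the Dublin Core title of each harvested record.
      for title in tree.iter("{http://purl.org/dc/elements/1.1/}title"):
          print(title.text)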

  8. Stereo Imaging Velocimetry Technique Using Standard Off-the-Shelf CCD Cameras

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2004-01-01

    Stereo imaging velocimetry is a fluid physics technique for measuring three-dimensional (3D) velocities at a plurality of points. This technique provides full-field 3D analysis of any optically clear fluid or gas experiment seeded with tracer particles. Unlike current 3D particle imaging velocimetry systems that rely primarily on lasers, stereo imaging velocimetry uses standard off-the-shelf charge-coupled device (CCD) cameras to provide accurate and reproducible 3D velocity profiles for experiments that require 3D analysis. Using two cameras aligned orthogonally, we present a closed mathematical solution resulting in an accurate 3D approximation of the observation volume. The stereo imaging velocimetry technique is divided into four phases: 3D camera calibration, particle overlap decomposition, particle tracking, and stereo matching. Each phase is explained in detail. In addition to being utilized for space shuttle experiments, stereo imaging velocimetry has been applied to the fields of fluid physics, bioscience, and colloidal microscopy.
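
    The geometric idea behind the orthogonal camera pair can be sketched as follows: one calibrated view resolves the x-z coordinates of a particle and the other the y-z coordinates, so a matched particle yields a 3D position directly. This simplified sketch ignores the calibration, overlap decomposition, tracking, and stereo matching phases described above.

      # Simplified geometry of two orthogonal views: camera A resolves (x, z) and
      # camera B resolves (y, z) in calibrated world units, so a matched particle
      # gives a 3D position directly (calibration, overlap decomposition, tracking
      # and stereo matching are omitted).
      def reconstruct_3d(cam_a_xz, cam_b_yz):
          x, z_a = cam_a_xz
          y, z_b = cam_b_yz
          z = 0.5 * (z_a + z_b)      # both views see z; average small disagreements
          return (x, y, z)

      print(reconstruct_3d((1.20, 4.98), (3.40, 5.02)))   # -> (1.2, 3.4, 5.0)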

  9. The Decoding Toolbox (TDT): a versatile software package for multivariate analyses of functional imaging data

    PubMed Central

    Hebart, Martin N.; Görgen, Kai; Haynes, John-Dylan

    2015-01-01

    The multivariate analysis of brain signals has recently sparked a great amount of interest, yet accessible and versatile tools to carry out decoding analyses are scarce. Here we introduce The Decoding Toolbox (TDT) which represents a user-friendly, powerful and flexible package for multivariate analysis of functional brain imaging data. TDT is written in Matlab and equipped with an interface to the widely used brain data analysis package SPM. The toolbox allows running fast whole-brain analyses, region-of-interest analyses and searchlight analyses, using machine learning classifiers, pattern correlation analysis, or representational similarity analysis. It offers automatic creation and visualization of diverse cross-validation schemes, feature scaling, nested parameter selection, a variety of feature selection methods, multiclass capabilities, and pattern reconstruction from classifier weights. While basic users can implement a generic analysis in one line of code, advanced users can extend the toolbox to their needs or exploit the structure to combine it with external high-performance classification toolboxes. The toolbox comes with an example data set which can be used to try out the various analysis methods. Taken together, TDT offers a promising option for researchers who want to employ multivariate analyses of brain activity patterns. PMID:25610393

  10. The Decoding Toolbox (TDT): a versatile software package for multivariate analyses of functional imaging data.

    PubMed

    Hebart, Martin N; Görgen, Kai; Haynes, John-Dylan

    2014-01-01

    The multivariate analysis of brain signals has recently sparked a great amount of interest, yet accessible and versatile tools to carry out decoding analyses are scarce. Here we introduce The Decoding Toolbox (TDT) which represents a user-friendly, powerful and flexible package for multivariate analysis of functional brain imaging data. TDT is written in Matlab and equipped with an interface to the widely used brain data analysis package SPM. The toolbox allows running fast whole-brain analyses, region-of-interest analyses and searchlight analyses, using machine learning classifiers, pattern correlation analysis, or representational similarity analysis. It offers automatic creation and visualization of diverse cross-validation schemes, feature scaling, nested parameter selection, a variety of feature selection methods, multiclass capabilities, and pattern reconstruction from classifier weights. While basic users can implement a generic analysis in one line of code, advanced users can extend the toolbox to their needs or exploit the structure to combine it with external high-performance classification toolboxes. The toolbox comes with an example data set which can be used to try out the various analysis methods. Taken together, TDT offers a promising option for researchers who want to employ multivariate analyses of brain activity patterns. PMID:25610393
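
    TDT itself is a Matlab toolbox built on SPM; purely as an analogous illustration in another language, the sketch below runs a cross-validated decoding analysis (a linear classifier on multivariate patterns) on synthetic data with scikit-learn. It is not TDT's API.

      # Analogous illustration only (not TDT): cross-validated decoding of two
      # conditions from synthetic multivariate "voxel" patterns with scikit-learn.
      import numpy as np
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(42)
      X = rng.standard_normal((60, 500))       # 60 trials x 500 voxels (synthetic)
      y = np.repeat([0, 1], 30)                # two experimental conditions
      X[y == 1, :20] += 0.5                    # inject a weak multivariate signal

      clf = make_pipeline(StandardScaler(), LinearSVC())
      scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5))
      print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))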

  11. User interface software development for the WIYN One Degree Imager (ODI)

    NASA Astrophysics Data System (ADS)

    Ivens, John; Yeatts, Andrey; Harbeck, Daniel; Martin, Pierre

    2010-07-01

    User interfaces (UIs) are a necessity for almost any data acquisition system. The development team for the WIYN One Degree Imager (ODI) chose to develop a user interface that allows access to most of the instrument control for both scientists and engineers through the World Wide Web, because of the web's ease of use and accessibility around the world. Having a web-based UI allows ODI to grow from a visitor-mode instrument to a queue-managed instrument and also facilitates remote servicing and troubleshooting. The challenges of developing such a system involve the difficulties of browser inter-operability, speed, presentation, and the choices involved with integrating browser and server technologies. To this end, the team has chosen a combination of Java, JBOSS, AJAX technologies, XML data descriptions, Oracle XML databases, and an emerging technology called the Google Web Toolkit (GWT) that compiles Java into Javascript for presentation in a browser. Advantages of using GWT include developing the front-end browser code in Java, GWT's native support for AJAX, the use of XML to describe the user interface, the ability to profile code speed and discover bottlenecks, the ability to efficiently communicate with application servers such as JBOSS, and the ability to optimize and test code for multiple browsers. We discuss the inter-operation of all of these technologies to create fast, flexible, and robust user interfaces that are scalable, manageable, separable, and, as much as possible, allow maintenance of all code in Java.

  12. A format standard for efficient interchange of high-contrast direct imaging science products

    NASA Astrophysics Data System (ADS)

    Choquet, Élodie; Vigan, Arthur; Soummer, Rémi; Chauvin, Gaël.; Pueyo, Laurent; Perrin, Marshall D.; Hines, Dean C.

    2014-07-01

    The present and next few years will see the arrival of several new coronagraphic instruments dedicated to the detection and characterization of planetary systems. These ground- and space-based instruments (Gemini/GPI, VLT/SPHERE, Subaru/CHARIS, JWST NIRCam and MIRI coronagraphs among others) will provide a large number of new candidates through multiple nearby-star surveys and will complete and extend those acquired with current-generation instruments (Palomar P1640, VLT/NACO, Keck, HST). To optimize the use of this wealth of data, including non-detection results, the science products of these instruments will need to be shared among the community. In the long term such data exchange will significantly ease companion confirmations, planet characterization via different types of instruments (integral field spectrographs, polarimetric imagers, etc.), and Monte-Carlo population studies from detection and non-detection results. In this context, we initiated a collaborative effort between the teams developing the data reduction pipelines for SPHERE, GPI, and the JWST coronagraphs, and the ALICE (Archival Legacy Investigations of Circumstellar Environment) collaboration, which is currently reprocessing all the HST/NICMOS coronagraphic surveys. We are developing a standard format for the science products generated by high-contrast direct imaging instruments (reduced image, sensitivity limits, noise image, candidate list, etc.), that is directly usable for astrophysical investigations. In this paper, we present first results of this work and propose a preliminary format adopted for the science product. We call for discussions in the high-contrast direct imaging community to develop this effort, reach a consensus and finalize this standard. This action will be critical to enable data interchange and combination in a consistent way between several instruments and to strengthen scientific production in the community.

  13. Analysis of Endoscopic Electronic Image of Intramucosal Gastric Carcinoma Using a Software Program for Calculating Hemoglobin Index

    PubMed Central

    Kim, Gwang Ha; Kim, Kwang Baek; Lim, Eun Kyung; Choi, Seong Ho; Kim, Tae Oh; Heo, Jeong; Kang, Dae Hwan; Cho, Mong; Park, Do Youn

    2006-01-01

    Hemoglobin is the predominant pigment in the gastrointestinal mucosa, and the development of electronic endoscopy has made it possible to quantitatively measure the mucosal hemoglobin volume, by using a hemoglobin index (IHb). The aims of this study were to make a software program to calculate the IHb and then to investigate whether the mucosal IHb determined from the electronic endoscopic data is a useful marker for evaluating the color of intramucosal gastric carcinoma with regard to its value for discriminating between the histologic types. We made a software program for calculating the IHb in endoscopic images. By using this program, the mean values of the IHb for the carcinoma (IHb-C) and those of the IHb for the surrounding non-cancerous mucosa (IHb-N) were calculated in 75 intestinal-type and 34 diffuse-type intramucosal gastric carcinomas. We then analyzed the ratio of the IHb-C to the IHb-N (C/N ratio). The C/N ratio in the intestinal-type carcinoma group was higher than that in the diffuse-type carcinoma group (p<0.001). In the diffuse-type carcinoma group, the C/N ratio in the body was lower than that in the antrum (p=0.022). The accuracy rate, sensitivity, specificity, and the positive and negative predictive values for the differential diagnosis of the diffuse-type carcinoma from the intestinal-type carcinoma were 94.5%, 94.1%, 94.7%, 88.9% and 97.3%, respectively. IHb is useful for making quantitative measurement of the endoscopic color in the intramucosal gastric carcinoma, and the C/N ratio by using the IHb would be helpful for distinguishing the diffuse-type carcinoma from the intestinal-type carcinoma. PMID:17179684
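
    The abstract does not state the IHb formula; the sketch below uses the definition commonly cited for electronic endoscopy, IHb = 32 * log2(R/G) per pixel, which should be treated as an assumption here, as should the region masks used for the C/N ratio.

      # IHb-style computation per pixel; the formula IHb = 32 * log2(R / G) is the
      # one commonly cited for electronic endoscopy and is an assumption here, as
      # are the carcinoma/normal region masks.
      import numpy as np

      def ihb_map(rgb):
          r = rgb[..., 0].astype(float) + 1e-6
          g = rgb[..., 1].astype(float) + 1e-6
          return 32.0 * np.log2(r / g)

      def cn_ratio(rgb, carcinoma_mask, normal_mask):
          ihb = ihb_map(rgb)
          return float(ihb[carcinoma_mask].mean() / ihb[normal_mask].mean())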

  14. In vivo validation of cardiac output assessment in non-standard 3D echocardiographic images

    NASA Astrophysics Data System (ADS)

    Nillesen, M. M.; Lopata, R. G. P.; de Boode, W. P.; Gerrits, I. H.; Huisman, H. J.; Thijssen, J. M.; Kapusta, L.; de Korte, C. L.

    2009-04-01

    Automatic segmentation of the endocardial surface in three-dimensional (3D) echocardiographic images is an important tool to assess left ventricular (LV) geometry and cardiac output (CO). The presence of speckle noise as well as the nonisotropic characteristics of the myocardium impose strong demands on the segmentation algorithm. In the analysis of normal heart geometries of standardized (apical) views, it is advantageous to incorporate a priori knowledge about the shape and appearance of the heart. In contrast, when analyzing abnormal heart geometries, for example in children with congenital malformations, this a priori knowledge about the shape and anatomy of the LV might induce erroneous segmentation results. This study describes a fully automated segmentation method for the analysis of non-standard echocardiographic images, without making strong assumptions on the shape and appearance of the heart. The method was validated in vivo in a piglet model. Real-time 3D echocardiographic image sequences of five piglets were acquired in radiofrequency (rf) format. These ECG-gated full volume images were acquired intra-operatively in a non-standard view. Cardiac blood flow was measured simultaneously by an ultrasound transit time flow probe positioned around the common pulmonary artery. Three-dimensional adaptive filtering using the characteristics of speckle was performed on the demodulated rf data to reduce the influence of speckle noise and to optimize the distinction between blood and myocardium. A gradient-based 3D deformable simplex mesh was then used to segment the endocardial surface. A gradient and a speed force were included as external forces of the model. To balance data fitting and mesh regularity, one fixed set of weighting parameters of internal, gradient and speed forces was used for all data sets. End-diastolic and end-systolic volumes were computed from the segmented endocardial surface. The cardiac output derived from this automatic segmentation was
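
    Once end-diastolic and end-systolic volumes are available from the segmented surface, stroke volume and cardiac output follow from their standard definitions; the sketch below uses illustrative numbers, not the study's measurements.

      # Stroke volume and cardiac output from segmented end-diastolic and
      # end-systolic volumes (standard definitions; illustrative numbers only).
      def cardiac_output(edv_ml, esv_ml, heart_rate_bpm):
          stroke_volume_ml = edv_ml - esv_ml                      # mL per beat
          co_l_per_min = stroke_volume_ml * heart_rate_bpm / 1000.0
          return stroke_volume_ml, co_l_per_min

      sv, co = cardiac_output(edv_ml=28.0, esv_ml=16.0, heart_rate_bpm=120)
      print("SV = %.1f mL, CO = %.2f L/min" % (sv, co))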

  15. Groupwise consistent image registration: a crucial step for the construction of a standardized near infrared hyper-spectral teeth database

    NASA Astrophysics Data System (ADS)

    Špiclin, Žiga; Usenik, Peter; Bürmen, Miran; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2011-03-01

    Construction of a standardized near infrared (NIR) hyper-spectral teeth database is a first step in the development of a reliable diagnostic tool for quantification and early detection of dental diseases. The standardized diffuse reflectance hyper-spectral database was constructed by imaging 12 extracted human teeth with natural lesions of various degrees in the spectral range from 900 to 1700 nm with spectral resolution of 10 nm. Additionally, all the teeth were imaged by X-ray and digital color camera. The color and X-ray teeth images were presented to the expert for localization and classification of the dental diseases, thereby obtaining a dental disease gold standard. Accurate transfer of the dental disease gold standard to the NIR images was achieved by image registration in a groupwise manner, taking advantage of the multichannel image information and promoting image edges as the features for the improvement of spatial correspondence detection. By the presented fully automatic multi-modal groupwise registration method, images of new teeth samples can be accurately and reliably registered and then added to the standardized NIR hyper-spectral teeth database. Adding more samples increases the biological and patho-physiological variability of the NIR hyper-spectral teeth database and can importantly contribute to the objective assessment of the sensitivity and specificity of multivariate image analysis techniques used for the detection of dental diseases. Such assessment is essential for the development and validation of reliable qualitative and especially quantitative diagnostic tools based on NIR spectroscopy.

  16. Validation of the International Labour Office Digitized Standard Images for Recognition and Classification of Radiographs of Pneumoconiosis

    PubMed Central

    Halldin, Cara N.; Petsonk, Edward L.; Laney, A. Scott

    2015-01-01

    Rationale and Objectives Chest radiographs are recommended for prevention and detection of pneumoconiosis. In 2011, the International Labour Office (ILO) released a revision of the International Classification of Radiographs of Pneumoconioses that included a digitized standard images set. The present study compared results of classifications of digital chest images performed using the new ILO 2011 digitized standard images to classification approaches used in the past. Materials and Methods Underground coal miners (N = 172) were examined using both digital and film-screen radiography (FSR) on the same day. Seven National Institute for Occupational Safety and Health-certified B Readers independently classified all 172 digital radiographs, once using the ILO 2011 digitized standard images (DRILO2011-D) and once using digitized standard images used in the previous research (DRRES). The same seven B Readers classified all the miners’ chest films using the ILO film-based standards. Results Agreement between classifications of FSR and digital radiography was identical, using a standard image set (either DRILO2011-D or DRRES). The overall weighted κ value was 0.58. Some specific differences in the results were seen and noted. However, intrareader variability in this study was similar to the published values and did not appear to be affected by the use of the new ILO 2011 digitized standard images. Conclusions These findings validate the use of the ILO digitized standard images for classification of small pneumoconiotic opacities. When digital chest radiographs are obtained and displayed appropriately, results of pneumoconiosis classifications using the 2011 ILO digitized standards are comparable to film-based ILO classifications and to classifications using earlier research standards. PMID:24507420
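
    As a hedged illustration of the agreement statistic reported above, the sketch below computes a weighted Cohen's kappa between two sets of readings with scikit-learn; the category labels are synthetic, not the study's classifications.

      # Weighted Cohen's kappa between two hypothetical sets of readings
      # (synthetic ordinal categories), using scikit-learn.
      from sklearn.metrics import cohen_kappa_score

      reading_film = [0, 1, 1, 2, 3, 0, 2, 1, 0, 3]
      reading_digital = [0, 1, 2, 2, 3, 0, 1, 1, 0, 2]
      kappa = cohen_kappa_score(reading_film, reading_digital, weights="linear")
      print("weighted kappa = %.2f" % kappa)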

  17. Performance impact of parameter tuning on the CCSDS-123 lossless multi- and hyperspectral image compression standard

    NASA Astrophysics Data System (ADS)

    Augé, Estanislau; Sánchez, Jose Enrique; Kiely, Aaron; Blanes, Ian; Serra-Sagristà, Joan

    2013-01-01

    Multi-spectral and hyperspectral image data payloads are large and may be challenging to download from remote sensors. To alleviate this problem, such images can be effectively compressed using specially designed algorithms. The new CCSDS-123 standard has been developed to address onboard lossless coding of multi-spectral and hyperspectral images. The standard is based on the fast lossless algorithm, which is composed of a causal context-based prediction stage and an entropy-coding stage that utilizes Golomb power-of-two codes. Several parts of each of these two stages have adjustable parameters. CCSDS-123 provides satisfactory performance for a wide set of imagery acquired by various sensors, but end-users of a CCSDS-123 implementation may require assistance to select a suitable combination of parameters for a specific application scenario. To assist end-users, this paper investigates the performance of CCSDS-123 under different parameter combinations and addresses the selection of an adequate combination given a specific sensor. Experimental results suggest that prediction parameters have a greater impact on the compression performance than entropy-coding parameters.
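
    As a hedged illustration of the entropy-coding stage named above, the sketch below forms a Golomb power-of-two (Rice) codeword for a nonnegative mapped residual; it shows only the basic code family and how the parameter k trades unary length against remainder bits, not the standard's adaptive parameter selection or block structure.

      # Golomb power-of-two (Rice) codeword for a nonnegative mapped residual:
      # unary-coded quotient, terminating 0, then k remainder bits. The standard's
      # adaptive selection of k and its block structure are not reproduced here.
      def rice_encode(value, k):
          quotient = value >> k
          remainder = value & ((1 << k) - 1)
          remainder_bits = format(remainder, "0{}b".format(k)) if k > 0 else ""
          return "1" * quotient + "0" + remainder_bits

      for k in (0, 1, 2, 3):
          print(k, rice_encode(9, k))   # larger k shortens the unary part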

  18. Latest developments in the iLids performance standard: from multiple standard camera views to new imaging modalities

    NASA Astrophysics Data System (ADS)

    Sage, K. H.; Nilski, A. J.; Sillett, I. M.

    2009-09-01

    The Imagery Library for Intelligent Detection Systems (iLids) is the UK Government's standard for Video Based Detection Systems (VBDS). The first four iLids scenarios were released in November 2006 and annual evaluations for these four scenarios began in 2007. The Home Office Scientific Development Branch (HOSDB), in partnership with the Centre for the Protection of National Infrastructure (CPNI), has also developed a fifth iLids scenario: Multiple Camera Tracking (MCT). The fifth scenario data sets were made available in November 2008 to industry, academic and commercial research organizations. The imagery contains various staged events of people walking through the camera views. Multiple Camera Tracking Systems (MCTS) are expected to initialise on a specific target and be able to track the target over some or all of the camera views. HOSDB and CPNI are now working on a sixth iLids dataset series. These datasets will cover several technology areas: thermal imaging systems, and systems that rely on active IR illumination. The aim is to develop libraries that promote the development of systems that are able to demonstrate effective performance in the key application area of people and vehicular detection at a distance. This paper will describe the evaluation process, infrastructure and tools that HOSDB will use to evaluate MCT systems; building on the success of our previous automated tools for evaluation, HOSDB has developed CLAYMORE, a tool for the real-time evaluation of MCT systems. It will also provide an overview of the new sixth scenario aims and objectives, library specifications and timescales for release.

  19. Comparison of retinal thickness by Fourier-domain optical coherence tomography and OCT retinal image analysis software segmentation analysis derived from Stratus optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Tátrai, Erika; Ranganathan, Sudarshan; Ferencz, Mária; Debuc, Delia Cabrera; Somfai, Gábor Márk

    2011-05-01

    Purpose: To compare thickness measurements between Fourier-domain optical coherence tomography (FD-OCT) and time-domain OCT images analyzed with a custom-built OCT retinal image analysis software (OCTRIMA). Methods: Macular mapping (MM) by StratusOCT and the MM5 and MM6 scanning protocols of an RTVue-100 FD-OCT device are performed on 11 subjects with no retinal pathology. Retinal thickness (RT) and the thickness of the ganglion cell complex (GCC) obtained with the MM6 protocol are compared for each early treatment diabetic retinopathy study (ETDRS)-like region with corresponding results obtained with OCTRIMA. RT results are compared by analysis of variance with Dunnett post hoc test, while GCC results are compared by paired t-test. Results: A high correlation is obtained for RT between OCTRIMA and the MM5 and MM6 protocols. In all regions, StratusOCT provides the lowest RT values (mean difference 43 +/- 8 μm compared to OCTRIMA, and 42 +/- 14 μm compared to RTVue MM6). All RTVue GCC measurements were significantly thicker (mean difference between 6 and 12 μm) than the GCC measurements of OCTRIMA. Conclusion: High correspondence between FD-OCT and StratusOCT-derived OCTRIMA analysis is obtained not only for RT but also for the segmentation of intraretinal layers. However, a correction factor is required to compensate for OCT-specific differences to make measurements more comparable across available OCT devices.

  20. Radiotherapy of high-grade gliomas: current standards and new concepts, innovations in imaging and radiotherapy, and new therapeutic approaches.

    PubMed

    Dhermain, Frederic

    2014-01-01

    The current standards in radiotherapy of high-grade gliomas (HGG) are based on anatomic imaging techniques, usually computed tomography (CT) scanning and magnetic resonance imaging (MRI). The guidelines vary depending on whether the HGG is a histological grade 3 anaplastic glioma (AG) or a grade 4 glioblastoma multiforme (GBM). For AG, T2-weighted MRI sequences plus the region of contrast enhancement in T1 are considered for the delineation of the gross tumor volume (GTV), and an isotropic expansion of 15 to 20 mm is recommended for the clinical target volume (CTV). For GBM, the Radiation Therapy Oncology Group favors a two-step technique, with an initial phase (CTV1) including any T2 hyperintensity area (edema) plus a 20 mm margin treated with up to 46 Gy in 23 fractions, followed by a reduction in CTV2 to the contrast enhancement region in T1 with an additional 25 mm margin. The European Organisation for Research and Treatment of Cancer recommends a single-phase technique with a unique GTV, which comprises the T1 contrast enhancement region plus a margin of 20 to 30 mm. A total dose of 60 Gy in 30 fractions is usually delivered for GBM, and a dose of 59.4 Gy in 33 fractions is typically given for AG. As more than 85% of HGGs recur in field, dose-escalation studies have shown that 70 to 75 Gy can be delivered in 6 weeks with relevant toxicities developing in <10% of the patients. However, the only randomized dose-escalation trial, in which the boost dose was guided by conventional MRI, did not show any survival advantage of this treatment over the reference arm. HGGs are amongst the most infiltrative and heterogeneous tumors, and it was hypothesized that the most highly aggressive areas were missed; thus, better visualization of these high-risk regions for radiation boost could decrease the recurrence rate. Innovations in imaging and linear accelerators (LINAC) could help deliver the right doses of radiation to the right subvolumes according to the dose

  1. Estimation of Beef Marbling Standard Number Based on Dynamic Ultrasound Image

    NASA Astrophysics Data System (ADS)

    Fukuda, Osamu; Nabeoka, Natsuko; Miyajima, Tsuneharu; Hashimoto, Daisuke; Okushi, Masaaki

    Up to the present time, estimation of the Beef Marbling Standard (BMS) number based on ultrasound echo imaging of live beef cattle has been studied. However, previous attempts to establish an objective and highly accurate estimation method have not been satisfactory. Our previous work showed that estimation of the BMS number could be achieved by neural network modeling with non-linear mapping ability. This paper reports a significant improvement of the estimation method based on dynamic ultrasound images. The proposed method consists of four processes: the extraction of dynamic and static texture features, frequency analysis, principal component analysis, and the estimation of the BMS number by a neural network. In order to evaluate the effectiveness of the proposed method, experiments were conducted with and without dynamic image information. The number of target regions was set to 1 or 2, and two groups of samples, Case 1 and Case 2, were used for the experiments. Case 1 and Case 2 included 18 and 27 samples, which were measured at the Saga Livestock Experiment Station and the Nagasaki Agricultural and Forestry Technical Development Center, respectively. The image analysis was performed using only Case 1 or using the mixed group of Cases 1 and 2. The experimental results with Case 1 showed that the correlation coefficient between the estimated and the actual BMS number improved from r=0.55 to r=0.79 when dynamic image information was added. Moreover, the correlation coefficient was further raised to r=0.84 when the number of target regions was increased from 1 to 2. Similarly, for the mixed group of Cases 1 and 2, the correlation coefficients were r=0.77, r=0.76, and r=0.88, respectively. These results suggest that a high estimation accuracy was achieved by adding dynamic image information and increasing the number of target regions.
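
    Purely as an analogous illustration of the feature-reduction-plus-network idea, the sketch below applies PCA to texture-like features and regresses a marbling score with a small neural network on synthetic data; the paper's actual dynamic/static texture features and network architecture are not reproduced.

      # Analogous sketch only: PCA feature reduction followed by a small neural
      # network regressor on synthetic texture-like features and marbling scores.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.standard_normal((45, 64))        # 45 samples x 64 texture features
      bms = 3 + 2 * X[:, 0] - X[:, 1] + 0.3 * rng.standard_normal(45)  # synthetic score

      model = make_pipeline(StandardScaler(), PCA(n_components=5),
                            MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                         random_state=0))
      model.fit(X, bms)
      print("r =", round(float(np.corrcoef(bms, model.predict(X))[0, 1]), 2))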

  2. Microbleed Detection Using Automated Segmentation (MIDAS): A New Method Applicable to Standard Clinical MR Images

    PubMed Central

    Seghier, Mohamed L.; Kolanko, Magdalena A.; Leff, Alexander P.; Jäger, Hans R.; Gregoire, Simone M.; Werring, David J.

    2011-01-01

    Background Cerebral microbleeds, visible on gradient-recalled echo (GRE) T2* MRI, have generated increasing interest as an imaging marker of small vessel diseases, with relevance for intracerebral bleeding risk or brain dysfunction. Methodology/Principal Findings Manual rating methods have limited reliability and are time-consuming. We developed a new method for microbleed detection using automated segmentation (MIDAS) and compared it with a validated visual rating system. In thirty consecutive stroke service patients, standard GRE T2* images were acquired and manually rated for microbleeds by a trained observer. After spatially normalizing each patient's GRE T2* images into a standard stereotaxic space, the automated microbleed detection algorithm (MIDAS) identified cerebral microbleeds by explicitly incorporating an “extra” tissue class for abnormal voxels within a unified segmentation-normalization model. The agreement between manual and automated methods was assessed using the intraclass correlation coefficient (ICC) and Kappa statistic. We found that MIDAS had generally moderate to good agreement with the manual reference method for the presence of lobar microbleeds (Kappa = 0.43, improved to 0.65 after manual exclusion of obvious artefacts). Agreement for the number of microbleeds was very good for lobar regions: (ICC = 0.71, improved to ICC = 0.87). MIDAS successfully detected all patients with multiple (≥2) lobar microbleeds. Conclusions/Significance MIDAS can identify microbleeds on standard MR datasets, and with an additional rapid editing step shows good agreement with a validated visual rating system. MIDAS may be useful in screening for multiple lobar microbleeds. PMID:21448456

  3. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

    Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification of a Landsat scene of Tucson, Arizona, than maximum-likelihood classification. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
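
    The 3x3-window idea can be sketched as follows: each pixel's feature vector is its flattened 3x3 neighborhood, so local texture enters the classifier implicitly; the image here is synthetic, whereas the paper used Landsat bands and a neural network.

      # Each pixel's feature vector is its flattened 3x3 neighborhood, so local
      # texture enters the classifier without an explicit texture measure. The
      # image is synthetic; a real experiment would stack multispectral bands.
      import numpy as np

      def window_features(image, size=3):
          pad = size // 2
          padded = np.pad(image, pad, mode="reflect")
          h, w = image.shape
          feats = np.empty((h * w, size * size))
          idx = 0
          for i in range(h):
              for j in range(w):
                  feats[idx] = padded[i:i + size, j:j + size].ravel()
                  idx += 1
          return feats

      img = np.random.default_rng(0).random((16, 16))
      print(window_features(img).shape)   # (256, 9): one 9-element vector per pixel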

  4. A standardized infrared imaging technique that specifically detects UCP1-mediated thermogenesis in vivo

    PubMed Central

    Crane, Justin D.; Mottillo, Emilio P.; Farncombe, Troy H.; Morrison, Katherine M.; Steinberg, Gregory R.

    2014-01-01

    The activation and expansion of brown adipose tissue (BAT) has emerged as a promising strategy to counter obesity and the metabolic syndrome by increasing energy expenditure. The subsequent testing and validation of novel agents that augment BAT necessitates accurate pre-clinical measurements in rodents regarding the capacity for BAT-derived thermogenesis. We present a novel method to measure BAT thermogenesis using infrared imaging following β3-adrenoreceptor stimulation in mice. We show that the increased body surface temperature observed using this method is due solely to uncoupling protein-1 (UCP1)-mediated thermogenesis and that this technique is able to discern differences in BAT activity in mice acclimated to 23 °C or thermoneutrality (30 °C). These findings represent the first standardized method utilizing infrared imaging to specifically detect UCP1 activity in vivo. PMID:24944909

  5. Vehicle occupancy detection camera position optimization using design of experiments and standard image references

    NASA Astrophysics Data System (ADS)

    Paul, Peter; Hoover, Martin; Rabbani, Mojgan

    2013-03-01

    Camera positioning and orientation is important to applications in domains such as transportation since the objects to be imaged vary greatly in shape and size. In a typical transportation application that requires capturing still images, inductive loops buried in the ground or laser trigger sensors are used when a vehicle reaches the image capture zone to trigger the image capture system. The camera in such a system is in a fixed position pointed at the roadway and at a fixed orientation. Thus the problem is to determine the optimal location and orientation of the camera when capturing images from a wide variety of vehicles. Methods from Design for Six Sigma, including identifying important parameters and noise sources and performing systematically designed experiments (DOE) can be used to determine an effective set of parameter settings for the camera position and orientation under these conditions. In the transportation application of high occupancy vehicle lane enforcement, the number of passengers in the vehicle is to be counted. Past work has described front seat vehicle occupant counting using a camera mounted on an overhead gantry looking through the front windshield in order to capture images of vehicle occupants. However, viewing rear seat passengers is more problematic due to obstructions including the vehicle body frame structures and seats. One approach is to view the rear seats through the side window. In this situation the problem of optimally positioning and orienting the camera to adequately capture the rear seats through the side window can be addressed through a designed experiment. In any automated traffic enforcement system it is necessary for humans to be able to review any automatically captured digital imagery in order to verify detected infractions. Thus for defining an output to be optimized for the designed experiment, a human defined standard image reference (SIR) was used to quantify the quality of the line-of-sight to the rear seats of
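
    A full-factorial designed experiment over candidate camera placements, scored against the SIR, might be sketched as follows. The parameter levels and the sir_quality_score() helper are hypothetical placeholders; in practice the score would come from human grading of captured imagery against the SIR.

```python
from itertools import product

heights_m = [4.0, 4.5, 5.0]   # gantry mounting height
offsets_m = [1.0, 2.0]        # lateral offset from the lane centre
pan_deg   = [20, 30, 40]      # rotation toward the side window
tilt_deg  = [10, 20]          # downward tilt

def sir_quality_score(height, offset, pan, tilt):
    """Stand-in for the real evaluation: an image captured at this configuration
    would be graded against the standard image reference by human reviewers."""
    # Purely illustrative surrogate so the sketch runs end to end.
    return -((height - 4.5) ** 2 + (offset - 1.5) ** 2
             + ((pan - 30) / 10) ** 2 + ((tilt - 15) / 10) ** 2)

results = []
for h, o, p, t in product(heights_m, offsets_m, pan_deg, tilt_deg):
    results.append(((h, o, p, t), sir_quality_score(h, o, p, t)))

best_config, best_score = max(results, key=lambda r: r[1])
print("Best (height, offset, pan, tilt):", best_config, "score:", best_score)
```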

  6. ASTER VNIR 15 years growth to the standard imaging radiometer in remote sensing

    NASA Astrophysics Data System (ADS)

    Hiramatsu, Masaru; Inada, Hitomi; Kikuchi, Masakuni; Sakuma, Fumihiro

    2015-10-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Visible and Near Infrared Radiometer (VNIR) is a remote sensing instrument with three spectral bands and one along-track stereoscopic band. ASTER VNIR has exceeded its planned design life of more than 5 years. It has produced worldwide multiband images of the Earth's surface and contributed to the Global Digital Elevation Model (GDEM). VNIR data are used to create detailed worldwide maps and to detect changes of the Earth's surface, such as land-use transitions and topographical changes. With a geometric resolution of 15 meters, ASTER VNIR is the highest spatial resolution instrument on NASA's Terra spacecraft and was therefore intended as the geometric basis for map making among the Terra instruments. Over 15 years, VNIR has grown into a standard map-making instrument for space remote sensing. This paper presents VNIR's main results from 15 years of operation, including change-detection images, DEMs, and calibration results. VNIR has observed worldwide Earth images for biological, climatological, geological, and hydrological studies, and this successful record points the way for future space remote sensing instruments. In addition, 15 years of VNIR observation data trends and onboard calibration trends provide guidance and support for follow-on instruments.

  7. Technology 84: software

    SciTech Connect

    Wallich, P.

    1984-01-01

    Progress is reported with regard to knowledge systems (artificial intelligence software capable of giving expert advice or analyzing complex information) and their major tasks and applications. A standard military language, Ada, is also discussed, along with efforts to standardize software environments.

  8. MpUL-multi: Software for Calculation of Amyloid Fibril Mass per Unit Length from TB-TEM Images

    PubMed Central

    Iadanza, Matthew G.; Jackson, Matthew P.; Radford, Sheena E.; Ranson, Neil A.

    2016-01-01

    Structure determination for amyloid fibrils presents many challenges due to the high variability exhibited by fibrils and heterogeneous morphologies present, even in single samples. Mass per unit length (MPL) estimates can be used to differentiate amyloid fibril morphologies and provide orthogonal evidence for helical symmetry parameters determined by other methods. In addition, MPL data can provide insight on the arrangement of subunits in a fibril, especially for more complex fibrils assembled with multiple parallel copies of the asymmetric unit or multiple twisted protofilaments. By detecting only scattered electrons, which serve as a relative measure of total scattering, and therefore protein mass, dark field imaging gives an approximation of the total mass of protein present in any given length of fibril. When compared with a standard of known MPL, such as Tobacco Mosaic Virus (TMV), MPL of the fibrils in question can be determined. The program suite MpUL-multi was written for rapid semi-automated processing of TB-TEM dark field data acquired using this method. A graphical user interface allows for simple designation of fibrils and standards. A second program averages intensities from multiple TMV molecules for accurate standard determination, makes multiple measurements along a given fibril, and calculates the MPL. PMID:26867957
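
    The calibration step described above (integrated intensity per unit length of fibril relative to a TMV standard) might be sketched as follows. This is not the MpUL-multi code; the measurement boxes, background level, and box length are hypothetical, and the TMV reference MPL is supplied as an input parameter.

```python
import numpy as np

TMV_MPL_KDA_PER_NM = 131.0   # commonly cited reference value; treat as an input

def intensity_per_nm(box_intensities, box_length_nm, background):
    """Background-subtracted integrated dark-field intensity per nm for one box."""
    return (np.sum(box_intensities) - background * box_intensities.size) / box_length_nm

def fibril_mpl(fibril_boxes, tmv_boxes, box_length_nm, background):
    """MPL of a fibril from several measurement boxes along it, calibrated
    against boxes measured on TMV particles in the same image."""
    fib = np.mean([intensity_per_nm(b, box_length_nm, background) for b in fibril_boxes])
    tmv = np.mean([intensity_per_nm(b, box_length_nm, background) for b in tmv_boxes])
    return TMV_MPL_KDA_PER_NM * fib / tmv

# Hypothetical pixel boxes extracted along a fibril and along TMV standards
rng = np.random.default_rng(1)
fibril_boxes = [rng.normal(120, 5, size=(20, 60)) for _ in range(5)]
tmv_boxes    = [rng.normal(150, 5, size=(20, 60)) for _ in range(5)]
print("Estimated MPL (kDa/nm):",
      fibril_mpl(fibril_boxes, tmv_boxes, box_length_nm=50.0, background=100.0))
```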

  9. A passive autofocus system by using standard deviation of the image on a liquid lens

    NASA Astrophysics Data System (ADS)

    Rasti, Pejman; Kesküla, Arko; Haus, Henry; Schlaak, Helmut F.; Anbarjafari, Gholamreza; Aabloo, Alvo; Kiefer, Rudolf

    2015-04-01

    Today, many devices such as cell phones, tablets, and medical instruments contain a small camera, and a micro lens is required to reduce the size of these devices. In this paper, an autofocus system is used to find the best position of a liquid lens without any active components such as ultrasonic or infrared sensors. Specifically, a passive autofocus system is proposed that uses the standard deviation of images captured through a liquid lens consisting of a Dielectric Elastomer Actuator (DEA) membrane between oil and water.
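
    The focus measure itself reduces to a simple sweep-and-maximize loop, sketched below. The set_lens() and capture_frame() helpers are hypothetical stand-ins for the DEA driver and camera interface; a surrogate image generator keeps the sketch runnable.

```python
import numpy as np

def set_lens(position):
    """Hypothetical: apply the DEA actuation setting for this lens position."""

def capture_frame(position):
    """Hypothetical: grab a grayscale frame at the current lens position."""
    rng = np.random.default_rng(position)   # surrogate image for the sketch
    contrast = 10 + 30 * np.exp(-((position - 12) / 4.0) ** 2)
    return rng.normal(128, contrast, size=(480, 640))

def autofocus(positions):
    best_pos, best_sharpness = None, -np.inf
    for p in positions:
        set_lens(p)
        frame = capture_frame(p)
        sharpness = np.std(frame)            # contrast-based focus measure
        if sharpness > best_sharpness:
            best_pos, best_sharpness = p, sharpness
    return best_pos

print("Best focus position:", autofocus(range(0, 25)))
```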

  10. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods.

    PubMed

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-04-01

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest
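
    Once the NGS estimation step has produced a slope and noise standard deviation for each method under the assumed linear model (measured = slope * true + intercept + noise), the NSR ranking is straightforward, as sketched below with hypothetical parameter values that are not results from the paper.

```python
# Hypothetical NGS-estimated parameters for three quantitative imaging methods
ngs_estimates = {
    "method_A": {"slope": 0.92, "noise_sd": 0.10},
    "method_B": {"slope": 1.05, "noise_sd": 0.22},
    "method_C": {"slope": 0.80, "noise_sd": 0.12},
}

def noise_to_slope_ratio(est):
    """NSR figure of merit: smaller means better precision."""
    return est["noise_sd"] / est["slope"]

# Rank methods by ascending NSR (most precise first)
ranking = sorted(ngs_estimates, key=lambda m: noise_to_slope_ratio(ngs_estimates[m]))
for m in ranking:
    print(m, round(noise_to_slope_ratio(ngs_estimates[m]), 3))
```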

  11. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Caffo, Brian; Frey, Eric C.

    2016-04-01

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest

  12. Validation of spectral radiance assignments to integrating sphere radiance standards for the Advanced Baseline Imager

    NASA Astrophysics Data System (ADS)

    Johnson, B. C.; Maxwell, Stephen; Shirley, Eric; Slack, Kim; Graham, Gary D.

    2014-09-01

    The Advanced Baseline Imager (ABI) is the next-generation imaging sensor for the National Oceanic and Atmospheric Administration's (NOAA's) operational me