Science.gov

Sample records for standard imaging software

  1. Software Formal Inspections Standard

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This Software Formal Inspections Standard (hereinafter referred to as Standard) is applicable to NASA software. This Standard defines the requirements that shall be fulfilled by the software formal inspections process whenever this process is specified for NASA software. The objective of this Standard is to define the requirements for a process that inspects software products to detect and eliminate defects as early as possible in the software life cycle. The process also provides for the collection and analysis of inspection data to improve the inspection process as well as the quality of the software.

  2. Software assurance standard

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This standard specifies the software assurance program for the provider of software. It also delineates the assurance activities for the provider and the assurance data that are to be furnished by the provider to the acquirer. In any software development effort, the provider is the entity or individual that actually designs, develops, and implements the software product, while the acquirer is the entity or individual who specifies the requirements and accepts the resulting products. This standard specifies at a high level an overall software assurance program for software developed for and by NASA. Assurance includes the disciplines of quality assurance, quality engineering, verification and validation, nonconformance reporting and corrective action, safety assurance, and security assurance. The application of these disciplines during a software development life cycle is called software assurance. Subsequent lower-level standards will specify the specific processes within these disciplines.

  3. NASA Software Documentation Standard

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The NASA Software Documentation Standard (hereinafter referred to as "Standard") is designed to support the documentation of all software developed for NASA; its goal is to provide a framework and model for recording the essential information needed throughout the development life cycle and maintenance of a software system. The NASA Software Documentation Standard can be applied to the documentation of all NASA software. The Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. The basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.

  4. NASA Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Rosenberg, Linda

    1997-01-01

    If software is a critical element in a safety critical system, it is imperative to implement a systematic approach to software safety as an integral part of the overall system safety program. NASA-STD-8719.13A, "NASA Software Safety Standard", describes the activities necessary to ensure that safety is designed into software that is acquired or developed by NASA, and that safety is maintained throughout the software life cycle. A PDF version is available on the WWW from Lewis. A Guidebook that will assist in the implementation of the requirements in the Safety Standard is under development at the Lewis Research Center (LeRC). After completion, it will also be available on the WWW from Lewis.

  5. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2007-01-01

    NASA relies more and more on software to control, monitor, and verify its safety critical systems, facilities and operations. Since the 1960s there has hardly been a spacecraft launched that did not have a computer on board providing command and control services. There have been recent incidents where software has played a role in high-profile mission failures and hazardous incidents. For example, the Mars Orbiter, Mars Polar Lander, the DART (Demonstration of Autonomous Rendezvous Technology), and MER (Mars Exploration Rover) Spirit anomalies were all caused or contributed to by software. The Mission Control Centers for the Shuttle, ISS, and unmanned programs are highly dependent on software for data displays, analysis, and mission planning. Despite this growing dependence on software control and monitoring, there has been little to no consistent application of software safety practices and methodology to NASA's projects with safety critical software. Meanwhile, academia and private industry have been stepping forward with procedures and standards for safety critical systems and software, for example Dr. Nancy Leveson's book Safeware: System Safety and Computers. The NASA Software Safety Standard, originally published in 1997, was widely ignored due to its complexity and poor organization. It also focused on concepts rather than definite procedural requirements organized around a software project lifecycle. Led by NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard has recently undergone a significant update. This new standard provides the procedures and guidelines for evaluating a project for safety criticality and then lays out the minimum project lifecycle requirements to assure the software is created, operated, and maintained in the safest possible manner. This update of the standard clearly delineates the minimum set of software safety requirements for a project without detailing the implementation for those

  6. NASA's Software Safety Standard

    NASA Technical Reports Server (NTRS)

    Ramsay, Christopher M.

    2005-01-01

    NASA (National Aeronautics and Space Administration) relies more and more on software to control, monitor, and verify its safety critical systems, facilities and operations. Since the 1960s there has hardly been a spacecraft (manned or unmanned) launched that did not have a computer on board that provided vital command and control services. Despite this growing dependence on software control and monitoring, there has been no consistent application of software safety practices and methodology to NASA's projects with safety critical software. Led by the NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard (NASA-STD-8719.13B) has recently undergone a significant update in an attempt to provide that consistency. This paper will discuss the key features of the new NASA Software Safety Standard. It will start with a brief history of the use and development of software in safety critical applications at NASA. It will then give a brief overview of the NASA Software Working Group and the approach it took to revise the software engineering process across the Agency.

  7. Development of a viability standard curve for microencapsulated probiotic bacteria using confocal microscopy and image analysis software.

    PubMed

    Moore, Sarah; Kailasapathy, Kasipathy; Phillips, Michael; Jones, Mark R

    2015-07-01

    Microencapsulation is proposed to protect probiotic strains from food processing procedures and to maintain probiotic viability. Little research has described the in situ viability of microencapsulated probiotics. This study successfully developed a real-time viability standard curve for microencapsulated bacteria using confocal microscopy, fluorescent dyes and image analysis software.
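
    The abstract does not spell out the curve-fitting step; a standard curve of this kind is typically a least-squares line relating an image-derived quantity (for example, the fraction of "live"-stained pixels reported by the image analysis software) to a known viability. The Python sketch below only illustrates that idea; the metric and the data values are hypothetical, not taken from the study.

      import numpy as np

      # Hypothetical calibration data: fraction of "live" (green) pixels reported by the
      # image analysis software for capsules prepared at known viabilities (plate counts).
      known_viability_pct = np.array([10, 25, 50, 75, 90, 100], dtype=float)
      live_pixel_fraction = np.array([0.12, 0.27, 0.48, 0.71, 0.88, 0.97])

      # Least-squares line: viability ~ slope * live_fraction + intercept
      slope, intercept = np.polyfit(live_pixel_fraction, known_viability_pct, deg=1)

      def predict_viability(live_fraction: float) -> float:
          """Read an unknown sample's viability off the standard curve."""
          return slope * live_fraction + intercept

      print(f"standard curve: viability% = {slope:.1f} * live_fraction + {intercept:.1f}")
      print(f"sample with 0.60 live-pixel fraction -> {predict_viability(0.60):.0f}% viable")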

  8. Design and evaluation of a THz time domain imaging system using standard optical design software.

    PubMed

    Brückner, Claudia; Pradarutti, Boris; Müller, Ralf; Riehemann, Stefan; Notni, Gunther; Tünnermann, Andreas

    2008-09-20

    A terahertz (THz) time domain imaging system is analyzed and optimized with standard optical design software (ZEMAX). Special requirements for the illumination optics and imaging optics are presented. In the optimized system, off-axis parabolic mirrors and lenses are combined. The system has a numerical aperture of 0.4 and is diffraction limited for field points up to 4 mm and wavelengths down to 750 µm. ZEONEX is used as the lens material. Higher aspherical coefficients are used for correction of spherical aberration and reduction of lens thickness. The lenses were manufactured by ultraprecision machining. For optimization of the system, ray tracing and wave-optical methods were combined. We show how the ZEMAX Gaussian beam analysis tool can be used to evaluate illumination optics. The resolution of the THz system was tested with a wire and a slit target, line gratings of different period, and a Siemens star. The behavior of the temporal line spread function can be modeled with the polychromatic coherent line spread function feature in ZEMAX. The spectral and temporal resolutions of the line gratings are compared with the respective modulation transfer function of ZEMAX. For maximum resolution, the system has to be diffraction limited down to the smallest wavelength of the spectrum of the THz pulse. Then, the resolution on time domain analysis of the pulse maximum can be estimated with the spectral resolution of the center of gravity wavelength. The system resolution near the optical axis on time domain analysis of the pulse maximum is 1 line pair/mm with an intensity contrast of 0.22. The Siemens star is used for estimation of the resolution of the whole system. An eight channel electro-optic sampling system was used for detection. The resolution on time domain analysis of the pulse maximum of all eight channels could be determined with the Siemens star to be 0.7 line pairs/mm. PMID:18806862
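
    As a rough consistency check (not part of the paper itself), the quoted numerical aperture and shortest design wavelength imply an incoherent diffraction-limited cutoff close to the reported on-axis resolution. A short Python calculation, assuming the standard incoherent-imaging cutoff formula:

      # Back-of-the-envelope check of the quoted figures (not from the paper itself):
      # for an incoherent, diffraction-limited system the cutoff spatial frequency is
      # f_c = 2 * NA / wavelength.
      NA = 0.4                 # numerical aperture quoted in the abstract
      wavelength_mm = 0.75     # 750 um, the shortest design wavelength

      cutoff_lp_per_mm = 2 * NA / wavelength_mm
      print(f"incoherent cutoff: {cutoff_lp_per_mm:.2f} lp/mm")   # ~1.07 lp/mm
      # consistent with the reported on-axis resolution of about 1 lp/mm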

  9. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos run during robberies.

  10. NASA software documentation standard software engineering program

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The NASA Software Documentation Standard (hereinafter referred to as Standard) can be applied to the documentation of all NASA software. This Standard is limited to documentation format and content requirements. It does not mandate specific management, engineering, or assurance standards or techniques. This Standard defines the format and content of documentation for software acquisition, development, and sustaining engineering. Format requirements address where information shall be recorded and content requirements address what information shall be recorded. This Standard provides a framework to allow consistency of documentation across NASA and visibility into the completeness of project documentation. This basic framework consists of four major sections (or volumes). The Management Plan contains all planning and business aspects of a software project, including engineering and assurance planning. The Product Specification contains all technical engineering information, including software requirements and design. The Assurance and Test Procedures contains all technical assurance information, including Test, Quality Assurance (QA), and Verification and Validation (V&V). The Management, Engineering, and Assurance Reports is the library and/or listing of all project reports.

  11. Cathodoluminescence Spectrum Imaging Software

    2011-04-07

    The software developed for spectrum imaging is applied to the analysis of the spectrum series generated by our cathodoluminescence instrumentation. This software provides advanced processing capabilities such as: reconstruction of photon intensity (resolved in energy) and photon energy maps, extraction of the spectrum from selected areas, quantitative imaging mode, pixel-to-pixel correlation spectrum line scans, ASCII output, filling routines, drift correction, etc.
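
    A cathodoluminescence spectrum image is essentially a data cube with two spatial axes and one photon-energy axis, so operations such as energy-resolved intensity maps and region-of-interest spectrum extraction reduce to simple array reductions. The Python/NumPy sketch below illustrates those two basic operations; the array sizes, energy range, and ROI are assumed for illustration and are not taken from the record.

      import numpy as np

      # A spectrum image treated as a 3-D cube: (y, x, energy) counts.
      cube = np.random.rand(128, 128, 512)
      energy_eV = np.linspace(1.5, 3.5, 512)        # assumed spectrometer calibration

      # Photon-intensity map resolved in energy: integrate counts over a chosen band.
      band = (energy_eV > 2.2) & (energy_eV < 2.4)
      intensity_map = cube[:, :, band].sum(axis=2)  # one image per energy band

      # Spectrum extracted from a selected rectangular area (ROI).
      roi_spectrum = cube[40:60, 70:90, :].mean(axis=(0, 1))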

  12. Biological Imaging Software Tools

    PubMed Central

    Eliceiri, Kevin W.; Berthold, Michael R.; Goldberg, Ilya G.; Ibáñez, Luis; Manjunath, B.S.; Martone, Maryann E.; Murphy, Robert F.; Peng, Hanchuan; Plant, Anne L.; Roysam, Badrinath; Stuurman, Nico; Swedlow, Jason R.; Tomancak, Pavel; Carpenter, Anne E.

    2013-01-01

    Few technologies are more widespread in modern biological laboratories than imaging. Recent advances in optical technologies and instrumentation are providing hitherto unimagined capabilities. Almost all these advances have required the development of software to enable the acquisition, management, analysis, and visualization of the imaging data. We review each computational step that biologists encounter when dealing with digital images, the challenges in that domain, and the overall status of available software for bioimage informatics, focusing on open source options. PMID:22743775

  13. Software engineering standards and practices

    NASA Technical Reports Server (NTRS)

    Durachka, R. W.

    1981-01-01

    Guidelines are presented for the preparation of a software development plan. The various phases of a software development project are discussed throughout its life cycle including a general description of the software engineering standards and practices to be followed during each phase.

  14. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  15. Future of Software Engineering Standards

    NASA Technical Reports Server (NTRS)

    Poon, Peter T.

    1997-01-01

    In the new millennium, software engineering standards are expected to continue to influence the process of producing software-intensive systems which are cost-effective and of high quality. These systems may range from ground and flight systems used for planetary exploration to educational support systems used in schools, as well as consumer-oriented systems.

  16. Software Development Standard Processes (SDSP)

    NASA Technical Reports Server (NTRS)

    Lavin, Milton L.; Wang, James J.; Morillo, Ronald; Mayer, John T.; Jamshidian, Barzia; Shimizu, Kenneth J.; Wilkinson, Belinda M.; Hihn, Jairus M.; Borgen, Rosana B.; Meyer, Kenneth N.; Crean, Kathleen A.; Rinker, George C.; Smith, Thomas P.; Lum, Karen T.; Hanna, Robert A.; Erickson, Daniel E.; Gamble, Edward B., Jr.; Morgan, Scott C.; Kelsay, Michael G.; Newport, Brian J.; Lewicki, Scott A.; Stipanuk, Jeane G.; Cooper, Tonja M.; Meshkat, Leila

    2011-01-01

    A JPL-created set of standard processes is to be used throughout the lifecycle of software development. These SDSPs cover a range of activities, from management and engineering activities to assurance and support activities. These processes must be applied to software tasks per a prescribed set of procedures. JPL's Software Quality Improvement Project is currently working at the behest of the JPL Software Process Owner to ensure that all applicable software tasks follow these procedures. The SDSPs are captured as a set of 22 standards in JPL's software process domain. They were developed in-house at JPL by a number of Subject Matter Experts (SMEs) residing primarily within the Engineering and Science Directorate, but also from the Business Operations Directorate and Safety and Mission Success Directorate. These practices include not only currently performed best practices, but also JPL-desired future practices in key thrust areas like software architecting and software reuse analysis. Additionally, these SDSPs conform to many standards and requirements to which JPL projects are beholden.

  17. Confined Space Imager (CSI) Software

    SciTech Connect

    Karelilz, David

    2013-07-03

    The software provides real-time image capture, enhancement, and display, and sensor control for the Confined Space Imager (CSI) sensor system. The software captures images over a Cameralink connection and provides the following image enhancements: camera pixel to pixel non-uniformity correction, optical distortion correction, image registration and averaging, and illumination non-uniformity correction. The software communicates with the custom CSI hardware over USB to control sensor parameters and is capable of saving enhanced sensor images to an external USB drive. The software provides sensor control, image capture, enhancement, and display for the CSI sensor system. It is designed to work with the custom hardware.
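
    The enhancement chain listed above follows the usual flat-field pattern: subtract a dark frame, divide by a normalized gain (flat) map, then average registered frames. The Python/NumPy sketch below is a minimal illustration of that pattern, not the actual CSI code; the calibration frames and image sizes are placeholders.

      import numpy as np

      def enhance(frames, dark, flat):
          """Toy enhancement chain in the spirit of the CSI description:
          pixel non-uniformity (dark/flat) correction followed by frame averaging.
          frames: stack of raw images; dark, flat: calibration frames."""
          gain = flat - dark
          gain = gain / gain.mean()                       # unit-mean gain map
          corrected = (frames - dark) / np.clip(gain, 1e-6, None)
          return corrected.mean(axis=0)                   # average the registered frames

      # Hypothetical data standing in for Cameralink captures.
      raw = np.random.rand(16, 480, 640)
      dark = np.zeros((480, 640))
      flat = np.ones((480, 640))
      result = enhance(raw, dark, flat)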

  18. An overview of software safety standards

    SciTech Connect

    Lawrence, J.D.

    1995-10-01

    The writing of standards for software safety is an increasingly important activity. This essay briefly describes the two primary standards-writing organizations, IEEE and IEC, and provides a discussion of some of the more interesting software safety standards.

  19. Software thermal imager simulator

    NASA Astrophysics Data System (ADS)

    Le Noc, Loic; Pancrati, Ovidiu; Doucet, Michel; Dufour, Denis; Debaque, Benoit; Turbide, Simon; Berthiaume, Francois; Saint-Laurent, Louis; Marchese, Linda; Bolduc, Martin; Bergeron, Alain

    2014-10-01

    A software application, SIST, has been developed for the simulation of the video at the output of a thermal imager. The approach offers a more suitable representation than current identification (ID) range predictors do: the end user can evaluate the adequacy of a virtual camera as if they were using it in real operating conditions. In particular, the ambiguity in the interpretation of ID range is cancelled. The application also allows for a cost-efficient determination of the optimal design of an imager and of its subsystems without over- or under-specification: the performances are known early in the development cycle, for targets, scene and environmental conditions of interest. The simulated image is also a powerful method for testing processing algorithms. Finally, the display, which can be a severe system limitation, is also fully considered in the system by the use of real hardware components. The application consists of MATLAB routines that simulate the effects of the subsystems: atmosphere, optical lens, detector, and image processing algorithms. Calls to MODTRAN® for the atmosphere modeling and to Zemax for the optical modeling have been implemented. The realism of the simulation depends on the adequacy of the input scene for the application and on the accuracy of the subsystem parameters. For high accuracy results, measured imager characteristics such as noise can be used with SIST instead of less accurate models. The ID ranges of potential imagers were assessed for various targets, backgrounds and atmospheric conditions. The optimal specifications for an optical design were determined by varying the Seidel aberration coefficients to find the worst MTF that still respects the desired ID range.

  20. Confined Space Imager (CSI) Software

    2013-07-03

    The software provides real-time image capture, enhancement, and display, and sensor control for the Confined Space Imager (CSI) sensor system. The software captures images over a Cameralink connection and provides the following image enhancements: camera pixel to pixel non-uniformity correction, optical distortion correction, image registration and averaging, and illumination non-uniformity correction. The software communicates with the custom CSI hardware over USB to control sensor parameters and is capable of saving enhanced sensor images to an external USB drive. The software provides sensor control, image capture, enhancement, and display for the CSI sensor system. It is designed to work with the custom hardware.

  1. Standardized development of computer software. Part 2: Standards

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1978-01-01

    This monograph contains standards for software development and engineering. The book sets forth rules for design, specification, coding, testing, documentation, and quality assurance audits of software; it also contains detailed outlines for the documentation to be produced.

  2. Diversification and Challenges of Software Engineering Standards

    NASA Technical Reports Server (NTRS)

    Poon, Peter T.

    1994-01-01

    The author poses certain questions in this paper: 'In the future, should there be just one software engineering standards set? If so, how can we work towards that goal? What are the challenges of internationalizing standards?' Based on the author's personal view, the statement of his position is as follows: 'There should NOT be just one set of software engineering standards in the future. At the same time, there should NOT be the proliferation of standards, and the number of sets of standards should be kept to a minimum. It is important to understand the diversification of the areas which are spanned by the software engineering standards.' The author goes on to describe the diversification of processes, the diversification in the national and international character of standards organizations, the diversification of the professional organizations producing standards, the diversification of the types of businesses and industries, and the challenges of internationalizing standards.

  3. Standard classification of software documentation

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    General conceptual requirements are given for standard levels of documentation and for the application of these requirements to intended usages. These standards encourage the policy of producing only those forms of documentation that are needed and adequate for the purpose. Documentation standards are defined with respect to detail and format quality. Classes A through D range, in order, from the most definitive down to the least definitive, and categories 1 through 4 range, in order, from high-quality typeset down to handwritten material. Criteria for each of the classes and categories, as well as suggested selection guidelines for each, are given.

  4. Reference standards for software evaluation.

    PubMed

    Michaelis, J; Wellek, S; Willems, J L

    1990-09-01

    The field of automated ECG analysis was one of the earliest topics in Medical Informatics and may be regarded as a model both for computer-assisted medical diagnosis and for evaluating medical diagnostic programs. The CSE project has set reference standards of two kinds: In a broad sense, a standard how to perform a comprehensive evaluation study, in a narrow sense, standards as specific references for evaluating computer ECG programs. The evaluation methodology used within the CSE project is described as a basis for presentation of results which are published elsewhere in this issue. PMID:2233375

  5. The IEEE Software Engineering Standards Process

    PubMed Central

    Buckley, Fletcher J.

    1984-01-01

    Software Engineering has emerged as a field in recent years, and those involved increasingly recognize the need for standards. As a result, members of the Institute of Electrical and Electronics Engineers (IEEE) formed a subcommittee to develop these standards. This paper discusses the ongoing standards development, and associated efforts.

  6. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.

  7. Acoustic image-processing software

    NASA Astrophysics Data System (ADS)

    Several algorithms that display, enhance and analyze side-scan sonar images of the seafloor have been developed by the University of Washington, Seattle, as part of an Office of Naval Research funded program in acoustic image analysis. One of these programs, PORTAL, is a small (less than 100K) image display and enhancement program that can run on MS-DOS computers with VGA boards. This program is now available in the public domain for general use in acoustic image processing. PORTAL is designed to display side-scan sonar data that is stored in most standard formats, including SeaMARC I, II, 150 and GLORIA data. In addition to the “standard” formats, PORTAL has a module “front end” that allows the user to modify the program to accept other image formats. In addition to side-scan sonar data, the program can also display digital optical images from scanners and “framegrabbers,” gridded bathymetry data from Sea Beam and other sources, and potential field (magnetics/gravity) data. While limited in image analysis capability, the program allows image enhancement by histogram manipulation and basic filtering operations, including multistage filtering. PORTAL can print reasonably high-quality images on Postscript laser printers and lower-quality images on non-Postscript printers with HP Laserjet emulation. Images suitable only for index sheets are also possible on dot matrix printers.
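
    Histogram manipulation of the sort described above usually amounts to remapping a chosen percentile range of the raw amplitudes onto the display range. The following Python/NumPy sketch of a percentile stretch is illustrative only and is not PORTAL's actual code; the data and cutoffs are assumptions.

      import numpy as np

      def percentile_stretch(img, lo=2, hi=98):
          """Map the lo..hi percentile range of pixel values onto 0..255 for display."""
          p_lo, p_hi = np.percentile(img, [lo, hi])
          out = np.clip((img - p_lo) / max(p_hi - p_lo, 1e-9), 0, 1)
          return (out * 255).astype(np.uint8)

      sonar = np.random.rand(512, 512) * 4000     # stand-in for side-scan amplitudes
      display = percentile_stretch(sonar)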

  8. FITS Liberator: Image processing software

    NASA Astrophysics Data System (ADS)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

    The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope and Cassini-Huygens or Mars Reconnaissance Orbiter.

  9. Automated computer software development standards enforcement

    SciTech Connect

    Yule, H.P.; Formento, J.W.

    1991-01-01

    The Uniform Development Environment (UDE) is being investigated as a means of enforcing software engineering standards. For the programmer, it provides an environment containing the tools and utilities necessary for orderly and controlled development and maintenance of code according to requirements. In addition, it provides DoD management and developer management the tools needed for all phases of software life cycle management and control, from project planning and management, to code development, configuration management, version control, and change control. This paper reports the status of UDE development and field testing. 5 refs.

  10. Software for Automated Image-to-Image Co-registration

    NASA Technical Reports Server (NTRS)

    Benkelman, Cody A.; Hughes, Heidi

    2007-01-01

    The project objectives are: a) Develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) Create a reusable software development kit (SDK) to enable incorporation of these tools into other software; c) Provide automated testing for quantitative analysis; and d) Develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.
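
    The record does not name the registration technique used. One common way to reach subpixel precision is upsampled phase correlation; the Python sketch below uses scikit-image's implementation on synthetic data purely as an illustration of the kind of co-registration being described, not as the NASA software's method.

      import numpy as np
      from scipy.ndimage import shift as nd_shift
      from skimage.registration import phase_cross_correlation

      reference = np.random.rand(256, 256)                 # orthorectified reference image
      moving = nd_shift(reference, (3.4, -1.7), order=3)   # same scene, subpixel offset

      # Estimate the offset to 1/100 pixel via upsampled phase correlation.
      offset, error, _ = phase_cross_correlation(reference, moving, upsample_factor=100)
      print("estimated (row, col) shift:", offset)         # ~(-3.4, 1.7)

      aligned = nd_shift(moving, offset, order=3)          # resample onto the reference grid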

  11. Sandia software guidelines. Volume 3. Standards, practices, and conventions

    SciTech Connect

    Not Available

    1986-07-01

    This volume is one in a series of Sandia Software Guidelines intended for use in producing quality software within Sandia National Laboratories. In consonance with the IEEE Standard for Software Quality Assurance Plans, this volume identifies software standards, conventions, and practices. These guidelines are the result of a collective effort within Sandia National Laboratories to define recommended deliverables and to document standards, practices, and conventions which will help ensure quality software. 66 refs., 5 figs., 6 tabs.

  12. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  13. Imaging standards for smart cards

    NASA Astrophysics Data System (ADS)

    Ellson, Richard N.; Ray, Lawrence A.

    1996-01-01

    'Smart cards' are plastic cards the size of credit cards which contain integrated circuits for the storage of digital information. The applications of these cards for image storage has been growing as card data capacities have moved from tens of bytes to thousands of bytes. This has prompted the recommendation of standards by the X3B10 committee of ANSI for inclusion in ISO standards for card image storage of a variety of image data types including digitized signatures and color portrait images. This paper reviews imaging requirements of the smart card industry, challenges of image storage for small memory devices, card image communications, and the present status of standards. The paper concludes with recommendations for the evolution of smart card image standards towards image formats customized to the image content and more optimized for smart card memory constraints.

  14. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  15. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  16. Standard practices for the implementation of computer software

    NASA Technical Reports Server (NTRS)

    Irvine, A. P. (Editor)

    1978-01-01

    A standard approach to the development of computer programs is provided that covers the life cycle of software development from the planning and requirements phase through the software acceptance testing phase. All documents necessary to provide the required visibility into the software life cycle process are discussed in detail.

  17. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
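
    The "adding information from multiple frames" step can be illustrated with a toy example: once frames are stabilized, averaging N of them leaves static detail intact while random noise falls roughly as 1/sqrt(N). The Python sketch below demonstrates the principle only; it is not the VISAR algorithm, and the scene and noise levels are made up.

      import numpy as np

      def stack_frames(frames):
          """Average a stack of already-registered frames: static detail is preserved,
          random video noise drops roughly as 1/sqrt(N)."""
          return np.mean(frames, axis=0)

      # A clean plate corrupted by heavy per-frame noise, as in a dark crime-scene video.
      rng = np.random.default_rng(0)
      clean = np.zeros((120, 160))
      clean[40:80, 60:100] = 1.0                  # stand-in "license plate" region
      noisy = [clean + rng.normal(0, 1.0, clean.shape) for _ in range(30)]

      stacked = stack_frames(noisy)
      print("single-frame noise std:", np.std(noisy[0] - clean).round(2))   # ~1.0
      print("stacked noise std:     ", np.std(stacked - clean).round(2))    # ~0.18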

  18. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies to catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. It would be especially useful for tornadoes, tracking whirling objects and helping to determine the tornado's wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  19. A study of software standards used in the avionics industry

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1994-01-01

    Within the past decade, software has become an increasingly common element in computing systems. In particular, the role of software used in the aerospace industry, especially in life- or safety-critical applications, is rapidly expanding. This intensifies the need to use effective techniques for achieving and verifying the reliability of avionics software. Although certain software development processes and techniques are mandated by government regulating agencies, no one methodology has been shown to consistently produce reliable software. The knowledge base for designing reliable software simply has not reached the maturity of its hardware counterpart. In an effort to increase our understanding of software, the Langley Research Center conducted a series of experiments over 15 years with the goal of understanding why and how software fails. As part of this program, the effectiveness of current industry standards for the development of avionics is being investigated. This study involves the generation of a controlled environment to conduct scientific experiments on software processes.

  20. Open environment for image processing and software development

    NASA Astrophysics Data System (ADS)

    Rasure, John R.; Young, Mark

    1992-04-01

    The main goal of the Khoros software project is to create and provide an integrated software development environment for information processing and data visualization. The Khoros software system is now being used as a foundation to improve productivity and promote software reuse in a wide variety of application domains. A powerful feature of the Khoros system is the high-level, abstract visual language that can be employed to significantly boost the productivity of the researcher. Central to the Khoros system is the need for a consistent yet flexible user interface development system that provides cohesiveness to the vast number of programs that make up the Khoros system. Automated tools assist in maintenance as well as development of programs. The software structure that embodies this system provides for extensibility and portability, and allows for easy tailoring to target specific application domains and processing environments. First, an overview of the Khoros software environment is given. Then this paper presents the abstract applications programmer interface (API), the data services that are provided in Khoros to support it, and the Khoros visualization and image file format. The authors contend that Khoros is an excellent environment for the exploration and implementation of imaging standards.

  1. Imaging Sensor Flight and Test Equipment Software

    NASA Technical Reports Server (NTRS)

    Freestone, Kathleen; Simeone, Louis; Robertson, Byran; Frankford, Maytha; Trice, David; Wallace, Kevin; Wilkerson, DeLisa

    2007-01-01

    The Lightning Imaging Sensor (LIS) is one of the components onboard the Tropical Rainfall Measuring Mission (TRMM) satellite, and was designed to detect and locate lightning over the tropics. The LIS flight code was developed to run on a single onboard digital signal processor, and has operated the LIS instrument since 1997 when the TRMM satellite was launched. The software provides controller functions to the LIS Real-Time Event Processor (RTEP) and onboard heaters, collects the lightning event data from the RTEP, compresses and formats the data for downlink to the satellite, collects housekeeping data and formats the data for downlink to the satellite, provides command processing and interface to the spacecraft communications and data bus, and provides watchdog functions for error detection. The Special Test Equipment (STE) software was designed to operate specific test equipment used to support the LIS hardware through development, calibration, qualification, and integration with the TRMM spacecraft. The STE software provides the capability to control instrument activation, commanding (including both data formatting and user interfacing), data collection, decompression, and display and image simulation. The LIS STE code was developed for the DOS operating system in the C programming language. Because of the many unique data formats implemented by the flight instrument, the STE software was required to comprehend the same formats, and translate them for the test operator. The hardware interfaces to the LIS instrument using both commercial and custom computer boards, requiring that the STE code integrate this variety into a working system. In addition, the requirement to provide RTEP test capability dictated the need to provide simulations of background image data with short-duration lightning transients superimposed. This led to the development of unique code used to control the location, intensity, and variation above background for simulated lightning strikes

  2. Terahertz/mm wave imaging simulation software

    NASA Astrophysics Data System (ADS)

    Fetterman, M. R.; Dougherty, J.; Kiser, W. L., Jr.

    2006-10-01

    We have developed a mm wave/terahertz imaging simulation package from COTS graphics software and custom MATLAB code. In this scheme, a commercial ray-tracing package was used to simulate the emission and reflections of radiation from scenes incorporating highly realistic imagery. Accurate material properties were assigned to objects in the scenes, with values obtained from the literature, and from our own terahertz spectroscopy measurements. The images were then post-processed with custom MATLAB code to include the blur introduced by the imaging system and noise levels arising from system electronics and detector noise. The MATLAB code was also used to simulate the effect of fog, an important aspect for mm wave imaging systems. Several types of image scenes were evaluated, including bar targets, contrast detail targets, a person in a portal screening situation, and a sailboat on the open ocean. The images produced by this simulation are currently being used as guidance for a 94 GHz passive mm wave imaging system, but have broad applicability for frequencies extending into the terahertz region.
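
    The post-processing stage described above (system blur plus electronics/detector noise) can be sketched in a few lines. The Python version below uses a Gaussian point-spread function and additive Gaussian noise with assumed parameter values; it stands in for, and is not, the authors' MATLAB code.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def simulate_sensor(ideal_scene, blur_sigma_px=2.5, noise_sigma=0.4):
          """Blur the ray-traced radiometric scene with an assumed Gaussian PSF,
          then add Gaussian detector/electronics noise (parameter values are illustrative)."""
          blurred = gaussian_filter(ideal_scene, sigma=blur_sigma_px)
          noise = np.random.normal(0.0, noise_sigma, ideal_scene.shape)
          return blurred + noise

      scene = np.zeros((240, 320))
      scene[100:140, 140:180] = 30.0    # warm target against a cold background (Kelvin contrast)
      observed = simulate_sensor(scene)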

  3. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
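
    The error-containment idea can be illustrated independently of the wavelet and entropy-coding details: compress each rectangular spatial partition (spanning all bands) on its own, so a corrupted packet spoils only its own region. In the Python sketch below, zlib merely stands in for ICER-3D's actual coder, and the cube dimensions and partition size are assumptions.

      import zlib
      import numpy as np

      # Hypothetical hyperspectral cube: (rows, cols, bands) of 12-bit samples.
      cube = np.random.randint(0, 4096, size=(64, 64, 32), dtype=np.uint16)

      # Compress each 32x32 spatial partition (all bands) independently.
      partitions = {}
      for i in range(0, 64, 32):
          for j in range(0, 64, 32):
              block = cube[i:i+32, j:j+32, :]
              partitions[(i, j)] = zlib.compress(block.tobytes())

      # Decode whatever arrived intact; a missing or corrupt entry affects only its own region.
      recovered = {k: np.frombuffer(zlib.decompress(v), dtype=np.uint16).reshape(32, 32, 32)
                   for k, v in partitions.items()}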

  4. Data to Pictures to Data: Outreach Imaging Software and Metadata

    NASA Astrophysics Data System (ADS)

    Levay, Z.

    2011-07-01

    A convergence between astronomy science and digital photography has enabled a steady stream of visually rich imagery from state-of-the-art data. The accessibility of hardware and software has facilitated an explosion of astronomical images for outreach, from space-based observatories, ground-based professional facilities and among the vibrant amateur astrophotography community. Producing imagery from science data involves a combination of custom software to understand FITS data (FITS Liberator), off-the-shelf, industry-standard software to composite multi-wavelength data and edit digital photographs (Adobe Photoshop), and application of photo/image-processing techniques. Some additional effort is needed to close the loop and enable this imagery to be conveniently available for various purposes beyond web and print publication. The metadata paradigms in digital photography are now complying with FITS and science software to carry information such as keyword tags and world coordinates, enabling these images to be usable in more sophisticated, imaginative ways exemplified by Sky in Google Earth and World Wide Telescope.

  5. Sine-Fitting Software for IEEE Standard 1057

    SciTech Connect

    Blair, Jerome

    1999-05-01

    This software application performs the calculations related to the sine-fit tests of IEEE Standard 1057/94. Example outputs, and explanations of those outputs, are provided to determine the important characteristics of the device under test. This application performs the calculations related to sine-fit tests and uses the 4-parameter sine fit from IEEE Standard 1057-1994.
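
    For context, the three-parameter fit in IEEE Std 1057 (known frequency) reduces to a linear least-squares problem; the standard's four-parameter fit additionally iterates on frequency. The Python sketch below shows the three-parameter case on simulated ADC data; it is illustrative only and is not the cited application.

      import numpy as np

      def three_param_sine_fit(t, y, freq_hz):
          """Solve y ~ A*cos(wt) + B*sin(wt) + C by linear least squares (known frequency)."""
          w = 2 * np.pi * freq_hz
          M = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
          (A, B, C), *_ = np.linalg.lstsq(M, y, rcond=None)
          amplitude = np.hypot(A, B)
          phase = np.arctan2(-B, A)                     # one common phase convention
          residual_rms = np.std(y - M @ np.array([A, B, C]))
          return amplitude, phase, C, residual_rms

      # Simulated ADC record: 1.23 kHz tone sampled at 100 kS/s with additive noise.
      t = np.arange(4096) / 100e3
      y = 0.8 * np.sin(2 * np.pi * 1230 * t + 0.3) + 0.05 + 0.01 * np.random.randn(t.size)
      print(three_param_sine_fit(t, y, 1230))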

  6. Standardized development of computer software. Part 1: Methods

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come in contact with it, and to facilitate operations and alterations of the program as requirements on the program environment change.

  7. Software for Viewing Landsat Mosaic Images

    NASA Technical Reports Server (NTRS)

    Watts, Zack; Farve, Catharine L.; Harvey, Craig

    2003-01-01

    A Windows-based computer program has been written to enable novice users (especially educators and students) to view images of large areas of the Earth (e.g., the continental United States) generated from image data acquired in the Landsat observations performed circa the year 1990. The large-area images are constructed as mosaics from the original Landsat images, which were acquired in several wavelength bands and each of which spans an area (in effect, one tile of a mosaic) of 0.5° in latitude by 0.6° in longitude. Whereas the original Landsat data are registered on a universal transverse Mercator (UTM) grid, the program converts the UTM coordinates of a mouse pointer in the image to latitude and longitude, which are continuously updated and displayed as the pointer is moved. The mosaic image currently on display can be exported as a Windows bitmap file. Other images (e.g., of state boundaries or interstate highways) can be overlaid on Landsat mosaics. The program interacts with the user via standard toolbar, keyboard, and mouse user interfaces. The program is supplied on a compact disk along with tutorial and educational information.
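
    The UTM-to-geographic conversion the viewer performs for the mouse pointer can be reproduced with a standard projection library. The Python sketch below uses pyproj with an assumed example zone (EPSG:32614, UTM zone 14N); the zone the actual program uses depends on the mosaic being displayed.

      from pyproj import Transformer

      # Assumed example zone: UTM zone 14N on WGS84 (EPSG:32614) -> geographic (EPSG:4326).
      to_geographic = Transformer.from_crs("EPSG:32614", "EPSG:4326", always_xy=True)

      def pointer_to_lat_lon(easting_m, northing_m):
          """Convert a UTM pointer position to (latitude, longitude) in degrees."""
          lon, lat = to_geographic.transform(easting_m, northing_m)
          return lat, lon

      print(pointer_to_lat_lon(500000.0, 4649776.0))   # a point near the zone's central meridian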

  8. FITSH - a software package for image processing

    NASA Astrophysics Data System (ADS)

    Pál, András.

    2012-04-01

    In this paper we describe the main features of the software package named FITSH, intended to provide a standalone environment for analysis of data acquired by imaging astronomical detectors. The package both provides utilities for the full pipeline of subsequent related data-processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple-image combinations, spatial transformations and interpolations) and aids the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The set of utilities found in this package is built on top of the commonly used UNIX/POSIX shells (hence the name of the package); therefore, both frequently used and well-documented tools for such environments can be exploited and managing a massive amount of data is rather convenient.

  9. Process-industry CAPE-OPEN software standard overview

    SciTech Connect

    Zitney, S.

    2009-01-01

    CAPE-OPEN (CAPE is short for Computer Aided Process Engineering) is a standard for writing computer software interfaces. It is mainly applied in process engineering where it enables a standardized communication between process simulators (e.g. Aspen Plus) and products developed by ourselves. The advantage of CAPE-OPEN is that these products are applicable to more than just one process simulator; they are aimed at all process simulators that are CAPE-OPEN compliant.

  10. Standards guide for space and earth sciences computer software

    NASA Technical Reports Server (NTRS)

    Mason, G.; Chapman, R.; Klinglesmith, D.; Linnekin, J.; Putney, W.; Shaffer, F.; Dapice, R.

    1972-01-01

    Guidelines for the preparation of systems analysis and programming work statements are presented. The data is geared toward the efficient administration of available monetary and equipment resources. Language standards and the application of good management techniques to software development are emphasized.

  11. Software components for medical image visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Starreveld, Yves P.; Gobbi, David G.; Finnis, Kirk; Peters, Terence M.

    2001-05-01

    Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy to understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments, make it desirable to have a hardware- and operating-system-independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL based Visualization Toolkit as a base, we have developed a set of components that implement the above mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics Iris, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system and a frame-based stereotaxy neurosurgery planning tool. The frame-based stereotaxy module has been

  12. Standardizing Activation Analysis: New Software for Photon Activation Analysis

    SciTech Connect

    Sun, Z. J.; Wells, D.; Green, J.; Segebade, C.

    2011-06-01

    Photon Activation Analysis (PAA) of environmental, archaeological and industrial samples requires extensive data analysis that is susceptible to error. For the purpose of saving time, manpower and minimizing error, a computer program was designed, built and implemented using SQL, Access 2007 and asp.net technology to automate this process. Based on the peak information of the spectrum and assisted by its PAA library, the program automatically identifies elements in the samples and calculates their concentrations and respective uncertainties. The software also could be operated in browser/server mode, which gives the possibility to use it anywhere the internet is accessible. By switching the nuclide library and the related formula behind, the new software can be easily expanded to neutron activation analysis (NAA), charged particle activation analysis (CPAA) or proton-induced X-ray emission (PIXE). Implementation of this would standardize the analysis of nuclear activation data. Results from this software were compared to standard PAA analysis with excellent agreement. With minimum input from the user, the software has proven to be fast, user-friendly and reliable.
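
    The concentration calculation itself is not detailed in the abstract. A common relative method compares the specific peak area of the sample with that of a co-irradiated standard of known concentration measured under the same conditions; the Python sketch below shows only that simplified comparison, omitting the decay, flux, and geometry corrections the real software would apply.

      # Simplified relative method (illustrative; not the program's actual algorithm).
      def concentration(peak_area_sample, mass_sample_g,
                        peak_area_standard, mass_standard_g,
                        conc_standard_ppm):
          """Element concentration in the sample, by comparison with a co-irradiated
          standard of known concentration counted under the same conditions."""
          specific_sample = peak_area_sample / mass_sample_g
          specific_standard = peak_area_standard / mass_standard_g
          return conc_standard_ppm * specific_sample / specific_standard

      print(concentration(15200, 0.50, 30100, 0.25, 120.0))   # ppm in the sample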

  13. Perspective automated inkless fingerprinting imaging software for fingerprint research.

    PubMed

    Nanakorn, Somsong; Poosankam, Pongsakorn; Mongconthawornchai, Paiboon

    2008-01-01

    Fingerprint collection using ink-and-paper images is a conventional method (i.e., ink-print and transparent-adhesive-tape techniques), which is slow and cumbersome. This is a pilot research project for software development aimed at automated, inkless fingerprint imaging using a fingerprint sensor (a development kit of the IT WORKS Company Limited), a PC camera, and a printer. The software was developed to connect with the fingerprint sensor for collection of fingerprint images recorded onto a hard disk. It was also developed to connect with the PC camera for recording a face image of the person whose fingerprints are taken, or an image of an identification card. These images are appropriately arranged in a PDF file prior to printing. This software is able to scan ten fingerprints and store high-quality electronic fingertip images rapidly, with large and clear images free of ink or carbon smudges. This fingerprint technology has potential applications in public health and clinical medicine research.

  14. Volume Measurement of Various Tissues Using the Image J Software.

    PubMed

    Rha, Eun Young; Kim, Ji Min; Yoo, Gyeol

    2015-09-01

    Various methods have been introduced to assess the tissue volume because volumetric evaluation is recognized as one of the most important steps in reconstructive surgery. Advanced volume measurement methods proposed recently use three-dimensional images. They are convenient but have drawbacks such as requiring expensive equipment and volume-analysis software. The authors devised a volume measurement method using the Image J software, which is in the public domain and does not require specific devices or software packages. The orbital and breast volumes were measured by our method using Image J data from facial computed tomography (CT) and breast magnetic resonance imaging (MRI). The authors obtained the final volume results, which were similar to the known volume values. The authors propose here a cost-effective, simple, and easily accessible volume measurement method using the Image J software.
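
    For illustration, the slice-summation idea behind such measurements can be sketched in Python: sum the segmented pixel count per slice and multiply by the voxel volume. The mask arrays, spacing values and function name below are hypothetical and do not reproduce the authors' Image J workflow:

        import numpy as np

        def volume_from_masks(masks, pixel_spacing_mm, slice_thickness_mm):
            # masks: one 2-D boolean segmentation array per CT/MRI slice
            voxel_volume = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
            n_voxels = sum(int(np.count_nonzero(m)) for m in masks)
            return n_voxels * voxel_volume  # cubic millimetres

        # Example: three 512 x 512 slices, 0.5 mm pixels, 1.0 mm slice thickness
        masks = [np.zeros((512, 512), dtype=bool) for _ in range(3)]
        masks[1][200:300, 200:300] = True
        print(volume_from_masks(masks, (0.5, 0.5), 1.0))  # 2500.0 mm^3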

  15. Software to model AXAF image quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees

    1993-01-01

    This draft final report describes the work performed under this delivery order from May 1992 through June 1993. The purpose of this contract was to enhance and develop an integrated optical performance modeling software for complex x-ray optical systems such as AXAF. The GRAZTRACE program developed by the MSFC Optical Systems Branch for modeling VETA-I was used as the starting baseline program. The original program was a large single-file program and, therefore, could not be modified very efficiently. The original source code has been reorganized, and a 'Make Utility' has been written to update the original program. The new version of the source code consists of 36 small source files to make it easier for the code developer to manage and modify the program. A user library has also been built and a 'Makelib' utility has been furnished to update the library. With the user library, users can easily access the GRAZTRACE source files and build a custom library. A user manual for the new version of GRAZTRACE has been compiled. Plotting capability for 3-D point spread functions and contour plots has been provided in GRAZTRACE using the graphics package DISPLAY. A graphics emulator over the network has been set up for programming the graphics routines. The point spread function and contour plot routines have also been modified to display the plot centroid and to allow the user to specify the plot range and viewing angle options. A Command Mode version of GRAZTRACE has also been developed. More than 60 commands have been implemented in a Code-V-like format. The functions covered in this version include data manipulation, performance evaluation, and inquiry and setting of internal parameters. The user manual for these commands has been formatted as in Code-V, showing the command syntax, synopsis, and options. An interactive on-line help system for the command mode has also been implemented to allow the user to find valid commands, command syntax

  16. Integration of CMM software standards for nanopositioning and nanomeasuring machines

    NASA Astrophysics Data System (ADS)

    Sparrer, E.; Machleidt, T.; Hausotte, T.; Manske, E.; Franke, K.-H.

    2011-06-01

    The paper focuses on the utilization of nanopositioning and nanomeasuring machines as three-dimensional coordinate measuring machines by means of the internationally harmonized communication protocol Inspection plus plus for Dimensional Measurement Equipment (abbreviated I++DME). I++DME was designed in 1999 to enable the interoperability of different measuring hardware, such as coordinate measuring machines, form testers, and camshaft or crankshaft measuring machines, with a priori unknown third-party controlling and analyzing software. Our recent work has focused on the implementation of a modular, standard-conformant command interpreter server for the Inspection plus plus protocol. This communication protocol enables the application of I++DME-compliant graphical controlling software, which is easier to operate and less error prone than the currently used textual programming via MathWorks MATLAB. The function and architecture of the I++DME command interpreter are discussed, and the principle of operation is demonstrated by means of an example controlling a nanopositioning and nanomeasuring machine with Hexagon Metrology's controlling and analyzing software QUINDOS 7 via the I++DME command interpreter server.

  17. Software Helps Extract Information From Astronomical Images

    NASA Technical Reports Server (NTRS)

    Hartley, Booth; Ebert, Rick; Laughlin, Gaylin

    1995-01-01

    PAC Skyview 2.0 is an interactive program for the display and analysis of astronomical images. It includes a large set of functions for the display, analysis and manipulation of images. "Man" pages with descriptions of the functions and examples of usage are included. Skyview can be used interactively or in "server" mode, in which another program calls Skyview and executes commands itself. Skyview is capable of reading image data files of four types, including those in FITS, S, IRAF, and Z formats. Written in C.

  18. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  19. Development and implementation of software systems for imaging spectroscopy

    USGS Publications Warehouse

    Boardman, J.W.; Clark, R.N.; Mazer, A.S.; Biehl, L.L.; Kruse, F.A.; Torson, J.; Staenz, K.

    2006-01-01

    Specialized software systems have played a crucial role throughout the twenty-five year course of the development of the new technology of imaging spectroscopy, or hyperspectral remote sensing. By their very nature, hyperspectral data place unique and demanding requirements on the computer software used to visualize, analyze, process and interpret them. Often described as a marriage of the two technologies of reflectance spectroscopy and airborne/spaceborne remote sensing, imaging spectroscopy, in fact, produces data sets with unique qualities, unlike previous remote sensing or spectrometer data. Because of these unique spatial and spectral properties hyperspectral data are not readily processed or exploited with legacy software systems inherited from either of the two parent fields of study. This paper provides brief reviews of seven important software systems developed specifically for imaging spectroscopy.

  20. The image related services of the HELIOS software engineering environment.

    PubMed

    Engelmann, U; Meinzer, H P; Schröter, A; Günnel, U; Demiris, A M; Makabe, M; Evers, H; Jean, F C; Degoulet, P

    1995-01-01

    This paper describes the approach of the European HELIOS project to integrate image processing tools into ward information systems. The image processing tools are the result of basic research in image analysis in the Department of Medical and Biological Informatics at the German Cancer Research Center. These tools for the analysis of two-dimensional images and three-dimensional data volumes, with 3D reconstruction and visualization, are part of the Image Related Services of HELIOS. The HELIOS software engineering environment allows the image processing functionality to be used in integrated applications.

  1. gr-MRI: A software package for magnetic resonance imaging using software defined radios.

    PubMed

    Hasselwander, Christopher J; Cao, Zhipeng; Grissom, William A

    2016-09-01

    The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events was also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs. PMID:27394165
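
    To give a flavour of the "basic event sequencing functionality" mentioned above, the sketch below lists the events of a rudimentary spin-echo sequence in plain Python; the timings, channel names and tuple layout are invented for illustration and do not reflect the actual gr-MRI flowgraphs or the GNU Radio block API:

        def spin_echo_sequence(te_ms=20.0, tr_ms=500.0):
            # Each event is (time in ms, channel, description); a real spectrometer
            # would also schedule gradient pulses on the second radio.
            return [
                (0.0,         "rf",      "90-degree excitation pulse"),
                (te_ms / 2.0, "rf",      "180-degree refocusing pulse"),
                (te_ms,       "acquire", "echo readout window opens"),
                (tr_ms,       "loop",    "repeat for the next phase-encode step"),
            ]

        for time_ms, channel, note in spin_echo_sequence():
            print("t = %7.1f ms  %-8s %s" % (time_ms, channel, note))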

  2. gr-MRI: A software package for magnetic resonance imaging using software defined radios.

    PubMed

    Hasselwander, Christopher J; Cao, Zhipeng; Grissom, William A

    2016-09-01

    The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events was also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.

  3. gr-MRI: A software package for magnetic resonance imaging using software defined radios

    NASA Astrophysics Data System (ADS)

    Hasselwander, Christopher J.; Cao, Zhipeng; Grissom, William A.

    2016-09-01

    The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events were also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.

  4. The influence of software filtering in digital mammography image quality

    NASA Astrophysics Data System (ADS)

    Michail, C.; Spyropoulou, V.; Kalyvas, N.; Valais, I.; Dimitropoulos, N.; Fountos, G.; Kandarakis, I.; Panayiotakis, G.

    2009-05-01

    Breast cancer is one of the most frequently diagnosed cancers among women. Several techniques have been developed to help in the early detection of breast cancer, such as conventional and digital x-ray mammography, positron and single-photon emission mammography, etc. A key advantage of digital mammography is that images can be manipulated as simple computer image files. Thus, non-dedicated, commercially available image manipulation software can be employed to process and store the images. The image processing tools of the Photoshop (CS 2) software usually incorporate digital filters which may be used to reduce image noise, enhance contrast and increase spatial resolution. However, improving one image quality parameter may result in the degradation of another. The aim of this work was to investigate the influence of three sharpening filters, named hereafter sharpen, sharpen more and sharpen edges, on image resolution and noise. Image resolution was assessed by means of the Modulation Transfer Function (MTF). In conclusion, it was found that the correct use of commercial non-dedicated software on digital mammograms may improve some aspects of image quality.
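
    The kind of sharpening operation being evaluated can be illustrated with a generic 3x3 convolution in Python; the kernel below is a common textbook sharpener, not Photoshop's proprietary filter, and the input is assumed to be a grayscale mammogram array:

        import numpy as np
        from scipy.ndimage import convolve

        SHARPEN_KERNEL = np.array([[ 0, -1,  0],
                                   [-1,  5, -1],
                                   [ 0, -1,  0]], dtype=float)

        def sharpen(image):
            # Boosts high spatial frequencies (raising the measured MTF) but also
            # amplifies pixel noise, which is the trade-off the study quantifies.
            return convolve(image.astype(float), SHARPEN_KERNEL, mode="nearest")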

  5. Vertical bone measurements from cone beam computed tomography images using different software packages.

    PubMed

    Vasconcelos, Taruska Ventorini; Neves, Frederico Sampaio; Moraes, Lívia Almeida Bueno; Freitas, Deborah Queiroz

    2015-01-01

    This article aimed at comparing the accuracy of the linear measurement tools of different commercial software packages. Eight fully edentulous dry mandibles were selected for this study. Incisor, canine, premolar, first molar and second molar regions were selected. Cone beam computed tomography (CBCT) images were obtained with i-CAT Next Generation. Linear bone measurements were performed by one observer on the cross-sectional images using three different software packages: XoranCat®, OnDemand3D® and KDIS3D®, all able to assess DICOM images. In addition, 25% of the sample was reevaluated for the purpose of reproducibility. The mandibles were sectioned to obtain the gold standard for each region. Intraclass correlation coefficients (ICC) were calculated to examine the agreement between the two periods of evaluation; the one-way analysis of variance performed with the post-hoc Dunnett test was used to compare each of the software-derived measurements with the gold standard. The ICC values were excellent for all software packages. The least difference between the software-derived measurements and the gold standard was obtained with OnDemand3D and KDIS3D (-0.11 and -0.14 mm, respectively), and the greatest with XoranCAT (+0.25 mm). However, there was no statistically significant difference between the measurements obtained with the different software packages and the gold standard (p > 0.05). In conclusion, linear bone measurements were not influenced by the software package used to reconstruct the image from CBCT DICOM data.

  6. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is a software system intended for developing and using image-processing application programs in a concurrent computing environment. Designed to shield the programmer from the complexities of concurrent-system architecture, it provides an interactive image-processing environment for the end user. CIPE utilizes the architectural characteristics of a particular concurrent system to maximize efficiency while preserving architectural independence from the user and programmer. CIPE runs on a Mark-IIIfp 8-node hypercube computer and an associated SUN-4 host computer.

  7. Uses of software in digital image analysis: a forensic report

    NASA Astrophysics Data System (ADS)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. It has wide applications in forensic science, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. In this paper the authors explain these tasks, which are described in three categories: Image Compression, Image Enhancement & Restoration, and Measurement Extraction, with the help of examples such as signature comparison, counterfeit currency comparison and footwear sole impressions using the software packages Canvas and Corel Draw.

  8. Single-molecule localization software applied to photon counting imaging.

    PubMed

    Hirvonen, Liisa M; Kilfeather, Tiffany; Suhling, Klaus

    2015-06-01

    Centroiding in photon counting imaging has traditionally been accomplished by a single-step, noniterative algorithm, often implemented in hardware. Single-molecule localization techniques in superresolution fluorescence microscopy are conceptually similar, but use more sophisticated iterative software-based fitting algorithms to localize the fluorophore. Here, we discuss common features and differences between single-molecule localization and photon counting imaging and investigate the suitability of single-molecule localization software for photon event localization. We find that single-molecule localization software packages designed for superresolution microscopy-QuickPALM, rapidSTORM, and ThunderSTORM-can work well when applied to photon counting imaging with a microchannel-plate-based intensified camera system: photon event recognition can be excellent, fixed pattern noise can be low, and the microchannel plate pores can easily be resolved. PMID:26192667
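
    The single-step centroiding traditionally used in photon counting imaging can be written in a few lines of Python; the 3x3 event patch below is an invented example, not data from the paper:

        import numpy as np

        def centroid(patch):
            # Centre of mass of one photon event splash (sub-pixel x, y position).
            total = patch.sum()
            ys, xs = np.indices(patch.shape)
            return (xs * patch).sum() / total, (ys * patch).sum() / total

        event = np.array([[0.0, 1.0, 0.0],
                          [1.0, 5.0, 2.0],
                          [0.0, 1.0, 0.0]])
        print(centroid(event))  # (1.1, 1.0): the event centre, to sub-pixel precision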

  9. Single-molecule localization software applied to photon counting imaging.

    PubMed

    Hirvonen, Liisa M; Kilfeather, Tiffany; Suhling, Klaus

    2015-06-01

    Centroiding in photon counting imaging has traditionally been accomplished by a single-step, noniterative algorithm, often implemented in hardware. Single-molecule localization techniques in superresolution fluorescence microscopy are conceptually similar, but use more sophisticated iterative software-based fitting algorithms to localize the fluorophore. Here, we discuss common features and differences between single-molecule localization and photon counting imaging and investigate the suitability of single-molecule localization software for photon event localization. We find that single-molecule localization software packages designed for superresolution microscopy-QuickPALM, rapidSTORM, and ThunderSTORM-can work well when applied to photon counting imaging with a microchannel-plate-based intensified camera system: photon event recognition can be excellent, fixed pattern noise can be low, and the microchannel plate pores can easily be resolved.

  10. Approach to standardizing MR image intensity scale

    NASA Astrophysics Data System (ADS)

    Nyul, Laszlo G.; Udupa, Jayaram K.

    1999-05-01

    Despite the many advantages of MR images, they lack a standard image intensity scale. MR image intensity ranges and the meaning of intensity values vary even for the same protocol (P) and the same body region (D). This causes many difficulties in image display and analysis. We propose a two-step method for standardizing the intensity scale in such a way that for the same P and D, similar intensities will have similar meanings. In the first step, the parameters of the standardizing transformation are 'learned' from an image set. In the second step, for each MR study, these parameters are used to map its histogram onto the standardized histogram. The method was tested quantitatively on 90 whole brain FSE T2, PD and T1 studies of MS patients and qualitatively on several other SE PD, T2 and SPGR studies of the brain and foot. Measurements using mean squared difference showed that the standardized image intensities have a statistically significantly more consistent range and meaning than the originals. Fixed windows can be established for standardized images and used for display without the need for per-case adjustment. Preliminary results also indicate that the method facilitates improving the degree of automation of image segmentation.
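
    The two-step scheme can be sketched as a percentile-landmark mapping in Python; the percentile choices and function names below are illustrative assumptions, not the exact transformation defined in the paper:

        import numpy as np

        def learn_landmarks(training_images, pcts=(1, 50, 99)):
            # Step 1: average the landmark intensities over a training image set.
            return np.mean([np.percentile(im, pcts) for im in training_images], axis=0)

        def standardize(image, standard_landmarks, pcts=(1, 50, 99)):
            # Step 2: piecewise-linearly map this study's landmarks onto the standard scale.
            own_landmarks = np.percentile(image, pcts)
            return np.interp(image, own_landmarks, standard_landmarks)

        training = [np.random.gamma(2.0, scale, (64, 64)) for scale in (40, 70, 100)]
        standard_scale = learn_landmarks(training)
        mapped = standardize(np.random.gamma(2.0, 55, (64, 64)), standard_scale)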

  11. Image processing software for providing radiometric inputs to land surface climatology models

    NASA Technical Reports Server (NTRS)

    Newcomer, Jeffrey A.; Goetz, Scott J.; Strebel, Donald E.; Hall, Forrest G.

    1989-01-01

    During the First International Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE), 80 gigabytes of image data were generated from a variety of satellite and airborne sensors in a multidisciplinary attempt to study energy and mass exchange between the land surface and the atmosphere. To make these data readily available to researchers with a range of image data handling experience and capabilities, unique image-processing software was designed to perform a variety of nonstandard image-processing manipulations and to derive a set of standard-format image products. The nonconventional features of the software include: (1) adding new layers of geographic coordinates, and solar and viewing conditions to existing data; (2) providing image polygon extraction and calibration of data to at-sensor radiances; and, (3) generating standard-format derived image products that can be easily incorporated into radiometric or climatology models. The derived image products consist of easily handled ASCII descriptor files, byte image data files, and additional per-pixel integer data files (e.g., geographic coordinates, and sun and viewing conditions). Details of the solutions to the image-processing problems, the conventions adopted for handling a variety of satellite and aircraft image data, and the applicability of the output products to quantitative modeling are presented. They should be of general interest to future experiment and data-handling design considerations.

  12. Computer Software Configuration Item-Specific Flight Software Image Transfer Script Generator

    NASA Technical Reports Server (NTRS)

    Bolen, Kenny; Greenlaw, Ronald

    2010-01-01

    A K-shell UNIX script enables the International Space Station (ISS) Flight Control Team (FCT) operators in NASA's Mission Control Center (MCC) in Houston to transfer an entire or partial computer software configuration item (CSCI) from a flight software compact disk (CD) to the onboard Portable Computer System (PCS). The tool is designed to read the content stored on a flight software CD and generate individual CSCI transfer scripts that are capable of transferring the flight software content in a given subdirectory on the CD to the scratch directory on the PCS. The flight control team can then transfer the flight software from the PCS scratch directory to the Electronically Erasable Programmable Read Only Memory (EEPROM) of an ISS Multiplexer/Demultiplexer (MDM) via the Indirect File Transfer capability. The individual CSCI scripts and the CSCI Specific Flight Software Image Transfer Script Generator (CFITSG), when executed a second time, will remove all components from their original execution. The tool will identify errors in the transfer process and create logs of the transferred software for the purposes of configuration management.
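
    The original tool is a K-shell UNIX script; purely as an illustration of the generator idea, the Python sketch below walks a CD directory tree and writes one transfer script per CSCI subdirectory. All paths, file names and copy commands are hypothetical:

        import os

        def generate_transfer_scripts(cd_root, scratch_dir="/pcs/scratch"):
            # One script per CSCI subdirectory found on the flight-software CD image.
            for csci in sorted(os.listdir(cd_root)):
                source_dir = os.path.join(cd_root, csci)
                if not os.path.isdir(source_dir):
                    continue
                script_name = csci + "_transfer.ksh"
                with open(script_name, "w") as script:
                    script.write("#!/bin/ksh\n")
                    for file_name in sorted(os.listdir(source_dir)):
                        script.write("cp %s %s\n"
                                     % (os.path.join(source_dir, file_name), scratch_dir))
                print("wrote", script_name)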

  13. Software Development for Ring Imaging Detector

    NASA Astrophysics Data System (ADS)

    Torisky, Benjamin

    2016-03-01

    Jefferson Lab (Jlab) is performing a large-scale upgrade to their Continuous Electron Beam Accelerator Facility (CEBAF) up to 12GeV beam. The Large Acceptance Spectrometer (CLAS12) in Hall B is being upgraded and a new Ring Imaging Cherenkov (RICH) detector is being developed to provide better kaon - pion separation throughout the 3 to 12 GeV range. With this addition, when the electron beam hits the target, the resulting pions, kaons, and other particles will pass through a wall of translucent aerogel tiles and create Cherenkov radiation. This light can then be accurately detected by a large array of Multi-Anode PhotoMultiplier Tubes (MA-PMT). I am presenting an update on my work on the implementation of Java based reconstruction programs for the RICH in the CLAS12 main analysis package.

  14. Software development for a Ring Imaging Detector

    NASA Astrophysics Data System (ADS)

    Torisky, Benjamin; Benmokhtar, Fatiha

    2015-04-01

    Jefferson Lab (Jlab) is performing a large-scale upgrade to their Continuous Electron Beam Accelerator Facility (CEBAF) up to 12 GeV beam. The Large Acceptance Spectrometer (CLAS12) in Hall B is being upgraded and a new Ring Imaging CHerenkov (RICH) detector is being developed to provide better kaon - pion separation throughout the 3 to 12 GeV range. With this addition, when the electron beam hits the target, the resulting pions, kaons, and other particles will pass through a wall of translucent aerogel tiles and create Cherenkov radiation. This light can then be accurately detected by a large array of Multi-Anode PhotoMultiplier Tubes (MA-PMT). I am presenting my work on the implementation of Java based reconstruction programs for the RICH in the CLAS12 main analysis package.

  15. Digital hardware and software design for infrared sensor image processing

    NASA Astrophysics Data System (ADS)

    Bekhtin, Yuri; Barantsev, Alexander; Solyakov, Vladimir; Medvedev, Alexander

    2005-06-01

    An example of a digital hardware-and-software complex consisting of a multi-element matrix sensor and a personal computer with the installed special AMBPCI card is described. The problems of eliminating so-called fixed pattern noise (FPN) are considered. To improve the current imaging, the residual FPN is represented as multiplicative noise. A wavelet-based de-noising algorithm using sets of noisy and noise-free image data is applied.

  16. Software to model AXAF-I image quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Feng, Chen

    1995-01-01

    A modular user-friendly computer program for the modeling of grazing-incidence type x-ray optical systems has been developed. This comprehensive computer software GRAZTRACE covers the manipulation of input data, ray tracing with reflectivity and surface deformation effects, convolution with x-ray source shape, and x-ray scattering. The program also includes the capabilities for image analysis, detector scan modeling, and graphical presentation of the results. A number of utilities have been developed to interface the predicted Advanced X-ray Astrophysics Facility-Imaging (AXAF-I) mirror structural and thermal distortions with the ray-trace. This software is written in FORTRAN 77 and runs on a SUN/SPARC station. An interactive command mode version and a batch mode version of the software have been developed.

  17. SIMA: Python software for analysis of dynamic fluorescence imaging data

    PubMed Central

    Kaifosh, Patrick; Zaremba, Jeffrey D.; Danielson, Nathan B.; Losonczy, Attila

    2014-01-01

    Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/. PMID:25295002

  18. SIMA: Python software for analysis of dynamic fluorescence imaging data.

    PubMed

    Kaifosh, Patrick; Zaremba, Jeffrey D; Danielson, Nathan B; Losonczy, Attila

    2014-01-01

    Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.

  19. TANGO standard software to control the Nuclotron beam slow extraction

    NASA Astrophysics Data System (ADS)

    Andreev, V. A.; Volkov, V. I.; Gorbachev, E. V.; Isadov, V. A.; Kirichenko, A. E.; Romanov, S. V.; Sedykh, G. S.

    2016-09-01

    TANGO Controls is a basis of the NICA control system. The report describes the software which integrates the Nuclotron beam slow extraction subsystem into the TANGO system of NICA. Objects of control are power supplies for resonance lenses. The software consists of the subsystem device server, remote client and web-module for viewing the subsystem data.

  20. Stromatoporoid biometrics using image analysis software: A first order approach

    NASA Astrophysics Data System (ADS)

    Wolniewicz, Pawel

    2010-04-01

    Strommetric is a new image analysis computer program that performs morphometric measurements of stromatoporoid sponges. The program measures 15 features of skeletal elements (pillars and laminae) visible in both longitudinal and transverse thin sections. The software is implemented in C++, using the Open Computer Vision (OpenCV) library. The image analysis system distinguishes skeletal elements from sparry calcite using Otsu's method for image thresholding. More than 150 photos of thin sections were used as a test set, from which 36,159 measurements were obtained. The software provided about one hundred times more data than the method applied until now. The data obtained are reproducible, even if the work is repeated by different workers. Thus the method makes biometric studies of stromatoporoids objective.
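
    The program itself is written in C++ against OpenCV; an equivalent Otsu thresholding call in OpenCV's Python binding looks like the sketch below (the input file name is hypothetical):

        import cv2

        image = cv2.imread("thin_section.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
        # Otsu's method picks the threshold separating skeletal elements from sparry calcite.
        threshold, binary = cv2.threshold(image, 0, 255,
                                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # `binary` is now a mask of skeletal elements ready for pillar/lamina measurement.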

  1. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
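
    The wavelet/scalar quantization idea can be illustrated with a toy Python sketch using PyWavelets: decompose the image into subbands and quantize each with a uniform step. This shows only the general principle, not the FBI WSQ specification, which fixes a particular subband decomposition, quantizer design and entropy coder:

        import numpy as np
        import pywt

        def wsq_like(image, step=8.0, wavelet="bior4.4", level=3):
            # Subband decomposition followed by uniform scalar quantization.
            coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
            quantize = lambda band: np.round(np.asarray(band) / step) * step
            coeffs = [quantize(coeffs[0])] + [tuple(quantize(b) for b in detail)
                                              for detail in coeffs[1:]]
            return pywt.waverec2(coeffs, wavelet)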

  2. Standard gray scale images users manual

    NASA Astrophysics Data System (ADS)

    1986-09-01

    The CCITT is now in the process of developing standards for the transmission of gray scale, or continuous tone, monochromatic imagery as part of the Group 4 facsimile recommendations. The digital transmission of gray scale imagery is of particular importance to the government for the transmission of photographs, half-tones, maps, etc. Unfortunately, at the present time there is no standard set of gray scale images which can be used by all experimenters in the facsimile field. The purpose of this project is to develop such a set of standard images and provide them in digital form on magnetic tape for use in the development of gray scale techniques to be considered for standardization. The tapes are available from the NCS. The purpose of this manual is to describe the format and content of the image tapes in sufficient detail so that a user can make use of the information on the tape easily.

  3. MMX-I: data-processing software for multimodal X-ray imaging and tomography

    PubMed Central

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-01-01

    A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline and even on a standard PC. To the authors’ knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments. PMID:27140159

  4. Image Fusion Software in the Clearpem-Sonic Project

    NASA Astrophysics Data System (ADS)

    Pizzichemi, M.; di Vara, N.; Cucciati, G.; Ghezzi, A.; Paganoni, M.; Farina, F.; Frisch, B.; Bugalho, R.

    2012-08-01

    ClearPEM-Sonic is a mammography scanner that combines Positron Emission Tomography with 3D ultrasound echographic and elastographic imaging. It has been developed to improve early stage detection of breast cancer by combining metabolic and anatomical information. The PET system has been developed by the Crystal Clear Collaboration, while the 3D ultrasound probe has been provided by SuperSonic Imagine. In this framework, the visualization and fusion software is an essential tool for the radiologists in the diagnostic process. This contribution discusses the design choices, the issues faced during the implementation, and the commissioning of the software tools developed for ClearPEM-Sonic.

  5. Open Architecture Standard for NASA's Software-Defined Space Telecommunications Radio Systems

    NASA Technical Reports Server (NTRS)

    Reinhart, Richard C.; Johnson, Sandra K.; Kacpura, Thomas J.; Hall, Charles S.; Smith, Carl R.; Liebetreu, John

    2008-01-01

    NASA is developing an architecture standard for software-defined radios used in space- and ground-based platforms to enable commonality among radio developments to enhance capability and services while reducing mission and programmatic risk. Transceivers (or transponders) with functionality primarily defined in software (e.g., firmware) have the ability to change their functional behavior through software alone. This radio architecture standard offers value by employing common waveform software interfaces, method of instantiation, operation, and testing among different compliant hardware and software products. These common interfaces within the architecture abstract application software from the underlying hardware to enable technology insertion independently at either the software or hardware layer. This paper presents the initial Space Telecommunications Radio System (STRS) Architecture for NASA missions to provide the desired software abstraction and flexibility while minimizing the resources necessary to support the architecture.

  6. Development of standard digital images for pneumoconiosis.

    PubMed

    Lee, Won-Jeong; Choi, Byung-Soon; Kim, Sung Jin; Park, Choong-Ki; Park, Jai-Soung; Tae, Seok; Hering, Kurt Georg

    2011-11-01

    We developed a set of standard digital images (SDIs) to be used in the classification and recognition of pneumoconiosis. From July 3, 2006 through August 31, 2007, 531 retired male workers exposed to inorganic dust were examined by digital (DR) and analog radiography (AR) on the same day, after the study was approved by our institutional review board and informed consent was obtained from all participants. All images were classified twice according to the International Labour Office (ILO) 2000 guidelines with reference to the ILO standard analog radiographs (SARs) by four chest radiologists. After consensus reading of the 349 digital images matched with the first selected analog images, 120 digital images were selected as the SDIs, taking into account the distribution of pneumoconiosis findings. Images with profusion category 0/1, 1, 2, and 3 numbered 12, 50, 40, and 15, respectively, and a large opacity was present in 43 images (A = 20, B = 22, C = 1). Among pleural abnormalities, costophrenic angle obliteration, pleural plaque and pleural thickening were present in 11 (9.2%), 31 (25.8%), and 9 (7.5%) images, respectively. Twenty-one of the 29 symbols were present; cp, ef, ho, id, me, pa, ra, and rp were absent. The set of 120 SDIs, developed using adequate methods, contains a wider variety of pneumoconiosis findings than the ILO SARs. It can be used as digital reference images for the recognition and classification of pneumoconiosis. PMID:22065894

  7. Standardizing PhenoCam Image Processing and Data Products

    NASA Astrophysics Data System (ADS)

    Milliman, T. E.; Richardson, A. D.; Klosterman, S.; Gray, J. M.; Hufkens, K.; Aubrecht, D.; Chen, M.; Friedl, M. A.

    2014-12-01

    The PhenoCam Network (http://phenocam.unh.edu) contains an archive of imagery from digital webcams to be used for scientific studies of phenological processes of vegetation. The image archive continues to grow and currently has over 4.8 million images representing 850 site-years of data. Time series of broadband reflectance (e.g., red, green, blue, infrared bands) and derivative vegetation indices (e.g. green chromatic coordinate or GCC) are calculated for regions of interest (ROI) within each image series. These time series form the basis for subsequent analysis, such as spring and autumn transition date extraction (using curvature analysis techniques) and modeling the climate-phenology relationship. Processing is relatively straightforward but time consuming, with some sites having more than 100,000 images available. While the PhenoCam Network distributes the original image data, it is our goal to provide higher-level vegetation phenology products, generated in a standardized way, to encourage use of the data without the need to download and analyze individual images. We describe here the details of the standard image processing procedures, and also provide a description of the products that will be available for download. Products currently in development include an "all-image" file, which contains a statistical summary of the red, green and blue bands over the pixels in predefined ROIs for each image from a site. This product is used to generate 1-day and 3-day temporal aggregates with 90th percentile values of GCC for the specified time period, with standard image selection/filtering criteria applied. Sample software (in python, R, MATLAB) that can be used to read in and plot these products will also be described.
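
    The central statistic, the green chromatic coordinate, reduces to a few lines of Python over a region of interest; the array layout, mask and function name below are assumptions for illustration, not the PhenoCam processing code itself:

        import numpy as np

        def mean_gcc(rgb_image, roi_mask):
            # rgb_image: H x W x 3 array; roi_mask: H x W boolean region of interest.
            r = rgb_image[..., 0][roi_mask].astype(float)
            g = rgb_image[..., 1][roi_mask].astype(float)
            b = rgb_image[..., 2][roi_mask].astype(float)
            total = r + g + b
            return np.mean(g[total > 0] / total[total > 0])  # GCC = G / (R + G + B)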

  8. Open source tools for standardized privacy protection of medical images

    NASA Astrophysics Data System (ADS)

    Lien, Chung-Yueh; Onken, Michael; Eichelberg, Marco; Kao, Tsair; Hein, Andreas

    2011-03-01

    In addition to the primary care context, medical images are often useful for research projects and community healthcare networks, so-called "secondary use". Patient privacy becomes an issue in such scenarios since the disclosure of personal health information (PHI) has to be prevented in a sharing environment. In general, most PHIs should be completely removed from the images according to the respective privacy regulations, but some basic and alleviated data is usually required for accurate image interpretation. Our objective is to utilize and enhance these specifications in order to provide reliable software implementations for de- and re-identification of medical images suitable for online and offline delivery. DICOM (Digital Imaging and Communications in Medicine) images are de-identified by replacing PHI-specific information with values still being reasonable for imaging diagnosis and patient indexing. In this paper, this approach is evaluated based on a prototype implementation built on top of the open source framework DCMTK (DICOM Toolkit) utilizing standardized de- and re-identification mechanisms. A set of tools has been developed for DICOM de-identification that meets privacy requirements of an offline and online sharing environment and fully relies on standard-based methods.
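
    The prototype described here builds on the C++ DCMTK framework; as a language-consistent illustration, the same kind of attribute replacement can be written with pydicom in Python (the tag list is abridged and the pseudonym scheme is invented):

        import pydicom

        def deidentify(path_in, path_out, pseudo_id="ANON0001"):
            ds = pydicom.dcmread(path_in)
            ds.PatientName = pseudo_id   # replace PHI with a value still usable for indexing
            ds.PatientID = pseudo_id
            ds.PatientBirthDate = ""     # drop data not needed for image interpretation
            ds.save_as(path_out)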

  9. The application of image processing software: Photoshop in environmental design

    NASA Astrophysics Data System (ADS)

    Dong, Baohua; Zhang, Chunmi; Zhuo, Chen

    2011-02-01

    In the process of environmental design and creation, the design sketch holds a very important position, in that it not only illuminates the design's idea and concept but also shows the design's visual effects to the client. In the field of environmental design, computer-aided design has brought significant improvements. Many types of specialized design software for rendering environmental drawings and for artistic post-processing have been implemented. Additionally, with the use of this software, working efficiency has greatly increased and drawings have become more specific and more specialized. By analyzing the application of the Photoshop image processing software in environmental design, and by comparing and contrasting traditional hand drawing with drawing using modern technology, this essay further explores the ways in which computer technology can play a bigger role in environmental design.

  10. SAO mission support software and data standards, version 1.0

    NASA Technical Reports Server (NTRS)

    Hsieh, P.

    1993-01-01

    This document defines the software developed by the SAO AXAF Mission Support (MS) Program and defines standards for the software development process and control of data products generated by the software. The SAO MS is tasked to develop and use software to perform a variety of functions in support of the AXAF mission. Software is developed by software engineers and scientists, and commercial off-the-shelf (COTS) software is used either directly or customized through the use of scripts to implement analysis procedures. Software controls real-time laboratory instruments, performs data archiving, displays data, and generates model predictions. Much software is used in the analysis of data to generate data products that are required by the AXAF project, for example, on-orbit mirror performance predictions or detailed characterization of the mirror reflection performance with energy.

  11. Towards a Reference Implementation of a Standardized Astronomical Software Environment

    NASA Astrophysics Data System (ADS)

    Paioro, L.; Garilli, B.; Grosbøl, P.; Tody, D.; Fenouillet, T.; Granet, Y.; Surace, C.

    2010-12-01

    The OPTICON Network 3.6 (FP6) and the US NVO, coordinating with international partners and the Virtual Observatory, have already identified high-level requirements and a global architectural design for a future astronomical software environment. In order to continue this project and demonstrate the concepts outlined, the new OPTICON Network 9.2 (FP7) was born and is working on a concrete prototype which will contribute to the development of a reference implementation of the basic core system to be jointly developed by the major partners. As the reference implementation stabilizes, we plan to work with selected groups within the astronomical community to port software to test the new environment and provide feedback for its further evolution. These groups will include both producers of new software as well as the major legacy systems (e.g. AIPS, CASA, IRAF/PyRAF, Starlink and ESO Common Pipeline Library).

  12. An ion beam analysis software based on ImageJ

    NASA Astrophysics Data System (ADS)

    Udalagama, C.; Chen, X.; Bettiol, A. A.; Watt, F.

    2013-07-01

    The suite of techniques (RBS, STIM, ERDS, PIXE, IL, IF, …) available in ion beam analysis yields a variety of rich information. Typically, after the initial challenge of acquiring data, we are then faced with the task of having to extract relevant information or to present the data in a format with the greatest impact. This process sometimes requires developing new software tools. When faced with such situations, the usual practice at the Centre for Ion Beam Applications (CIBA) in Singapore has been to use our computational expertise to develop ad hoc software tools as and when we need them. It then became apparent that the whole ion beam community can benefit from such tools; specifically, from a common software toolset that can be developed and maintained by everyone, with freedom to use and allowance to modify. In addition to the benefits of ready-made tools and sharing the onus of development, this also opens up the possibility for collaborators to access and analyse ion beam data without having to depend on an ion beam lab. This has the virtue of making the ion beam techniques more accessible to a broader scientific community. We have identified ImageJ as an appropriate software base on which to develop such a common toolset. In addition to being in the public domain and set up for collaborative tool development, ImageJ is accompanied by hundreds of modules (plugins) that allow great breadth in analysis. The present work is the first step towards integrating ion beam analysis into ImageJ. Some of the features of the current version of the ImageJ 'ion beam' plugin are: (1) reading list-mode or event-by-event files, (2) energy gates/sorts, (3) sort stacks, (4) colour function, (5) real-time map updating, (6) real-time colour updating and (7) median & average map creation.
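
    Two of the listed features, event-by-event reading and energy gating into a map, can be illustrated with a small Python stand-in; the event tuples, gate limits and array size are invented and do not correspond to the plugin's actual file format:

        import numpy as np

        def build_map(events, shape=(256, 256), energy_gate=(1.0, 3.0)):
            # events: iterable of (x, y, energy); gated events increment the pixel map.
            image = np.zeros(shape, dtype=np.int32)
            low, high = energy_gate
            for x, y, energy in events:
                if low <= energy <= high:
                    image[y, x] += 1
            return image

        events = [(10, 12, 2.4), (10, 12, 0.3), (200, 40, 1.7)]
        print(build_map(events)[12, 10])  # 1: the out-of-gate 0.3 event is rejected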

  13. Software for visualization, analysis, and manipulation of laser scan images

    NASA Astrophysics Data System (ADS)

    Burnsides, Dennis B.

    1997-03-01

    The recent introduction of laser surface scanning to scientific applications presents a challenge to computer scientists and engineers. Full utilization of this two- dimensional (2-D) and three-dimensional (3-D) data requires advances in techniques and methods for data processing and visualization. This paper explores the development of software to support the visualization, analysis and manipulation of laser scan images. Specific examples presented are from on-going efforts at the Air Force Computerized Anthropometric Research and Design (CARD) Laboratory.

  14. Content standards for medical image metadata

    NASA Astrophysics Data System (ADS)

    d'Ornellas, Marcos C.; da Rocha, Rafael P.

    2003-12-01

    Medical images are at the heart of healthcare diagnostic procedures. They have provided not only a noninvasive means to view anatomical cross-sections of internal organs but also a means for physicians to evaluate the patient's diagnosis and monitor the effects of treatment. For a medical center, the emphasis may shift from the generation of images to post-processing and data management, since the medical staff may generate even more processed images and other data from the original image after various analyses and post-processing. A medical image data repository for a health care information system is becoming a critical need. This data repository would contain comprehensive patient records, including information such as clinical data, related diagnostic images, and post-processed images. Due to the large volume and complexity of the data as well as the diversified user access requirements, the implementation of the medical image archive system will be a complex and challenging task. This paper discusses content standards for medical image metadata. In addition, it also focuses on the evaluation of image metadata content and on metadata quality management.

  15. Software Defined Radio Standard Architecture and its Application to NASA Space Missions

    NASA Technical Reports Server (NTRS)

    Andro, Monty; Reinhart, Richard C.

    2006-01-01

    A software defined radio (SDR) architecture used in space-based platforms proposes to standardize certain aspects of radio development such as interface definitions, functional control and execution, and application software and firmware development. NASA has chartered a team to develop an open software defined radio hardware and software architecture to support NASA missions and determine the viability of an Agency-wide standard. A draft concept of the proposed standard has been released and discussed among organizations in the SDR community. Appropriate leveraging of the JTRS SCA, OMG's SWRadio Architecture and other aspects is considered. A standard radio architecture offers potential value by employing common waveform software instantiation, operation, testing and software maintenance. While software defined radios offer greater flexibility, they also pose challenges to radio development for the space environment in terms of size, mass, power consumption and available technology. An SDR architecture for space must recognize and address the constraints of space flight hardware and systems, along with flight heritage and culture. NASA is actively participating in the development of technology and standards related to software defined radios. As NASA considers a standard radio architecture for space communications, input and coordination from government agencies, the industry, academia, and standards bodies is key to a successful architecture. The unique aspects of space require thorough investigation of relevant terrestrial technologies properly adapted to space. The talk will describe NASA's current effort to investigate SDR applications to space missions and give a brief overview of a candidate architecture under consideration for space-based platforms.

  16. Parallel-Processing Software for Creating Mosaic Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric

    2008-01-01

    A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
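
    The slice-per-CPU strategy can be sketched with Python's multiprocessing module; the warp step is a placeholder and the mosaic dimensions are invented, so this only illustrates how slices are farmed out to workers and reassembled:

        import numpy as np
        from multiprocessing import Pool

        MOSAIC_SHAPE = (1024, 8192)  # rows, columns of the final panorama (invented)

        def render_slice(bounds):
            row_start, row_stop = bounds
            # Placeholder: real code warps the source images into this band of the mosaic.
            return row_start, np.zeros((row_stop - row_start, MOSAIC_SHAPE[1]), np.uint8)

        def build_mosaic(n_workers=4):
            edges = np.linspace(0, MOSAIC_SHAPE[0], n_workers + 1, dtype=int)
            mosaic = np.zeros(MOSAIC_SHAPE, np.uint8)
            with Pool(n_workers) as pool:
                for start, band in pool.map(render_slice, zip(edges[:-1], edges[1:])):
                    mosaic[start:start + band.shape[0]] = band
            return mosaic

        if __name__ == "__main__":
            build_mosaic()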

  17. Standards for Instructional Computing Software Design and Development.

    ERIC Educational Resources Information Center

    Schaefermeyer, Shanna

    1990-01-01

    Identifies desirable features that should be included in software for effective instructional computing use. Highlights include design of learning activities; curriculum role; modes of instruction, including drill and practice, tutorials, games, simulation, and problem solving; branching; menu driven programs; screen displays; graphics; teacher…

  18. GILDAS: Grenoble Image and Line Data Analysis Software

    NASA Astrophysics Data System (ADS)

    Gildas Team

    2013-05-01

    GILDAS is a collection of software oriented toward (sub-)millimeter radioastronomical applications (either single-dish or interferometer). It has been adopted as the IRAM standard data reduction package and is jointly maintained by IRAM & CNRS. GILDAS contains many facilities, most of which are oriented towards spectral line mapping and many kinds of 3-dimensional data. The code, written in Fortran-90 with a few parts in C/C++ (mainly keyboard interaction, plotting, widgets), is easily extensible.

  19. Software tools of the Computis European project to process mass spectrometry images.

    PubMed

    Robbe, Marie-France; Both, Jean-Pierre; Prideaux, Brendan; Klinkert, Ivo; Picaud, Vincent; Schramm, Thorsten; Hester, Atfons; Guevara, Victor; Stoeckli, Markus; Roempp, Andreas; Heeren, Ron M A; Spengler, Bernhard; Gala, Olivier; Haan, Serge

    2014-01-01

    Among the needs usually expressed by teams using mass spectrometry imaging, one that often arises is that for user-friendly software able to manage huge data volumes quickly and to provide efficient assistance for the interpretation of data. To answer this need, the Computis European project developed several complementary software tools to process mass spectrometry imaging data. Data Cube Explorer provides simple spatial and spectral exploration for matrix-assisted laser desorption/ionisation-time of flight (MALDI-ToF) and time of flight-secondary-ion mass spectrometry (ToF-SIMS) data. SpectViewer offers visualisation functions, assistance with the interpretation of data, classification functionalities, peak-list extraction for interrogating biological databases, and image overlay, and it can process data generated by MALDI-ToF, ToF-SIMS and desorption electrospray ionisation (DESI) equipment. EasyReg2D is able to register two images, in American Standard Code for Information Interchange (ASCII) format, produced by different technologies. The collaboration between the teams was hampered by the multiplicity of equipment and data formats, so the project also developed a common data format (imzML) to facilitate the exchange of experimental data and their interpretation by the different software tools. The BioMap platform for visualisation and exploration of MALDI-ToF and DESI images was adapted to parse imzML files, making it accessible to all project partners and, more globally, to a larger community of users. Considering the huge advantages brought by the imzML standard format, a specific editor (vBrowser) for imzML files and converters from proprietary formats to imzML were developed to enable the use of the imzML format by a broad scientific community. This initiative paves the way toward the development of a large panel of software tools able to process mass spectrometry imaging datasets in the future.
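
    As a rough illustration of what consuming the imzML interchange format can look like, the sketch below uses the open-source pyimzML parser for Python; the parser, the file name and the m/z window are assumptions for the example and are not part of the Computis tools themselves.

      # Build a single-ion image from an imzML file (pyimzML assumed installed).
      import numpy as np
      from pyimzml.ImzMLParser import ImzMLParser

      parser = ImzMLParser("example.imzML")        # hypothetical file name
      target_mz, tol = 760.5, 0.25                 # illustrative m/z window

      xs = [c[0] for c in parser.coordinates]
      ys = [c[1] for c in parser.coordinates]
      image = np.zeros((max(ys), max(xs)))

      for idx, coord in enumerate(parser.coordinates):
          x, y = coord[0], coord[1]                # imzML pixel coordinates are 1-based
          mzs, intensities = parser.getspectrum(idx)
          mzs = np.asarray(mzs)
          window = (mzs > target_mz - tol) & (mzs < target_mz + tol)
          image[y - 1, x - 1] = np.asarray(intensities)[window].sum()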

  20. Woods Hole Image Processing System Software implementation; using NetCDF as a software interface for image processing

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    The Branch of Atlantic Marine Geology has been involved in the collection, processing and digital mosaicking of high-, medium- and low-resolution side-scan sonar data during the past 6 years. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. With the increased power and reduced cost of workstations, and the need to process side-scan data in the field, the Branch identified a need for an image processing package on a UNIX-based computer system that could be used in the field and be more generally available to Branch personnel. This report describes the initial development of that package, referred to as the Woods Hole Image Processing System (WHIPS). The software was developed using the Unidata NetCDF software interface to make the data more readily portable between different computer operating systems.
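
    The portability argument is easy to see in code. The following minimal sketch, using the present-day netCDF4 Python bindings (an assumption; WHIPS itself predates them), writes a raster image into a self-describing NetCDF file and reads it back; the dimension and variable names are illustrative rather than WHIPS conventions.

      import numpy as np
      from netCDF4 import Dataset

      pixels = (np.random.rand(512, 512) * 255).astype(np.uint8)  # stand-in sonar image

      with Dataset("mosaic.nc", "w") as nc:
          nc.createDimension("row", pixels.shape[0])
          nc.createDimension("col", pixels.shape[1])
          var = nc.createVariable("amplitude", "u1", ("row", "col"))
          var[:] = pixels
          nc.title = "side-scan sonar mosaic (example)"

      with Dataset("mosaic.nc") as nc:             # readable on any platform
          restored = nc.variables["amplitude"][:]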

  1. Development of Software to Model AXAF-I Image Quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Hawkins, Lamar

    1996-01-01

    This draft final report describes the work performed under delivery order number 145 from May 1995 through August 1996. The scope of work included a number of software development tasks for the performance modeling of AXAF-I. A number of new capabilities and functions have been added to the GT software, which is the command mode version of the GRAZTRACE software originally developed by MSFC. A structural data interface has been developed for the EAL (formerly SPAR) finite element analysis (FEA) program, which is being used by the MSFC Structural Analysis group for the analysis of AXAF-I. This interface utility can read the structural deformation file from EAL and other finite element analysis programs such as NASTRAN and COSMOS/M, and convert the data to a suitable format that can be used for deformation ray-tracing to predict the image quality for a distorted mirror. There is a provision in this utility to expand the data from finite element models assuming 180-degree symmetry. This utility has been used to predict image characteristics for the AXAF-I HRMA when subjected to gravity effects in the horizontal x-ray ground test configuration. The development of the metrology data processing interface software has also been completed. It can read the HDOS FITS format surface map files, manipulate and filter the metrology data, and produce a deformation file, which can be used by GT for ray tracing of the mirror surface figure errors. This utility has been used to determine the optimum alignment (axial spacing and clocking) for the four pairs of AXAF-I mirrors. Based on this optimized alignment, the geometric images and effective focal lengths for the as-built mirrors were predicted to cross-check the results obtained by Kodak.
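
    For readers unfamiliar with this kind of metrology pipeline, the sketch below shows the general idea of reading a FITS surface map, filtering it, and writing out a deformation array in Python (astropy and SciPy assumed). It is purely illustrative: the HDOS file layout, the GT input format and the actual filtering used by the utility are not reproduced here.

      import numpy as np
      from astropy.io import fits
      from scipy.ndimage import median_filter

      with fits.open("surface_map.fits") as hdul:    # hypothetical file name
          surface = hdul[0].data.astype(float)

      filtered = median_filter(surface, size=3)      # suppress isolated spikes
      deformation = filtered - np.nanmean(filtered)  # remove the piston term

      np.savetxt("deformation.txt", deformation)     # placeholder output format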

  2. Demineralization Depth Using QLF and a Novel Image Processing Software.

    PubMed

    Wu, Jun; Donly, Zachary R; Donly, Kevin J; Hackmyer, Steven

    2010-01-01

    Quantitative light-induced fluorescence (QLF) has been widely used to detect tooth demineralization indicated by fluorescence loss with respect to surrounding sound enamel. The correlation between fluorescence loss and demineralization depth is not fully understood. The purpose of this project was to study this correlation to estimate demineralization depth. Extracted teeth were collected. Artificial caries-like lesions were created and imaged with QLF. Novel image processing software was developed to measure the largest percent of fluorescence loss in the region of interest. All teeth were then sectioned and imaged by polarized light microscopy. The largest depth of demineralization was measured by NIH ImageJ software. The statistical linear regression method was applied to analyze these data. The linear regression model was Y = 0.32X + 0.17, where X was the percent loss of fluorescence and Y was the depth of demineralization. The correlation coefficient was 0.9696. The two-tailed t-test for the regression coefficient gave t = 7.93 (P = .0014), and the F test for the entire model gave F = 62.86 (P = .0013). The results indicated a statistically significant linear correlation between the percent loss of fluorescence and the depth of enamel demineralization.
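
    The reported fit can be reproduced in form with standard tools. The sketch below uses SciPy's linear regression on made-up placeholder data of the same shape as the study's (percent fluorescence loss versus lesion depth); it only illustrates the analysis, not the study's actual measurements.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      fluorescence_loss = np.array([5, 10, 15, 20, 25, 30], dtype=float)  # percent (illustrative)
      lesion_depth = 0.32 * fluorescence_loss + 0.17                      # same form as Y = 0.32X + 0.17
      lesion_depth += rng.normal(0, 0.2, lesion_depth.size)               # simulated measurement noise

      fit = stats.linregress(fluorescence_loss, lesion_depth)
      print(f"Y = {fit.slope:.2f}X + {fit.intercept:.2f}, r = {fit.rvalue:.4f}, p = {fit.pvalue:.4f}")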

  3. Demineralization Depth Using QLF and a Novel Image Processing Software

    PubMed Central

    Wu, Jun; Donly, Zachary R.; Donly, Kevin J.; Hackmyer, Steven

    2010-01-01

    Quantitative light-induced fluorescence (QLF) has been widely used to detect tooth demineralization indicated by fluorescence loss with respect to surrounding sound enamel. The correlation between fluorescence loss and demineralization depth is not fully understood. The purpose of this project was to study this correlation to estimate demineralization depth. Extracted teeth were collected. Artificial caries-like lesions were created and imaged with QLF. Novel image processing software was developed to measure the largest percent of fluorescence loss in the region of interest. All teeth were then sectioned and imaged by polarized light microscopy. The largest depth of demineralization was measured by NIH ImageJ software. The statistical linear regression method was applied to analyze these data. The linear regression model was Y = 0.32X + 0.17, where X was the percent loss of fluorescence and Y was the depth of demineralization. The correlation coefficient was 0.9696. The two-tailed t-test for the regression coefficient gave t = 7.93 (P = .0014), and the F test for the entire model gave F = 62.86 (P = .0013). The results indicated a statistically significant linear correlation between the percent loss of fluorescence and the depth of enamel demineralization. PMID:20445755

  4. Software for Verifying Image-Correlation Tie Points

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Yagi, Gary

    2008-01-01

    A computer program enables assessment of the quality of tie points in the image-correlation processes of the software described in the immediately preceding article. Tie points are computed in mappings between corresponding pixels in the left and right images of a stereoscopic pair. The mappings are sometimes not perfect because image data can be noisy and parallax can cause some points to appear in one image but not the other. The present computer program relies on the availability of a left-to-right correlation map in addition to the usual right-to-left correlation map. The additional map must be generated, which doubles the processing time; this increased time can now be afforded in the data-processing pipeline because the time for map generation has been reduced from about 60 minutes to 3 minutes by the parallelization discussed in the previous article. Parallel cluster processing therefore enabled this improved science result. The first mapping is typically from a point (denoted by coordinates x,y) in the left image to a point (x',y') in the right image. The second mapping is from (x',y') in the right image to some point (x",y") in the left image. If (x,y) and (x",y") are identical, then the mapping is considered perfect. The perfect-match criterion can be relaxed by introducing an error window that allows for round-off error and a small amount of noise. The mapping procedure can be repeated until all points in each image not connected to points in the other image are eliminated, so that what remains are verified correlation data.
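
    The round-trip test itself is straightforward to express. The following Python sketch applies the left-to-right map and then the right-to-left map to every pixel and keeps only points that return to within an error window of their starting position; the two maps here are synthetic placeholders, not output of the correlation software.

      import numpy as np

      H, W = 64, 64
      # map_lr[y, x] = (x', y') in the right image; map_rl[y', x'] = (x'', y'') back in the left image
      grid = np.stack(np.meshgrid(np.arange(W), np.arange(H)), axis=-1).astype(float)
      map_lr = grid.copy()
      map_rl = grid.copy() + np.random.normal(0, 0.3, grid.shape)  # simulated correlation noise

      tolerance = 1.0                                  # error window in pixels
      xp = np.clip(np.rint(map_lr[..., 0]).astype(int), 0, W - 1)
      yp = np.clip(np.rint(map_lr[..., 1]).astype(int), 0, H - 1)
      round_trip = map_rl[yp, xp]                      # (x'', y'') for every (x, y)
      verified = np.linalg.norm(round_trip - grid, axis=-1) <= tolerance
      print(f"{verified.mean():.1%} of tie points pass the round-trip check")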

  5. Software and Algorithms for Biomedical Image Data Processing and Visualization

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Lambert, James; Lam, Raymond

    2004-01-01

    A new software equipped with novel image processing algorithms and graphical-user-interface (GUI) tools has been designed for automated analysis and processing of large amounts of biomedical image data. The software, called PlaqTrak, has been specifically used for analysis of plaque on teeth of patients. New algorithms have been developed and implemented to segment teeth of interest from surrounding gum, and a real-time image-based morphing procedure is used to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The PlaqTrak system integrates these components into a single software suite with an easy-to-use GUI (see Figure 1) that allows users to do an end-to-end run of a patient's record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image. The automated and accurate processing of the captured images to segment each tooth [see Figure 2(a)] and then detect plaque on a tooth-by-tooth basis is a critical component of the PlaqTrak system to do clinical trials and analysis with minimal human intervention. These features offer distinct advantages over other competing systems that analyze groups of teeth or synthetic teeth. PlaqTrak divides each segmented tooth into eight regions using an advanced graphics morphing procedure [see results on a chipped tooth in Figure 2(b)], and a pattern recognition classifier is then used to locate plaque [red regions in Figure 2(d)] and enamel regions. The morphing allows analysis within regions of teeth, thereby facilitating detailed statistical analysis such as the amount of plaque present on the biting surfaces on teeth. This software system is applicable to a host of biomedical applications, such as cell analysis and life detection, or robotic applications, such

  6. Integration of HIS components through open standards: an American HIS and a European Image Processing System.

    PubMed Central

    London, J. W.; Engelmann, U.; Morton, D. E.; Meinzer, H. P.; Degoulet, P.

    1993-01-01

    This paper describes the integration of an existing American Hospital Information System with a European Image Processing System. Both systems were built independently (with no knowledge of each other), but on open systems standards. The easy integration of these systems demonstrates the major benefit of open standards-based software design. PMID:8130452

  7. Planning the Unplanned Experiment: Assessing the Efficacy of Standards for Safety Critical Software

    NASA Technical Reports Server (NTRS)

    Graydon, Patrick J.; Holloway, C. Michael

    2015-01-01

    We need well-founded means of determining whether software is fit for use in safety-critical applications. While software in industries such as aviation has an excellent safety record, the fact that software flaws have contributed to deaths illustrates the need for justifiably high confidence in software. It is often argued that software is fit for safety-critical use because it conforms to a standard for software in safety-critical systems. But little is known about whether such standards 'work.' Reliance upon a standard without knowing whether it works is an experiment; without collecting data to assess the standard, this experiment is unplanned. This paper reports on a workshop intended to explore how standards could practicably be assessed. Planning the Unplanned Experiment: Assessing the Efficacy of Standards for Safety Critical Software (AESSCS) was held on 13 May 2014 in conjunction with the European Dependable Computing Conference (EDCC). We summarize and elaborate on the workshop's discussion of the topic, including both the presented positions and the dialogue that ensued.

  8. Standardized food images: A photographing protocol and image database.

    PubMed

    Charbonnier, Lisette; van Meer, Floor; van der Laan, Laura N; Viergever, Max A; Smeets, Paul A M

    2016-01-01

    The regulation of food intake has gained much research interest because of the current obesity epidemic. For research purposes, food images are a good and convenient alternative for real food because many dietary decisions are made based on the sight of foods. Food pictures are assumed to elicit anticipatory responses similar to real foods because of learned associations between visual food characteristics and post-ingestive consequences. In contemporary food science, a wide variety of images are used which introduces between-study variability and hampers comparison and meta-analysis of results. Therefore, we created an easy-to-use photographing protocol which enables researchers to generate high resolution food images appropriate for their study objective and population. In addition, we provide a high quality standardized picture set which was characterized in seven European countries. With the use of this photographing protocol a large number of food images were created. Of these images, 80 were selected based on their recognizability in Scotland, Greece and The Netherlands. We collected image characteristics such as liking, perceived calories and/or perceived healthiness ratings from 449 adults and 191 children. The majority of the foods were recognized and liked at all sites. The differences in liking ratings, perceived calories and perceived healthiness between sites were minimal. Furthermore, perceived caloric content and healthiness ratings correlated strongly (r ≥ 0.8) with actual caloric content in both adults and children. The photographing protocol as well as the images and the data are freely available for research use on http://nutritionalneuroscience.eu/. By providing the research community with standardized images and the tools to create their own, comparability between studies will be improved and a head-start is made for a world-wide standardized food image database.

  9. Software architecture standard for simulation virtual machine, version 2.0

    NASA Technical Reports Server (NTRS)

    Sturtevant, Robert; Wessale, William

    1994-01-01

    The Simulation Virtual Machine (SVM) is an Ada architecture which eases the effort involved in real-time software maintenance and sustaining engineering. The Software Architecture Standard defines the infrastructure from which all the simulation models are built. SVM was developed for and used in the Space Station Verification and Training Facility.

  10. Special Software for Planetary Image Processing and Research

    NASA Astrophysics Data System (ADS)

    Zubarev, A. E.; Nadezhdina, I. E.; Kozlova, N. A.; Brusnikin, E. S.; Karachevtseva, I. P.

    2016-06-01

    Special modules for photogrammetric processing of remote sensing data were developed to provide the opportunity to effectively organize and optimize planetary studies. The commercial software package PHOTOMOD™ is used as the base application. Special modules were created to perform various types of data processing: calculation of preliminary navigation parameters, calculation of shape parameters of a celestial body, global-view image orthorectification, and estimation of Sun illumination and Earth visibility from the planetary surface. For photogrammetric processing, different types of data have been used, including images of the Moon, Mars, Mercury, Phobos, the Galilean satellites and Enceladus obtained by frame or push-broom cameras. We used modern planetary data and images taken over many years, acquired from orbit under various illumination conditions and at various resolutions, as well as obtained by planetary rovers from the surface. Planetary image processing is a complex task that can usually take from a few months to years. We present our efficient pipeline procedure that provides the possibility to obtain different data products and supports the long path from planetary images to celestial-body maps. The obtained data - new three-dimensional control point networks, elevation models, orthomosaics - enabled accurate map production: a new Phobos atlas (Karachevtseva et al., 2015) and various thematic maps derived from studies of the planetary surface (Karachevtseva et al., 2016a).

  11. OsiriX: an open-source software for navigating in multidimensional DICOM images.

    PubMed

    Rosset, Antoine; Spadola, Luca; Ratib, Osman

    2004-09-01

    Multidimensional image navigation and display software was designed for the display and interpretation of large sets of multidimensional and multimodality images such as combined PET-CT studies. The software is developed in Objective-C on a Macintosh platform under the MacOS X operating system using the GNUstep development environment. It also benefits from the extremely fast and optimized 3D graphic capabilities of the OpenGL graphic standard, widely used in computer games and optimized to take advantage of any available hardware graphic accelerator boards. In the design of the software, special attention was given to adapting the user interface to the specific and complex tasks of navigating through large sets of image data. An interactive jog-wheel device widely used in the video and movie industry was implemented to allow users to navigate in the different dimensions of an image set much faster than with a traditional mouse or on-screen cursors and sliders. The program can easily be adapted for very specific tasks that require a limited number of functions, by adding and removing tools from the program's toolbar and avoiding an overwhelming number of unnecessary tools and functions. The processing and image rendering tools of the software are based on the open-source libraries ITK and VTK. This ensures that all new developments in image processing that could emerge from other academic institutions using these libraries can be directly ported to the OsiriX program. OsiriX is provided free of charge under the GNU open-source licensing agreement at http://homepage.mac.com/rossetantoine/osirix.
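
    To give a flavor of what the underlying ITK layer makes easy, the sketch below loads a DICOM series into a single multidimensional volume using the SimpleITK Python binding; this is an assumed, generic illustration rather than OsiriX code, and the directory path is a placeholder.

      import SimpleITK as sitk

      reader = sitk.ImageSeriesReader()
      files = reader.GetGDCMSeriesFileNames("/path/to/dicom_series")  # placeholder path
      reader.SetFileNames(files)
      volume = reader.Execute()                  # one multidimensional image object

      array = sitk.GetArrayFromImage(volume)     # (slices, rows, cols) NumPy view
      print(volume.GetSize(), volume.GetSpacing())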

  12. Towards establishing compact imaging spectrometer standards

    USGS Publications Warehouse

    Slonecker, E. Terrence; Allen, David W.; Resmini, Ronald G.

    2016-01-01

    Remote sensing science is currently undergoing a tremendous expansion in the area of hyperspectral imaging (HSI) technology. Spurred largely by the explosive growth of Unmanned Aerial Vehicles (UAV), sometimes called Unmanned Aircraft Systems (UAS), or drones, HSI capabilities that once required access to one of only a handful of very specialized and expensive sensor systems are now miniaturized and widely available commercially. Small compact imaging spectrometers (CIS) now on the market offer a number of hyperspectral imaging capabilities in terms of spectral range and sampling. The potential uses of HSI/CIS on UAVs/UASs seem limitless. However, the rapid expansion of unmanned aircraft and small hyperspectral sensor capabilities has created a number of questions related to technological, legal, and operational capabilities. Lightweight sensor systems suitable for UAV platforms are being advertised in the trade literature at an ever-expanding rate with no standardization of system performance specifications or terms of reference. To address this issue, both the U.S. Geological Survey and the National Institute of Standards and Technology are developing draft standards to meet these needs. This paper presents the outline of a combined USGS/NIST cooperative strategy to develop and test a characterization methodology to meet the needs of a new and expanding UAV/CIS/HSI user community.

  13. 'Face value': new medical imaging software in commercial view.

    PubMed

    Coopmans, Catelijne

    2011-04-01

    Based on three ethnographic vignettes describing the engagements of a small start-up company with prospective competitors, partners and customers, this paper shows how commercial considerations are folded into the ways visual images become 'seeable'. When company members mount demonstrations of prototype mammography software, they seek to generate interest but also to protect their intellectual property. Pivotal to these efforts to manage revelation and concealment is the visual interface, which is variously performed as obstacle and ally in the development of a profitable product. Using the concept of 'face value', the paper seeks to develop further insight into contemporary dynamics of seeing and showing by tracing the way techno-visual presentations and commercial considerations become entangled in practice. It also draws attention to the salience and significance of enactments of surface and depth in image-based practices. PMID:21998921

  14. Crystallographic Image Processing Software for Scanning Probe Microscopists

    NASA Astrophysics Data System (ADS)

    Plachinda, Pavel; Moon, Bill; Moeck, Peter

    2010-03-01

    Following the common practice of structural electron crystallography, scanning probe microscopy (SPM) images can be processed "crystallographically" [1,2]. An estimate of the point spread function of the SPM can be obtained and subsequently its influence removed from the images. Also, a difference Fourier synthesis can be calculated in order to enhance the visibility of structural defects. We are currently in the process of developing dedicated PC-based software for the wider SPM community. [1] P. Moeck, B. Moon Jr., M. Abdel-Hafiez, and M. Hietschold, Proc. NSTI 2009, Houston, May 3-7, 2009, Vol. I (2009) 314-317 (ISBN: 978-1-4398-1782-7). [2] P. Moeck, M. Toader, M. Abdel-Hafiez, and M. Hietschold, Proc. 2009 International Conference on Frontiers of Characterization and Metrology for Nanoelectronics, May 11-14, 2009, Albany, New York, Best Paper Award.

  15. 'Face value': new medical imaging software in commercial view.

    PubMed

    Coopmans, Catelijne

    2011-04-01

    Based on three ethnographic vignettes describing the engagements of a small start-up company with prospective competitors, partners and customers, this paper shows how commercial considerations are folded into the ways visual images become 'seeable'. When company members mount demonstrations of prototype mammography software, they seek to generate interest but also to protect their intellectual property. Pivotal to these efforts to manage revelation and concealment is the visual interface, which is variously performed as obstacle and ally in the development of a profitable product. Using the concept of 'face value', the paper seeks to develop further insight into contemporary dynamics of seeing and showing by tracing the way techno-visual presentations and commercial considerations become entangled in practice. It also draws attention to the salience and significance of enactments of surface and depth in image-based practices.

  16. Planning the Unplanned Experiment: Towards Assessing the Efficacy of Standards for Safety-Critical Software

    NASA Technical Reports Server (NTRS)

    Graydon, Patrick J.; Holloway, C. M.

    2015-01-01

    Safe use of software in safety-critical applications requires well-founded means of determining whether software is fit for such use. While software in industries such as aviation has a good safety record, little is known about whether standards for software in safety-critical applications 'work' (or even what that means). It is often (implicitly) argued that software is fit for safety-critical use because it conforms to an appropriate standard. Without knowing whether a standard works, such reliance is an experiment; without carefully collecting assessment data, that experiment is unplanned. To help plan the experiment, we organized a workshop to develop practical ideas for assessing software safety standards. In this paper, we relate and elaborate on the workshop discussion, which revealed subtle but important study design considerations and practical barriers to collecting appropriate historical data and recruiting appropriate experimental subjects. We discuss assessing standards as written and as applied, several candidate definitions for what it means for a standard to 'work,' and key assessment strategies and study techniques and the pros and cons of each. Finally, we conclude with thoughts about the kinds of research that will be required and how academia, industry, and regulators might collaborate to overcome the noted barriers.

  17. Software development for ACR-approved phantom-based nuclear medicine tomographic image quality control with cross-platform compatibility

    NASA Astrophysics Data System (ADS)

    Oh, Jungsu S.; Choi, Jae Min; Nam, Ki Pyo; Chae, Sun Young; Ryu, Jin-Sook; Moon, Dae Hyuk; Kim, Jae Seung

    2015-07-01

    Quality control and quality assurance (QC/QA) have been two of the most important issues in modern nuclear medicine (NM) imaging for both clinical practice and academic research. Whereas quantitative QC analysis software is common to modern positron emission tomography (PET) scanners, the QC of gamma cameras and/or single-photon-emission computed tomography (SPECT) scanners has not been sufficiently addressed. Although a thorough standard operating process (SOP) for mechanical and software maintenance may help the QC/QA of a gamma camera and SPECT-computed tomography (CT), no previous study has addressed a unified platform or process to decipher or analyze SPECT phantom images acquired from various scanners. In addition, few approaches have established cross-platform software to enable technologists and physicists to assess the variety of SPECT scanners from different manufacturers. To resolve these issues, we have developed Interactive Data Language (IDL)-based in-house software for cross-platform (in terms of not only operating systems (OS) but also manufacturers) analyses of the QC data on an ACR SPECT phantom, which is essential for assessing and assuring the tomographic image quality of SPECT. We applied our devised software to our routine quarterly QC of ACR SPECT phantom images acquired from a number of platforms (OS/manufacturers). Based on our experience, we suggest that our devised software can offer a unified platform that allows images acquired from various types of scanners to be analyzed with great precision and accuracy.

  18. Vobi One: a data processing software package for functional optical imaging.

    PubMed

    Takerkart, Sylvain; Katz, Philippe; Garcia, Flavien; Roux, Sébastien; Reynaud, Alexandre; Chavane, Frédéric

    2014-01-01

    Optical imaging is the only technique that allows recording the activity of a neuronal population at the mesoscopic scale. A large region of the cortex (10-20 mm diameter) is directly imaged with a CCD camera while the animal performs a behavioral task, producing spatio-temporal data with an unprecedented combination of spatial and temporal resolutions (respectively, tens of micrometers and milliseconds). However, researchers who have developed and used this technique have relied on heterogeneous software and methods to analyze their data. In this paper, we introduce Vobi One, a software package entirely dedicated to the processing of functional optical imaging data. It has been designed to facilitate the processing of data and the comparison of different analysis methods. Moreover, it should help bring good analysis practices to the community because it relies on a database and a standard format for data handling and provides tools that support reproducible research. Vobi One is an extension of the BrainVISA software platform, entirely written in the Python programming language, open source and freely available for download at https://trac.int.univ-amu.fr/vobi_one.

  19. Development of image-processing software for automatic segmentation of brain tumors in MR images.

    PubMed

    Vijayakumar, C; Gharpure, Damayanti Chandrashekhar

    2011-07-01

    Most of the commercially available software packages for brain tumor segmentation have limited functionality and frequently lack the careful validation that is required for clinical studies. We have developed an image-analysis software package called 'Prometheus,' which performs neural-system-based segmentation operations on MR images using pre-trained information. The software also has the capability to improve its segmentation performance by using the training module of the neural system. The aim of this article is to present the design and modules of this software. The segmentation module of Prometheus can be used primarily for image analysis of MR images. Prometheus was validated against manual segmentation by a radiologist and its mean sensitivity and specificity were found to be 85.71±4.89% and 93.2±2.87%, respectively. Similarly, the mean segmentation accuracy and mean correspondence ratio were found to be 92.35±3.37% and 0.78±0.046, respectively. PMID:21897560

  20. Two-Dimensional Gel Electrophoresis Image Analysis via Dedicated Software Packages.

    PubMed

    Maurer, Martin H

    2016-01-01

    Analyzing two-dimensional gel electrophoretic images is supported by a number of freely and commercially available software packages. Although each program has its own specifics, all the programs follow certain standardized algorithms. General steps are: (1) detecting and separating individual spots, (2) subtracting background, (3) creating a reference gel and (4) matching the spots to the reference gel, (5) modifying the reference gel, (6) normalizing the gel measurements for comparison, (7) calibrating for isoelectric point and molecular weight markers, and moreover, (8) constructing a database containing the measurement results and (9) comparing data by statistical and bioinformatic methods.
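
    As a hedged illustration of the first two generic steps (spot detection and background subtraction), the Python sketch below uses scikit-image on a hypothetical gel scan; it is not taken from any of the packages discussed here.

      import numpy as np
      from skimage import io, util
      from skimage.morphology import white_tophat, disk
      from skimage.feature import blob_log

      gel = util.img_as_float(io.imread("gel.png", as_gray=True))  # hypothetical image
      spots = 1.0 - gel                      # gel spots are dark; work on the inverse
      spots = white_tophat(spots, disk(15))  # step 2: remove slowly varying background

      # step 1: detect individual spots as local blobs (rows of y, x, sigma)
      detections = blob_log(spots, min_sigma=2, max_sigma=10, threshold=0.05)
      print(f"detected {len(detections)} candidate spots")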

  1. Predictive images of postoperative levator resection outcome using image processing software

    PubMed Central

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    Purpose This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller’s muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Conclusion Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. PMID:27757008

  2. GelClust: a software tool for gel electrophoresis images analysis and dendrogram generation.

    PubMed

    Khakabimamaghani, Sahand; Najafi, Ali; Ranjbar, Reza; Raam, Monireh

    2013-08-01

    This paper presents GelClust, new software designed for processing gel electrophoresis images and generating the corresponding phylogenetic trees. Unlike most related commercial and non-commercial software packages, GelClust is very user-friendly and guides the user from image to dendrogram through seven simple steps. Furthermore, the software, which is implemented in the C# programming language under the Windows operating system, is more accurate than similar software in its image processing and is the only software able to detect and correct gel 'smile' effects completely automatically. These claims are supported by experiments.
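
    The final step GelClust automates, going from band patterns to a dendrogram, can be illustrated with standard SciPy routines. The band-presence matrix below is made up, and the distance metric and linkage method are common choices rather than the ones GelClust necessarily uses.

      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.cluster.hierarchy import linkage, dendrogram
      import matplotlib.pyplot as plt

      # One row per lane, one column per band position (1 = band present); made-up data.
      lanes = np.array([[1, 0, 1, 1, 0],
                        [1, 0, 1, 0, 0],
                        [0, 1, 0, 1, 1],
                        [0, 1, 1, 1, 1]])
      labels = ["isolate A", "isolate B", "isolate C", "isolate D"]

      distances = pdist(lanes, metric="jaccard")   # band-pattern dissimilarity
      tree = linkage(distances, method="average")  # UPGMA, common for gel typing
      dendrogram(tree, labels=labels)
      plt.tight_layout()
      plt.show()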

  3. GelClust: a software tool for gel electrophoresis images analysis and dendrogram generation.

    PubMed

    Khakabimamaghani, Sahand; Najafi, Ali; Ranjbar, Reza; Raam, Monireh

    2013-08-01

    This paper presents GelClust, new software designed for processing gel electrophoresis images and generating the corresponding phylogenetic trees. Unlike most related commercial and non-commercial software packages, GelClust is very user-friendly and guides the user from image to dendrogram through seven simple steps. Furthermore, the software, which is implemented in the C# programming language under the Windows operating system, is more accurate than similar software in its image processing and is the only software able to detect and correct gel 'smile' effects completely automatically. These claims are supported by experiments. PMID:23727299

  4. A Survey of DICOM Viewer Software to Integrate Clinical Research and Medical Imaging.

    PubMed

    Haak, Daniel; Page, Charles-E; Deserno, Thomas M

    2016-04-01

    The digital imaging and communications in medicine (DICOM) protocol is the leading standard for image data management in healthcare. Imaging biomarkers and image-based surrogate endpoints in clinical trials and medical registries require DICOM viewer software with advanced functionality for visualization and interfaces for integration. In this paper, a comprehensive evaluation of 28 DICOM viewers is performed. The evaluation criteria are obtained from application scenarios in clinical research rather than patient care. They include (i) platform, (ii) interface, (iii) support, (iv) two-dimensional (2D), and (v) three-dimensional (3D) viewing. On average, 4.48 of the 8 two-dimensional and 1.43 of the 5 three-dimensional viewing criteria are satisfied. Suitable DICOM interfaces for central viewing in hospitals are provided by GingkoCADx, MIPAV, and OsiriX Lite. The viewers ImageJ, MicroView, MIPAV, and OsiriX Lite offer all included 3D-rendering features for advanced viewing. Interfaces needed for decentral viewing in web-based systems are offered by Oviyam, Weasis, and Xero. Focusing on open source components, MIPAV is the best candidate for 3D imaging as well as DICOM communication. Weasis is superior for workflow optimization in clinical trials. Our evaluation shows that advanced visualization and suitable interfaces can also be found in the open source field and not only in commercial products.
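
    For research integrations of the kind evaluated here, the minimum requirement is programmatic access to DICOM headers and pixel data. The sketch below uses the pydicom library as a generic illustration (not one of the surveyed viewers); the file name and the printed tags are placeholders.

      import pydicom

      ds = pydicom.dcmread("image.dcm")          # hypothetical file
      print(ds.PatientID, ds.Modality, ds.StudyInstanceUID)

      pixels = ds.pixel_array                    # NumPy array of the stored frame
      print(pixels.shape, pixels.dtype)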

  5. Understanding the Perception of Very Small Software Companies towards the Adoption of Process Standards

    NASA Astrophysics Data System (ADS)

    Basri, Shuib; O'Connor, Rory V.

    This paper is concerned with understanding the issues that affect the adoption of software process standards by Very Small Entities (VSEs), their needs from process standards, and their willingness to engage with the new ISO/IEC 29110 standard in particular. In order to achieve this goal, a series of industry data collection studies were undertaken with a collection of VSEs. A twin-track approach of qualitative data collection (interviews and focus groups) and quantitative data collection (a questionnaire) was undertaken. Data analysis was completed separately for each track, and the final results were merged using the coding mechanisms of grounded theory. This paper serves as a roadmap both for researchers wishing to understand the issues of process standards adoption by very small companies and for the software process standards community.

  6. Software defined multi-spectral imaging for Arctic sensor networks

    NASA Astrophysics Data System (ADS)

    Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi

    2016-05-01

    Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry-Riddle Aeronautical University, working with the University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder, have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power-efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on-camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop
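
    A heavily simplified version of one fusion workload measured in the study, aligning a lower-resolution long-wave infrared frame to a visible frame and blending them, can be sketched with OpenCV as follows; the file names and blend weights are placeholders, and the real SDMSI fusion and depth mapping are considerably more involved.

      import cv2

      visible = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)  # e.g. high-definition visible frame
      lwir = cv2.imread("lwir.png", cv2.IMREAD_GRAYSCALE)        # e.g. lower-resolution LWIR frame

      lwir_up = cv2.resize(lwir, (visible.shape[1], visible.shape[0]),
                           interpolation=cv2.INTER_LINEAR)
      fused = cv2.addWeighted(visible, 0.6, lwir_up, 0.4, 0.0)   # fixed-weight blend
      cv2.imwrite("fused.png", fused)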

  7. Efficient 3D rendering for web-based medical imaging software: a proof of concept

    NASA Astrophysics Data System (ADS)

    Cantor-Rivera, Diego; Bartha, Robert; Peters, Terry

    2011-03-01

    Medical Imaging Software (MIS) found in research and in clinical practice, such as in Picture Archiving and Communication Systems (PACS) and Radiology Information Systems (RIS), has not been able to take full advantage of the Internet as a deployment platform. MIS is usually tightly coupled to algorithms that have substantial hardware and software requirements. Consequently, MIS is deployed on thick clients, which usually leads project managers to allocate more resources during the deployment phase of the application than would be allocated if the application were deployed through a web interface. To minimize the costs associated with this scenario, many software providers use or develop plug-ins to provide the delivery platform (internet browser) with the features to load, interact with, and analyze medical images. Nevertheless, no standard means of achieving this goal has been successful so far. This paper presents a study of WebGL as an alternative to plug-in development for efficient rendering of 3D medical models and DICOM images. WebGL is a technology that enables the internet browser to access the local graphics hardware in a native fashion. Because it is based on OpenGL, a widely accepted graphics industry standard, WebGL is being implemented in most of the major commercial browsers. After a discussion of the details of the technology, a series of experiments is presented to determine the operational boundaries within which WebGL is adequate for MIS. A comparison with current alternatives is also addressed. Finally, conclusions and future work are discussed.

  8. Collaboration using open standards and open source software (examples of DIAS/CEOS Water Portal)

    NASA Astrophysics Data System (ADS)

    Miura, S.; Sekioka, S.; Kuroiwa, K.; Kudo, Y.

    2015-12-01

    The DIAS/CEOS Water Portal is a part of the DIAS (Data Integration and Analysis System, http://www.editoria.u-tokyo.ac.jp/projects/dias/?locale=en_US) systems for data distribution to users including, but not limited to, scientists, decision makers and officers such as river administrators. One of the functions of this portal is to enable one-stop search of, and access to, various water-related data archived at multiple data centers located all over the world. This portal itself does not store data. Instead, according to requests made by users on the web page, it retrieves data from distributed data centers on the fly and lets users download the data and see rendered images/plots. Our system mainly relies on the open source software GI-cat (http://essi-lab.eu/do/view/GIcat) and open standards such as OGC-CSW, OpenSearch and the OPeNDAP protocol to enable the above functions. Details on how it works will be introduced during the presentation. Although some data centers have unique metadata formats and/or data search protocols, our portal's brokering function enables users to search across various data centers at one time. This portal is also connected to other data brokering systems, including the GEOSS DAB (Discovery and Access Broker). As a result, users can search over thousands of datasets and millions of files at one time. Users can access the DIAS/CEOS Water Portal system at http://waterportal.ceos.org/.
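
    One of the open standards mentioned, OPeNDAP, lets a client subset remote data without downloading whole files. The sketch below shows this style of access with the netCDF4 Python library; the endpoint URL and variable name are placeholders, not actual portal services.

      from netCDF4 import Dataset

      url = "http://example.org/opendap/precipitation.nc"        # hypothetical endpoint
      with Dataset(url) as ds:
          precip = ds.variables["precip"][0, 100:200, 100:200]   # only this slab is transferred
          print(precip.shape)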

  9. JUPOS : Amateur analysis of Jupiter images with specialized measurement software

    NASA Astrophysics Data System (ADS)

    Jacquesson, M.; Mettig, H.-J.

    2008-09-01

    spectral range (sorted by descending priority): color; monochrome red; IR broadband; green; not blue, except in particular cases; not narrow band (except for the methane band at 889 nm). 5) Correct alignment of RGB color images from 3 monochrome frames. 6) Choose images of better quality if several are available from about the same time. An important prerequisite is to adjust the outline frame correctly. Problems include: phase (darkening of the terminator); limb darkening; tilt of the image (belt edges are not always horizontal); north-south asymmetries (rare); mirror-inverted images; invisibility of the illuminated limb in IR broadband and methane images. How to adjust the outline frame: increase the luminosity and gamma to display the "real limb" (this alone is not sufficient, since many images do not show the real limb because of the image processing); use positions of satellites and their shadows if visible; refer to latitudes of permanent or long-lived objects, but only from recent images, as their latitude can vary; set the frame first on the limb and on the north and south poles, not at the terminator; in a series of images taken about 1.5 hours apart, the same object must have the same position (+/- 0.5°). Measuring objects: 1) Place the WinJUPOS cursor onto the feature's centre. What to measure and what to omit: some regions of Jupiter with much activity (the SEB at present) show many small features that we omit, because the finest details are often indistinguishable from image artefacts and noise, and because they are often short-lived and appear as "noise" in drift charts; omit measuring features too close to the planet's limb; measuring the center of extended objects (e.g. the GRS) is problematic, since visual estimation can give a systematic error (the solution is to rotate the image); diffuse features have no clear boundaries. 2) Enter the standard JUPOS code of the object (longitude and latitude are automatically computed). 3) Optional: add a description of particular characteristics of the

  10. WorkstationJ: workstation emulation software for medical image perception and technology evaluation research

    NASA Astrophysics Data System (ADS)

    Schartz, Kevin M.; Berbaum, Kevin S.; Caldwell, Robert T.; Madsen, Mark T.

    2007-03-01

    We developed image presentation software that mimics the functionality available in the clinic, but also records time-stamped, observer-display interactions and is readily deployable on diverse workstations making it possible to collect comparable observer data at multiple sites. Commercial image presentation software for clinical use has limited application for research on image perception, ergonomics, computer-aids and informatics because it does not collect observer responses, or other information on observer-display interactions, in real time. It is also very difficult to collect observer data from multiple institutions unless the same commercial software is available at different sites. Our software not only records observer reports of abnormalities and their locations, but also inspection time until report, inspection time for each computed radiograph and for each slice of tomographic studies, window/level, and magnification settings used by the observer. The software is a modified version of the open source ImageJ software available from the National Institutes of Health. Our software involves changes to the base code and extensive new plugin code. Our free software is currently capable of displaying computed tomography and computed radiography images. The software is packaged as Java class files and can be used on Windows, Linux, or Mac systems. By deploying our software together with experiment-specific script files that administer experimental procedures and image file handling, multi-institutional studies can be conducted that increase reader and/or case sample sizes or add experimental conditions.

  11. The Effects of Personalized Practice Software on Learning Math Standards in the Third through Fifth Grades

    ERIC Educational Resources Information Center

    Gomez, Angela Nicole

    2012-01-01

    The purpose of this study was to investigate the effectiveness of "MathFacts in a Flash" software in helping students learn math standards. In each of their classes, the third-, fourth-, and fifth-grade students in a small private Roman Catholic school from the Pacific Northwest were randomly assigned either to a control group that used…

  12. Variability and accuracy of different software packages for dynamic susceptibility contrast magnetic resonance imaging for distinguishing glioblastoma progression from pseudoprogression

    PubMed Central

    Kelm, Zachary S.; Korfiatis, Panagiotis D.; Lingineni, Ravi K.; Daniels, John R.; Buckner, Jan C.; Lachance, Daniel H.; Parney, Ian F.; Carter, Rickey E.; Erickson, Bradley J.

    2015-01-01

    Determining whether glioblastoma multiforme (GBM) is progressing despite treatment is challenging due to the pseudoprogression phenomenon seen on conventional MRIs, but relative cerebral blood volume (CBV) has been shown to be helpful. As CBV’s calculation from perfusion-weighted images is not standardized, we investigated whether there were differences between three FDA-cleared software packages in their CBV output values and subsequent performance regarding predicting survival/progression. Forty-five postradiation therapy GBM cases were retrospectively identified as having indeterminate MRI findings of progression versus pseudoprogression. The dynamic susceptibility contrast MR images were processed with different software and three different relative CBV metrics based on the abnormally enhancing regions were computed. The intersoftware intraclass correlation coefficients were 0.8 and below, depending on the metric used. No statistically significant difference in progression determination performance was found between the software packages, but performance was better for the cohort imaged at 3.0 T versus those imaged at 1.5 T for many relative CBV metric and classification criteria combinations. The results revealed clinically significant variation in relative CBV measures based on the software used, but minimal interoperator variation. We recommend against using specific relative CBV measurement thresholds for GBM progression determination unless the same software or processing algorithm is used. PMID:26158114

  13. Design and validation of Segment - freely available software for cardiovascular image analysis

    PubMed Central

    2010-01-01

    Background Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Results Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http

  14. Software-Assisted Depth Analysis of Optic Nerve Stereoscopic Images in Telemedicine.

    PubMed

    Xia, Tian; Patel, Shriji N; Szirth, Ben C; Kolomeyer, Anton M; Khouri, Albert S

    2016-01-01

    Background. Software-guided optic nerve assessment can assist in process automation and reduce interobserver disagreement. We tested depth analysis software (DAS) in assessing optic nerve cup-to-disc ratio (VCD) from stereoscopic optic nerve images (SONI) of normal eyes. Methods. In a prospective study, simultaneous SONI from normal subjects were collected during telemedicine screenings using a Kowa 3Wx nonmydriatic simultaneous stereoscopic retinal camera (Tokyo, Japan). VCD was determined from SONI pairs and proprietary pixel DAS (Kowa Inc., Tokyo, Japan) after disc and cup contour line placement. A nonstereoscopic VCD was determined using the right channel of a stereo pair. Mean, standard deviation, t-test, and the intraclass correlation coefficient (ICCC) were calculated. Results. Thirty-two patients had a mean age of 40 ± 14 years. Mean VCD on SONI was 0.36 ± 0.09, with DAS 0.38 ± 0.08, and with nonstereoscopic assessment 0.29 ± 0.12. The difference between stereoscopic and DAS-assisted measurements was not significant (p = 0.45). ICCC showed agreement between stereoscopic and software VCD assessment. Mean VCD difference was significant between nonstereoscopic and stereoscopic (p < 0.05) and nonstereoscopic and DAS (p < 0.005) recordings. Conclusions. DAS successfully assessed SONI and showed a high degree of correlation to physician-determined stereoscopic VCD. PMID:27190507
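
    The paired comparison reported above (stereoscopic versus DAS-assisted VCD) can be illustrated with SciPy; the measurement vectors below are invented placeholders chosen only to resemble the reported means, not the study's data.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      vcd_stereo = rng.normal(0.36, 0.09, 32)            # illustrative values
      vcd_das = vcd_stereo + rng.normal(0.02, 0.03, 32)

      t_stat, p_value = stats.ttest_rel(vcd_stereo, vcd_das)
      print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")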

  15. Software-Assisted Depth Analysis of Optic Nerve Stereoscopic Images in Telemedicine

    PubMed Central

    Xia, Tian; Patel, Shriji N.; Szirth, Ben C.

    2016-01-01

    Background. Software-guided optic nerve assessment can assist in process automation and reduce interobserver disagreement. We tested depth analysis software (DAS) in assessing optic nerve cup-to-disc ratio (VCD) from stereoscopic optic nerve images (SONI) of normal eyes. Methods. In a prospective study, simultaneous SONI from normal subjects were collected during telemedicine screenings using a Kowa 3Wx nonmydriatic simultaneous stereoscopic retinal camera (Tokyo, Japan). VCD was determined from SONI pairs and proprietary pixel DAS (Kowa Inc., Tokyo, Japan) after disc and cup contour line placement. A nonstereoscopic VCD was determined using the right channel of a stereo pair. Mean, standard deviation, t-test, and the intraclass correlation coefficient (ICCC) were calculated. Results. Thirty-two patients had a mean age of 40 ± 14 years. Mean VCD on SONI was 0.36 ± 0.09, with DAS 0.38 ± 0.08, and with nonstereoscopic assessment 0.29 ± 0.12. The difference between stereoscopic and DAS-assisted measurements was not significant (p = 0.45). ICCC showed agreement between stereoscopic and software VCD assessment. Mean VCD difference was significant between nonstereoscopic and stereoscopic (p < 0.05) and nonstereoscopic and DAS (p < 0.005) recordings. Conclusions. DAS successfully assessed SONI and showed a high degree of correlation to physician-determined stereoscopic VCD. PMID:27190507

  16. Automated facial coding software outperforms people in recognizing neutral faces as neutral from standardized datasets

    PubMed Central

    Lewinski, Peter

    2015-01-01

    Little is known about people’s accuracy of recognizing neutral faces as neutral. In this paper, I demonstrate the importance of knowing how well people recognize neutral faces. I contrasted human recognition scores of 100 typical, neutral front-up facial images with scores of an arguably objective judge – automated facial coding (AFC) software. I hypothesized that the software would outperform humans in recognizing neutral faces because of the inherently objective nature of computer algorithms. Results confirmed this hypothesis. I provided the first-ever evidence that computer software (90%) was more accurate in recognizing neutral faces than people were (59%). I posited two theoretical mechanisms, i.e., smile-as-a-baseline and false recognition of emotion, as possible explanations for my findings. PMID:26441761

  17. BioBrick assembly standards and techniques and associated software tools.

    PubMed

    Røkke, Gunvor; Korvald, Eirin; Pahr, Jarle; Oyås, Ove; Lale, Rahmi

    2014-01-01

    The BioBrick idea was developed to introduce the engineering principles of abstraction and standardization into synthetic biology. BioBricks are DNA sequences that serve a defined biological function and can be readily assembled with any other BioBrick parts to create new BioBricks with novel properties. In order to achieve this, several assembly standards can be used. Which assembly standards a BioBrick is compatible with depends on the prefix and suffix sequences surrounding the part. In this chapter, five of the most common assembly standards are described, as well as some of the most commonly used assembly techniques and cloning procedures, together with a presentation of the available software tools that can be used for deciding on the best method for assembling different BioBricks and for searching for BioBrick parts in the Registry of Standard Biological Parts database. PMID:24395353
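
    The prefix/suffix compatibility rule lends itself to a toy illustration: a part can be used with a given standard only if it carries that standard's flanking sequences. The sequences below are shortened placeholders, not the actual BioBrick prefix and suffix of any assembly standard.

      PLACEHOLDER_PREFIX = "GAATTCAAAA"   # hypothetical prefix
      PLACEHOLDER_SUFFIX = "TTTTCTGCAG"   # hypothetical suffix

      def is_compatible(part_sequence: str, prefix: str, suffix: str) -> bool:
          """A part is usable with a given standard only if it carries its flanks."""
          part = part_sequence.upper()
          return part.startswith(prefix) and part.endswith(suffix)

      example_part = PLACEHOLDER_PREFIX + "ATGGCTAGCAAAGGAGAA" + PLACEHOLDER_SUFFIX
      print(is_compatible(example_part, PLACEHOLDER_PREFIX, PLACEHOLDER_SUFFIX))  # True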

  19. Survey of standards for electronic image displays

    NASA Astrophysics Data System (ADS)

    Rowe, William A.

    1996-01-01

    Electronic visual displays have been evolving from their 1960s basis in cathode ray tube (CRT) technology. Now, many other technologies are also available, including both flat panels and projection displays. Standards for these displays are being developed at both the national and international levels. Standards activity within the United States is in its infancy and is fragmented according to the inclination of each of the standards-developing organizations. The latest round of flat panel display technology was primarily developed in Japan. Initially, standards arose from component vendor-to-OEM customer relationships. As a result, Japanese standards for components are the best developed. The Electronics Industries Association of Japan (EIAJ) is providing its standards to the International Electrotechnical Commission (IEC) for adoption. On the international level, professional societies such as the Human Factors Society (HFS) and the International Organization for Standardization (ISO) have completed major standards. The Human Factors Society developed the first ergonomic standard, HFS-100, and the ISO has developed some sections of a broader ergonomic standard, ISO 9241. This paper addresses the organization of standards activity. Active organizations and their areas of focus are identified. The major standards that have been completed or are in development are described. Finally, suggestions for improving this standards activity are proposed.

  20. HARPS-N: software path from the observation block to the image

    NASA Astrophysics Data System (ADS)

    Sosnowska, D.; Lodi, M.; Gao, X.; Buchschacher, N.; Vick, A.; Guerra, J.; Gonzalez, M.; Kelly, D.; Lovis, C.; Pepe, F.; Molinari, E.; Cameron, A. C.; Latham, D.; Udry, S.

    2012-09-01

    HARPS North is the twin of the HARPS (High Accuracy Radial velocity Planet Searcher) spectrograph operating at La Silla (Chile), recently installed on the TNG at the La Palma observatory and used to follow up the "hot" candidates delivered by the Kepler satellite. HARPS-N is delivered with its own software that integrates completely with the TNG control system. Special care has been taken to develop tools that assist the astronomers during the whole process of taking images: from the observation schedule to the raw image acquisition. All these tools are presented in the paper. In order to provide a stable and reliable system, the software has been developed with concepts like failover and high availability in mind. HARPS-N is made of heterogeneous systems, from normal computers to real-time systems, which is why the standard message-queue middleware ActiveMQ was chosen to provide the communication between different processes. The path of operations starting with the Observation Blocks and ending with the FITS frames is fully automated and could allow, in the future, completely remote observing runs optimized for time and quality constraints.

  1. Software for X-Ray Images Calculation of Hydrogen Compression Device in Megabar Pressure Range

    NASA Astrophysics Data System (ADS)

    Egorov, Nikolay; Bykov, Alexander; Pavlov, Valery

    2007-06-01

    Software for x-ray image simulation is described. The software is part of an x-ray method used to investigate the equation of state of hydrogen in the megabar pressure range. A graphical interface clearly and simply allows users to input data for the x-ray image calculation (properties of the studied device, parameters of the x-ray radiation source, parameters of the x-ray radiation recorder, and the experiment geometry), to represent the calculation results, and to efficiently transmit them to other software for processing. The calculation time is minimized, which makes it possible to perform calculations in a dialogue regime. The software is written in the MATLAB system.

  2. NEIGHBOUR-IN: Image processing software for spatial analysis of animal grouping

    PubMed Central

    Caubet, Yves; Richard, Freddie-Jeanne

    2015-01-01

    Abstract Animal grouping is a very complex process that occurs in many species, involving many individuals under the influence of different mechanisms. To investigate this process, we have created image processing software, called NEIGHBOUR-IN, designed to analyse individuals' coordinates belonging to up to three different groups. The software also includes statistical analysis and indexes to discriminate aggregates based on the spatial localisation of individuals and their neighbours. After the description of the software, the indexes computed by the software are illustrated using both artificial patterns and case studies based on the spatial distribution of woodlice. The added strengths of this software and methods are also discussed. PMID:26261448

  3. 25 CFR 547.8 - What are the minimum technical software standards applicable to Class II gaming systems?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    Code of Federal Regulations, Title 25 (Indians), 2014 edition, § 547.8. This section defines the minimum technical software standards applicable to Class II gaming systems and associated equipment, including requirements relating to standards adopted by the tribe or TGRA and to the display of player interface identification.

  4. Standardization of (99m)Tc by means of a software coincidence system.

    PubMed

    Brito, A B; Koskinas, M F; Litvak, F; Toledo, F; Dias, M S

    2012-09-01

    The procedure followed by the Nuclear Metrology Laboratory, at IPEN, for the primary standardization of (99m)Tc is described. The primary standardization has been accomplished by the coincidence method. The beta channel efficiency was varied by electronic discrimination using a software coincidence counting system. Two windows were selected for the gamma channel: one at 140 keV gamma-ray and the other at 20 keV X-ray total absorption peaks. The experimental extrapolation curves were compared with Monte Carlo simulations by means of code ESQUEMA.

  5. Quantitative Neuroimaging Software for Clinical Assessment of Hippocampal Volumes on MR Imaging

    PubMed Central

    Ahdidan, Jamila; Raji, Cyrus A.; DeYoe, Edgar A.; Mathis, Jedidiah; Noe, Karsten Ø.; Rimestad, Jens; Kjeldsen, Thomas K.; Mosegaard, Jesper; Becker, James T.; Lopez, Oscar

    2015-01-01

    Background: Multiple neurological disorders including Alzheimer’s disease (AD), mesial temporal sclerosis, and mild traumatic brain injury manifest with volume loss on brain MRI. Subtle volume loss is particularly seen early in AD. While prior research has demonstrated the value of this additional information from quantitative neuroimaging, very few applications have been approved for clinical use. Here we describe a US FDA cleared software program, NeuroreaderTM, for assessment of clinical hippocampal volume on brain MRI. Objective: To present the validation of hippocampal volumetrics on a clinical software program. Method: Subjects were drawn (n = 99) from the Alzheimer Disease Neuroimaging Initiative study. Volumetric brain MR imaging was acquired in both 1.5 T (n = 59) and 3.0 T (n = 40) scanners in participants with manual hippocampal segmentation. Fully automated hippocampal segmentation and measurement was done using a multiple atlas approach. The Dice Similarity Coefficient (DSC) measured the level of spatial overlap between NeuroreaderTM and gold standard manual segmentation from 0 to 1 with 0 denoting no overlap and 1 representing complete agreement. DSC comparisons between 1.5 T and 3.0 T scanners were done using standard independent samples T-tests. Results: In the bilateral hippocampus, mean DSC was 0.87 with a range of 0.78–0.91 (right hippocampus) and 0.76–0.91 (left hippocampus). Automated segmentation agreement with manual segmentation was essentially equivalent at 1.5 T (DSC = 0.879) versus 3.0 T (DSC = 0.872). Conclusion: This work provides a description and validation of a software program that can be applied in measuring hippocampal volume, a biomarker that is frequently abnormal in AD and other neurological disorders. PMID:26484924
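
    As an illustration of the overlap metric used above, the following is a minimal sketch (not the NeuroreaderTM implementation) of how a Dice Similarity Coefficient can be computed from two binary segmentation masks, assuming both are NumPy boolean arrays defined on the same voxel grid:

      import numpy as np

      def dice_similarity(mask_a, mask_b):
          """Dice Similarity Coefficient between two binary masks:
          0 denotes no overlap and 1 denotes complete agreement."""
          a = mask_a.astype(bool)
          b = mask_b.astype(bool)
          total = a.sum() + b.sum()
          if total == 0:
              return 1.0  # both masks empty: treat as perfect agreement
          return 2.0 * np.logical_and(a, b).sum() / total

      # Toy example with two overlapping 3-D "hippocampus" masks
      auto = np.zeros((10, 10, 10), dtype=bool)
      manual = np.zeros((10, 10, 10), dtype=bool)
      auto[2:7, 2:7, 2:7] = True
      manual[3:8, 2:7, 2:7] = True
      print(f"DSC = {dice_similarity(auto, manual):.3f}")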

  6. Software interface for high-speed readout of particle detectors based on the CoaXPress communication standard

    NASA Astrophysics Data System (ADS)

    Hejtmánek, M.; Neue, G.; Voleš, P.

    2015-06-01

    This article is devoted to the software design and development of a high-speed readout application used for interfacing particle detectors via the CoaXPress communication standard. CoaXPress provides an asymmetric high-speed serial connection over a single coaxial cable. It uses the widely available 75 Ω BNC standard and can operate in various modes with a data throughput ranging from 1.25 Gbps up to 25 Gbps. Moreover, it supports a low-speed uplink with a fixed bit rate of 20.833 Mbps, which can be used to control and upload configuration data to the particle detector. The CoaXPress interface is an upcoming standard in medical imaging, therefore its usage promises long-term compatibility and versatility. This work presents an example of how to develop a DAQ system for a pixel detector. For this purpose, a flexible DAQ card was developed using the XILINX Spartan 6 FPGA. The DAQ card is connected to the FireBird CXP6 Quad framegrabber, which is plugged into the PCI Express bus of a standard PC. Data transmission between the FPGA and the framegrabber card was performed via a standard coaxial cable in a communication mode with a bit rate of 3.125 Gbps. Using the Medipix2 Quad pixel detector, a frame rate of 100 fps was achieved. The front-end application makes use of the FireBird framegrabber software development kit and is suitable for data acquisition as well as control of the detector through the registers implemented in the FPGA.

  7. Integration of XNAT/PACS, DICOM, and research software for automated multi-modal image analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yurui; Burns, Scott S.; Lauzon, Carolyn B.; Fong, Andrew E.; James, Terry A.; Lubar, Joel F.; Thatcher, Robert W.; Twillie, David A.; Wirt, Michael D.; Zola, Marc A.; Logan, Bret W.; Anderson, Adam W.; Landman, Bennett A.

    2013-03-01

    Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.

  8. Comparison of grey scale median (GSM) measurement in ultrasound images of human carotid plaques using two different softwares.

    PubMed

    Östling, Gerd; Persson, Margaretha; Hedblad, Bo; Gonçalves, Isabel

    2013-11-01

    Grey scale median (GSM) measured on ultrasound images of carotid plaques has been used for several years now in research to find the vulnerable plaque. Centres have used different software and also different methods for GSM measurement. This has resulted in a wide range of GSM values and cut-off values for the detection of the vulnerable plaque. The aim of this study was to compare the values obtained with two different softwares, using different standardization methods, for the measurement of GSM on ultrasound images of human carotid plaques. GSM was measured with Adobe Photoshop(®) and with Artery Measurement System (AMS) on duplex ultrasound images of 100 consecutive medium- to large-sized carotid plaques of the Beta-blocker Cholesterol-lowering Asymptomatic Plaque Study (BCAPS). The mean values of GSM were 35·2 ± 19·3 and 55·8 ± 22·5 for Adobe Photoshop(®) and AMS, respectively. Mean difference was 20·45 (95% CI: 19·17-21·73). Although the absolute values of GSM differed, the agreement between the two measurements was good, correlation coefficient 0·95. A chi-square test revealed a kappa value of 0·68 when studying quartiles of GSM. The intra-observer variability was 1·9% for AMS and 2·5% for Adobe Photoshop. The difference between softwares and standardization methods must be taken into consideration when comparing studies. To avoid these problems, researchers should come to a consensus regarding software and standardization method for GSM measurement on ultrasound images of plaque in the arteries.
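
    For orientation only, the following minimal sketch (not the code of either software package compared above) illustrates one common way to compute a standardized grey scale median: linearly rescale the grey levels so that the median of a blood region maps to a low reference value and the median of the adventitia maps to a high reference value, then take the median inside the plaque region. The reference values 0 and 190 and the toy regions of interest are assumptions for illustration, not values taken from this study:

      import numpy as np

      def grey_scale_median(image, plaque_mask, blood_roi, adventitia_roi,
                            blood_ref=0.0, adventitia_ref=190.0):
          """Standardize grey levels so the blood ROI median maps to blood_ref
          and the adventitia ROI median maps to adventitia_ref, then return the
          median grey level inside the plaque mask (the GSM)."""
          blood = np.median(image[blood_roi])
          advent = np.median(image[adventitia_roi])
          scale = (adventitia_ref - blood_ref) / (advent - blood)
          standardized = (image.astype(float) - blood) * scale + blood_ref
          return float(np.median(standardized[plaque_mask]))

      # Toy image: speckle-like plaque, dark lumen, bright adventitia
      rng = np.random.default_rng(0)
      img = rng.integers(60, 120, size=(64, 64)).astype(float)
      img[:8, :8] = rng.integers(0, 15, size=(8, 8))       # blood/lumen
      img[-8:, -8:] = rng.integers(170, 220, size=(8, 8))  # adventitia
      blood_roi = np.zeros_like(img, dtype=bool); blood_roi[:8, :8] = True
      advent_roi = np.zeros_like(img, dtype=bool); advent_roi[-8:, -8:] = True
      plaque = np.zeros_like(img, dtype=bool); plaque[20:40, 20:40] = True
      print(round(grey_scale_median(img, plaque, blood_roi, advent_roi), 1))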

  9. Xmipp 3.0: an improved software suite for image processing in electron microscopy.

    PubMed

    de la Rosa-Trevín, J M; Otón, J; Marabini, R; Zaldívar, A; Vargas, J; Carazo, J M; Sorzano, C O S

    2013-11-01

    Xmipp is a specialized software package for image processing in electron microscopy, mainly focused on 3D reconstruction of macromolecules through single-particle analysis. In this article we present Xmipp 3.0, a major release which introduces several improvements and new developments over the previous version. A central improvement is the concept of a project that stores the entire processing workflow from data import to final results. It is now possible to monitor, reproduce and restart all computing tasks as well as graphically explore the complete set of interrelated tasks associated to a given project. Other graphical tools have also been improved such as data visualization, particle picking and parameter "wizards" that allow the visual selection of some key parameters. Many standard image formats are transparently supported for input/output from all programs. Additionally, results have been standardized, facilitating the interoperation between different Xmipp programs. Finally, as a result of a large code refactoring, the underlying C++ libraries are better suited for future developments and all code has been optimized. Xmipp is an open-source package that is freely available for download from: http://xmipp.cnb.csic.es.

  10. A Critical Appraisal of Techniques, Software Packages, and Standards for Quantitative Proteomic Analysis

    PubMed Central

    Lawless, Craig; Hubbard, Simon J.; Fan, Jun; Bessant, Conrad; Hermjakob, Henning; Jones, Andrew R.

    2012-01-01

    Abstract New methods for performing quantitative proteome analyses based on differential labeling protocols or label-free techniques are reported in the literature on an almost monthly basis. In parallel, a correspondingly vast number of software tools for the analysis of quantitative proteomics data has also been described in the literature and produced by private companies. In this article we focus on the review of some of the most popular techniques in the field and present a critical appraisal of several software packages available to process and analyze the data produced. We also describe the importance of community standards to support the wide range of software, which may assist researchers in the analysis of data using different platforms and protocols. It is intended that this review will serve bench scientists both as a useful reference and a guide to the selection and use of different pipelines to perform quantitative proteomics data analysis. We have produced a web-based tool (http://www.proteosuite.org/?q=other_resources) to help researchers find appropriate software for their local instrumentation, available file formats, and quantitative methodology. PMID:22804616

  11. JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age (Invited)

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Dimitoglou, G.; Langenberg, M.; Pagel, S.; Dau, A.; Nuhn, M.; Garcia Ortiz, J. P.; Dietert, H.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2010-12-01

    The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data is bound to be accessible only from a few repositories and users will have to deal with data sets effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community.

  12. An image-based software tool for screening retinal fundus images using vascular morphology and network transport analysis

    NASA Astrophysics Data System (ADS)

    Clark, Richard D.; Dickrell, Daniel J.; Meadows, David L.

    2014-03-01

    As the number of digital retinal fundus images taken each year grows at an increasing rate, there exists a similarly increasing need for automatic eye disease detection through image-based analysis. A new method has been developed for classifying standard color fundus photographs into both healthy and diseased categories. This classification was based on the calculated network fluid conductance, a function of the geometry and connectivity of the vascular segments. To evaluate the network resistance, the retinal vasculature was first manually separated from the background to ensure an accurate representation of the geometry and connectivity. The arterial and venous networks were then semi-automatically separated into two separate binary images. The connectivity of the arterial network was then determined through a series of morphological image operations. The network comprised of segments of vasculature and points of bifurcation, with each segment having a characteristic geometric and fluid properties. Based on the connectivity and fluid resistance of each vascular segment, an arterial network flow conductance was calculated, which described the ease with which blood can pass through a vascular system. In this work, 27 eyes (13 healthy and 14 diabetic) from patients roughly 65 years in age were evaluated using this methodology. Healthy arterial networks exhibited an average fluid conductance of 419 ± 89 μm3/mPa-s while the average network fluid conductance of the diabetic set was 165 ± 87 μm3/mPa-s (p < 0.001). The results of this new image-based software demonstrated an ability to automatically, quantitatively and efficiently screen diseased eyes from color fundus imagery.
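
    The network flow conductance described above can be illustrated with a short sketch (not the authors' software): each vascular segment is assigned a Poiseuille conductance and the segments are combined by nodal analysis of the weighted graph Laplacian to obtain an effective inlet-to-outlet conductance. The segment list, node numbering, and unit viscosity below are illustrative assumptions:

      import numpy as np

      def network_conductance(segments, inlet, outlet, n_nodes, viscosity=1.0):
          """Effective fluid conductance between inlet and outlet of a vascular
          network. Each segment is (i, j, radius, length); its conductance
          follows Poiseuille's law g = pi * r**4 / (8 * mu * L)."""
          L = np.zeros((n_nodes, n_nodes))          # weighted graph Laplacian
          for i, j, r, length in segments:
              g = np.pi * r**4 / (8.0 * viscosity * length)
              L[i, i] += g; L[j, j] += g
              L[i, j] -= g; L[j, i] -= g
          # Ground the outlet, inject unit flow at the inlet, solve L p = q
          keep = [k for k in range(n_nodes) if k != outlet]
          q = np.zeros(n_nodes); q[inlet] = 1.0
          p = np.zeros(n_nodes)
          p[keep] = np.linalg.solve(L[np.ix_(keep, keep)], q[keep])
          return 1.0 / p[inlet]                     # conductance = flow / pressure drop

      # Toy arterial tree: node 0 -> 1, then 1 branches to 2 and 3, rejoining at 4
      segs = [(0, 1, 10.0, 100.0), (1, 2, 6.0, 80.0), (1, 3, 6.0, 80.0),
              (2, 4, 6.0, 80.0), (3, 4, 6.0, 80.0)]
      print(network_conductance(segs, inlet=0, outlet=4, n_nodes=5))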

  13. Future trends in image processing software and hardware

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1979-01-01

    JPL image processing applications are examined, considering future trends in fields such as planetary exploration, electronics, astronomy, computers, and Landsat. Attention is given to adaptive search and interrogation of large image data bases, the display of multispectral imagery recorded in many spectral channels, merging data acquired by a variety of sensors, and developing custom large scale integrated chips for high speed intelligent image processing user stations and future pipeline production processors.

  14. Wavelet/scalar quantization compression standard for fingerprint images

    SciTech Connect

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (the wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
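
    A toy sketch of the two ingredients named above, a discrete wavelet subband decomposition followed by uniform scalar quantization, is given below. It assumes the PyWavelets package and a single quantization step for all subbands, and it is not the FBI WSQ codec, which uses per-subband step sizes and entropy coding:

      import numpy as np
      import pywt  # PyWavelets, assumed available

      def wsq_like_quantize(image, step=8.0, levels=3):
          """Wavelet subband decomposition plus uniform scalar quantization."""
          coeffs = pywt.wavedec2(image.astype(float), "bior4.4", level=levels)
          indices = [np.round(coeffs[0] / step)]
          for detail in coeffs[1:]:
              indices.append(tuple(np.round(band / step) for band in detail))
          return indices

      def wsq_like_dequantize(indices, step=8.0):
          """Rescale the quantization indices and invert the wavelet transform."""
          coeffs = [indices[0] * step]
          for detail in indices[1:]:
              coeffs.append(tuple(band * step for band in detail))
          return pywt.waverec2(coeffs, "bior4.4")

      img = np.random.default_rng(1).integers(0, 256, size=(128, 128)).astype(float)
      rec = wsq_like_dequantize(wsq_like_quantize(img))[:128, :128]
      print("PSNR (dB):", 10 * np.log10(255**2 / np.mean((img - rec)**2)))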

  15. Implementation of a real-time software-only image smoothing filter for a block-transform video codec

    NASA Astrophysics Data System (ADS)

    Miaw, Wesley F.; Rowe, Lawrence A.

    2003-05-01

    The JPEG compression standard is a popular image format. However, at high compression ratios JPEG compression, which uses block-transform coding, can produce blocking artifacts, or artificially introduced edges within the image. Several post-processing algorithms have been developed to remove these artifacts. This paper describes an implementation of a post-processing algorithm developed by Ramchandran, Chou, and Crouse (RCC) which is fast enough for real-time software-only video applications. The original implementation of the RCC algorithm involved calculating thresholds to identify artificial edges. These calculations proved too expensive for use in real-time software-only applications. We replaced these calculations with a linear scale approximating ideal threshold values based on a combination of peak signal-to-noise ratio calculations and subjective visual quality. The resulting filter implementation is available in the widely-deployed Open Mash streaming media toolkit.

  16. Software control and characterization aspects for image derotator of the AO188 system at Subaru

    NASA Astrophysics Data System (ADS)

    Golota, Taras; Oya, Shin; Egner, Sebastian; Watanabe, Makoto; Eldred, Michael; Minowa, Yosuke; Takami, Hideki; Cook, David; Hayano, Yutaka; Saito, Yoshihiko; Hattori, Masayuki; Garrel, Vincent; Ito, Meguru

    2010-07-01

    The image derotator is an integral part of the AO188 system at the Subaru Telescope. In this article, software control, characterization, and integration issues of the image derotator for the AO188 system are presented. Physical limitations of the current hardware are reviewed. Image derotator synchronization, tracking accuracy, and problem-solving strategies used to achieve the requirements are presented. Its use in different observation modes for various instruments, and its interaction with the telescope control system, which provides status and control functionality, are described. We describe the available observation modes along with integration issues. Technical solutions, together with results on the image derotator performance, are presented. Further improvements and control software for on-sky observations are discussed based on the results obtained during engineering observations. An overview of the requirements, the final control method, and the structure of the control software is given. Control limitations and accepted solutions that might be useful for the development of other instruments' image derotators are presented.

  17. msIQuant--Quantitation Software for Mass Spectrometry Imaging Enabling Fast Access, Visualization, and Analysis of Large Data Sets.

    PubMed

    Källback, Patrik; Nilsson, Anna; Shariatgorji, Mohammadreza; Andrén, Per E

    2016-04-19

    This paper presents msIQuant, a novel instrument- and manufacturer-independent quantitative mass spectrometry imaging software suite that uses the standardized open access data format imzML. Its data processing structure enables rapid image display and the analysis of very large data sets (>50 GB) without any data reduction. In addition, msIQuant provides many tools for image visualization including multiple interpolation methods, low intensity transparency display, and image fusion. It also has a quantitation function that automatically generates calibration standard curves from series of standards that can be used to determine the concentrations of specific analytes. Regions-of-interest in a tissue section can be analyzed based on a number of quantities including the number of pixels, average intensity, standard deviation of intensity, and median and quartile intensities. Moreover, the suite's export functions enable simplified postprocessing of data and report creation. We demonstrate its potential through several applications including the quantitation of small molecules such as drugs and neurotransmitters. The msIQuant suite is a powerful tool for accessing and evaluating very large data sets, quantifying drugs and endogenous compounds in tissue areas of interest, and for processing mass spectra and images.
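
    The calibration-curve step described above can be sketched as follows (illustrative values and a simple linear fit, not the msIQuant implementation): fit measured ROI intensities of a dilution series of standards against their known concentrations, then invert the fitted line to estimate the concentration corresponding to the mean intensity of a tissue region of interest:

      import numpy as np

      # Hypothetical calibration standards: spotted concentrations (e.g. pmol/mm^2)
      # and the mean ion-image intensity measured in each standard's ROI.
      std_conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])
      std_intensity = np.array([12.0, 155.0, 301.0, 598.0, 1490.0, 2980.0])

      # Ordinary least-squares line: intensity = slope * concentration + intercept
      slope, intercept = np.polyfit(std_conc, std_intensity, deg=1)

      def intensity_to_concentration(mean_roi_intensity):
          """Invert the calibration line to estimate the analyte concentration."""
          return (mean_roi_intensity - intercept) / slope

      print("Estimated concentration:", round(intensity_to_concentration(820.0), 2))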

  18. Eclipse: ESO C Library for an Image Processing Software Environment

    NASA Astrophysics Data System (ADS)

    Devillard, Nicolas

    2011-12-01

    Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry, image-space resampling, image combination, and cube stacking. It also contains support for mathematical tools like random number generation, FFT, curve fitting, matrices, fast median computation, and point-pattern matching. The main feature of this library is its ability to handle large amounts of input data (up to 2GB in the current version) regardless of the amount of memory and swap available on the local machine. Another feature is the very high speed allowed by optimized C, making it an ideal base tool for programming efficient number-crunching applications, e.g., on parallel (Beowulf) systems.

  19. Upgrade and standardization of real-time software for telescope systems at the Gemini telescopes

    NASA Astrophysics Data System (ADS)

    Rambold, William N.; Gigoux, Pedro; Urrutia, Cristian; Ebbers, Angelic; Taylor, Philip; Rippa, Mathew J.; Rojas, Roberto; Cumming, Tom

    2014-07-01

    The real-time control systems for the Gemini Telescopes were designed and built in the 1990s using state-of-the-art software tools and operating systems of that time. Since these systems are in use every night, they have not been kept up to date and are now obsolete and very labor-intensive to support. Gemini is currently engaged in a major upgrade of its telescope control systems. This paper reviews the studies performed to select and develop a new standard operating environment for Gemini real-time systems and the work performed so far in implementing it.

  20. Self-contained off-line media for exchanging medical images using DICOM-compliant standard

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Ligier, Yves; Rosset, Antoine; Staub, Jean-Christophe; Logean, Marianne; Girard, Christian

    2000-05-01

    The goal of this project is to develop and implement off-line DICOM-compliant CD ROMs that contain the necessary software tools for displaying the images and related data on any personal computer. We implemented a hybrid recording technique allowing CD-ROMs for Macintosh and Windows platforms to be fully DICOM compliant. A public domain image viewing program (OSIRIS) is recorded on the CD for display and manipulation of sequences of images. The content of the disk is summarized in a standard HTML file that can be displayed on any web-browser. This allows the images to be easily accessible on any desktop computer, while being also readable on high-end commercial DICOM workstations. The HTML index page contains a set of thumbnails and full-size JPEG images that are directly linked to the original high-resolution DICOM images through an activation of the OSIRIS program. Reports and associated text document are also converted to HTML format to be easily displayable directly within the web browser. This portable solution provides a convenient and low cost alternative to hard copy images for exchange and transmission of images to referring physicians and external care providers without the need for any specialized software or hardware.

  1. Polarization information processing and software system design for simultaneously imaging polarimetry

    NASA Astrophysics Data System (ADS)

    Wang, Yahui; Liu, Jing; Jin, Weiqi; Wen, Renjie

    2015-08-01

    Simultaneous imaging polarimetry can realize real-time polarization imaging of a dynamic scene, which has wide application prospects. This paper first briefly illustrates the design of the double separate Wollaston prism simultaneous imaging polarimeter, and then emphasis is placed on the polarization information processing methods and the software system design for the designed polarimeter. The polarization information processing consists of adaptive image segmentation, high-accuracy image registration, and instrument matrix calibration. Morphological image processing was used for image segmentation by taking the dilation of an image; the accuracy of image registration can reach 0.1 pixel based on spatial- and frequency-domain cross-correlation; instrument matrix calibration adopted a four-point calibration method. The software system was implemented under the Windows environment in the C++ programming language and realizes synchronous polarization image acquisition and preservation, image processing, and polarization information extraction and display. Polarization data obtained with the designed polarimeter show that the polarization information processing methods and the software system effectively realize real-time measurement of the four Stokes parameters of a scene, and that the processing methods effectively improve the polarization detection accuracy.
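
    As a minimal illustration of the kind of polarization information extraction described above (not the authors' pipeline, which also applies image registration and an instrument matrix), the linear Stokes parameters and derived quantities can be computed per pixel from four registered analyzer images, assumed here to be oriented at 0, 45, 90, and 135 degrees:

      import numpy as np

      def linear_stokes(i0, i45, i90, i135):
          """Per-pixel linear Stokes parameters from four analyzer images.
          S0 = total intensity, S1 = 0/90 difference, S2 = 45/135 difference;
          also returns degree of linear polarization (DoLP) and angle of
          polarization (AoP, radians)."""
          s0 = 0.5 * (i0 + i45 + i90 + i135)
          s1 = i0 - i90
          s2 = i45 - i135
          dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
          aop = 0.5 * np.arctan2(s2, s1)
          return s0, s1, s2, dolp, aop

      # Toy example: a uniform, partially polarized scene
      shape = (4, 4)
      i0, i45, i90, i135 = (np.full(shape, v) for v in (0.8, 0.5, 0.2, 0.5))
      s0, s1, s2, dolp, aop = linear_stokes(i0, i45, i90, i135)
      print(dolp[0, 0], np.degrees(aop[0, 0]))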

  2. Software for Analyzing Sequences of Flow-Related Images

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2004-01-01

    Spotlight is a computer program for analysis of sequences of images generated in combustion and fluid physics experiments. Spotlight can perform analysis of a single image in an interactive mode or a sequence of images in an automated fashion. The primary type of analysis is tracking of positions of objects over sequences of frames. Features and objects that are typically tracked include flame fronts, particles, droplets, and fluid interfaces. Spotlight automates the analysis of object parameters, such as centroid position, velocity, acceleration, size, shape, intensity, and color. Images can be processed to enhance them before statistical and measurement operations are performed. An unlimited number of objects can be analyzed simultaneously. Spotlight saves results of analyses in a text file that can be exported to other programs for graphing or further analysis. Spotlight is a graphical-user-interface-based program that at present can be executed on Microsoft Windows and Linux operating systems. A version that runs on Macintosh computers is being considered.

  3. BIRP: Software for interactive search and retrieval of image engineering data

    NASA Technical Reports Server (NTRS)

    Arvidson, R. E.; Bolef, L. K.; Guinness, E. A.; Norberg, P.

    1980-01-01

    Better Image Retrieval Programs (BIRP), a set of programs to interactively sort through and display a database, such as engineering data for images acquired by spacecraft, is described. An overview of the philosophy of BIRP design, the structure of BIRP data files, and examples that illustrate the capabilities of the software are provided.

  4. A Review of Diffusion Tensor Magnetic Resonance Imaging Computational Methods and Software Tools

    PubMed Central

    Hasan, Khader M.; Walimuni, Indika S.; Abid, Humaira; Hahn, Klaus R.

    2010-01-01

    In this work we provide an up-to-date short review of computational magnetic resonance imaging (MRI) and software tools that are widely used to process and analyze diffusion-weighted MRI data. A review of different methods used to acquire, model and analyze diffusion-weighted imaging data (DWI) is first provided with focus on diffusion tensor imaging (DTI). The major preprocessing, processing and post-processing procedures applied to DTI data are discussed. A list of freely available software packages to analyze diffusion MRI data is also provided. PMID:21087766

  5. Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing

    NASA Technical Reports Server (NTRS)

    Logan, Thomas L.; Bryant, Nevin A.

    1987-01-01

    The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Image Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of the VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between the VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.

  6. The design of real time infrared image generation software based on Creator and Vega

    NASA Astrophysics Data System (ADS)

    Wang, Rui-feng; Wu, Wei-dong; Huo, Jun-xiu

    2013-09-01

    Considering the requirements for highly realistic, real-time dynamic infrared images in infrared image simulation, a method for designing a real-time infrared image simulation application on the VC++ platform is proposed. It is based on the visual simulation software Creator and Vega. The functions of Creator are introduced briefly, and the main features of the Vega development environment are analyzed. Methods for infrared modeling and background generation are offered; the design flow chart of the development process of the real-time IR image generation software and the functions of the TMM Tool, the MAT Tool, and the sensor module are explained; and, at the same time, the real-time performance of the software is addressed in the design.

  7. IDP: Image and data processing (software) in C++

    SciTech Connect

    Lehman, S.

    1994-11-15

    IDP++ (Image and Data Processing in C++) is a compiled, multidimensional, multi-data-type signal processing environment written in C++. It is being developed within the Radar Ocean Imaging group and is intended as a partial replacement for View. IDP++ takes advantage of the latest object-oriented compiler technology to provide "information hiding." Users need only know C, not C++. Signals are treated like any other variable with a defined set of operators and functions in an intuitive manner. IDP++ is being designed for a real-time environment where interpreted signal processing packages are less efficient.

  8. Image compression software for the SOHO LASCO and EIT experiments

    NASA Technical Reports Server (NTRS)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar and Heliospheric Observatory (SOHO) in conjunction with the Large Angle and Spectrometric Coronagraph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for the use of SOHO investigators who need to understand the results of SOHO compression in order to make the best use of the transmission bits they have been allocated.

  9. Software optimization for electrical conductivity imaging in polycrystalline diamond cutters

    SciTech Connect

    Bogdanov, G.; Ludwig, R.; Wiggins, J.; Bertagnolli, K.

    2014-02-18

    We previously reported on an electrical conductivity imaging instrument developed for measurements on polycrystalline diamond cutters. These cylindrical cutters for oil and gas drilling feature a thick polycrystalline diamond layer on a tungsten carbide substrate. The instrument uses electrical impedance tomography to profile the conductivity in the diamond table. Conductivity images must be acquired quickly, on the order of 5 sec per cutter, to be useful in the manufacturing process. This paper reports on successful efforts to optimize the conductivity reconstruction routine, porting major portions of it to NVIDIA GPUs, including a custom CUDA kernel for Jacobian computation.

  10. Reliability evaluation of I-123 ADAM SPECT imaging using SPM software and AAL ROI methods

    NASA Astrophysics Data System (ADS)

    Yang, Bang-Hung; Tsai, Sung-Yi; Wang, Shyh-Jen; Su, Tung-Ping; Chou, Yuan-Hwa; Chen, Chia-Chieh; Chen, Jyh-Cheng

    2011-08-01

    The level of serotonin is regulated by the serotonin transporter (SERT), a decisive protein in the regulation of the serotonin neurotransmission system. Many psychiatric disorders and therapies are also related to the concentration of cerebral serotonin. I-123 ADAM is a novel radiopharmaceutical for imaging SERT in the brain. The aim of this study was to measure the reliability of SERT densities in healthy volunteers by the automated anatomical labeling (AAL) method. Furthermore, we also used statistical parametric mapping (SPM) in a voxel-by-voxel analysis to find differences in the cortex between test and retest I-123 ADAM single photon emission computed tomography (SPECT) images. Twenty-one healthy volunteers were scanned twice with SPECT at 4 h after intravenous administration of 185 MBq of 123I-ADAM. The image matrix size was 128×128 and the pixel size was 3.9 mm. All images were obtained through a filtered back-projection (FBP) reconstruction algorithm. Region of interest (ROI) definition was performed based on the AAL brain template in the PMOD version 2.95 software package. ROI demarcations were placed on the midbrain, pons, striatum, and cerebellum. All images were spatially normalized to the SPECT MNI (Montreal Neurological Institute) templates supplied with SPM2, and each image was transformed into standard stereotactic space, matched to the Talairach and Tournoux atlas. Differences across scans were then statistically estimated in a voxel-by-voxel analysis using a paired t-test (population main effect: 2 conditions, 1 scan/condition), which was applied to compare the concentration of SERT between the test and retest cerebral scans. The average specific uptake ratio (SUR: target/cerebellum-1) of 123I-ADAM binding to SERT was 1.78±0.27 in the midbrain, 1.21±0.53 in the pons, and 0.79±0.13 in the striatum. The Cronbach's α of the intra-class correlation coefficient (ICC) was 0.92. In addition, there was no significant statistical finding in the cerebral area using the SPM2 analysis. This finding might help us
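
    The specific uptake ratio reported above (SUR = target/cerebellum - 1) reduces to a simple ROI computation; the sketch below assumes the reconstructed SPECT volume and the AAL-derived ROIs are available as a NumPy array and boolean masks, and uses toy data rather than the study's images:

      import numpy as np

      def specific_uptake_ratio(volume, target_roi, cerebellum_roi):
          """SUR = mean(target ROI) / mean(cerebellum reference ROI) - 1."""
          target = volume[target_roi].mean()
          reference = volume[cerebellum_roi].mean()
          return float(target / reference - 1.0)

      # Toy volume with elevated counts in a "midbrain" ROI
      vol = np.ones((32, 32, 32))
      midbrain = np.zeros(vol.shape, dtype=bool); midbrain[14:18, 14:18, 14:18] = True
      cerebellum = np.zeros(vol.shape, dtype=bool); cerebellum[4:10, 4:10, 4:10] = True
      vol[midbrain] = 2.8
      print(round(specific_uptake_ratio(vol, midbrain, cerebellum), 2))  # 1.8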

  11. Software for browsing sectioned images of a dog body and generating a 3D model.

    PubMed

    Park, Jin Seo; Jung, Yong Wook

    2016-01-01

    The goals of this study were (1) to provide accessible and instructive browsing software for sectioned images and a portable document format (PDF) file that includes three-dimensional (3D) models of an entire dog body and (2) to develop techniques for segmentation and 3D modeling that would enable an investigator to perform these tasks without the aid of a computer engineer. To achieve these goals, relatively important or large structures in the sectioned images were outlined to generate segmented images. The sectioned and segmented images were then packaged into browsing software. In this software, structures in the sectioned images are shown in detail and in real color. After 3D models were made from the segmented images, the 3D models were exported into a PDF file. In this format, the 3D models could be manipulated freely. The browsing software and PDF file are available for study by students, for lectures by teachers, and for training of clinicians. These files will be helpful for anatomical study by, and clinical training of, veterinary students and clinicians. Furthermore, these techniques will be useful for researchers who study two-dimensional images and 3D models.

  12. Image analysis software for following progression of peripheral neuropathy

    NASA Astrophysics Data System (ADS)

    Epplin-Zapf, Thomas; Miller, Clayton; Larkin, Sean; Hermesmeyer, Eduardo; Macy, Jenny; Pellegrini, Marco; Luccarelli, Saverio; Staurenghi, Giovanni; Holmes, Timothy

    2009-02-01

    A relationship has been reported by several research groups [1 - 4] between the density and shapes of nerve fibers in the cornea and the existence and severity of peripheral neuropathy. Peripheral neuropathy is a complication of several prevalent diseases or conditions, which include diabetes, HIV, prolonged alcohol overconsumption and aging. A common clinical technique for confirming the condition is intramuscular electromyography (EMG), which is invasive, so a noninvasive technique like the one proposed here carries important potential advantages for the physician and patient. A software program that automatically detects the nerve fibers, counts them and measures their shapes is being developed and tested. Tests were carried out with a database of subjects with levels of severity of diabetic neuropathy as determined by EMG testing. Results from this testing, which include a linear regression analysis, are shown.

  13. Development of Software to Model AXAF-I Image Quality

    NASA Technical Reports Server (NTRS)

    Geary, Joseph; Hawkins, Lamar; Ahmad, Anees; Gong, Qian

    1997-01-01

    This report describes work conducted on Delivery Order 181 between October 1996 through June 1997. During this period software was written to: compute axial PSD's from RDOS AXAF-I mirror surface maps; plot axial surface errors and compute PSD's from HDOS "Big 8" axial scans; plot PSD's from FITS format PSD files; plot band-limited RMS vs axial and azimuthal position for multiple PSD files; combine and organize PSD's from multiple mirror surface measurements formatted as input to GRAZTRACE; modify GRAZTRACE to read FITS formatted PSD files; evaluate AXAF-I test results; improve and expand the capabilities of the GT x-ray mirror analysis package. During this period work began on a more user-friendly manual for the GT program, and improvements were made to the on-line help manual.

  14. Do we really need standards in digital image management?

    PubMed Central

    Ho, ELM

    2008-01-01

    Convention dictates that standards are a necessity rather than a luxury. Standards are supposed to improve the exchange of health and image data information resulting in improved quality and efficiency of patient care. True standardisation is some time away yet, as barriers exist with evolving equipment, storage formats and even the standards themselves. The explosive growth in the size and complexity of images such as those generated by multislice computed tomography have driven the need for digital image management, created problems of storage space and costs, and created a challenge for increasing or getting an adequate speed for transmitting, accessing and retrieving the image data. The search for a suitable and practical format for storing the data without loss of information and medico-legal implications has become a necessity and a matter of ‘urgency’. Existing standards are either open or proprietary and must comply with local, regional or national laws. Currently there are the Picture Archiving and Communications System (PACS); Digital Imaging and Communications in Medicine (DICOM); Health Level 7 (HL7) and Integrating the Healthcare Enterprise (IHE). Issues in digital image management can be categorised as operational, procedural, technical and administrative. Standards must stay focussed on the ultimate goal – that is, improved patient care worldwide. PMID:21611012

  15. The FBI compression standard for digitized fingerprint images

    SciTech Connect

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.; Hopper, T.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  16. Parallel-Processing Software for Correlating Stereo Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric

    2007-01-01

    A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.
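
    A minimal sketch of the idea, though not the JPL program itself: the left image is cut into subimage tiles, each tile is matched along the corresponding rows of the right image by maximizing normalized cross-correlation, and the tiles are distributed across CPUs with a process pool. The tile size, search range, and synthetic test pair are assumptions:

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def ncc(a, b):
          """Normalized cross-correlation of two equally sized patches."""
          a = a - a.mean(); b = b - b.mean()
          denom = np.sqrt((a * a).sum() * (b * b).sum())
          return (a * b).sum() / denom if denom > 0 else 0.0

      def match_tile(args):
          """Find the horizontal disparity of one left-image tile by scanning
          the same rows of the right image and maximizing NCC."""
          left, right, row, col, size, max_disp = args
          patch = left[row:row + size, col:col + size]
          best, best_d = -2.0, 0
          for d in range(0, max_disp + 1):
              if col - d < 0:
                  break
              score = ncc(patch, right[row:row + size, col - d:col - d + size])
              if score > best:
                  best, best_d = score, d
          return row, col, best_d

      def disparity_map(left, right, size=16, max_disp=24):
          jobs = [(left, right, r, c, size, max_disp)
                  for r in range(0, left.shape[0] - size, size)
                  for c in range(0, left.shape[1] - size, size)]
          with ProcessPoolExecutor() as pool:   # one tile per task, one task per CPU slot
              return list(pool.map(match_tile, jobs))

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          right = rng.random((128, 128))
          left = np.roll(right, 5, axis=1)      # synthetic 5-pixel disparity
          print(disparity_map(left, right)[:3])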

  17. Validated novel software to measure the conspicuity index of lesions in DICOM images

    NASA Astrophysics Data System (ADS)

    Szczepura, K. R.; Manning, D. J.

    2016-03-01

    A novel software programme and an associated Excel spreadsheet have been developed to provide an objective measure of the expected visual detectability of focal abnormalities within DICOM images. ROIs are drawn around the abnormality; the software then fits the lesion using a least squares method to recognize the edges of the lesion based on the full width at half maximum. 180 line profiles are then plotted around the lesion, giving 360 edge profiles.
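
    To make the full-width-at-half-maximum edge criterion above concrete, the sketch below estimates the FWHM of a single line profile by locating the half-maximum crossings on either side of the peak with linear interpolation; it is an illustration under assumed toy data, not the validated programme:

      import numpy as np

      def fwhm_of_profile(profile, spacing=1.0):
          """Full width at half maximum of a 1-D line profile, by linear
          interpolation of the half-maximum crossings either side of the peak."""
          base = profile.min()
          half = base + 0.5 * (profile.max() - base)
          above = np.where(profile >= half)[0]
          if len(above) < 2:
              return 0.0
          left, right = above[0], above[-1]

          def crossing(a, b):
              # index where the profile crosses `half`, interpolated between a and b
              ya, yb = profile[a], profile[b]
              return a + (half - ya) * (b - a) / (yb - ya) if yb != ya else float(a)

          x_left = crossing(left - 1, left) if left > 0 else float(left)
          x_right = crossing(right + 1, right) if right < len(profile) - 1 else float(right)
          return (x_right - x_left) * spacing

      # Toy Gaussian lesion profile: FWHM should be about 2.355 * sigma
      x = np.arange(-30, 31, 1.0)
      profile = 100.0 * np.exp(-x**2 / (2 * 5.0**2)) + 10.0
      print(round(fwhm_of_profile(profile), 2))  # ~11.8 pixels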

  18. Software engineering methods and standards used in the Sloan Digital Sky Survey

    SciTech Connect

    Petravick, D.; Berman, E.; Gurbani, V.; Nicinski, T.; Pordes, R.; Rechenmacher, R.; Sergey, G.; Lupton, R.H.

    1995-04-01

    We present an integrated science software development environment, code maintenance and support system for the Sloan Digital Sky Survey (SDSS), now being actively used throughout the collaboration. The SDSS is a collaboration between the Fermi National Accelerator Laboratory, the Institute for Advanced Study, the Japan Promotion Group, Johns Hopkins University, Princeton University, the United States Naval Observatory, the University of Chicago, and the University of Washington. The SDSS will produce a five-color imaging survey of 1/4 of the sky about the north galactic cap and image 10^8 stars, 10^8 galaxies, and 10^5 quasars. Spectra will be obtained for 10^6 galaxies and 10^5 quasars as well. The survey will utilize a dedicated 2.5 meter telescope at the Apache Point Observatory in New Mexico. Its imaging camera will hold 54 charge-coupled devices (CCDs). The SDSS will take five years to complete, acquiring well over 12 TB of data.

  19. JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Langenberg, M.; Nuhn, M.; Dau, A.; Pagel, S.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2011-12-01

    The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data is accessible only from a few repositories and users have to deal with data sets effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the ESA/NASA Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community. In addition, the easy-to-use graphical user interface enables the general public and educators to access, enjoy and reuse data from space missions without barriers.

  20. New StatPhantom software for assessment of digital image quality

    NASA Astrophysics Data System (ADS)

    Gurvich, Victor A.; Davydenko, George I.

    2002-04-01

    The rapid development of digital imaging and computer networks, along with the use of Picture Archiving and Communication Systems (PACS) and DICOM-compatible devices, increases the requirements on the quality control process in medical imaging departments, but also provides new opportunities for the evaluation of image quality. The new StatPhantom software simplifies statistical techniques based on modern detection theory and ROC analysis, improving the accuracy and reliability of known methods and allowing statistical analysis to be implemented with phantoms of any design. In contrast to manual statistical methods, all calculations, the analysis of results, and changes of the test element positions in the image of the phantom are implemented by computer. This paper describes the user interface and functionality of the StatPhantom software, its opportunities and advantages in the assessment of various imaging modalities, and the diagnostic preference of an observer. The results obtained by conventional ROC analysis and by manual and computerized statistical methods are analyzed. Different designs of phantoms are considered.

  1. DEIReconstructor: a software for diffraction enhanced imaging processing and tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Yuan, Qing-Xi; Huang, Wan-Xia; Zhu, Pei-Ping; Wu, Zi-Yu

    2014-10-01

    Diffraction enhanced imaging (DEI) has been widely applied in many fields, especially when imaging low-Z samples or when the difference in the attenuation coefficient between different regions in the sample is too small to be detected. Recent developments of this technique have presented a need for a new software package for data analysis. Here, the Diffraction Enhanced Image Reconstructor (DEIReconstructor), developed in Matlab, is presented. DEIReconstructor has a user-friendly graphical user interface and runs under any of the 32-bit or 64-bit Microsoft Windows operating systems, including XP and Win7. Many of its features support image preprocessing, the extraction of the absorption, refraction, and scattering information of diffraction enhanced imaging, and parallel-beam tomography reconstruction for DEI-CT. Furthermore, many other useful functions are also implemented in order to simplify the data analysis and the presentation of results. The compiled software package is freely available.

  2. LISIRD 2: Applying Standards and Open Source Software in Exploring and Serving Scientific Data

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D. M.; Ware Dewolfe, A.; Lindholm, C.; Pankratz, C. K.; Snow, M.; Woods, T. N.

    2009-12-01

    The LASP Interactive Solar IRradiance Datacenter (LISIRD), http://lasp.colorado.edu/lisird, seeks to provide exploration of and access to solar irradiance data, models and other related data. These irradiance datasets, from the SME, UARS, TIMED, and SORCE missions, are primarily a function of time and often also wavelength. Their measurements are typically made on a scale of seconds and derived products are provided at daily cadence. The first version of the LISIRD site was built using non-standard, proprietary software. The non-standard application structure and tight coupling to a variety of dataset representations made changes arduous and maintenance difficult. Eventually the software vendor decided to no longer support a critical software component, further decreasing the viability of the site. In LISIRD 2, through the application of the Java EE standard coupled with open source software to fetch and plot the data, the functionality of the original site is being improved while the code structure is being streamlined and simplified. With a relatively minimal effort, the new site can access and serve a greater variety of datasets in an easier fashion, and produce responsive, interactive plots of datasets overlaid and/or linked in time. And it does so using a significantly smaller code base that is, at the same time, much more flexible and extensible. In particular, LISIRD 2 heavily leverages powerful, flexible functionality provided by the Time Series Data Server (TSDS). The OPeNDAP-compliant TSDS supports requests for any data that are a function of time. It can support scalar, vector, and spectral data types. Through the use of the Unidata NetCDF-Java library and NcML, the TSDS supports multiple input and output formats and is easily extended to support more. It also supports a variety of filters that can be chained and applied to the data on the server before being delivered. TSDS thinning capabilities make it easy for clients to request appropriate data

  3. Web-based spatial analysis with the ILWIS open source GIS software and satellite images from GEONETCast

    NASA Astrophysics Data System (ADS)

    Lemmens, R.; Maathuis, B.; Mannaerts, C.; Foerster, T.; Schaeffer, B.; Wytzisk, A.

    2009-12-01

    fingertips of users around the globe. This user-friendly and low-cost information dissemination provides global information as a basis for decision-making in a number of critical areas, including public health, energy, agriculture, weather, water, climate, natural disasters and ecosystems. GEONETCast makes satellite images available via Digital Video Broadcast (DVB) technology. An OGC WMS interface and plug-ins that convert GEONETCast data streams allow an ILWIS user to integrate various distributed data sources with data stored locally on their machine. Our paper describes a use case in which ILWIS is used with GEONETCast satellite imagery for decision-making processes in Ghana. We also explain how the ILWIS software can be extended with additional functionality by means of plug-ins and outline our plans to implement other OGC standards, such as WCS and WPS, in the same context. The latter, especially, can be seen as a major step forward in terms of moving well-proven desktop-based processing functionality to the web. This enables the embedding of ILWIS functionality in Spatial Data Infrastructures or even its execution in scalable, on-demand cloud computing environments.

  4. A Monte Carlo approach for estimating measurement uncertainty using standard spreadsheet software.

    PubMed

    Chew, Gina; Walczyk, Thomas

    2012-03-01

    Despite the importance of stating the measurement uncertainty in chemical analysis, concepts are still not widely applied by the broader scientific community. The Guide to the expression of uncertainty in measurement approves the use of both the partial derivative approach and the Monte Carlo approach. There are two limitations to the partial derivative approach. Firstly, it involves the computation of first-order derivatives of each component of the output quantity. This requires some mathematical skills and can be tedious if the mathematical model is complex. Secondly, it is not able to predict the probability distribution of the output quantity accurately if the input quantities are not normally distributed. Knowledge of the probability distribution is essential to determine the coverage interval. The Monte Carlo approach performs random sampling from probability distributions of the input quantities; hence, there is no need to compute first-order derivatives. In addition, it gives the probability density function of the output quantity as the end result, from which the coverage interval can be determined. Here we demonstrate how the Monte Carlo approach can be easily implemented to estimate measurement uncertainty using a standard spreadsheet software program such as Microsoft Excel. It is our aim to provide the analytical community with a tool to estimate measurement uncertainty using software that is already widely available and that is so simple to apply that it can even be used by students with basic computer skills and minimal mathematical knowledge.
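
    The Monte Carlo procedure described above can also be illustrated outside a spreadsheet. The following minimal Python sketch (not the authors' Excel implementation; the measurement model and input distributions are invented for illustration) propagates input distributions through a model by random sampling and reports a 95% coverage interval.

      # Monte Carlo propagation of measurement uncertainty (illustrative sketch).
      import numpy as np

      rng = np.random.default_rng(seed=1)
      n = 100_000                                            # number of Monte Carlo trials

      # Hypothetical input quantities with assumed probability distributions.
      mass = rng.normal(loc=10.00, scale=0.02, size=n)       # g, normal
      volume = rng.uniform(low=4.95, high=5.05, size=n)      # mL, rectangular
      purity = rng.triangular(0.985, 0.995, 1.000, size=n)   # mass fraction, triangular

      # Measurement model: concentration = mass * purity / volume.
      conc = mass * purity / volume

      mean = conc.mean()
      low, high = np.percentile(conc, [2.5, 97.5])           # 95% coverage interval
      print(f"result = {mean:.4f} g/mL, 95% interval = [{low:.4f}, {high:.4f}]")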

  5. Starworld: Preparing Accountants for the Future: A Case-Based Approach to Teach International Financial Reporting Standards Using ERP Software

    ERIC Educational Resources Information Center

    Ragan, Joseph M.; Savino, Christopher J.; Parashac, Paul; Hosler, Jonathan C.

    2010-01-01

    International Financial Reporting Standards now constitute an important part of educating young professional accountants. This paper looks at a case based process to teach International Financial Reporting Standards using integrated Enterprise Resource Planning software. The case contained within the paper can be used within a variety of courses…

  6. Robust Intensity Standardization in Brain Magnetic Resonance Images.

    PubMed

    De Nunzio, Giorgio; Cataldo, Rosella; Carlà, Alessandra

    2015-12-01

    The paper is focused on a tiSsue-Based Standardization Technique (SBST) for magnetic resonance (MR) brain images. Magnetic resonance imaging intensities have no fixed tissue-specific numeric meaning, even within the same MRI protocol, for the same body region, or even for images of the same patient obtained on the same scanner at different times. This affects postprocessing tasks such as automatic segmentation or unsupervised/supervised classification methods, which depend strictly on the observed image intensities, compromising the accuracy and efficiency of many image analysis algorithms. A large number of MR images from public databases, belonging to healthy people and to patients with different degrees of neurodegenerative pathology, were employed together with synthetic MRIs. Combining both histogram and tissue-specific intensity information, a correspondence is obtained for each tissue across images. The novelty consists of computing three standardizing transformations for the three main brain tissues, one for each tissue class separately. In order to create a continuous intensity mapping, spline smoothing of the overall, slightly discontinuous piecewise-linear intensity transformation is performed. The robustness of the technique is assessed in a post hoc manner, by verifying that automatic segmentation of images before and after standardization gives high overlap (Dice index >0.9) for each tissue class, even across images coming from different sources. Furthermore, SBST efficacy is tested by evaluating if and how much it increases intertissue discrimination and by assessing the Gaussianity of tissue gray-level distributions before and after standardization. Some quantitative comparisons to existing approaches available in the literature are performed.
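
    As a rough illustration of the tissue-wise mapping described above (not the authors' SBST implementation; the landmark values are assumed), the sketch below builds a piecewise-linear intensity transformation from matched tissue landmarks and smooths it with a spline so the final mapping is continuous.

      # Landmark-based piecewise-linear intensity mapping with spline smoothing.
      import numpy as np
      from scipy.interpolate import UnivariateSpline

      def standardize(image, src_landmarks, ref_landmarks, smooth=1.0):
          """Map intensities so that src landmarks align with ref landmarks.

          src_landmarks / ref_landmarks: increasing intensity values (e.g.
          per-tissue means for CSF, grey and white matter) in the input and
          reference intensity scales.
          """
          # Piecewise-linear correspondence evaluated on a dense grid ...
          grid = np.linspace(image.min(), image.max(), 256)
          mapped = np.interp(grid, np.asarray(src_landmarks, float),
                             np.asarray(ref_landmarks, float))
          # ... then smoothed so the overall transformation is continuous.
          spline = UnivariateSpline(grid, mapped, s=smooth)
          return spline(image.ravel()).reshape(image.shape)

      # Hypothetical landmarks applied to a synthetic image:
      img = np.random.normal(100, 20, size=(64, 64))
      out = standardize(img, src_landmarks=[60, 100, 140], ref_landmarks=[50, 110, 160])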

  7. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

    Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets. At the time of writing, they have been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, data fitting, 3D image processing, vector image processing, precision device control (rotary stages, PZT stages, etc.), point cloud to surface reconstruction, volume rendering, and batch processing. The software package is currently used in a number of universities for teaching and research.

  8. Plume Ascent Tracker: Interactive Matlab software for analysis of ascending plumes in image data

    NASA Astrophysics Data System (ADS)

    Valade, S. A.; Harris, A. J. L.; Cerminara, M.

    2014-05-01

    This paper presents Matlab-based software designed to track and analyze an ascending plume as it rises above its source, in image data. It reads data recorded in various formats (video files, image files, or web-camera image streams), and at various wavelengths (infrared, visible, or ultra-violet). Using a set of filters which can be set interactively, the plume is first isolated from its background. A user-friendly interface then allows tracking of plume ascent and various parameters that characterize plume evolution during emission and ascent. These include records of plume height, velocity, acceleration, shape, volume, ash (fine-particle) loading, spreading rate, entrainment coefficient and inclination angle, as well as axial and radial profiles for radius and temperature (if data are radiometric). Image transformations (dilatation, rotation, resampling) can be performed to create new images with a vent-centered metric coordinate system. Applications may interest both plume observers (monitoring agencies) and modelers. For the first group, the software is capable of providing quantitative assessments of plume characteristics from image data, for post-event analysis or in near real-time analysis. For the second group, extracted data can serve as benchmarks for plume ascent models, and as inputs for cloud dispersal models. We here describe the software's tracking methodology and main graphical interfaces, using thermal infrared image data of an ascending volcanic ash plume at Santiaguito volcano.
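
    The core tracking step, isolating the plume from the background and logging its height frame by frame, can be pictured with a short sketch. This is not the published Matlab code; the threshold and pixel scale below are placeholder assumptions.

      # Frame-by-frame plume height estimation by thresholding (illustrative sketch).
      import numpy as np

      def plume_heights(frames, background, threshold=10.0, metres_per_pixel=5.0):
          """Return plume top height (metres above the bottom row) for each frame.

          frames: iterable of 2-D grayscale images; background: reference frame
          without a plume; threshold: minimum brightness difference marking plume.
          """
          heights = []
          for frame in frames:
              mask = (frame.astype(float) - background) > threshold  # plume pixels
              rows = np.where(mask.any(axis=1))[0]
              if rows.size == 0:
                  heights.append(0.0)
                  continue
              top_row = rows.min()                     # image row 0 is the top
              height_px = mask.shape[0] - 1 - top_row  # pixels above the bottom row
              heights.append(height_px * metres_per_pixel)
          return np.array(heights)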

  9. Development of a Standard for Verification and Validation of Software Used to Calculate Nuclear System Thermal Fluids Behavior

    SciTech Connect

    Richard R. Schultz; Edwin A. Harvego; Ryan L. Crane

    2010-05-01

    With the resurgence of nuclear power and increased interest in advanced nuclear reactors as an option to supply abundant energy without the associated greenhouse gas emissions of the more conventional fossil fuel energy sources, there is a need to establish internationally recognized standards for the verification and validation (V&V) of software used to calculate the thermal-hydraulic behavior of advanced reactor designs for both normal operation and hypothetical accident conditions. To address this need, ASME (American Society of Mechanical Engineers) Standards and Certification has established the V&V 30 Committee, under the responsibility of the V&V Standards Committee, to develop a consensus standard for verification and validation of software used for design and analysis of advanced reactor systems. The initial focus of this committee will be on the V&V of system analysis and computational fluid dynamics (CFD) software for nuclear applications. To limit the scope of the effort, the committee will further restrict its focus to software to be used in the licensing of High-Temperature Gas-Cooled Reactors. In this framework, the standard should conform to Nuclear Regulatory Commission (NRC) practices, procedures and methods for licensing of nuclear power plants as embodied in the United States (U.S.) Code of Federal Regulations and other pertinent documents such as Regulatory Guide 1.203, “Transient and Accident Analysis Methods” and NUREG-0800, “NRC Standard Review Plan”. In addition, the standard should be consistent with applicable sections of ASME Standard NQA-1 (“Quality Assurance Requirements for Nuclear Facility Applications (QA)”). This paper describes the general requirements for the V&V Standard, which include: (a) the definition of the operational and accident domain of a nuclear system that must be considered if the system is to be licensed, and (b) the corresponding calculational domain of the software that should encompass the nuclear operational

  10. Development of HydroImage, A User Friendly Hydrogeophysical Characterization Software

    SciTech Connect

    Mok, Chin Man; Hubbard, Susan; Chen, Jinsong; Suribhatla, Raghu; Kaback, Dawn Samara

    2014-01-29

    HydroImage, user-friendly software that utilizes high-resolution geophysical data for estimating hydrogeological parameters in subsurface strata, was developed under this grant. HydroImage runs on a personal computer platform to promote broad use by hydrogeologists to further understanding of subsurface processes that govern contaminant fate, transport, and remediation. The software provides estimates of hydrogeological properties over continuous volumes of the subsurface, whereas previous approaches only allow estimation at point locations; thus, this unique tool can be used to significantly enhance site conceptual models and improve the design and operation of remediation systems. The HydroImage technical approach uses statistical models to integrate geophysical data with borehole geological data and hydrological measurements to produce hydrogeological parameter estimates as 2-D or 3-D images.

  11. The image-guided surgery toolkit IGSTK: an open source C++ software toolkit.

    PubMed

    Enquobahrie, Andinet; Cheng, Patrick; Gary, Kevin; Ibanez, Luis; Gobbi, David; Lindseth, Frank; Yaniv, Ziv; Aylward, Stephen; Jomier, Julien; Cleary, Kevin

    2007-11-01

    This paper presents an overview of the image-guided surgery toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. It is intended for fast prototyping and development of image-guided surgery applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team has adopted lightweight software processes that emphasize safety and robustness while, at the same time, supporting geographically separated developers. A software process philosophically similar to agile software methods was adopted, emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining users' and developers' mailing lists, providing documentation (application programming interface reference document and book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences. PMID:17703338
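
    The state-machine discipline described above can be sketched in a few lines. This is a toy Python illustration rather than the toolkit's actual C++ classes, and the states and events are invented for the example: every request becomes an event, and only the transitions listed in the table are allowed.

      # Toy state machine guarding a tracker-like component (not IGSTK code).
      class TrackerComponent:
          # Allowed transitions: (current_state, event) -> next_state.
          TRANSITIONS = {
              ("Idle", "Initialize"): "Ready",
              ("Ready", "StartTracking"): "Tracking",
              ("Tracking", "StopTracking"): "Ready",
              ("Ready", "Shutdown"): "Idle",
          }

          def __init__(self):
              self.state = "Idle"

          def handle(self, event):
              nxt = self.TRANSITIONS.get((self.state, event))
              if nxt is None:
                  # Invalid requests are rejected instead of corrupting the state.
                  print(f"ignored '{event}' while in state '{self.state}'")
                  return
              self.state = nxt

      c = TrackerComponent()
      c.handle("StartTracking")   # ignored: not a valid transition from 'Idle'
      c.handle("Initialize")      # Idle -> Ready
      c.handle("StartTracking")   # Ready -> Tracking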

  12. Digital processing of side-scan sonar data with the Woods Hole image processing system software

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing and digitally mosaicking high- and low-resolution side-scan sonar data. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for processing side-scan sonar data. This report describes the steps required to process the collected data and to produce an image that has equal along- and across-track resol

  13. Digital image measurement of specimen deformation based on CCD cameras and Image J software: an application to human pelvic biomechanics

    NASA Astrophysics Data System (ADS)

    Jia, Yongwei; Cheng, Liming; Yu, Guangrong; Lou, Yongjian; Yu, Yan; Chen, Bo; Ding, Zuquan

    2008-03-01

    A method of digital image measurement of specimen deformation based on CCD cameras and Image J software was developed. This method was used to measure the biomechanical behavior of the human pelvis. Six cadaveric specimens from the third lumbar vertebra to the proximal third of the femur were tested. The specimens, which had no structural abnormalities, were dissected of all soft tissue, sparing the hip joint capsules and the ligaments of the pelvic ring and floor. Markers with a black dot on a white background were affixed to the key regions of the pelvis. Axial loading from the proximal lumbar spine was applied by MTS in increments from 0 N to 500 N, simulating the double-feet standing stance. Anterior and lateral images of the specimen were obtained through two CCD cameras. Digital 8-bit images were processed with Image J, digital image processing software that can be freely downloaded from the National Institutes of Health. The procedure included recognition of the digital markers, image inversion, sub-pixel reconstruction, image segmentation, and a center-of-mass algorithm based on the weighted average of pixel gray values. Vertical displacements of S1 (the first sacral vertebra) in the front view and micro-angular rotation of the sacroiliac joint in the lateral view were calculated according to the marker movement. The results of the digital image measurement were as follows: marker image correlation before and after deformation was excellent, with an average correlation coefficient of about 0.983. For the 768 × 576 pixel images (pixel size 0.68 mm × 0.68 mm), the precision of the displacement detected in our experiment was about 0.018 pixels and the relative error was about 1.11‰. The average vertical displacement of S1 of the pelvis was 0.8356 ± 0.2830 mm under a vertical load of 500 N and the average micro-angular rotation of the sacroiliac joint in the lateral view was 0.584 ± 0.221°. The load-displacement curves obtained from our optical measurement system
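
    The center-of-mass step of the procedure, a weighted average of pixel gray values inside a segmented marker region, can be written compactly. The sketch below is a generic illustration, not the authors' Image J workflow.

      # Sub-pixel marker location as an intensity-weighted centroid (illustrative).
      import numpy as np

      def marker_centroid(gray, mask):
          """gray: 2-D image; mask: boolean array selecting one marker's pixels."""
          ys, xs = np.nonzero(mask)
          weights = gray[ys, xs].astype(float)
          cx = np.sum(xs * weights) / np.sum(weights)
          cy = np.sum(ys * weights) / np.sum(weights)
          return cx, cy   # sub-pixel column and row of the marker centre

      # The displacement of a marker between the reference and loaded images
      # follows by subtracting the two centroids component-wise.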

  14. Onboard utilization of ground control points for image correction. Volume 4: Correlation analysis software design

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The software utilized for image correction accuracy measurement is described. The correlation analysis program is written to allow the user various tools to analyze different correlation algorithms. The algorithms were tested using LANDSAT imagery in two different spectral bands. Three classification algorithms are implemented.
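
    As an illustration of the kind of similarity measure such correlation algorithms evaluate (a generic sketch, not the documented implementation), normalized cross-correlation between a ground-control-point chip and a search window can be computed as follows.

      # Normalized cross-correlation of an image chip against a window (sketch).
      import numpy as np

      def ncc(chip, window):
          """chip, window: 2-D arrays of equal shape; returns a value in [-1, 1]."""
          a = chip - chip.mean()
          b = window - window.mean()
          denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
          return float((a * b).sum() / denom) if denom else 0.0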

  15. Informatics in radiology: automated structured reporting of imaging findings using the AIM standard and XML.

    PubMed

    Zimmerman, Stefan L; Kim, Woojin; Boonn, William W

    2011-01-01

    Quantitative and descriptive imaging data are a vital component of the radiology report and are frequently of paramount importance to the ordering physician. Unfortunately, current methods of recording these data in the report are both inefficient and error prone. In addition, the free-text, unstructured format of a radiology report makes aggregate analysis of data from multiple reports difficult or even impossible without manual intervention. A structured reporting work flow has been developed that allows quantitative data created at an advanced imaging workstation to be seamlessly integrated into the radiology report with minimal radiologist intervention. As an intermediary step between the workstation and the reporting software, quantitative and descriptive data are converted into an extensible markup language (XML) file in a standardized format specified by the Annotation and Image Markup (AIM) project of the National Institutes of Health Cancer Biomedical Informatics Grid. The AIM standard was created to allow image annotation data to be stored in a uniform machine-readable format. These XML files containing imaging data can also be stored on a local database for data mining and analysis. This structured work flow solution has the potential to improve radiologist efficiency, reduce errors, and facilitate storage of quantitative and descriptive imaging data for research. PMID:21357413
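
    The hand-off from workstation to report can be pictured as writing the measurements into a small XML document. The element names below are simplified placeholders, not the actual AIM schema.

      # Writing quantitative findings to an XML file (simplified, not the AIM schema).
      import xml.etree.ElementTree as ET

      annotation = ET.Element("ImageAnnotation", {"patientID": "ANON-001"})
      finding = ET.SubElement(annotation, "Finding", {"label": "liver lesion"})
      ET.SubElement(finding, "Measurement",
                    {"name": "longAxis", "value": "23.4", "unit": "mm"})
      ET.SubElement(finding, "Measurement",
                    {"name": "meanHU", "value": "38", "unit": "HU"})

      ET.ElementTree(annotation).write("finding.xml", xml_declaration=True,
                                       encoding="utf-8")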

  17. Sharing Images Intelligently: The Astronomical Visualization Metadata Standard

    NASA Astrophysics Data System (ADS)

    Hurt, Robert L.; Christensen, L.; Gauthier, A.

    2006-12-01

    The astronomical education and public outreach (EPO) community plays a key role in conveying the results of scientific research to the general public. A key product of EPO development is a variety of non-scientific public image resources, both derived from scientific observations and created as artistic visualizations of scientific results. This refers to general image formats such as JPEG, TIFF, PNG, and GIF, not scientific FITS datasets. Such resources are currently scattered across the internet in a variety of galleries and archives, but are not searchable in any coherent or unified way. Just as Virtual Observatory standards open up all data archives to a common query engine, the EPO community will benefit greatly from a similar mechanism for image search and retrieval. A new standard has been developed for astronomical imagery defining a common set of content fields suited to the needs of astronomical visualizations. This encompasses images derived from data, artist's conceptions, simulations, and photography, and can ultimately be extended to video products. The first generation of tools is now available to tag images with this metadata, which can be embedded in the image file using an XML-based format that functions similarly to a FITS header. As image collections are processed to include astronomy visualization metadata tags, extensive information providing educational context, credits, data sources, and even coordinate information will be readily accessible for uses spanning casual browsing, publication, and interactive media systems.

  18. Standard Fluorescent Imaging of Live Cells is Highly Genotoxic

    PubMed Central

    Ge, Jing; Wood, David K.; Weingeist, David M.; Prasongtanakij, Somsak; Navasumrit, Panida; Ruchirawat, Mathuros; Engelward, Bevin P.

    2013-01-01

    Fluorescence microscopy is commonly used for imaging live mammalian cells. Here, we describe studies aimed at revealing the potential genotoxic effects of standard fluorescence microscopy. To assess DNA damage, a high throughput platform for single cell gel electrophoresis is used (e.g., the CometChip). Light emitted through three standard filters was studied: a) violet light [340–380 nm], used to excite DAPI and other blue fluorophores, b) blue light [460–500 nm], commonly used to image GFP and Calcein AM, and c) green light [528–553 nm], useful for imaging red fluorophores. Results show that exposure of samples to light during imaging is indeed genotoxic, even when the selected wavelengths are outside the range known to induce significant levels of damage. Shorter excitation wavelengths and longer irradiation times lead to higher levels of DNA damage. We have also measured DNA damage in cells expressing enhanced green fluorescent protein (GFP) or stained with Calcein AM, a widely used green fluorophore. Data show that Calcein AM leads to a synergistic increase in the levels of DNA damage and that even cells that are not being directly imaged sustain significant DNA damage from exposure to indirect light. The nature of light-induced DNA damage during imaging was assessed using the Fpg glycosylase, an enzyme that enables quantification of oxidative DNA damage. Oxidative damage was evident in cells exposed to violet light. Furthermore, the Fpg glycosylase revealed the presence of oxidative DNA damage in blue-light-exposed cells for which DNA damage was not detected using standard analysis conditions. Taken together, the results of these studies call attention to the potential confounding effects of DNA damage induced by standard imaging conditions, and identify wavelength, exposure time and fluorophore as parameters that can be modulated to reduce light-induced DNA damage. PMID:23650257

  19. Creation of three-dimensional craniofacial standards from CBCT images

    NASA Astrophysics Data System (ADS)

    Subramanyan, Krishna; Palomo, Martin; Hans, Mark

    2006-03-01

    Low-dose three-dimensional Cone Beam Computed Tomography (CBCT) is becoming increasingly popular in the clinical practice of dental medicine. Two-dimensional Bolton Standards of dentofacial development are routinely used to identify deviations from normal craniofacial anatomy. With the advent of CBCT three-dimensional imaging, we propose a set of methods to extend these 2D Bolton Standards to anatomically correct, surface-based 3D standards that allow analysis of morphometric changes seen in the craniofacial complex. To create the 3D surface standards, we have implemented a series of steps: 1) converting bi-plane 2D tracings into sets of splines; 2) converting the 2D spline curves from bi-plane projections into 3D space curves; 3) creating labeled templates of facial and skeletal shapes; and 4) creating 3D average-surface Bolton standards. We have used datasets from patients scanned with a Hitachi MercuRay CBCT scanner, providing high-resolution and isotropic CT volume images, together with digitized Bolton Standards from age 3 to 18 years of lateral and frontal male, female and average tracings, which were converted into facial and skeletal 3D space curves. This new 3D standard will help in assessing shape variations due to aging in the young population and provide a reference for correcting facial anomalies in dental medicine.
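
    Step 2 of the pipeline, lifting bi-plane tracings into 3D, amounts to combining a lateral and a frontal projection that share a common vertical axis. The following sketch shows the idea under that simplifying assumption; it is not the authors' implementation.

      # Combining lateral (x, z) and frontal (y, z) tracings into a 3-D space
      # curve, assuming both projections share the same vertical (z) axis and
      # that z increases monotonically along each tracing (illustrative sketch).
      import numpy as np

      def biplane_to_3d(lateral_xz, frontal_yz):
          """lateral_xz: (n, 2) array of (x, z) points; frontal_yz: (m, 2) of (y, z)."""
          lateral_xz = np.asarray(lateral_xz, dtype=float)
          frontal_yz = np.asarray(frontal_yz, dtype=float)
          z = lateral_xz[:, 1]
          # Resample the frontal curve onto the lateral curve's z positions.
          y = np.interp(z, frontal_yz[:, 1], frontal_yz[:, 0])
          x = lateral_xz[:, 0]
          return np.column_stack([x, y, z])   # (n, 3) space curve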

  20. Customization of a standard practice management software program to satisfy the needs of a dental hygiene program.

    PubMed

    Baker, David L; Mills, Bernice A

    2003-01-01

    The University of New England (UNE) Dental Hygiene Program converted from a paper format to a digital format to manage daily dental hygiene clinic transactions. The use of this practice management software created new opportunities to enhance the program's teaching mission, monitor the progress of individual students, and facilitate the data collection necessary to meet accreditation standards. This report describes how this dental hygiene program customized a standard practice management software program to meet the specific requirements of a clinical teaching institution.

  1. New technique to count mosquito adults: using ImageJ software to estimate number of mosquito adults in a trap.

    PubMed

    Kesavaraju, Banugopan; Dickson, Sammie

    2012-12-01

    A new technique is described here to count mosquitoes using open-source software. We wanted to develop a protocol that would estimate the total number of mosquitoes from a picture using ImageJ. Adult mosquitoes from CO2-baited traps were spread on a tray and photographed. The total number of mosquitoes in a picture was estimated using various calibrations on ImageJ, and results were compared with manual counting to identify the ideal calibration. The average trap count was 1,541, and the average difference between the manual count and the best calibration was 174.11 ± 21.59, with 93% correlation. Subsequently, the contents of a trap were photographed 5 different times, shuffled between each picture to alter the picture pattern of the adult mosquitoes. The standard error among variations stayed below 50, indicating limited variation in total count between pictures of the same trap when the pictures were processed through ImageJ. These results indicate the software could be utilized efficiently to estimate the total number of mosquitoes from traps.
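
    The counting idea (threshold the photograph, label connected dark blobs, and correct blobs much larger than a single insect by their area) can be sketched as follows. This is a generic illustration rather than the published ImageJ calibration, and the threshold and single-insect area are assumptions.

      # Estimating insect counts from a tray photograph (illustrative sketch).
      import numpy as np
      from scipy import ndimage

      def estimate_count(gray, threshold=80, single_area_px=150):
          """gray: 2-D grayscale photo with dark mosquitoes on a light tray."""
          mask = gray < threshold                       # dark pixels = insects
          labels, n_blobs = ndimage.label(mask)         # connected components
          blob_areas = ndimage.sum(mask, labels, index=np.arange(1, n_blobs + 1))
          # Blobs much larger than one mosquito are assumed to be touching insects.
          return int(round(np.sum(np.maximum(blob_areas / single_area_px, 1))))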

  2. WHIPPET: a collaborative software environment for medical image processing and analysis

    NASA Astrophysics Data System (ADS)

    Hu, Yangqiu; Haynor, David R.; Maravilla, Kenneth R.

    2007-03-01

    While there are many publicly available software packages for medical image processing, making them available to end users in clinical and research labs remains non-trivial. An even more challenging task is to mix these packages to form pipelines that meet specific needs seamlessly, because each piece of software usually has its own input/output formats, parameter sets, and so on. To address these issues, we are building WHIPPET (Washington Heterogeneous Image Processing Pipeline EnvironmenT), a collaborative platform for integrating image analysis tools from different sources. The central idea is to develop a set of Python scripts which glue the different packages together and make it possible to connect them in processing pipelines. To achieve this, an analysis is carried out for each candidate package for WHIPPET, describing input/output formats, parameters, ROI description methods, scripting and extensibility and classifying its compatibility with other WHIPPET components as image file level, scripting level, function extension level, or source code level. We then identify components that can be connected in a pipeline directly via image format conversion. We set up a TWiki server for web-based collaboration so that component analysis and task request can be performed online, as well as project tracking, knowledge base management, and technical support. Currently WHIPPET includes the FSL, MIPAV, FreeSurfer, BrainSuite, Measure, DTIQuery, and 3D Slicer software packages, and is expanding. Users have identified several needed task modules and we report on their implementation.
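
    The glue-script idea, converting each package's output into a format the next package understands and chaining the steps, can be pictured with a short sketch. The tool names and flags below are invented for illustration and are not part of WHIPPET.

      # Chaining two hypothetical command-line tools via a shared intermediate
      # file format (illustrative; tool names and flags are invented).
      import pathlib
      import subprocess
      import tempfile

      def run_pipeline(input_image, output_image):
          workdir = pathlib.Path(tempfile.mkdtemp())
          converted = workdir / "converted.nii"
          # Step 1: convert the input into the common format both tools accept.
          subprocess.run(["convert_tool", str(input_image), str(converted)], check=True)
          # Step 2: run the analysis package on the converted file.
          subprocess.run(["segment_tool", str(converted), "--out", str(output_image)],
                         check=True)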

  3. A software tool to measure the geometric distortion in x-ray image systems

    NASA Astrophysics Data System (ADS)

    Prieto, Gabriel; Guibelalde, Eduardo; Chevalier, Margarita

    2010-04-01

    A software tool is presented to measure the geometric distortion in images obtained with X-ray systems; it provides a more objective method than the usual measurements made on a phantom image with rulers. In a first step, this software has been applied to mammography images and makes use of the grid included in the CDMAM phantom (University Hospital Nijmegen). For digital images, this software tool automatically locates the grid crossing points and obtains a set of corners (up to 237) that are used by the program to determine 6 different squares, at the top, bottom, left, right and central positions. The sixth square is the largest that can be fitted in the grid (the widest possible square). The distortion is calculated as ((length of left diagonal - length of right diagonal) / length of left diagonal) (%) for the six positions. The algorithm error is of the order of 0.3%. The method might be applied to other radiological systems without any major changes, by adjusting the program code to other phantoms. In this work a set of measurements for 54 CDMAM images, acquired on 11 different mammography systems from 6 manufacturers, is presented. We can conclude that the distortion of all systems is smaller than the recommended maximum distortion for primary displays (2%).
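
    The distortion figure itself is a one-line computation once the four corners of a nominally square grid cell have been located, as in this minimal sketch (the corner coordinates are hypothetical).

      # Geometric distortion from the corners of a (nominally square) grid cell.
      import math

      def distortion_percent(top_left, top_right, bottom_left, bottom_right):
          left_diag = math.dist(top_left, bottom_right)
          right_diag = math.dist(top_right, bottom_left)
          return 100.0 * (left_diag - right_diag) / left_diag

      # Hypothetical corner positions in mm:
      print(distortion_percent((0, 0), (100, 0.4), (0.2, 100), (100, 100)))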

  4. Capturing a failure of an ASIC in-situ, using infrared radiometry and image processing software

    NASA Technical Reports Server (NTRS)

    Ruiz, Ronald P.

    2003-01-01

    Failures in electronic devices can sometimes be tricky to locate, especially if they are buried inside radiation-shielded containers designed to work in outer space. Such was the case with a malfunctioning ASIC (Application Specific Integrated Circuit) that was drawing excessive power at a specific temperature during temperature cycle testing. To analyze the failure, infrared radiometry (thermography) was used in combination with image processing software to locate precisely where the power was being dissipated at the moment the failure took place. The IR imaging software was used to make the target and its background appear uniform in the image. As testing proceeded and the failure mode was reached, temperature changes revealed the precise location of the fault. The results gave the design engineers the information they needed to fix the problem. This paper describes the techniques and equipment used to accomplish this failure analysis.

  5. IHE cross-enterprise document sharing for imaging: interoperability testing software

    PubMed Central

    2010-01-01

    Background With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results In this paper we describe a software that is used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the elected design solutions. Conclusions EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for an easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specifications ambiguities, or to resolve implementations difficulties. PMID:20858241

  6. Name-Value Pair Specification For Image Data Headers And Logical Standards For Image Data Exchange

    NASA Astrophysics Data System (ADS)

    Prewitt, J. M.; Selfridge, Peter G.; Anderson, Alicia C.

    1984-08-01

    A chronic barrier to rapid progress in image processing and pattern recognition research is the lack of a universal and facile method of transferring image data between different facilities. Comparison of different approaches and algorithms on a common database is often the only means of establishing the validity of results. Data collected under known recording conditions are mandatory for improvement of analytic methodology, yet such valuable data are costly and time-consuming to obtain. Therefore, the sharing and exchange of image data may be expedient. The proliferation of different image data formats has compounded the problem of exchange. The establishment of logical formats and standards for images and image data headers is the first step towards dissolving this barrier. This paper presents initial recommendations of the IEEE Computer Society PAMI (Pattern Analysis and Machine Intelligence) and CompMed (Computational Medicine) Technical Committees' Database Subcommittees on the first of a series of digital image data standards.

  7. 3D thermography imaging standardization technique for inflammation diagnosis

    NASA Astrophysics Data System (ADS)

    Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

    2005-01-01

    We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical digital infrared thermal imaging is a very sensitive and reliable means of graphically mapping and displaying skin surface temperature. It allows doctors to visualise in colour and quantify temperature changes in the skin surface. The spectrum of colours indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Here, our technique is based on the combination of a visual 3D imaging technique and a thermal imaging technique, which maps the 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.

  8. Grid-less imaging with antiscatter correction software in 2D mammography: the effects on image quality and MGD under a partial virtual clinical validation study

    NASA Astrophysics Data System (ADS)

    Van Peteghem, Nelis; Bemelmans, Frédéric; Bramaje Adversalo, Xenia; Salvagnini, Elena; Marshall, Nicholas; Bosmans, Hilde; Van Ongeval, Chantal

    2016-03-01

    This work investigated the effect of the grid-less acquisition mode with scatter correction software developed by Siemens Healthcare (PRIME mode) on image quality and mean glandular dose (MGD) in a comparative study against a standard mammography system with a grid. Image quality was technically quantified with contrast-detail (c-d) analysis and by calculating detectability indices (d') using a non-prewhitening with eye filter (NPWE) model observer. MGD was estimated technically using slabs of PMMA and clinically on a set of 11439 patient images. The c-d analysis gave similar results for all mammographic systems examined, although the d' values were slightly lower for the system with PRIME mode when compared to the same system in standard mode (-2.8% to -5.7%, depending on the PMMA thickness). The MGD values corresponding to the PMMA measurements with automatic exposure control indicated a dose reduction of 11.0% to 20.8% for the system with PRIME mode compared to the same system without PRIME mode. The largest dose reductions corresponded to the thinnest PMMA thicknesses. The results of the clinical dosimetry study showed an overall population-averaged dose reduction of 11.6% (up to 27.7% for thinner breasts) for PRIME mode compared to standard mode for breast thicknesses from 20 to 69 mm. These technical image quality measures were then supported by a clinically oriented study in which simulated clusters of microcalcifications and masses were inserted into patient images and read by radiologists in an AFROC study to quantify their detectability. In line with the technical investigation, no significant difference was found between the two imaging modes (p-value 0.95).

  9. Image contrast enhancement based on a local standard deviation model

    SciTech Connect

    Chang, Dah-Chung; Wu, Wen-Rong

    1996-12-31

    The adaptive contrast enhancement (ACE) algorithm is a widely used image enhancement method, which needs a contrast gain to adjust the high frequency components of an image. In the literature, the gain is usually either inversely proportional to the local standard deviation (LSD) or a constant. But these choices cause two problems in practical applications, i.e., noise overenhancement and ringing artifacts. In this paper a new gain is developed based on Hunt's Gaussian image model to prevent the two defects. The new gain is a nonlinear function of the LSD and has the desired characteristic of emphasizing the LSD regions in which details are concentrated. We have applied the new ACE algorithm to chest x-ray images and the simulations show the effectiveness of the proposed algorithm.
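
    The ACE scheme splits an image into a local mean and a high-frequency residual and amplifies the residual with a gain that depends on the LSD. The sketch below illustrates that structure with a simple bounded gain; it is not the specific nonlinear gain derived in the paper.

      # Adaptive contrast enhancement with an LSD-dependent gain (illustrative).
      import numpy as np
      from scipy.ndimage import uniform_filter

      def ace(image, window=15, target_sd=30.0, max_gain=3.0):
          img = image.astype(float)
          local_mean = uniform_filter(img, size=window)
          local_var = uniform_filter(img ** 2, size=window) - local_mean ** 2
          local_sd = np.sqrt(np.clip(local_var, 0, None))
          # The gain grows where the LSD is small, but is capped to avoid
          # over-enhancing noise in flat regions and ringing at strong edges.
          gain = np.clip(target_sd / (local_sd + 1e-6), 1.0, max_gain)
          return local_mean + gain * (img - local_mean)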

  10. Stain Specific Standardization of Whole-Slide Histopathological Images.

    PubMed

    Bejnordi, Babak Ehteshami; Litjens, Geert; Timofeeva, Nadya; Otte-Höller, Irene; Homeyer, André; Karssemeijer, Nico; van der Laak, Jeroen A W M

    2016-02-01

    Variations in the color and intensity of hematoxylin and eosin (H&E) stained histological slides can potentially hamper the effectiveness of quantitative image analysis. This paper presents a fully automated algorithm for standardization of whole-slide histopathological images to reduce the effect of these variations. The proposed algorithm, called whole-slide image color standardizer (WSICS), utilizes color and spatial information to classify the image pixels into different stain components. The chromatic and density distributions for each of the stain components in the hue-saturation-density color model are aligned to match the corresponding distributions from a template whole-slide image (WSI). The performance of the WSICS algorithm was evaluated on two datasets. The first originated from 125 H&E stained WSIs of lymph nodes, sampled from 3 patients, and stained in 5 different laboratories on different days of the week. The second comprised 30 H&E stained WSIs of rat liver sections. The results of qualitative and quantitative evaluations using the first dataset demonstrate that the WSICS algorithm outperforms competing methods in terms of achieving color constancy. The WSICS algorithm consistently yields the smallest standard deviation and coefficient of variation of the normalized median intensity measure. Using the second dataset, we evaluated the impact of our algorithm on the performance of an already published necrosis quantification system. The performance of this system was significantly improved by utilizing the WSICS algorithm. The results of the empirical evaluations collectively demonstrate the potential contribution of the proposed standardization algorithm to improved diagnostic accuracy and consistency in computer-aided diagnosis for histopathology data.

  11. HOLON/CADSE: integrating open software standards and formal methods to generate guideline-based decision support agents.

    PubMed Central

    Silverman, B. G.; Sokolsky, O.; Tannen, V.; Wong, A.; Lang, L.; Khoury, A.; Campbell, K.; Qiang, C.; Sahuguet, A.

    1999-01-01

    This paper describes the efforts of a consortium that is trying to develop and validate formal methods and a meta-environment for authoring, checking, and maintaining a large repository of machine executable practice guidelines. The goal is to integrate and extend a number of open software standards so that guidelines in the meta-environment become a resource that any vendor can plug their applications into and run in their proprietary environment provided they conform to the interface standards. PMID:10566502

  12. MedXViewer: an extensible web-enabled software package for medical imaging

    NASA Astrophysics Data System (ADS)

    Looney, P. T.; Young, K. C.; Mackenzie, Alistair; Halling-Brown, Mark D.

    2014-03-01

    MedXViewer (Medical eXtensible Viewer) is an application designed to allow workstation-independent, PACS-less viewing and interaction with anonymised medical images (e.g. observer studies). The application was initially implemented for use in digital mammography and tomosynthesis but the flexible software design allows it to be easily extended to other imaging modalities. Regions of interest can be identified by a user and any associated information about a mark, an image or a study can be added. The questions and settings can be easily configured depending on the need of the research allowing both ROC and FROC studies to be performed. The extensible nature of the design allows for other functionality and hanging protocols to be available for each study. Panning, windowing, zooming and moving through slices are all available while modality-specific features can be easily enabled e.g. quadrant zooming in mammographic studies. MedXViewer can integrate with a web-based image database allowing results and images to be stored centrally. The software and images can be downloaded remotely from this centralised data-store. Alternatively, the software can run without a network connection where the images and results can be encrypted and stored locally on a machine or external drive. Due to the advanced workstation-style functionality, the simple deployment on heterogeneous systems over the internet without a requirement for administrative access and the ability to utilise a centralised database, MedXViewer has been used for running remote paper-less observer studies and is capable of providing a training infrastructure and co-ordinating remote collaborative viewing sessions (e.g. cancer reviews, interesting cases).

  13. White Paper: Access to Standard Computers, Software, and Information Systems by Persons with Disabilities. Revised, Version 2.0.

    ERIC Educational Resources Information Center

    Vanderheiden, Gregg C.

    The paper focuses on low cost and no cost methods to allow access and use (via specialized interface and display aids) by the disabled of standard unmodified computers and of microcomputer software systems becoming increasingly common in daily life. First, relevant characteristics of persons with movement, sensory, hearing, or cognitive…

  14. [CASTOR-Radiology: software of management in a Unit of Medical Imaging: use in the CHU of Tours].

    PubMed

    Bertrand, P; Rouleau, P; Alison, D; Bristeau, M; Minard, P; Saad, B

    1993-01-01

    Despite the large volume of information circulating in radiology departments, very few of them are currently computerised, although computer processing is developing rapidly in hospitals, encouraged by the installation of PMSI. This article illustrates the example of an imaging department management software package: CASTOR-Radiologie. Computerisation of part of the Hospital Information System (HIS) must allow an improvement in the efficacy of the service rendered, must reliably reflect the department's activity and must be able to monitor running costs. CASTOR-Radiologie was developed in conformity with standard national specifications defined by the Public Hospitals Department of the French Ministry of Health. The functions of this software are: unique patient identification (HIS base); management of examination requests, allowing a rapid reply to clinicians' requests; "real-time" follow-up of patients in the department, saving time for secretaries and technicians; medical files and file analysis, allowing analysis of diagnostic strategies and quality control; edition of analytical tables of the department's activity compatible with the PMSI procedures catalogue, allowing optimisation of the use of limited resources; and aid to the management of human, equipment and consumable resources. Links with other hospital computers raise organisational rather than technical problems, but have been planned for in the CASTOR-Radiologie software. This new tool was very well accepted by the personnel.

  15. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    NASA Astrophysics Data System (ADS)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

    Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in photogrammetry and computer vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they constitute an attractive 3D digitization approach; consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can be easily used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open source software and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Given the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method to generate three-dimensional models. Much research has been carried out to identify a suitable software and algorithm to achieve an accurate and complete model, but little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is the deliberation and introduction of an appropriate combination of a sensor and software to provide a complete model with the highest accuracy. To do this, different software, used in previous studies, were compared and

  16. The Spectral Image Processing System (SIPS): Software for integrated analysis of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.

    1992-01-01

    The Spectral Image Processing System (SIPS) is a software package developed by the Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, in response to a perceived need to provide integrated tools for analysis of imaging spectrometer data both spectrally and spatially. SIPS was specifically designed to deal with data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the High Resolution Imaging Spectrometer (HIRIS), but was tested with other datasets including the Geophysical and Environmental Research Imaging Spectrometer (GERIS), GEOSCAN images, and Landsat TM. SIPS was developed using the 'Interactive Data Language' (IDL). It takes advantage of high speed disk access and fast processors running under the UNIX operating system to provide rapid analysis of entire imaging spectrometer datasets. SIPS allows analysis of single or multiple imaging spectrometer data segments at full spatial and spectral resolution. It also allows visualization and interactive analysis of image cubes derived from quantitative analysis procedures such as absorption band characterization and spectral unmixing. SIPS consists of three modules: SIPS Utilities, SIPS_View, and SIPS Analysis. SIPS version 1.1 is described below.

  17. Oxygen octahedra picker: A software tool to extract quantitative information from STEM images.

    PubMed

    Wang, Yi; Salzberger, Ute; Sigle, Wilfried; Eren Suyolcu, Y; van Aken, Peter A

    2016-09-01

    In perovskite oxide based materials and hetero-structures there are often strong correlations between oxygen octahedral distortions and functionality. Thus, atomistic understanding of the octahedral distortion, which requires accurate measurements of atomic column positions, will greatly help to engineer their properties. Here, we report the development of a software tool to extract quantitative information of the lattice and of BO6 octahedral distortions from STEM images. Center-of-mass and 2D Gaussian fitting methods are implemented to locate positions of individual atom columns. The precision of atomic column distance measurements is evaluated on both simulated and experimental images. The application of the software tool is demonstrated using practical examples. PMID:27344044
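
    Column positions are typically refined by fitting a 2D Gaussian to a small patch around each intensity peak. A minimal sketch with SciPy follows; it is not the published tool's code, and the patch size and initial guesses are assumptions.

      # Sub-pixel atom-column position from a 2-D Gaussian fit (illustrative).
      import numpy as np
      from scipy.optimize import curve_fit

      def gauss2d(coords, amp, x0, y0, sigma, offset):
          x, y = coords
          return offset + amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

      def fit_column(patch):
          """patch: small 2-D intensity array centred roughly on one atom column."""
          ny, nx = patch.shape
          y, x = np.mgrid[0:ny, 0:nx]
          p0 = [patch.max() - patch.min(), nx / 2, ny / 2, 2.0, patch.min()]
          popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), patch.ravel(), p0=p0)
          return popt[1], popt[2]   # refined (x0, y0) in pixel units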

  18. Luminosity and contrast normalization in color retinal images based on standard reference image

    NASA Astrophysics Data System (ADS)

    S. Varnousfaderani, Ehsan; Yousefi, Siamak; Belghith, Akram; Goldbaum, Michael H.

    2016-03-01

    Color retinal images are used, manually or automatically, for diagnosis and for monitoring the progression of retinal diseases. Color retinal images have large luminosity and contrast variability within and across images due to the large natural variations in retinal pigmentation and complex imaging setups. The quality of retinal images may affect the performance of automatic screening tools; therefore, different normalization methods have been developed to make the data uniform before applying any further analysis or processing. In this paper we propose a new, reliable method to remove non-uniform illumination in retinal images and improve their contrast based on the contrast of a reference image. The non-uniform illumination is removed by normalizing the luminance image using the local mean and standard deviation. The contrast is then enhanced by shifting the histograms of the uniformly illuminated retinal image toward the histograms of the reference image so that they have similar histogram peaks. This process improves the contrast without changing the inter-correlation of pixels in the different color channels. In compliance with the way humans perceive color, the uniform color space LUV is used for normalization. The proposed method was widely tested on a large dataset of retinal images with different pathologies present, such as exudates, lesions, hemorrhages and cotton-wool spots, and under different illumination conditions and imaging setups. Results show that the proposed method successfully equalizes illumination and enhances the contrast of retinal images without adding any extra artifacts.
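
    The contrast step described above, shifting an image histogram so that its peak coincides with the peak of the reference image's histogram, can be illustrated per channel as follows. This is a simplified sketch; the LUV conversion and the local illumination correction are omitted.

      # Shifting an image's histogram peak toward a reference image's peak (sketch).
      import numpy as np

      def match_histogram_peak(channel, reference, bins=256):
          """channel, reference: 2-D arrays of one color channel in the 0-255 range."""
          hist_img, edges = np.histogram(channel, bins=bins, range=(0, 255))
          hist_ref, _ = np.histogram(reference, bins=bins, range=(0, 255))
          centers = (edges[:-1] + edges[1:]) / 2
          # Offset that aligns the most populated intensity bins of the two images.
          shift = centers[np.argmax(hist_ref)] - centers[np.argmax(hist_img)]
          return np.clip(channel.astype(float) + shift, 0, 255)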

  19. TiLIA: a software package for image analysis of firefly flash patterns.

    PubMed

    Konno, Junsuke; Hatta-Ohashi, Yoko; Akiyoshi, Ryutaro; Thancharoen, Anchana; Silalom, Somyot; Sakchoowong, Watana; Yiu, Vor; Ohba, Nobuyoshi; Suzuki, Hirobumi

    2016-05-01

    As flash signaling patterns of fireflies are species specific, signal-pattern analysis is important for understanding this system of communication. Here, we present time-lapse image analysis (TiLIA), a free open-source software package for signal and flight pattern analyses of fireflies that uses video-recorded image data. TiLIA enables flight path tracing of individual fireflies and provides frame-by-frame coordinates and light intensity data. As an example of TiLIA capabilities, we demonstrate flash pattern analysis of the fireflies Luciola cruciata and L. lateralis during courtship behavior. PMID:27069594

  20. 2D-CELL: image processing software for extraction and analysis of 2-dimensional cellular structures

    NASA Astrophysics Data System (ADS)

    Righetti, F.; Telley, H.; Leibling, Th. M.; Mocellin, A.

    1992-01-01

    2D-CELL is a software package for processing and analyzing photographic images of cellular structures in a largely interactive way. Starting from a binary digitized image, the programs extract the line network (skeleton) of the structure and determine the graph representation that best models it. Provision is made for manually correcting defects such as incorrect node positions or dangling bonds. A suitable algorithm then retrieves the polygonal contours which define individual cells (local boundary curvatures are neglected for simplicity). Using elementary analytical geometry relations, a range of metric and topological parameters describing the population are then computed, organized into statistical distributions and graphically displayed.
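
    The skeleton-extraction step can be illustrated with standard image-processing primitives. The sketch below (assuming scikit-image and SciPy are available; it is not the 2D-CELL code) skeletonizes a binary structure and marks junction pixels, which correspond to the nodes of the graph representation.

      # Extracting the line network (skeleton) of a binary cell image (illustrative).
      import numpy as np
      from scipy.ndimage import convolve
      from skimage.morphology import skeletonize

      def skeleton_nodes(binary_image):
          """Return the skeleton and the coordinates of its junction points."""
          skel = skeletonize(binary_image > 0)
          # Count the 8-connected skeleton neighbours of every skeleton pixel.
          kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
          neighbours = convolve(skel.astype(int), kernel, mode="constant")
          nodes = skel & (neighbours >= 3)           # branch points of the network
          return skel, np.argwhere(nodes)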

  1. TiLIA: a software package for image analysis of firefly flash patterns.

    PubMed

    Konno, Junsuke; Hatta-Ohashi, Yoko; Akiyoshi, Ryutaro; Thancharoen, Anchana; Silalom, Somyot; Sakchoowong, Watana; Yiu, Vor; Ohba, Nobuyoshi; Suzuki, Hirobumi

    2016-05-01

    As flash signaling patterns of fireflies are species specific, signal-pattern analysis is important for understanding this system of communication. Here, we present time-lapse image analysis (TiLIA), a free open-source software package for signal and flight pattern analyses of fireflies that uses video-recorded image data. TiLIA enables flight path tracing of individual fireflies and provides frame-by-frame coordinates and light intensity data. As an example of TiLIA capabilities, we demonstrate flash pattern analysis of the fireflies Luciola cruciata and L. lateralis during courtship behavior.

  2. SOFI Simulation Tool: A Software Package for Simulating and Testing Super-Resolution Optical Fluctuation Imaging

    PubMed Central

    Sharipov, Azat; Geissbuehler, Stefan; Leutenegger, Marcel; Vandenberg, Wim; Dedecker, Peter; Hofkens, Johan; Lasser, Theo

    2016-01-01

    Super-resolution optical fluctuation imaging (SOFI) allows one to perform sub-diffraction fluorescence microscopy of living cells. By analyzing the acquired image sequence with an advanced correlation method, i.e. a high-order cross-cumulant analysis, super-resolution in all three spatial dimensions can be achieved. Here we introduce a software tool for a simple qualitative comparison of SOFI images under simulated conditions considering parameters of the microscope setup and essential properties of the biological sample. This tool incorporates SOFI and STORM algorithms, displays and describes the SOFI image processing steps in a tutorial-like fashion. Fast testing of various parameters simplifies the parameter optimization prior to experimental work. The performance of the simulation tool is demonstrated by comparing simulated results with experimentally acquired data. PMID:27583365

  3. SOFI Simulation Tool: A Software Package for Simulating and Testing Super-Resolution Optical Fluctuation Imaging.

    PubMed

    Girsault, Arik; Lukes, Tomas; Sharipov, Azat; Geissbuehler, Stefan; Leutenegger, Marcel; Vandenberg, Wim; Dedecker, Peter; Hofkens, Johan; Lasser, Theo

    2016-01-01

    Super-resolution optical fluctuation imaging (SOFI) allows one to perform sub-diffraction fluorescence microscopy of living cells. By analyzing the acquired image sequence with an advanced correlation method, i.e. a high-order cross-cumulant analysis, super-resolution in all three spatial dimensions can be achieved. Here we introduce a software tool for a simple qualitative comparison of SOFI images under simulated conditions considering parameters of the microscope setup and essential properties of the biological sample. This tool incorporates SOFI and STORM algorithms, displays and describes the SOFI image processing steps in a tutorial-like fashion. Fast testing of various parameters simplifies the parameter optimization prior to experimental work. The performance of the simulation tool is demonstrated by comparing simulated results with experimentally acquired data. PMID:27583365
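
    As a rough illustration of the correlation analysis mentioned above, a second-order SOFI auto-cumulant can be computed in a few lines; the published tool additionally uses higher-order cross-cumulants between neighbouring pixels, which this sketch omits.

    ```python
    # Not the published SOFI Simulation Tool; a bare-bones second-order SOFI auto-cumulant.
    import numpy as np

    def sofi2(stack, lag=1):
        """stack: (T, H, W) fluorescence image sequence; returns a 2nd-order SOFI image."""
        delta = stack - stack.mean(axis=0)                  # fluctuations about the mean
        # time-averaged product of fluctuations separated by `lag` frames
        return (delta[:-lag] * delta[lag:]).mean(axis=0)
    ```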

  4. IBIS integrated biological imaging system: electron micrograph image-processing software running on Unix workstations.

    PubMed

    Flifla, M J; Garreau, M; Rolland, J P; Coatrieux, J L; Thomas, D

    1992-12-01

    'IBIS' is a set of computer programs concerned with the processing of electron micrographs, with particular emphasis on the requirements for structural analyses of biological macromolecules. The software is written in FORTRAN 77 and runs on Unix workstations. A description of the various functions and the implementation mode is given. Some examples illustrate the user interface.

  5. Performing Quantitative Imaging Acquisition, Analysis and Visualization Using the Best of Open Source and Commercial Software Solutions

    PubMed Central

    Shenoy, Shailesh M.

    2016-01-01

    A challenge in any imaging laboratory, especially one that uses modern techniques, is to achieve a sustainable and productive balance between using open source and commercial software to perform quantitative image acquisition, analysis and visualization. In addition to considering the expense of software licensing, one must consider factors such as the quality and usefulness of the software’s support, training and documentation. Also, one must consider the reproducibility with which multiple people generate results using the same software to perform the same analysis, how one may distribute their methods to the community using the software and the potential for achieving automation to improve productivity. PMID:27516727

  6. Geoscience data standards, software implementations, and the Internet. Where we came from and where we might be going.

    NASA Astrophysics Data System (ADS)

    Blodgett, D. L.

    2014-12-01

    Geographic information science and the coupled database and software systems that have grown from it have been evolving since the early 1990s. The multi-file shapefile package, invented early in this evolution, is an example of a highly generalized file format that can be used as an archival, interchange, and run-time format. There are other formats, such as GeoTIFF and NetCDF, that have similar characteristics. These de facto standard formats (in contrast to formally defined and published standards), while not initially designed for machine-readable web services, are used in them extensively. Relying on these formats allows legacy software to be adapted to web services, but may require complicated software development to handle dynamic introspection of these legacy file formats' metadata. A generalized system of web-service types that offers archive, interchange, and run-time capabilities based on commonly implemented file formats and established web-service specifications has emerged from exemplar implementations. For example, an Open Geospatial Consortium (OGC) Web Feature Service is used to serve sites or model polygons and an OGC Sensor Observation Service provides time series data for the sites. The broad system of data formats, web-service types, and freely available software that implements the system will be described. The presentation will include a perspective on the future of this basic system and how it relates to scientific domain-specific information models such as the Open Geospatial Consortium standards for geographic, hydrologic, and hydrogeologic data.
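
    A hypothetical client-side example of the web-service pattern described above, using OWSLib to query a Web Feature Service; the endpoint URL and feature type name are placeholders, not real services.

    ```python
    # Hypothetical use of an OGC Web Feature Service; URL and type name are placeholders.
    from owslib.wfs import WebFeatureService

    wfs = WebFeatureService(url="https://example.org/geoserver/wfs", version="1.1.0")
    print(list(wfs.contents))                       # feature types advertised by the service

    # Download up to 10 site polygons as GML for local use
    response = wfs.getfeature(typename=["ns:monitoring_sites"], maxfeatures=10)
    with open("sites.gml", "wb") as f:
        f.write(response.read())
    ```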

  7. Image 100 procedures manual development: Applications system library definition and Image 100 software definition

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Decell, H. P., Jr.

    1975-01-01

    An outline for an Image 100 procedures manual for Earth Resources Program image analysis was developed which sets forth guidelines that provide a basis for the preparation and updating of an Image 100 Procedures Manual. The scope of the outline was limited to definition of general features of a procedures manual together with special features of an interactive system. Computer programs were identified which should be implemented as part of an applications oriented library for the system.

  8. Accuracy and reliability of linear measurements using 3-dimensional computed tomographic imaging software for Le Fort I Osteotomy.

    PubMed

    Gaia, Bruno Felipe; Pinheiro, Lucas Rodrigues; Umetsubo, Otávio Shoite; Santos, Oseas; Costa, Felipe Ferreira; Cavalcanti, Marcelo Gusmão Paraíso

    2014-03-01

    Our purpose was to compare the accuracy and reliability of linear measurements for Le Fort I osteotomy using volume rendering software. We studied 11 dried skulls and used cone-beam computed tomography (CT) to generate 3-dimensional images. Linear measurements were based on craniometric anatomical landmarks that were predefined as specifically used for Le Fort I osteotomy, and identified twice each by 2 radiologists, independently, using Dolphin imaging version 11.5.04.35. A third examiner then made physical measurements using digital calipers. There was a significant difference between Dolphin imaging and the gold standard, particularly in the pterygoid process. The largest difference was 1.85 mm (LLpPtg L). The mean differences between the physical and the 3-dimensional linear measurements ranged from -0.01 to 1.12 mm for examiner 1, and 0 to 1.85 mm for examiner 2. Interexaminer analysis ranged from 0.51 to 0.93. Intraexaminer correlation coefficients ranged from 0.81 to 0.96 and 0.57 to 0.92, for examiners 1 and 2, respectively. We conclude that the Dolphin imaging should be used sparingly during Le Fort I osteotomy.

  9. [Development of DICOM image viewing software for efficient image reading and evaluation of distributed server system for diagnostic environment].

    PubMed

    Ishikawa, K

    2000-12-01

    To construct an efficient diagnostic environment using computer displays, the author investigated the time of network transmission using clinical images. In our hospital, we introduced optical-fiber 100Base-FX Ethernet connections between 22 HIS segments and one RIS segment. Although Ethernet architecture is inexpensive, the speed of image transmission becomes 2371 KB/sec (4.6 CT slices/sec) in the RIS segment and 996 KB/sec (1.9 CT slices/sec) from the RIS segment to HIS segments. Because one examination is transmitted in one minute, it does not disturb image reading. In addition, a distributed server system using inexpensive personal computers helps in constructing an efficient system. This investigation showed that commercially based Digital Imaging and Communications in Medicine (DICOM) servers and RSNA Central Test Node servers are not so different in transmission speed. The author programmed and developed DICOM transmission and viewing software for Macintosh computers. This viewer includes two inventions, the dynamic tiling window system (DTWS) and window binding mode (WBM). On DTWS, windows, tiles, and images are independent objects, which are movable and resizable. The tile matrix is changeable by mouse dragging, which realizes suitable tile rectangles for wide-low or narrow-high images. The arranging window tool prevents windows from scattering. Using WBM, any operation affects each window similarly. This means that the relationship of compared images is always equivalent. DTWS and WBM contribute greatly to a filmless diagnostic environment.

  10. Automated Scoring of Chromogenic Media for Detection of Methicillin-Resistant Staphylococcus aureus by Use of WASPLab Image Analysis Software.

    PubMed

    Faron, Matthew L; Buchan, Blake W; Vismara, Chiara; Lacchini, Carla; Bielli, Alessandra; Gesu, Giovanni; Liebregts, Theo; van Bree, Anita; Jansz, Arjan; Soucy, Genevieve; Korver, John; Ledeboer, Nathan A

    2016-03-01

    Recently, systems have been developed to create total laboratory automation for clinical microbiology. These systems allow for the automation of specimen processing, specimen incubation, and imaging of bacterial growth. In this study, we used the WASPLab to validate software that discriminates and segregates positive and negative chromogenic methicillin-resistant Staphylococcus aureus (MRSA) plates by recognition of pigmented colonies. A total of 57,690 swabs submitted for MRSA screening were enrolled in the study. Four sites enrolled specimens following their standard of care. Chromogenic agar used at these sites included MRSASelect (Bio-Rad Laboratories, Redmond, WA), chromID MRSA (bioMérieux, Marcy l'Etoile, France), and CHROMagar MRSA (BD Diagnostics, Sparks, MD). Specimens were plated and incubated using the WASPLab. The digital camera took images at 0 and 16 to 24 h and the WASPLab software determined the presence of positive colonies based on a hue, saturation, and value (HSV) score. If the HSV score fell within a defined threshold, the plate was called positive. The performance of the digital analysis was compared to manual reading. Overall, the digital software had a sensitivity of 100% and a specificity of 90.7%, with the specificity ranging between 90.0% and 96.0% across all sites. The results were similar using the three different agars, with a sensitivity of 100% and specificity ranging between 90.7% and 92.4%. These data demonstrate that automated digital analysis can be used to accurately sort positive from negative chromogenic agar cultures regardless of the pigmentation produced. PMID:26719443
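
    A simplified illustration of hue/saturation/value scoring for pigmented colonies; the threshold values below are invented for the example and are not the WASPLab settings.

    ```python
    # Simplified HSV-window scoring of a plate image; thresholds are illustrative only.
    import numpy as np
    from skimage.color import rgb2hsv

    def plate_is_positive(rgb_image, hue_range=(0.85, 0.98), min_sat=0.3,
                          min_val=0.2, min_pixels=500):
        """Call a plate positive if enough pixels fall inside the target HSV window."""
        hsv = rgb2hsv(rgb_image)                    # all channels scaled to [0, 1]
        mask = ((hsv[..., 0] >= hue_range[0]) & (hsv[..., 0] <= hue_range[1]) &
                (hsv[..., 1] >= min_sat) & (hsv[..., 2] >= min_val))
        return int(mask.sum()) >= min_pixels
    ```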

  11. Space Station Software Issues

    NASA Technical Reports Server (NTRS)

    Voigt, S. (Editor); Beskenis, S. (Editor)

    1985-01-01

    Issues in the development of software for the Space Station are discussed. Software acquisition and management, software development environment, standards, information system support for software developers, and a future software advisory board are addressed.

  12. Development of an Open Source Image-Based Flow Modeling Software - SimVascular

    NASA Astrophysics Data System (ADS)

    Updegrove, Adam; Merkow, Jameson; Schiavazzi, Daniele; Wilson, Nathan; Marsden, Alison; Shadden, Shawn

    2014-11-01

    SimVascular (www.simvascular.org) is currently the only comprehensive software package that provides a complete pipeline from medical image data segmentation to patient-specific blood flow simulation. This software and its derivatives have been used in hundreds of conference abstracts and peer-reviewed journal articles, and have served as the foundation of medical startups. SimVascular was initially released in August 2007, yet major challenges and deterrents for new adopters were the requirement of licensing three expensive commercial libraries utilized by the software, a complicated build process, and a lack of documentation, support, and organized maintenance. In the past year, the SimVascular team has made significant progress to integrate open source alternatives for the linear solver, solid modeling, and mesh generation commercial libraries required by the original public release. In addition, the build system, available distributions, and graphical user interface have been significantly enhanced. Finally, the software has been updated to enable users to directly run simulations using models and boundary condition values included in the Vascular Model Repository (vascularmodel.org). In this presentation we will briefly overview the capabilities of the new SimVascular 2.0 release. National Science Foundation.

  13. CONRAD—A software framework for cone-beam imaging in radiology

    SciTech Connect

    Maier, Andreas; Choi, Jang-Hwan; Riess, Christian; Keil, Andreas; Fahrig, Rebecca; Hofmann, Hannes G.; Berger, Martin; Fischer, Peter; Schwemmer, Chris; Wu, Haibo; Müller, Kerstin; Hornegger, Joachim

    2013-11-15

    Purpose: In the community of x-ray imaging, there is a multitude of tools and applications that are used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools that are required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects. Methods: CONRAD is a Java-based state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL. There is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and respective vector motion fields. Well known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source. Results: A total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74,000 nonblank lines of code out of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a table top system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art runtimes with 2.5 s for the 256 problem size and 12.4 s for the 512 problem size. Conclusions: As a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular this offers new opportunities for scientific collaboration and

  14. The Human Physiome: how standards, software and innovative service infrastructures are providing the building blocks to make it achievable.

    PubMed

    Nickerson, David; Atalag, Koray; de Bono, Bernard; Geiger, Jörg; Goble, Carole; Hollmann, Susanne; Lonien, Joachim; Müller, Wolfgang; Regierer, Babette; Stanford, Natalie J; Golebiewski, Martin; Hunter, Peter

    2016-04-01

    Reconstructing and understanding the Human Physiome virtually is a complex mathematical problem, and a highly demanding computational challenge. Mathematical models spanning from the molecular level through to whole populations of individuals must be integrated, then personalized. This requires interoperability with multiple disparate and geographically separated data sources, and myriad computational software tools. Extracting and producing knowledge from such sources, even when the databases and software are readily available, is a challenging task. Despite the difficulties, researchers must frequently perform these tasks so that available knowledge can be continually integrated into the common framework required to realize the Human Physiome. Software and infrastructures that support the communities that generate these, together with their underlying standards to format, describe and interlink the corresponding data and computer models, are pivotal to the Human Physiome being realized. They provide the foundations for integrating, exchanging and re-using data and models efficiently, and correctly, while also supporting the dissemination of growing knowledge in these forms. In this paper, we explore the standards, software tooling, repositories and infrastructures that support this work, and detail what makes them vital to realizing the Human Physiome. PMID:27051515

  15. The Human Physiome: how standards, software and innovative service infrastructures are providing the building blocks to make it achievable.

    PubMed

    Nickerson, David; Atalag, Koray; de Bono, Bernard; Geiger, Jörg; Goble, Carole; Hollmann, Susanne; Lonien, Joachim; Müller, Wolfgang; Regierer, Babette; Stanford, Natalie J; Golebiewski, Martin; Hunter, Peter

    2016-04-01

    Reconstructing and understanding the Human Physiome virtually is a complex mathematical problem, and a highly demanding computational challenge. Mathematical models spanning from the molecular level through to whole populations of individuals must be integrated, then personalized. This requires interoperability with multiple disparate and geographically separated data sources, and myriad computational software tools. Extracting and producing knowledge from such sources, even when the databases and software are readily available, is a challenging task. Despite the difficulties, researchers must frequently perform these tasks so that available knowledge can be continually integrated into the common framework required to realize the Human Physiome. Software and infrastructures that support the communities that generate these, together with their underlying standards to format, describe and interlink the corresponding data and computer models, are pivotal to the Human Physiome being realized. They provide the foundations for integrating, exchanging and re-using data and models efficiently, and correctly, while also supporting the dissemination of growing knowledge in these forms. In this paper, we explore the standards, software tooling, repositories and infrastructures that support this work, and detail what makes them vital to realizing the Human Physiome.

  16. Performance of commercial and open source remote sensing/image processing software for land cover/use purposes

    NASA Astrophysics Data System (ADS)

    Teodoro, Ana C.; Ferreira, Dário; Sillero, Neftali

    2012-10-01

    We aim to compare the capabilities of four remote sensing/image processing software packages: PCI Geomatica V8.2, ENVI 4.7, SPRING 5.1.8, and the ORFEO toolbox integrated in Monteverdi 1.11. We listed and assessed the performance of several classification algorithms. PCI Geomatica and ENVI are commercial/proprietary software and SPRING and ORFEO are open source software. We listed the main classification algorithms available in these four packages, and divided them by the different types/approaches of classification (e.g., pixel-based, object-oriented, and data mining algorithms). Using these algorithms, we classified two images covering the same area (Porto-Vila Nova de Gaia, Northern Portugal): one Landsat TM image from October 2011 and one IKONOS image from September 2005. We compared processing time and classification results using the confusion matrix (overall accuracy) and Kappa statistics. The algorithms tested presented different classification results according to the software used. In the Landsat image, the differences are greater than in the IKONOS image. This work could be very important for other researchers as it provides a qualitative and quantitative analysis of different image processing algorithms available in commercial and open source software.
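
    The accuracy assessment described above (confusion matrix, overall accuracy, Kappa) can be reproduced with scikit-learn once reference and classified labels have been sampled; the label arrays below are made-up toy data.

    ```python
    # Accuracy assessment from sampled reference and classified land-cover labels.
    import numpy as np
    from sklearn.metrics import confusion_matrix, cohen_kappa_score

    reference = np.array([0, 0, 1, 2, 2, 1, 0, 2])    # ground-truth class per sample
    classified = np.array([0, 1, 1, 2, 2, 1, 0, 0])   # classifier output per sample

    cm = confusion_matrix(reference, classified)
    overall_accuracy = np.trace(cm) / cm.sum()        # fraction of correctly labelled samples
    kappa = cohen_kappa_score(reference, classified)  # agreement corrected for chance
    print(cm, overall_accuracy, kappa)
    ```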

  17. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  18. 76 FR 43724 - In the Matter of Certain Digital Imaging Devices and Related Software; Notice of Commission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-21

    ... Cupertino, California (``Apple''). 75 FR 28058 (May 19, 2010). The complaint alleged ] violations of section... COMMISSION In the Matter of Certain Digital Imaging Devices and Related Software; Notice of Commission... related software by reason of infringement of various claims of United States Patent Nos. 6,031,964 and...

  19. Mississippi Company Using NASA Software Program to Provide Unique Imaging Service: DATASTAR Success Story

    NASA Technical Reports Server (NTRS)

    2001-01-01

    DATASTAR, Inc., of Picayune, Miss., has taken NASA's award-winning Earth Resources Laboratory Applications (ELAS) software program and evolved it to the point that the company is now providing a unique, spatial imagery service over the Internet. ELAS was developed in the early 80's to process satellite and airborne sensor imagery data of the Earth's surface into readable and useable information. While there are several software packages on the market that allow the manipulation of spatial data into useable products, this is usually a laborious task. The new program, called the DATASTAR Image Processing Exploitation, or DIPX, Delivery Service, is a subscription service available over the Internet that takes the work out of the equation and provides normalized geo-spatial data in the form of decision products.

  20. Advances in hardware, software, and automation for 193nm aerial image measurement systems

    NASA Astrophysics Data System (ADS)

    Zibold, Axel M.; Schmid, R.; Seyfarth, A.; Waechter, M.; Harnisch, W.; Doornmalen, H. v.

    2005-05-01

    A new, second generation AIMS fab 193 system has been developed which is capable of emulating lithographic imaging of any type of reticles such as binary and phase shift masks (PSM) including resolution enhancement technologies (RET) such as optical proximity correction (OPC) or scatter bars. The system emulates the imaging process by adjustment of the lithography equivalent illumination and imaging conditions of 193nm wafer steppers including circular, annular, dipole and quadrupole type illumination modes. The AIMS fab 193 allows a rapid prediction of wafer printability of critical mask features, including dense patterns and contacts, defects or repairs by acquiring through-focus image stacks by means of a CCD camera followed by quantitative image analysis. Moreover the technology can be readily applied to directly determine the process window of a given mask under stepper imaging conditions. Since data acquisition is performed electronically, AIMS in many applications replaces the need for costly and time consuming wafer prints using a wafer stepper/ scanner followed by CD SEM resist or wafer analysis. The AIMS fab 193 second generation system is designed for 193nm lithography mask printing predictability down to the 65nm node. In addition to hardware improvements a new modular AIMS software is introduced allowing for a fully automated operation mode. Multiple pre-defined points can be visited and through-focus AIMS measurements can be executed automatically in a recipe based mode. To increase the effectiveness of the automated operation mode, the throughput of the system to locate the area of interest, and to acquire the through-focus images is increased by almost a factor of two in comparison with the first generation AIMS systems. In addition a new software plug-in concept is realised for the tools. One new feature has been successfully introduced as "Global CD Map", enabling automated investigation of global mask quality based on the local determination of

  1. Quick and easy molecular weight determination with Macintosh computers and public domain image analysis software.

    PubMed

    Seebacher, T; Bade, E G

    1996-10-01

    The program "molecular weights" allows a fast and easy estimation of molecular weights (M(r)), isoelectric point (pI) values and band intensities directly from scanned, polyacrylamide gels, two-dimensional protein patterns and DNA gel images. The image coordinates of M(r) and pI reference standards enable the program to calculate M(r) and pI values in a real time manner for any cursor position. The program requires NIH-Image for Macintosh computers and includes automatic band detection coupled with a densitometric evaluation.

  2. Evaluation of cassette-based digital radiography detectors using standardized image quality metrics: AAPM TG-150 Draft Image Detector Tests.

    PubMed

    Li, Guang; Greene, Travis C; Nishino, Thomas K; Willis, Charles E

    2016-09-08

    The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG-150 tests can be

  3. Evaluation of cassette-based digital radiography detectors using standardized image quality metrics: AAPM TG-150 Draft Image Detector Tests.

    PubMed

    Li, Guang; Greene, Travis C; Nishino, Thomas K; Willis, Charles E

    2016-01-01

    The purpose of this study was to evaluate several of the standardized image quality metrics proposed by the American Association of Physicists in Medicine (AAPM) Task Group 150. The task group suggested region-of-interest (ROI)-based techniques to measure nonuniformity, minimum signal-to-noise ratio (SNR), number of anomalous pixels, and modulation transfer function (MTF). This study evaluated the effects of ROI size and layout on the image metrics by using four different ROI sets, assessed result uncertainty by repeating measurements, and compared results with two commercially available quality control tools, namely the Carestream DIRECTVIEW Total Quality Tool (TQT) and the GE Healthcare Quality Assurance Process (QAP). Seven Carestream DRX-1C (CsI) detectors on mobile DR systems and four GE FlashPad detectors in radiographic rooms were tested. Images were analyzed using MATLAB software that had been previously validated and reported. Our values for signal and SNR nonuniformity and MTF agree with values published by other investigators. Our results show that ROI size affects nonuniformity and minimum SNR measurements, but not detection of anomalous pixels. Exposure geometry affects all tested image metrics except for the MTF. TG-150 metrics in general agree with the TQT, but agree with the QAP only for local and global signal nonuniformity. The difference in SNR nonuniformity and MTF values between the TG-150 and QAP may be explained by differences in the calculation of noise and acquisition beam quality, respectively. TG-150's SNR nonuniformity metrics are also more sensitive to detector nonuniformity compared to the QAP. Our results suggest that fixed ROI size should be used for consistency because nonuniformity metrics depend on ROI size. Ideally, detector tests should be performed at the exact calibration position. If not feasible, a baseline should be established from the mean of several repeated measurements. Our study indicates that the TG-150 tests can be
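
    A hedged sketch of an ROI-based uniformity check in the spirit of the TG-150 metrics discussed above; the ROI size and grid layout are arbitrary choices here, not the task group's prescriptions.

    ```python
    # ROI-based signal nonuniformity and minimum SNR from a flat-field exposure.
    import numpy as np

    def roi_uniformity(flat_field, roi=64):
        """Tile a flat-field image into ROIs; report signal nonuniformity and minimum SNR."""
        h, w = flat_field.shape
        means, snrs = [], []
        for r in range(0, h - roi + 1, roi):
            for c in range(0, w - roi + 1, roi):
                block = flat_field[r:r + roi, c:c + roi].astype(float)
                means.append(block.mean())
                snrs.append(block.mean() / (block.std() + 1e-9))
        means = np.array(means)
        nonuniformity = (means.max() - means.min()) / means.mean()
        return nonuniformity, min(snrs)
    ```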

  4. Features of the Upgraded Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) Software

    NASA Technical Reports Server (NTRS)

    Mason, Michelle L.; Rufer, Shann J.

    2016-01-01

    The Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) software is used at the NASA Langley Research Center to analyze global aeroheating data on wind tunnel models tested in the Langley Aerothermodynamics Laboratory. One-dimensional, semi-infinite heating data derived from IHEAT are used in the design of thermal protection systems for hypersonic vehicles that are exposed to severe aeroheating loads, such as reentry vehicles during descent and landing procedures. This software program originally was written in the PV-WAVE(Registered Trademark) programming language to analyze phosphor thermography data from the two-color, relative-intensity system developed at Langley. To increase the efficiency, functionality, and reliability of IHEAT, the program was migrated to MATLAB(Registered Trademark) syntax and compiled as a stand-alone executable file labeled version 4.0. New features of IHEAT 4.0 include the options to perform diagnostic checks of the accuracy of the acquired data during a wind tunnel test, to extract data along a specified multi-segment line following a feature such as a leading edge or a streamline, and to batch process all of the temporal frame data from a wind tunnel run. Results from IHEAT 4.0 were compared on a pixel level to the output images from the legacy software to validate the program. The absolute differences between the heat transfer data output from the two programs were on the order of 10(exp -5) to 10(exp -7). IHEAT 4.0 replaces the PV-WAVE(Registered Trademark) version as the production software for aeroheating experiments conducted in the hypersonic facilities at NASA Langley.

  5. Simplified preparation of TO14 and Title III air toxic standards using a Windows software package and dynamic dilution schemes

    SciTech Connect

    Cardin, D.B.; Galoustian, E.A.

    1994-12-31

    The preparation of Air Toxic standards in the laboratory can be performed using several methods. These include injection of purge and trap standards, static dilution from pure compounds, and dynamic dilution from NIST traceable standards. A software package running under Windows has been developed that makes calculating dilution parameters for even complex mixtures fast and simple. Compound parameters such as name, molecular weight, boiling point, and density are saved in a database for later access. Gas and liquid mixtures can be easily defined and saved as an inventory item, with preparation screens that calculate appropriate transfer volumes of each analyte. These mixtures can be utilized by both the static and dynamic dilution analysis windows to calculate proper flow rates and injection volumes for obtaining requested concentrations. A particularly useful approach for making accurate polar VOC standards will be presented.
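
    The dynamic-dilution arithmetic the package automates reduces to a simple mass balance; the cylinder concentration and flows below are made-up example values (sccm and ppb assumed).

    ```python
    # Toy version of the dynamic-dilution mass balance described above.
    def standard_flow(c_cylinder_ppb, c_target_ppb, total_flow_sccm):
        """Flow of the certified standard needed so the blended stream hits the target level."""
        return total_flow_sccm * c_target_ppb / c_cylinder_ppb

    f_std = standard_flow(c_cylinder_ppb=1000.0, c_target_ppb=10.0, total_flow_sccm=2000.0)
    f_dilution = 2000.0 - f_std
    print(f_std, f_dilution)    # 20 sccm of standard, 1980 sccm of diluent air
    ```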

  6. A complete software application for automatic registration of x-ray mammography and magnetic resonance images

    SciTech Connect

    Solves-Llorens, J. A.; Rupérez, M. J. Monserrat, C.; Lloret, M.

    2014-08-15

    Purpose: This work presents a complete and automatic software application to aid radiologists in breast cancer diagnosis. The application is a fully automated method that performs a complete registration of magnetic resonance (MR) images and x-ray (XR) images in both directions (from MR to XR and from XR to MR) and for both x-ray mammograms, craniocaudal (CC), and mediolateral oblique (MLO). This new approach allows radiologists to mark points in the MR images and, without any manual intervention, it provides their corresponding points in both types of XR mammograms and vice versa. Methods: The application automatically segments magnetic resonance images and x-ray images using the C-Means method and the Otsu method, respectively. It compresses the magnetic resonance images in both directions, CC and MLO, using a biomechanical model of the breast that distinguishes the specific biomechanical behavior of each one of its three tissues (skin, fat, and glandular tissue) separately. It makes a projection of both compressions and registers them with the original XR images using affine transformations and nonrigid registration methods. Results: The application has been validated by two expert radiologists. This was carried out through a quantitative validation on 14 data sets in which the Euclidean distance between points marked by the radiologists and the corresponding points obtained by the application was measured. The results showed a mean error of 4.2 ± 1.9 mm for the MRI to CC registration, 4.8 ± 1.3 mm for the MRI to MLO registration, and 4.1 ± 1.3 mm for the CC and MLO to MRI registration. Conclusions: A complete software application that automatically registers XR and MR images of the breast has been implemented. The application permits radiologists to estimate the position of a lesion that is suspected of being a tumor in an imaging modality based on its position in another different modality with a clinically acceptable error. The results show that the
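
    As a minimal stand-in for the x-ray segmentation step mentioned above, Otsu's threshold can be applied with scikit-image; this is not the authors' full registration pipeline.

    ```python
    # Otsu segmentation of an x-ray mammogram, keeping the largest connected region.
    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label

    def breast_mask(xr_image):
        """Separate tissue from background with Otsu's threshold, keep the largest blob."""
        t = threshold_otsu(xr_image)
        mask = xr_image > t
        labels = label(mask)
        largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1   # ignore background label 0
        return labels == largest
    ```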

  7. Standard-based software-only video conferencing codec on Ultra SPARC

    NASA Astrophysics Data System (ADS)

    Ding, Wei; Mou, Alex Z.; Rice, Daniel S.

    1998-01-01

    Even though CIF-resolution video decoding in software has become popular, as evidenced by the popularity of MPEG-1 software decoders, video encoding at CIF resolution still needs a hardware solution, especially with motion estimation. In this paper, we report our work on a high-quality and high-performance software-only H.261 video codec, with real-time transport protocol packetization capability, for H.323 video conferencing over LAN. The codec is implemented entirely in software using the Ultra SPARC multimedia instruction extension, the visual instruction set or VIS. The encoder can perform in real time change detection, motion estimation, motion-compensated spatio-temporal pre-filtering for noise reduction, and adaptive quantization. Thus high-quality video can be obtained at a rate of 128 kbits per second or even at a lower rate. It is capable of performing simultaneous encoding and decoding of near-CIF resolution video at 15 frames per second. The design of the encoder structure, data layout, and various techniques and algorithms developed can be extended to an H.263 codec.

  8. A comparison of strain calculation using digital image correlation and finite element software

    NASA Astrophysics Data System (ADS)

    Iadicola, M.; Banerjee, D.

    2016-08-01

    Digital image correlation (DIC) data are being extensively used for many forming applications and for comparisons with finite element analysis (FEA) simulated results. The most challenging comparisons are often in the area of strain localizations just prior to material failure. While qualitative comparisons can be misleading, quantitative comparisons are difficult because of insufficient information about the type of strain output. In this work, strains computed from DIC displacements from a forming limit test are compared to those from three commercial FEA software. Quantitative differences in calculated strains are assessed to determine if the scale of variations seen between FEA and DIC calculated strains constitute real behavior or just calculation differences.
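
    One source of the differences discussed above is how strain is derived from the measured displacements; a simple finite-difference engineering strain, as sketched below, differs from the windowed or finite-strain definitions used by DIC and FEA codes.

    ```python
    # Engineering strain e_xx from a displacement field sampled on a regular grid.
    import numpy as np

    def exx_from_displacement(u_x, step_mm):
        """u_x: horizontal displacement (mm) on a regular grid with spacing step_mm."""
        return np.gradient(u_x, step_mm, axis=1)   # d(u_x)/dx by central differences
    ```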

  9. Gemini planet imager integration to the Gemini South telescope software environment

    NASA Astrophysics Data System (ADS)

    Rantakyrö, Fredrik T.; Cardwell, Andrew; Chilcote, Jeffrey; Dunn, Jennifer; Goodsell, Stephen; Hibon, Pascale; Macintosh, Bruce; Quiroz, Carlos; Perrin, Marshall D.; Sadakuni, Naru; Saddlemyer, Leslie; Savransky, Dmitry; Serio, Andrew; Winge, Claudia; Galvez, Ramon; Gausachs, Gaston; Hardie, Kayla; Hartung, Markus; Luhrs, Javier; Poyneer, Lisa; Thomas, Sandrine

    2014-08-01

    The Gemini Planet Imager is an extreme AO instrument with an integral field spectrograph (IFS) operating in Y, J, H, and K bands. Both the Gemini telescope and the GPI instrument are very complex systems. Our goal is that the combined telescope and instrument system may be run by one observer operating the instrument, and one operator controlling the telescope and the acquisition of light to the instrument. This requires a smooth integration between the two systems and easily operated control interfaces. We discuss the definition of the software and hardware interfaces, their implementation and testing, and the integration of the instrument with the telescope environment.

  10. Multithreaded real-time 3D image processing software architecture and implementation

    NASA Astrophysics Data System (ADS)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions, and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. The player provides user-interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located on the right image through the use of block matching. The difference in the positions between the corresponding regions in the left and right images is then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting is performed in response to user zoom and pan. The player also consists of a CPU display thread, which uses OpenGL rendering (quad buffers). This also gathers user input for digital zoom and pan and sends it to the processing thread.
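
    A simplified CPU sketch of the convergence logic described above (the player itself runs these steps in CUDA on rectified images); the block size, search range, and edge threshold are guesses for illustration.

    ```python
    # Disparity range from block matching at vertical-edge keypoints (rectified pair).
    import numpy as np

    def disparity_range(left, right, block=9, search=64, edge_thresh=30, step=8):
        """Estimate the scene's disparity range from sparse block matching."""
        h, w = left.shape
        half = block // 2
        grad = np.abs(np.diff(left.astype(float), axis=1))    # vertical edges = strong horizontal gradient
        disparities = []
        for y in range(half, h - half, step):
            for x in range(half + search, w - half, step):
                if grad[y, x] < edge_thresh:
                    continue
                patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
                costs = [np.abs(patch - right[y - half:y + half + 1,
                                              x - d - half:x - d + half + 1].astype(float)).sum()
                         for d in range(search)]              # sum-of-absolute-differences cost
                disparities.append(int(np.argmin(costs)))
        hist, edges = np.histogram(disparities, bins=search, range=(0, search))
        nonzero = np.nonzero(hist)[0]
        return edges[nonzero[0]], edges[nonzero[-1] + 1]      # extrema of the disparity histogram

    # The images are then shifted horizontally by a value inside this range so that
    # the chosen convergence plane falls at zero disparity on the display.
    ```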

  11. Providing a Standardized Approach for Cataloging, Discovering, and Utilizing Open Source Geospatial Software and Tools through NASA's Global Change Master Directory (GCMD)

    NASA Astrophysics Data System (ADS)

    Stevens, T. B.

    2006-12-01

    A standardized approach for discovery and access to geospatial software packages is vital for a user community that depends on interoperable open source software for their organization. Many of the projects that provide software are dispersed and decentralized, thus creating a challenge to users discovering these packages. NASA's Global Change Master Directory (GCMD) contains descriptions of open source geospatial software and provides users direct access to the software. Users can search by full-text or controlled service topic keywords. These keywords, which are proposed by GCMD staff and open to community feedback, allow users to navigate to unique software types and refine by specific Open Geospatial Consortium (OGC) standards. These standards include Web Map Services, Web Feature Services, and Web Coverage Services. This poster will demonstrate how the geospatial user community can utilize the GCMD's intuitive keyword interface to search and directly access geospatial services. The poster will also show users how to create service descriptions using freely available online tools.

  12. Review of free software tools for image analysis of fluorescence cell micrographs.

    PubMed

    Wiesmann, V; Franz, D; Held, C; Münzenmayer, C; Palmisano, R; Wittenberg, T

    2015-01-01

    An increasing number of free software tools have been made available for the evaluation of fluorescence cell micrographs. The main users are biologists and related life scientists with no or little knowledge of image processing. In this review, we give an overview of available tools and guidelines about which tools the users should use to segment fluorescence micrographs. We selected 15 free tools and divided them into stand-alone, Matlab-based, ImageJ-based, free demo versions of commercial tools and data sharing tools. The review consists of two parts: First, we developed a criteria catalogue and rated the tools regarding structural requirements, functionality (flexibility, segmentation and image processing filters) and usability (documentation, data management, usability and visualization). Second, we performed an image processing case study with four representative fluorescence micrograph segmentation tasks with figure-ground and cell separation. The tools display a wide range of functionality and usability. In the image processing case study, we were able to perform figure-ground separation in all micrographs using mainly thresholding. Cell separation was not possible with most of the tools, because cell separation methods are provided only by a subset of the tools and are difficult to parametrize and to use. Most important is that the usability matches the functionality of a tool. To be usable, specialized tools with less functionality need to fulfill less usability criteria, whereas multipurpose tools need a well-structured menu and intuitive graphical user interface.

  13. MIA - A free and open source software for gray scale medical image analysis

    PubMed Central

    2013-01-01

    Background Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers that have little or no background in software development because they take care of otherwise complex tasks. Specifically, the management of working memory is taken care of automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for the prototyping of new algorithms is rather time intensive and also not appropriate for a researcher with little or no knowledge in software development. Another alternative is using command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation by using shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task specific, and they don't provide a clear approach when one wants to shape a new command line tool from a prototype shell script. Results The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell

  14. User's guide to the TCSTKF software library: a graphics library for emulation of TEKTRONIX display images in. TKF disk files

    SciTech Connect

    Gray, W.H.; Burris, R.D.

    1980-11-01

    This report documents the user-level subroutines of the TCSTKF software library for the Oak Ridge National Laboratory (ORNL) Fusion Energy Division (FED) DECsystem-10. The TCSTKF graphics library was written and is maintained so that large production computer programs can access a small, efficient graphics library and produce device-independent graphics files. This library is presented as an alternative to the larger graphics software libraries, such as DISSPLA. The main external difference between this software and the TCSTEK software library is that the TCSTKF software will create .TKF-formatted intermediate plot data files, as well as producing display images on the screen of a Tektronix 4000 series storage tube terminal. These intermediate plot data files can be subsequently postprocessed into report-quality images on a variety of other graphics devices at ORNL.

  15. Image processing in biodosimetry: A proposal of a generic free software platform.

    PubMed

    Dumpelmann, Matthias; Cadena da Matta, Mariel; Pereira de Lemos Pinto, Marcela Maria; de Salazar E Fernandes, Thiago; Borges da Silva, Edvane; Amaral, Ademir

    2015-08-01

    The scoring of chromosome aberrations is the most reliable biological method for evaluating individual exposure to ionizing radiation. However, microscopic analysis of human metaphase chromosomes, generally employed to identify aberrations, mainly dicentrics (chromosomes with two centromeres), is a laborious task. This method is time consuming and its application in biological dosimetry would be almost impossible in the case of large-scale radiation incidents. In this project, a generic software platform was enhanced for automatic chromosome image processing from a framework originally developed for the Framework V project Simbio of the European Union for applications in the area of source localization from electroencephalographic signals. The platform's capability is demonstrated by a study comparing automatic segmentation strategies for chromosomes from microscopic images.

  16. User's Guide for the MapImage Reprojection Software Package, Version 1.01

    USGS Publications Warehouse

    Finn, Michael P.; Trent, Jason R.

    2004-01-01

    Scientists routinely accomplish small-scale geospatial modeling in the raster domain, using high-resolution datasets (such as 30-m data) for large parts of continents and low-resolution to high-resolution datasets for the entire globe. Recently, Usery and others (2003a) expanded on the previously limited empirical work with real geographic data by compiling and tabulating the accuracy of categorical areas in projected raster datasets of global extent. Geographers and applications programmers at the U.S. Geological Survey's (USGS) Mid-Continent Mapping Center (MCMC) undertook an effort to expand and evolve an internal USGS software package, MapImage, or mapimg, for raster map projection transformation (Usery and others, 2003a). Daniel R. Steinwand of Science Applications International Corporation, Earth Resources Observation Systems Data Center in Sioux Falls, S. Dak., originally developed mapimg for the USGS, basing it on the USGS's General Cartographic Transformation Package (GCTP). It operated as a command line program on the Unix operating system. Through efforts at MCMC, and in coordination with Mr. Steinwand, this program has been transformed from an application based on a command line into a software package based on a graphic user interface for Windows, Linux, and Unix machines. Usery and others (2003b) pointed out that many commercial software packages do not use exact projection equations and that even when exact projection equations are used, the software often results in error and sometimes does not complete the transformation for specific projections, at specific resampling resolutions, and for specific singularities. Direct implementation of point-to-point transformation with appropriate functions yields the variety of projections available in these software packages, but implementation with data other than points requires specific adaptation of the equations or prior preparation of the data to allow the transformation to succeed. Additional
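
    Point-to-point projection transformation of the kind mapimg builds on can be illustrated with pyproj/PROJ rather than GCTP; the coordinate values are arbitrary.

    ```python
    # Forward projection of a geographic point into an equal-area raster projection.
    from pyproj import Transformer

    # WGS84 geographic coordinates -> CONUS Albers Equal Area (EPSG:5070)
    to_albers = Transformer.from_crs("EPSG:4326", "EPSG:5070", always_xy=True)
    x, y = to_albers.transform(-98.5, 39.8)     # (lon, lat) in degrees -> metres
    print(round(x), round(y))
    ```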

  17. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2003-01-01

    A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. describe an approach for producing hierarchical segmentations (called HSEG) and give a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included comparing RHSEG with classic
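
    A toy flavour of hierarchical stepwise optimization: repeatedly merge the pair of adjacent regions with the smallest mean-value difference and record each merge. This is a didactic sketch, not the HSEG/RHSEG implementation, and it ignores the constrained spectral clustering step.

    ```python
    # Greedy hierarchical merging of an initial segmentation of a single-band image.
    import numpy as np

    def greedy_merge(labels, image, n_merges):
        """labels: initial integer segmentation; image: single-band data; returns merge log."""
        means = {r: image[labels == r].mean() for r in np.unique(labels)}
        pairs = set()                                     # adjacency from 4-neighbours
        for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
            if a != b:
                pairs.add((min(a, b), max(a, b)))
        for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
            if a != b:
                pairs.add((min(a, b), max(a, b)))
        hierarchy = []
        for _ in range(n_merges):
            a, b = min(pairs, key=lambda p: abs(means[p[0]] - means[p[1]]))
            labels[labels == b] = a                       # merge region b into region a
            means[a] = image[labels == a].mean()
            del means[b]
            pairs = {(min(x if x != b else a, y if y != b else a),
                      max(x if x != b else a, y if y != b else a))
                     for x, y in pairs if {x, y} != {a, b}}
            hierarchy.append((a, b))
        return labels, hierarchy
    ```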

  18. Spiked proteomic standard dataset for testing label-free quantitative software and statistical methods.

    PubMed

    Ramus, Claire; Hovasse, Agnès; Marcellin, Marlène; Hesse, Anne-Marie; Mouton-Barbosa, Emmanuelle; Bouyssié, David; Vaca, Sebastian; Carapito, Christine; Chaoui, Karima; Bruley, Christophe; Garin, Jérôme; Cianférani, Sarah; Ferro, Myriam; Dorssaeler, Alain Van; Burlet-Schiltz, Odile; Schaeffer, Christine; Couté, Yohann; Gonzalez de Peredo, Anne

    2016-03-01

    This data article describes a controlled, spiked proteomic dataset for which the "ground truth" of variant proteins is known. It is based on the LC-MS analysis of samples composed of a fixed background of yeast lysate and different spiked amounts of the UPS1 mixture of 48 recombinant proteins. It can be used to objectively evaluate bioinformatic pipelines for label-free quantitative analysis, and their ability to detect variant proteins with good sensitivity and low false discovery rate in large-scale proteomic studies. More specifically, it can be useful for tuning software tools parameters, but also testing new algorithms for label-free quantitative analysis, or for evaluation of downstream statistical methods. The raw MS files can be downloaded from ProteomeXchange with identifier PXD001819. Starting from some raw files of this dataset, we also provide here some processed data obtained through various bioinformatics tools (including MaxQuant, Skyline, MFPaQ, IRMa-hEIDI and Scaffold) in different workflows, to exemplify the use of such data in the context of software benchmarking, as discussed in details in the accompanying manuscript [1]. The experimental design used here for data processing takes advantage of the different spike levels introduced in the samples composing the dataset, and processed data are merged in a single file to facilitate the evaluation and illustration of software tools results for the detection of variant proteins with different absolute expression levels and fold change values.

  19. Image pixel guided tours: a software platform for non-destructive x-ray imaging

    NASA Astrophysics Data System (ADS)

    Lam, K. P.; Emery, R.

    2009-02-01

    Multivariate analysis seeks to describe the relationship between an arbitrary number of variables. To explore high-dimensional data sets, projections are often used for data visualisation to aid discovery of structure or patterns that lead to the formation of statistical hypotheses. The basic concept necessitates a systematic search for lower-dimensional representations of the data that might show interesting structure(s). Motivated by the recent research on the Image Grand Tour (IGT), which can be adapted to view guided projections by using objective indexes that are capable of revealing latent structures of the data, this paper presents a signal processing perspective on constructing such indexes under the unifying exploratory frameworks of Independent Component Analysis (ICA) and Projection Pursuit (PP). Our investigation begins with an overview of dimension reduction techniques by means of orthogonal transforms, including the classical procedure of Principal Component Analysis (PCA), and extends to an application of the more powerful techniques of ICA in the context of our recent work on non-destructive testing technology using element-specific x-ray imaging.
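
    A classical PCA projection, shown below via the SVD, is the usual baseline against which guided-tour indexes such as projection pursuit or ICA are compared; X stands for any samples-by-features matrix, e.g. pixels by spectral channels.

    ```python
    # PCA projection onto the first k principal components using the SVD.
    import numpy as np

    def pca_project(X, k=2):
        """Project (n_samples, n_features) data onto its first k principal components."""
        Xc = X - X.mean(axis=0)                          # centre each feature
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:k].T                             # scores in the k-dimensional subspace
    ```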

  20. Imaging C. elegans Embryos using an Epifluorescent Microscope and Open Source Software

    PubMed Central

    Verbrugghe, Koen J. C.; Chan, Raymond C.

    2011-01-01

    Cellular processes, such as chromosome assembly, segregation and cytokinesis, are inherently dynamic. Time-lapse imaging of living cells, using fluorescent-labeled reporter proteins or differential interference contrast (DIC) microscopy, allows for the examination of the temporal progression of these dynamic events, which is otherwise inferred from analysis of fixed samples1,2. Moreover, the study of the developmental regulation of cellular processes necessitates conducting time-lapse experiments on an intact organism during development. The Caenorhabditis elegans embryo is light-transparent and has a rapid, invariant developmental program with a known cell lineage3, thus providing an ideal experimental model for studying questions in cell biology4,5 and development6-9. C. elegans is amenable to genetic manipulation by forward genetics (based on random mutagenesis10,11) and reverse genetics to target specific genes (based on RNAi-mediated interference and targeted mutagenesis12-15). In addition, transgenic animals can be readily created to express fluorescently tagged proteins or reporters16,17. These traits combine to make it easy to identify the genetic pathways regulating fundamental cellular and developmental processes in vivo18-21. In this protocol we present methods for live imaging of C. elegans embryos using DIC optics or GFP fluorescence on a compound epifluorescent microscope. We demonstrate the ease with which readily available microscopes, typically used for fixed sample imaging, can also be applied for time-lapse analysis using open-source software to automate the imaging process. PMID:21490567

  1. Comprehensive, powerful, efficient, intuitive: a new software framework for clinical imaging applications

    NASA Astrophysics Data System (ADS)

    Augustine, Kurt E.; Holmes, David R., III; Hanson, Dennis P.; Robb, Richard A.

    2006-03-01

    One of the greatest challenges for a software engineer is to create a complex application that is comprehensive enough to be useful to a diverse set of users, yet focused enough for individual tasks to be carried out efficiently with minimal training. This "powerful yet simple" paradox is particularly prevalent in advanced medical imaging applications. Recent research in the Biomedical Imaging Resource (BIR) at Mayo Clinic has been directed toward development of an imaging application framework that provides powerful image visualization/analysis tools in an intuitive, easy-to-use interface. It is based on two concepts very familiar to physicians - Cases and Workflows. Each case is associated with a unique patient and a specific set of routine clinical tasks, or a workflow. Each workflow is comprised of an ordered set of general-purpose modules which can be re-used for each unique workflow. Clinicians help describe and design the workflows, and then are provided with an intuitive interface to both patient data and analysis tools. Since most of the individual steps are common to many different workflows, the use of general-purpose modules reduces development time and results in applications that are consistent, stable, and robust. While the development of individual modules may reflect years of research by imaging scientists, new customized workflows based on the new modules can be developed extremely fast. If a powerful, comprehensive application is difficult to learn and complicated to use, it will be unacceptable to most clinicians. Clinical image analysis tools must be intuitive and effective or they simply will not be used.

  2. New AIRS: The medical imaging software for segmentation and registration of elastic organs in SPECT/CT

    NASA Astrophysics Data System (ADS)

    Widita, R.; Kurniadi, R.; Darma, Y.; Perkasa, Y. S.; Trianti, N.

    2012-06-01

    We have successfully improved our software, Automated Image Registration and Segmentation (AIRS), to fuse CT and SPECT images of elastic organs. Segmentation and registration of elastic organs present many challenges. Many artifacts can arise in SPECT/CT scans, and different organs and tissues have very similar gray levels, which limits the utility of thresholding. We have developed new software to solve the different registration and segmentation problems that arise in tomographic data sets. It will be demonstrated that the information obtained by SPECT/CT is more accurate in evaluating patients/objects than that obtained from either SPECT or CT alone. We used multi-modality registration, which is suitable for images produced by different modalities and having unclear boundaries between tissues. The segmentation component used in this software is a region growing algorithm, which has proven to be an effective approach for image segmentation. Our method is designed to perform with clinically acceptable speed, using accelerated techniques (multiresolution).

  3. Software-based high-level synthesis design of FPGA beamformers for synthetic aperture imaging.

    PubMed

    Amaro, Joao; Yiu, Billy Y S; Falcao, Gabriel; Gomes, Marco A C; Yu, Alfred C H

    2015-05-01

    Field-programmable gate arrays (FPGAs) can potentially be configured as beamforming platforms for ultrasound imaging, but a long design time and skilled expertise in hardware programming are typically required. In this article, we present a novel approach to the efficient design of FPGA beamformers for synthetic aperture (SA) imaging via the use of software-based high-level synthesis techniques. Software kernels (coded in OpenCL) were first developed to handle SA beamforming operations stage-wise, and their corresponding FPGA logic circuitry was emulated through a high-level synthesis framework. After design space analysis, the fine-tuned OpenCL kernels were compiled into register transfer level descriptions to configure an FPGA as a beamformer module. The processing performance of this beamformer was assessed through a series of offline emulation experiments that sought to derive beamformed images from SA channel-domain raw data (40-MHz sampling rate, 12-bit resolution). With 128 channels, our FPGA-based SA beamformer can achieve 41 frames per second (fps) processing throughput (3.44 × 10^8 pixels per second for a frame size of 256 × 256 pixels) at 31.5 W power consumption (1.30 fps/W power efficiency). It utilized 86.9% of the FPGA fabric and operated at a 196.5 MHz clock frequency (after optimization). Based on these findings, we anticipate that FPGA and high-level synthesis can together foster rapid prototyping of real-time ultrasound processor modules at low power consumption budgets.
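    The OpenCL kernels themselves are not reproduced in the abstract. As a rough, language-agnostic illustration of the underlying operation, the sketch below performs basic synthetic-aperture delay-and-sum beamforming for one transmit event in Python; the geometry, the single-transmit simplification and the nearest-sample interpolation are illustrative assumptions, not the authors' kernel design.

```python
import numpy as np

def das_beamform(rf, elem_x, fs, c, pixels_x, pixels_z):
    """Delay-and-sum over one synthetic-aperture transmit event.

    rf       : (n_channels, n_samples) received RF data
    elem_x   : (n_channels,) lateral element positions [m]
    fs, c    : sampling rate [Hz] and speed of sound [m/s]
    pixels_x, pixels_z : 1-D arrays of image pixel coordinates [m]
    """
    n_ch, n_samp = rf.shape
    image = np.zeros((pixels_z.size, pixels_x.size))
    for iz, z in enumerate(pixels_z):
        for ix, x in enumerate(pixels_x):
            # Two-way path: transmit from the array centre, receive on each element.
            tx = np.hypot(x - elem_x.mean(), z)
            rx = np.hypot(x - elem_x, z)
            idx = np.round((tx + rx) / c * fs).astype(int)
            valid = idx < n_samp
            image[iz, ix] = rf[np.arange(n_ch)[valid], idx[valid]].sum()
    return image
```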

  5. Standard Reticle Slide To Objectively Evaluate Spatial Resolution and Instrument Performance in Imaging Mass Spectrometry.

    PubMed

    Zubair, Faizan; Prentice, Boone M; Norris, Jeremy L; Laibinis, Paul E; Caprioli, Richard M

    2016-07-19

    Spatial resolution is a key parameter in imaging mass spectrometry (IMS). Aside from being a primary determinant in overall image quality, spatial resolution has important consequences on the acquisition time of the IMS experiment and the resulting file size. Hardware and software modifications during instrumentation development can dramatically affect the spatial resolution achievable using a given imaging mass spectrometer. As such, an accurate and objective method to determine the working spatial resolution is needed to guide instrument development and ensure quality IMS results. We have used lithographic and self-assembly techniques to fabricate a pattern of crystal violet as a standard reticle slide for assessing spatial resolution in matrix-assisted laser desorption/ionization (MALDI) IMS experiments. The reticle is used to evaluate spatial resolution under user-defined instrumental conditions. Edge-spread analysis measures the beam diameter for a Gaussian profile, and line scans measure an "effective" spatial resolution that is a convolution of beam optics and sampling frequency. The patterned crystal violet reticle was also used to diagnose issues with IMS instrumentation such as intermittent losses of pixel data. PMID:27299987
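    To make the edge-spread analysis concrete, the sketch below fits an error-function model to a line scan taken across an edge of a patterned reticle and reports a Gaussian beam diameter; the model, the SciPy fitting route and the 4-sigma diameter convention are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def esf_model(x, amplitude, offset, edge_pos, sigma):
    """Edge-spread function of a Gaussian beam scanned across a sharp edge."""
    return offset + 0.5 * amplitude * (1.0 + erf((x - edge_pos) / (sigma * np.sqrt(2.0))))

def beam_diameter_from_edge(x_um, intensity):
    """Fit the ESF to a measured line scan and return a 4-sigma beam diameter [um]."""
    x_um = np.asarray(x_um, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    p0 = [np.ptp(intensity), intensity.min(),
          x_um[np.argmax(np.gradient(intensity))], 5.0]   # crude initial guesses
    params, _ = curve_fit(esf_model, x_um, intensity, p0=p0)
    sigma = abs(params[3])
    return 4.0 * sigma
```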

  6. The wavelet/scalar quantization compression standard for digital fingerprint images

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
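    The full WSQ specification (its analysis filters, quantizer tables and Huffman coding) is considerably more involved; the snippet below, using the PyWavelets package, only illustrates the basic idea of scalar-quantizing a wavelet decomposition. The filter choice, decomposition depth and step size are placeholders, not the FBI standard's parameters.

```python
import numpy as np
import pywt

def wavelet_scalar_quantize(image, wavelet="bior4.4", level=5, step=8.0):
    """Illustrative wavelet/scalar-quantization round trip (no entropy coding)."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    # Uniform scalar quantization of the approximation and every detail subband.
    quantized = [np.round(coeffs[0] / step)]
    quantized += [tuple(np.round(band / step) for band in detail)
                  for detail in coeffs[1:]]
    # Dequantize and reconstruct to inspect the quantization loss.
    dequantized = [quantized[0] * step]
    dequantized += [tuple(band * step for band in detail)
                    for detail in quantized[1:]]
    return pywt.waverec2(dequantized, wavelet)
```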

  7. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

    Spacecraft telemetry rates have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms and re-configurable computing hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly by the availability of low cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable beginning with the methodology described in [1], [2], and implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the

  8. Measuring the area of tear film break-up by image analysis software

    NASA Astrophysics Data System (ADS)

    Pena-Verdeal, Hugo; García-Resúa, Carlos; Ramos, Lucía.; Mosquera, Antonio; Yebra-Pimentel, Eva; Giráldez, María. Jesús

    2013-11-01

    Tear film breakup time (BUT) testing examines only the first break in the tear film; subsequent tear film events are not monitored. We present a method of measuring the area of breakup after the appearance of the first breakup by using open source software. Furthermore, the speed of the rupture was determined. 84 subjects participated in the study. A 2 μl volume of 2% sodium fluorescein was instilled using a micropipette. The subject was seated behind a slit-lamp using a cobalt blue filter together with a Wratten 12 yellow filter. Then, the tear film was recorded by a camera attached to the slit lamp. Four frames of each video were extracted: the first rupture (BUT_0), breakup after 1 second (BUT_1), rupture after 2 seconds (BUT_2) and breakup before the last blink (BUT_F). Open source measurement software based on Java (NIH ImageJ) was used to measure the number of pixels in areas of breakup. These areas were divided by the area of exposed cornea to obtain the percentage of ruptures. Instantaneous breakup speed was calculated for second 1 as the difference BUT_1 - BUT_0, whereas instantaneous speed for second 2 was BUT_2 - BUT_1. The mean areas of breakup obtained were: BUT_0 = 0.26%, BUT_1 = 0.48%, BUT_2 = 0.79% and BUT_F = 1.61%. Breakup speed was 0.22 area/sec for second 1 and 0.31 area/sec for second 2, showing a statistical difference between them (p = 0.007). Post-BUT analysis may be easily monitored with the aid of this software.
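    A minimal sketch of how such breakup-area percentages can be computed from a thresholded fluorescein frame, assuming a binary mask of the exposed cornea and a simple intensity threshold (the study itself used NIH ImageJ rather than this code):

```python
import numpy as np

def breakup_percentage(frame, cornea_mask, threshold):
    """Percentage of the exposed cornea occupied by tear-film breakup.

    frame       : 2-D grayscale fluorescein image
    cornea_mask : boolean array marking the exposed cornea
    threshold   : intensity below which a pixel counts as breakup (dark area)
    """
    breakup = (frame < threshold) & cornea_mask
    return 100.0 * breakup.sum() / cornea_mask.sum()

# Instantaneous breakup speed (area/second), e.g. for second 1:
# speed_1 = breakup_percentage(frame_1, mask, t) - breakup_percentage(frame_0, mask, t)
```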

  9. Digital mapping of side-scan sonar data with the Woods Hole Image Processing System software

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing and digitally mosaicking high- and low-resolution sidescan sonar data. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for pre-processing sidescan sonar data. To extend the capabilities of the UNIX-based programs, digital mapping techniques have been developed. This report describes the initial development of an automated digital mapping procedure. Included is a description of the programs and steps required to complete the digital mosaicking on a UNIX-based computer system, and a comparison of techniques that the user may wish to select.

  10. A software platform for the comparative analysis of electroanatomic and imaging data including conduction velocity mapping.

    PubMed

    Cantwell, Chris D; Roney, Caroline H; Ali, Rheeda L; Qureshi, Norman A; Lim, Phang Boon; Peters, Nicholas S

    2014-01-01

    Electroanatomic mapping systems collect increasingly large quantities of spatially-distributed electrical data which may be further scrutinized post-operatively to expose mechanistic properties which sustain and perpetuate atrial fibrillation. We describe a modular software platform, developed to post-process and rapidly analyse data exported from electroanatomic mapping systems using a range of existing and novel algorithms. Imaging data highlighting regions of scar can also be overlaid for comparison. In particular, we describe the conduction velocity (CV) mapping algorithm used to highlight wavefront behaviour. CV was found to be particularly sensitive to the spatial distribution of the triangulation points and corresponding activation times. A set of geometric conditions was devised for selecting suitable triangulations of the electrogram set for generating CV maps.
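    The CV algorithm itself is not reproduced in the abstract. The sketch below shows one common triangulation-based approach, fitting a local activation-time plane over a triangle of electrode sites and inverting the gradient magnitude; it illustrates the general technique rather than the authors' implementation.

```python
import numpy as np

def triangle_cv(xy_mm, activation_ms):
    """Conduction velocity (mm/ms) from three electrode positions and their
    local activation times, via a planar fit T(x, y) = a*x + b*y + c."""
    A = np.column_stack([xy_mm[:, 0], xy_mm[:, 1], np.ones(3)])
    a, b, _ = np.linalg.solve(A, activation_ms)
    grad = np.hypot(a, b)                    # |grad T| in ms/mm
    return 1.0 / grad if grad > 0 else np.inf

# Example: a triangle activated left to right across 10 mm in 12 ms.
xy = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
t = np.array([0.0, 12.0, 6.0])
print(triangle_cv(xy, t))                    # ~0.83 mm/ms
```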

  11. SU-E-J-264: Comparison of Two Commercially Available Software Platforms for Deformable Image Registration

    SciTech Connect

    Tuohy, R; Stathakis, S; Mavroidis, P; Bosse, C; Papanikolaou, N

    2014-06-01

    Purpose: To evaluate and compare the deformable image registration algorithms available in Velocity (Velocity Medical Solutions, Atlanta, GA) and RayStation (RaySearch Americas, Inc., Garden City, NY). Methods: Ten consecutive patients' cone beam CTs (CBCTs) for each fraction were collected. The CBCTs, along with the simulation CT, were exported to the Velocity and RayStation software. Each CBCT was registered using deformable image registration to the simulation CT, and the resulting deformation vector matrix was generated. Each registration was visually inspected by a physicist and the prescribing physician. The volumes of the critical organs were calculated for each deformed CT and used for comparison. Results: The resulting deformable registrations revealed differences between the two algorithms. These differences were realized when the organs at risk were contoured on each deformed CBCT. Differences on the order of 10%±30% in volume were observed for the bladder, 17%±21% for the rectum and 16%±10% for the sigmoid. The prostate and PTV volume differences were on the order of 3%±5%. The volumetric differences observed had a corresponding impact on the DVHs of all organs at risk. Differences of 8-10% in the mean dose were observed for all organs above. Conclusion: Deformable registration is a powerful tool that aids in the definition of critical structures and is often used for the evaluation of the daily dose delivered to the patient. It should be noted that extended QA should be performed before clinical implementation of the software, and users should be aware of the advantages and limitations of the methods.

  12. Comparison between three methods to value lower tear meniscus measured by image software

    NASA Astrophysics Data System (ADS)

    García-Resúa, Carlos; Pena-Verdeal, Hugo; Lira, Madalena; Oliveira, M. Elisabete Real; Giráldez, María. Jesús; Yebra-Pimentel, Eva

    2013-11-01

    To measure different parameters of lower tear meniscus height (TMH) by using photography with open source measurement software. TMH was assessed from the lower eyelid to the top of the meniscus (absolute TMH) and to the brightest meniscus reflex (reflex TMH). 121 young healthy subjects were included in the study. The lower tear meniscus was videotaped by a digital camera attached to a slit lamp. Three videos were recorded at the central meniscus portion under three different methods: slit lamp without fluorescein instillation, slit lamp with fluorescein instillation, and TearscopeTM without fluorescein instillation. Then, a masked observer obtained an image from each video and measured TMH by using open source measurement software based on Java (NIH ImageJ). Absolute central (TMH-CA), absolute with fluorescein (TMH-F) and absolute using the Tearscope (TMH-Tc) were compared with each other, as were reflex central (TMH-CR) and reflex Tearscope (TMH-TcR). Mean +/- S.D. values of TMH-CA, TMH-CR, TMH-F, TMH-Tc and TMH-TcR of 0.209 +/- 0.049, 0.139 +/- 0.031, 0.222 +/- 0.058, 0.175 +/- 0.045 and 0.109 +/- 0.029 mm, respectively, were found. Paired t-tests were performed for the relationships between TMH-CA - TMH-CR, TMH-CA - TMH-F, TMH-CA - TMH-Tc, TMH-F - TMH-Tc, TMH-Tc - TMH-TcR and TMH-CR - TMH-TcR. In all cases, a significant difference was found between the paired variables (all p < 0.008). This study demonstrated a useful tool to objectively measure TMH by photography. Eye care professionals should maintain the same TMH parameter in follow-up visits, given the differences between the parameters.

  13. Fundus image fusion in EYEPLAN software: An evaluation of a novel technique for ocular melanoma radiation treatment planning

    SciTech Connect

    Daftari, Inder K.; Mishra, Kavita K.; O'Brien, Joan M.; and others

    2010-10-15

    Purpose: The purpose of this study is to evaluate a novel approach for treatment planning using digital fundus image fusion in EYEPLAN for proton beam radiation therapy (PBRT) planning for ocular melanoma. The authors used a prototype version of EYEPLAN software, which allows for digital registration of high-resolution fundus photographs. The authors examined the improvement in tumor localization by replanning with the addition of fundus photo superimposition in patients with macular area tumors. Methods: The new version of EYEPLAN (v3.05) software allows for the registration of fundus photographs as a background image. This is then used in conjunction with clinical examination, tantalum marker clips, surgeon's mapping, and ultrasound to draw the tumor contour accurately. In order to determine whether the fundus image superimposition helps in tumor delineation and treatment planning, the authors identified 79 patients with choroidal melanoma in the macular location that were treated with PBRT. All patients were treated to a dose of 56 GyE in four fractions. The authors reviewed and replanned all 79 macular melanoma cases with superimposition of pretreatment and post-treatment fundus imaging in the new EYEPLAN software. For patients with no local failure, the authors analyzed whether fundus photograph fusion accurately depicted and confirmed tumor volumes as outlined in the original treatment plan. For patients with local failure, the authors determined whether the addition of the fundus photograph might have benefited in terms of more accurate tumor volume delineation. Results: The mean follow-up of patients was 33.6±23 months. Tumor growth was seen in six eyes of the 79 macular lesions. All six patients were marginal failures or tumor misses in the region of dose fall-off, including one patient with both in-field and marginal recurrence. Among the six recurrences, three were managed by enucleation and one underwent retreatment with proton therapy. Three

  14. Standardizing the next generation of bioinformatics software development with BioHDF (HDF5).

    PubMed

    Mason, Christopher E; Zumbo, Paul; Sanders, Stephan; Folk, Mike; Robinson, Dana; Aydt, Ruth; Gollery, Martin; Welsh, Mark; Olson, N Eric; Smith, Todd M

    2010-01-01

    Next Generation Sequencing technologies are limited by the lack of standard bioinformatics infrastructures that can reduce data storage, increase data processing performance, and integrate diverse information. HDF technologies address these requirements and have a long history of use in data-intensive science communities. They include general data file formats, libraries, and tools for working with the data. Compared to emerging standards, such as the SAM/BAM formats, HDF5-based systems demonstrate significantly better scalability, can support multiple indexes, store multiple data types, and are self-describing. For these reasons, HDF5 and its BioHDF extension are well suited for implementing data models to support the next generation of bioinformatics applications. PMID:20865556
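    As an illustration of why HDF5 suits this kind of data (this is not the actual BioHDF schema; the group layout and field names below are assumptions), the sketch stores chunked, compressed alignment fields together with self-describing metadata using h5py:

```python
import h5py
import numpy as np

# Illustrative layout only; BioHDF defines its own group/dataset schema.
with h5py.File("reads.h5", "w") as f:
    aln = f.create_group("alignments")
    aln.create_dataset("position",
                       data=np.random.randint(0, 10**6, size=100_000),
                       chunks=True, compression="gzip")
    aln.create_dataset("mapq",
                       data=np.random.randint(0, 60, size=100_000, dtype="u1"),
                       chunks=True, compression="gzip")
    aln.attrs["reference"] = "chr1"      # metadata travels with the data

with h5py.File("reads.h5", "r") as f:
    high_quality = f["alignments/mapq"][:] >= 30   # indexed, partial reads scale well
```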

  15. A flexible software architecture for scalable real-time image and video processing applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route the messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository of reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
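    A minimal sketch of the topic-based publish/subscribe pattern described for the messaging layer; the class and topic names are illustrative, not the authors' API:

```python
from collections import defaultdict
from typing import Any, Callable, Dict, List

class TopicBus:
    """Tiny topic-filtered publish/subscribe message bus."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: Any) -> None:
        # Route the message only to handlers registered for this topic.
        for handler in self._subscribers.get(topic, []):
            handler(message)

bus = TopicBus()
bus.subscribe("frames/acquired", lambda frame: print("processing", frame))
bus.publish("frames/acquired", {"id": 42})
```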

  16. Free digital image analysis software helps to resolve equivocal scores in HER2 immunohistochemistry.

    PubMed

    Helin, Henrik O; Tuominen, Vilppu J; Ylinen, Onni; Helin, Heikki J; Isola, Jorma

    2016-02-01

    Evaluation of human epidermal growth factor receptor 2 (HER2) immunohistochemistry (IHC) is subject to interobserver variation and lack of reproducibility. Digital image analysis (DIA) has been shown to improve the consistency and accuracy of the evaluation, and its use is encouraged in current testing guidelines. We studied whether digital image analysis using a free software application (ImmunoMembrane) can assist in interpreting HER2 IHC in equivocal 2+ cases. We also compared digital photomicrographs with whole-slide images (WSI) as material for ImmunoMembrane DIA. We stained 750 surgical resection specimens of invasive breast cancers immunohistochemically for HER2 and analysed the staining with ImmunoMembrane. The ImmunoMembrane DIA scores were compared with the originally responsible pathologists' visual scores, a researcher's visual scores and in situ hybridisation (ISH) results. The originally responsible pathologists reported 9.1% positive 3+ IHC scores, the researcher 8.4% and ImmunoMembrane 9.5%. Equivocal 2+ scores were 34% for the pathologists, 43.7% for the researcher and 10.1% for ImmunoMembrane. Negative 0/1+ scores were 57.6% for the pathologists, 46.8% for the researcher and 80.8% for ImmunoMembrane. There were six false positive cases, which were classified as 3+ by ImmunoMembrane and negative by ISH. Six cases were false negative, defined as 0/1+ by IHC and positive by ISH. ImmunoMembrane DIA using digital photomicrographs and WSI showed almost perfect agreement. In conclusion, digital image analysis by ImmunoMembrane can help to resolve a majority of equivocal 2+ cases in HER2 IHC, which reduces the need for ISH testing.

  17. SU-E-I-13: Evaluation of Metal Artifact Reduction (MAR) Software On Computed Tomography (CT) Images

    SciTech Connect

    Huang, V; Kohli, K

    2015-06-15

    Purpose: A new commercially available metal artifact reduction (MAR) software package for computed tomography (CT) imaging was evaluated with phantoms in the presence of metals. The goal was to assess the ability of the software to restore the CT number in the vicinity of the metals without impacting the image quality. Methods: A Catphan 504 was scanned with a GE Optima RT 580 CT scanner (GE Healthcare, Milwaukee, WI) and the images were reconstructed with and without the MAR software. Both datasets were analyzed with Image Owl QA software (Image Owl Inc, Greenwich, NY). CT number sensitometry, MTF, low contrast, uniformity, noise and spatial accuracy were compared for scans with and without the MAR software. In addition, an in-house made phantom was scanned with and without a stainless steel insert at three different locations. The accuracy of the CT number and the metal insert dimension were investigated as well. Results: Comparisons between scans with and without the MAR algorithm on the Catphan phantom demonstrate similar results for image quality; however, noise was slightly higher with the MAR algorithm. Evaluation of the CT number at various locations of the in-house made phantom was also performed. The baseline HU, obtained from the scan without a metal insert, was compared to scans with the stainless steel insert at the three different locations. The HU difference between the baseline scan and the metal scan was reduced when the MAR algorithm was applied. In addition, the physical diameter of the stainless steel rod was over-estimated by the MAR algorithm by 0.9 mm. Conclusion: This work indicates that, in the presence of metal in CT scans, the MAR algorithm is capable of providing a more accurate CT number without compromising the overall image quality. Future work will include the dosimetric impact of the MAR algorithm.

  18. 3.0 Tesla magnetic resonance imaging: A new standard in liver imaging?

    PubMed

    Girometti, Rossano

    2015-07-28

    An ever-increasing number of 3.0 Tesla (T) magnets are installed worldwide. Moving from the standard of 1.5 T to a higher field strength implies a number of potential advantages and drawbacks, requiring careful optimization of imaging protocols or implementation of novel hardware components. Clinical practice and literature review suggest that state-of-the-art 3.0 T is equivalent to 1.5 T in the assessment of focal liver lesions and diffuse liver disease. Therefore, further technical improvements are needed in order to fully exploit the potential of the higher field strength.

  19. 5D CNS+ Software for Automatically Imaging Axial, Sagittal, and Coronal Planes of Normal and Abnormal Second-Trimester Fetal Brains.

    PubMed

    Rizzo, Giuseppe; Capponi, Alessandra; Persico, Nicola; Ghi, Tullio; Nazzaro, Giovanni; Boito, Simona; Pietrolucci, Maria Elena; Arduini, Domenico

    2016-10-01

    The purpose of this study was to test new 5D CNS+ software (Samsung Medison Co, Ltd, Seoul, Korea), which is designed to image axial, sagittal, and coronal planes of the fetal brain from volumes obtained by 3-dimensional sonography. The study consisted of 2 different steps. First in a prospective study, 3-dimensional fetal brain volumes were acquired in 183 normal consecutive singleton pregnancies undergoing routine sonographic examinations at 18 to 24 weeks' gestation. The 5D CNS+ software was applied, and the percentage of adequate visualization of brain diagnostic planes was evaluated by 2 independent observers. In the second step, the software was also tested in 22 fetuses with cerebral anomalies. In 180 of 183 fetuses (98.4%), 5D CNS+ successfully reconstructed all of the diagnostic planes. Using the software on healthy fetuses, the observers acknowledged the presence of diagnostic images with visualization rates ranging from 97.7% to 99.4% for axial planes, 94.4% to 97.7% for sagittal planes, and 92.2% to 97.2% for coronal planes. The Cohen κ coefficient was analyzed to evaluate the agreement rates between the observers and resulted in values of 0.96 or greater for axial planes, 0.90 or greater for sagittal planes, and 0.89 or greater for coronal planes. All 22 fetuses with brain anomalies were identified among a series that also included healthy fetuses, and in 21 of the 22 cases, a correct diagnosis was made. 5D CNS+ was efficient in successfully imaging standard axial, sagittal, and coronal planes of the fetal brain. This approach may simplify the examination of the fetal central nervous system and reduce operator dependency.

  20. Caltech/JPL Conference on Image Processing Technology, Data Sources and Software for Commercial and Scientific Applications

    NASA Technical Reports Server (NTRS)

    Redmann, G. H.

    1976-01-01

    Recent advances in image processing and new applications are presented to the user community to stimulate the development and transfer of this technology to industrial and commercial applications. The Proceedings contains 37 papers and abstracts, including many illustrations (some in color) and provides a single reference source for the user community regarding the ordering and obtaining of NASA-developed image-processing software and science data.

  1. WCPS: An Open Geospatial Consortium Standard Applied to Flight Hardware/Software

    NASA Astrophysics Data System (ADS)

    Cappelaere, P. G.; Mandl, D.; Stanley, J.; Frye, S.; Baumann, P.

    2009-12-01

    The Open Geospatial Consortium Web Coverage Processing Service (WCPS) has the potential to allow advanced users to define processing algorithms using the web environment and seamlessly provide the capability to upload them directly to the satellite for autonomous execution using smart agent technology. The Open Geospatial Consortium announced the adoption of a specification for the Web Coverage Processing Service on Mar 25, 2009. This effort has been spearheaded by Dr. Peter Baumann, Jacobs University, Bremen, Germany. The WCPS specifies a coverage processing language allowing clients to send processing requests for evaluation to a server. NASA has been taking the next step by wrapping the user-defined requests into dynamic agents that can be uploaded to a spacecraft for onboard processing. This could have a dramatic impact on new decadal missions such as HyspIRI. Dynamic onboard classifiers are key to providing level 2 products in near-realtime directly to end-users on the ground. This capability, currently implemented on the HyspIRI pathfinder testbed using the NASA SpaceCube, will be demonstrated on EO-1, a NASA hyperspectral/multispectral imager, as the next capability for agile autonomous science experiments.

  2. Cloud computing geospatial application for water resources based on free and open source software and open standards - a prototype

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj

    2016-04-01

    Presently, most existing software is desktop-based and designed to work on a single computer, which represents a major limitation in many ways, from limited processing and storage capacity to restricted accessibility and availability. The only feasible solution lies in the web and the cloud. This abstract presents research and development of a cloud computing geospatial application for water resources based on free and open source software and open standards, using a hybrid public-private cloud deployment model running on two separate virtual machines (VMs). The first (VM1) runs on Amazon Web Services (AWS) and the second (VM2) runs on a Xen cloud platform. The presented cloud application is developed using free and open source software, open standards and prototype code. The cloud application presents a framework for developing specialized cloud geospatial applications that need only a web browser to be used. This cloud application is the ultimate collaboration geospatial platform because multiple users across the globe with an internet connection and a browser can jointly model geospatial objects, enter attribute data and information, execute algorithms, and visualize results. The presented cloud application is available all the time, accessible from everywhere, scalable, works in a distributed computing environment, creates a real-time multiuser collaboration platform, its programming language code and components are interoperable, and it is flexible in including additional components. The cloud geospatial application is implemented as a specialized water resources application with three web services for 1) data infrastructure (DI), 2) support for water resources modelling (WRM), and 3) user management. The web services run on the two VMs, which communicate over the internet, providing services to users. The application was tested on the Zletovica river basin case study with concurrent multiple users. The application is a state

  3. 76 FR 51993 - Draft Guidance for Industry on Standards for Clinical Trial Imaging Endpoints; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-19

    ... HUMAN SERVICES Food and Drug Administration Draft Guidance for Industry on Standards for Clinical Trial... entitled ``Standards for Clinical Trial Imaging Endpoints.'' The purpose of this draft guidance is to assist sponsors in the use of imaging endpoints in clinical trials of therapeutic drugs and...

  4. ORBS: A data reduction software for the imaging Fourier transform spectrometers SpIOMM and SITELLE

    NASA Astrophysics Data System (ADS)

    Martin, T.; Drissen, L.; Joncas, G.

    2012-09-01

    SpIOMM (Spectromètre-Imageur de l'Observatoire du Mont Mégantic) is still the only operational astronomical Imaging Fourier Transform Spectrometer (IFTS) capable of obtaining the visible spectrum of every source of light in a field of view of 12 arc-minutes. Even if it has been designed to work with both outputs of the Michelson interferometer, up to now only one output has been used. Here we present ORBS (Outils de Réduction Binoculaire pour SpIOMM/SITELLE), the reduction software we designed in order to take advantage of the data from both outputs. ORBS will also be used to reduce the data of SITELLE (Spectromètre-Imageur pour l'Étude en Long et en Large des raies d'Émissions), the direct successor of SpIOMM, which will be in operation at the Canada-France-Hawaii Telescope (CFHT) in early 2013. SITELLE will deliver larger data cubes than SpIOMM (up to two cubes of 34 GB each). We have thus made a strong effort in optimizing its performance in terms of speed and memory usage in order to ensure the best compliance with the quality characteristics discussed with the CFHT team. As a result, ORBS is now capable of reducing 68 GB of data in less than 20 hours using only 5 GB of random-access memory (RAM).

  5. 3D reconstruction of SEM images by use of optical photogrammetry software.

    PubMed

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

    Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science and many biological questions require information about their true 3D structure. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a 3D stereoscope reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close range photogrammetry. In optical close range photogrammetry a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special SEM requirements. Instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using the software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaption. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction and suitable for various applications in research and teaching.

  6. Army technology development. IBIS query. Software to support the Image Based Information System (IBIS) expansion for mapping, charting and geodesy

    NASA Technical Reports Server (NTRS)

    Friedman, S. Z.; Walker, R. E.; Aitken, R. B.

    1986-01-01

    The Image Based Information System (IBIS) has been under development at the Jet Propulsion Laboratory (JPL) since 1975. It is a collection of more than 90 programs that enable processing of image, graphical, and tabular data for spatial analysis. IBIS can be utilized to create comprehensive geographic data bases. From these data, an analyst can study various attributes describing characteristics of a given study area. Even complex combinations of disparate data types can be synthesized to obtain a new perspective on spatial phenomena. In 1984, new query software was developed enabling direct Boolean queries of IBIS data bases through the submission of easily understood expressions. An improved syntax methodology, a data dictionary, and display software simplified the analysts' tasks associated with building, executing, and subsequently displaying the results of a query. The primary purpose of this report is to describe the features and capabilities of the new query software. A secondary purpose is to compare this new query software to the query software developed previously (Friedman, 1982). With respect to this topic, the relative merits and drawbacks of both approaches are covered.

  7. eWaterCycle: Building an operational global Hydrological forecasting system based on standards and open source software

    NASA Astrophysics Data System (ADS)

    Drost, Niels; Bierkens, Marc; Donchyts, Gennadii; van de Giesen, Nick; Hummel, Stef; Hut, Rolf; Kockx, Arno; van Meersbergen, Maarten; Sutanudjaja, Edwin; Verlaan, Martin; Weerts, Albrecht; Winsemius, Hessel

    2015-04-01

    At EGU 2015, the eWaterCycle project (www.ewatercycle.org) will launch an operational high-resolution global hydrological model, including 14-day ensemble forecasts. Within the eWaterCycle project we aim to use standards and open source software as much as possible. This ensures the sustainability of the software created, and the ability to swap out components as newer technologies and solutions become available. It also allows us to build the system much faster than would otherwise be the case. At the heart of the eWaterCycle system is the PCRGLOB-WB global hydrological model (www.globalhydrology.nl) developed at Utrecht University. Version 2.0 of this model is implemented in Python, and models a wide range of hydrological processes at 10 x 10 km (and potentially higher) resolution. To assimilate near-real-time satellite data into the model and run an ensemble forecast, we use the OpenDA system (www.openda.org). This allows us to make use of different data assimilation techniques without the need to implement these from scratch. As a data assimilation technique we currently use a variant of an Ensemble Kalman Filter, specifically optimized for High Performance Computing environments. Coupling of the model with the DA is done with the Basic Model Interface (BMI), developed in the framework of the Community Surface Dynamics Modeling System (CSDMS) (csdms.colorado.edu). We have added support for BMI to PCRGLOB-WB, and developed a BMI adapter for OpenDA, allowing OpenDA to use any BMI-compatible model. We currently use multiple different BMI models with OpenDA, already showing the benefits of using this standard. Throughout the system, all file-based input and output is done via NetCDF files. We use several standard tools for pre- and post-processing of the data. Finally, we use ncWMS, a NetCDF-based implementation of the Web Map Service (WMS) protocol, to serve the forecasting results. We have built a 3D web application based on Cesium.js to visualize the output. In
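    For readers unfamiliar with BMI, the sketch below shows the general shape of a BMI-style wrapper around a gridded hydrological model; only a few of the interface's functions are shown, and the model internals are placeholders, not PCRGLOB-WB code.

```python
import numpy as np

class BmiStyleModel:
    """Minimal BMI-flavoured wrapper: initialize / update / get_value / set_value."""

    def initialize(self, config_file: str) -> None:
        # A real model would parse the configuration file; here we just allocate state.
        self.time = 0.0
        self.dt = 86400.0                        # one-day time step [s]
        self.storage = np.zeros((180, 360))      # e.g. a global soil-water storage grid

    def update(self) -> None:
        self.storage *= 0.99                     # placeholder process
        self.time += self.dt

    def get_current_time(self) -> float:
        return self.time

    def get_value(self, name: str) -> np.ndarray:
        return self.storage.ravel().copy()       # the DA framework reads model state

    def set_value(self, name: str, values: np.ndarray) -> None:
        self.storage = values.reshape(self.storage.shape)   # and writes the analysis back

    def finalize(self) -> None:
        del self.storage
```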

  8. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M. ); Hopper, T. )

    1993-01-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  9. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-05-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  10. Analyses of requirements for computer control and data processing experiment subsystems: Image data processing system (IDAPS) software description (7094 version), volume 2

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A description of each of the software modules of the Image Data Processing System (IDAPS) is presented. The changes in the software modules are the result of additions to the application software of the system and an upgrade of the IBM 7094 Mod(1) computer to a 1301 disk storage configuration. Necessary information about IDAPS software is supplied to the computer programmer who desires to make changes in the software system or who desires to use portions of the software outside of the IDAPS system. Each software module is documented with: module name, purpose, usage, common block(s) description, method (algorithm of subroutine), flow diagram (if needed), subroutines called, and storage requirements.

  11. The 'Densitometric Image Analysis Software' and its application to determine stepwise equilibrium constants from electrophoretic mobility shift assays.

    PubMed

    van Oeffelen, Liesbeth; Peeters, Eveline; Nguyen Le Minh, Phu; Charlier, Daniël

    2014-01-01

    Current software applications for densitometric analysis, such as ImageJ, QuantityOne (BioRad) and the Intelligent or Advanced Quantifier (Bio Image), do not allow the non-linearity of autoradiographic films to be taken into account during calibration. As a consequence, quantification of autoradiographs is often regarded as problematic, and phosphorimaging is the preferred alternative. However, the non-linear behaviour of autoradiographs can be described mathematically, so it can be accounted for. Therefore, the 'Densitometric Image Analysis Software' has been developed, which allows quantification of electrophoretic bands in autoradiographs, as well as in gels and phosphorimages, while providing optimized band selection support to the user. Moreover, the program can determine protein-DNA binding constants from Electrophoretic Mobility Shift Assays (EMSAs). For this purpose, the software calculates a chosen stepwise equilibrium constant for each migration lane within the EMSA, and estimates the errors due to non-uniformity of the background noise, smear caused by complex dissociation or denaturation of double-stranded DNA, and technical errors such as pipetting inaccuracies. Thereby, the program helps the user to optimize experimental parameters and to choose the best lanes for estimating an average equilibrium constant. This process can reduce the inaccuracy of equilibrium constants from the usual factor of 2 to about 20%, which is particularly useful when determining position weight matrices and cooperative binding constants to predict genomic binding sites. The MATLAB source code, platform-dependent software and installation instructions are available via the website http://micr.vub.ac.be.
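    A minimal sketch of one stepwise equilibrium constant estimated from the band intensities of a single lane, assuming the free protein concentration can be approximated by the total; the published software does considerably more, including film-response calibration and background/smear corrections.

```python
def stepwise_k(intensity_free_dna, intensity_complex, protein_conc_m):
    """Apparent stepwise equilibrium constant K = [PD] / ([D][P]) in M^-1.

    intensity_free_dna : band intensity of unbound DNA
    intensity_complex  : band intensity of the protein-DNA complex
    protein_conc_m     : free protein concentration (approximated by total) [M]
    """
    fraction_bound = intensity_complex / (intensity_complex + intensity_free_dna)
    return fraction_bound / ((1.0 - fraction_bound) * protein_conc_m)

# Example lane: 40% of the probe shifted at 50 nM protein -> K ~ 1.3e7 M^-1
print(stepwise_k(60.0, 40.0, 50e-9))
```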

  12. IRCAMDR: IRCAM3 Data Reduction Software

    NASA Astrophysics Data System (ADS)

    Aspin, Colin; McCaughrean, Mark; Bridger, Alan B.; Baines, Dave; Beard, Steven; Chan, S.; Giddings, Jack; Hartley, K. F.; Horsfield, A. P.; Kelly, B. D.; Emerson, J. P.; Currie, Malcolm J.; Economou, Frossie

    2014-06-01

    The UKIRT IRCAM3 data reduction and analysis software package, IRCAMDR (formerly ircam_clred) analyzes and displays any 2D data image stored in the standard Starlink (ascl:1110.012) NDF data format. It reduces and analyzes IRCAM1/2 data images of 62x58 pixels and IRCAM3 images of 256x256 size. Most of the applications will work on NDF images of any physical (pixel) dimensions, for example, 1024x1024 CCD images can be processed.

  13. Parameter-based estimation of CT dose index and image quality using an in-house android™-based software

    NASA Astrophysics Data System (ADS)

    Mubarok, S.; Lubis, L. E.; Pawiro, S. A.

    2016-03-01

    A compromise between radiation dose and image quality is essential in the use of CT imaging. The CT dose index (CTDI) is currently the primary dosimetric formalism in CT scanning, while low- and high-contrast resolution are aspects indicating image quality. This study aimed to estimate CTDIvol and image quality measures through a range of exposure parameter variations. CTDI measurements were performed using a PMMA (polymethyl methacrylate) phantom of 16 cm diameter, while the image quality test was conducted using a Catphan® 600 phantom. CTDI measurements were carried out according to the IAEA TRS 457 protocol using axial scan mode, under varied tube voltage, collimation or slice thickness, and tube current. The image quality test was conducted under the same exposure parameters as the CTDI measurements. An Android™-based software application was also a result of this study. The software was designed to estimate the value of CTDIvol, with a maximum difference from the actual CTDIvol measurement of 8.97%. Image quality can also be estimated through the CNR parameter, with a maximum difference from the actual CNR measurement of 21.65%.
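    For reference, the standard relations behind such a CTDIvol estimate (following the usual CT dosimetry formalism; the calibration constants used in the authors' app are not reproduced here) are:

```latex
\mathrm{CTDI}_{100} = \frac{1}{nT}\int_{-50\,\mathrm{mm}}^{+50\,\mathrm{mm}} D(z)\,\mathrm{d}z, \qquad
\mathrm{CTDI}_{w} = \tfrac{1}{3}\,\mathrm{CTDI}_{100,\mathrm{center}} + \tfrac{2}{3}\,\mathrm{CTDI}_{100,\mathrm{periphery}}, \qquad
\mathrm{CTDI}_{\mathrm{vol}} = \frac{\mathrm{CTDI}_{w}}{\mathrm{pitch}}
```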

  14. Thoughts on standardization of parameters for image evaluation

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1976-01-01

    Images received for image processing and analysis are obtained from a wide variety of sources and with a wide variety of sensors. Because it is desirable to have image processing algorithms be as universally applicable as possible, they should be designed, where possible, to be insensitive to the parametric variations of the source material. Where this is not possible, these variations must be taken into account. Therefore, it is necessary to consider what parameters may be defined in common across a suite of image types. Objective parameters or measurements of images which, in the proper combinations, may serve as surrogates for real images may be pixel-specific, location dependent, or combinations thereof. Parameters which have proven useful in defining the characteristics of images include the gray scale linearity, granularity of the quantization, spectral content, geometrical fidelity, resolution of the system expressed as either the point spread function or the modulation transfer function, and the spatial frequency content and characteristics of the data itself.

  15. Standards of ultrasound imaging of the adrenal glands.

    PubMed

    Słapa, Rafał Z; Jakubowski, Wiesław S; Dobruch-Sobczak, Katarzyna; Kasperlik-Załuska, Anna A

    2015-12-01

    Adrenal glands are paired endocrine glands located over the upper renal poles. Adrenal pathologies have various clinical presentations. They can coexist with the hyperfunction of individual cortical zones or the medulla, insufficiency of the adrenal cortex or retained normal hormonal function. The most common adrenal masses are tumors incidentally detected in imaging examinations (ultrasound, tomography, magnetic resonance imaging), referred to as incidentalomas. They include a range of histopathological entities but cortical adenomas without hormonal hyperfunction are the most common. Each abdominal ultrasound scan of a child or adult should include the assessment of the suprarenal areas. If a previously non-reported, incidental solid focal lesion exceeding 1 cm (incidentaloma) is detected in the suprarenal area, computed tomography or magnetic resonance imaging should be conducted to confirm its presence and for differentiation and the tumor functional status should be determined. Ultrasound imaging is also used to monitor adrenal incidentaloma that is not eligible for a surgery. The paper presents recommendations concerning the performance and assessment of ultrasound examinations of the adrenal glands and their pathological lesions. The article includes new ultrasound techniques, such as tissue harmonic imaging, spatial compound imaging, three-dimensional ultrasound, elastography, contrast-enhanced ultrasound and parametric imaging. The guidelines presented above are consistent with the recommendations of the Polish Ultrasound Society.

  16. MR urography in children. Part 2: how to use ImageJ MR urography processing software.

    PubMed

    Vivier, Pierre-Hugues; Dolores, Michael; Taylor, Melissa; Dacher, Jean-Nicolas

    2010-05-01

    MR urography (MRU) is an emerging technique particularly useful in paediatric uroradiology. The most common indication is the investigation of hydronephrosis. Combined static and dynamic contrast-enhanced MRU (DCE-MRU) provides both morphological and functional information in a single examination. However, specific post-processing must be performed and to our knowledge, dedicated software is not available in conventional workstations. Investigators involved in MRU classically use homemade software that is not freely accessible. For these reasons, we have developed a software program that is freely downloadable on the National Institute of Health (NIH) website. We report and describe in this study the features of this software program.

  17. Comparison of Anterior Segment Optical Tomography Parameters Measured Using a Semi-Automatic Software to Standard Clinical Instruments

    PubMed Central

    Ang, Marcus; Chong, Wesley; Huang, Huiqi; Tay, Wan Ting; Wong, Tien Yin; He, Ming-Guang; Aung, Tin; Mehta, Jodhbir S.

    2013-01-01

    Objective To compare anterior segment parameters measured using a semi-automatic software program (Zhongshan Angle Assessment Program, ZAP) applied to anterior segment optical coherence tomography (AS-OCT) images with commonly used instruments. Methods Cross-sectional study of a total of 1069 subjects (1069 eyes) from three population-based studies of adults aged 40-80 years. All subjects underwent AS-OCT imaging, and ZAP software was applied to determine anterior chamber depth (ACD), central corneal thickness (CCT) and anterior keratometry (K) readings. These were compared with measurements from an IOLMaster (ACD), ultrasound pachymetry (CCT) and an auto-refractor keratometer (K readings), respectively. Agreement between AS-OCT (ZAP) and the clinical instrument modalities was described using Bland-Altman 95% limits of agreement (LOA). Results The mean age of our subjects was 56.9±9.5 years and 50.9% were male. The mean AS-OCT (ZAP) parameters of our study cohort were: ACD 3.29±0.35 mm, CCT 560.75±35.07 µm and K-reading 46.79±2.72 D. There was good agreement between the measurements from the ZAP analysis and each instrument, and no violations of the assumptions of the LOA, albeit with a systematic bias for each comparison: AS-OCT consistently measured a deeper ACD compared to the IOLMaster (95% LOA −0.24, 0.55) and a thicker CCT compared to ultrasound pachymetry (16.8±0.53 µm; 95% LOA −17.3, 50.8). AS-OCT had good agreement with the auto-refractor, with at least 95% of the measurements within the prediction interval (P value <0.001). Conclusion This study demonstrates that there is good agreement between the measurements from the AS-OCT (ZAP) and conventional tools. However, small systematic biases remain, suggesting that these measurement tools may not be interchanged. PMID:23750265
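    A minimal sketch of the Bland-Altman limits-of-agreement computation used for these comparisons (illustrative, not the study's analysis code; the example values are hypothetical):

```python
import numpy as np

def bland_altman_loa(method_a, method_b):
    """Mean bias and 95% limits of agreement between two paired measurement methods."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Example: paired ACD readings (mm) from two devices.
bias, loa = bland_altman_loa([3.31, 3.10, 3.45, 3.28], [3.15, 2.98, 3.30, 3.05])
```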

  18. New Instruments for Survey: on Line Softwares for 3d Recontruction from Images

    NASA Astrophysics Data System (ADS)

    Fratus de Balestrini, E.; Guerra, F.

    2011-09-01

    3D scanning technologies have undergone significant development and have been widely used in the documentation of cultural, architectural and archaeological heritage. Modern methods of three-dimensional acquisition and modelling allow an object to be represented through a digital model that combines the visual potential of images (normally used for documentation) with the accuracy of the survey, serving both as a support for visualization and for the metric evaluation of any artefact of historical or artistic interest, and opening up new possibilities for the fruition, cataloguing and study of cultural heritage. Despite this development, because of the small user base and the sophisticated technology of 3D laser scanners, the cost of these instruments is very high and beyond the reach of most operators in the field of cultural heritage. This is why low-cost or even free technologies have appeared, allowing anyone to approach the issues of acquisition and 3D modelling and providing tools that allow three-dimensional models to be created in a simple and economical way. The research conducted by the Laboratory of Photogrammetry of the University IUAV of Venice, of which we present here some results, is intended to establish whether, with Arc3D, it is possible to obtain results that are somehow comparable, in terms of overall quality, to those of the laser scanner, and/or whether it is possible to integrate them. A series of tests was carried out on certain types of objects: models made with Arc3D from raster images were compared with those obtained using point clouds from a laser scanner. We also analyzed the conditions for an optimal use of Arc3D: environmental conditions (lighting), acquisition tools (digital cameras), and the type and size of objects. After performing the tests described above, we analyzed the models generated by Arc3D to check what other graphic representations can be obtained from them: orthophotos and drawings. The research

  19. Software safety

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy

    1987-01-01

    Software safety and its relationship to other qualities are discussed. It is shown that standard reliability and fault tolerance techniques will not solve the safety problem for the present. A new attitude is required: looking at what you do NOT want the software to do along with what you want it to do, and assuming that things will go wrong. New procedures and changes to the entire software development process are necessary: special software safety analysis techniques are needed, and design techniques, especially eliminating complexity, can be very helpful.

  20. Comparison of the Number of Image Acquisitions and Procedural Time Required for Transarterial Chemoembolization of Hepatocellular Carcinoma with and without Tumor-Feeder Detection Software.

    PubMed

    Iwazawa, Jin; Ohue, Shoichi; Hashimoto, Naoko; Mitani, Takashi

    2013-01-01

    Purpose. To compare the number of image acquisitions and the procedural time required for transarterial chemoembolization (TACE) with and without tumor-feeder detection software in cases of hepatocellular carcinoma (HCC). Materials and Methods. We retrospectively reviewed 50 cases involving software-assisted TACE (September 2011-February 2013) and 84 cases involving TACE without software assistance (January 2010-August 2011). We compared the number of image acquisitions, the overall procedural time, and the therapeutic efficacy in both groups. Results. The number of angiography acquisitions per session decreased from 6.6 to 4.6 with software assistance (P < 0.001). The total number of image acquisitions significantly decreased from 10.4 to 8.7 with software usage (P = 0.004). The mean procedural time required for a single session of software-assisted TACE (103 min) was significantly lower than that for a session without software (116 min, P = 0.021). For TACE with and without software usage, the complete (68% versus 63%, resp.) and objective (78% versus 80%, resp.) response rates did not differ significantly. Conclusion. In comparison with software-unassisted TACE, automated feeder-vessel detection software-assisted TACE for HCC involved fewer image acquisitions and could be completed faster while maintaining a comparable treatment response.

  1. ScanSAR interferometric processing using existing standard InSAR software for measuring large scale land deformation

    NASA Astrophysics Data System (ADS)

    Liang, Cunren; Zeng, Qiming; Jia, Jianying; Jiao, Jian; Cui, Xi'ai

    2013-02-01

    Scanning synthetic aperture radar (ScanSAR) mode is an efficient way to map large scale geophysical phenomena at low cost. The work presented in this paper is dedicated to ScanSAR interferometric processing and its implementation by making full use of existing standard interferometric synthetic aperture radar (InSAR) software. We first discuss the properties of the ScanSAR signal and its phase-preserved focusing using the full aperture algorithm in terms of interferometry. Then a complete interferometric processing flow is proposed. The standard ScanSAR product is decoded subswath by subswath with burst gaps padded with zero-pulses, followed by a Doppler centroid frequency estimation for each subswath and a polynomial fit of all of the subswaths for the whole scene. The burst synchronization of the interferometric pair is then calculated, and only the synchronized pulses are kept for further interferometric processing. After the complex conjugate multiplication of the interferometric pair, the residual non-integer pulse repetition interval (PRI) part between adjacent bursts caused by zero padding is compensated by resampling using a sinc kernel. The subswath interferograms are then mosaicked, in which a method is proposed to remove the subswath discontinuities in the overlap area. Then the following interferometric processing goes back to the traditional stripmap processing flow. A processor written with C and Fortran languages and controlled by Perl scripts is developed to implement these algorithms and processing flow based on the JPL/Caltech Repeat Orbit Interferometry PACkage (ROI_PAC). Finally, we use the processor to process ScanSAR data from the Envisat and ALOS satellites and obtain large scale deformation maps in the radar line-of-sight (LOS) direction.
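
    The compensation of the residual non-integer PRI offset by sinc-kernel resampling, mentioned above, can be illustrated in one dimension. In the sketch below the azimuth signal, the fractional offset and the kernel half-width are illustrative assumptions rather than the paper's actual parameters.

      import numpy as np

      def fractional_shift_sinc(signal, shift, half_width=8):
          """Shift a complex 1-D azimuth signal by a fractional number of samples
          using a truncated sinc interpolation kernel."""
          n = np.arange(len(signal))
          out = np.zeros_like(signal, dtype=complex)
          for i in n:
              # neighbouring input samples contributing to output sample i
              k = np.arange(i - half_width, i + half_width + 1)
              k = k[(k >= 0) & (k < len(signal))]
              out[i] = np.sum(signal[k] * np.sinc(i - shift - k))
          return out

      # illustrative complex azimuth line and a 0.37-sample residual offset
      az = np.exp(1j * 2 * np.pi * 0.01 * np.arange(256) ** 2)
      az_resampled = fractional_shift_sinc(az, shift=0.37)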

  2. 3DVIEWNIX-AVS: a software package for the separate visualization of arteries and veins in CE-MRA images.

    PubMed

    Lei, Tianhu; Udupa, Jayaram K; Odhner, Dewey; Nyúl, László G; Saha, Punam K

    2003-01-01

    Our earlier study developed a computerized method, based on fuzzy connected object delineation principles and algorithms, for artery and vein separation in contrast-enhanced Magnetic Resonance Angiography (CE-MRA) images. This paper reports its further development into a software package for routine clinical use. The software package, termed 3DVIEWNIX-AVS, consists of the following major operational parts: (1) converting data from DICOM3 to the 3DVIEWNIX format, (2) previewing slices and creating the VOI and MIP shell, (3) segmenting vessels, (4) separating arteries and veins, and (5) shell rendering vascular structures and creating animations. This package has been applied to EPIX Medical Inc's CE-MRA data (AngioMark MS-325). One hundred and thirty-five original CE-MRA data sets (of 52 patients) from 6 hospitals have been processed. In all case studies, unified parameter settings produced correct artery-vein separation. The current package runs on a Pentium PC under Linux and the total computation time per study is about 3 min. The strengths of this software package are (1) minimal user interaction, (2) minimal requirements for anatomic knowledge of the human vascular system, (3) clinically required speed, (4) free entry to any operational stage, (5) reproducible, reliable, high-quality results, and (6) cost-effective computer implementation. To date, it appears to be the only software package (using an image processing approach) available for artery and vein separation of the human vascular system for routine use in a clinical setting. PMID:12821028

  3. Osteolytica: An automated image analysis software package that rapidly measures cancer-induced osteolytic lesions in in vivo models with greater reproducibility compared to other commonly used methods.

    PubMed

    Evans, H R; Karmakharm, T; Lawson, M A; Walker, R E; Harris, W; Fellows, C; Huggins, I D; Richmond, P; Chantry, A D

    2016-02-01

    Methods currently used to analyse osteolytic lesions caused by malignancies such as multiple myeloma and metastatic breast cancer vary from basic 2-D X-ray analysis to 2-D images of micro-CT datasets analysed with non-specialised image software such as ImageJ. However, these methods have significant limitations. They do not capture 3-D data, they are time-consuming and they often suffer from inter-user variability. We therefore sought to develop a rapid and reproducible method to analyse 3-D osteolytic lesions in mice with cancer-induced bone disease. To this end, we have developed Osteolytica, an image analysis software method featuring an easy to use, step-by-step interface to measure lytic bone lesions. Osteolytica utilises novel graphics card acceleration (parallel computing) and 3-D rendering to provide rapid reconstruction and analysis of osteolytic lesions. To evaluate the use of Osteolytica we analysed tibial micro-CT datasets from murine models of cancer-induced bone disease and compared the results to those obtained using a standard ImageJ analysis method. Firstly, to assess inter-user variability we deployed four independent researchers to analyse tibial datasets from the U266-NSG murine model of myeloma. Using ImageJ, inter-user variability between the bones was substantial (±19.6%), in contrast to using Osteolytica, which demonstrated minimal variability (±0.5%). Secondly, tibial datasets from U266-bearing NSG mice or BALB/c mice injected with the metastatic breast cancer cell line 4T1 were compared to tibial datasets from aged and sex-matched non-tumour control mice. Analyses by both Osteolytica and ImageJ showed significant increases in bone lesion area in tumour-bearing mice compared to control mice. These results confirm that Osteolytica performs as well as the current 2-D ImageJ osteolytic lesion analysis method. However, Osteolytica is advantageous in that it analyses over the entirety of the bone volume (as opposed to selected 2-D images), it

  5. Osteolytica: An automated image analysis software package that rapidly measures cancer-induced osteolytic lesions in in vivo models with greater reproducibility compared to other commonly used methods☆

    PubMed Central

    Evans, H.R.; Karmakharm, T.; Lawson, M.A.; Walker, R.E.; Harris, W.; Fellows, C.; Huggins, I.D.; Richmond, P.; Chantry, A.D.

    2016-01-01

    Methods currently used to analyse osteolytic lesions caused by malignancies such as multiple myeloma and metastatic breast cancer vary from basic 2-D X-ray analysis to 2-D images of micro-CT datasets analysed with non-specialised image software such as ImageJ. However, these methods have significant limitations. They do not capture 3-D data, they are time-consuming and they often suffer from inter-user variability. We therefore sought to develop a rapid and reproducible method to analyse 3-D osteolytic lesions in mice with cancer-induced bone disease. To this end, we have developed Osteolytica, an image analysis software method featuring an easy to use, step-by-step interface to measure lytic bone lesions. Osteolytica utilises novel graphics card acceleration (parallel computing) and 3-D rendering to provide rapid reconstruction and analysis of osteolytic lesions. To evaluate the use of Osteolytica we analysed tibial micro-CT datasets from murine models of cancer-induced bone disease and compared the results to those obtained using a standard ImageJ analysis method. Firstly, to assess inter-user variability we deployed four independent researchers to analyse tibial datasets from the U266-NSG murine model of myeloma. Using ImageJ, inter-user variability between the bones was substantial (± 19.6%), in contrast to using Osteolytica, which demonstrated minimal variability (± 0.5%). Secondly, tibial datasets from U266-bearing NSG mice or BALB/c mice injected with the metastatic breast cancer cell line 4T1 were compared to tibial datasets from aged and sex-matched non-tumour control mice. Analyses by both Osteolytica and ImageJ showed significant increases in bone lesion area in tumour-bearing mice compared to control mice. These results confirm that Osteolytica performs as well as the current 2-D ImageJ osteolytic lesion analysis method. However, Osteolytica is advantageous in that it analyses over the entirety of the bone volume (as opposed to selected 2-D images

  6. CellSeT: novel software to extract and analyze structured networks of plant cells from confocal images.

    PubMed

    Pound, Michael P; French, Andrew P; Wells, Darren M; Bennett, Malcolm J; Pridmore, Tony P

    2012-04-01

    It is increasingly important in life sciences that many cell-scale and tissue-scale measurements are quantified from confocal microscope images. However, extracting and analyzing large-scale confocal image data sets represents a major bottleneck for researchers. To aid this process, CellSeT software has been developed, which utilizes tissue-scale structure to help segment individual cells. We provide examples of how the CellSeT software can be used to quantify fluorescence of hormone-responsive nuclear reporters, determine membrane protein polarity, extract cell and tissue geometry for use in later modeling, and take many additional biologically relevant measures using an extensible plug-in toolset. Application of CellSeT promises to remove subjectivity from the resulting data sets and facilitate higher-throughput, quantitative approaches to plant cell research.

  7. Image-based tracking: a new emerging standard

    NASA Astrophysics Data System (ADS)

    Antonisse, Jim; Randall, Scott

    2012-06-01

    Automated moving object detection and tracking are increasingly viewed as solutions to the enormous data volumes resulting from emerging wide-area persistent surveillance systems. In a previous paper we described a Motion Imagery Standards Board (MISB) initiative to help address this problem: the specification of a micro-architecture for the automatic extraction of motion indicators and tracks. This paper reports on the development of an extended specification of the plug-and-play tracking micro-architecture, on its status as an emerging standard across DoD, the Intelligence Community, and NATO.

  8. Application of adaptive optics in retinal imaging: a quantitative and clinical comparison with standard cameras

    NASA Astrophysics Data System (ADS)

    Barriga, E. S.; Erry, G.; Yang, S.; Russell, S.; Raman, B.; Soliz, P.

    2005-04-01

    Aim: The objective of this project was to evaluate high resolution images from an adaptive optics retinal imager through comparisons with standard film-based and standard digital fundus imagers. Methods: A clinical prototype adaptive optics fundus imager (AOFI) was used to collect retinal images from subjects with various forms of retinopathy to determine whether improved visibility into the disease could be provided to the clinician. The AOFI achieves low-order correction of aberrations through a closed-loop wavefront sensor and an adaptive optics system. The remaining high-order aberrations are removed by direct deconvolution using the point spread function (PSF) or by blind deconvolution when the PSF is not available. An ophthalmologist compared the AOFI images with standard fundus images and provided a clinical evaluation of all the modalities and processing techniques. All images were also analyzed using a quantitative image quality index. Results: This system has been tested on three human subjects (one normal and two with retinopathy). In the diabetic patient vascular abnormalities were detected with the AOFI that cannot be resolved with the standard fundus camera. Very small features, such as the fine vascular structures on the optic disc and the individual nerve fiber bundles are easily resolved by the AOFI. Conclusion: This project demonstrated that adaptive optic images have great potential in providing clinically significant detail of anatomical and pathological structures to the ophthalmologist.
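
    Direct deconvolution with a measured PSF, as described above, is often implemented as a frequency-domain Wiener filter. The sketch below is a generic illustration with a hypothetical Gaussian PSF and noise-to-signal ratio, not the AOFI's actual processing chain; blurred_fundus_image is a placeholder name.

      import numpy as np

      def wiener_deconvolve(image, psf, nsr=0.01):
          """Frequency-domain Wiener deconvolution of an image with a known PSF.
          nsr is the assumed noise-to-signal power ratio."""
          psf_pad = np.zeros_like(image, dtype=float)
          psf_pad[:psf.shape[0], :psf.shape[1]] = psf
          # centre the kernel so the restored image is not shifted
          psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
          H = np.fft.fft2(psf_pad)
          G = np.fft.fft2(image)
          F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G   # Wiener filter in the frequency domain
          return np.real(np.fft.ifft2(F))

      # hypothetical 7x7 Gaussian PSF
      y, x = np.mgrid[-3:4, -3:4]
      psf = np.exp(-(x ** 2 + y ** 2) / 2.0)
      psf /= psf.sum()
      # restored = wiener_deconvolve(blurred_fundus_image, psf)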

  9. DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research.

    PubMed

    Fedorov, Andriy; Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R

    2016-01-01

    Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM(®)) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions
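
    The Standardized Uptake Value normalization listed among the processing steps follows the standard body-weight formula. The minimal sketch below assumes decay-corrected activity values and omits the DICOM header parsing a real pipeline would require.

      def suv_bw(voxel_activity_bq_per_ml, injected_dose_bq, body_weight_kg):
          """Body-weight SUV: tissue activity concentration divided by the
          injected dose per gram of body weight (1 g of tissue ~ 1 mL assumed)."""
          return voxel_activity_bq_per_ml / (injected_dose_bq / (body_weight_kg * 1000.0))

      # hypothetical values: 12 kBq/mL voxel, 350 MBq injected dose, 75 kg patient
      print(suv_bw(12_000.0, 350e6, 75.0))   # ~2.57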

  11. DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research

    PubMed Central

    Fedorov, Andriy; Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R.

    2016-01-01

    Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions

  12. First experiences with the implementation of the European standard EN 62304 on medical device software for the quality assurance of a radiotherapy unit

    PubMed Central

    2014-01-01

    Background According to the latest amendment of the Medical Device Directive, standalone software qualifies as a medical device when it is intended by the manufacturer to be used for medical purposes. In this context, the EN 62304 standard is applicable, which defines the life-cycle requirements for the development and maintenance of medical device software. A pilot project was launched to acquire skills in implementing this standard in a hospital-based environment (in-house manufacture). Methods The EN 62304 standard outlines minimum requirements for each stage of the software life-cycle, defines the activities and tasks to be performed, and scales documentation and testing according to criticality. The required processes were established for the pre-existing decision-support software FlashDumpComparator (FDC), used during the quality assurance of treatment-relevant beam parameters. As the EN 62304 standard implies compliance with the EN ISO 14971 standard on the application of risk management to medical devices, a risk analysis was carried out to identify potential hazards and reduce the associated risks to acceptable levels. Results The EN 62304 standard is difficult to implement without proper tools, so open-source software was selected and integrated into a dedicated development platform. The control measures yielded by the risk analysis were independently implemented and verified, and script-based test automation was retrofitted to reduce the associated test effort. After all documents facilitating the traceability of the specified requirements to the corresponding tests, and of the control measures to the proof of execution, had been generated, the FDC was released as an accessory to the HIT facility. Conclusions The implementation of the EN 62304 standard was time-consuming, and a learning curve had to be overcome during the first iterations of the associated processes, but many process descriptions and all software tools can be re-utilized in follow-up projects

  13. Interference-free ultrasound imaging during HIFU therapy, using software tools

    NASA Technical Reports Server (NTRS)

    Vaezy, Shahram (Inventor); Held, Robert (Inventor); Sikdar, Siddhartha (Inventor); Managuli, Ravi (Inventor); Zderic, Vesna (Inventor)

    2010-01-01

    Disclosed herein is a method for obtaining a composite interference-free ultrasound image when non-imaging ultrasound waves would otherwise interfere with ultrasound imaging. A conventional ultrasound imaging system is used to collect frames of ultrasound image data in the presence of non-imaging ultrasound waves, such as high-intensity focused ultrasound (HIFU). The frames are directed to a processor that analyzes the frames to identify portions of the frame that are interference-free. Interference-free portions of a plurality of different ultrasound image frames are combined to generate a single composite interference-free ultrasound image that is displayed to a user. In this approach, a frequency of the non-imaging ultrasound waves is offset relative to a frequency of the ultrasound imaging waves, such that the interference introduced by the non-imaging ultrasound waves appears in a different portion of the frames.
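
    The compositing idea, selecting interference-free portions from several frames, can be sketched as choosing, band by band, the frame in which a simple interference metric is lowest. The sketch below is a generic illustration with an assumed mean-intensity metric, not the patented detection logic.

      import numpy as np

      def composite_interference_free(frames, block_rows=16):
          """Build a composite image by taking each horizontal band from the frame
          whose band shows the least high-intensity interference."""
          frames = np.asarray(frames, dtype=float)      # shape: (n_frames, rows, cols)
          n_frames, rows, cols = frames.shape
          out = np.empty((rows, cols))
          for r0 in range(0, rows, block_rows):
              band = frames[:, r0:r0 + block_rows, :]
              # crude interference metric (assumption): mean intensity of the band in each frame
              scores = band.mean(axis=(1, 2))
              out[r0:r0 + block_rows, :] = band[np.argmin(scores)]
          return out

      # composite = composite_interference_free(list_of_ultrasound_frames)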

  14. Three-Dimensional Root Phenotyping with a Novel Imaging and Software Platform1[C][W][OA

    PubMed Central

    Clark, Randy T.; MacCurdy, Robert B.; Jung, Janelle K.; Shaff, Jon E.; McCouch, Susan R.; Aneshansley, Daniel J.; Kochian, Leon V.

    2011-01-01

    A novel imaging and software platform was developed for the high-throughput phenotyping of three-dimensional root traits during seedling development. To demonstrate the platform’s capacity, plants of two rice (Oryza sativa) genotypes, Azucena and IR64, were grown in a transparent gellan gum system and imaged daily for 10 d. Rotational image sequences consisting of 40 two-dimensional images were captured using an optically corrected digital imaging system. Three-dimensional root reconstructions were generated and analyzed using a custom-designed software, RootReader3D. Using the automated and interactive capabilities of RootReader3D, five rice root types were classified and 27 phenotypic root traits were measured to characterize these two genotypes. Where possible, measurements from the three-dimensional platform were validated and were highly correlated with conventional two-dimensional measurements. When comparing gellan gum-grown plants with those grown under hydroponic and sand culture, significant differences were detected in morphological root traits (P < 0.05). This highly flexible platform provides the capacity to measure root traits with a high degree of spatial and temporal resolution and will facilitate novel investigations into the development of entire root systems or selected components of root systems. In combination with the extensive genetic resources that are now available, this platform will be a powerful resource to further explore the molecular and genetic determinants of root system architecture. PMID:21454799

  15. Experiment Design Regularization-Based Hardware/Software Codesign for Real-Time Enhanced Imaging in Uncertain Remote Sensing Environment

    NASA Astrophysics Data System (ADS)

    Castillo Atoche, A.; Torres Roman, D.; Shkvarko, Y.

    2010-12-01

    A new aggregated Hardware/Software (HW/SW) codesign approach to optimization of the digital signal processing techniques for enhanced imaging with real-world uncertain remote sensing (RS) data based on the concept of descriptive experiment design regularization (DEDR) is addressed. We consider the applications of the developed approach to typical single-look synthetic aperture radar (SAR) imaging systems operating in the real-world uncertain RS scenarios. The software design is aimed at the algorithmic-level decrease of the computational load of the large-scale SAR image enhancement tasks. The innovative algorithmic idea is to incorporate into the DEDR-optimized fixed-point iterative reconstruction/enhancement procedure the convex convergence enforcement regularization via constructing the proper multilevel projections onto convex sets (POCS) in the solution domain. The hardware design is performed via systolic array computing based on a Xilinx Field Programmable Gate Array (FPGA) XC4VSX35-10ff668 and is aimed at implementing the unified DEDR-POCS image enhancement/reconstruction procedures in a computationally efficient multi-level parallel fashion that meets the (near) real-time image processing requirements. Finally, we comment on the simulation results indicative of the significantly increased performance efficiency both in resolution enhancement and in computational complexity reduction metrics gained with the proposed aggregated HW/SW co-design approach.
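
    The convergence-enforcing POCS regularization referred to above alternates projections onto convex constraint sets. The sketch below is a generic two-set illustration (non-negativity and an upper intensity bound), not the DEDR-specific operators of the paper.

      import numpy as np

      def pocs(x0, projections, n_iter=50):
          """Alternate projections onto convex sets (POCS)."""
          x = np.array(x0, dtype=float)
          for _ in range(n_iter):
              for project in projections:
                  x = project(x)
          return x

      # example convex sets: non-negative intensities and an upper intensity bound
      proj_nonneg = lambda x: np.maximum(x, 0.0)
      proj_bound  = lambda x: np.minimum(x, 1.0)

      x0 = np.random.randn(128, 128)     # stand-in for an initial image estimate
      x = pocs(x0, [proj_nonneg, proj_bound])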

  16. Use of ImageJ software for histomorphometric evaluation of normal and severely affected canine ear canals.

    PubMed

    Zur, Gila; Klement, Eyal

    2015-10-01

    Morphological studies comparing normal and diseased ear canals rely primarily on subjective scoring. The aim of this study was to compare normal and severely affected ears in dogs using objective measurements made with ImageJ software. Ear canals were harvested from cadavers with normal ears and from dogs that underwent total ear canal ablation for unresolved otitis. Histopathology samples from the ear canals were evaluated by semi-quantitative scoring and by using ImageJ software for histomorphometric measurements. The normal ears were compared to the severely affected ears using the 2 methods. The 2 methods were significantly (P < 0.0001) correlated for epidermal hyperplasia, ceruminous gland dilation and hyperplasia, and tissue inflammation, all of which were significantly greater in the severely affected ears (P < 0.0001). This study demonstrated a very high correlation between the 2 methods for the most markedly affected components of otitis externa and showed that ImageJ software can be used efficiently to measure and evaluate ear canal histomorphometry.

  17. SU-E-J-42: Customized Deformable Image Registration Using Open-Source Software SlicerRT

    SciTech Connect

    Gaitan, J Cifuentes; Chin, L; Pignol, J; Kirby, N; Pouliot, J; Lasso, A; Pinter, C; Fichtinger, G

    2014-06-01

    Purpose: SlicerRT is a flexible platform that allows the user to incorporate the necessary image registration and processing tools to improve clinical workflow. This work validates the accuracy and versatility of the deformable image registration (DIR) algorithm of the free open-source software SlicerRT, using a deformable physical pelvic phantom, against available commercial image fusion algorithms. Methods: Optical camera images of non-radiopaque markers implanted in an anatomical pelvic phantom were used to measure the ground-truth deformation and to evaluate the theoretical deformations of several DIR algorithms. To perform the registration, full- and empty-bladder computed tomography (CT) images of the phantom were obtained and used as the fixed and moving images, respectively. The DIR module found in SlicerRT used a B-spline deformable image registration with multiple optimization parameters that allowed customization of the registration, including a regularization term that controlled the amount of local voxel displacement. The virtual deformation field at the center of the phantom was obtained and compared to the experimental ground-truth values. The parameters of SlicerRT were then varied to improve spatial accuracy. To quantify image similarity, the mean absolute difference (MAD) in Hounsfield units was calculated. In addition, the Dice coefficient of the contoured rectum was evaluated to validate the ability of the algorithm to transfer anatomical contours. Results: Overall, SlicerRT achieved one of the lowest MAD values across the algorithm spectrum and slightly smaller mean spatial errors in comparison to MIM software (MIM). On the other hand, SlicerRT produced higher mean spatial errors than Velocity Medical Solutions (VEL), although it improved the Dice coefficient to 0.91. The large spatial errors were attributed to the poor contrast at the prostate-bladder interface of the phantom. Conclusion: Based on phantom validation, SlicerRT is capable of
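
    The two evaluation metrics used above, the mean absolute difference of Hounsfield units and the Dice coefficient of a contoured structure, are straightforward to compute once the fixed and deformed images are resampled onto the same grid, as in the sketch below.

      import numpy as np

      def mean_absolute_difference(fixed_hu, deformed_hu):
          """Mean absolute difference of Hounsfield units between two aligned images."""
          return np.mean(np.abs(fixed_hu.astype(float) - deformed_hu.astype(float)))

      def dice_coefficient(mask_a, mask_b):
          """Dice similarity coefficient of two binary contours (e.g. the rectum)."""
          a, b = mask_a.astype(bool), mask_b.astype(bool)
          intersection = np.logical_and(a, b).sum()
          return 2.0 * intersection / (a.sum() + b.sum())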

  18. Nuquantus: Machine learning software for the characterization and quantification of cell nuclei in complex immunofluorescent tissue images

    PubMed Central

    Gross, Polina; Honnorat, Nicolas; Varol, Erdem; Wallner, Markus; Trappanese, Danielle M.; Sharp, Thomas E.; Starosta, Timothy; Duran, Jason M.; Koller, Sarah; Davatzikos, Christos; Houser, Steven R.

    2016-01-01

    Determination of fundamental mechanisms of disease often hinges on histopathology visualization and quantitative image analysis. Currently, the analysis of multi-channel fluorescence tissue images is primarily achieved by manual measurements of tissue cellular content and sub-cellular compartments. Since the current manual methodology for image analysis is a tedious and subjective approach, there is clearly a need for an automated analytical technique to process large-scale image datasets. Here, we introduce Nuquantus (Nuclei quantification utility software) - a novel machine learning-based analytical method, which identifies, quantifies and classifies nuclei based on cells of interest in composite fluorescent tissue images, in which cell borders are not visible. Nuquantus is an adaptive framework that learns the morphological attributes of intact tissue in the presence of anatomical variability and pathological processes. Nuquantus allowed us to robustly perform quantitative image analysis on remodeling cardiac tissue after myocardial infarction. Nuquantus reliably classifies cardiomyocyte versus non-cardiomyocyte nuclei and detects cell proliferation, as well as cell death in different cell classes. Broadly, Nuquantus provides innovative computerized methodology to analyze complex tissue images that significantly facilitates image analysis and minimizes human bias. PMID:27005843

  19. Nuquantus: Machine learning software for the characterization and quantification of cell nuclei in complex immunofluorescent tissue images

    NASA Astrophysics Data System (ADS)

    Gross, Polina; Honnorat, Nicolas; Varol, Erdem; Wallner, Markus; Trappanese, Danielle M.; Sharp, Thomas E.; Starosta, Timothy; Duran, Jason M.; Koller, Sarah; Davatzikos, Christos; Houser, Steven R.

    2016-03-01

    Determination of fundamental mechanisms of disease often hinges on histopathology visualization and quantitative image analysis. Currently, the analysis of multi-channel fluorescence tissue images is primarily achieved by manual measurements of tissue cellular content and sub-cellular compartments. Since the current manual methodology for image analysis is a tedious and subjective approach, there is clearly a need for an automated analytical technique to process large-scale image datasets. Here, we introduce Nuquantus (Nuclei quantification utility software) - a novel machine learning-based analytical method, which identifies, quantifies and classifies nuclei based on cells of interest in composite fluorescent tissue images, in which cell borders are not visible. Nuquantus is an adaptive framework that learns the morphological attributes of intact tissue in the presence of anatomical variability and pathological processes. Nuquantus allowed us to robustly perform quantitative image analysis on remodeling cardiac tissue after myocardial infarction. Nuquantus reliably classifies cardiomyocyte versus non-cardiomyocyte nuclei and detects cell proliferation, as well as cell death in different cell classes. Broadly, Nuquantus provides innovative computerized methodology to analyze complex tissue images that significantly facilitates image analysis and minimizes human bias.

  20. Image intensity standardization in 3D rotational angiography and its application to vascular segmentation

    NASA Astrophysics Data System (ADS)

    Bogunović, Hrvoje; Radaelli, Alessandro G.; De Craene, Mathieu; Delgado, David; Frangi, Alejandro F.

    2008-03-01

    Knowledge-based vascular segmentation methods typically rely on a pre-built training set of segmented images, which is used to estimate the probability of each voxel to belong to a particular tissue. In 3D Rotational Angiography (3DRA) the same tissue can correspond to different intensity ranges depending on the imaging device, settings and contrast injection protocol. As a result, pre-built training sets do not apply to all images and the best segmentation results are often obtained when the training set is built specifically for each individual image. We present an Image Intensity Standardization (IIS) method designed to ensure a correspondence between specific tissues and intensity ranges common to every image that undergoes the standardization process. The method applies a piecewise linear transformation to the image that aligns the intensity histogram to the histogram taken as reference. The reference histogram has been selected from a high quality image not containing artificial objects such as coils or stents. This is a pre-processing step that allows employing a training set built on a limited number of standardized images for the segmentation of standardized images which were not part of the training set. The effectiveness of the presented IIS technique in combination with a well-validated knowledge-based vasculature segmentation method is quantified on a variety of 3DRA images depicting cerebral arteries and intracranial aneurysms. The proposed IIS method offers a solution to the standardization of tissue classes in routine medical images and effectively improves automation and usability of knowledge-based vascular segmentation algorithms.
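
    The piecewise linear intensity standardization described above maps a small set of intensity landmarks of each image onto the corresponding landmarks of the reference histogram. The sketch below uses fixed percentiles as landmarks, which is an illustrative choice rather than the paper's exact configuration.

      import numpy as np

      def standardize_intensities(image, reference, percentiles=(1, 10, 25, 50, 75, 90, 99)):
          """Piecewise linear mapping of image intensities so that its landmark
          percentiles coincide with those of a reference image."""
          p = np.asarray(percentiles, dtype=float)
          src = np.percentile(image, p)        # landmarks of the image to standardize
          ref = np.percentile(reference, p)    # landmarks of the reference histogram
          # np.interp applies the piecewise linear transform defined by the landmark pairs
          return np.interp(image, src, ref)

      # standardized = standardize_intensities(rotational_angio_volume, reference_volume)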

  1. Software workflow for the automatic tagging of medieval manuscript images (SWATI)

    NASA Astrophysics Data System (ADS)

    Chandna, Swati; Tonne, Danah; Jejkal, Thomas; Stotzka, Rainer; Krause, Celia; Vanscheidt, Philipp; Busch, Hannah; Prabhune, Ajinkya

    2015-01-01

    Digital methods, tools and algorithms are gaining in importance for the analysis of digitized manuscript collections in the arts and humanities. One example is the BMBF-funded research project "eCodicology" which aims to design, evaluate and optimize algorithms for the automatic identification of macro- and micro-structural layout features of medieval manuscripts. The main goal of this research project is to provide better insights into high-dimensional datasets of medieval manuscripts for humanities scholars. The heterogeneous nature and size of the humanities data and the need to create a database of automatically extracted reproducible features for better statistical and visual analysis are the main challenges in designing a workflow for the arts and humanities. This paper presents a concept of a workflow for the automatic tagging of medieval manuscripts. As a starting point, the workflow uses medieval manuscripts digitized within the scope of the project Virtual Scriptorium St. Matthias". Firstly, these digitized manuscripts are ingested into a data repository. Secondly, specific algorithms are adapted or designed for the identification of macro- and micro-structural layout elements like page size, writing space, number of lines etc. And lastly, a statistical analysis and scientific evaluation of the manuscripts groups are performed. The workflow is designed generically to process large amounts of data automatically with any desired algorithm for feature extraction. As a result, a database of objectified and reproducible features is created which helps to analyze and visualize hidden relationships of around 170,000 pages. The workflow shows the potential of automatic image analysis by enabling the processing of a single page in less than a minute. Furthermore, the accuracy tests of the workflow on a small set of manuscripts with respect to features like page size and text areas show that automatic and manual analysis are comparable. The usage of a computer

  2. Image processing of standard grading scales for objective assessment of contact lens wear complications

    NASA Astrophysics Data System (ADS)

    Perez-Cabre, Elisabet; Millan, Maria S.; Abril, Hector C.; Otxoa, E.

    2004-10-01

    Ocular complications in contact lens wearers are usually graded by specialists using visual inspection and comparing with established standards. The standard grading scales consist of either a set of illustrations or photographs ordered from a normal situation to a severe complication. In this work, an objective assessment of contact lens wear complications is intended by applying different image processing techniques to two standard grading scales (Efron and CCLRU grading scales). In particular, conjunctival hyperemia and papillary conjunctivitis are considered. Given a set of standard illustrations or pictures for each considered ocular disorder, image preprocessing is needed to compare equivalent areas. Histogram analysis allows segmenting vessel and background pixel populations, which are used to determine the most relevant features in the measurement of contact lens effects. Features such as color, total area of vessels and vessel length are used to evaluate bulbar and lid redness. The procedure to obtain an automatic grading method by digital image analysis of standard grading scales is described.

  3. The 'Densitometric Image Analysis Software' and its application to determine stepwise equilibrium constants from electrophoretic mobility shift assays.

    PubMed

    van Oeffelen, Liesbeth; Peeters, Eveline; Nguyen Le Minh, Phu; Charlier, Daniël

    2014-01-01

    Current software applications for densitometric analysis, such as ImageJ, QuantityOne (BioRad) and the Intelligent or Advanced Quantifier (Bio Image), do not allow the non-linearity of autoradiographic films to be taken into account during calibration. As a consequence, quantification of autoradiographs is often regarded as problematic, and phosphorimaging is the preferred alternative. However, the non-linear behaviour of autoradiographs can be described mathematically, so it can be accounted for. Therefore, the 'Densitometric Image Analysis Software' has been developed, which allows electrophoretic bands to be quantified in autoradiographs, as well as in gels and phosphorimages, while providing optimized band-selection support to the user. Moreover, the program can determine protein-DNA binding constants from Electrophoretic Mobility Shift Assays (EMSAs). For this purpose, the software calculates a chosen stepwise equilibrium constant for each migration lane within the EMSA and estimates the errors due to non-uniformity of the background noise, smear caused by complex dissociation or denaturation of double-stranded DNA, and technical errors such as pipetting inaccuracies. Thereby, the program helps the user to optimize experimental parameters and to choose the best lanes for estimating an average equilibrium constant. This process can reduce the inaccuracy of equilibrium constants from the usual factor of 2 to about 20%, which is particularly useful when determining position weight matrices and cooperative binding constants to predict genomic binding sites. The MATLAB source code, platform-dependent software and installation instructions are available via the website http://micr.vub.ac.be. PMID:24465496
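
    The stepwise equilibrium constant estimated per lane can be expressed in terms of the quantified band intensities. The sketch below shows only the simplest case of a single protein-DNA complex with hypothetical lane intensities; the published software additionally models background, smear and pipetting errors.

      def stepwise_k(free_dna_intensity, complex_intensity, free_protein_conc_m):
          """Apparent stepwise equilibrium constant K = [PD] / ([P][D]) for one lane,
          using band intensities as proxies for the relative DNA concentrations."""
          fraction_bound = complex_intensity / (complex_intensity + free_dna_intensity)
          fraction_free = 1.0 - fraction_bound
          return fraction_bound / (fraction_free * free_protein_conc_m)   # units: M^-1

      # hypothetical lane: 40% of the DNA shifted at 50 nM free protein
      print(stepwise_k(free_dna_intensity=600.0, complex_intensity=400.0,
                       free_protein_conc_m=50e-9))   # ~1.3e7 M^-1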

  4. NiftyFit: a Software Package for Multi-parametric Model-Fitting of 4D Magnetic Resonance Imaging Data.

    PubMed

    Melbourne, Andrew; Toussaint, Nicolas; Owen, David; Simpson, Ivor; Anthopoulos, Thanasis; De Vita, Enrico; Atkinson, David; Ourselin, Sebastien

    2016-07-01

    Multi-modal, multi-parametric Magnetic Resonance (MR) Imaging is becoming an increasingly sophisticated tool for neuroimaging. The relationships between parameters estimated from different individual MR modalities have the potential to transform our understanding of brain function, structure, development and disease. This article describes a new software package for such multi-contrast Magnetic Resonance Imaging that provides a unified model-fitting framework. We describe model-fitting functionality for Arterial Spin Labeled MRI, T1 Relaxometry, T2 relaxometry and Diffusion Weighted imaging, providing command line documentation to generate the figures in the manuscript. Software and data (using the nifti file format) used in this article are simultaneously provided for download. We also present some extended applications of the joint model fitting framework applied to diffusion weighted imaging and T2 relaxometry, in order to both improve parameter estimation in these models and generate new parameters that link different MR modalities. NiftyFit is intended as a clear and open-source educational release so that the user may adapt and develop their own functionality as they require.
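
    One of the single-modality fits mentioned, T2 relaxometry, reduces to fitting a mono-exponential decay to multi-echo signal intensities. The sketch below uses SciPy's curve_fit with hypothetical echo times and signals and is not NiftyFit's actual implementation.

      import numpy as np
      from scipy.optimize import curve_fit

      def t2_decay(te_ms, s0, t2_ms):
          """Mono-exponential T2 signal model S(TE) = S0 * exp(-TE / T2)."""
          return s0 * np.exp(-te_ms / t2_ms)

      # hypothetical multi-echo measurements for one voxel
      te = np.array([10.0, 30.0, 50.0, 70.0, 90.0])           # echo times (ms)
      signal = np.array([950.0, 720.0, 545.0, 410.0, 310.0])  # signal intensities

      (s0, t2), _ = curve_fit(t2_decay, te, signal, p0=(signal[0], 60.0))
      print(f"fitted T2 ~ {t2:.1f} ms")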

  6. ACR/NEMA Digital Image Interface Standard (An Illustrated Protocol Overview)

    NASA Astrophysics Data System (ADS)

    Lawrence, G. Robert

    1985-09-01

    The American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) have sponsored a joint standards committee mandated to develop a universal interface standard for the transfer of radiology images among a variety of PACS imaging devices. The resulting standard interface conforms to the ISO/OSI standard reference model for network protocol layering. The standard interface specifies the lower layers of the reference model (Physical, Data Link, Transport and Session) and implies a requirement for the Network layer should a network be required. The message content has been considered, and a flexible message and image format is specified. The following imaging equipment modalities are supported by the standard interface: CT (Computed Tomography), DS (Digital Subtraction), NM (Nuclear Medicine), US (Ultrasound), MR (Magnetic Resonance) and DR (Digital Radiology). The following data types are standardized over the transmission interface media: image data, digitized voice, header data, raw data, text reports, graphics and others. This paper consists of text supporting the illustrated protocol data flow. Each layer is treated individually. Particular emphasis is given to the Data Link layer (frames) and the Transport layer (packets). The discussion utilizes a finite-state sequential machine model for the protocol layers.

  7. Filtering Chromatic Aberration for Wide Acceptance Angle Electrostatic Lenses II--Experimental Evaluation and Software-Based Imaging Energy Analyzer.

    PubMed

    Fazekas, Ádám; Daimon, Hiroshi; Matsuda, Hiroyuki; Tóth, László

    2016-03-01

    Here, the experimental results of the method for filtering the effect of chromatic aberration in a wide-acceptance-angle electrostatic lens-based system are described. This method can eliminate the effect of chromatic aberration from the images of a measured spectral image sequence by determining and removing the contribution of higher and lower kinetic energy electrons to each energy image, which leads to a significant improvement of image and spectral quality. The method is based on the numerical solution of a large system of linear equations and is equivalent to a multivariate, strongly nonlinear deconvolution method. A matrix whose elements describe the strongly nonlinear chromatic aberration-related transmission function of the lens system acts on the vector of the ordered pixels of the distortion-free spectral image sequence and produces the vector of the ordered pixels of the measured spectral image sequence. Since the method can be applied not only to 2D real- and k-space diffraction images but also along the third dimension of the image sequence in the 3D parameter space, that is, along the optical (energy) axis, it functions as a software-based imaging energy analyzer (SBIEA). It can also be applied, in the case of light or other types of optics, to different optical aberrations and distortions. In the case of electron optics, the SBIEA method makes spectral imaging possible without the application of any other energy filter. It is notable that this method also significantly reduces the disturbing background in the reflection electron energy loss spectra investigated here. It eliminates instrumental effects and makes it possible to measure the real physical processes better. PMID:26863662
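
    The core numerical step described above, recovering the distortion-free spectral image sequence from the measured one through a large linear system, can be illustrated with a generic least-squares solve. The transmission matrix below is a random, well-conditioned stand-in for the lens system's actual aberration model.

      import numpy as np

      # x: ordered pixels of the distortion-free spectral image sequence (unknown)
      # b: ordered pixels of the measured sequence; A: transmission matrix of the lens system
      rng = np.random.default_rng(0)
      n = 200                                    # toy problem size; the real system is far larger
      A = np.eye(n) + 0.05 * rng.random((n, n))  # stand-in for the aberration-related transmission
      x_true = rng.random(n)
      b = A @ x_true

      x_est, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares solution of A x = b
      print(np.max(np.abs(x_est - x_true)))           # ~0 for this well-conditioned toy case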

  8. Novel development tool for software pipeline optimization for VLIW-DSPs used in real-time image processing

    NASA Astrophysics Data System (ADS)

    Fuertler, Johannes; Mayer, Konrad J.; Krattenthaler, Werner; Bajla, Ivan

    2003-04-01

    Although the hardware platform is often seen as the most important element of real-time imaging systems, software optimization can also provide remarkable reduction of overall computational costs. The recommended code development flow for digital signal processors based on the TMS320C6000(TM) architecture usually involves three phases: development of C code, refinement of C code, and programming linear assembly code. Each step requires a different level of knowledge of processor internals. The developer is not directly involved in the automatic scheduling process. In some cases, however, this may result in unacceptable code performance. A better solution can be achieved by scheduling the assembly code by hand. Unfortunately, scheduling of software pipelines by hand not only requires expert skills but is also time consuming, and moreover, prone to errors. To overcome these drawbacks we have designed an innovative development tool - the Software Pipeline Optimization Tool (SPOT(TM)). The SPOT is based on visualization of the scheduled assembly code by a two-dimensional interactive schedule editor, which is equipped with feedback mechanisms deduced from analysis of data dependencies and resource allocation conflicts. The paper addresses optimization techniques available by the application of the SPOT. Furthermore, the benefit of the SPOT is documented by more than 20 optimized image processing algorithms.

  9. NeuroGam Software Analysis in Epilepsy Diagnosis Using 99mTc-ECD Brain Perfusion SPECT Imaging.

    PubMed

    Fu, Peng; Zhang, Fang; Gao, Jianqing; Jing, Jianmin; Pan, Liping; Li, Dongxue; Wei, Lingge

    2015-09-20

    BACKGROUND The aim of this study was to explore the value of NeuroGam software in the diagnosis of epilepsy by 99mTc-ethyl cysteinate dimer (ECD) brain perfusion imaging. MATERIAL AND METHODS NeuroGam was used to analyze 52 cases of clinically proven epilepsy imaged with 99mTc-ECD brain SPECT. The results were compared with EEG and MRI, and the positive rates and localization of epileptic foci were analyzed. RESULTS NeuroGam analysis showed that 42 of 52 epilepsy cases were abnormal. 99mTc-ECD brain imaging thus revealed a positive rate of 80.8% (42/52), with 36 out of 42 patients (85.7%) clearly showing an abnormal area. Both were higher than those of brain perfusion SPECT, with a consistency of 64.5% (34/52) between the 2 methods. Decreased regional cerebral blood flow (rCBF) was observed in the frontal (18), temporal (20), and parietal lobes (2). Decreased rCBF was seen in the frontal and temporal lobes in 4 out of 36 patients, and in the temporal and parietal lobes in 2 out of 36 patients. NeuroGam further showed that the abnormal areas were located in different functional areas of the brain. EEG abnormalities were detected in 29 out of 52 patients (55.8%), with 16 cases (55.2%) clearly showing an abnormal area. MRI abnormalities were detected in 17 out of 43 cases (39.5%), including 9 cases (52.9%) clearly showing an abnormal area. The consistency of NeuroGam software analysis with EEG and with MRI was 48.1% (25/52) and 34.9% (15/43), respectively. CONCLUSIONS NeuroGam software analysis offers higher sensitivity in detecting epilepsy than EEG or MRI. It is a powerful tool in 99mTc-ECD brain imaging.

  10. Development of image and information management system for Korean standard brain

    NASA Astrophysics Data System (ADS)

    Chung, Soon Cheol; Choi, Do Young; Tack, Gye Rae; Sohn, Jin Hun

    2004-04-01

    The purpose of this study is to establish a reference for image acquisition toward a standard brain for the diverse Korean population, and to develop a database management system that stores and manages the acquired brain images and the subjects' personal information. The 3D MP-RAGE (Magnetization Prepared Rapid Gradient Echo) technique, which offers excellent Signal to Noise Ratio (SNR) and Contrast to Noise Ratio (CNR) while reducing image acquisition time, was selected for anatomical image acquisition, and parameter values were obtained for optimal image acquisition. Using these standards, image data of 121 young adults (in their early twenties) were obtained and stored in the system. The system was designed to obtain, save, and manage not only anatomical image data but also the subjects' basic demographic factors, medical history, handedness inventory, state-trait anxiety inventory, A-type personality inventory, self-assessment depression inventory, mini-mental state examination, intelligence test, and personality test results collected via a survey questionnaire. Additionally, the system provides functions for saving, inserting, deleting, searching, and printing image data and subjects' personal information, with convenient access to these data and automatic connection setup via ODBC. This newly developed system may make a major contribution to the completion of a standard brain for the diverse Korean population, since it can store and manage their image data and personal information.

  11. SU-E-J-104: Evaluation of Accuracy for Various Deformable Image Registrations with Virtual Deformation QA Software

    SciTech Connect

    Han, S; Kim, K; Kim, M; Jung, H; Ji, Y; Choi, S; Park, S

    2015-06-15

    Purpose: The accuracy of deformable image registration (DIR) has a significant dosimetric impact in radiation treatment planning. We evaluated the accuracy of various DIR algorithms using virtual deformation QA software (ImSimQA, Oncology System Limited, UK). Methods: A reference image (Iref) and volume (Vref) were first generated with the ImSimQA software. Iref was deformed by axial movement of a deformation point, and Vref was deformed in two ways: deformation 1 increased Vref (relaxation) and deformation 2 decreased Vref (contraction). The deformed image (Idef) and volume (Vdef) were then inversely deformed back toward Iref and Vref using the DIR algorithms, yielding a registered image (Iid) and volume (Vid). The DIR algorithms were the optical flow (HS, IOF) and demons (MD, FD) implementations of DIRART. Image similarity between Iref and Iid was calculated with Normalized Mutual Information (NMI) and Normalized Cross Correlation (NCC); the Dice Similarity Coefficient (DSC) was used to evaluate volume similarity. Results: When the moving distance of the deformation point was 4 mm, NMI was above 1.81 and NCC was above 0.99 for all DIR algorithms. As the degree of deformation increased, image similarity decreased. When Vref increased or decreased by about 12%, the difference between Vref and Vid was within ±5% regardless of the type of deformation. DSC was above 0.95 in deformation 1 except for the MD algorithm; in deformation 2, DSC was above 0.95 for all DIR algorithms. Conclusion: Idef and Vdef were not completely restored to Iref and Vref, and the accuracy of the DIR algorithms differed depending on the degree of deformation. Hence, the performance of DIR algorithms should be verified for the desired applications.
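    The evaluation relies on standard similarity metrics. The sketch below, a minimal illustration rather than the DIRART or ImSimQA code, computes NCC between two images and DSC between two binary volumes with NumPy; the array names and test data are placeholders.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between two images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def dsc(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary volumes."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return float(2.0 * inter / (mask_a.sum() + mask_b.sum()))

# Placeholder data standing in for Iref/Iid and Vref/Vid.
i_ref = np.random.rand(128, 128)
i_id = i_ref + 0.05 * np.random.rand(128, 128)
v_ref = np.zeros((64, 64, 64), bool)
v_ref[16:48, 16:48, 16:48] = True
v_id = np.roll(v_ref, 2, axis=0)

print("NCC:", ncc(i_ref, i_id), "DSC:", dsc(v_ref, v_id))
```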

  12. New solutions for standardization, monitoring and quality management of fluorescence-based imaging systems (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Royon, Arnaud; Papon, Gautier

    2016-03-01

    Fluorescence microscopes have become ubiquitous in life sciences laboratories, including those focused on pharmaceuticals, diagnosis, and forensics. For the past few years, the need for both performance guarantees and quantifiable results has driven development in this area. However, the lack of appropriate standards and reference materials makes it difficult or impossible to compare the results of two fluorescence microscopes, or to measure performance fluctuations of one microscope over time. Therefore, the operation of fluorescence microscopes is not monitored as often as their use warrants - an issue that is recognized by both system manufacturers and national metrology institutes. We have developed a new process that enables the etching of long-term-stable fluorescent patterns with sub-micrometer sizes in three dimensions inside glass. In this paper we present, based on this new process, a fluorescent multi-dimensional ruler and dedicated software suitable for monitoring and quality management of fluorescence-based imaging systems (wide-field, confocal, multiphoton, and high-content machines). In addition to fluorescence, the same patterns exhibit bright- and dark-field contrast, DIC, and phase contrast, which makes them relevant for monitoring these types of microscopes as well. Non-exhaustively, this new solution enables the measurement of: the stage repositioning accuracy; the illumination and detection homogeneities; the field flatness; the detectors' characteristics; the lateral and axial spatial resolutions; and the spectral response (spectrum, intensity and lifetime) of the system. Thanks to the stability of the patterns, microscope performance assessment can be carried out on a daily basis as well as over the long term.

  13. Mission planning for Shuttle Imaging Radar-C (SIR-C) with a real-time interactive planning software

    NASA Technical Reports Server (NTRS)

    Potts, Su K.

    1993-01-01

    The Shuttle Imaging Radar-C (SIR-C) mission will operate from the payload bay of the space shuttle for 8 days, gathering Synthetic Aperture Radar (SAR) data over specific sites on the Earth. The short duration of the mission and the requirement for real-time planning offer challenges in mission planning and in the design of the Planning and Analysis Subsystem (PAS). The PAS generates shuttle ephemerides and mission planning data and provides an interactive real-time tool for quick mission replanning. It offers a multi-user, multiprocessing environment and can keep multiple versions of the mission timeline data while maintaining data integrity and security. Its flexible design allows a single software system to provide different menu options based on the user's operational function, and makes it easy to tailor the software for other Earth-orbiting missions.

  14. INCITS W1.1 development update: appearance-based image quality standards for printers

    NASA Astrophysics Data System (ADS)

    Zeise, Eric K.; Rasmussen, D. René; Ng, Yee S.; Dalal, Edul; McCarthy, Ann; Williams, Don

    2008-01-01

    In September 2000, INCITS W1 (the U.S. representative of ISO/IEC JTC1/SC28, the standardization committee for office equipment) was chartered to develop an appearance-based image quality standard. (1),(2) The resulting W1.1 project is based on a proposal (3) that perceived image quality can be described by a small set of broad-based attributes. There are currently six ad hoc teams, each working towards the development of standards for evaluation of perceptual image quality of color printers for one or more of these image quality attributes. This paper summarizes the work in progress of the teams addressing the attributes of Macro-Uniformity, Colour Rendition, Gloss & Gloss Uniformity, Text & Line Quality and Effective Resolution.

  15. Histostitcher™: An informatics software platform for reconstructing whole-mount prostate histology using the extensible imaging platform framework

    PubMed Central

    Toth, Robert J.; Shih, Natalie; Tomaszewski, John E.; Feldman, Michael D.; Kutter, Oliver; Yu, Daphne N.; Paulus, John C.; Paladini, Ginaluca; Madabhushi, Anant

    2014-01-01

    Context: Co-registration of ex-vivo histologic images with pre-operative imaging (e.g., magnetic resonance imaging [MRI]) can be used to align and map disease extent, and to identify quantitative imaging signatures. However, ex-vivo histology images are frequently sectioned into quarters prior to imaging. Aims: This work presents Histostitcher™, a software system designed to create a pseudo whole-mount histology section (WMHS) from a stitching of four individual histology quadrant images. Materials and Methods: Histostitcher™ uses user-identified fiducials on the boundary of two quadrants to stitch those quadrants. An original prototype of Histostitcher™ was designed using the Matlab programming language. However, clinical use was limited due to slow performance, computer memory constraints and an inefficient workflow. The latest version was created using the extensible imaging platform (XIP™) architecture in the C++ programming language. A fast, graphics processing unit renderer was designed to intelligently cache the visible parts of the histology quadrants, and the workflow was significantly improved to allow modifying existing fiducials, fast transformations of the quadrants and saving/loading sessions. Results: The new stitching platform yielded a significantly more efficient workflow and reconstruction than the previous prototype. It was tested on a traditional desktop computer, a Windows 8 Surface Pro tablet device and a 27-inch multi-touch display, with little performance difference between the different devices. Conclusions: Histostitcher™ is a fast, efficient framework for reconstructing pseudo WMHS from individually imaged quadrants. The highly modular XIP™ framework was used to develop an intuitive interface, and future work will entail mapping the disease extent from the pseudo WMHS onto pre-operative MRI. PMID:24843820
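    Histostitcher™ stitches adjacent quadrants from user-identified fiducial pairs on their shared boundary. As a loose illustration of that idea, and not the XIP™ implementation, the sketch below estimates an affine transform from hypothetical fiducial coordinates by least squares and applies it with SciPy; the fiducial values and image shapes are made up for the example.

```python
import numpy as np
from scipy.ndimage import affine_transform

# Hypothetical fiducial pairs on the shared boundary of two quadrants:
# (row, col) in the moving quadrant and the corresponding (row, col) in the fixed frame.
moving = np.array([[10.0, 200.0], [250.0, 210.0], [480.0, 190.0], [300.0, 15.0]])
fixed  = np.array([[12.0, 705.0], [255.0, 712.0], [483.0, 695.0], [306.0, 520.0]])

# Solve fixed ~= [moving, 1] @ M for a 3x2 parameter matrix M (least squares).
design = np.hstack([moving, np.ones((len(moving), 1))])
M, *_ = np.linalg.lstsq(design, fixed, rcond=None)

A = M[:2, :].T          # 2x2 linear part mapping moving -> fixed coordinates
t = M[2, :]             # translation

# Warp the moving quadrant into the fixed quadrant's coordinate frame.
# affine_transform expects the inverse map (output -> input coordinates).
quadrant = np.random.rand(512, 512)          # placeholder histology quadrant
A_inv = np.linalg.inv(A)
stitched_piece = affine_transform(quadrant, A_inv, offset=-A_inv @ t,
                                  output_shape=(1024, 1024))
```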

  16. The MicroAnalysis Toolkit: X-ray Fluorescence Image Processing Software

    SciTech Connect

    Webb, S. M.

    2011-09-09

    The MicroAnalysis Toolkit is an analysis suite designed for the processing of x-ray fluorescence microprobe data. The program contains a wide variety of analysis tools, including image maps, correlation plots, simple image math, image filtering, multiple energy image fitting, semi-quantitative elemental analysis, x-ray fluorescence spectrum analysis, principal component analysis, and tomographic reconstructions. To be as widely useful as possible, data formats from many synchrotron sources can be read by the program, with more formats available by request. An overview of the most common features will be presented.

  17. Standardized quantitative measurements of wrist cartilage in healthy humans using 3T magnetic resonance imaging

    PubMed Central

    Zink, Jean-Vincent; Souteyrand, Philippe; Guis, Sandrine; Chagnaud, Christophe; Fur, Yann Le; Militianu, Daniela; Mattei, Jean-Pierre; Rozenbaum, Michael; Rosner, Itzhak; Guye, Maxime; Bernard, Monique; Bendahan, David

    2015-01-01

    AIM: To quantify the wrist cartilage cross-sectional area in humans from a 3D magnetic resonance imaging (MRI) dataset and to assess the corresponding reproducibility. METHODS: The study was conducted in 14 healthy volunteers (6 females and 8 males) between 30 and 58 years old and devoid of articular pain. Subjects were asked to lie down in the supine position with the right hand positioned above the pelvic region on top of a home-built rigid platform attached to the scanner bed. The wrist was wrapped with a flexible surface coil. MRI investigations were performed at 3T (Verio-Siemens) using volume interpolated breath hold examination (VIBE) and dual echo steady state (DESS) MRI sequences. Cartilage cross sectional area (CSA) was measured on a slice of interest selected from a 3D dataset of the entire carpus and metacarpal-phalangeal areas on the basis of anatomical criteria using conventional image processing radiology software. Cartilage cross-sectional areas between opposite bones in the carpal region were manually selected and quantified using a thresholding method. RESULTS: Cartilage CSA measurements performed on a selected predefined slice were 292.4 ± 39 mm2 using the VIBE sequence and slightly lower, 270.4 ± 50.6 mm2, with the DESS sequence. The inter (14.1%) and intra (2.4%) subject variability was similar for both MRI methods. The coefficients of variation computed for the repeated measurements were also comparable for the VIBE (2.4%) and the DESS (4.8%) sequences. The carpus length averaged over the group was 37.5 ± 2.8 mm with a 7.45% between-subjects coefficient of variation. Of note, wrist cartilage CSA measured with either the VIBE or the DESS sequences was linearly related to the carpal bone length. The variability between subjects was significantly reduced to 8.4% when the CSA was normalized with respect to the carpal bone length. CONCLUSION: The ratio between wrist cartilage CSA and carpal bone length is a highly reproducible standardized

  18. ImaSim, a software tool for basic education of medical x-ray imaging in radiotherapy and radiology

    NASA Astrophysics Data System (ADS)

    Landry, Guillaume; deBlois, François; Verhaegen, Frank

    2013-11-01

    Introduction: X-ray imaging is an important part of medicine and plays a crucial role in radiotherapy. Education in this field is mostly limited to textbook teaching due to equipment restrictions. A novel simulation tool, ImaSim, for teaching the fundamentals of the x-ray imaging process based on ray-tracing is presented in this work. ImaSim is used interactively via a graphical user interface (GUI). Materials and methods: The software package covers the main x-ray based medical modalities: planar kilovoltage (kV), planar (portal) megavoltage (MV), fan-beam computed tomography (CT) and cone-beam CT (CBCT) imaging. The user can modify the photon source, the object to be imaged and the imaging setup with three-dimensional editors. Objects are currently obtained by combining blocks with variable shapes. The imaging of three-dimensional voxelized geometries is not yet implemented, but can be added in a later release. The program follows a ray-tracing approach, ignoring photon scatter in its current implementation. Simulations of a phantom CT scan were generated in ImaSim and were compared to measured data in terms of CT number accuracy. Spatial variations in the photon fluence and mean energy from an x-ray tube caused by the heel effect were estimated from ImaSim and Monte Carlo simulations and compared. Results: In this paper we describe ImaSim and provide two examples of its capabilities. CT numbers were found to agree within 36 Hounsfield Units (HU) for bone, which corresponds to a 2% difference in attenuation coefficient. ImaSim reproduced the heel effect reasonably well when compared to Monte Carlo simulations. Discussion: An x-ray imaging simulation tool is made available for teaching and research purposes. ImaSim provides a means to facilitate the teaching of medical x-ray imaging.
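    A scatter-free ray-tracing approach of this kind amounts to accumulating Beer-Lambert attenuation along each ray. The sketch below is a generic illustration of that principle, not ImaSim code; the voxel phantom, attenuation coefficients, and parallel-beam geometry are invented for the example.

```python
import numpy as np

# Simple voxel phantom: water-like slab with a denser insert (mu in 1/cm).
mu = np.full((100, 100), 0.02)          # background material
mu[40:60, 45:55] = 0.05                 # bone-like insert
voxel_cm = 0.1                          # path length through one voxel

# Parallel-beam, scatter-free projection: each detector pixel sees one ray
# travelling straight down a column of the phantom (Beer-Lambert law).
i0 = 1.0e5                              # incident photons per ray
path_integral = mu.sum(axis=0) * voxel_cm        # integral of mu dl per column
detector_signal = i0 * np.exp(-path_integral)    # transmitted intensity

# A log transform recovers the line integrals used for CT reconstruction.
projection = -np.log(detector_signal / i0)
```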

  19. Analysis of a marine phototrophic biofilm by confocal laser scanning microscopy using the new image quantification software PHLIP

    PubMed Central

    Mueller, Lukas N; de Brouwer, Jody FC; Almeida, Jonas S; Stal, Lucas J; Xavier, João B

    2006-01-01

    Background: Confocal laser scanning microscopy (CLSM) is the method of choice to study interfacial biofilms, acquiring time-resolved three-dimensional data of the biofilm structure. CLSM can be used in a multi-channel mode where the different channels map individual biofilm components. This communication presents a novel image quantification tool, PHLIP, for the quantitative analysis of large amounts of multi-channel CLSM data in an automated way. PHLIP can be freely downloaded from Results: PHLIP is an open source public license Matlab toolbox that includes functions for CLSM imaging data handling and ten image analysis operations describing various aspects of biofilm morphology. The use of PHLIP is here demonstrated by a study of the development of a natural marine phototrophic biofilm. It is shown how the examination of the individual biofilm components using the multi-channel capability of PHLIP allowed the description of the dynamic spatial and temporal separation of diatoms, bacteria and organic and inorganic matter during the shift from a bacteria-dominated to a diatom-dominated phototrophic biofilm. Reflection images and weight measurements complementing the PHLIP analyses suggest that a large part of the biofilm mass consisted of inorganic mineral material. Conclusion: The presented case study reveals new insight into the temporal development of a phototrophic biofilm, where multi-channel imaging allowed the dynamics of the individual biofilm components to be monitored in parallel over time. This application demonstrates the power of multi-channel CLSM biofilm image analysis and the value of PHLIP to the scientific community as a flexible and extendable platform for automated image processing. PMID:16412253

  20. PCID and ASPIRE 2.0 - The Next Generation of AMOS Image Processing Software

    NASA Astrophysics Data System (ADS)

    Matson, C.; Soo Hoo, T.; Murphy, M.; Calef, B.; Beckner, C.; You, S.

    One of the missions of the Air Force Maui Optical and Supercomputing (AMOS) site is to generate high-resolution images of space objects using the Air Force telescopes located on Haleakala. Because atmospheric turbulence greatly reduces the resolution of space object images collected with ground-based telescopes, methods for overcoming atmospheric blurring are necessary. One such method is the use of adaptive optics systems to measure and compensate for atmospheric blurring in real time. A second method is to use image restoration algorithms on one or more short-exposure images of the space object under consideration. At AMOS, both methods are used routinely. In the case of adaptive optics, rarely can all atmospheric turbulence effects be removed from the imagery, so image restoration algorithms are useful even for adaptive-optics-corrected images. Historically, the bispectrum algorithm has been the primary image restoration algorithm used at AMOS. It has the advantages of being extremely fast (processing times of less than one second) and insensitive to atmospheric phase distortions. In addition, multi-frame blind deconvolution (MFBD) algorithms have also been used for image restoration. It has been observed empirically and with the use of computer simulation studies that MFBD algorithms produce higher-resolution image restorations than does the bispectrum algorithm. MFBD algorithms also do not need separate measurements of a star in order to work. However, in the past, MFBD algorithms have been factors of one hundred or more slower than the bispectrum algorithm, limiting their use to non-time-critical image restorations. Recently, with the financial support of AMOS and the High-Performance Computing Modernization Office, an MFBD algorithm called Physically-Constrained Iterative Deconvolution (PCID) has been efficiently parallelized and is able to produce image restorations in only a few seconds. In addition, with the financial support of AFOSR, it has been shown

  1. Software-based mitigation of image degradation due to atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Huebner, Claudia S.; Scheifling, Corinne

    2010-10-01

    Motion-Compensated Averaging (MCA) with blind deconvolution has proven successful in mitigating turbulence effects such as image dancing and blurring. In this paper an image quality control based on the "Lucky Imaging" principle is combined with the MCA procedure, weighting good frames more heavily than bad ones and skipping a given percentage of extremely degraded frames entirely. To account for local isoplanatism, where image dancing causes local displacements between consecutive frames rather than only global shifts, a locally operating MCA variant with block matching, proposed in earlier work, is employed. In order to reduce the loss of detail due to normal averaging, various combinations of temporal mode, median and mean are tested as the reference image. The respective restoration results obtained by means of a weighted blind deconvolution algorithm are presented and evaluated.
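    The quality control described weights sharp frames more heavily and discards the worst ones before averaging. The sketch below is a generic, simplified illustration of such frame weighting, not the authors' MCA code; the sharpness metric, skip fraction, and frame stack are assumptions.

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Crude frame-quality metric: variance of the image gradient."""
    gy, gx = np.gradient(frame.astype(float))
    return float(np.var(gx) + np.var(gy))

def lucky_average(frames: np.ndarray, skip_fraction: float = 0.2) -> np.ndarray:
    """Drop the worst frames, then average the rest weighted by sharpness."""
    scores = np.array([sharpness(f) for f in frames])
    keep = scores >= np.quantile(scores, skip_fraction)   # skip degraded frames
    kept, w = frames[keep], scores[keep]
    w = w / w.sum()
    return np.tensordot(w, kept, axes=1)                  # weighted mean frame

# Placeholder stack of turbulence-degraded frames.
stack = np.random.rand(50, 256, 256)
reference = lucky_average(stack)
```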

  2. Useful diagnostic biometabolic data obtained by PET/CT and MR fusion imaging using open source software.

    PubMed

    Antonica, Filippo; Asabella, Artor Niccoli; Ferrari, Cristina; Rubini, Domenico; Notaristefano, Antonio; Nicoletti, Adriano; Altini, Corinna; Merenda, Nunzio; Mossa, Emilio; Guarini, Attilio; Rubini, Giuseppe

    2014-01-01

    In the last decade numerous attempts have been made to co-register and integrate different imaging data. As with PET/CT, the integration of PET with MR has attracted great interest. PET/MR scanners have recently been tested on various regional or systemic pathologies. Unfortunately, PET/MR scanners are expensive and their diagnostic protocols are still under investigation. Nuclear medicine imaging highlights functional and biometabolic information but provides poor anatomic detail. The aim of this study is to integrate MR and PET data to produce regional or whole-body fused images acquired from different scanners, even on different days. We propose an offline method to fuse PET with MR data using open-source software that is inexpensive, reproducible and able to exchange data over the network. We also evaluate the global quality, alignment quality, and diagnostic confidence of the fused PET-MR images. We selected PET/CT studies performed in our Nuclear Medicine unit and MR studies provided by patients on DICOM CD media or received over the network. We used the open-source version of OsiriX 5.7. We aligned the CT slices with the first MR slice, pointed and marked for co-registration using the MR-T1 sequence and CT as reference, and fused the result with PET to produce a PET-MR image. A total of 100 PET/CT studies were fused with the following MR studies: 20 head, 15 thorax, 24 abdomen, 31 pelvis, 10 whole body. An interval of no more than 15 days between PET and MR was the inclusion criterion. The PET/CT, MR and fused studies were evaluated by two experienced radiologists and two experienced nuclear medicine physicians. Each evaluator completed a five-point evaluation scoring scheme based on image quality, image artifacts, segmentation errors, fusion misalignment and diagnostic confidence. Our fusion method showed the best results for the head, thorax and pelvic regions in terms of global quality, alignment quality and diagnostic confidence, while for the abdomen and pelvis alignment quality and global quality resulted

  3. User's Guide for MapIMG 2: Map Image Re-projection Software Package

    USGS Publications Warehouse

    Finn, Michael P.; Trent, Jason R.; Buehler, Robert A.

    2006-01-01

    BACKGROUND Scientists routinely accomplish small-scale geospatial modeling in the raster domain, using high-resolution datasets for large parts of continents and low-resolution to high-resolution datasets for the entire globe. Direct implementation of point-to-point transformation with appropriate functions yields the variety of projections available in commercial software packages, but implementation with data other than points requires specific adaptation of the transformation equations or prior preparation of the data to allow the transformation to succeed. It seems that some of these packages use the U.S. Geological Survey's (USGS) General Cartographic Transformation Package (GCTP) or similar point transformations without adaptation to the specific characteristics of raster data (Usery and others, 2003a). Usery and others (2003b) compiled and tabulated the accuracy of categorical areas in projected raster datasets of global extent. Based on the shortcomings identified in these studies, geographers and applications programmers at the USGS expanded and evolved a USGS software package, MapIMG, for raster map projection transformation (Finn and Trent, 2004). Daniel R. Steinwand of Science Applications International Corporation, National Center for Earth Resources Observation and Science, originally developed MapIMG for the USGS, basing it on GCTP. Through previous and continuing efforts at the USGS' National Geospatial Technical Operations Center, this program has been transformed from an application based on command line input into a software package based on a graphical user interface for Windows, Linux, and other UNIX machines.

  4. Integration of bio- and geoscience data with the ODM2 standards and software ecosystem for the CZOData and BiG CZ Data projects

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Mayorga, E.; Horsburgh, J. S.; Lehnert, K. A.; Zaslavsky, I.

    2015-12-01

    We have developed a family of solutions to the challenges of integrating diverse data from biological and geological (BiG) disciplines for Critical Zone (CZ) science. These standards and software solutions have been developed around the new Observations Data Model version 2.0 (ODM2, http://ODM2.org), which was designed as a profile of the Open Geospatial Consortium's (OGC) Observations and Measurements (O&M) standard. The ODM2 standards and software ecosystem has at its core an information model that balances specificity with flexibility to powerfully and equally serve the needs of multiple dataset types, from multivariate sensor-generated time series to geochemical measurements of specimen hierarchies to multi-dimensional spectral data to biodiversity observations. ODM2 has been adopted as the information model guiding the next generation of cyberinfrastructure development for the Interdisciplinary Earth Data Alliance (http://www.iedadata.org/) and the CUAHSI Water Data Center (https://www.cuahsi.org/wdc). Here we present several components of the ODM2 standards and software ecosystem that were developed specifically to help CZ scientists and their data managers share and manage data through the national Critical Zone Observatory data integration project (CZOData, http://criticalzone.org/national/data/) and the bio integration with geo for critical zone science data project (BiG CZ Data, http://bigcz.org/). These include the ODM2 Controlled Vocabulary system (http://vocabulary.odm2.org), the YAML Observation Data Archive & exchange (YODA) File Format (https://github.com/ODM2/YODA-File) and the BiG CZ Toolbox, which will combine easy-to-install ODM2 databases (https://github.com/ODM2/ODM2) with a variety of graphical software packages for data management, such as ODMTools (https://github.com/ODM2/ODMToolsPython) and the ODM2 Streaming Data Loader (https://github.com/ODM2/ODM2StreamingDataLoader).

  5. JJ1017 image examination order codes: standardized codes supplementary to DICOM for imaging modality, region, and direction

    NASA Astrophysics Data System (ADS)

    Kimura, Michio; Kuranishi, Makoto; Sukenobu, Yoshiharu; Watanabe, Hiroki; Nakajima, Takashi; Morimura, Shinya; Kabata, Shun

    2002-05-01

    The DICOM standard includes non-image information such as image study ordering data and performed procedure data, which are used for sharing information between HIS/RIS/PACS/modalities and are essential for IHE. In order to bring such parts of the DICOM standard into force in Japan, a joint committee of JIRA and JAHIS (vendor associations) established the JJ1017 management guideline. It specifies, for example, which items are legally required in Japan while remaining optional in the DICOM standard. The question then arises of which codes should be used for examination type, region, and direction. Our investigation revealed that the DICOM tables do not include items that are sufficiently detailed for use in Japan. This is because radiology departments (radiologists) in the US exercise greater discretion in image examination than in Japan, and the contents of orders from requesting physicians do not include the extra details used in Japan. Therefore, we have generated JJ1017 codes for these three items based on the JJ1017 guidelines. The stem part of the JJ1017 code partially employs the DICOM codes in order to remain in line with the DICOM standard. JJ1017 codes are to be included not only in the IHE-J specifications but also in Ministry recommendations for health data exchange.

  6. Learning Outcome Testing Program: Standardized Classroom Testing in West Virginia through Item Banking, Test Generation, and Curricular Management Software.

    ERIC Educational Resources Information Center

    Willis, John A.

    1990-01-01

    The Learning Outcome Testing Program of the West Virginia Department of Education is designed to provide public school teachers/administrators with test questions matching learning outcomes. The approach, software selection, results of pilot tests with teachers in 13 sites, and development of test items for item banks are described. (SLD)

  7. TGS_FIT: Image reconstruction software for quantitative, low-resolution tomographic assays

    SciTech Connect

    Estep, R J

    1993-01-01

    We developed the computer program TGS_FIT to aid in researching the tomographic gamma scanner method of nondestructive assay. This software, written in the C programming language, implements a full Beer's Law attenuation correction in reconstructing low-resolution emission tomograms. The attenuation coefficients for the corrections are obtained by reconstructing a transmission tomogram of the same resolution. The command-driven interface, combined with (crude) simulation capabilities and command file control, allows design studies to be performed in a semi-automated manner.

  8. Application of Technical Measures and Software in Constructing Photorealistic 3D Models of Historical Building Using Ground-Based and Aerial (UAV) Digital Images

    NASA Astrophysics Data System (ADS)

    Zarnowski, Aleksander; Banaszek, Anna; Banaszek, Sebastian

    2015-12-01

    Preparing digital documentation of historical buildings is a form of protecting cultural heritage. Recently there have been several intensive studies using non-metric digital images to construct realistic 3D models of historical buildings. Increasingly often, non-metric digital images are obtained with unmanned aerial vehicles (UAV). The technologies and methods of UAV flights are quite different from traditional photogrammetric approaches, and the lack of technical guidelines for using drones inhibits the implementation of new methods of data acquisition. This paper presents the results of experiments in the use of digital images in the construction of a photo-realistic 3D model of a historical building (Raphaelsohns' Sawmill in Olsztyn). The aim of the first stage of the study was to determine the meteorological and technical conditions for the acquisition of aerial and ground-based photographs. At the next stage, the technology of 3D modelling was developed using only ground-based or only aerial non-metric digital images. At the last stage of the study, an experiment was conducted to assess the possibility of 3D modelling with the combined use of aerial (UAV) and ground-based digital photographs in terms of labour intensity and precision of development. Data integration and automatic photo-realistic 3D construction of the models was done with the Pix4Dmapper and Agisoft PhotoScan software. Analyses have shown that when certain parameters established in the experiment are maintained, the process of developing stock-taking documentation for a historical building moves from analogue to digital technology standards at considerably reduced cost.

  9. Software Update.

    ERIC Educational Resources Information Center

    Currents, 2000

    2000-01-01

    A chart of 40 alumni-development database systems provides information on vendor/Web site, address, contact/phone, software name, price range, minimum suggested workstation/suggested server, standard reports/reporting tools, minimum/maximum record capacity, and number of installed sites/client type. (DB)

  10. Seismic reflection imaging of underground cavities using open-source software

    SciTech Connect

    Mellors, R J

    2011-12-20

    The Comprehensive Nuclear Test Ban Treaty (CTBT) includes provisions for an on-site inspection (OSI), which allows the use of specific techniques to detect underground anomalies including cavities and rubble zones. One permitted technique is active seismic surveys such as seismic refraction or reflection. The purpose of this report is to conduct some simple modeling to evaluate the potential use of seismic reflection in detecting cavities and to test the use of open-source software in modeling possible scenarios. It should be noted that OSI inspections are conducted under specific constraints regarding duration and logistics. These constraints are likely to significantly impact active seismic surveying, as a seismic survey typically requires considerable equipment, effort, and expertise. For the purposes of this study, which is a first-order feasibility study, these issues will not be considered. This report provides a brief description of the seismic reflection method along with some commonly used software packages. This is followed by an outline of a simple processing stream based on a synthetic model, along with results from a set of models representing underground cavities. A set of scripts used to generate the models are presented in an appendix. We do not consider detection of underground facilities in this work and the geologic setting used in these tests is an extremely simple one.

  11. Familiarity effects in the construction of facial-composite images using modern software systems.

    PubMed

    Frowd, Charlie D; Skelton, Faye C; Butt, Neelam; Hassan, Amal; Fields, Stephen; Hancock, Peter J B

    2011-12-01

    We investigate the effect of target familiarity on the construction of facial composites, as used by law enforcement to locate criminal suspects. Two popular software construction methods were investigated. Participants were shown a target face that was either familiar or unfamiliar to them and constructed a composite of it from memory using a typical 'feature' system, involving selection of individual facial features, or one of the newer 'holistic' types, involving repeated selection and breeding from arrays of whole faces. This study found that composites constructed of a familiar face were named more successfully than composites of an unfamiliar face; also, naming of composites of internal and external features was equivalent for construction of unfamiliar targets, but internal features were better named than external features for familiar targets. These findings applied to both systems, although a benefit emerged for the holistic type due to more accurate construction of internal features and evidence for a whole-face advantage. STATEMENT OF RELEVANCE: This work is of relevance to practitioners who construct facial composites with witnesses to and victims of crime, as well as to software designers seeking to improve the effectiveness of their composite systems.

  12. Integration of instrumentation and processing software of a laser speckle contrast imaging system

    NASA Astrophysics Data System (ADS)

    Carrick, Jacob J.

    Laser speckle contrast imaging (LSCI) has the potential to be a powerful tool in medicine, but more research in the field is required so it can be used properly. To help in the progression of Michigan Tech's research in the field, a graphical user interface (GUI) was designed in Matlab to control the instrumentation of the experiments as well as process the raw speckle images into contrast images while they are being acquired. The design of the system was successful and is currently being used by Michigan Tech's Biomedical Engineering department. This thesis describes the development of the LSCI GUI as well as offering a full introduction into the history, theory and applications of LSCI.
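    Laser speckle contrast images are conventionally derived from raw speckle frames by taking the ratio of the local standard deviation to the local mean intensity over a small sliding window. The sketch below illustrates that standard computation in NumPy/SciPy as a generic example, not the thesis' Matlab GUI; the window size and input frame are placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw: np.ndarray, window: int = 7) -> np.ndarray:
    """Local speckle contrast K = sigma / mean over a sliding window."""
    raw = raw.astype(float)
    mean = uniform_filter(raw, size=window)
    mean_sq = uniform_filter(raw * raw, size=window)
    var = np.clip(mean_sq - mean * mean, 0.0, None)   # guard against round-off
    return np.sqrt(var) / np.maximum(mean, 1e-12)

# Placeholder raw speckle frame.
frame = np.random.poisson(100, size=(512, 512)).astype(float)
contrast_image = speckle_contrast(frame)
```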

  13. Automated image mosaics by non-automated light microscopes: the MicroMos software tool.

    PubMed

    Piccinini, F; Bevilacqua, A; Lucarelli, E

    2013-12-01

    Light widefield microscopes and digital imaging are the basis for most of the analyses performed in every biological laboratory. In particular, the microscope's user is typically interested in acquiring highly detailed images for analysing the observed cells and tissues, while keeping the images representative of a wide area to obtain reliable statistics. Owing to the finite size of the camera's field of view, the microscopist has to choose between a higher magnification factor and the extension of the observed area. To overcome this trade-off, mosaicing techniques have been developed in the past decades to increase the camera's field of view by stitching together multiple images. Nevertheless, these approaches typically work in batch mode and rely on motorized microscopes, or alternatively are conceived only to provide visually pleasing mosaics not suitable for quantitative analyses. This work presents a tool for building mosaics of images acquired with non-automated light microscopes. The proposed method is based on visual information only, and the mosaics are built by incrementally stitching couples of images, making the approach available also for online applications. Seams in the stitching regions as well as tonal inhomogeneities are corrected by compensating for the vignetting effect. In the experiments performed, we tested different registration approaches, confirming that the translation model is not always the best, despite the fact that the motion of the sample holder of the microscope is apparently translational and is typically considered as such. The method's implementation is freely distributed as an open source tool called MicroMos. Its usability makes building mosaics of microscope images at subpixel accuracy easier. Furthermore, optional parameters for building mosaics according to different strategies make MicroMos an easy and reliable tool to compare different registration approaches, warping models and tonal corrections.

  14. Leveraging Open Standards and Technologies to Search and Display Planetary Image Data

    NASA Astrophysics Data System (ADS)

    Rose, M.; Schauer, C.; Quinol, M.; Trimble, J.

    2011-12-01

    Mars and the Moon have both been visited by multiple NASA spacecraft. A large number of images and other data have been gathered by the spacecraft and are publicly available in NASA's Planetary Data System. Through a collaboration with Google, Inc., the User Centered Technologies group at NASA Ames Research Center has developed a tool for searching and browsing among images from multiple Mars and Moon missions. Development of this tool was facilitated by the use of several open technologies and standards. First, an open-source full-text search engine is used both to search place names on the target body and to find images matching a geographic region. Second, the published API of the Google Earth browser plugin is used to geolocate the images on a virtual globe and allow the user to navigate the globe to see related images. The structure of the application also employs standard protocols and services. The back-end is exposed as RESTful APIs, which could be reused by other client systems in the future. Further, the communication between the front- and back-end portions of the system utilizes open data standards, including XML and KML (Keyhole Markup Language), for the representation of textual and geographic data. The creation of the search index was facilitated by the reuse of existing, publicly available metadata, including the Gazetteer of Planetary Nomenclature from the USGS, available in KML format. The image metadata was reused from standards-compliant archives in the Planetary Data System. The system also supports collaboration with other tools by allowing export of search results in KML and the ability to display those results in the Google Earth desktop application. We will demonstrate the search and visualization capabilities of the system, with emphasis on how the system facilitates reuse of data and services through the adoption of open standards.

  15. Hardware, software, and scanning issues encountered during small animal imaging of photodynamic therapy in the athymic nude rat

    NASA Astrophysics Data System (ADS)

    Cross, Nathan; Sharma, Rahul; Varghai, Davood; Spring-Robinson, Chandra; Oleinick, Nancy L.; Muzic, Raymond F., Jr.; Dean, David

    2007-02-01

    Small animal imaging devices are now commonly used to study gene activation and model the effects of potential therapies. We are attempting to develop a protocol that non-invasively tracks the effect of Pc 4-mediated photodynamic therapy (PDT) in a human glioma model using structural image data from micro-CT and/or micro-MR scanning and functional data from 18F-fluorodeoxyglucose (18F-FDG) micro-PET imaging. Methods: Athymic nude rat U87-derived glioma was imaged by micro-PET and either micro-CT or micro-MR prior to Pc 4-PDT. Difficulty ensuring animal anesthesia and anatomic position during the micro-PET, micro-CT, and micro-MR scans required adaptation of the scanning bed hardware. Following Pc 4-PDT the animals were again scanned with 18F-FDG micro-PET, euthanized one day later, and their brains were explanted and prepared for H&E histology. Histology provided the gold standard for tumor location and necrosis. The tumor and surrounding brain functional and structural image data were then isolated and co-registered. Results: Surprisingly, both the non-PDT and PDT groups showed an increase in tumor functional activity when we expected this signal to disappear in the group receiving PDT. Co-registration of the functional and structural image data was done manually. Discussion: As expected, micro-MR imaging provided better structural discrimination of the brain tumor than micro-CT. Contrary to expectations, in our preliminary analysis 18F-FDG micro-PET imaging does not readily discriminate the U87 tumors that received Pc 4-PDT. We continue to investigate the utility of micro-PET and other methods of functional imaging to remotely detect the specificity and sensitivity of Pc 4-PDT in deeply placed tumors.

  16. Updated standards and processes for accreditation of echocardiographic laboratories from The European Association of Cardiovascular Imaging.

    PubMed

    Popescu, Bogdan A; Stefanidis, Alexandros; Nihoyannopoulos, Petros; Fox, Kevin F; Ray, Simon; Cardim, Nuno; Rigo, Fausto; Badano, Luigi P; Fraser, Alan G; Pinto, Fausto; Zamorano, Jose Luis; Habib, Gilbert; Maurer, Gerald; Lancellotti, Patrizio; Andrade, Maria Joao; Donal, Erwan; Edvardsen, Thor; Varga, Albert

    2014-07-01

    Standards for echocardiographic laboratories were proposed by the European Association of Echocardiography (now the European Association of Cardiovascular Imaging) 7 years ago in order to raise standards of practice and improve the quality of care. Criteria and requirements were published at that time for transthoracic, transoesophageal, and stress echocardiography. This paper reassesses and updates the quality standards to take account of experience and the technical developments of modern echocardiographic practice. It also discusses quality control, the incentives for laboratories to apply for accreditation, the reaccreditation criteria, and the current status and future prospects of the laboratory accreditation process.

  17. Sub-diffraction imaging on standard microscopes through photobleaching microscopy with non-linear processing.

    PubMed

    Munck, Sebastian; Miskiewicz, Katarzyna; Sannerud, Ragna; Menchon, Silvia A; Jose, Liya; Heintzmann, Rainer; Verstreken, Patrik; Annaert, Wim

    2012-05-01

    Visualization of organelles and molecules at nanometer resolution is revolutionizing the biological sciences. However, such technology is still of limited availability to many cell biologists. We present here a novel approach using photobleaching microscopy with non-linear processing (PiMP) for sub-diffraction imaging. Bleaching of fluorophores both within the single-molecule regime and beyond allows visualization of stochastic representations of sub-populations of fluorophores by imaging the same region over time. Our method is based on enhancing the probable positions of the fluorophores underlying the images. The random nature of the bleached fluorophores is assessed by calculating the deviation of the local measured bleached fluorescence intensity from the average bleach expectation given by the overall decay of intensity. Subtracting measured from estimated decay images yields differential images. Non-linear enhancement of maxima in these diffraction-limited differential images approximates the positions of the underlying structure. Summing many such processed differential images yields a super-resolution PiMP image. PiMP allows multi-color, three-dimensional sub-diffraction imaging of cells and tissues using common fluorophores and can be implemented on standard wide-field or confocal systems.
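    The abstract outlines the PiMP pipeline: estimate the expected bleaching decay, form differential images between estimated and measured frames, nonlinearly enhance their maxima, and sum. The sketch below is a very loose schematic of those steps as described, not the published implementation; the global decay model, enhancement exponent, and input stack are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pimp_like(stack: np.ndarray, power: float = 3.0) -> np.ndarray:
    """Schematic of the described pipeline on a (time, y, x) bleaching stack."""
    stack = stack.astype(float)
    totals = stack.sum(axis=(1, 2))       # overall intensity decay over time
    result = np.zeros(stack.shape[1:])
    for t in range(1, stack.shape[0]):
        expected = stack[t - 1] * (totals[t] / totals[t - 1])  # expected decay image
        diff = np.clip(expected - stack[t], 0.0, None)         # estimated minus measured
        diff = gaussian_filter(diff, 1.0)                      # mild smoothing
        result += diff ** power                                # enhance local maxima
    return result

# Placeholder bleaching time series.
movie = np.random.poisson(50, size=(200, 128, 128))
enhanced = pimp_like(movie)
```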

  18. Improved modified pressure imaging and software for egg micro-crack detection and egg quality grading

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Cracks in the egg shell increase a food safety risk. Especially, eggs with very fine, hairline cracks (micro-cracks) are often undetected during the grading process because they are almost impossible to detect visually. A modified pressure imaging system was developed to detect eggs with micro-crack...

  19. Space Station Software Recommendations

    NASA Technical Reports Server (NTRS)

    Voigt, S. (Editor)

    1985-01-01

    Four panels of invited experts and NASA representatives focused on the following topics: software management, software development environment, languages, and software standards. Each panel deliberated in private, held two open sessions with audience participation, and developed recommendations for the NASA Space Station Program. The major thrusts of the recommendations were as follows: (1) The software management plan should establish policies, responsibilities, and decision points for software acquisition; (2) NASA should furnish a uniform modular software support environment and require its use for all space station software acquired (or developed); (3) The language Ada should be selected for space station software, and NASA should begin to address issues related to the effective use of Ada; and (4) The space station software standards should be selected (based upon existing standards where possible), and an organization should be identified to promulgate and enforce them. These and related recommendations are described in detail in the conference proceedings.

  20. Cognitive Factors in the Study of Visual Images: Moving Image Recognition Standards.

    ERIC Educational Resources Information Center

    Metallinos, Nikos

    This paper argues that a completed study pertaining to the various factors involved in the proper recognition and aesthetic application of moving images (primarily television pictures) should consider: (1) the individual viewer's general self-awareness, knowledge, expertise, confidence, values, beliefs, and motivation; (2) the viewer's…

  1. Standard resolution spectral domain optical coherence tomography in clinical ophthalmic imaging

    NASA Astrophysics Data System (ADS)

    Szkulmowska, Anna; Cyganek, Marta; Targowski, Piotr; Kowalczyk, Andrzej; Kaluzny, Jakub J.; Wojtkowski, Maciej; Fujimoto, James G.

    2005-04-01

    In this study we show a clinical application of Spectral Optical Coherence Tomography (SOCT), which enables operation at 40 times higher speed than the commercial Stratus OCT instrument. Using the high-speed SOCT instrument it is possible to collect more information and increase the quality of reconstructed cross-sectional retinal images. Two generations of compact and portable clinical SOCT instruments were constructed in the Medical Physics Group at Nicolaus Copernicus University in Poland. The first SOCT instrument is a low-cost system operating with standard, 12-micrometer axial resolution; the second is a high-resolution system using a combined superluminescent diode light source, which enables imaging with 4.8-micrometer axial resolution. Both instruments have been operated in the Ophthalmology Clinic of Collegium Medicum in Bydgoszcz. During the study we examined 44 patients with different pathologies of the retina including Central Serous Chorioretinopathy (CSC), Choroidal Neovascularization (CNV), Pigment Epithelial Detachment (PED), Macular Hole, Epiretinal Membrane, Outer Retinal Infarction, etc. All these pathologies were first diagnosed by classical methods (such as fundus camera imaging and angiography) and then examined with the aid of the SOCT system. In this contribution we present examples of SOCT cross-sectional retinal imaging of pathologic eyes measured with standard resolution. We also compare cross-sectional images of pathology obtained by the standard and high-resolution systems.

  2. Computer graphics: Programmers's Hierarchical Interactive Graphics System (PHIGS). Language bindings (Part 3. Ada). Category: Software standard. Subcategory: Graphics. Final report

    SciTech Connect

    Benigni, D.R.

    1990-01-01

    The publication announces the adoption of the American National Standard Programmer's Hierarchical Interactive Graphics System, ANSI X3.144-1988, as a Federal Information Processing Standard (FIPS). The standard specifies the control and data interchange between an application program and its graphic support system. It provides a set of functions and programming language bindings (or toolbox package) for the definition, display and modification of two-dimensional (2D) or three-dimensional (3D) graphical data. In addition, the standard supports highly interactive processing and geometric articulation, multi-level or hierarchical graphics data, and rapid modification of both the graphics data and the relationships between the graphical data. The purpose of the standard is to promote portability of graphics application programs between different installations.

  3. Full-sun synchronic EUV and coronal hole mapping using multi-instrument images: Data and software made available

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Downs, C.; Linker, J.

    2015-12-01

    A method for the automatic generation of EUV and coronal hole (CH) maps using simultaneous multi-instrument imaging data is described. Synchronized EUV images from STEREO/EUVI A&B 195Å and SDO/AIA 193Å undergo preprocessing steps that include PSF deconvolution and the application of nonlinear, data-derived intensity corrections that account for center-to-limb variations (limb brightening) and inter-instrument intensity normalization. The latter two corrections are derived using a robust, systematic approach that takes advantage of unbiased long-term averages of data and serves to flatten the images by converting all pixel intensities to a unified disk-center equivalent. While the range of applications is broad, we demonstrate how this technique is very useful for CH detection, as it enables the use of a fast and simplified image segmentation algorithm to obtain consistent detection results. The multi-instrument nature of the technique also allows one to track evolving features consistently for longer periods than is possible with a single instrument, and preliminary results quantifying CH area and shape evolution are shown. Most importantly, several data and software products are made available to the community for use. For the ~4 year period of 6/10/2010 to 8/18/2014, we provide synchronic EUV and coronal hole maps at 6-hour cadence as well as the data-derived limb-brightening and inter-instrument correction factors that we applied. We also make available a ready-to-use MATLAB script, EUV2CHM, used to generate the maps, which loads EUV images, applies our preprocessing steps, and then uses our GPU-accelerated/CPU-multithreaded segmentation algorithm EZSEG to detect coronal holes.
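    The limb-brightening correction described amounts to flattening each image so that every pixel is converted to a disk-center-equivalent intensity, using curves derived from long-term data averages. The sketch below is a generic, simplified version of that idea, binned by μ (the cosine of the heliocentric angle); it is not the published EUV2CHM/EZSEG code, and the curve, image model, and geometry are assumptions.

```python
import numpy as np

def mu_map(shape, r_sun_pix):
    """mu = cos(heliocentric angle) for an on-disk pixel grid (NaN off-disk)."""
    yy, xx = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    rho = np.hypot(yy - cy, xx - cx) / r_sun_pix
    mu = np.sqrt(np.clip(1.0 - rho**2, 0.0, None))
    mu[rho > 1.0] = np.nan
    return mu

def limb_flatten(image, mu, curve_mu, curve_intensity):
    """Divide out a data-derived limb-brightening curve, normalized to disk center."""
    correction = np.interp(mu, curve_mu, curve_intensity) / curve_intensity[-1]
    return image / correction

# Placeholder long-term-average curve: typical intensity per mu bin (mu from limb to center).
curve_mu = np.linspace(0.05, 1.0, 20)
curve_intensity = 1.0 + 0.8 * (1.0 - curve_mu)     # brighter toward the limb
img = np.random.rand(512, 512) * 100.0
mu = mu_map(img.shape, r_sun_pix=240.0)
flattened = limb_flatten(img, mu, curve_mu, curve_intensity)
```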

  4. Gamma-H2AX foci counting: image processing and control software for high-content screening

    NASA Astrophysics Data System (ADS)

    Barber, P. R.; Locke, R. J.; Pierce, G. P.; Rothkamm, K.; Vojnovic, B.

    2007-02-01

    Phosphorylation of the chromatin protein H2AX (forming γH2AX) is implicated in the repair of DNA double strand breaks (DSBs); a large number of H2AX molecules become phosphorylated at the sites of DSBs. Fluorescent staining of the cell nuclei for γH2AX, via an antibody, visualises the formation of these foci, allowing the quantification of DNA DSBs and forming the basis for a sensitive biological dosimeter of ionising radiation. We describe an automated fluorescence microscopy system, including automated image processing, to count γH2AX foci. The image processing is performed by a Hough transform based algorithm, CHARM, which has wide applicability for the detection and analysis of cells and cell colonies. This algorithm and its applications for cell nucleus and foci detection are described. The system also relies heavily on robust control software, written using multi-threaded C-based modules in LabWindows/CVI, that adapts to the timing requirements of a particular experiment for optimised slide/plate scanning and mosaicing, making use of modern multi-core processors. The system forms the basis of a general-purpose high-content screening platform with wide-ranging applications in live and fixed cell imaging and tissue microarrays that, in future, can incorporate spectrally and time-resolved information.

  5. An image quality comparison of standard and dual-side read CR systems for pediatric radiology

    SciTech Connect

    Monnin, P.; Holzer, Z.; Wolf, R.; Neitzel, U.; Vock, P.; Gudinchet, F.; Verdun, F.R.

    2006-02-15

    An objective analysis of image quality parameters was performed for a computed radiography (CR) system using both standard single-side and prototype dual-side read plates. The pre-sampled modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) for the systems were determined at three different beam qualities representative of pediatric chest radiography, at an entrance detector air kerma of 5 μGy. The NPS and DQE measurements were made under clinically relevant x-ray spectra for pediatric radiology, including x-ray scatter radiation. Compared to the standard single-side read system, the MTF of the dual-side read system is reduced, but this is offset by a significant decrease in image noise, resulting in a marked increase in DQE (+40%) in the low spatial frequency range. Thus, for the same image quality, the new technology permits the CR system to be used at a reduced dose level.
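    For context, MTF, NPS, and DQE are linked by a standard relation for linear, quantum-noise-limited detectors; the expression below is the commonly used working definition rather than anything specific to this record, with $\bar{d}$ the mean detector signal and $\bar{q}$ the incident photon fluence per unit area.

$$\mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}^{2}_{\mathrm{out}}(f)}{\mathrm{SNR}^{2}_{\mathrm{in}}(f)} \;=\; \frac{\bar{d}^{\,2}\,\mathrm{MTF}^{2}(f)}{\bar{q}\,\mathrm{NPS}(f)}$$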

  6. Photon counting imaging and centroiding with an electron-bombarded CCD using single molecule localisation software

    NASA Astrophysics Data System (ADS)

    Hirvonen, Liisa M.; Barber, Matthew J.; Suhling, Klaus

    2016-06-01

    Photon event centroiding in photon counting imaging and single-molecule localisation in super-resolution fluorescence microscopy share many traits. Although photon event centroiding has traditionally been performed with simple single-iteration algorithms, we recently reported that iterative fitting algorithms originally developed for single-molecule localisation fluorescence microscopy work very well when applied to centroiding photon events imaged with an MCP-intensified CMOS camera. Here, we have applied these algorithms for centroiding of photon events from an electron-bombarded CCD (EBCCD). We find that centroiding algorithms based on iterative fitting of the photon events yield excellent results and allow fitting of overlapping photon events, a feature not reported before and an important aspect to facilitate an increased count rate and shorter acquisition times.
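    Photon event centroiding traditionally uses a simple single-pass estimate, such as the intensity-weighted centre of mass of each event footprint, which the iterative Gaussian-fitting localisation algorithms then refine. The sketch below shows only the simple centre-of-mass step on a hypothetical event window, as a generic illustration rather than the authors' pipeline.

```python
import numpy as np

def centre_of_mass(window: np.ndarray) -> tuple[float, float]:
    """Single-pass centroid of one photon-event footprint (background removed)."""
    w = window - np.median(window)        # crude background subtraction
    w = np.clip(w, 0.0, None)
    yy, xx = np.indices(w.shape)
    total = w.sum()
    return float((yy * w).sum() / total), float((xx * w).sum() / total)

# Hypothetical 7x7 cut-out around a detected photon event.
y0, x0 = 3.3, 2.7
yy, xx = np.indices((7, 7))
event = 50.0 * np.exp(-((yy - y0)**2 + (xx - x0)**2) / 2.0) + 5.0
print(centre_of_mass(event))   # sub-pixel estimate near (3.3, 2.7)
```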

  7. Improved structure, function and compatibility for CellProfiler: modular high-throughput image analysis software

    PubMed Central

    Kamentsky, Lee; Jones, Thouis R.; Fraser, Adam; Bray, Mark-Anthony; Logan, David J.; Madden, Katherine L.; Ljosa, Vebjorn; Rueden, Curtis; Eliceiri, Kevin W.; Carpenter, Anne E.

    2011-01-01

    Summary: There is a strong and growing need in the biology research community for accurate, automated image analysis. Here, we describe CellProfiler 2.0, which has been engineered to meet the needs of its growing user base. It is more robust and user friendly, with new algorithms and features to facilitate high-throughput work. ImageJ plugins can now be run within a CellProfiler pipeline. Availability and Implementation: CellProfiler 2.0 is free and open source, available at http://www.cellprofiler.org under the GPL v. 2 license. It is available as a packaged application for Macintosh OS X and Microsoft Windows and can be compiled for Linux. Contact: anne@broadinstitute.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21349861

  8. Photon counting imaging and centroiding with an electron-bombarded CCD using single molecule localisation software

    PubMed Central

    Hirvonen, Liisa M.; Barber, Matthew J.; Suhling, Klaus

    2016-01-01

    Photon event centroiding in photon counting imaging and single-molecule localisation in super-resolution fluorescence microscopy share many traits. Although photon event centroiding has traditionally been performed with simple single-iteration algorithms, we recently reported that iterative fitting algorithms originally developed for single-molecule localisation fluorescence microscopy work very well when applied to centroiding photon events imaged with an MCP-intensified CMOS camera. Here, we have applied these algorithms for centroiding of photon events from an electron-bombarded CCD (EBCCD). We find that centroiding algorithms based on iterative fitting of the photon events yield excellent results and allow fitting of overlapping photon events, a feature not reported before and an important aspect to facilitate an increased count rate and shorter acquisition times. PMID:27274604

  9. Software programmable multi-mode interface for nuclear-medical imaging

    SciTech Connect

    Zubal, I.G.; Rowe, R.W.; Bizais, Y.J.C.; Bennett, G.W.; Brill, A.B.

    1982-01-01

    An innovative multi-port interface allows gamma camera events (spatial coordinates and energy) to be acquired concurrently with a sampling of physiological patient data. The versatility of the interface permits all conventional static, dynamic, and tomographic imaging modes, in addition to multi-hole coded aperture acquisition. The acquired list mode data may be analyzed or gated on the basis of various camera, isotopic, or physiological parameters.

  10. Development of fast patient position verification software using 2D-3D image registration and its clinical experience.

    PubMed

    Mori, Shinichiro; Kumagai, Motoki; Miki, Kentaro; Fukuhara, Riki; Haneishi, Hideaki

    2015-09-01

    To improve treatment workflow, we developed a graphics processing unit (GPU)-based patient position verification software application and integrated it into carbon-ion scanning beam treatment. Here, we evaluated the basic performance of the software. The algorithm provides 2D/3D registration matching using CT and orthogonal X-ray flat panel detector (FPD) images. The participants were 53 patients with tumors of the head and neck, prostate or lung receiving carbon-ion beam treatment. 2D/3D-ITchi-Gime (ITG) calculation accuracy was evaluated in terms of computation time and registration accuracy. Registration calculation was determined using the similarity measurement metrics gradient difference (GD), normalized mutual information (NMI), zero-mean normalized cross-correlation (ZNCC), and their combination. Registration accuracy was dependent on the particular metric used. Representative examples were determined to have target registration error (TRE) = 0.45 ± 0.23 mm and angular error (AE) = 0.35 ± 0.18° with ZNCC + GD for a head and neck tumor; TRE = 0.12 ± 0.07 mm and AE = 0.16 ± 0.07° with ZNCC for a pelvic tumor; and TRE = 1.19 ± 0.78 mm and AE = 0.83 ± 0.61° with ZNCC for a lung tumor. Calculation time was less than 7.26 s. The new registration software has been successfully installed and implemented in our treatment process. We expect that it will improve both treatment workflow and treatment accuracy. PMID:26081313
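
    Of the similarity metrics listed above, ZNCC is the simplest to state explicitly; the sketch below gives its textbook definition for two images of equal size (for example, a digitally reconstructed radiograph and an FPD image). It is an illustration of the metric only, not the authors' GPU implementation.

        # Textbook zero-mean normalised cross-correlation between two equally sized images.
        import numpy as np

        def zncc(a, b):
            a = a.astype(float) - a.mean()
            b = b.astype(float) - b.mean()
            denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
            return float((a * b).sum() / denom) if denom > 0 else 0.0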

  11. Development of fast patient position verification software using 2D-3D image registration and its clinical experience

    PubMed Central

    Mori, Shinichiro; Kumagai, Motoki; Miki, Kentaro; Fukuhara, Riki; Haneishi, Hideaki

    2015-01-01

    To improve treatment workflow, we developed a graphics processing unit (GPU)-based patient position verification software application and integrated it into carbon-ion scanning beam treatment. Here, we evaluated the basic performance of the software. The algorithm provides 2D/3D registration matching using CT and orthogonal X-ray flat panel detector (FPD) images. The participants were 53 patients with tumors of the head and neck, prostate or lung receiving carbon-ion beam treatment. 2D/3D-ITchi-Gime (ITG) calculation accuracy was evaluated in terms of computation time and registration accuracy. Registration calculation was determined using the similarity measurement metrics gradient difference (GD), normalized mutual information (NMI), zero-mean normalized cross-correlation (ZNCC), and their combination. Registration accuracy was dependent on the particular metric used. Representative examples were determined to have target registration error (TRE) = 0.45 ± 0.23 mm and angular error (AE) = 0.35 ± 0.18° with ZNCC + GD for a head and neck tumor; TRE = 0.12 ± 0.07 mm and AE = 0.16 ± 0.07° with ZNCC for a pelvic tumor; and TRE = 1.19 ± 0.78 mm and AE = 0.83 ± 0.61° with ZNCC for a lung tumor. Calculation time was less than 7.26 s. The new registration software has been successfully installed and implemented in our treatment process. We expect that it will improve both treatment workflow and treatment accuracy. PMID:26081313

  12. The Java Image Science Toolkit (JIST) for Rapid Prototyping and Publishing of Neuroimaging Software

    PubMed Central

    Lucas, Blake C.; Bogovic, John A.; Carass, Aaron; Bazin, Pierre-Louis; Prince, Jerry L.; Pham, Dzung

    2010-01-01

    Non-invasive neuroimaging techniques enable extraordinarily sensitive and specific in vivo study of the structure, functional response and connectivity of biological mechanisms. With these advanced methods comes a heavy reliance on computer-based processing, analysis and interpretation. While the neuroimaging community has produced many excellent academic and commercial tool packages, new tools are often required to interpret new modalities and paradigms. Developing custom tools and ensuring interoperability with existing tools is a significant hurdle. To address these limitations, we present a new framework for algorithm development that implicitly ensures tool interoperability, generates graphical user interfaces, provides advanced batch processing tools, and, most importantly, requires minimal additional programming or computational overhead. Java-based rapid prototyping with this system is an efficient and practical approach to evaluate new algorithms since the proposed system ensures that rapidly constructed prototypes are actually fully-functional processing modules with support for multiple GUIs, a broad range of file formats, and distributed computation. Herein, we demonstrate MRI image processing with the proposed system for cortical surface extraction in large cross-sectional cohorts, provide a system for fully automated diffusion tensor image analysis, and illustrate how the system can be used as a simulation framework for the development of a new image analysis method. The system is released as open source under the Lesser GNU Public License (LGPL) through the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC). PMID:20077162

  13. PITRE: software for phase-sensitive X-ray image processing and tomography reconstruction.

    PubMed

    Chen, Rong Chang; Dreossi, Diego; Mancini, Lucia; Menk, Ralf; Rigon, Luigi; Xiao, Ti Qiao; Longo, Renata

    2012-09-01

    Synchrotron-radiation computed tomography has been applied in many research fields. Here, PITRE (Phase-sensitive X-ray Image processing and Tomography REconstruction) and PITRE_BM (PITRE Batch Manager) are presented. PITRE supports phase retrieval for propagation-based phase-contrast imaging/tomography (PPCI/PPCT), extracts apparent absorption, refractive and scattering information of diffraction enhanced imaging (DEI), and allows parallel-beam tomography reconstruction for conventional absorption CT data and for PPCT phase retrieved and DEI-CT extracted information. PITRE_BM is a batch processing manager for PITRE: it executes a series of tasks, created via PITRE, without manual intervention. Both PITRE and PITRE_BM are coded in Interactive Data Language (IDL), and have a user-friendly graphical user interface. They are freeware and can run on Microsoft Windows systems via IDL Virtual Machine, which can be downloaded for free and does not require a license. The data-processing principle and some examples of application will be presented.

  14. Consensus recommendations for a standardized Brain Tumor Imaging Protocol in clinical trials

    PubMed Central

    Ellingson, Benjamin M.; Bendszus, Martin; Boxerman, Jerrold; Barboriak, Daniel; Erickson, Bradley J.; Smits, Marion; Nelson, Sarah J.; Gerstner, Elizabeth; Alexander, Brian; Goldmacher, Gregory; Wick, Wolfgang; Vogelbaum, Michael; Weller, Michael; Galanis, Evanthia; Kalpathy-Cramer, Jayashree; Shankar, Lalitha; Jacobs, Paula; Pope, Whitney B.; Yang, Dewen; Chung, Caroline; Knopp, Michael V.; Cha, Soonme; van den Bent, Martin J.; Chang, Susan; Al Yung, W.K.; Cloughesy, Timothy F.; Wen, Patrick Y.; Gilbert, Mark R.

    2015-01-01

    A recent joint meeting was held on January 30, 2014, with the US Food and Drug Administration (FDA), National Cancer Institute (NCI), clinical scientists, imaging experts, pharmaceutical and biotech companies, clinical trials cooperative groups, and patient advocate groups to discuss imaging endpoints for clinical trials in glioblastoma. This workshop developed a set of priorities and action items including the creation of a standardized MRI protocol for multicenter studies. The current document outlines consensus recommendations for a standardized Brain Tumor Imaging Protocol (BTIP), along with the scientific and practical justifications for these recommendations, resulting from a series of discussions between various experts involved in aspects of neuro-oncology neuroimaging for clinical trials. The minimum recommended sequences include: (i) parameter-matched precontrast and postcontrast inversion recovery-prepared, isotropic 3D T1-weighted gradient-recalled echo; (ii) axial 2D T2-weighted turbo spin-echo acquired after contrast injection and before postcontrast 3D T1-weighted images to control timing of images after contrast administration; (iii) precontrast, axial 2D T2-weighted fluid-attenuated inversion recovery; and (iv) precontrast, axial 2D, 3-directional diffusion-weighted images. Recommended ranges of sequence parameters are provided for both 1.5 T and 3 T MR systems. PMID:26250565

  15. ORBS, ORCS, OACS, a Software Suite for Data Reduction and Analysis of the Hyperspectral Imagers SITELLE and SpIOMM

    NASA Astrophysics Data System (ADS)

    Martin, T.; Drissen, L.; Joncas, G.

    2015-09-01

    SITELLE (installed in 2015 at the Canada-France-Hawaii Telescope) and SpIOMM (a prototype attached to the Observatoire du Mont-Mégantic) are the first Imaging Fourier Transform Spectrometers (IFTS) capable of obtaining a hyperspectral data cube which samples a 12-arcminute field of view into four million visible spectra. The result of each observation is made up of two interferometric data cubes which need to be merged, corrected, transformed and calibrated in order to get a spectral cube of the observed region ready to be analysed. ORBS is fully automated data reduction software designed entirely for this purpose. The data size (up to 68 Gb for larger science cases) and the computational needs have been challenging, and the highly parallelized object-oriented architecture of ORBS reflects the solutions adopted, which made it possible to process 68 Gb of raw data in less than 11 hours using 8 cores and 22.6 Gb of RAM. It is based on a core framework (ORB) that has been designed to support the whole software suite for data analysis (ORCS and OACS), data simulation (ORUS) and data acquisition (IRIS). They all aim to provide a strong basis for the creation and development of specialized analysis modules that could benefit the scientific community working with SITELLE and SpIOMM.

  16. The Input Signal Step Function (ISSF), a Standard Method to Encode Input Signals in SBML Models with Software Support, Applied to Circadian Clock Models

    PubMed Central

    Adams, R.R.; Tsorman, N.; Stratford, K.; Akman, O.E.; Gilmore, S.; Juty, N.; Le Novère, N.; Millar, A.J.

    2012-01-01

    Time-dependent light input is an important feature of computational models of the circadian clock. However, publicly available models encoded in standard representations such as the Systems Biology Markup Language (SBML) either do not encode this input or use different mechanisms to do so, which hinders reproducibility of published results as well as model reuse. The authors describe here a numerically continuous function suitable for use in SBML for models of circadian rhythms forced by periodic light-dark cycles. The Input Signal Step Function (ISSF) is broadly applicable to encoding experimental manipulations, such as drug treatments, temperature changes, or inducible transgene expression, which may be transient, periodic, or mixed. It is highly configurable and is able to reproduce a wide range of waveforms. The authors have implemented this function in SBML and demonstrated its ability to modify the behavior of publicly available models to accurately reproduce published results. The implementation of ISSF allows standard simulation software to reproduce specialized circadian protocols, such as the phase-response curve. To facilitate the reuse of this function in public models, the authors have developed software to configure its behavior without any specialist knowledge of SBML. A community-standard approach to represent the inputs that entrain circadian clock models could particularly facilitate research in chronobiology. PMID:22855577
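
    As a loose illustration of what a numerically continuous light input looks like, the sketch below builds a periodic light-dark waveform whose switches are smoothed so that an ODE solver never encounters a discontinuity. The tanh-ramp form and all parameter names here are assumptions made for this example; they are not the published ISSF definition, which should be taken from the paper and its SBML implementation.

        # Illustrative smooth light-dark forcing function (not the ISSF itself).
        import numpy as np

        def smooth_light(t, period=24.0, photoperiod=12.0, ramp=0.05, lo=0.0, hi=1.0):
            """Light level at time t (hours): ~hi during the photoperiod, ~lo otherwise."""
            phase = np.mod(t, period)
            on = 0.5 * (np.tanh(phase / ramp) - np.tanh((phase - photoperiod) / ramp))
            return lo + (hi - lo) * on

        # e.g. smooth_light(np.linspace(0, 48, 2000)) gives two smoothed 12 h:12 h light-dark cycles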

  17. Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Zhang, Pei; An, Le; Ma, Guangkai; Kang, Jiayin; Shi, Feng; Wu, Xi; Zhou, Jiliu; Lalush, David S.; Lin, Weili; Shen, Dinggang

    2016-01-01

    Positron emission tomography (PET) has been widely used in clinical diagnosis for diseases and disorders. Obtaining high-quality PET images requires a standard-dose radionuclide (tracer) injection into the human body, which inevitably increases the risk of radiation exposure. One possible solution to this problem is to predict the standard-dose PET image from its low-dose counterpart and its corresponding multimodal magnetic resonance (MR) images. Inspired by the success of patch-based sparse representation (SR) in super-resolution image reconstruction, we propose a mapping-based SR (m-SR) framework for standard-dose PET image prediction. Compared with the conventional patch-based SR, our method uses a mapping strategy to ensure that the sparse coefficients, estimated from the multimodal MR images and low-dose PET image, can be applied directly to the prediction of the standard-dose PET image. As the mapping between multimodal MR images (or the low-dose PET image) and standard-dose PET images can be particularly complex, one step of mapping is often insufficient. To this end, an incremental refinement framework is therefore proposed. Specifically, the predicted standard-dose PET image is further mapped to the target standard-dose PET image, and then the SR is performed again to predict a new standard-dose PET image. This procedure can be repeated over several iterations to progressively refine the prediction. Also, a patch-selection-based dictionary construction method is further used to speed up the prediction process. The proposed method is validated on a human brain dataset. The experimental results show that our method can outperform benchmark methods in both qualitative and quantitative measures.
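
    The patch-based sparse representation step that this framework builds on can be sketched with off-the-shelf tools: a low-dose patch is sparsely encoded against a low-dose dictionary and the same coefficients are applied to a paired standard-dose dictionary. The fragment below shows only that generic step; the learned mapping, the incremental refinement and the patch-selection-based dictionary construction described above are omitted, and all names are illustrative.

        # Generic paired-dictionary sparse coding sketch (not the authors' m-SR implementation).
        import numpy as np
        from sklearn.decomposition import sparse_encode

        def predict_patch(ld_patch, D_low, D_std, k=5):
            """ld_patch: 1-D low-dose patch; D_low/D_std: (n_atoms, patch_len) paired dictionaries."""
            alpha = sparse_encode(ld_patch[None, :], D_low, algorithm="omp", n_nonzero_coefs=k)
            return (alpha @ D_std).ravel()  # estimate of the corresponding standard-dose patch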

  18. Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation.

    PubMed

    Wang, Yan; Zhang, Pei; An, Le; Ma, Guangkai; Kang, Jiayin; Shi, Feng; Wu, Xi; Zhou, Jiliu; Lalush, David S; Lin, Weili; Shen, Dinggang

    2016-01-21

    Positron emission tomography (PET) has been widely used in clinical diagnosis for diseases and disorders. Obtaining high-quality PET images requires a standard-dose radionuclide (tracer) injection into the human body, which inevitably increases the risk of radiation exposure. One possible solution to this problem is to predict the standard-dose PET image from its low-dose counterpart and its corresponding multimodal magnetic resonance (MR) images. Inspired by the success of patch-based sparse representation (SR) in super-resolution image reconstruction, we propose a mapping-based SR (m-SR) framework for standard-dose PET image prediction. Compared with the conventional patch-based SR, our method uses a mapping strategy to ensure that the sparse coefficients, estimated from the multimodal MR images and low-dose PET image, can be applied directly to the prediction of the standard-dose PET image. As the mapping between multimodal MR images (or the low-dose PET image) and standard-dose PET images can be particularly complex, one step of mapping is often insufficient. To this end, an incremental refinement framework is therefore proposed. Specifically, the predicted standard-dose PET image is further mapped to the target standard-dose PET image, and then the SR is performed again to predict a new standard-dose PET image. This procedure can be repeated over several iterations to progressively refine the prediction. Also, a patch-selection-based dictionary construction method is further used to speed up the prediction process. The proposed method is validated on a human brain dataset. The experimental results show that our method can outperform benchmark methods in both qualitative and quantitative measures. PMID:26732849

  19. Orbit Determination and Gravity Field Estimation of the Dawn spacecraft at Vesta Using Radiometric and Image Constraints with GEODYN Software

    NASA Astrophysics Data System (ADS)

    Centinello, F. J.; Zuber, M. T.; Mazarico, E.

    2013-12-01

    The Dawn spacecraft orbited the protoplanet Vesta from May 3, 2011 to July 25, 2012. Precise orbit determination was critical for the geophysical investigation, as well as the definition of the Vesta-fixed reference frame and the subsequent registration of datasets to the surface. GEODYN, the orbit determination and geodetic parameter estimation software of NASA Goddard Spaceflight Center, was used to compute the orbit of the Dawn spacecraft and estimate the gravity field of Vesta. GEODYN utilizes radiometric Doppler and range measurements, and was modified to process image data from Dawn's cameras. X-band radiometric measurements were acquired by the NASA Deep Space Network (DSN). The addition of the capability to process image constraints decreases position uncertainty in the along- and cross-orbit track directions because of their geometric strengths compared with radiometric measurements. This capability becomes critical for planetary missions such as Dawn due to the weak gravity environment, where non-conservative forces affect the orbit more than is typical of orbits about larger planetary bodies. Radiometric measurements were fit to less than 0.1 mm/s and 5 m for Doppler and range during the Survey orbit phase (compared with measurement noise RMS of about 0.05 mm/s and 2 m for Doppler and range). Image constraint RMS was fit to less than 100 m (resolution is 5 - 150 m/pixel, depending on the spacecraft altitude). Orbits computed using GEODYN were used to estimate a 20th degree and order gravity field of Vesta. The quality of the orbit determination and estimated gravity field with and without image constraints was assessed through comparison with the spacecraft trajectory and gravity model provided by the Dawn Science Team.

  20. C++ software integration for a high-throughput phase imaging platform

    NASA Astrophysics Data System (ADS)

    Kandel, Mikhail E.; Luo, Zelun; Han, Kevin; Popescu, Gabriel

    2015-03-01

    The multi-shot approach in SLIM requires reliable, synchronous, and parallel operation of three independent hardware devices - not meeting these challenges results in degraded phase and slow acquisition speeds, narrowing applications to holistic statements about complex phenomena. The relative youth of quantitative imaging and the lack of ready-made commercial hardware and tools further compound the problem, as higher-level programming languages result in inflexible, experiment-specific instruments limited by ill-fitting computational modules, resulting in a palpable chasm between promised and realized hardware performance. Furthermore, general unfamiliarity with intricacies such as background calibration, objective lens attenuation, and spatial light modulator alignment makes successful measurements difficult for the inattentive or uninitiated. This poses an immediate challenge for moving our techniques beyond the lab to biologically oriented collaborators and clinical practitioners. To meet these challenges, we present our new Quantitative Phase Imaging pipeline, with improved instrument performance, friendly user interface and robust data processing features, enabling us to acquire and catalog clinical datasets hundreds of gigapixels in size.

  1. High-throughput image analysis of tumor spheroids: a user-friendly software application to measure the size of spheroids automatically and accurately.

    PubMed

    Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y

    2014-07-08

    The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application - SpheroidSizer - which measures the major and minor axial lengths of the imaged 3D tumor spheroids automatically and accurately, calculates the volume of each individual 3D tumor spheroid, and then outputs the results in two different forms in spreadsheets for easy manipulation in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides high-throughput computation and a quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy backgrounds that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and diverse image qualities. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software is beneficial for 3D tumor spheroids to become a routine in vitro model
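
    SpheroidSizer itself is a MATLAB application; purely as an illustration of the Snakes idea it is built on, the sketch below relaxes a circular initial contour onto an object boundary using scikit-image. Parameter values follow the scikit-image documentation example and are assumptions, not the settings used in SpheroidSizer.

        # Generic active-contour (Snakes) sketch with scikit-image, for illustration only.
        import numpy as np
        from skimage.filters import gaussian
        from skimage.segmentation import active_contour

        def fit_spheroid_contour(gray, center, radius, n_pts=200):
            """Relax a circular initial contour onto the spheroid boundary; returns (n_pts, 2) points."""
            t = np.linspace(0, 2 * np.pi, n_pts)
            init = np.column_stack([center[0] + radius * np.sin(t),   # row coordinates
                                    center[1] + radius * np.cos(t)])  # column coordinates
            return active_contour(gaussian(gray, 3), init, alpha=0.015, beta=10, gamma=0.001)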

  2. Positioning Standardized Acupuncture Points on the Whole Body Based on X-Ray Computed Tomography Images

    PubMed Central

    Kim, Jungdae

    2014-01-01

    Objective: The goal of this research was to position all the standardized 361 acupuncture points on the entire human body based on a 3-dimensional (3D) virtual body. Materials and Methods: Digital data from a healthy Korean male with a normal body shape were obtained in the form of cross-sectional images generated by X-ray computed tomography (CT), and the 3D models for the bones and the skin's surface were created through the image-processing steps. Results: The reference points or the landmarks were positioned based on the standard descriptions of the acupoints, and the formulae for the proportionalities between the acupoints and the reference points were presented. About 37% of the 361 standardized acupoints were automatically linked with the reference points, the reference points accounted for 11% of the 361 acupoints, and the remaining acupoints (52%) were positioned point-by-point by using the OpenGL 3D graphics libraries. Based on the projective 2D descriptions of the standard acupuncture points, the volumetric 3D acupoint model was developed; it was extracted from the X-ray CT images. Conclusions: This modality for positioning acupoints may modernize acupuncture research and enable acupuncture treatments to be more personalized. PMID:24761187

  3. SU-E-I-63: Quantitative Evaluation of the Effects of Orthopedic Metal Artifact Reduction (OMAR) Software On CT Images for Radiotherapy Simulation

    SciTech Connect

    Jani, S

    2014-06-01

    Purpose: CT simulation for patients with metal implants can often be challenging due to artifacts that obscure tumor/target delineation and normal organ definition. Our objective was to evaluate the effectiveness of Orthopedic Metal Artifact Reduction (OMAR), a commercially available software, in reducing metal-induced artifacts and its effect on computed dose during treatment planning. Methods: CT images of water surrounding metallic cylindrical rods made of aluminum, copper and iron were studied in terms of Hounsfield Unit (HU) spread. Metal-induced artifacts were characterized in terms of an HU/Volume Histogram (HVH) using the Pinnacle treatment planning system. Effects of OMAR on enhancing our ability to delineate organs on CT and on subsequent dose computation were examined in nine (9) patients with hip implants and two (2) patients with breast tissue expanders. Results: Our study characterized water at 1000 HU with a standard deviation (SD) of about 20 HU. The HVHs allowed us to evaluate how the presence of metal changed the HU spread. For example, introducing a 2.54 cm diameter copper rod in water increased the SD in HU of the surrounding water from 20 to 209, representing an increase in artifacts. Subsequent use of OMAR brought the SD down to 78. Aluminum produced the least artifacts, whereas iron showed the largest amount of artifacts. In general, an increase in kVp and mA during CT scanning showed better effectiveness of OMAR in reducing artifacts. Our dose analysis showed that some isodose contours shifted by several mm with OMAR, but infrequently and not significantly for the planning process. Computed volumes of various dose levels showed <2% change. Conclusions: In our experience, OMAR software greatly reduced the metal-induced CT artifacts for the majority of patients with implants, thereby improving our ability to delineate tumor and surrounding organs. OMAR had a clinically negligible effect on computed dose within tissues. Partially funded by unrestricted
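
    The HU-spread analysis described above amounts to histogramming and taking the standard deviation of CT numbers inside a water region, with and without the metal insert and with and without OMAR. A minimal sketch of that bookkeeping step is shown below; the array and variable names are illustrative and this is not the authors' Pinnacle-based workflow.

        # Illustrative HU spread and HU-volume histogram for a masked water region of a CT volume.
        import numpy as np

        def hu_spread_and_histogram(ct_hu, mask, bins=np.arange(-200, 2001, 10)):
            """ct_hu: 3-D array of CT numbers; mask: boolean array selecting the water region."""
            vals = ct_hu[mask]
            hist, edges = np.histogram(vals, bins=bins)
            return float(vals.std()), hist, edges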

  4. MO-G-9A-01: Imaging Refresher for Standard of Care Radiation Therapy

    SciTech Connect

    Labby, Z; Sensakovic, W; Hipp, E; Altman, M

    2014-06-15

    Imaging techniques and technology which were previously the domain of diagnostic medicine are becoming increasingly integrated and utilized in radiation therapy (RT) clinical practice. As such, there are a number of specific imaging topics that are highly applicable to modern radiation therapy physics. As imaging becomes more widely integrated into standard clinical radiation oncology practice, the impetus is on RT physicists to be informed and up-to-date on those imaging modalities relevant to the design and delivery of therapeutic radiation treatments. For example, knowing that, for a given situation, a fluid attenuated inversion recovery (FLAIR) image set is most likely what the physician would like to import and contour is helpful, but may not be sufficient to providing the best quality of care. Understanding the physics of how that pulse sequence works and why it is used could help assess its utility and determine if it is the optimal sequence for aiding in that specific clinical situation. It is thus important that clinical medical physicists be able to understand and explain the physics behind the imaging techniques used in all aspects of clinical radiation oncology practice. This session will provide the basic physics for a variety of imaging modalities for applications that are highly relevant to radiation oncology practice: computed tomography (CT) (including kV, MV, cone beam CT [CBCT], and 4DCT), positron emission tomography (PET)/CT, magnetic resonance imaging (MRI), and imaging specific to brachytherapy (including ultrasound and some brachytherapy specific topics in MR). For each unique modality, the image formation process will be reviewed, trade-offs between image quality and other factors (e.g. imaging time or radiation dose) will be clarified, and typically used cases for each modality will be introduced. The current and near-future uses of these modalities and techniques in radiation oncology clinical practice will also be discussed. Learning

  5. Anatomic standardization: Linear scaling and nonlinear warping of functional brain images

    SciTech Connect

    Minoshima, S.; Koeppe, R.A.; Frey, K.A.

    1994-09-01

    An automated method was proposed for anatomic standardization of PET scans in three dimensions, which enabled objective intersubject and cross-group comparisons of functional brain images. The method involved linear scaling to correct for individual brain size and nonlinear warping to minimize regional anatomic variations among subjects. In the linear-scaling step, the anteroposterior length and width of the brain were measured on the PET images, and the brain height was estimated by a contour-matching procedure using the midsagittal plane. In the nonlinear warping step, individual gray matter locations were matched with those of a standard brain by maximizing correlation coefficients of regional profile curves determined between predefined stretching centers (predominantly in white matter) and the gray matter landmarks. The accuracy of the brain height estimation was compared with skull x-ray estimations, showing comparable accuracy and better reproducibility. Linear-scaling and nonlinear warping methods were validated using (¹⁸F)fluorodeoxyglucose and (¹⁵O)water images. Regional anatomic variability on the glucose images was reduced markedly. The statistical significance of activation foci in paired water images was improved in both vibratory and visual activation paradigms. A group versus group comparison following the proposed anatomic standardization revealed highly significant glucose metabolic alterations in the brains of patients with Alzheimer's disease compared with those of a normal control group. These results suggested that the method is well suited to both research and clinical settings and can facilitate pixel-by-pixel comparisons of PET images. 26 refs., 9 figs., 1 tab.

  6. FIRE: an open-software suite for real-time 2D/3D image registration for image guided radiotherapy research

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.

    2016-03-01

    Radiotherapy treatments have changed at a tremendously rapid pace. Dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of moving tumors during dose delivery has become higher due to very steep dose gradients. Intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image intensity based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom designed software platforms which take too much time and effort to develop. To address this challenge we have developed an open software platform especially focusing on tumor motion management. FLIRT is a freely available open-source software platform. The core method for tumor tracking is purely intensity based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90 ms (11 Hz). This is suitable to track tumors moving due to respiration (~0.3 Hz) or heartbeat (~1 Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases range from tracking feasibility studies to patient positioning and method validation. Such a framework has the potential to enable the research community to rapidly perform patient studies or try new methods.

  7. Using Image Pro Plus Software to Develop Particle Mapping on Genesis Solar Wind Collector Surfaces

    NASA Technical Reports Server (NTRS)

    Rodriquez, Melissa C.; Allton, J. H.; Burkett, P. J.

    2012-01-01

    The continued success of the Genesis mission science team in analyzing solar wind collector array samples is partially based on close collaboration of the JSC curation team with science team members who develop cleaning techniques and those who assess elemental cleanliness at the levels of detection. The goal of this collaboration is to develop a reservoir of solar wind collectors of known cleanliness to be available to investigators. The heart and driving force behind this effort is Genesis mission PI Don Burnett. While JSC contributes characterization, safe clean storage, and benign collector cleaning with ultrapure water (UPW) and UV ozone, Burnett has coordinated more exotic and rigorous cleaning, which is contributed by science team members. He also coordinates cleanliness assessment requiring expertise and instruments not available in curation, such as XPS, TRXRF [1,2] and synchrotron TRXRF. JSC participates by optically documenting the particle distributions as cleaning steps progress. Thus, optical documentation supplements SEM imaging and analysis, and elemental assessment by TRXRF.

  8. Cytopathology whole slide images and virtual microscopy adaptive tutorials: A software pilot

    PubMed Central

    Van Es, Simone L.; Pryor, Wendy M.; Belinson, Zack; Salisbury, Elizabeth L.; Velan, Gary M.

    2015-01-01

    Background: The constant growth in the body of knowledge in medicine requires pathologists and pathology trainees to engage in continuing education. Providing them with equitable access to efficient and effective forms of education in pathology (especially in remote and rural settings) is important, but challenging. Methods: We developed three pilot cytopathology virtual microscopy adaptive tutorials (VMATs) to explore a novel adaptive E-learning platform (AeLP) which can incorporate whole slide images for pathology education. We collected user feedback to further develop this educational material and to subsequently deploy randomized trials in both pathology specialist trainee and also medical student cohorts. Cytopathology whole slide images were first acquired then novel VMATs teaching cytopathology were created using the AeLP, an intelligent tutoring system developed by Smart Sparrow. The pilot was run for Australian pathologists and trainees through the education section of Royal College of Pathologists of Australasia website over a period of 9 months. Feedback on the usability, impact on learning and any technical issues was obtained using 5-point Likert scale items and open-ended feedback in online questionnaires. Results: A total of 181 pathologists and pathology trainees anonymously attempted the three adaptive tutorials, a smaller proportion of whom went on to provide feedback at the end of each tutorial. VMATs were perceived as effective and efficient E-learning tools for pathology education. User feedback was positive. There were no significant technical issues. Conclusion: During this pilot, the user feedback on the educational content and interface and the lack of technical issues were helpful. Large scale trials of similar online cytopathology adaptive tutorials were planned for the future. PMID:26605119

  9. An Upgrade of the Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) Software

    NASA Technical Reports Server (NTRS)

    Mason, Michelle L.; Rufer, Shann J.

    2015-01-01

    The Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) code is used at NASA Langley Research Center to analyze global aeroheating data on wind tunnel models tested in the Langley Aerothermodynamics Laboratory. One-dimensional, semi-infinite heating data derived from IHEAT are used to design thermal protection systems to mitigate the risks due to the aeroheating loads on hypersonic vehicles, such as re-entry vehicles during descent and landing procedures. This code was originally written in the PV-WAVE programming language to analyze phosphor thermography data from the two-color, relative-intensity system developed at Langley. To increase the efficiency, functionality, and reliability of IHEAT, the code was migrated to MATLAB syntax and compiled as a stand-alone executable file labeled version 4.0. New features of IHEAT 4.0 include the options to batch process all of the data from a wind tunnel run, to map the two-dimensional heating distribution to a three-dimensional computer-aided design model of the vehicle to be viewed in Tecplot, and to extract data from a segmented line that follows an interesting feature in the data. Results from IHEAT 4.0 were compared on a pixel level to the output images from the legacy code to validate the program. The differences between the two codes were on the order of 10⁻⁵ to 10⁻⁷. IHEAT 4.0 replaces the PV-WAVE version as the production code for aeroheating experiments conducted in the hypersonic facilities at NASA Langley.

  10. Estimating elastic properties of tissues from standard 2D ultrasound images

    NASA Astrophysics Data System (ADS)

    Kybic, Jan; Smutek, Daniel

    2005-04-01

    We propose a way of measuring elastic properties of tissues in vivo, using a standard medical ultrasound imaging machine without any special hardware. Images are acquired while the tissue is being deformed by a varying pressure applied by the operator on the hand-held ultrasound probe. The local elastic shear modulus is either estimated from a local displacement field reconstructed by an elastic registration algorithm, or both the modulus and the displacement are estimated simultaneously. The relation between modulus and displacement is calculated using a finite element method (FEM). The estimation algorithms were tested on synthetic, phantom, and real subject data.
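
    One ingredient of such an approach can be sketched simply: under the usual quasi-static assumption, the axial strain obtained by differentiating the reconstructed displacement field is lower in stiffer regions for the same applied pressure, and the full modulus reconstruction then ties strain to stress through the FEM model. The fragment below shows only the strain step; it is purely illustrative and not the authors' algorithm.

        # Axial strain from a reconstructed axial displacement field (illustrative only).
        import numpy as np

        def axial_strain(displacement, spacing_mm=0.1):
            """displacement: 2-D array of axial displacements (mm); returns d(u_axial)/d(depth)."""
            return np.gradient(displacement, spacing_mm, axis=0)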

  11. Imaging of the optic disk in caring for patients with glaucoma: ophthalmoscopy and photography remain the gold standard.

    PubMed

    Spaeth, George L; Reddy, Swathi C

    2014-01-01

    Optic disk imaging is integral to the diagnosis and treatment of patients with glaucoma. We discuss the various forms of imaging the optic nerve, including ophthalmoscopy, photography, and newer imaging modalities, including optical coherence tomography (OCT), confocal scanning laser ophthalmoscopy (HRT), and scanning laser polarimetry (GDx), specifically highlighting their benefits and disadvantages. We argue that ophthalmoscopy and photography remain the gold standard of imaging due to portability, ease of interpretation, and the presence of a large database of images for comparison.

  12. On Software Compatibility.

    ERIC Educational Resources Information Center

    Ershov, Andrei P.

    The problem of software compatibility hampers the development of computer applications. One solution lies in standardization of languages, terms, peripherals, operating systems, and computer characteristics. (AB)

  13. Fire service and first responder thermal imaging camera (TIC) advances and standards

    NASA Astrophysics Data System (ADS)

    Konsin, Lawrence S.; Nixdorff, Stuart

    2007-04-01

    Fire Service and First Responder Thermal Imaging Camera (TIC) applications are growing, saving lives and preventing injury and property damage. Firefighters face a wide range of serious hazards. TICs help mitigate the risks by protecting Firefighters and preventing injury, while reducing time spent fighting the fire and resources needed to do so. Most fire safety equipment is covered by performance standards. Fire TICs, however, are not covered by such standards and are also subject to inadequate operational performance and insufficient user training. Meanwhile, advancements in Fire TICs and lower costs are driving product demand. The need for a Fire TIC Standard was spurred in late 2004 through a Government sponsored Workshop where experts from the First Responder community, component manufacturers, firefighter training, and those doing research on TICs discussed strategies, technologies, procedures, best practices and R&D that could improve Fire TICs. The workshop identified pressing image quality, performance metrics, and standards issues. Durability and ruggedness metrics and standard testing methods were also seen as important, as was TIC training and certification of end-users. A progress report on several efforts in these areas and their impact on the IR sensor industry will be given. This paper is a follow up to the SPIE Orlando 2004 paper on Fire TIC usage (entitled Emergency Responders' Critical Infrared) which explored the technological development of this IR industry segment from the viewpoint of the end user, in light of the studies and reports that had established TICs as a mission critical tool for firefighters.

  14. Leap Motion Gesture Control With Carestream Software in the Operating Room to Control Imaging: Installation Guide and Discussion.

    PubMed

    Pauchot, Julien; Di Tommaso, Laetitia; Lounis, Ahmed; Benassarou, Mourad; Mathieu, Pierre; Bernot, Dominique; Aubry, Sébastien

    2015-12-01

    Nowadays, routine cross-sectional imaging viewing during a surgical procedure requires physical contact with an interface (mouse or touch-sensitive screen). Such contact compromises aseptic conditions and causes loss of time. Devices such as the recently introduced Leap Motion (Leap Motion Society, San Francisco, CA), which enables interaction with the computer without any physical contact, are of wide interest in the field of surgery, but configuration and ergonomics are key challenges for the practitioner, imaging software, and surgical environment. This article aims to suggest an easy configuration of Leap Motion on a PC for optimized use with Carestream Vue PACS v11.3.4 (Carestream Health, Inc, Rochester, NY) using a plug-in (to download at https://drive.google.com/open?id=0B_F4eBeBQc3yNENvTXlnY09qS00&authuser=0) and a video tutorial (https://www.youtube.com/watch?v=yVPTgxg-SIk). Videos of the surgical procedure and a discussion of innovative gesture control technology and its various configurations are provided in this article.

  15. Leap Motion Gesture Control With Carestream Software in the Operating Room to Control Imaging: Installation Guide and Discussion.

    PubMed

    Pauchot, Julien; Di Tommaso, Laetitia; Lounis, Ahmed; Benassarou, Mourad; Mathieu, Pierre; Bernot, Dominique; Aubry, Sébastien

    2015-12-01

    Nowadays, routine cross-sectional imaging viewing during a surgical procedure requires physical contact with an interface (mouse or touch-sensitive screen). Such contact compromises aseptic conditions and causes loss of time. Devices such as the recently introduced Leap Motion (Leap Motion Society, San Francisco, CA), which enables interaction with the computer without any physical contact, are of wide interest in the field of surgery, but configuration and ergonomics are key challenges for the practitioner, imaging software, and surgical environment. This article aims to suggest an easy configuration of Leap Motion on a PC for optimized use with Carestream Vue PACS v11.3.4 (Carestream Health, Inc, Rochester, NY) using a plug-in (to download at https://drive.google.com/open?id=0B_F4eBeBQc3yNENvTXlnY09qS00&authuser=0) and a video tutorial (https://www.youtube.com/watch?v=yVPTgxg-SIk). Videos of the surgical procedure and a discussion of innovative gesture control technology and its various configurations are provided in this article. PMID:26002115

  16. Software reengineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III

    1991-01-01

    Programs in use today generally have all of the function and information processing capabilities required to do their specified job. However, older programs usually use obsolete technology, are not integrated properly with other programs, and are difficult to maintain. Reengineering is becoming a prominent discipline as organizations try to move their systems to more modern and maintainable technologies. The Johnson Space Center (JSC) Software Technology Branch (STB) is researching and developing a system to support reengineering older FORTRAN programs into more maintainable forms that can also be more readily translated to modern languages such as FORTRAN 8x, Ada, or C. This activity has led to the development of maintenance strategies for design recovery and reengineering. These strategies include a set of standards, methodologies, and the concepts for a software environment to support design recovery and reengineering. A brief description of the problem being addressed and the approach that is being taken by the STB toward providing an economic solution to the problem is provided. A statement of the maintenance problems, the benefits and drawbacks of three alternative solutions, and a brief history of the STB experience in software reengineering are followed by the STB's new FORTRAN standards, methodology, and the concepts for a software environment.

  17. Should software hold data hostage?

    SciTech Connect

    Wiley, H S.; Michaels, George S.

    2004-08-01

    Software tools have become an indispensable part of modern biology, but issues surrounding proprietary file formats and closed software architectures threaten to stunt the growth of this rapidly expanding area of research. In an effort to ensure continuous software upgrades to provide a continuous income stream, some software companies have resorted to holding the user's data hostage by locking them into proprietary file and data formats. Although this might make sense from a business perspective, it violates fundamental principles of data ownership and control. Such tactics should not be tolerated by the scientific community. The future of data-intensive biology depends on ensuring open data standards and freely exchangeable file formats. Compared to the engineering and chemistry fields, computers are a relatively recent addition to the arsenal of biological tools. Thus the pool of potential users of biology-oriented software is comparatively small. Biology itself is a broad field with many sub-disciplines, such as neurobiology, biochemistry, genomics and cell biology. This creates the need for task-oriented software tools that necessarily have a small user base. Simultaneously, the task of developing software has become more complex with the need for multi-platform software and increasing user expectations of sophisticated interfaces and a high degree of usability. Writing successful software in such an environment is very challenging, but progress in biology will increasingly depend on the success of companies and individuals in creating powerful new software tools. The trend to open source software could have an enormous impact on biology by providing the large number of specialized analysis tools that are required. Indeed, in the field of bioinformatics, open source software has become pervasive, largely because of the high degree of computer skill necessary for workers in this field. For these tools to be usable by non-specialists, however, requires the

  18. Software reengineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III

    1991-01-01

    Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. JSC created a significant set of tools to develop and maintain FORTRAN and C code during development of the Space Shuttle. This tool set forms the basis for an integrated environment to re-engineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools is passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. A beta version of the environment was released in Mar. 1991. The commercial potential for such re-engineering tools is very great. CASE TRENDS magazine reported it to be the primary concern of over four hundred of the top MIS executives.

  19. Collecting field data from Mars Exploration Rover Spirit and Opportunity Images: Development of 3-D Visualization and Data-Mining Software

    NASA Astrophysics Data System (ADS)

    Eppes, M. C.; Willis, A.; Zhou, B.

    2010-12-01

    NASA’s two Mars rover spacecraft, Spirit and Opportunity, have collected more than 4 years’ worth of data from nine imaging instruments, producing greater than 200k images. To date, however, the potential ‘field’ data that these images represent has remained relatively untapped because of a lack of software with which to readily analyze the images quantitatively. We have developed prototype software that allows scientists to locate and explore 2D and 3D imagery captured by NASA's Mars Exploration Rover (MER) mission robots Spirit and Opportunity. For example, using our software, a person could measure the dimensions of a rock or the strike and dip of a bedding plane. The developed software has three aspects that make it distinct from existing approaches for indexing large sets of imagery: (1) a computationally efficient image search engine capable of locating MER images containing features of interest, rocks in particular, (2) an interface for making measurements (distances and orientations) from stereographic image pairs and (3) remote browsing and storage capabilities that remove the burden of storing and managing these very large image sets. Two methods of search are supported: (i) a rock detection algorithm for finding images that contain rock-like structures having a specified size and (ii) a generic query-by-image search which uses exemplar image(s) of a desired object to locate other images within the MER data repository that contain similar structures (i.e. one could search for all images of sand dunes). Query-by-image capabilities are made possible via a bag-of-features (e.g. Labeznik et al. 2003; Schmid et al. 2002) representation of the image data which compresses the image into a small set of features which are robust to changes in illumination and perspective. Searches are then reduced to looking for feature sets which have similar values; a task that is computationally tractable, providing quick search results for complex image-based queries
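
    The query-by-image idea described above can be illustrated with a generic bag-of-features pipeline: local descriptors are quantised against a visual vocabulary and each image becomes a normalised word histogram that can be compared with the query's histogram. The sketch below uses ORB descriptors and k-means purely for illustration; it is not the MER tool's implementation, and the vocabulary size is an assumption.

        # Generic bag-of-features sketch (not the MER software): vocabulary, histogram, similarity.
        import cv2
        import numpy as np
        from sklearn.cluster import KMeans

        orb = cv2.ORB_create()

        def descriptors(gray_u8):
            _, des = orb.detectAndCompute(gray_u8, None)
            return des if des is not None else np.empty((0, 32), np.uint8)

        def build_vocabulary(images, k=256):
            all_des = np.vstack([descriptors(img) for img in images]).astype(np.float32)
            return KMeans(n_clusters=k, n_init=4, random_state=0).fit(all_des)

        def bof_histogram(gray_u8, vocab):
            words = vocab.predict(descriptors(gray_u8).astype(np.float32))
            hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
            return hist / (hist.sum() or 1.0)

        # similarity between a query and a stored image: e.g. cosine similarity of their histograms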

  20. Development and Evaluation of Reference Standards for Image-based Telemedicine Diagnosis and Clinical Research Studies in Ophthalmology

    PubMed Central

    Ryan, Michael C.; Ostmo, Susan; Jonas, Karyn; Berrocal, Audina; Drenser, Kimberly; Horowitz, Jason; Lee, Thomas C.; Simmons, Charles; Martinez-Castellanos, Maria-Ana; Chan, R.V. Paul; Chiang, Michael F.

    2014-01-01

    Information systems managing image-based data for telemedicine or clinical research applications require a reference standard representing the correct diagnosis. Accurate reference standards are difficult to establish because of imperfect agreement among physicians, and discrepancies between clinical and image-based diagnoses. This study is designed to describe the development and evaluation of reference standards for image-based diagnosis, which combine diagnostic impressions of multiple image readers with the actual clinical diagnoses. We show that agreement between image reading and clinical examinations was imperfect (689 [32%] discrepancies in 2148 image readings), as was inter-reader agreement (kappa 0.490-0.652). This was improved by establishing an image-based reference standard defined as the majority diagnosis given by three readers (13% discrepancies with image readers). It was further improved by establishing an overall reference standard that incorporated the clinical diagnosis (10% discrepancies with image readers). These principles of establishing reference standards may be applied to improve robustness of real-world systems supporting image-based diagnosis. PMID:25954463
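
    The majority-vote rule and the agreement statistic mentioned above are simple to state; the sketch below returns the majority diagnosis for one image from a list of reader labels and notes how pairwise Cohen's kappa can be computed. Variable names are illustrative and this is not the study's actual information system.

        # Majority-vote reference standard for one image, plus pairwise inter-reader kappa.
        from collections import Counter
        from sklearn.metrics import cohen_kappa_score

        def majority_diagnosis(readings):
            """readings: per-reader diagnoses for one image, e.g. ['plus', 'plus', 'normal']."""
            label, count = Counter(readings).most_common(1)[0]
            return label if count > len(readings) / 2 else None  # None -> no majority, adjudicate

        # agreement between two readers over many images:
        # kappa = cohen_kappa_score(reader_a_labels, reader_b_labels)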

  1. Development and Evaluation of Reference Standards for Image-based Telemedicine Diagnosis and Clinical Research Studies in Ophthalmology.

    PubMed

    Ryan, Michael C; Ostmo, Susan; Jonas, Karyn; Berrocal, Audina; Drenser, Kimberly; Horowitz, Jason; Lee, Thomas C; Simmons, Charles; Martinez-Castellanos, Maria-Ana; Chan, R V Paul; Chiang, Michael F

    2014-01-01

    Information systems managing image-based data for telemedicine or clinical research applications require a reference standard representing the correct diagnosis. Accurate reference standards are difficult to establish because of imperfect agreement among physicians, and discrepancies between clinical and image-based diagnoses. This study is designed to describe the development and evaluation of reference standards for image-based diagnosis, which combine diagnostic impressions of multiple image readers with the actual clinical diagnoses. We show that agreement between image reading and clinical examinations was imperfect (689 [32%] discrepancies in 2148 image readings), as was inter-reader agreement (kappa 0.490-0.652). This was improved by establishing an image-based reference standard defined as the majority diagnosis given by three readers (13% discrepancies with image readers). It was further improved by establishing an overall reference standard that incorporated the clinical diagnosis (10% discrepancies with image readers). These principles of establishing reference standards may be applied to improve robustness of real-world systems supporting image-based diagnosis.

  2. Mountaintop Software for the Dark Energy Survey

    NASA Astrophysics Data System (ADS)

    Thaler, Jon; Abbott, T.; Karliner, I.; Qian, T.; Honscheid, K.; Merritt, W.; Buckley-Geer, L.

    2006-12-01

    The DES mountaintop software must perform several tasks: * Collect image data from the 496 megapixel camera and collect the associated environmental metadata. * Collect guider data (guiding is done with CCDs on the focal plane) and send correction signals to the Blanco telescope control system. * Perform sufficient real-time monitoring, analysis, and user interfaces to assure the quality of the data being taken and to allow both automatic and manual configuration and sequencing of the apparatus. To facilitate community access, DES mountaintop software must be maintainable by CTIO staff. To the extent possible we are employing software packages that are either commercial standards or developed and maintained by CTIO.

  3. The Decoding Toolbox (TDT): a versatile software package for multivariate analyses of functional imaging data

    PubMed Central

    Hebart, Martin N.; Görgen, Kai; Haynes, John-Dylan

    2015-01-01

    The multivariate analysis of brain signals has recently sparked a great amount of interest, yet accessible and versatile tools to carry out decoding analyses are scarce. Here we introduce The Decoding Toolbox (TDT) which represents a user-friendly, powerful and flexible package for multivariate analysis of functional brain imaging data. TDT is written in Matlab and equipped with an interface to the widely used brain data analysis package SPM. The toolbox allows running fast whole-brain analyses, region-of-interest analyses and searchlight analyses, using machine learning classifiers, pattern correlation analysis, or representational similarity analysis. It offers automatic creation and visualization of diverse cross-validation schemes, feature scaling, nested parameter selection, a variety of feature selection methods, multiclass capabilities, and pattern reconstruction from classifier weights. While basic users can implement a generic analysis in one line of code, advanced users can extend the toolbox to their needs or exploit the structure to combine it with external high-performance classification toolboxes. The toolbox comes with an example data set which can be used to try out the various analysis methods. Taken together, TDT offers a promising option for researchers who want to employ multivariate analyses of brain activity patterns. PMID:25610393

  4. User interface software development for the WIYN One Degree Imager (ODI)

    NASA Astrophysics Data System (ADS)

    Ivens, John; Yeatts, Andrey; Harbeck, Daniel; Martin, Pierre

    2010-07-01

    User interfaces (UIs) are a necessity for almost any data acquisition system. The development team for the WIYN One Degree Imager (ODI) chose to develop a user interface that allows access to most of the instrument control for both scientists and engineers through the World Wide Web, because of the web's ease of use and accessibility around the world. Having a web based UI allows ODI to grow from a visitor-mode instrument to a queue-managed instrument and also facilitates remote servicing and troubleshooting. The challenges of developing such a system involve the difficulties of browser inter-operability, speed, presentation, and the choices involved with integrating browser and server technologies. To this end, the team has chosen a combination of Java, JBOSS, AJAX technologies, XML data descriptions, Oracle XML databases, and an emerging technology called the Google Web Toolkit (GWT) that compiles Java into Javascript for presentation in a browser. Advantages of using GWT include developing the front end browser code in Java, GWT's native support for AJAX, the use of XML to describe the user interface, the ability to profile code speed and discover bottlenecks, the ability to efficiently communicate with application servers such as JBOSS, and the ability to optimize and test code for multiple browsers. We discuss the inter-operation of all of these technologies to create fast, flexible, and robust user interfaces that are scalable, manageable, separable, and as much as possible allow maintenance of all code in Java.

  5. [Dynamic imaging of gastric ulcer healing using modern morphing software].

    PubMed

    Jaspersen, D; Keerl, R; Weber, R; Huppmann, A; Hammar, C H; Draf, W

    1996-06-01

    Until now, the healing of a gastric ulcer recorded by video endoscopy could not be presented as a dynamic process. Documentation of the dynamic healing process failed either because of limited patient compliance or because of inconsistent image framing caused by camera movement, and a time-lapse replay would have made these picture disturbances an essential part of the result. Instead of presenting a continuous film, individual takes of ulcer healing were processed, and a dynamic effect was produced by computer-assisted generation of intermediate pictures. A video was created from short endoscopic sequences recorded at defined time intervals. From each sequence, matching single stills (the so-called original pictures) were selected and spliced together, and the missing intermediate pictures were generated with a special computer technique based on mathematical interpolation. With this technique, the dynamic documentation of gastric ulcer healing in a 47-year-old male patient was performed. The technique enables an almost natural, realistic observation of ulcer healing and promises new physiological and pathophysiological insights in gastroenterologic endoscopy.
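
    A minimal sketch of generating intermediate pictures by interpolation between two stills is shown below. The study used a dedicated morphing package; this example only illustrates the simplest form of the idea (a linear cross-dissolve) on made-up frames.

        # Minimal sketch: intermediate frames by linear interpolation (cross-dissolve)
        # between two endoscopic stills; a stand-in for full image morphing.
        import numpy as np

        def intermediate_frames(frame_a, frame_b, n_steps):
            """Yield n_steps frames blending linearly from frame_a to frame_b."""
            a = frame_a.astype(np.float32)
            b = frame_b.astype(np.float32)
            for i in range(1, n_steps + 1):
                t = i / (n_steps + 1)                 # interpolation weight in (0, 1)
                yield ((1.0 - t) * a + t * b).astype(np.uint8)

        # Hypothetical 8-bit grayscale stills of the same ulcer at two visits
        first = np.full((240, 320), 90, dtype=np.uint8)
        last = np.full((240, 320), 160, dtype=np.uint8)
        frames = list(intermediate_frames(first, last, n_steps=10))
        print(len(frames), frames[0].mean(), frames[-1].mean())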

  6. The Decoding Toolbox (TDT): a versatile software package for multivariate analyses of functional imaging data.

    PubMed

    Hebart, Martin N; Görgen, Kai; Haynes, John-Dylan

    2014-01-01

    The multivariate analysis of brain signals has recently sparked a great amount of interest, yet accessible and versatile tools to carry out decoding analyses are scarce. Here we introduce The Decoding Toolbox (TDT) which represents a user-friendly, powerful and flexible package for multivariate analysis of functional brain imaging data. TDT is written in Matlab and equipped with an interface to the widely used brain data analysis package SPM. The toolbox allows running fast whole-brain analyses, region-of-interest analyses and searchlight analyses, using machine learning classifiers, pattern correlation analysis, or representational similarity analysis. It offers automatic creation and visualization of diverse cross-validation schemes, feature scaling, nested parameter selection, a variety of feature selection methods, multiclass capabilities, and pattern reconstruction from classifier weights. While basic users can implement a generic analysis in one line of code, advanced users can extend the toolbox to their needs or exploit the structure to combine it with external high-performance classification toolboxes. The toolbox comes with an example data set which can be used to try out the various analysis methods. Taken together, TDT offers a promising option for researchers who want to employ multivariate analyses of brain activity patterns. PMID:25610393

  7. User's guide for mapIMG 3--Map image re-projection software package

    USGS Publications Warehouse

    Finn, Michael P.; Mattli, David M.

    2012-01-01

    Version 0.0 (1995), Dan Steinwand, U.S. Geological Survey (USGS)/Earth Resources Observation Systems (EROS) Data Center (EDC)--Version 0.0 was a command line version for UNIX that required four arguments: the input metadata, the output metadata, the input data file, and the output destination path.
    Version 1.0 (2003), Stephen Posch and Michael P. Finn, USGS/Mid-Continent Mapping Center (MCMC)--Version 1.0 added a GUI interface that was built using the Qt library for cross-platform development.
    Version 1.01 (2004), Jason Trent and Michael P. Finn, USGS/MCMC--Version 1.01 suggested bounds for the parameters of each projection. Support was added for larger input files, storage of the last used input and output folders, and for TIFF/GeoTIFF input images.
    Version 2.0 (2005), Robert Buehler, Jason Trent, and Michael P. Finn, USGS/National Geospatial Technical Operations Center (NGTOC)--Version 2.0 added resampling methods (Mean, Mode, Min, Max, and Sum), updated the GUI design, and added the viewer/pre-viewer. The metadata style was changed to XML, and a new naming convention was adopted.
    Version 3.0 (2009), David Mattli and Michael P. Finn, USGS/Center of Excellence for Geospatial Information Science (CEGIS)--Version 3.0 brings optimized resampling methods, an updated GUI, support for less-than-global datasets, and UTM support, and the whole codebase was ported to Qt4.

  8. On-Line Access to Weather Satellite Imagery and Image Manipulation Software

    NASA Technical Reports Server (NTRS)

    Emery, William J.; Kelley, T.; Dozier, J.; Rotar, P.

    1995-01-01

    Advanced Very High Resolution Radiometer and Geostationary Operational Environmental Satellite Imagery, received by antennas located at the University of Colorado, are made available to the Internet users through an on-line data access system. Created as a 'test bed' data system for the National Aeronautics and Space Administration's future Earth Observing System Data and Information System, this test bed provides an opportunity to test both the technical requirements of an on-line data system and the different ways in which the general user community would employ such a system. Initiated in December 1991, the basic data system experienced four major evolutionary changes in response to user requests and requirements. Features added with these changes were the addition of on-line browse, user subsetting, and dynamic image processing/navigation. Over its lifetime the system has grown to a maximum of over 2500 registered users, and after losing many of these users due to hardware changes, the system is once again growing with its own independent mass storage system.

  9. On-line access to weather satellite imagery and image manipulation software

    NASA Technical Reports Server (NTRS)

    Emery, W.; Kelley, T.; Dozier, J.; Rotar, P.

    1995-01-01

    Advanced Very High Resolution Radiometer and Geostationary Operational Environmental Satellite imagery, received by antennas located at the University of Colorado, are made available to the Internet users through an on-line data access system. Created as a 'test bed' system for the National Aeronautics and Space Administration's future Earth Observing System Data and Information System, this test bed provides an opportunity to test both the technical requirements of an on-line data system and the different ways in which the general user community would employ such a system. Initiated in December 1991, the basic data system experienced four major evolutionary changes in response to user requests and requirements. Features added with these changes were the addition of on-line browse, user subsetting, and dynamic image processing/navigation. Over its lifetime the system has grown to a maximum of over 2500 registered users, and after losing many of these users due to hardware changes, the system is once again growing with its own independent mass storage system.

  10. Validation of a digital image processing software package for the in vivo measurement of wear in cemented Charnley total hip arthroplasties.

    PubMed

    Kennard, Emma; Wilcox, Ruth K; Hall, Richard M

    2006-05-01

    Computer-generated images were used to assess image processing software employed in the radiographic evaluation of penetration in total hip replacement. The images were corrupted using Laplacian noise and smoothed to simulate different modulation transfer functions in a range associated with hospital digital radiographic systems. With no corruption, the penetration depth measurements were both precise and accurate. However, as the noise increased so did the inaccuracy and imprecision to levels that may make changes in the penetration observed clinically difficult to discern between follow-up assessments. Simulated rotation of the wire marker produced significant bias in the measured penetration depth. The use of these simulated radiographs allows the evaluation of the software used to process the digital images alone rather than the whole measurement system.
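
    The corruption procedure described above can be sketched as follows, assuming Laplacian noise followed by Gaussian smoothing as a stand-in for the modulation transfer function. The parameter values and the synthetic test image are illustrative, not those of the validation study.

        # Hedged sketch: corrupt a synthetic radiograph with Laplacian noise and
        # Gaussian smoothing to mimic a range of modulation transfer functions.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)

        def corrupt(image, noise_scale, blur_sigma):
            """Add zero-mean Laplacian noise, then low-pass filter to simulate the MTF."""
            noisy = image + rng.laplace(loc=0.0, scale=noise_scale, size=image.shape)
            blurred = gaussian_filter(noisy, sigma=blur_sigma)
            return np.clip(blurred, 0, 255)

        # Synthetic image with a bright circular "wire marker" on a uniform background
        y, x = np.mgrid[0:256, 0:256]
        synthetic = np.where((x - 128) ** 2 + (y - 128) ** 2 < 20 ** 2, 200.0, 50.0)
        degraded = corrupt(synthetic, noise_scale=10.0, blur_sigma=1.5)
        print(degraded.shape, degraded.min(), degraded.max())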

  11. Image-guided Tumor Ablation: Standardization of Terminology and Reporting Criteria—A 10-Year Update

    PubMed Central

    Solbiati, Luigi; Brace, Christopher L.; Breen, David J.; Callstrom, Matthew R.; Charboneau, J. William; Chen, Min-Hua; Choi, Byung Ihn; de Baère, Thierry; Dodd, Gerald D.; Dupuy, Damian E.; Gervais, Debra A.; Gianfelice, David; Gillams, Alice R.; Lee, Fred T.; Leen, Edward; Lencioni, Riccardo; Littrup, Peter J.; Livraghi, Tito; Lu, David S.; McGahan, John P.; Meloni, Maria Franca; Nikolic, Boris; Pereira, Philippe L.; Liang, Ping; Rhim, Hyunchul; Rose, Steven C.; Salem, Riad; Sofocleous, Constantinos T.; Solomon, Stephen B.; Soulen, Michael C.; Tanaka, Masatoshi; Vogl, Thomas J.; Wood, Bradford J.; Goldberg, S. Nahum

    2014-01-01

    Image-guided tumor ablation has become a well-established hallmark of local cancer therapy. The breadth of options available in this growing field increases the need for standardization of terminology and reporting criteria to facilitate effective communication of ideas and appropriate comparison among treatments that use different technologies, such as chemical (eg, ethanol or acetic acid) ablation, thermal therapies (eg, radiofrequency, laser, microwave, focused ultrasound, and cryoablation) and newer ablative modalities such as irreversible electroporation. This updated consensus document provides a framework that will facilitate the clearest communication among investigators regarding ablative technologies. An appropriate vehicle is proposed for reporting the various aspects of image-guided ablation therapy including classification of therapies, procedure terms, descriptors of imaging guidance, and terminology for imaging and pathologic findings. Methods are addressed for standardizing reporting of technique, follow-up, complications, and clinical results. As noted in the original document from 2003, adherence to the recommendations will improve the precision of communications in this field, leading to more accurate comparison of technologies and results, and ultimately to improved patient outcomes. © RSNA, 2014 Online supplemental material is available for this article. PMID:24927329

  12. Image-guided tumor ablation: standardization of terminology and reporting criteria--a 10-year update.

    PubMed

    Ahmed, Muneeb; Solbiati, Luigi; Brace, Christopher L; Breen, David J; Callstrom, Matthew R; Charboneau, J William; Chen, Min-Hua; Choi, Byung Ihn; de Baère, Thierry; Dodd, Gerald D; Dupuy, Damian E; Gervais, Debra A; Gianfelice, David; Gillams, Alice R; Lee, Fred T; Leen, Edward; Lencioni, Riccardo; Littrup, Peter J; Livraghi, Tito; Lu, David S; McGahan, John P; Meloni, Maria Franca; Nikolic, Boris; Pereira, Philippe L; Liang, Ping; Rhim, Hyunchul; Rose, Steven C; Salem, Riad; Sofocleous, Constantinos T; Solomon, Stephen B; Soulen, Michael C; Tanaka, Masatoshi; Vogl, Thomas J; Wood, Bradford J; Goldberg, S Nahum

    2014-11-01

    Image-guided tumor ablation has become a well-established hallmark of local cancer therapy. The breadth of options available in this growing field increases the need for standardization of terminology and reporting criteria to facilitate effective communication of ideas and appropriate comparison among treatments that use different technologies, such as chemical (eg, ethanol or acetic acid) ablation, thermal therapies (eg, radiofrequency, laser, microwave, focused ultrasound, and cryoablation) and newer ablative modalities such as irreversible electroporation. This updated consensus document provides a framework that will facilitate the clearest communication among investigators regarding ablative technologies. An appropriate vehicle is proposed for reporting the various aspects of image-guided ablation therapy including classification of therapies, procedure terms, descriptors of imaging guidance, and terminology for imaging and pathologic findings. Methods are addressed for standardizing reporting of technique, follow-up, complications, and clinical results. As noted in the original document from 2003, adherence to the recommendations will improve the precision of communications in this field, leading to more accurate comparison of technologies and results, and ultimately to improved patient outcomes. PMID:25442132

  13. The analysis and rationale behind the upgrading of existing standard definition thermal imagers to high definition

    NASA Astrophysics Data System (ADS)

    Goss, Tristan M.

    2016-05-01

    With 640x512 pixel format IR detector arrays having been on the market for the past decade, Standard Definition (SD) thermal imaging sensors have been developed and deployed across the world. Now with 1280x1024 pixel format IR detector arrays becoming readily available designers of thermal imager systems face new challenges as pixel sizes reduce and the demand and applications for High Definition (HD) thermal imaging sensors increases. In many instances the upgrading of existing under-sampled SD thermal imaging sensors into more optimally sampled or oversampled HD thermal imaging sensors provides a more cost effective and reduced time to market option than to design and develop a completely new sensor. This paper presents the analysis and rationale behind the selection of the best suited HD pixel format MWIR detector for the upgrade of an existing SD thermal imaging sensor to a higher performing HD thermal imaging sensor. Several commercially available and "soon to be" commercially available HD small pixel IR detector options are included as part of the analysis and are considered for this upgrade. The impact the proposed detectors have on the sensor's overall sensitivity, noise and resolution is analyzed, and the improved range performance is predicted. Furthermore with reduced dark currents due to the smaller pixel sizes, the candidate HD MWIR detectors are operated at higher temperatures when compared to their SD predecessors. Therefore, as an additional constraint and as a design goal, the feasibility of achieving upgraded performance without any increase in the size, weight and power consumption of the thermal imager is discussed herein.

  14. Comparison of retinal thickness by Fourier-domain optical coherence tomography and OCT retinal image analysis software segmentation analysis derived from Stratus optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Tátrai, Erika; Ranganathan, Sudarshan; Ferencz, Mária; Debuc, Delia Cabrera; Somfai, Gábor Márk

    2011-05-01

    Purpose: To compare thickness measurements between Fourier-domain optical coherence tomography (FD-OCT) and time-domain OCT images analyzed with custom-built OCT retinal image analysis software (OCTRIMA). Methods: Macular mapping (MM) by StratusOCT and MM5 and MM6 scanning protocols by an RTVue-100 FD-OCT device are performed on 11 subjects with no retinal pathology. Retinal thickness (RT) and the thickness of the ganglion cell complex (GCC) obtained with the MM6 protocol are compared for each early treatment diabetic retinopathy study (ETDRS)-like region with corresponding results obtained with OCTRIMA. RT results are compared by analysis of variance with Dunnett post hoc test, while GCC results are compared by paired t-test. Results: A high correlation is obtained for the RT between OCTRIMA and the MM5 and MM6 protocols. In all regions, StratusOCT provides the lowest RT values (mean difference 43 +/- 8 μm compared to OCTRIMA, and 42 +/- 14 μm compared to RTVue MM6). All RTVue GCC measurements were significantly thicker (mean difference between 6 and 12 μm) than the GCC measurements of OCTRIMA. Conclusion: High correspondence is obtained between FD-OCT and StratusOCT-derived OCTRIMA analysis, not only for RT but also for the segmentation of intraretinal layers. However, a correction factor is required to compensate for OCT-specific differences to make measurements more comparable to any available OCT device.
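
    As an illustration of the GCC comparison by paired t-test mentioned above, the following hedged sketch runs SciPy's paired test on invented thickness values for 11 subjects; it does not reproduce the study data.

        # Illustrative paired t-test on made-up regional thickness values (micrometres)
        import numpy as np
        from scipy import stats

        octrima = np.array([238, 245, 250, 242, 255, 248, 240, 252, 246, 249, 243], float)
        rtvue_mm6 = octrima + np.random.default_rng(1).normal(loc=8.0, scale=3.0, size=11)

        t_stat, p_value = stats.ttest_rel(rtvue_mm6, octrima)
        print(f"mean difference = {np.mean(rtvue_mm6 - octrima):.1f} um, p = {p_value:.4f}")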

  15. Phantom evaluation of an image-guided navigation system based on electromagnetic tracking and open source software

    NASA Astrophysics Data System (ADS)

    Lin, Ralph; Cheng, Peng; Lindisch, David; Banovac, Filip; Lee, Justin; Cleary, Kevin

    2008-03-01

    We have developed an image-guided navigation system using electromagnetically-tracked tools, with potential applications for abdominal procedures such as biopsies, radiofrequency ablations, and radioactive seed placements. We present the results of two phantom studies using our navigation system in a clinical environment. In the first study, a physician and medical resident performed a total of 18 targeting passes in the abdomen of an anthropomorphic phantom based solely upon image guidance. The distance between the target and needle tip location was measured based on confirmatory scans which gave an average of 3.56 mm. In the second study, three foam nodules were placed at different depths in a gelatin phantom. Ten targeting passes were attempted in each of the three depths. Final distances between the target and needle tip were measured which gave an average of 3.00 mm. In addition to these targeting studies, we discuss our refinement to the standard four-quadrant image-guided navigation user interface, based on clinician preferences. We believe these refinements increase the usability of our system while decreasing targeting error.
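
    The targeting error reported above is simply the Euclidean distance between the planned target and the needle tip on the confirmatory scan. A minimal sketch with hypothetical coordinates (in millimetres):

        # Per-pass targeting error as Euclidean distance; coordinates are made up.
        import numpy as np

        targets = np.array([[10.0, 22.0, 35.0], [12.5, 20.0, 30.0], [8.0, 25.0, 33.0]])
        needle_tips = np.array([[11.5, 24.0, 37.0], [14.0, 21.0, 32.5], [9.0, 27.5, 34.0]])

        errors = np.linalg.norm(needle_tips - targets, axis=1)   # per-pass error (mm)
        print("per-pass errors:", np.round(errors, 2), "mean:", round(errors.mean(), 2))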

  16. A survey on performance status of mammography machines: image quality and dosimetry studies using a standard mammography imaging phantom.

    PubMed

    Sharma, Reena; Sharma, Sunil Dutt; Mayya, Y S

    2012-07-01

    It is essential to perform quality control (QC) tests on mammography equipment in order to produce appropriate image quality at a lower radiation dose to patients. Imaging and dosimetric measurements on 15 mammography machines located at the busiest radiology centres of Mumbai, India were carried out using a standard CIRS breast imaging phantom in order to assess the level of image quality and breast doses. The QC tests include evaluations of image quality and the mean glandular dose (MGD), which is derived from the breast entrance exposure, half-value layer (HVL), compressed breast thickness (CBT) and breast tissue composition. At the majority of the centres, film-processing and darkroom conditions were not maintained as required to meet the technical development specifications for the mammography film in use, as recommended by the American College of Radiology (ACR). In most of the surveyed centres, the viewbox luminance and room illuminance conditions were not in line with the mammography requirements recommended by the ACR. The measured HVL values of the machines were in the range of 0.27-0.39 mm aluminium (Al) with a mean value of 0.33±0.04 mm Al at 28 kV(p), in line with the ACR recommendation. The measured MGDs were in the range of 0.14-3.80 mGy with a mean value of 1.34 mGy and varied from centre to centre by a factor of 27.14. With respect to patient dose and image quality, only one mammography centre exceeded the recommended MGD of 3.0 mGy per view (with a value of 3.80 mGy), and at eight mammography centres the measured central background density (CBD) values for the mammography phantom image were below the recommended CBD range of 1.2-2.0 optical density. PMID:22090414
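
    The MGD is conventionally estimated by multiplying the entrance air kerma by a conversion coefficient that depends on HVL, compressed breast thickness, and tissue composition. The sketch below uses a placeholder coefficient table rather than the published tabulated values, so it only illustrates the arithmetic.

        # Rough sketch of MGD = entrance air kerma x conversion coefficient.
        # The coefficient table is a placeholder, not published tabulated values.
        def mgd_mGy(entrance_air_kerma_mGy, hvl_mm_al, cbt_cm, g_table):
            """Look up the nearest conversion coefficient and apply it to the kerma."""
            key = min(g_table, key=lambda k: abs(k[0] - hvl_mm_al) + abs(k[1] - cbt_cm))
            return entrance_air_kerma_mGy * g_table[key]

        # Hypothetical (HVL mm Al, CBT cm) -> conversion coefficient entries
        g_table = {(0.30, 4.5): 0.183, (0.35, 4.5): 0.208, (0.35, 6.0): 0.177}
        print(round(mgd_mGy(7.2, 0.33, 4.5, g_table), 2), "mGy")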

  17. Towards a repository for standardized medical image and signal case data annotated with ground truth.

    PubMed

    Deserno, Thomas M; Welter, Petra; Horsch, Alexander

    2012-04-01

    Validation of medical signal and image processing systems requires quality-assured, representative and generally acknowledged databases accompanied by appropriate reference (ground truth) and clinical metadata, which are composed laboriously for each project and are not shared with the scientific community. In our vision, such data will be stored centrally in an open repository. We propose an architecture for a standardized case data and ground truth information repository supporting the evaluation and analysis of computer-aided diagnosis based on (a) the Reference Model for an Open Archival Information System (OAIS) provided by the NASA Consultative Committee for Space Data Systems (ISO 14721:2003), (b) the Dublin Core Metadata Initiative (DCMI) Element Set (ISO 15836:2009), (c) the Open Archive Initiative (OAI) Protocol for Metadata Harvesting, and (d) the Image Retrieval in Medical Applications (IRMA) framework. In our implementation, a portal bunches all of the functionalities that are needed for data submission and retrieval. The complete life cycle of the data (define, create, store, sustain, share, use, and improve) is managed. Sophisticated search tools make it easier to use the datasets, which may be merged from different providers. An integrated history record guarantees reproducibility. A standardized creation report is generated with a permanent digital object identifier. This creation report must be referenced by all of the data users. Peer-reviewed e-publishing of these reports will create a reputation for the data contributors and will form de-facto standards regarding image and signal datasets. Good practice guidelines for validation methodology complement the concept of the case repository. This procedure will increase the comparability of evaluation studies for medical signal and image processing methods and applications. PMID:22075810

  18. Stereo Imaging Velocimetry Technique Using Standard Off-the-Shelf CCD Cameras

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2004-01-01

    Stereo imaging velocimetry is a fluid physics technique for measuring three-dimensional (3D) velocities at a plurality of points. This technique provides full-field 3D analysis of any optically clear fluid or gas experiment seeded with tracer particles. Unlike current 3D particle imaging velocimetry systems that rely primarily on laser-based systems, stereo imaging velocimetry uses standard off-the-shelf charge-coupled device (CCD) cameras to provide accurate and reproducible 3D velocity profiles for experiments that require 3D analysis. Using two cameras aligned orthogonally, we present a closed mathematical solution resulting in an accurate 3D approximation of the observation volume. The stereo imaging velocimetry technique is divided into four phases: 3D camera calibration, particle overlap decomposition, particle tracking, and stereo matching. Each phase is explained in detail. In addition to being utilized for space shuttle experiments, stereo imaging velocimetry has been applied to the fields of fluid physics, bioscience, and colloidal microscopy.
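
    A greatly simplified sketch of the orthogonal two-camera reconstruction is given below: camera 1 is assumed to provide (x, y) coordinates and camera 2 (z, y), and a velocity follows from positions in consecutive frames. The full method additionally involves camera calibration, overlap decomposition, particle tracking, and stereo matching, none of which are reproduced here.

        # Simplified sketch of merging two orthogonal views into 3D positions/velocities.
        import numpy as np

        def reconstruct_3d(xy_view, zy_view):
            """Merge (x, y) from camera 1 with (z, y) from camera 2, averaging the shared y."""
            x, y1 = xy_view[:, 0], xy_view[:, 1]
            z, y2 = zy_view[:, 0], zy_view[:, 1]
            return np.column_stack([x, (y1 + y2) / 2.0, z])

        # Hypothetical matched particle positions (mm) in two consecutive frames
        frame0 = reconstruct_3d(np.array([[1.0, 2.0]]), np.array([[5.0, 2.1]]))
        frame1 = reconstruct_3d(np.array([[1.3, 2.4]]), np.array([[5.2, 2.5]]))
        dt = 1.0 / 30.0                                   # frame interval (s)
        velocity = (frame1 - frame0) / dt                 # mm/s per particle
        print(velocity)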

  19. A format standard for efficient interchange of high-contrast direct imaging science products

    NASA Astrophysics Data System (ADS)

    Choquet, Élodie; Vigan, Arthur; Soummer, Rémi; Chauvin, Gaël.; Pueyo, Laurent; Perrin, Marshall D.; Hines, Dean C.

    2014-07-01

    The present and next few years will see the arrival of several new coronagraphic instruments dedicated to the detection and characterization of planetary systems. These ground- and space-based instruments (Gemini/GPI, VLT/SPHERE, Subaru/CHARIS, JWST NIRCam and MIRI coronagraphs, among others) will provide a large number of new candidates through multiple nearby-star surveys and will complete and extend those acquired with current-generation instruments (Palomar P1640, VLT/NACO, Keck, HST). To optimize the use of the wealth of data, including non-detection results, the science products of these instruments will need to be shared among the community. In the long term such data exchange will significantly ease companion confirmations, planet characterization via different types of instruments (integral field spectrographs, polarimetric imagers, etc.), and Monte-Carlo population studies from detection and non-detection results. In this context, we initiated a collaborative effort between the teams developing the data reduction pipelines for SPHERE, GPI, and the JWST coronagraphs, and the ALICE (Archival Legacy Investigations of Circumstellar Environment) collaboration, which is currently reprocessing all the HST/NICMOS coronagraphic surveys. We are developing a standard format for the science products generated by high-contrast direct imaging instruments (reduced image, sensitivity limits, noise image, candidate list, etc.) that is directly usable for astrophysical investigations. In this paper, we present first results of this work and propose a preliminary format for the science products. We call for discussions in the high-contrast direct imaging community to develop this effort, reach a consensus and finalize this standard. This action will be critical to enable data interchange and combination in a consistent way between several instruments and to strengthen scientific production in the community.

  20. Repeatability and Reproducibility of Quantitative Corneal Shape Analysis after Orthokeratology Treatment Using Image-Pro Plus Software

    PubMed Central

    Mei, Ying; Tang, Zhiping

    2016-01-01

    Purpose. To evaluate the repeatability and reproducibility of quantitative analysis of the morphological corneal changes after orthokeratology treatment using “Image-Pro Plus 6.0” software (IPP). Methods. Three sets of measurements were obtained: two sets by examiner 1, 5 days apart, and one set by examiner 2 on the same day. Parameters of the eccentric distance, eccentric angle, area, and roundness of the corneal treatment zone were measured using IPP. The intraclass correlation coefficient (ICC) and coefficient of repeatability (COR) were used to calculate the repeatability and reproducibility of these three sets of measurements. Results. ICC analysis suggested “excellent” reliability of more than 0.885 for all variables, and COR values were less than 10% for all variables within the same examiner. ICC analysis suggested “excellent” reliability for all variables of more than 0.90, and COR values were less than 10% for all variables between different examiners. All extreme values of the eccentric distance and area of the treatment zone pointed to the same material number in three sets of measurements. Conclusions. IPP could be used to acquire the exact data of the characteristic morphological corneal changes after orthokeratology treatment with good repeatability and reproducibility. This trial is registered with trial registration number: ChiCTR-IPR-14005505. PMID:27774312
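
    The two agreement statistics can be sketched as follows, assuming a one-way intraclass correlation coefficient and a Bland-Altman-style repeatability coefficient (1.96 times the standard deviation of paired differences); the exact ICC model and COR definition used in the study may differ, and the data below are invented.

        # Hedged sketch of ICC(1,1) and a Bland-Altman-style repeatability coefficient.
        import numpy as np

        def icc_oneway(data):
            """ICC(1,1) for an (n_subjects, k_measurements) array."""
            n, k = data.shape
            grand_mean = data.mean()
            subject_means = data.mean(axis=1)
            msb = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
            msw = np.sum((data - subject_means[:, None]) ** 2) / (n * (k - 1))
            return (msb - msw) / (msb + (k - 1) * msw)

        rng = np.random.default_rng(2)
        true_area = rng.normal(28.0, 3.0, size=20)          # treatment-zone area (mm^2)
        session1 = true_area + rng.normal(0, 0.5, size=20)  # examiner 1, visit 1
        session2 = true_area + rng.normal(0, 0.5, size=20)  # examiner 1, visit 2

        print("ICC:", round(icc_oneway(np.column_stack([session1, session2])), 3))
        print("COR:", round(1.96 * np.std(session1 - session2, ddof=1), 3))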

  1. Computer Software.

    ERIC Educational Resources Information Center

    Kay, Alan

    1984-01-01

    Discusses the nature and development of computer software. Programing, programing languages, types of software (including dynamic spreadsheets), and software of the future are among the topics considered. (JN)

  2. A User's Software Dilemma.

    ERIC Educational Resources Information Center

    Splittgerber, Fred; Stirzaker, N. A.

    1989-01-01

    Discusses several issues associated with purchasing computer software packages: (1) continual updates; (2) lack of industrial standards for software development; and (3) expense. Many packages fail to provide technical assistance from a local dealer or the package developer. Without standards, costs to business, education, and the general public…

  3. A user's guide for the signal processing software for image and speech compression developed in the Communications and Signal Processing Laboratory (CSPL), version 1

    NASA Technical Reports Server (NTRS)

    Kumar, P.; Lin, F. Y.; Vaishampayan, V.; Farvardin, N.

    1986-01-01

    A complete documentation of the software developed in the Communication and Signal Processing Laboratory (CSPL) during the period of July 1985 to March 1986 is provided. Utility programs and subroutines that were developed for a user-friendly image and speech processing environment are described. Additional programs for data compression of image and speech type signals are included. Also, programs for the zero-memory and block transform quantization in the presence of channel noise are described. Finally, several routines for simulating the performance of image compression algorithms are included.

  4. In vivo validation of cardiac output assessment in non-standard 3D echocardiographic images

    NASA Astrophysics Data System (ADS)

    Nillesen, M. M.; Lopata, R. G. P.; de Boode, W. P.; Gerrits, I. H.; Huisman, H. J.; Thijssen, J. M.; Kapusta, L.; de Korte, C. L.

    2009-04-01

    Automatic segmentation of the endocardial surface in three-dimensional (3D) echocardiographic images is an important tool to assess left ventricular (LV) geometry and cardiac output (CO). The presence of speckle noise as well as the nonisotropic characteristics of the myocardium impose strong demands on the segmentation algorithm. In the analysis of normal heart geometries of standardized (apical) views, it is advantageous to incorporate a priori knowledge about the shape and appearance of the heart. In contrast, when analyzing abnormal heart geometries, for example in children with congenital malformations, this a priori knowledge about the shape and anatomy of the LV might induce erroneous segmentation results. This study describes a fully automated segmentation method for the analysis of non-standard echocardiographic images, without making strong assumptions on the shape and appearance of the heart. The method was validated in vivo in a piglet model. Real-time 3D echocardiographic image sequences of five piglets were acquired in radiofrequency (rf) format. These ECG-gated full volume images were acquired intra-operatively in a non-standard view. Cardiac blood flow was measured simultaneously by an ultrasound transit time flow probe positioned around the common pulmonary artery. Three-dimensional adaptive filtering using the characteristics of speckle was performed on the demodulated rf data to reduce the influence of speckle noise and to optimize the distinction between blood and myocardium. A gradient-based 3D deformable simplex mesh was then used to segment the endocardial surface. A gradient and a speed force were included as external forces of the model. To balance data fitting and mesh regularity, one fixed set of weighting parameters of internal, gradient and speed forces was used for all data sets. End-diastolic and end-systolic volumes were computed from the segmented endocardial surface. The cardiac output derived from this automatic segmentation was

  5. Comparative Performance Of A Standard And High Line Rate Video Imaging System In A Cardiac Catheterization Laboratory

    NASA Astrophysics Data System (ADS)

    Rossi, Raymond P.; Ahrens, Charles; Groves, Bertron M.

    1985-09-01

    The performance of a new high line rate (1023) video imaging system (VHR) installed in the cardiac catheterization laboratory at the University of Colorado Health Sciences Center is compared to the previously installed standard line rate (525) video imaging system (pre-VHR). Comparative performance was assessed both quantitatively using a standardized evaluation protocol and qualitatively based on analysis of data collected during the observation of clinical procedures for which the cardiologists were asked to rank the quality of the fluoroscopic image. The results of this comparative study are presented and suggest that the performance of the high line rate system is significantly improved over the standard line rate system.

  6. Estimation of Beef Marbling Standard Number Based on Dynamic Ultrasound Image

    NASA Astrophysics Data System (ADS)

    Fukuda, Osamu; Nabeoka, Natsuko; Miyajima, Tsuneharu; Hashimoto, Daisuke; Okushi, Masaaki

    Up to the present time, estimation of the Beef Marbling Standard (BMS) number based on ultrasound echo imaging of live beef cattle has been studied. However, previous attempts to establish an objective and highly accurate estimation method have not been satisfactory. Our previous work showed that estimation of the BMS number was achieved by neural network modeling with non-linear mapping ability. This paper reports a significant improvement of the estimation method based on dynamic ultrasound images. The proposed method consists of four processes: the extraction of dynamic and static texture features, frequency analysis, principal component analysis, and the estimation of the BMS number by neural network. In order to evaluate the effectiveness of the proposed method, experiments were conducted with or without dynamic image information. The number of target regions was set to 1 or 2, and two groups of samples, Case 1 and Case 2, were used for the experiments. Case 1 and Case 2 included 18 and 27 samples, which were measured at Saga Livestock Experiment Station and Nagasaki Agricultural and Forestry Technical Development Center, respectively. The image analysis was performed using only Case 1 or using the mixed group of Case 1 and 2. The experimental results with Case 1 showed that the correlation coefficient between the estimated and the actual BMS number was improved from r=0.55 to r=0.79 by adding dynamic image information. Moreover, the correlation coefficient was further raised to r=0.84 when the number of target regions was increased from 1 to 2. Similarly, for the mixed group of Case 1 and 2, the correlation coefficients were r=0.77, r=0.76, and r=0.88, respectively. These results suggested that a high estimation accuracy was achieved by adding dynamic image information and increasing the number of target regions.
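
    A loose sketch of the described pipeline (texture features, principal component analysis, and a neural-network estimate of the BMS number) is shown below using scikit-learn on made-up feature vectors; it stands in for, rather than reproduces, the authors' method.

        # Loose sketch: texture features -> PCA -> neural-network regression of BMS.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(3)
        n_samples, n_features = 45, 40
        features = rng.normal(size=(n_samples, n_features))     # static + dynamic texture features
        bms = 3 + 6 * rng.random(n_samples)                     # hypothetical BMS numbers (3-9)

        model = make_pipeline(
            StandardScaler(),
            PCA(n_components=8),                                # compress correlated texture features
            MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
        )
        model.fit(features, bms)
        predicted = model.predict(features)
        print("correlation r =", round(np.corrcoef(bms, predicted)[0, 1], 2))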

  7. Vehicle occupancy detection camera position optimization using design of experiments and standard image references

    NASA Astrophysics Data System (ADS)

    Paul, Peter; Hoover, Martin; Rabbani, Mojgan

    2013-03-01

    Camera positioning and orientation is important to applications in domains such as transportation since the objects to be imaged vary greatly in shape and size. In a typical transportation application that requires capturing still images, inductive loops buried in the ground or laser trigger sensors are used when a vehicle reaches the image capture zone to trigger the image capture system. The camera in such a system is in a fixed position pointed at the roadway and at a fixed orientation. Thus the problem is to determine the optimal location and orientation of the camera when capturing images from a wide variety of vehicles. Methods from Design for Six Sigma, including identifying important parameters and noise sources and performing systematically designed experiments (DOE) can be used to determine an effective set of parameter settings for the camera position and orientation under these conditions. In the transportation application of high occupancy vehicle lane enforcement, the number of passengers in the vehicle is to be counted. Past work has described front seat vehicle occupant counting using a camera mounted on an overhead gantry looking through the front windshield in order to capture images of vehicle occupants. However, viewing rear seat passengers is more problematic due to obstructions including the vehicle body frame structures and seats. One approach is to view the rear seats through the side window. In this situation the problem of optimally positioning and orienting the camera to adequately capture the rear seats through the side window can be addressed through a designed experiment. In any automated traffic enforcement system it is necessary for humans to be able to review any automatically captured digital imagery in order to verify detected infractions. Thus for defining an output to be optimized for the designed experiment, a human defined standard image reference (SIR) was used to quantify the quality of the line-of-sight to the rear seats of

  8. Comparisons of neural networks to standard techniques for image classification and correlation

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1994-01-01

    Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood of a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
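
    For reference, the "standard technique" of Gaussian maximum-likelihood classification can be sketched compactly: per-class means and covariances are estimated from training samples, and each pixel is assigned to the class with the highest log-likelihood. The spectral classes and values below are synthetic.

        # Compact sketch of a Gaussian maximum-likelihood classifier for multispectral pixels.
        import numpy as np

        rng = np.random.default_rng(4)
        classes = {
            "water":  rng.multivariate_normal([30, 20, 10], np.diag([9, 9, 9]), 200),
            "urban":  rng.multivariate_normal([90, 85, 80], np.diag([25, 25, 25]), 200),
            "desert": rng.multivariate_normal([140, 120, 100], np.diag([16, 16, 16]), 200),
        }

        def train(samples_by_class):
            return {c: (s.mean(axis=0), np.cov(s, rowvar=False)) for c, s in samples_by_class.items()}

        def classify(pixel, params):
            """Assign the class with the highest Gaussian log-likelihood."""
            def loglik(mean, cov):
                diff = pixel - mean
                return -0.5 * (diff @ np.linalg.solve(cov, diff) + np.log(np.linalg.det(cov)))
            return max(params, key=lambda c: loglik(*params[c]))

        params = train(classes)
        print(classify(np.array([32.0, 22.0, 12.0]), params))   # expected: "water"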

  9. ASTER VNIR 15 years growth to the standard imaging radiometer in remote sensing

    NASA Astrophysics Data System (ADS)

    Hiramatsu, Masaru; Inada, Hitomi; Kikuchi, Masakuni; Sakuma, Fumihiro

    2015-10-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Visible and Near Infrared Radiometer (VNIR) is a remote sensing instrument with three spectral bands and one along-track stereoscopic band. ASTER VNIR's planned long-life design (more than 5 years) has been successfully exceeded. ASTER VNIR has been imaging the worldwide Earth surface in multiband images and contributing to the Global Digital Elevation Model (GDEM). VNIR data support detailed worldwide maps and change detection of the Earth's surface, such as land-use transitions and topographical changes. With a geometric resolution of 15 meters, ASTER VNIR is the highest spatial resolution instrument on NASA's Terra spacecraft and was therefore planned as the geometric basis for map making among the Terra instruments. Over 15 years, VNIR has grown into a standard map-maker for space remote sensing. This paper presents highlights of VNIR's 15-year operation, including change-detection images, DEM products, and calibration results. VNIR has acquired worldwide Earth images for biological, climatological, geological, and hydrological studies, and this successful record points the way for future space remote sensing instruments. Furthermore, the 15-year trends in VNIR observation data and onboard calibration data provide guidance and support for follow-on instruments.

  10. A passive autofocus system by using standard deviation of the image on a liquid lens

    NASA Astrophysics Data System (ADS)

    Rasti, Pejman; Kesküla, Arko; Haus, Henry; Schlaak, Helmut F.; Anbarjafari, Gholamreza; Aabloo, Alvo; Kiefer, Rudolf

    2015-04-01

    Today many devices, such as cell phones, tablets, and medical instruments, include a small camera, and a micro lens is required to reduce device size. In this paper, an autofocus system is used to find the best position of a liquid lens without any active components such as ultrasonic or infrared sensors. Specifically, a passive autofocus system is proposed that uses the standard deviation of the images formed by a liquid lens consisting of a Dielectric Elastomer Actuator (DEA) membrane between oil and water.
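
    The focus measure itself is straightforward: sweep the lens drive setting, compute the standard deviation of each captured image, and keep the setting that maximises it. In the sketch below the capture function is a stand-in for the real camera and lens interface.

        # Minimal sketch of a passive, std-based autofocus sweep; the camera is simulated.
        import numpy as np

        def capture_image(lens_setting, best_setting=0.6):
            """Hypothetical camera: contrast (and hence std) peaks at the in-focus setting."""
            rng = np.random.default_rng(int(lens_setting * 1000))
            sharpness = np.exp(-((lens_setting - best_setting) ** 2) / 0.02)
            return rng.normal(128, 5 + 45 * sharpness, size=(120, 160))

        def autofocus(settings):
            scores = {s: np.std(capture_image(s)) for s in settings}
            return max(scores, key=scores.get), scores

        best, scores = autofocus(np.linspace(0.0, 1.0, 11))
        print("selected lens setting:", round(best, 2))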

  11. A standard CMOS high-voltage transmitter for ultrasound medical imaging applications

    NASA Astrophysics Data System (ADS)

    Cha, Hyouk-Kyu

    2014-03-01

    A high-voltage (HV) transmitter for ultrasound medical imaging applications is designed using 0.18-µm CMOS (complementary metal oxide semiconductor) technology. The proposed HV transmitter achieves high integration by employing standard CMOS transistors in a stacked configuration with dynamic gate biasing circuit while successfully driving the capacitive output load with an HV pulse without device breakdown reliability issues. The HV transmitter, which includes the output driver and voltage level-shifters, generates up to 30-Vp-p pulses at 1.25 MHz frequency and occupies 0.035 mm² of layout area.

  12. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods

    NASA Astrophysics Data System (ADS)

    Jha, Abhinav K.; Caffo, Brian; Frey, Eric C.

    2016-04-01

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest
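
    Once the linear-model parameters (slope and noise standard deviation) of each method have been estimated, the NSR figure of merit is simply their ratio. The sketch below illustrates only this ranking step with invented method names and numbers; the no-gold-standard estimation itself is not reproduced.

        # Illustrative ranking by noise-to-slope ratio (NSR); parameters are invented
        # placeholders, not outputs of the actual no-gold-standard estimation.
        estimated_params = {
            # method: (slope a, noise standard deviation sigma)
            "method_A": (0.92, 0.11),
            "method_B": (0.97, 0.08),
            "method_C": (0.85, 0.15),
        }

        nsr = {m: sigma / a for m, (a, sigma) in estimated_params.items()}
        for method, value in sorted(nsr.items(), key=lambda kv: kv[1]):
            print(f"{method:10s} NSR = {value:.3f}")   # lower NSR = better precision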

  13. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods.

    PubMed

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-04-01

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest

  14. Diagnostic use of facial image analysis software in endocrine and genetic disorders: review, current results and future perspectives.

    PubMed

    Kosilek, R P; Frohner, R; Würtz, R P; Berr, C M; Schopohl, J; Reincke, M; Schneider, H J

    2015-10-01

    Cushing's syndrome (CS) and acromegaly are endocrine diseases that are currently diagnosed with a delay of several years from disease onset. Novel diagnostic approaches and increased awareness among physicians are needed. Face classification technology has recently been introduced as a promising diagnostic tool for CS and acromegaly in pilot studies. It has also been used to classify various genetic syndromes using regular facial photographs. The authors provide a basic explanation of the technology, review available literature regarding its use in a medical setting, and discuss possible future developments. The method the authors have employed in previous studies uses standardized frontal and profile facial photographs for classification. Image analysis is based on applying mathematical functions evaluating geometry and image texture to a grid of nodes semi-automatically placed on relevant facial structures, yielding a binary classification result. Ongoing research focuses on improving diagnostic algorithms of this method and bringing it closer to clinical use. Regarding future perspectives, the authors propose an online interface that facilitates submission of patient data for analysis and retrieval of results as a possible model for clinical application. PMID:26162404

  15. Software Quality Assurance Metrics

    NASA Technical Reports Server (NTRS)

    McRae, Kalindra A.

    2004-01-01

    Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards, and procedures. In software development, software quality means meeting requirements and a degree of excellence and refinement of a project or product. Software Quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software Metrics help us understand the technical process that is used to develop a product. The process is measured to improve it and the product is measured to increase quality throughout the life cycle of software. Software Metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of the software development can be measured. If Software Metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects which have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA Metrics that have been used in other projects but are not currently being used by the SA team, and to report them to the Software Assurance team to see if any metrics can be implemented in their software assurance life cycle process.

  16. Comparison of Ultrahigh- and Standard-Resolution Optical Coherence Tomography for Imaging Macular Hole Pathology and Repair

    PubMed Central

    Ko, Tony H.; Fujimoto, James G.; Duker, Jay S.; Paunescu, Lelia A.; Drexler, Wolfgang; Baumal, Caroline R.; Puliafito, Carmen A.; Reichel, Elias; Rogers, Adam H.; Schuman, Joel S.

    2007-01-01

    Purpose: To compare ultrahigh-resolution optical coherence tomography (UHR-OCT) technology to a standard-resolution OCT instrument for the imaging of macular hole pathology and repair; to identify situations where UHR-OCT provides additional information on disease morphology, pathogenesis, and management; and to use UHR-OCT as a baseline for improving the interpretation of the standard-resolution images.
    Design: Observational and interventional case series.
    Participants: Twenty-nine eyes of 24 patients clinically diagnosed with macular hole in at least one eye.
    Methods: A UHR-OCT system has been developed and employed in a tertiary-care ophthalmology clinic. Using a femtosecond laser as the low-coherence light source, this new UHR-OCT system can achieve an unprecedented 3-μm axial resolution for retinal OCT imaging. Comparative imaging was performed with UHR-OCT and standard 10-μm resolution OCT in 29 eyes of 24 patients with various stages of macular holes. Imaging was also performed on a subset of the population before and after macular hole surgery.
    Main Outcome Measures: Ultrahigh- and standard-resolution cross-sectional OCT images of macular hole pathologies.
    Results: Both UHR-OCT and standard-resolution OCT exhibited comparable performance in differentiating various stages of macular holes. The UHR-OCT provided improved imaging of finer intraretinal structures, such as the external limiting membrane and photoreceptor inner segment (IS) and outer segment (OS), and identification of the anatomy of successful surgical repair. The improved resolution of UHR-OCT enabled imaging of previously unidentified changes in photoreceptor morphology associated with macular hole pathology and postoperative repair. Visualization of the junction between the photoreceptor IS and OS was found to be an important indicator of photoreceptor integrity for both standard-resolution and UHR-OCT images.
    Conclusions: Ultrahigh-resolution optical coherence tomography improves the visualization

  17. Model-based software engineering for an imaging CubeSat and its extrapolation to other missions

    NASA Astrophysics Data System (ADS)

    Mohammad, Atif; Straub, Jeremy; Korvald, Christoffer; Grant, Emanuel

    Small satellites with their limited computational capabilities require that software engineering techniques promote efficient use of spacecraft resources. A model-driven approach to software engineering is an excellent solution to this resource maximization challenge as it facilitates visualization of the key solution processes and data elements.

  18. 123I-Meta-iodobenzylguanidine Sympathetic Imaging: Standardization and Application to Neurological Diseases

    PubMed Central

    Yamada, Masahito

    2016-01-01

    123I-meta-iodobenzylguanidine (MIBG) has become widely applied in Japan since its introduction to clinical cardiology and neurology practice in the 1990s. Neurological studies found decreased cardiac uptake of 123I-MIBG in Lewy-body diseases including Parkinson's disease and dementia with Lewy bodies. Thus, cardiac MIBG uptake is now considered a biomarker of Lewy body diseases. Although scintigraphic images of 123I-MIBG can be visually interpreted, an average count ratio of heart-to-mediastinum (H/M) has commonly served as a semi-quantitative marker of sympathetic activity. Since H/M ratios significantly vary according to acquisition and processing conditions, quality control should be appropriate, and quantitation should be standardized. The threshold H/M ratio for differentiating Lewy-body disease is 2.0-2.1, and was based on standardized H/M ratios to comparable values of medium-energy collimators. Parkinson's disease can be separated from various types of parkinsonian syndromes using cardiac 123I-MIBG, whereas activity is decreased on images of Lewy-body diseases using both 123I-ioflupane for the striatum and 123I-MIBG. Despite being a simple index, the H/M ratio of 123I-MIBG uptake is reproducible and can serve as an effective tool to support a diagnosis of Lewy-body diseases in neurological practice. PMID:27689024

  19. 123I-Meta-iodobenzylguanidine Sympathetic Imaging: Standardization and Application to Neurological Diseases

    PubMed Central

    Yamada, Masahito

    2016-01-01

    123I-meta-iodobenzylguanidine (MIBG) has become widely applied in Japan since its introduction to clinical cardiology and neurology practice in the 1990s. Neurological studies found decreased cardiac uptake of 123I-MIBG in Lewy-body diseases including Parkinson's disease and dementia with Lewy bodies. Thus, cardiac MIBG uptake is now considered a biomarker of Lewy body diseases. Although scintigraphic images of 123I-MIBG can be visually interpreted, an average count ratio of heart-to-mediastinum (H/M) has commonly served as a semi-quantitative marker of sympathetic activity. Since H/M ratios significantly vary according to acquisition and processing conditions, quality control should be appropriate, and quantitation should be standardized. The threshold H/M ratio for differentiating Lewy-body disease is 2.0-2.1, and was based on standardized H/M ratios to comparable values of medium-energy collimators. Parkinson's disease can be separated from various types of parkinsonian syndromes using cardiac 123I-MIBG, whereas activity is decreased on images of Lewy-body diseases using both 123I-ioflupane for the striatum and 123I-MIBG. Despite being a simple index, the H/M ratio of 123I-MIBG uptake is reproducible and can serve as an effective tool to support a diagnosis of Lewy-body diseases in neurological practice.

  20. (123)I-Meta-iodobenzylguanidine Sympathetic Imaging: Standardization and Application to Neurological Diseases.

    PubMed

    Nakajima, Kenichi; Yamada, Masahito

    2016-09-01

    (123)I-meta-iodobenzylguanidine (MIBG) has become widely applied in Japan since its introduction to clinical cardiology and neurology practice in the 1990s. Neurological studies found decreased cardiac uptake of (123)I-MIBG in Lewy-body diseases including Parkinson's disease and dementia with Lewy bodies. Thus, cardiac MIBG uptake is now considered a biomarker of Lewy body diseases. Although scintigraphic images of (123)I-MIBG can be visually interpreted, an average count ratio of heart-to-mediastinum (H/M) has commonly served as a semi-quantitative marker of sympathetic activity. Since H/M ratios significantly vary according to acquisition and processing conditions, quality control should be appropriate, and quantitation should be standardized. The threshold H/M ratio for differentiating Lewy-body disease is 2.0-2.1, and was based on standardized H/M ratios to comparable values of medium-energy collimators. Parkinson's disease can be separated from various types of parkinsonian syndromes using cardiac (123)I-MIBG, whereas activity is decreased on images of Lewy-body diseases using both (123)I-ioflupane for the striatum and (123)I-MIBG. Despite being a simple index, the H/M ratio of (123)I-MIBG uptake is reproducible and can serve as an effective tool to support a diagnosis of Lewy-body diseases in neurological practice. PMID:27689024
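
    The H/M ratio itself is a simple region-of-interest statistic: mean counts in a cardiac ROI divided by mean counts in an upper-mediastinal ROI on a planar anterior image. The sketch below uses a synthetic image and fixed ROI boxes, and ignores ROI placement details and collimator cross-calibration.

        # Simple sketch of the heart-to-mediastinum (H/M) count ratio on a synthetic image.
        import numpy as np

        rng = np.random.default_rng(5)
        planar_image = rng.poisson(lam=40, size=(128, 128)).astype(float)
        planar_image[60:90, 40:80] += 60          # hypothetical cardiac uptake

        heart_roi = planar_image[60:90, 40:80]
        mediastinum_roi = planar_image[10:25, 55:70]

        hm_ratio = heart_roi.mean() / mediastinum_roi.mean()
        print(f"H/M ratio = {hm_ratio:.2f}")      # values near or below ~2.0-2.1 are the cited threshold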

  1. Automated digital image analysis of islet cell mass using Nikon's inverted eclipse Ti microscope and software to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.

    PubMed

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2015-01-01

    Reliable assessment of islet viability, mass, and purity is required prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is by direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of the two methods showed good correlations between mean values of IEQ number (r(2) = 0.91) and total islet number (r(2) = 0.88); the correlation increased to r(2) = 0.93 when islet surface area was compared with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method versus the ADIA method (p < 0.001). The ADIA method also detected small islets between 10 and 50 µm in size. Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation. Implementation of this
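
    Islet mass is conventionally reported in islet equivalents (IEQ), i.e., islet volume normalized to that of a 150 µm diameter islet. The short Python sketch below illustrates only that conversion; it is not the ADIA/NIS-Elements pipeline, and the diameters are invented example values.

    ```python
    # Minimal sketch: converting measured islet diameters into islet equivalents
    # (one IEQ = the volume of a 150-micrometre-diameter islet).
    def islet_equivalents(diameters_um, reference_um=150.0):
        """Total IEQ for a list of islet diameters given in micrometres."""
        return sum((d / reference_um) ** 3 for d in diameters_um)

    sample = [50, 120, 150, 220, 310]   # illustrative islet diameters (um)
    print(f"Total islets: {len(sample)}")
    print(f"IEQ: {islet_equivalents(sample):.2f}")
    ```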

  2. Piezoresistive AFM cantilevers surpassing standard optical beam deflection in low noise topography imaging

    PubMed Central

    Dukic, Maja; Adams, Jonathan D.; Fantner, Georg E.

    2015-01-01

    Optical beam deflection (OBD) is the most prevalent method for measuring cantilever deflections in atomic force microscopy (AFM), mainly due to its excellent noise performance. In contrast, piezoresistive strain-sensing techniques provide benefits over OBD in readout size and the ability to image in light-sensitive or opaque environments, but traditionally have worse noise performance. Miniaturisation of cantilevers, however, brings much greater benefit to the noise performance of piezoresistive sensing than to OBD. In this paper, we show both theoretically and experimentally that, by using small-sized piezoresistive cantilevers, AFM imaging noise equal to or lower than the OBD readout noise is feasible at standard scanning speeds and power dissipation. We demonstrate that with both readouts we achieve a system noise of ≈0.3 Å at 20 kHz measurement bandwidth. Finally, we show that small-sized piezoresistive cantilevers are well suited for piezoresistive nanoscale imaging of biological and solid state samples in air. PMID:26574164

  3. Piezoresistive AFM cantilevers surpassing standard optical beam deflection in low noise topography imaging

    NASA Astrophysics Data System (ADS)

    Dukic, Maja; Adams, Jonathan D.; Fantner, Georg E.

    2015-11-01

    Optical beam deflection (OBD) is the most prevalent method for measuring cantilever deflections in atomic force microscopy (AFM), mainly due to its excellent noise performance. In contrast, piezoresistive strain-sensing techniques provide benefits over OBD in readout size and the ability to image in light-sensitive or opaque environments, but traditionally have worse noise performance. Miniaturisation of cantilevers, however, brings much greater benefit to the noise performance of piezoresistive sensing than to OBD. In this paper, we show both theoretically and experimentally that, by using small-sized piezoresistive cantilevers, AFM imaging noise equal to or lower than the OBD readout noise is feasible at standard scanning speeds and power dissipation. We demonstrate that with both readouts we achieve a system noise of ≈0.3 Å at 20 kHz measurement bandwidth. Finally, we show that small-sized piezoresistive cantilevers are well suited for piezoresistive nanoscale imaging of biological and solid state samples in air.
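
    As a rough consistency check of the quoted figures (and assuming approximately white noise, which the abstract does not claim), the ~0.3 Å system noise in a 20 kHz bandwidth corresponds to an average deflection noise density of roughly 0.2 pm/√Hz, as the short calculation below shows.

    ```python
    # Back-of-the-envelope conversion of RMS noise over a bandwidth into an
    # equivalent (white) noise density; both input values are from the abstract.
    import math

    system_noise_angstrom = 0.3      # RMS system noise, in angstroms
    bandwidth_hz = 20e3              # measurement bandwidth, in hertz

    density_angstrom_per_rtHz = system_noise_angstrom / math.sqrt(bandwidth_hz)
    # 1 angstrom = 1e5 femtometres
    print(f"{density_angstrom_per_rtHz * 1e5:.0f} fm/sqrt(Hz)")   # ~212 fm/sqrt(Hz)
    ```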

  4. New secure communication-layer standard for medical image management (ISCL)

    NASA Astrophysics Data System (ADS)

    Kita, Kouichi; Nohara, Takashi; Hosoba, Minoru; Yachida, Masuyoshi; Yamaguchi, Masahiro; Ohyama, Nagaaki

    1999-07-01

    This paper introduces a summary of the standard draft of ISCL 1.00, which will be published officially by MEDIS-DC. ISCL is an abbreviation of Integrated Secure Communication Layer Protocols for Secure Medical Image Management Systems. ISCL is a security layer which manages security functions between the presentation layer and the TCP/IP layer. The ISCL mechanism depends on the basic functions of a smart IC card and a symmetric secret-key mechanism. A symmetric key for each session is generated by the internal authentication function of a smart IC card with a random number. ISCL has three functions that assure authentication, confidentiality, and integrity. Entity authentication is performed through a 3-path, 4-way method using the internal and external authentication functions of a smart IC card. The confidentiality algorithm and the MAC algorithm for integrity are selectable. ISCL protocols communicate through Message Blocks, which consist of a Message Header and Message Data. ISCL protocols are being evaluated by applying them to a regional collaboration system for image diagnosis and an on-line secure electronic storage system for medical images. These projects are supported by the Medical Information System Development Center and show that ISCL is useful for maintaining security.

  5. Quantitative MALDI tandem mass spectrometric imaging of cocaine from brain tissue with a deuterated internal standard.

    PubMed

    Pirman, David A; Reich, Richard F; Kiss, András; Heeren, Ron M A; Yost, Richard A

    2013-01-15

    Mass spectrometric imaging (MSI) is an analytical technique used to determine the distribution of individual analytes within a given sample. A wide array of analytes and samples can be investigated by MSI, including drug distribution in rats, lipid analysis from brain tissue, protein differentiation in tumors, and plant metabolite distributions. Matrix-assisted laser desorption/ionization (MALDI) is a soft ionization technique capable of desorbing and ionizing a large range of compounds, and it is the most common ionization source used in MSI. MALDI mass spectrometry (MS) is generally considered to be a qualitative analytical technique because of significant ion-signal variability. Consequently, MSI is also thought to be a qualitative technique because of the quantitative limitations of MALDI coupled with the inhomogeneity of tissue sections inherent in an MSI experiment. Thus, conclusions based on MS images are often limited by the inability to correlate ion signal increases with actual concentration increases. Here, we report a quantitative MSI method for the analysis of cocaine (COC) from brain tissue using a deuterated internal standard (COC-d(3)) combined with wide-isolation MS/MS for analysis of the tissue extracts with scan-by-scan COC-to-COC-d(3) normalization. This resulted in significant improvements in signal reproducibility and calibration curve linearity. Quantitative results from the MSI experiments were compared with quantitative results from liquid chromatography (LC)-MS/MS results from brain tissue extracts. Two different quantitative MSI techniques (standard addition and external calibration) produced quantitative results comparable to LC-MS/MS data. Tissue extracts were also analyzed by MALDI wide-isolation MS/MS, and quantitative results were nearly identical to those from LC-MS/MS. These results clearly demonstrate the necessity for an internal standard for quantitative MSI experiments. PMID:23214490
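
    The core of the reported quantitation is a per-scan ratio of analyte signal to the deuterated internal standard, converted to concentration via a calibration curve. The Python sketch below illustrates that general idea only; it is not the authors' workflow, and the intensity arrays, calibration points, and units are invented placeholders.

    ```python
    # Minimal sketch: scan-by-scan normalization of an analyte signal to a
    # co-applied deuterated internal standard, followed by a linear external
    # calibration.
    import numpy as np

    def normalized_response(analyte_counts, internal_std_counts):
        """Per-scan analyte/internal-standard ratio, averaged over all scans."""
        analyte = np.asarray(analyte_counts, dtype=float)
        istd = np.asarray(internal_std_counts, dtype=float)
        return float(np.mean(analyte / istd))

    # Hypothetical external calibration: known concentrations vs. averaged ratios.
    conc = np.array([0.5, 1.0, 2.0, 5.0])        # e.g. ug analyte per g tissue
    ratio = np.array([0.11, 0.22, 0.41, 1.05])   # measured analyte / d3-standard ratios
    slope, intercept = np.polyfit(conc, ratio, 1)

    unknown = normalized_response([820, 790, 910], [2050, 1980, 2210])
    print(f"Estimated concentration: {(unknown - intercept) / slope:.2f}")
    ```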

  6. GOATS Image Projection Component

    NASA Technical Reports Server (NTRS)

    Haber, Benjamin M.; Green, Joseph J.

    2011-01-01

    When doing mission analysis and design of an imaging system in orbit around the Earth, answering the fundamental question of imaging performance requires an understanding of the image products that will be produced by the imaging system. GOATS software represents a series of MATLAB functions to provide for geometric image projections. Unique features of the software include function modularity, a standard MATLAB interface, easy-to-understand first-principles-based analysis, and the ability to perform geometric image projections of framing type imaging systems. The software modules are created for maximum analysis utility, and can all be used independently for many varied analysis tasks, or used in conjunction with other orbit analysis tools.

  7. Recent progress in the development of INCITS W1.1: appearance-based image quality standards for printers

    NASA Astrophysics Data System (ADS)

    Bouk, Theodore; Dalal, Edul N.; Donohue, Kevin D.; Farnand, Susan; Gaykema, Frans; Gusev, Dmitri; Haley, Allan; Jeran, Paul L.; Kozak, Don; Kress, William C.; Martinez, Óscar; Mashtare, Dale; McCarthy, Ann; Ng, Yee S.; Rasmussen, D. René; Robb, Mark; Shin, Helen; Quiroga Slickers, Myriam; Barney Smith, Elisa H.; Tse, Ming-Kai; Zeise, Eric; Zoltner, Susan

    2007-01-01

    In September 2000, INCITS W1 (the U.S. representative of ISO/IEC JTC1/SC28, the standardization committee for office equipment) was chartered to develop an appearance-based image quality standard. (1),(2) The resulting W1.1 project is based on a proposal (4) that perceived image quality can be described by a small set of broad-based attributes. There are currently five ad hoc teams, each working towards the development of standards for evaluation of perceptual image quality of color printers for one or more of these image quality attributes. This paper summarizes the work in progress of the teams addressing the attributes of Macro-Uniformity, Color Rendition, Text and Line Quality and Micro-Uniformity.

  8. Calibration standard of body tissue with magnetic nanocomposites for MRI and X-ray imaging

    NASA Astrophysics Data System (ADS)

    Rahn, Helene; Woodward, Robert; House, Michael; Engineer, Diana; Feindel, Kirk; Dutz, Silvio; Odenbach, Stefan; StPierre, Tim

    2016-05-01

    We present a first study of a long-term phantom for Magnetic Resonance Imaging (MRI) and X-ray imaging of biological tissues with magnetic nanocomposites (MNC) suitable for 3-dimensional and quantitative imaging of tissues after, e.g., magnetically assisted cancer treatments. We performed a cross-calibration of X-ray microcomputed tomography (XμCT) and MRI with a joint calibration standard for both imaging techniques. For this, we have designed a phantom for MRI and X-ray computed tomography which represents biological tissue enriched with MNC. The developed phantoms consist of an elastomer with different concentrations of multi-core MNC. The matrix material is a synthetic thermoplastic gel, PermaGel (PG). The developed phantoms have been analyzed with Nuclear Magnetic Resonance (NMR) Relaxometry (Bruker minispec mq 60) at 1.4 T to obtain R2 transverse relaxation rates, with SQUID (Superconducting QUantum Interference Device) magnetometry and Inductively Coupled Plasma Mass Spectrometry (ICP-MS) to verify the magnetite concentration, and with XμCT and 9.4 T MRI to visualize the phantoms 3-dimensionally and also to obtain T2 relaxation times. A sensitivity range is determined for the standard imaging techniques X-ray computed tomography (XCT) and MRI as well as for NMR. These novel phantoms show long-term stability over several months to years. It was possible to suspend a particular MNC within the PG over a concentration range from 0 mg/ml to 6.914 mg/ml. The R2 relaxation rates from 1.4 T NMR relaxometry show a clear correlation (R2=0.994) with MNC concentrations between 0 mg/ml and 4.5 mg/ml. The MRI experiments also showed a linear correlation between R2 relaxation and MNC concentration, but in a range between 0 mg/ml and 1.435 mg/ml. XμCT was shown to best display moderate and high MNC concentrations. The sensitivity range for this particular XμCT apparatus extends from 0.569 mg/ml to 6.914 mg/ml. The
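
    The phantom calibration rests on the approximately linear dependence of the transverse relaxation rate R2 on MNC concentration. A minimal Python sketch of such a fit follows; the concentration and R2 values are invented placeholders, not the study's data.

    ```python
    # Minimal sketch: linear fit of transverse relaxation rate R2 against
    # nanocomposite concentration, the relation a relaxometric calibration uses.
    import numpy as np

    conc_mg_per_ml = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.5])   # MNC concentration
    r2_per_s = np.array([4.0, 9.5, 15.2, 26.1, 37.0, 53.5])     # measured R2 (1/s)

    slope, intercept = np.polyfit(conc_mg_per_ml, r2_per_s, 1)
    r = np.corrcoef(conc_mg_per_ml, r2_per_s)[0, 1]

    print(f"R2 = {slope:.1f} * c + {intercept:.1f}   (r^2 = {r**2:.3f})")
    # An unknown sample's concentration can then be read off as (R2 - intercept) / slope.
    ```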

  9. Comparison of Standard Versus Wide-Field Composite Images of the Corneal Subbasal Layer by In Vivo Confocal Microscopy

    PubMed Central

    Kheirkhah, Ahmad; Muller, Rodrigo; Mikolajczak, Janine; Ren, Ai; Kadas, Ella Maria; Zimmermann, Hanna; Pruess, Harald; Paul, Friedemann; Brandt, Alexander U.; Hamrah, Pedram

    2015-01-01

    Purpose To evaluate whether the densities of corneal subbasal nerves and epithelial immune dendritiform cells (DCs) are comparable between a set of three representative standard images of in vivo confocal microscopy (IVCM) and the wide-field mapped composite IVCM images. Methods This prospective, cross-sectional, and masked study included 110 eyes of 58 patients seen in a neurology clinic who underwent laser-scanning IVCM (Heidelberg Retina Tomograph 3) of the central cornea. Densities of subbasal corneal nerves and DCs were compared between the average of three representative standard images and the wide-field mapped composite images, which were reconstructed by automated mapping. Results There were no statistically significant differences between the average of three representative standard images (0.16 mm2 each) and the wide-field composite images (1.29 ± 0.64 mm2) in terms of mean subbasal nerve density (17.10 ± 6.10 vs. 17.17 ± 5.60 mm/mm2, respectively, P = 0.87) and mean subbasal DC density (53.2 ± 67.8 vs. 49.0 ± 54.3 cells/mm2, respectively, P = 0.43). However, there were notable differences in subbasal nerve and DC densities between these two methods in eyes with very low nerve density or very high DC density. Conclusions There are no significant differences in the mean subbasal nerve and DC densities between the average values of three representative standard IVCM images and wide-field mapped composite images. Therefore, these standard images can be used in clinical studies to accurately measure cellular structures in the subbasal layer. PMID:26325419

  10. Software for minimalistic data management in large camera trap studies.

    PubMed

    Krishnappa, Yathin S; Turner, Wendy C

    2014-11-01

    The use of camera traps is now widespread and their importance in wildlife studies well understood. Camera trap studies can produce millions of photographs and there is a need for software to help manage photographs efficiently. In this paper, we describe a software system that was built to successfully manage a large behavioral camera trap study that produced more than a million photographs. We describe the software architecture and the design decisions that shaped the evolution of the program over the study's three year period. The software system has the ability to automatically extract metadata from images, and add customized metadata to the images in a standardized format. The software system can be installed as a standalone application on popular operating systems. It is minimalistic, scalable and extendable so that it can be used by small teams or individual researchers for a broad variety of camera trap studies.

  11. Software for minimalistic data management in large camera trap studies

    PubMed Central

    Krishnappa, Yathin S.; Turner, Wendy C.

    2014-01-01

    The use of camera traps is now widespread and their importance in wildlife studies well understood. Camera trap studies can produce millions of photographs and there is a need for software to help manage photographs efficiently. In this paper, we describe a software system that was built to successfully manage a large behavioral camera trap study that produced more than a million photographs. We describe the software architecture and the design decisions that shaped the evolution of the program over the study’s three year period. The software system has the ability to automatically extract metadata from images, and add customized metadata to the images in a standardized format. The software system can be installed as a standalone application on popular operating systems. It is minimalistic, scalable and extendable so that it can be used by small teams or individual researchers for a broad variety of camera trap studies. PMID:25110471
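
    One building block such a data-management tool needs is extraction of image metadata into a standardized record. The sketch below shows one possible way to do this in Python with the Pillow library; it is not the authors' software, and the file path, station identifier, and species field are hypothetical.

    ```python
    # Minimal sketch: read basic EXIF fields from a camera-trap photograph and
    # combine them with study-specific annotations in a flat record.
    from PIL import Image
    from PIL.ExifTags import TAGS

    def read_metadata(path, station_id, species=None):
        """Return a dict combining EXIF fields with study-specific annotations."""
        exif = {}
        with Image.open(path) as img:
            for tag_id, value in img.getexif().items():
                exif[TAGS.get(tag_id, tag_id)] = value
        return {
            "file": path,
            "timestamp": exif.get("DateTime"),
            "camera": exif.get("Model"),
            "station": station_id,    # added by the operator
            "species": species,       # e.g. filled in during photo review
        }

    record = read_metadata("IMG_0001.JPG", station_id="WB-07", species="zebra")
    print(record)
    ```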

  12. An intelligent pre-processing framework for standardizing medical images for CAD and other post-processing applications

    NASA Astrophysics Data System (ADS)

    Raghupathi, Lakshminarasimhan; Devarakota, Pandu R.; Wolf, Matthias

    2012-03-01

    There is an increasing need to provide end-users with seamless and secure access to healthcare information acquired from a diverse range of sources. This might include local and remote hospital sites equipped by different vendors and practicing varied acquisition protocols, and also heterogeneous external sources such as the Internet cloud. In such scenarios, image post-processing tools such as CAD (computer-aided diagnosis), which were hitherto developed using a smaller set of images, may not always work optimally on newer sets of images having entirely different characteristics. In this paper, we propose a framework that assesses the quality of a given input image and automatically applies an appropriate pre-processing method so that the image characteristics are normalized regardless of the source. We focus mainly on medical images, and the objective of this pre-processing method is to standardize the performance of various image processing and workflow applications, such as CAD, so that they perform in a consistent manner. First, our system consists of an assessment step wherein an image is evaluated based on criteria such as noise and image sharpness. Depending on the measured characteristics, we then apply an appropriate normalization technique, which completes the overall pre-processing framework. A systematic evaluation of the proposed scheme is carried out on a large set of CT images acquired from various vendors, including images reconstructed with next-generation iterative methods. Results demonstrate that the images are normalized and thus suitable for an existing LungCAD prototype.
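
    The general pattern described, assessing a quality metric and then normalizing only when needed, can be illustrated with a small Python sketch. This is not the proposed framework; the noise estimator, the threshold value, and the Gaussian smoothing step are stand-in assumptions.

    ```python
    # Minimal sketch: estimate image noise, then denoise only images whose
    # estimated noise exceeds a chosen threshold.
    import numpy as np
    from scipy import ndimage

    def estimate_noise(image):
        """Rough noise estimate: robust spread of the high-pass residual."""
        residual = image - ndimage.uniform_filter(image, size=3)
        return 1.4826 * np.median(np.abs(residual - np.median(residual)))

    def preprocess(image, noise_threshold=20.0):
        """Apply smoothing only when the image is noisier than the threshold."""
        if estimate_noise(image) > noise_threshold:
            return ndimage.gaussian_filter(image, sigma=1.0)
        return image

    ct_slice = np.random.normal(0.0, 30.0, size=(512, 512))  # stand-in for a noisy CT slice
    print(f"before: {estimate_noise(ct_slice):.1f}, after: {estimate_noise(preprocess(ct_slice)):.1f}")
    ```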

  13. Healthcare Software Assurance

    PubMed Central

    Cooper, Jason G.; Pauley, Keith A.

    2006-01-01

    Software assurance is a rigorous, lifecycle phase-independent set of activities which ensure completeness, safety, and reliability of software processes and products. This is accomplished by guaranteeing conformance to all requirements, standards, procedures, and regulations. These assurance processes are even more important when coupled with healthcare software systems, embedded software in medical instrumentation, and other healthcare-oriented life-critical systems. The current Food and Drug Administration (FDA) regulatory requirements and guidance documentation do not address certain aspects of complete software assurance activities. In addition, the FDA’s software oversight processes require enhancement to include increasingly complex healthcare systems such as Hospital Information Systems (HIS). The importance of complete software assurance is introduced, current regulatory requirements and guidance are discussed, and the necessity for enhancements to the current processes is highlighted. PMID:17238324

  14. Selecting Software.

    ERIC Educational Resources Information Center

    Pereus, Steven C.

    2002-01-01

    Describes a comprehensive computer software selection and evaluation process, including documenting district needs, evaluating software packages, weighing the alternatives, and making the purchase. (PKP)

  15. ASPIC: STARLINK image processing package

    NASA Astrophysics Data System (ADS)

    Davenhall, A. C.; Hartley, Ken F.; Penny, Alan J.; Kelly, B. D.; King, Dave J.; Lupton, W. F.; Tudhope, D.; Pike, C. D.; Cooke, J. A.; Pence, W. D.; Wallace, Patrick T.; Brownrigg, D. R. K.; Baines, Dave W. T.; Warren-Smith, Rodney F.; McNally, B. V.; Bell, L. L.; Jones, T. A.; Terrett, Dave L.; Pearce, D. J.; Carey, J. V.; Currie, Malcolm J.; Benn, Chris; Beard, S. M.; Giddings, Jack R.; Balona, Luis A.; Harrison, B.; Wood, Roger; Sparkes, Bill; Allan, Peter M.; Berry, David S.; Shirt, J. V.

    2015-10-01

    ASPIC handled basic astronomical image processing. Early releases concentrated on image arithmetic, standard filters, expansion/contraction/selection/combination of images, and displaying and manipulating images on the ARGS and other devices. Later releases added new astronomy-specific applications to this sound framework. The ASPIC collection of about 400 image-processing programs was written using the Starlink "interim" environment in the 1980s; the software is now obsolete.

  16. Fastbus software progress

    SciTech Connect

    Gustavson, D.B.

    1982-01-01

    The current status of the Fastbus software development program of the Fastbus Software Working Group is reported, and future plans are discussed. A package of Fastbus interface subroutines has been prepared as a proposed standard, language support for diagnostics and bench testing has been developed, and new documentation to help users find these resources and use them effectively is being written.

  17. JPL Robotics Laboratory computer vision software library

    NASA Technical Reports Server (NTRS)

    Cunningham, R.

    1984-01-01

    The past ten years of research on computer vision have matured into a powerful real-time system comprising standardized commercial hardware, computers, and pipeline-processing laboratory prototypes, supported by an extensive set of image processing algorithms. The software system was constructed to be transportable via the choice of a popular high-level language (PASCAL) and a widely used computer (VAX-11/750); it comprises a whole realm of low-level and high-level processing software that has proven versatile for applications ranging from factory automation to space satellite tracking and grappling.

  18. The DTI Challenge: Towards Standardized Evaluation of Diffusion Tensor Imaging Tractography for Neurosurgery

    PubMed Central

    Pujol, Sonia; Wells, William; Pierpaoli, Carlo; Brun, Caroline; Gee, James; Cheng, Guang; Vemuri, Baba; Commowick, Olivier; Prima, Sylvain; Stamm, Aymeric; Goubran, Maged; Khan, Ali; Peters, Terry; Neher, Peter; Maier-Hein, Klaus H.; Shi, Yundi; Tristan-Vega, Antonio; Veni, Gopalkrishna; Whitaker, Ross; Styner, Martin; Westin, Carl-Fredrik; Gouttard, Sylvain; Norton, Isaiah; Chauvin, Laurent; Mamata, Hatsuho; Gerig, Guido; Nabavi, Arya; Golby, Alexandra; Kikinis, Ron

    2015-01-01

    Background and Purpose Diffusion tensor imaging tractography reconstruction of white matter pathways can help guide brain tumor resection. However, DTI tracts are complex mathematical objects and the validity of tractography-derived information in clinical settings has yet to be fully established. To address this issue, we initiated the DTI Challenge, an international working group of clinicians and scientists whose goal was to provide standardized evaluation of tractography methods for neurosurgery. The purpose of this empirical study was to evaluate different tractography techniques in the first DTI Challenge workshop. Methods Eight international teams from leading institutions reconstructed the pyramidal tract in four neurosurgical cases presenting with a glioma near the motor cortex. Tractography methods included deterministic, probabilistic, filtered, and global approaches. Standardized evaluation of the tracts consisted of the qualitative review of the pyramidal pathways by a panel of neurosurgeons and DTI experts and the quantitative evaluation of the degree of agreement among methods. Results The evaluation of tractography reconstructions showed great inter-algorithm variability. Although most methods found projections of the pyramidal tract from the medial portion of the motor strip, only a few algorithms could trace the lateral projections from the hand, face, and tongue area. In addition, the structure of disagreement among methods was similar across hemispheres despite the anatomical distortions caused by pathological tissues. Conclusions The DTI Challenge provides a benchmark for the standardized evaluation of tractography methods on neurosurgical data. This study suggests that there are still limitations to the clinical use of tractography for neurosurgical decision-making. PMID:26259925

  19. Split-screen display system and standardized methods for ultrasound image acquisition and multi-frame data processing

    NASA Technical Reports Server (NTRS)

    Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)

    2011-01-01

    A standardized acquisition methodology assists operators to accurately replicate high resolution B-mode ultrasound images obtained over several spaced-apart examinations utilizing a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time "live" ultrasound image from a current examination is displayed next to the earlier image on the opposite side of the screen. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Utilizing this methodology, dynamic material properties of arterial structures, such as IMT and diameter, are measured in a standard region over successive image frames. Each frame of the sequence has its echo edge boundaries automatically determined by using the immediately prior frame's true echo edge coordinates as initial boundary conditions. Computerized echo edge recognition and tracking over multiple successive image frames enhances measurement of arterial diameter and IMT and allows for improved vascular dimension measurements, including vascular stiffness and IMT determinations.

  20. [The Development of a Normal Database of Elderly People for Use with the Statistical Analysis Software Easy Z-score Imaging System with 99mTc-ECD SPECT].

    PubMed

    Nemoto, Hirobumi; Iwasaka, Akemi; Hashimoto, Shingo; Hara, Tadashi; Nemoto, Kiyotaka; Asada, Takashi

    2015-11-01

    We created a new normal database of elderly individuals (Tsukuba-NDB), comprising 44 healthy individuals aged 75 to 89 years, for the easy Z-score Imaging System (eZIS), a statistical imaging analysis software package. The Tsukuba-NDB was compared with a conventional NDB (Musashi-NDB) using Statistical Parametric Mapping (SPM8), eZIS analysis, mean images, standard deviation (SD) images, SD values, and specific volume-of-interest analysis (SVA). Furthermore, the association of the mean cerebral blood flow (mCBF) with various clinical indicators was statistically analyzed. A group comparison using SPM8 indicated that the t-value of the Tsukuba-NDB was lower in the frontoparietal region but tended to be higher in the bilateral temporal lobes and the base of the brain than that of the Musashi-NDB. The results of eZIS analysis with the Musashi-NDB in 48 subjects indicated the presence of mild decreases in cerebral blood flow in the bilateral frontoparietal lobes of 9 subjects, the precuneus and posterior cingulate gyrus of 5 subjects, the lingual gyrus of 4 subjects, and near the left frontal gyrus, temporal lobe, superior temporal gyrus, and lenticular nucleus of 12 subjects. The mean images showed no visual differences between the two NDBs. The SD image intensities and SD values were lower in the Tsukuba-NDB. Clinical case comparison and visual evaluation demonstrated that the sites of decreased blood flow were more clearly indicated by the Tsukuba-NDB. Furthermore, mCBF was 40.87 ± 0.52 ml/100 g/min (mean ± SE) and tended to decrease with age; the tendency was stronger in male subjects than in female subjects. Among various clinical indicators, the platelet count was statistically significantly correlated with CBF. In conclusion, our results suggest that the Tsukuba-NDB, which is incorporated into the statistical imaging analysis software eZIS, is sensitive to changes in cerebral blood flow caused by cranial nerve disease, dementia, and cerebrovascular accidents, and can provide precise
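
    The Z-score at the heart of eZIS-style analysis compares a patient's spatially normalized scan voxel-wise against the normal database's mean and standard-deviation images. The sketch below illustrates only that arithmetic, using one common sign convention and synthetic arrays; it is not the eZIS implementation or the Tsukuba-NDB data.

    ```python
    # Minimal sketch: voxel-wise Z-score map against a normal database,
    # with Z = (normal mean - patient) / normal SD, so reduced uptake gives positive Z.
    import numpy as np

    def z_score_map(patient, ndb_mean, ndb_sd, eps=1e-6):
        """Voxel-wise Z-scores of a patient scan relative to a normal database."""
        return (ndb_mean - patient) / (ndb_sd + eps)

    shape = (64, 64, 64)
    ndb_mean = np.full(shape, 50.0)                  # normal-database mean image
    ndb_sd = np.full(shape, 5.0)                     # normal-database SD image
    patient = np.random.normal(48.0, 5.0, shape)     # spatially normalized patient scan

    z = z_score_map(patient, ndb_mean, ndb_sd)
    print(f"voxels with Z > 2: {(z > 2).sum()}")
    ```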

  2. Software-only IR image generation and reticle simulation for the HWIL testing of a single detector frequency modulated reticle seeker

    NASA Astrophysics Data System (ADS)

    Delport, Jan Peet; le Roux, Francois P. J.; du Plooy, Matthys J. U.; Theron, Hendrik J.; Annamalai, Leeandran

    2004-08-01

    Hardware-in-the-Loop (HWIL) testing of seeker systems usually requires a 5-axis flight motion simulator (FMS) coupled to expensive hardware for infrared (IR) scene generation and projection. Similar tests can be conducted by using a 3-axis flight motion simulator, bypassing the seeker optics and injecting a synthetically calculated detector signal directly into the seeker. The constantly increasing speed and memory bandwidth of high-end personal computers make them attractive software rendering platforms. A software OpenGL pipeline provides flexibility in terms of access to the rendered output, colour channel dynamic range and lighting equations. This paper describes how a system was constructed using personal computer hardware to perform closed tracking loop HWIL testing of a single detector frequency modulated reticle seeker. The main parts of the system that are described include: * The software-only implementation of OpenGL used to render the IR image with floating point accuracy directly to system memory. * The software used to inject the detector signal and extract the seeker look position. * The architecture used to control the flight motion simulator.

  3. SynPAnal: software for rapid quantification of the density and intensity of protein puncta from fluorescence microscopy images of neurons.

    PubMed

    Danielson, Eric; Lee, Sang H

    2014-01-01

    Continuous modification of the protein composition at synapses is a driving force for the plastic changes of synaptic strength, and provides the fundamental molecular mechanism of synaptic plasticity and information storage in the brain. Studying synaptic protein turnover is not only important for understanding learning and memory, but also has direct implications for understanding pathological conditions like aging, neurodegenerative diseases, and psychiatric disorders. Proteins involved in synaptic transmission and synaptic plasticity are typically concentrated at synapses of neurons and thus appear as puncta (clusters) in immunofluorescence microscopy images. Quantitative measurement of the changes in puncta density, intensity, and sizes of specific proteins provides valuable information on their function in synaptic transmission, circuit development, synaptic plasticity, and synaptopathy. Unfortunately, puncta quantification is very labor-intensive and time-consuming. In this article, we describe a software tool designed for the rapid semi-automatic detection and quantification of synaptic protein puncta from 2D immunofluorescence images generated by confocal laser scanning microscopy. The software, dubbed SynPAnal (for Synaptic Puncta Analysis), streamlines data quantification of puncta density and average intensity, thereby increasing data analysis throughput compared to a manual method. SynPAnal is stand-alone software written using the JAVA programming language, and thus is portable and platform-free.
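
    Puncta quantification of this kind typically reduces to thresholding, labeling connected components, and reporting density and intensity. The Python sketch below illustrates that generic pipeline only; it is not SynPAnal (which the abstract describes as a JAVA application), and the threshold and pixel size are arbitrary assumptions.

    ```python
    # Minimal sketch: threshold a fluorescence image, label connected puncta,
    # and report puncta count, density, and mean punctum intensity.
    import numpy as np
    from scipy import ndimage

    def quantify_puncta(image, threshold, um_per_pixel=0.1):
        """Return puncta count, density per 100 um^2, and mean punctum intensity."""
        mask = image > threshold
        labels, count = ndimage.label(mask)
        area_um2 = image.size * um_per_pixel ** 2
        mean_intensity = float(image[mask].mean()) if count else 0.0
        return count, 100.0 * count / area_um2, mean_intensity

    img = np.random.normal(10.0, 2.0, (512, 512))
    img[100:104, 200:204] += 50.0                   # synthetic punctum
    n, density, intensity = quantify_puncta(img, threshold=25.0)
    print(n, round(density, 3), round(intensity, 1))
    ```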

  4. Environmental, scanning electron and optical microscope image analysis software for determining volume and occupied area of solid-state fermentation fungal cultures.

    PubMed

    Osma, Johann F; Toca-Herrera, José L; Rodríguez-Couto, Susana

    2011-01-01

    Here we propose software for estimating the occupied area and volume of fungal cultures. This software was developed using a Matlab platform and allows analysis of high-definition images from optical, electron, or atomic force microscopes. In a first step, a single hypha grown on potato dextrose agar was monitored using optical microscopy to estimate the change in occupied area and volume. Weight measurements were carried out to compare them with the estimated volume, revealing a slight difference of less than 1.5%. Similarly, samples from two different solid-state fermentation cultures were analyzed using images from a scanning electron microscope (SEM) and an environmental SEM (ESEM). Occupied area and volume were calculated for both samples, and the results obtained were correlated with the dry weight of the cultures. The estimated volume ratio and the dry-weight ratio of the two cultures differed by 10%. Therefore, this software is a promising non-invasive technique to determine fungal biomass in solid-state cultures. PMID:21154435

  5. Onboard utilization of ground control points for image correction. Volume 3: Ground control point simulation software design

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The software developed to simulate the ground control point navigation system is described. The Ground Control Point Simulation Program (GCPSIM) is designed as an analysis tool to predict the performance of the navigation system. The system consists of two star trackers, a global positioning system receiver, a gyro package, and a landmark tracker.

  6. Comparison of estimates of left ventricular ejection fraction obtained from gated blood pool imaging, different software packages and cameras

    PubMed Central

    Steyn, Rachelle; Boniaszczuk, John; Geldenhuys, Theodore

    2014-01-01

    Summary Objective To determine how two software packages, supplied by Siemens and Hermes, for processing gated blood pool (GBP) studies should be used in our department and whether the use of different cameras for the acquisition of raw data influences the results. Methods The study had two components. For the first component, 200 studies were acquired on a General Electric (GE) camera and processed three times by three operators using the Siemens and Hermes software packages. For the second part, 200 studies were acquired on two different cameras (GE and Siemens). The matched pairs of raw data were processed by one operator using the Siemens and Hermes software packages. Results The Siemens method consistently gave estimates that were 4.3% higher than the Hermes method (p < 0.001). The differences were not associated with any particular level of left ventricular ejection fraction (LVEF). There was no difference in the estimates of LVEF obtained by the three operators (p = 0.1794). The reproducibility of estimates was good. In 95% of patients, using the Siemens method, the SD of the three estimates of LVEF by operator 1 was ≤ 1.7, operator 2 was ≤ 2.1, and operator 3 was ≤ 1.3. The corresponding values for the Hermes method were ≤ 2.5, ≤ 2.0 and ≤ 2.1. There was no difference in the results of matched pairs of data acquired on different cameras (p = 0.4933). Conclusion Software packages for processing GBP studies are not interchangeable. The report should include the name and version of the software package used. Wherever possible, the same package should be used for serial studies. If this is not possible, the report should include the limits of agreement of the different packages. Data acquisition on different cameras did not influence the results. PMID:24844547

  7. Secure Video Surveillance System Acquisition Software

    SciTech Connect

    2009-12-04

    The SVSS Acquisition Software collects and displays video images from two cameras through a VPN, and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2 1/2 hours of video for review. The software collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The software code operates in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates different COTS software packages to build the video review system.

  8. Secure Video Surveillance System Acquisition Software

    2009-12-04

    The SVSS Acquisition Software collects and displays video images from two cameras through a VPN, and stores the images on a collection controller. The software is configured to allow a user to enter a time window to display up to 2 1/2 hours of video for review. The software collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The software code operates in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates different COTS software packages to build the video review system.

  9. Software Formal Inspections Guidebook

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Software Formal Inspections Guidebook is designed to support the inspection process of software developed by and for NASA. This document provides information on how to implement a recommended and proven method for conducting formal inspections of NASA software. This Guidebook is a companion document to NASA Standard 2202-93, Software Formal Inspections Standard, approved April 1993, which provides the rules, procedures, and specific requirements for conducting software formal inspections. Application of the Formal Inspections Standard is optional to NASA program or project management. In cases where program or project management decide to use the formal inspections method, this Guidebook provides additional information on how to establish and implement the process. The goal of the formal inspections process as documented in the above-mentioned Standard and this Guidebook is to provide a framework and model for an inspection process that will enable the detection and elimination of defects as early as possible in the software life cycle. An ancillary aspect of the formal inspection process incorporates the collection and analysis of inspection data to effect continual improvement in the inspection process and the quality of the software subjected to the process.

  10. Automatic standard plane adjustment on mobile C-Arm CT images of the calcaneus using atlas-based feature registration

    NASA Astrophysics Data System (ADS)

    Brehler, Michael; Görres, Joseph; Wolf, Ivo; Franke, Jochen; von Recum, Jan; Grützner, Paul A.; Meinzer, Hans-Peter; Nabers, Diana

    2014-03-01

    Intraarticular fractures of the calcaneus are routinely treated by open reduction and internal fixation followed by intraoperative imaging to validate the repositioning of bone fragments. C-Arm CT offers surgeons the possibility to directly verify the alignment of the fracture parts in 3D. Although the device provides more mobility, there is no sufficient information about the device-to-patient orientation for standard plane reconstruction. Hence, physicians have to manually align the image planes in a position that intersects with the articular surfaces. This can be a time-consuming step and imprecise adjustments lead to diagnostic errors. We address this issue by introducing novel semi-/automatic methods for adjustment of the standard planes on mobile C-Arm CT images. With the semi-automatic method, physicians can quickly adjust the planes by setting six points based on anatomical landmarks. The automatic method reconstructs the standard planes in two steps, first SURF keypoints (2D and newly introduced pseudo-3D) are generated for each image slice; secondly, these features are registered to an atlas point set and the parameters of the image planes are transformed accordingly. The accuracy of our method was evaluated on 51 mobile C-Arm CT images from clinical routine with manually adjusted standard planes by three physicians of different expertise. The average time of the experts (46s) deviated from the intermediate user (55s) by 9 seconds. By applying 2D SURF key points 88% of the articular surfaces were intersected correctly by the transformed standard planes with a calculation time of 10 seconds. The pseudo-3D features performed even better with 91% and 8 seconds.

  11. Analysis of He I 1083 nm Imaging Spectroscopy Using a Spectral Standard

    NASA Technical Reports Server (NTRS)

    Malanushenko, Elena V.; Jones, Harrison P.

    2004-01-01

    We develop a technique for the analysis of He I 1083 nanometer spectra which addresses several difficulties through determination of a continuum background by comparison with a well-calibrated standard and through removal of nearby solar and telluric blends by differential comparison to an average spectrum. The method is compared with earlier analysis of imaging spectroscopy obtained at the National Solar Observatory/Kitt Peak Vacuum Telescope (NSO/KPVT) with the NASA/NSO Spectromagnetograph (SPM). We examine distributions of Doppler velocity and line width as a function of central intensity for an active region, filament, quiet Sun, and coronal hole. For our example, we find that line widths and central intensity are oppositely correlated in a coronal hole and quiet Sun. Line widths are comparable to the quiet Sun in the active region, are systematically lower in the filament, and extend to higher values in the coronal hole. Outward velocities of approximately 2 to 4 kilometers per second are typically observed in the coronal hole. The sensitivity of these results to analysis technique is discussed.

  12. TIA Software User's Manual

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott; Syed, Hazari I.

    1995-01-01

    This user's manual describes the installation and operation of TIA, the Thermal-Imaging acquisition and processing Application, developed by the Nondestructive Evaluation Sciences Branch at NASA Langley Research Center, Hampton, Virginia. TIA is a user-friendly graphical interface application for the Macintosh II and higher series computers. The software has been developed to interface with the Perceptics/Westinghouse Pixelpipe(TM) and PixelStore(TM) NuBus cards and the GW Instruments MacADIOS(TM) input-output (I/O) card for the Macintosh for imaging thermal data. The software is also capable of performing generic image-processing functions.

  13. A game-based platform for crowd-sourcing biomedical image diagnosis and standardized remote training and education of diagnosticians

    NASA Astrophysics Data System (ADS)

    Feng, Steve; Woo, Minjae; Chandramouli, Krithika; Ozcan, Aydogan

    2015-03-01

    Over the past decade, crowd-sourcing complex image analysis tasks to a human crowd has emerged as an alternative to energy-inefficient and difficult-to-implement computational approaches. Following this trend, we have developed a mathematical framework for statistically combining human crowd-sourcing of biomedical image analysis and diagnosis through games. Using a web-based smart game (BioGames), we demonstrated this platform's effectiveness for telediagnosis of malaria from microscopic images of individual red blood cells (RBCs). After public release in early 2012 (http://biogames.ee.ucla.edu), more than 3000 gamers (experts and non-experts) used this BioGames platform to diagnose over 2800 distinct RBC images, marking them as positive (infected) or negative (non-infected). Furthermore, we asked expert diagnosticians to tag the same set of cells with labels of positive, negative, or questionable (insufficient information for a reliable diagnosis) and statistically combined their decisions to generate a gold standard malaria image library. Our framework utilized minimally trained gamers' diagnoses to generate a set of statistical labels with an accuracy that is within 98% of our gold standard image library, demonstrating the "wisdom of the crowd". Using the same image library, we have recently launched a web-based malaria training and educational game allowing diagnosticians to compare their performance with their peers. After diagnosing a set of ~500 cells per game, diagnosticians can compare their quantified scores against a leaderboard and view their misdiagnosed cells. Using this platform, we aim to expand our gold standard library with new RBC images and provide a quantified digital tool for measuring and improving diagnostician training globally.
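
    A much simpler stand-in for the statistical combination described, useful only to convey the idea, is a majority vote over repeated gamer labels per cell image. The Python sketch below is not the BioGames framework, and the vote lists are invented.

    ```python
    # Minimal sketch: combine repeated binary labels per image by majority vote
    # and report the vote fraction as a crude confidence.
    from collections import Counter

    def combine_labels(labels):
        """labels: list of 'positive'/'negative' votes for one red blood cell image."""
        counts = Counter(labels)
        decision, votes = counts.most_common(1)[0]
        return decision, votes / len(labels)

    votes_per_cell = {
        "cell_001": ["positive", "positive", "negative", "positive"],
        "cell_002": ["negative", "negative", "negative"],
    }
    for cell, votes in votes_per_cell.items():
        label, confidence = combine_labels(votes)
        print(cell, label, f"{confidence:.2f}")
    ```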

  14. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    PubMed Central

    Cui, Yang; Hanley, Luke

    2015-01-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser based mass spectrometers imaging and various other experiments in laser physics, physical chemistry, and surface science. PMID:26133872

  15. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers.

    PubMed

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser based mass spectrometers imaging and various other experiments in laser physics, physical chemistry, and surface science. PMID:26133872

  16. Standard Imaging Techniques for Assessment of Portal Venous System and its Tributaries by Linear Endoscopic Ultrasound: A Pictorial Essay

    PubMed Central

    Rameshbabu, C. S.; Wani, Zeeshn Ahamad; Rai, Praveer; Abdulqader, Almessabi; Garg, Shubham; Sharma, Malay

    2013-01-01

    Linear Endosonography has been used to image the Portal Venous System but no established standard guidelines exist. This article presents techniques to visualize the portal venous system and its tributaries by linear endosonography. Attempt has been made to show most of the first order tributaries and some second order tributaries of splenic vein, superior mesenteric vein and portal vein. PMID:24949362

  17. A cross-platform survey of CT image quality and dose from routine abdomen protocols and a method to systematically standardize image quality

    NASA Astrophysics Data System (ADS)

    Favazza, Christopher P.; Duan, Xinhui; Zhang, Yi; Yu, Lifeng; Leng, Shuai; Kofler, James M.; Bruesewitz, Michael R.; McCollough, Cynthia H.

    2015-11-01

    Through this investigation we developed a methodology to evaluate and standardize CT image quality from routine abdomen protocols across different manufacturers and models. The influence of manufacturer-specific automated exposure control systems on image quality was directly assessed to standardize performance across a range of patient sizes. We evaluated 16 CT scanners across our health system, including Siemens, GE, and Toshiba models. Using each practice's routine abdomen protocol, we measured spatial resolution, image noise, and scanner radiation output (CTDIvol). Axial and in-plane spatial resolutions were assessed through slice sensitivity profile (SSP) and modulation transfer function (MTF) measurements, respectively. Image noise and CTDIvol values were obtained for three different phantom sizes. SSP measurements demonstrated a bimodal distribution in slice widths: an average of 6.2 ± 0.2 mm using GE's 'Plus' mode reconstruction setting and 5.0 ± 0.1 mm for all other scanners. MTF curves were similar for all scanners. Average spatial frequencies at 50%, 10%, and 2% MTF values were 3.24 ± 0.37, 6.20 ± 0.34, and 7.84 ± 0.70 lp/cm, respectively. For all phantom sizes, image noise and CTDIvol varied considerably: 6.5-13.3 HU (noise) and 4.8-13.3 mGy (CTDIvol) for the smallest phantom; 9.1-18.4 HU and 9.3-28.8 mGy for the medium phantom; and 7.8-23.4 HU and 16.0-48.1 mGy for the largest phantom. Using these measurements and benchmark SSP, MTF, and image noise targets, CT image quality can be standardized across a range of patient sizes.
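
    Of the quantities surveyed, image noise is the simplest to reproduce: the standard deviation of CT numbers in a region of interest placed in a uniform phantom region. The Python sketch below illustrates that measurement only; the ROI placement and the synthetic image are assumptions, and it is not the study's analysis code.

    ```python
    # Minimal sketch: CT image noise as the standard deviation of HU values
    # inside a circular ROI of a uniform phantom slice.
    import numpy as np

    def roi_noise(image_hu, center, radius):
        """Standard deviation of HU inside a circular ROI (center in row, col)."""
        rows, cols = np.ogrid[:image_hu.shape[0], :image_hu.shape[1]]
        mask = (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= radius ** 2
        return float(image_hu[mask].std())

    phantom = np.random.normal(0.0, 12.0, size=(512, 512))  # stand-in for a water phantom slice
    print(f"image noise: {roi_noise(phantom, center=(256, 256), radius=40):.1f} HU")
    ```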

  18. Software Reviews.

    ERIC Educational Resources Information Center

    Smith, Richard L., Ed.

    1985-01-01

    Reviews software packages by providing extensive descriptions and discussions of their strengths and weaknesses. Software reviewed include (1) "VISIFROG: Vertebrate Anatomy" (grade seven-adult); (2) "Fraction Bars Computer Program" (grades three to six) and (3) four telecommunications utilities. (JN)

  19. Comparison of performance of object-based image analysis techniques available in open source software (Spring and Orfeo Toolbox/Monteverdi) considering very high spatial resolution data

    NASA Astrophysics Data System (ADS)

    Teodoro, Ana C.; Araujo, Ricardo

    2016-01-01

    The use of unmanned aerial vehicles (UAVs) for remote sensing applications is becoming more frequent. However, this type of information can result in several software problems related to the huge amount of data available. Object-based image analysis (OBIA) has proven to be superior to pixel-based analysis for very high-resolution images. The main objective of this work was to explore the potentialities of the OBIA methods available in two different open source software applications, Spring and OTB/Monteverdi, in order to generate an urban land cover map. An orthomosaic derived from UAVs was considered, 10 different regions of interest were selected, and two different approaches were followed. The first one (Spring) uses the region growing segmentation algorithm followed by the Bhattacharya classifier. The second approach (OTB/Monteverdi) uses the mean shift segmentation algorithm followed by the support vector machine (SVM) classifier. Two strategies were followed: four classes were considered using Spring and thereafter seven classes were considered for OTB/Monteverdi. The SVM classifier produces slightly better results and presents a shorter processing time. However, the poor spectral resolution of the data (only RGB bands) is an important factor that limits the performance of the classifiers applied.

  20. A New Measurement Technique of the Characteristics of Nutrient Artery Canals in Tibias Using Materialise's Interactive Medical Image Control System Software

    PubMed Central

    Li, Jiantao; Zhang, Hao; Yin, Peng; Su, Xiuyun; Zhao, Zhe; Zhou, Jianfeng; Li, Chen; Li, Zhirui; Zhang, Lihai; Tang, Peifu

    2015-01-01

    We established a novel measurement technique to evaluate the anatomy of nutrient artery canals using Mimics (Materialise's Interactive Medical Image Control System) software, which provides full knowledge of nutrient artery canals to assist in the diagnosis of longitudinal fractures of the tibia and in choosing an optimal therapy. We collected Digital Imaging and Communications in Medicine (DICOM) data from 199 patients hospitalized in our hospital. All three-dimensional models of the tibias were reconstructed in Mimics. In the 3-matic software, we marked five points on each tibia, located at the intercondylar eminence, tibial tuberosity, outer ostium, inner ostium, and bottom of the medial malleolus. We then recorded the Z-coordinate values of the five points and performed statistical analysis. Our results indicate that the foramen was absent in 9 (2.3%) tibias, and 379 (95.2%) tibias had a single nutrient foramen. Double foramina were observed in 10 (2.5%) tibias. The mean tibial length was 358 ± 22 mm. The mean foraminal index was 31.8% ± 3%. The mean distance between the tibial tuberosity and the foramen (TFD) was 66 ± 12 mm. The foraminal index showed a significant positive correlation with TFD (r = 0.721, P < 0.01). The length of the nutrient artery canals showed significant negative correlations with TFD (r = −0.340, P < 0.01) and with the foraminal index (r = −0.541, P < 0.01). PMID:26788498

  1. Software Program: Software Management Guidebook

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The purpose of this NASA Software Management Guidebook is twofold. First, this document defines the core products and activities required of NASA software projects. It defines life-cycle models and activity-related methods but acknowledges that no single life-cycle model is appropriate for all NASA software projects. It also acknowledges that the appropriate method for accomplishing a required activity depends on characteristics of the software project. Second, this guidebook provides specific guidance to software project managers and team leaders in selecting appropriate life cycles and methods to develop a tailored plan for a software engineering project.

  2. Proprietary software

    NASA Technical Reports Server (NTRS)

    Marnock, M. J.

    1971-01-01

    The protection of intellectual property by a patent, a copyright, or trade secrets is reviewed. The present and future use of computers and software are discussed, along with the governmental uses of software. The popularity of contractual agreements for sale or lease of computer programs and software services is also summarized.

  3. SU-E-T-451: Accuracy and Application of the Standard Imaging W1 Scintillator Dosimeter

    SciTech Connect

    Kowalski, M; McEwen, M

    2014-06-01

    Purpose: To evaluate the Standard Imaging W1 scintillator dosimeter in a range of clinical radiation beams to determine its range of possible applications. Methods: The W1 scintillator is a small, perturbation-free dosimeter of interest for absolute and relative clinical dosimetry because of its small size and water equivalence. A single unit of this detector was evaluated in Co-60 and linac photon and electron beams to investigate the following: linearity, sensitivity, precision, and dependence on electrometer type. In addition, depth-dose and cross-plane profiles were obtained in both photon and electron beams and compared with data obtained with well-behaved ionization chambers. Results: In linac beams the precision and linearity were very impressive, with typical values of 0.3% and 0.1%, respectively. Performance in a Co-60 beam was much poorer (approximately three times worse), and it is not clear whether this is due to the lower signal current or to the effect of the continuous beam (rather than the pulsed beams of the linac measurements). There was no significant difference in the detector reading when using either the recommended SI Supermax electrometer or two independent high-quality electrometers, except at low signal levels, where the Supermax exhibited an apparent threshold effect, preventing measurement of the bremsstrahlung background in electron depth-dose curves. Comparisons with ion chamber measurements in linac beams were somewhat variable: good agreement was seen for cross-plane profiles (photon and electron beams) and electron beam depth-dose curves, generally within the 0.3% precision of the scintillator, but systematic differences were observed as a function of measurement depth in photon beam depth-dose curves. Conclusion: A first look would suggest that the W1 scintillator has applications beyond small field dosimetry, but performance appears to be limited to higher dose-rate and/or pulsed radiation beams. Further work is required to resolve
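
    A minimal sketch of how the two headline figures of merit could be computed from raw readings is given below; the definitions (precision as the relative standard deviation of repeated readings at a fixed dose, linearity as the largest relative residual from a straight-line fit of reading against delivered dose) and all numbers are illustrative assumptions, not the study's data or analysis code.

      # Illustrative sketch only, with hypothetical detector readings.
      import numpy as np

      repeat_readings = np.array([1.002, 0.999, 1.001, 0.998, 1.003, 1.000])   # a.u., fixed dose
      precision = repeat_readings.std(ddof=1) / repeat_readings.mean()

      dose    = np.array([50, 100, 200, 400, 800])                # MU, hypothetical
      reading = np.array([0.501, 1.000, 2.003, 3.998, 8.010])     # a.u., hypothetical
      slope, intercept = np.polyfit(dose, reading, 1)
      fit = slope * dose + intercept
      linearity = np.max(np.abs(reading - fit) / fit)

      print(f"precision ~ {100 * precision:.2f} %")    # study quotes ~0.3 %
      print(f"linearity ~ {100 * linearity:.2f} %")    # study quotes ~0.1 %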

  4. AIRS Maps from Space Processing Software

    NASA Technical Reports Server (NTRS)

    Thompson, Charles K.; Licata, Stephen J.

    2012-01-01

    This software package processes Atmospheric Infrared Sounder (AIRS) Level 2 swath standard product geophysical parameters, and generates global, colorized, annotated maps. It automatically generates daily and multi-day averaged colorized and annotated maps of various AIRS Level 2 swath geophysical parameters. It also generates AIRS input data sets for Eyes on Earth, Puffer-sphere, and Magic Planet. This program is tailored to AIRS Level 2 data products. It re-projects data into 1/4-degree grids that can be combined and averaged for any number of days. The software scales and colorizes global grids utilizing AIRS-specific color tables, and annotates images with title and color bar. This software can be tailored for use with other swath data products for the purposes of visualization.
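
    The central re-gridding step can be sketched as follows, with synthetic swath samples standing in for an AIRS Level 2 geophysical parameter; this is an assumption-laden illustration of binning swath data onto a 1/4-degree grid, not the AIRS processing software itself.

      # Minimal sketch: average swath samples (lat, lon, value) onto a global
      # 1/4-degree grid; grids from several days can later be combined.
      import numpy as np

      lat = np.random.uniform(-90, 90, 100_000)      # swath latitudes  (deg)
      lon = np.random.uniform(-180, 180, 100_000)    # swath longitudes (deg)
      val = 250 + 40 * np.cos(np.radians(lat))       # e.g. a brightness temperature (K)

      rows = ((90.0 - lat) / 0.25).astype(int).clip(0, 719)      # 720 rows of 0.25 deg
      cols = ((lon + 180.0) / 0.25).astype(int).clip(0, 1439)    # 1440 columns of 0.25 deg

      sums   = np.zeros((720, 1440))
      counts = np.zeros((720, 1440))
      np.add.at(sums,   (rows, cols), val)   # accumulate samples per grid cell
      np.add.at(counts, (rows, cols), 1)

      daily_grid = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
      # Multi-day averages can be formed by summing `sums` and `counts` over days
      # before dividing, so cells with no samples remain flagged as NaN.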

  5. Software Safety Progress in NASA

    NASA Technical Reports Server (NTRS)

    Radley, Charles F.

    1995-01-01

    NASA has developed guidelines for the development and analysis of safety-critical software. These guidelines have been documented in a Guidebook for Safety Critical Software Development and Analysis. The guidelines represent a practical 'how to' approach to assist software developers and safety analysts with cost-effective methods for software safety. They provide guidance on the implementation of the recent NASA Software Safety Standard NSS-1740.13, which was released as an 'Interim' version in June 1994 and scheduled for formal adoption in late 1995. This paper surveys the methods in general use that resulted in the NASA guidelines for safety-critical software development and analysis.

  6. Computer software.

    PubMed

    Rosenthal, L E

    1986-10-01

    Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of type, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.

  7. Real-time navigation in transoral robotic nasopharyngectomy utilizing on table fluoroscopy and image overlay software: a cadaveric feasibility study.

    PubMed

    Tsang, Raymond K; Sorger, Jonathan M; Azizian, Mahdi; Holsinger, Christopher F

    2015-12-01

    The inability to integrate surgical navigation systems into current surgical robots is one of the reasons for the lack of development of robotic endoscopic skull base surgery. We describe an experiment to adapt current technologies for real-time navigation during transoral robotic nasopharyngectomy. A cone-beam CT was performed with a robotic C-arm after injecting contrast into the common carotid artery. A 3D reconstruction of the skull images, with the internal carotid artery (ICA) highlighted in red, was projected on the console. Robotic nasopharyngectomy was then performed. Fluoroscopy was performed with the C-arm, and the fluoroscopic image was then overlaid on the reconstructed skull image. The relationship of the robotic instruments to the bony landmarks and the ICA could then be viewed in real time, acting as a surgical navigation system. Navigation during robotic skull base surgery is feasible with available technologies and can increase the safety of robotic skull base surgery.
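
    The overlay step can be approximated by simple alpha blending of the two 2D images, as in the sketch below; the file names are hypothetical, and in practice the fluoroscopic frame and the rendered cone-beam CT view must first be brought into geometric correspondence, which this sketch does not attempt.

      # Illustrative sketch (not the clinical software): blend a fluoroscopic
      # frame with a rendered skull reconstruction for a combined overlay view.
      import cv2

      skull  = cv2.imread("skull_reconstruction.png")   # hypothetical rendered CBCT view
      fluoro = cv2.imread("fluoro_frame.png")           # hypothetical fluoroscopic frame
      fluoro = cv2.resize(fluoro, (skull.shape[1], skull.shape[0]))

      overlay = cv2.addWeighted(skull, 0.6, fluoro, 0.4, 0.0)   # 60/40 alpha blend
      cv2.imwrite("navigation_overlay.png", overlay)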

  8. Standard and novel imaging methods for multiple myeloma: correlates with prognostic laboratory variables including gene expression profiling data.

    PubMed

    Waheed, Sarah; Mitchell, Alan; Usmani, Saad; Epstein, Joshua; Yaccoby, Shmuel; Nair, Bijay; van Hemert, Rudy; Angtuaco, Edgardo; Brown, Tracy; Bartel, Twyla; McDonald, James; Anaissie, Elias; van Rhee, Frits; Crowley, John; Barlogie, Bart

    2013-01-01

    Multiple myeloma causes major morbidity resulting from osteolytic lesions that can be detected by metastatic bone surveys. Magnetic resonance imaging and positron emission tomography can detect bone marrow focal lesions long before development of osteolytic lesions. Using data from patients enrolled in Total Therapy 3 for newly diagnosed myeloma (n=303), we analyzed associations of these imaging techniques with baseline standard laboratory variables assessed before initiating treatment. Of 270 patients with complete imaging data, 245 also had gene expression profiling data. Osteolytic lesions detected on metastatic bone surveys correlated with focal lesions detected by magnetic resonance imaging and positron emission tomography, although, in two-way comparisons, focal lesion counts based on both magnetic resonance imaging and positron emission tomography tended to be greater than those based on metastatic bone survey. Higher numbers of focal lesions detected by magnetic resonance imaging and positron emission tomography were positively linked to high serum concentrations of C-reactive protein, gene-expression-profiling-defined high risk, and the proliferation molecular subgroup. Positron emission tomography focal lesion maximum standardized uptake values were significantly correlated with gene-expression-profiling-defined high risk and higher numbers of focal lesions detected by positron emission tomography. Interestingly, four genes associated with high-risk disease (related to cell cycle and metabolism) were linked to counts of focal lesions detected by magnetic resonance imaging and positron emission tomography. Collectively, our results demonstrate significant associations of all three imaging techniques with tumor burden and, especially, disease aggressiveness captured by gene-expression-profiling-risk designation. (Clinicaltrials.gov identifier: NCT00081939).

  9. Gammasphere software development. Progress report

    SciTech Connect

    Piercey, R.B.

    1994-01-01

    This report describes the activities of the nuclear physics group at Mississippi State University which were performed during 1993. Significant progress has been made in the focus areas: chairing the Gammasphere Software Working Group (SWG); assisting with the porting and enhancement of the ORNL UPAK histogramming software package; and developing standard formats for Gammasphere data products. In addition, the group has established a new public ftp archive to distribute software, software development tools, and information.

  10. Software Solutions for ICME

    NASA Astrophysics Data System (ADS)

    Schmitz, G. J.; Engstrom, A.; Bernhardt, R.; Prahl, U.; Adam, L.; Seyfarth, J.; Apel, M.; de Saracibar, C. Agelet; Korzhavyi, P.; Ågren, J.; Patzak, B.

    2016-01-01

    The Integrated Computational Materials Engineering expert group (ICMEg), a coordination activity of the European Commission, aims at developing a global and open standard for information exchange between the heterogeneous varieties of numerous simulation tools. The ICMEg consortium coordinates respective developments by a strategy of networking stakeholders in the first International Workshop on Software Solutions for ICME, compiling identified and relevant software tools into the Handbook of Software Solutions for ICME, discussing strategies for interoperability between different software tools during a second (planned) international workshop, and eventually proposing a scheme for standardized information exchange in a future book or document. The present article summarizes these respective actions to provide the ICME community with some additional insights and resources from which to help move this field forward.

  11. Developing safety-critical software.

    PubMed

    Garnsworthy, J

    1996-05-01

    The role of safety-critical software in the development of medical devices is becoming increasingly important and ever more exacting demands are being made of software developers. This article considers safety issues and a software development life-cycle based on the safety life-cycle described in IEC 1508, "Safety-Related Systems: Functional Safety." It identifies relevant standards, both emerging and published, and provides guidance on methods that could be used to meet those standards.

  12. Towards a software profession

    NASA Technical Reports Server (NTRS)

    Berard, Edward V.

    1986-01-01

    An increasing number of programmers have attempted to change their image. They have made it plain that they wish not only to be taken seriously, but also to be regarded as professionals. Many programmers now wish to be referred to as software engineers. If programmers wish to be considered professionals in every sense of the word, two obstacles must be overcome: the inability to think of software as a product, and the idea that little or no skill is required to create and handle software throughout its life cycle. The steps to be taken toward professionalization are outlined along with recommendations.

  13. Software quality in 1997

    SciTech Connect

    Jones, C.

    1997-11-01

    For many years, software quality assurance lagged behind hardware quality assurance in terms of methods, metrics, and successful results. New approaches such as Quality Function Deployment (QFD), the ISO 9000-9004 standards, the SEI maturity levels, and Total Quality Management (TQM) are starting to attract wide attention, and in some cases to bring software quality levels up to parity with manufacturing quality levels. Since software is on the critical path for many engineered products, and for internal business systems as well, the new approaches are starting to affect global competition and attract widespread international interest. It can be hypothesized that success in mastering software quality will be a key strategy for dominating global software markets in the 21st century.

  14. Revealing text in a complexly rolled silver scroll from Jerash with computed tomography and advanced imaging software

    NASA Astrophysics Data System (ADS)

    Hoffmann Barfod, Gry; Larsen, John Møller; Lichtenberger, Achim; Raja, Rubina

    2015-12-01

    Throughout Antiquity, magical amulets written on papyri, lead, and silver were used for apotropaic reasons. While papyri can often be unrolled and deciphered, metal scrolls, usually very thin and tightly rolled up, cannot easily be unrolled without damaging the metal. This leaves either unreadable results due to the damage done or a decision not to unroll the scroll at all. The texts vary greatly and tell us about the cultural environment and local as well as individual practices at a variety of locations across the Mediterranean. Here we present the methodology and results of the digital unfolding of a silver sheet from Jerash in Jordan from the mid-8th century CE. The scroll was inscribed with 17 lines in presumed pseudo-Arabic as well as some magical signs. The successful unfolding shows that it is possible to digitally unfold complexly folded scrolls, but that doing so requires a combination of software know-how and linguistic knowledge.

  15. Rationale and development of image-guided intensity-modulated radiotherapy post-prostatectomy: the present standard of care?

    PubMed Central

    Murray, Julia R; McNair, Helen A; Dearnaley, David P

    2015-01-01

    The indications for post-prostatectomy radiotherapy have evolved over the last decade, although the optimal timing, dose, and target volume remain to be well defined. The target volume is susceptible to anatomical variations with its borders interfacing with the rectum and bladder. Image-guided intensity-modulated radiotherapy has become the gold standard for radical prostate radiotherapy. Here we review the current evidence for image-guided techniques with intensity-modulated radiotherapy to the prostate bed and describe current strategies to reduce or account for interfraction and intrafraction motion. PMID:26635484

  16. The influence of the microscope lamp filament colour temperature on the process of digital images of histological slides acquisition standardization

    PubMed Central

    2014-01-01

    Background The aim of this study is to compare digital images of tissue biopsies captured with an optical microscope using the bright field technique under various light conditions. The range of colour variation in tissue samples immunohistochemically stained with 3,3'-diaminobenzidine and haematoxylin is immense and comes from various sources. One of them is an inadequate setting of the camera's white balance relative to the microscope's light colour temperature. Although this type of error is most easily handled at the image acquisition stage, it can also be eliminated afterwards with colour adjustment algorithms. The examination of how colour variation depends on the microscope's light temperature and the camera settings is presented as introductory research towards automatic colour standardization. Methods Six fields of view with empty space among the tissue samples were selected for analysis. Each field of view was acquired 225 times with various microscope light temperature and camera white balance settings. Fourteen randomly chosen images were corrected and compared with the reference image using the following methods: mean square error, structural similarity (SSIM), and visual assessment by a viewer. Results For two types of backgrounds and two types of objects, the statistical image descriptors (range, median, mean, and standard deviation) of chromaticity in the a and b channels of the CIELab colour space and of the luminance L, together with the local colour variability for object-specific areas, were calculated. The results were averaged over the 6 images acquired under the same light conditions and camera settings for each sample. Conclusions The analysis of the results leads to the following conclusions: (1) images collected with the white balance setting adjusted to the light colour temperature cluster in a certain area of chromatic space, (2) the process of white balance correction for images collected with camera white balance settings not matched to the light temperature
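
    The comparison against the reference image can be sketched with scikit-image, which provides both metrics and the CIELab transform; the file names are hypothetical, 8-bit RGB images are assumed, and this illustrates the metrics named above rather than reproducing the authors' code.

      # Minimal sketch: MSE and SSIM against a reference image, plus simple
      # chromaticity statistics in the CIELab colour space.
      import numpy as np
      from skimage import io, color
      from skimage.metrics import mean_squared_error, structural_similarity

      reference = io.imread("reference_field.png")    # hypothetical reference image
      corrected = io.imread("corrected_field.png")    # hypothetical corrected image

      mse  = mean_squared_error(reference, corrected)
      ssim = structural_similarity(reference, corrected, channel_axis=-1, data_range=255)

      lab = color.rgb2lab(corrected[..., :3])
      L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
      stats = {ch: (x.min(), np.median(x), x.mean(), x.std())   # range end-points, median, mean, SD
               for ch, x in (("L", L), ("a", a), ("b", b))}

      print(f"MSE = {mse:.2f}, SSIM = {ssim:.3f}")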

  17. IRIS explorer software for radial-depth cueing reovirus particles and other macromolecular structures determined by cryoelectron microscopy and image reconstruction.

    PubMed

    Spencer, S M; Sgro, J Y; Dryden, K A; Baker, T S; Nibert, M L

    1997-10-01

    Structures of biological macromolecules determined by transmission cryoelectron microscopy (cryo-TEM) and three-dimensional image reconstruction are often displayed as surface-shaded representations with depth cueing along the viewed direction (Z cueing). Depth cueing to indicate distance from the center of virus particles (radial-depth cueing, or R cueing) has also been used. We have found that a style of R cueing in which color is applied in smooth or discontinuous gradients using the IRIS Explorer software is an informative technique for displaying the structures of virus particles solved by cryo-TEM and image reconstruction. To develop and test these methods, we used existing cryo-TEM reconstructions of mammalian reovirus particles. The newly applied visualization techniques allowed us to discern several new structural features, including sites in the inner capsid through which the viral mRNAs may be extruded after they are synthesized by the reovirus transcriptase complexes. To demonstrate the broad utility of the methods, we also applied them to cryo-TEM reconstructions of human rhinovirus, native and swollen forms of cowpea chlorotic mottle virus, truncated core of pyruvate dehydrogenase complex from Saccharomyces cerevisiae, and flagellar filament of Salmonella typhimurium. We conclude that R cueing with color gradients is a useful tool for displaying virus particles and other macromolecules analyzed by cryo-TEM and image reconstruction.
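
    The essence of R cueing with colour gradients can be sketched independently of IRIS Explorer: colour each surface vertex of a reconstructed particle by its distance from the particle centre, either through a smooth colormap or by discrete radial shells. The vertex array below is a synthetic stand-in for an isosurface extracted from a cryo-TEM reconstruction, not data from the paper.

      # Illustrative sketch of radial-depth (R) cueing with smooth and
      # discontinuous colour gradients.
      import numpy as np
      from matplotlib import cm

      vertices = np.random.randn(5000, 3) * 30.0            # x, y, z in angstroms (synthetic)
      center   = vertices.mean(axis=0)
      radius   = np.linalg.norm(vertices - center, axis=1)

      # Smooth gradient: normalise radius to [0, 1] and map through a colormap.
      r_norm      = (radius - radius.min()) / (radius.max() - radius.min())
      smooth_rgba = cm.viridis(r_norm)

      # Discontinuous gradient: one colour per radial shell (e.g. 10-angstrom bins).
      shell      = np.floor(radius / 10.0).astype(int)
      shell_rgba = cm.tab20(shell % 20)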

  18. Software Reviews.

    ERIC Educational Resources Information Center

    Beezer, Robert A.; And Others

    1988-01-01

    Reviews for three software packages are given. Those packages are: Linear Algebra Computer Companion; Probability and Statistics Demonstrations and Tutorials; and Math Utilities: CURVES, SURFS, AND DIFFS. (PK)

  19. Revealing text in a complexly rolled silver scroll from Jerash with computed tomography and advanced imaging software

    PubMed Central

    Hoffmann Barfod, Gry; Larsen, John Møller; Raja, Rubina

    2015-01-01

    Throughout Antiquity magical amulets written on papyri, lead and silver