Science.gov

Sample records for imaging workspace software

  1. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos run during robberies.

  2. SAPHIRE 8 Volume 5 - Workspaces

    SciTech Connect

    C. L. Smith; J. K. Knudsen; D. O'Neal

    2011-03-01

The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Version 8 is a software application developed for performing a complete probabilistic risk assessment using a personal computer running the Microsoft Windows™ operating system. SAPHIRE 8 is funded by the U.S. Nuclear Regulatory Commission (NRC). The role of the Idaho National Laboratory (INL) in this project is that of software developer and tester. In older versions of SAPHIRE, the model creation and analysis functions were intermingled. However, in SAPHIRE 8, the act of creating a model has been separated from the analysis of that model in order to improve the quality of both the model (e.g., by avoiding inadvertent changes) and the analysis. Consequently, in SAPHIRE 8, the analysis of models is performed by using what are called Workspaces. Currently, there are Workspaces for three types of analyses: (1) the NRC’s Accident Sequence Precursor program, where the workspace is called “Events and Condition Assessment (ECA);” (2) the NRC’s Significance Determination Process (SDP); and (3) the General Analysis (GA) workspace. Workspaces for each type are created and saved separately from the base model, which keeps the original database intact. Workspaces are independent of each other, and modifications or calculations made within one workspace will not affect another. In addition, each workspace has a user interface and reports tailored for its intended use.
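The workspace isolation described above, where edits in a workspace never touch the base model, can be sketched with a copy-on-write pattern (a hypothetical illustration; the class and field names below are invented, not SAPHIRE's API):

```python
import copy

class Workspace:
    """Hypothetical sketch: an analysis workspace copies the base model,
    so edits never touch the original database (names invented)."""
    def __init__(self, base_model, kind):
        self.kind = kind                        # e.g. "ECA", "SDP", "GA"
        self.model = copy.deepcopy(base_model)  # independent copy

    def set_probability(self, event, p):
        self.model[event] = p                   # local change only

base = {"pump_fails": 1e-3, "valve_sticks": 5e-4}
eca = Workspace(base, "ECA")
eca.set_probability("pump_fails", 2e-3)

print(base["pump_fails"])       # base model unchanged: 0.001
print(eca.model["pump_fails"])  # workspace value: 0.002
```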

  3. Cathodoluminescence Spectrum Imaging Software

    2011-04-07

The software developed for spectrum imaging is applied to the analysis of the spectrum series generated by our cathodoluminescence instrumentation. This software provides advanced processing capabilities such as: reconstruction of photon intensity (resolved in energy) and photon energy maps, extraction of the spectrum from selected areas, a quantitative imaging mode, pixel-to-pixel correlation spectrum line scans, ASCII output, filling routines, drift correction, etc.

  4. Biological Imaging Software Tools

    PubMed Central

    Eliceiri, Kevin W.; Berthold, Michael R.; Goldberg, Ilya G.; Ibáñez, Luis; Manjunath, B.S.; Martone, Maryann E.; Murphy, Robert F.; Peng, Hanchuan; Plant, Anne L.; Roysam, Badrinath; Stuurman, Nico; Swedlow, Jason R.; Tomancak, Pavel; Carpenter, Anne E.

    2013-01-01

    Few technologies are more widespread in modern biological laboratories than imaging. Recent advances in optical technologies and instrumentation are providing hitherto unimagined capabilities. Almost all these advances have required the development of software to enable the acquisition, management, analysis, and visualization of the imaging data. We review each computational step that biologists encounter when dealing with digital images, the challenges in that domain, and the overall status of available software for bioimage informatics, focusing on open source options. PMID:22743775

  5. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1992-01-01

    To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.

  6. Flyover Animation of Phoenix Workspace

    NASA Technical Reports Server (NTRS)

    2008-01-01

[figure removed for brevity, see original site]

This animated 'flyover' of the workspace of NASA's Phoenix Mars Lander was created from images taken by the Surface Stereo Imager on Sol 14 (June 8, 2008), the 14th Martian day after landing.

    The visualization uses both of the camera's 'eyes' to provide depth perception and ranging. The camera is looking north over the workspace.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  7. Confined Space Imager (CSI) Software

    SciTech Connect

Karelitz, David

    2013-07-03

The software provides real-time image capture, enhancement, and display, and sensor control for the Confined Space Imager (CSI) sensor system. The software captures images over a Cameralink connection and provides the following image enhancements: camera pixel-to-pixel non-uniformity correction, optical distortion correction, image registration and averaging, and illumination non-uniformity correction. The software communicates with the custom CSI hardware over USB to control sensor parameters and is capable of saving enhanced sensor images to an external USB drive. The software provides sensor control, image capture, enhancement, and display for the CSI sensor system. It is designed to work with the custom hardware.
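The enhancements listed, per-pixel non-uniformity correction and registration-and-averaging, follow standard patterns that can be sketched as below (an illustrative flat-field-style sketch with invented function names, not the actual CSI code):

```python
def correct_frame(raw, dark, gain):
    """Flat-field-style non-uniformity correction: subtract a per-pixel
    dark offset and divide by a per-pixel gain (illustrative only)."""
    return [[(r - d) / g for r, d, g in zip(rr, dr, gr)]
            for rr, dr, gr in zip(raw, dark, gain)]

def average_frames(frames):
    """Stand-in for register-and-average: the mean of aligned frames
    suppresses random noise."""
    n = len(frames)
    return [[sum(f[i][j] for f in frames) / n
             for j in range(len(frames[0][0]))]
            for i in range(len(frames[0]))]

raw  = [[10.0, 12.0], [14.0, 16.0]]
dark = [[2.0, 2.0], [2.0, 2.0]]
gain = [[1.0, 2.0], [1.0, 2.0]]
print(correct_frame(raw, dark, gain))  # [[8.0, 5.0], [12.0, 7.0]]
```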

  8. Software thermal imager simulator

    NASA Astrophysics Data System (ADS)

    Le Noc, Loic; Pancrati, Ovidiu; Doucet, Michel; Dufour, Denis; Debaque, Benoit; Turbide, Simon; Berthiaume, Francois; Saint-Laurent, Louis; Marchese, Linda; Bolduc, Martin; Bergeron, Alain

    2014-10-01

A software application, SIST, has been developed for the simulation of the video at the output of a thermal imager. The approach offers a more suitable representation than current identification (ID) range predictors do: the end user can evaluate the adequacy of a virtual camera as if using it in real operating conditions. In particular, the ambiguity in the interpretation of ID range is eliminated. The application also allows for a cost-efficient determination of the optimal design of an imager and of its subsystems without over- or under-specification: the performances are known early in the development cycle for the targets, scene, and environmental conditions of interest. The simulated image is also a powerful method for testing processing algorithms. Finally, the display, which can be a severe system limitation, is also fully considered in the system by the use of real hardware components. The application consists of MATLAB™ routines that simulate the effects of the subsystems: atmosphere, optical lens, detector, and image processing algorithms. Calls to MODTRAN® for the atmosphere modeling and to Zemax for the optical modeling have been implemented. The realism of the simulation depends on the adequacy of the input scene for the application and on the accuracy of the subsystem parameters. For high-accuracy results, measured imager characteristics such as noise can be used with SIST instead of less accurate models. The ID ranges of potential imagers were assessed for various targets, backgrounds, and atmospheric conditions. The optimal specifications for an optical design were determined by varying the Seidel aberration coefficients to find the worst MTF that still respects the desired ID range.

  9. Confined Space Imager (CSI) Software

    2013-07-03

The software provides real-time image capture, enhancement, and display, and sensor control for the Confined Space Imager (CSI) sensor system. The software captures images over a Cameralink connection and provides the following image enhancements: camera pixel-to-pixel non-uniformity correction, optical distortion correction, image registration and averaging, and illumination non-uniformity correction. The software communicates with the custom CSI hardware over USB to control sensor parameters and is capable of saving enhanced sensor images to an external USB drive. The software provides sensor control, image capture, enhancement, and display for the CSI sensor system. It is designed to work with the custom hardware.

  10. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.
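The per-region measurement mode described above amounts to computing statistics over an area of interest; a minimal sketch (the function name is invented, not Spotlight's API):

```python
def roi_stats(image, roi):
    """Statistics over a rectangular area of interest (x0, y0, x1, y1),
    in the spirit of Spotlight's measurement operations (illustrative)."""
    x0, y0, x1, y1 = roi
    vals = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return {"min": min(vals),
            "max": max(vals),
            "mean": sum(vals) / len(vals)}

frame = [[0, 1, 2],
         [3, 4, 5],
         [6, 7, 8]]
print(roi_stats(frame, (0, 0, 2, 2)))  # {'min': 0, 'max': 4, 'mean': 2.0}
```

Running the same function over each frame of a sequence gives the automated, per-frame measurements the abstract describes.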

  11. FITS Liberator: Image processing software

    NASA Astrophysics Data System (ADS)

    Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David

    2012-06-01

The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton telescope, and missions such as Cassini-Huygens and Mars Reconnaissance Orbiter.

  12. Software for Automated Image-to-Image Co-registration

    NASA Technical Reports Server (NTRS)

    Benkelman, Cody A.; Hughes, Heidi

    2007-01-01

The project objectives are: a) Develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) Create a reusable software development kit (SDK) to enable incorporation of these tools into other software; c) Provide automated testing for quantitative analysis; and d) Develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.

  13. Acoustic image-processing software

    NASA Astrophysics Data System (ADS)

Several algorithms that display, enhance, and analyze side-scan sonar images of the seafloor have been developed by the University of Washington, Seattle, as part of an Office of Naval Research funded program in acoustic image analysis. One of these programs, PORTAL, is a small (less than 100K) image display and enhancement program that can run on MS-DOS computers with VGA boards. This program is now available in the public domain for general use in acoustic image processing. PORTAL is designed to display side-scan sonar data that is stored in most standard formats, including SeaMARC I, II, 150 and GLORIA data. In addition to the “standard” formats, PORTAL has a module “front end” that allows the user to modify the program to accept other image formats. In addition to side-scan sonar data, the program can also display digital optical images from scanners and “framegrabbers,” gridded bathymetry data from Sea Beam and other sources, and potential field (magnetics/gravity) data. While limited in image analysis capability, the program allows image enhancement by histogram manipulation and basic filtering operations, including multistage filtering. PORTAL can print reasonably high-quality images on PostScript laser printers and lower-quality images on non-PostScript printers with HP LaserJet emulation. Images suitable only for index sheets are also possible on dot matrix printers.
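Histogram manipulation of the kind PORTAL offers can be illustrated with basic histogram equalization (a generic sketch, not PORTAL's implementation):

```python
def equalize(image, levels=256):
    """Histogram equalization: remap gray levels through the normalized
    cumulative histogram to spread contrast (generic sketch)."""
    flat = [p for row in image for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(flat)
    lut = [round((c / n) * (levels - 1)) for c in cdf]  # lookup table
    return [[lut[p] for p in row] for row in image]

print(equalize([[0, 255], [0, 255]]))  # [[128, 255], [128, 255]]
```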

  14. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  15. False Color Terrain Model of Phoenix Workspace

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This is a terrain model of Phoenix's Robotic Arm workspace. It has been color coded by depth with a lander model for context. The model has been derived using images from the depth perception feature from Phoenix's Surface Stereo Imager (SSI). Red indicates low-lying areas that appear to be troughs. Blue indicates higher areas that appear to be polygons.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  16. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  17. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  18. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
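The frame-addition idea, registering frames and then combining them to suppress noise, can be illustrated in miniature; the integer shift search below is only a toy stand-in for VISAR's registration (function names invented):

```python
def best_shift(ref, frame, max_shift=2):
    """Estimate an integer horizontal shift by minimizing the sum of
    squared differences -- a toy stand-in for VISAR-style registration."""
    def ssd(s):
        return sum((ref[i] - frame[i + s]) ** 2
                   for i in range(max_shift, len(ref) - max_shift))
    return min(range(-max_shift, max_shift + 1), key=ssd)

ref   = [0, 0, 5, 9, 5, 0, 0, 0]
moved = [0, 0, 0, 5, 9, 5, 0, 0]   # ref shifted right by one pixel
print(best_shift(ref, moved))       # 1
```

Once the shift is known, the frames can be aligned and averaged, which is what "adding information from multiple frames" accomplishes.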

  19. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. It would be especially useful for studying tornadoes, tracking whirling objects and helping to determine a tornado's wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  20. Imaging Sensor Flight and Test Equipment Software

    NASA Technical Reports Server (NTRS)

    Freestone, Kathleen; Simeone, Louis; Robertson, Byran; Frankford, Maytha; Trice, David; Wallace, Kevin; Wilkerson, DeLisa

    2007-01-01

The Lightning Imaging Sensor (LIS) is one of the components onboard the Tropical Rainfall Measuring Mission (TRMM) satellite, and was designed to detect and locate lightning over the tropics. The LIS flight code was developed to run on a single onboard digital signal processor, and has operated the LIS instrument since 1997 when the TRMM satellite was launched. The software provides controller functions to the LIS Real-Time Event Processor (RTEP) and onboard heaters, collects the lightning event data from the RTEP, compresses and formats the data for downlink to the satellite, collects housekeeping data and formats the data for downlink to the satellite, provides command processing and interface to the spacecraft communications and data bus, and provides watchdog functions for error detection. The Special Test Equipment (STE) software was designed to operate specific test equipment used to support the LIS hardware through development, calibration, qualification, and integration with the TRMM spacecraft. The STE software provides the capability to control instrument activation, commanding (including both data formatting and user interfacing), data collection, decompression, and display and image simulation. The LIS STE code was developed for the DOS operating system in the C programming language. Because of the many unique data formats implemented by the flight instrument, the STE software was required to comprehend the same formats, and translate them for the test operator. The hardware interfaces to the LIS instrument using both commercial and custom computer boards, requiring that the STE code integrate this variety into a working system. In addition, the requirement to provide RTEP test capability dictated the need to provide simulations of background image data with short-duration lightning transients superimposed. This led to the development of unique code used to control the location, intensity, and variation above background for simulated lightning strikes.
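The simulation of short-duration transients above a background can be sketched as follows (a hypothetical illustration of the idea, not the STE code; the original was C for DOS):

```python
def add_transient(frames, frame_idx, x, y, intensity):
    """Superimpose a short-duration 'lightning' transient above the
    background on one frame of a simulated sequence (illustrative)."""
    out = [[row[:] for row in f] for f in frames]  # copy the sequence
    out[frame_idx][y][x] += intensity              # brief, localized event
    return out

# Three 4x4 background frames at a constant level of 10
background = [[[10] * 4 for _ in range(4)] for _ in range(3)]
sim = add_transient(background, frame_idx=1, x=2, y=1, intensity=50)
print(sim[1][1][2])  # 60; all other frames stay at background level
```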

  1. Terahertz/mm wave imaging simulation software

    NASA Astrophysics Data System (ADS)

    Fetterman, M. R.; Dougherty, J.; Kiser, W. L., Jr.

    2006-10-01

We have developed a mm wave/terahertz imaging simulation package from COTS graphic software and custom MATLAB code. In this scheme, a commercial ray-tracing package was used to simulate the emission and reflections of radiation from scenes incorporating highly realistic imagery. Accurate material properties were assigned to objects in the scenes, with values obtained from the literature, and from our own terahertz spectroscopy measurements. The images were then post-processed with custom MATLAB code to include the blur introduced by the imaging system and noise levels arising from system electronics and detector noise. The MATLAB code was also used to simulate the effect of fog, an important aspect for mm wave imaging systems. Several types of image scenes were evaluated, including bar targets, contrast detail targets, a person in a portal screening situation, and a sailboat on the open ocean. The images produced by this simulation are currently being used as guidance for a 94 GHz passive mm wave imaging system, but have broad applicability for frequencies extending into the terahertz region.
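The post-processing steps described, system blur plus additive noise, can be sketched generically (a Python stand-in for the custom MATLAB code; a box blur only crudely approximates a real system MTF):

```python
import random

def box_blur(image, k=1):
    """(2k+1)x(2k+1) box blur as a crude stand-in for system blur."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[yy][xx]
                    for yy in range(max(0, y - k), min(h, y + k + 1))
                    for xx in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def add_noise(image, sigma, seed=0):
    """Additive Gaussian noise modeling electronics/detector noise."""
    rng = random.Random(seed)
    return [[p + rng.gauss(0.0, sigma) for p in row] for row in image]

scene = [[5.0] * 3 for _ in range(3)]
noisy = add_noise(box_blur(scene), sigma=0.1)
```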

  2. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
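The error-containment scheme, compressing partitions independently so corruption cannot propagate, can be illustrated with ordinary byte compression standing in for the wavelet codec (a drastic simplification of ICER-3D):

```python
import zlib

def compress_partitions(data, n_parts):
    """Compress equal-sized partitions independently, so corruption of
    one stream cannot affect the others (error-containment sketch)."""
    size = len(data) // n_parts
    return [zlib.compress(data[i * size:(i + 1) * size])
            for i in range(n_parts)]

data = bytes(range(256)) * 4
parts = compress_partitions(data, 4)
parts[1] = b"\x00garbled"              # simulate loss/corruption in transit

recovered = []
for i, p in enumerate(parts):
    try:
        recovered.append((i, zlib.decompress(p)))
    except zlib.error:
        pass                           # only this partition is lost
print([i for i, _ in recovered])       # [0, 2, 3]
```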

  3. FITSH- a software package for image processing

    NASA Astrophysics Data System (ADS)

    Pál, András.

    2012-04-01

    In this paper we describe the main features of the software package named FITSH, intended to provide a standalone environment for analysis of data acquired by imaging astronomical detectors. The package both provides utilities for the full pipeline of subsequent related data-processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple-image combinations, spatial transformations and interpolations) and aids the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The set of utilities found in this package is built on top of the commonly used UNIX/POSIX shells (hence the name of the package); therefore, both frequently used and well-documented tools for such environments can be exploited and managing a massive amount of data is rather convenient.
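Aperture photometry, one of the methods FITSH implements, reduces to summing pixels inside an aperture and subtracting a background estimated from a surrounding annulus; a minimal sketch (integer-pixel apertures, mean background, invented function name, not FITSH's code):

```python
def aperture_photometry(image, cx, cy, r_ap, r_in, r_out):
    """Background-subtracted flux: sum pixels within r_ap of (cx, cy),
    subtract the mean sky level measured in the r_in..r_out annulus."""
    ap, bg = [], []
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            if d2 <= r_ap ** 2:
                ap.append(v)
            elif r_in ** 2 <= d2 <= r_out ** 2:
                bg.append(v)
    sky = sum(bg) / len(bg)
    return sum(ap) - sky * len(ap)

image = [[5.0] * 7 for _ in range(7)]   # flat sky
image[3][3] = 105.0                     # a star of flux 100 above sky
print(aperture_photometry(image, 3, 3, 1, 2, 3))  # 100.0
```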

  4. Workspaces in the Semantic Web

    NASA Technical Reports Server (NTRS)

Wolfe, Shawn R.; Keller, Richard M.

    2005-01-01

Due to the recency and relatively limited adoption of Semantic Web technologies, practical issues related to technology scaling have received less attention than foundational issues. Nonetheless, these issues must be addressed if the Semantic Web is to realize its full potential. In particular, we concentrate on the lack of scoping methods that reduce the size of semantic information spaces so they are more efficient to work with and more relevant to an agent's needs. We provide some intuition to motivate the need for such reduced information spaces, called workspaces, give a formal definition, and suggest possible methods of deriving them.
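A workspace in the paper's sense is a scoped subset of a larger semantic information space; one toy formalization over a set of triples (illustrative only, not the paper's formal definition):

```python
def workspace(triples, scope):
    """Derive a workspace: the subset of a semantic information space
    whose triples satisfy a scoping predicate (toy formulation)."""
    return {t for t in triples if scope(t)}

kb = {("Mars", "type", "Planet"),
      ("Phoenix", "landedOn", "Mars"),
      ("Hubble", "type", "Telescope")}

# Scope: everything that mentions Mars
mars_ws = workspace(kb, lambda t: "Mars" in t)
print(len(mars_ws))  # 2
```

An agent then queries the small `mars_ws` instead of the full space, which is the efficiency argument the abstract makes.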

  5. Perspective automated inkless fingerprinting imaging software for fingerprint research.

    PubMed

    Nanakorn, Somsong; Poosankam, Pongsakorn; Mongconthawornchai, Paiboon

    2008-01-01

Fingerprint collection using ink-and-paper images is a conventional method; ink-print and transparent-adhesive-tape techniques are slow and cumbersome. This is a pilot research project for software development aimed at imaging automated, inkless fingerprints using a fingerprint sensor (a development kit of the IT WORKS Company Limited), a PC camera, and a printer. The software was developed to connect with the fingerprint sensor to collect fingerprint images and record them to a hard disk. It was also developed to connect with the PC camera to record a face image of the person being fingerprinted, or identification card images. These images are appropriately arranged in a PDF file prior to printing. This software is able to scan ten fingerprints and store high-quality electronic fingertip images rapidly, producing large, clear images without the dirt of ink or carbon. This fingerprint technology has potential applications in public health and clinical medicine research.

  6. Volume Measurement of Various Tissues Using the Image J Software.

    PubMed

    Rha, Eun Young; Kim, Ji Min; Yoo, Gyeol

    2015-09-01

    Various methods have been introduced to assess the tissue volume because volumetric evaluation is recognized as one of the most important steps in reconstructive surgery. Advanced volume measurement methods proposed recently use three-dimensional images. They are convenient but have drawbacks such as requiring expensive equipment and volume-analysis software. The authors devised a volume measurement method using the Image J software, which is in the public domain and does not require specific devices or software packages. The orbital and breast volumes were measured by our method using Image J data from facial computed tomography (CT) and breast magnetic resonance imaging (MRI). The authors obtained the final volume results, which were similar to the known volume values. The authors propose here a cost-effective, simple, and easily accessible volume measurement method using the Image J software.
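A slice-based volume estimate of the kind described, segmented area per slice multiplied by slice thickness, can be sketched as follows (illustrative; the exact ImageJ workflow differs, and the function name is invented):

```python
def volume_from_slices(masks, pixel_area_mm2, slice_thickness_mm):
    """Estimate a tissue volume from per-slice binary segmentation masks:
    count segmented pixels across all CT/MRI slices, then multiply by
    pixel area and slice thickness (simple voxel-counting sketch)."""
    total_pixels = sum(row.count(1) for mask in masks for row in mask)
    return total_pixels * pixel_area_mm2 * slice_thickness_mm

slices = [[[0, 1], [1, 1]],   # slice 1: 3 segmented pixels
          [[1, 1], [1, 0]]]   # slice 2: 3 segmented pixels
print(volume_from_slices(slices, pixel_area_mm2=0.25,
                         slice_thickness_mm=2.0))  # 6 * 0.25 * 2.0 = 3.0
```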

  7. A Global Workspace perspective on mental disorders

    PubMed Central

    Wallace, Rodrick

    2005-01-01

    Background Recent developments in Global Workspace theory suggest that human consciousness can suffer interpenetrating dysfunctions of mutual and reciprocal interaction with embedding environments which will have early onset and often insidious staged developmental progression, possibly according to a cancer model, in which a set of long-evolved control strategies progressively fails. Methods and results A rate distortion argument implies that, if an external information source carries a damaging 'message', then sufficient exposure to it, particularly during critical developmental periods, is sure to write a sufficiently accurate image of it on mind and body in a punctuated manner so as to initiate or promote similarly progressively punctuated developmental disorder, in essence either a staged failure affecting large-scale brain connectivity, which is the sine qua non of human consciousness, or else damaging the ability of embedding goal contexts to contain conscious dynamics. Conclusion The key intervention, at the population level, is clearly to limit exposure to factors triggering developmental disorders, a question of proper environmental sanitation, in a large sense, primarily a matter of social justice which has long been known to be determined almost entirely by the interactions of cultural trajectory, group power relations, and economic structure, with public policy. Intervention at the individual level appears limited to triggering or extending periods of remission, representing reestablishment of an extensive, but largely unexplored, spectrum of evolved control strategies, in contrast with the far better-understood case of cancer. PMID:16371149

  8. Software Helps Extract Information From Astronomical Images

    NASA Technical Reports Server (NTRS)

    Hartley, Booth; Ebert, Rick; Laughlin, Gaylin

    1995-01-01

    PAC Skyview 2.0 is interactive program for display and analysis of astronomical images. Includes large set of functions for display, analysis and manipulation of images. "Man" pages with descriptions of functions and examples of usage included. Skyview used interactively or in "server" mode, in which another program calls Skyview and executes commands itself. Skyview capable of reading image data files of four types, including those in FITS, S, IRAF, and Z formats. Written in C.

  9. Earth Observation Services (Image Processing Software)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    San Diego State University and Environmental Systems Research Institute, with other agencies, have applied satellite imaging and image processing techniques to geographic information systems (GIS) updating. The resulting images display land use and are used by a regional planning agency for applications like mapping vegetation distribution and preserving wildlife habitats. The EOCAP program provides government co-funding to encourage private investment in, and to broaden the use of NASA-developed technology for analyzing information about Earth and ocean resources.

  10. Software for Viewing Landsat Mosaic Images

    NASA Technical Reports Server (NTRS)

    Watts, Zack; Farve, Catharine L.; Harvey, Craig

    2003-01-01

A Windows-based computer program has been written to enable novice users (especially educators and students) to view images of large areas of the Earth (e.g., the continental United States) generated from image data acquired in the Landsat observations performed circa the year 1990. The large-area images are constructed as mosaics from the original Landsat images, which were acquired in several wavelength bands and each of which spans an area (in effect, one tile of a mosaic) of 0.5° in latitude by 0.6° in longitude. Whereas the original Landsat data are registered on a universal transverse Mercator (UTM) grid, the program converts the UTM coordinates of a mouse pointer in the image to latitude and longitude, which are continuously updated and displayed as the pointer is moved. The mosaic image currently on display can be exported as a Windows bitmap file. Other images (e.g., of state boundaries or interstate highways) can be overlaid on Landsat mosaics. The program interacts with the user via standard toolbar, keyboard, and mouse user interfaces. The program is supplied on a compact disk along with tutorial and educational information.

  11. Development and implementation of software systems for imaging spectroscopy

    USGS Publications Warehouse

    Boardman, J.W.; Clark, R.N.; Mazer, A.S.; Biehl, L.L.; Kruse, F.A.; Torson, J.; Staenz, K.

    2006-01-01

    Specialized software systems have played a crucial role throughout the twenty-five year course of the development of the new technology of imaging spectroscopy, or hyperspectral remote sensing. By their very nature, hyperspectral data place unique and demanding requirements on the computer software used to visualize, analyze, process and interpret them. Often described as a marriage of the two technologies of reflectance spectroscopy and airborne/spaceborne remote sensing, imaging spectroscopy, in fact, produces data sets with unique qualities, unlike previous remote sensing or spectrometer data. Because of these unique spatial and spectral properties hyperspectral data are not readily processed or exploited with legacy software systems inherited from either of the two parent fields of study. This paper provides brief reviews of seven important software systems developed specifically for imaging spectroscopy.
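    One reason hyperspectral data defeat legacy software is that every pixel is a full spectrum rather than a few band values. A standard exploitation technique in this field, the spectral angle mapper (SAM), compares a pixel spectrum to a reference material spectrum by the angle between them; this pure-Python sketch illustrates the measure and is not drawn from any of the seven systems reviewed:

```python
import math

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum.
    Small angles mean similar spectral shape, independent of overall
    brightness (illumination), which is why SAM is popular for mapping."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    # clamp to guard against rounding outside [-1, 1]
    return math.acos(max(-1.0, min(1.0, dot / (norm_p * norm_r))))
```

A pixel that is a scaled copy of the reference (same material, different illumination) scores an angle near zero; orthogonal spectra score pi/2.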

  12. The image related services of the HELIOS software engineering environment.

    PubMed

    Engelmann, U; Meinzer, H P; Schröter, A; Günnel, U; Demiris, A M; Makabe, M; Evers, H; Jean, F C; Degoulet, P

    1995-01-01

    This paper describes the approach of the European HELIOS project to integrate image processing tools into ward information systems. The image processing tools are the result of basic research in image analysis in the Department of Medical and Biological Informatics at the German Cancer Research Center. These tools for the analysis of two-dimensional images and three-dimensional data volumes with 3D reconstruction and visualization are part of the Image Related Services of HELIOS. The HELIOS software engineering environment allows integrated applications to use this image processing functionality.

  13. gr-MRI: A software package for magnetic resonance imaging using software defined radios.

    PubMed

    Hasselwander, Christopher J; Cao, Zhipeng; Grissom, William A

    2016-09-01

    The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events was also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.
PMID:27394165
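    The single-pulse sequence above records free induction decay (FID) signals. As an illustration of what such a recording looks like, here is a minimal model of an FID, a damped oscillation at the resonance offset frequency; the sampling rate, offset, and T2 values are assumptions for the sketch, and this is not gr-MRI code:

```python
import math

def fid(t, f0, t2):
    """Free induction decay: oscillation at offset frequency f0 (Hz),
    damped by transverse relaxation with time constant T2 (s)."""
    return math.exp(-t / t2) * math.cos(2 * math.pi * f0 * t)

# sample the signal as a pulse-acquire sequence would record it
dwell = 1e-4  # 10 kHz sampling interval (assumed)
signal = [fid(i * dwell, f0=50.0, t2=0.02) for i in range(256)]
```

At t = 0 the signal starts at full amplitude; by t = T2 (sample 200 here) the envelope has decayed to 1/e of its initial value.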

  15. gr-MRI: A software package for magnetic resonance imaging using software defined radios

    NASA Astrophysics Data System (ADS)

    Hasselwander, Christopher J.; Cao, Zhipeng; Grissom, William A.

    2016-09-01

    The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events was also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.

  16. The influence of software filtering in digital mammography image quality

    NASA Astrophysics Data System (ADS)

    Michail, C.; Spyropoulou, V.; Kalyvas, N.; Valais, I.; Dimitropoulos, N.; Fountos, G.; Kandarakis, I.; Panayiotakis, G.

    2009-05-01

    Breast cancer is one of the most frequently diagnosed cancers among women. Several techniques have been developed to help in the early detection of breast cancer such as conventional and digital x-ray mammography, positron and single-photon emission mammography, etc. A key advantage in digital mammography is that images can be manipulated as simple computer image files. Thus non-dedicated commercially available image manipulation software can be employed to process and store the images. The image processing tools of the Photoshop (CS 2) software usually incorporate digital filters which may be used to reduce image noise, enhance contrast and increase spatial resolution. However, improving an image quality parameter may result in degradation of another. The aim of this work was to investigate the influence of three sharpening filters (named hereafter sharpen, sharpen more, and sharpen edges) on image resolution and noise. Image resolution was assessed by means of the Modulation Transfer Function (MTF). In conclusion it was found that the correct use of commercial non-dedicated software on digital mammograms may improve some aspects of image quality.
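    Sharpening filters of the kind studied here are typically small convolution kernels. The sketch below applies one common 3x3 sharpening kernel to a grayscale image; the exact kernels behind Photoshop's sharpen, sharpen more, and sharpen edges filters are not published, so this is only representative of the operation:

```python
def sharpen(img):
    """Apply a common 3x3 sharpening kernel to a grayscale image given as
    a list of rows of 8-bit values; border pixels are left unchanged and
    results are clipped to the 0-255 range."""
    k = [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = sum(k[j][i] * img[y + j - 1][x + i - 1]
                      for j in range(3) for i in range(3))
            out[y][x] = min(255, max(0, acc))
    return out
```

On a flat region the kernel leaves values unchanged (weights sum to 1); across an edge it overshoots on the bright side and undershoots on the dark side, which is exactly the resolution gain and noise amplification trade-off the paper measures.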

  17. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is a software system intended for developing and using image-processing application programs in a concurrent computing environment. Designed to shield the programmer from the complexities of concurrent-system architecture, it provides an interactive image-processing environment for the end user. CIPE utilizes the architectural characteristics of a particular concurrent system to maximize efficiency while preserving architectural independence from the user and programmer. CIPE runs on a Mark-IIIfp 8-node hypercube computer and an associated SUN-4 host computer.

  18. Uses of software in digital image analysis: a forensic report

    NASA Astrophysics Data System (ADS)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis, and image authentication. It has wide applications in forensic science, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. In this paper the authors explain these tasks, which fall into three categories: image compression, image enhancement and restoration, and measurement extraction, illustrated with examples such as signature comparison, counterfeit currency comparison, and footwear sole impressions using the software Canvas and Corel Draw.

  19. Single-molecule localization software applied to photon counting imaging.

    PubMed

    Hirvonen, Liisa M; Kilfeather, Tiffany; Suhling, Klaus

    2015-06-01

    Centroiding in photon counting imaging has traditionally been accomplished by a single-step, noniterative algorithm, often implemented in hardware. Single-molecule localization techniques in superresolution fluorescence microscopy are conceptually similar, but use more sophisticated iterative software-based fitting algorithms to localize the fluorophore. Here, we discuss common features and differences between single-molecule localization and photon counting imaging and investigate the suitability of single-molecule localization software for photon event localization. We find that single-molecule localization software packages designed for superresolution microscopy (QuickPALM, rapidSTORM, and ThunderSTORM) can work well when applied to photon counting imaging with a microchannel-plate-based intensified camera system: photon event recognition can be excellent, fixed pattern noise can be low, and the microchannel plate pores can easily be resolved. PMID:26192667
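    The traditional single-step centroiding that the paper contrasts with iterative Gaussian fitting amounts to a centre-of-mass calculation over the photon-event window, as in this sketch:

```python
def centroid(window):
    """Single-step centre-of-mass centroid of a photon-event window
    (a small patch of intensities around a detected event), as used in
    traditional photon counting imaging. Returns (x, y) in pixel units."""
    total = sx = sy = 0.0
    for y, row in enumerate(window):
        for x, v in enumerate(row):
            total += v
            sx += x * v
            sy += y * v
    return sx / total, sy / total
```

Unlike the iterative fits in QuickPALM or rapidSTORM, this runs in one pass, which is why it was historically feasible in hardware; the trade-off is sensitivity to background and window truncation.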

  1. Computer Software Configuration Item-Specific Flight Software Image Transfer Script Generator

    NASA Technical Reports Server (NTRS)

    Bolen, Kenny; Greenlaw, Ronald

    2010-01-01

    A K-shell UNIX script enables the International Space Station (ISS) Flight Control Team (FCT) operators in NASA's Mission Control Center (MCC) in Houston to transfer an entire or partial computer software configuration item (CSCI) from a flight software compact disk (CD) to the onboard Portable Computer System (PCS). The tool is designed to read the content stored on a flight software CD and generate individual CSCI transfer scripts that are capable of transferring the flight software content in a given subdirectory on the CD to the scratch directory on the PCS. The flight control team can then transfer the flight software from the PCS scratch directory to the Electronically Erasable Programmable Read Only Memory (EEPROM) of an ISS Multiplexer/ Demultiplexer (MDM) via the Indirect File Transfer capability. The individual CSCI scripts and the CSCI Specific Flight Software Image Transfer Script Generator (CFITSG), when executed a second time, will remove all components from their original execution. The tool will identify errors in the transfer process and create logs of the transferred software for the purposes of configuration management.

  2. Software Development for Ring Imaging Detector

    NASA Astrophysics Data System (ADS)

    Torisky, Benjamin

    2016-03-01

    Jefferson Lab (Jlab) is performing a large-scale upgrade to their Continuous Electron Beam Accelerator Facility (CEBAF) up to 12 GeV beam. The Large Acceptance Spectrometer (CLAS12) in Hall B is being upgraded and a new Ring Imaging Cherenkov (RICH) detector is being developed to provide better kaon-pion separation throughout the 3 to 12 GeV range. With this addition, when the electron beam hits the target, the resulting pions, kaons, and other particles will pass through a wall of translucent aerogel tiles and create Cherenkov radiation. This light can then be accurately detected by a large array of Multi-Anode PhotoMultiplier Tubes (MA-PMT). I am presenting an update on my work on the implementation of Java based reconstruction programs for the RICH in the CLAS12 main analysis package.

  3. Software development for a Ring Imaging Detector

    NASA Astrophysics Data System (ADS)

    Torisky, Benjamin; Benmokhtar, Fatiha

    2015-04-01

    Jefferson Lab (Jlab) is performing a large-scale upgrade to their Continuous Electron Beam Accelerator Facility (CEBAF) up to 12 GeV beam. The Large Acceptance Spectrometer (CLAS12) in Hall B is being upgraded and a new Ring Imaging CHerenkov (RICH) detector is being developed to provide better kaon-pion separation throughout the 3 to 12 GeV range. With this addition, when the electron beam hits the target, the resulting pions, kaons, and other particles will pass through a wall of translucent aerogel tiles and create Cherenkov radiation. This light can then be accurately detected by a large array of Multi-Anode PhotoMultiplier Tubes (MA-PMT). I am presenting my work on the implementation of Java based reconstruction programs for the RICH in the CLAS12 main analysis package.
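    The kaon-pion separation a RICH provides rests on the Cherenkov relation cos(theta) = 1/(n*beta): at the same momentum the lighter pion is faster and radiates at a larger angle. The sketch below uses an assumed aerogel index of n = 1.05 (a typical aerogel value; the actual CLAS12 tiles may differ):

```python
import math

def cherenkov_angle(p, m, n=1.05):
    """Cherenkov emission angle (radians) for a particle of momentum p
    and mass m (GeV units) in a medium of refractive index n; returns
    None below the Cherenkov threshold (n * beta <= 1)."""
    beta = p / math.sqrt(p * p + m * m)
    if n * beta <= 1.0:
        return None
    return math.acos(1.0 / (n * beta))

M_PION, M_KAON = 0.1396, 0.4937  # masses in GeV/c^2
theta_pi = cherenkov_angle(5.0, M_PION)
theta_k = cherenkov_angle(5.0, M_KAON)
```

At 5 GeV/c the two rings differ by roughly 14 mrad, which is what the MA-PMT array must resolve; slow kaons below threshold radiate no light at all.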

  4. Software to model AXAF image quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees

    1993-01-01

    This draft final report describes the work performed under this delivery order from May 1992 through June 1993. The purpose of this contract was to enhance and develop an integrated optical performance modeling software for complex x-ray optical systems such as AXAF. The GRAZTRACE program developed by the MSFC Optical Systems Branch for modeling VETA-I was used as the starting baseline program. The original program was a large single file program and, therefore, could not be modified very efficiently. The original source code has been reorganized, and a 'Make Utility' has been written to update the original program. The new version of the source code consists of 36 small source files to make it easier for the code developer to manage and modify the program. A user library has also been built and a 'Makelib' utility has been furnished to update the library. With the user library, the users can easily access the GRAZTRACE source files and build a custom library. A user manual for the new version of GRAZTRACE has been compiled. The plotting capability for the 3-D point spread functions and contour plots has been provided in the GRAZTRACE using the graphics package DISPLAY. The Graphics emulator over the network has been set up for programming the graphics routine. The point spread function and the contour plot routines have also been modified to display the plot centroid, and to allow the user to specify the plot range, and the viewing angle options. A Command Mode version of GRAZTRACE has also been developed. More than 60 commands have been implemented in a Code-V like format. The functions covered in this version include data manipulation, performance evaluation, and inquiry and setting of internal parameters. The user manual for these commands has been formatted as in Code-V, showing the command syntax, synopsis, and options. 
An interactive on-line help system for the command mode has also been developed to allow the user to find valid commands and command syntax.

  5. Open environment for image processing and software development

    NASA Astrophysics Data System (ADS)

    Rasure, John R.; Young, Mark

    1992-04-01

    The main goal of the Khoros software project is to create and provide an integrated software development environment for information processing and data visualization. The Khoros software system is now being used as a foundation to improve productivity and promote software reuse in a wide variety of application domains. A powerful feature of the Khoros system is the high-level, abstract visual language that can be employed to significantly boost the productivity of the researcher. Central to the Khoros system is the need for a consistent yet flexible user interface development system that provides cohesiveness to the vast number of programs that make up the Khoros system. Automated tools assist in maintenance as well as development of programs. The software structure that embodies this system provides for extensibility and portability, and allows for easy tailoring to target specific application domains and processing environments. First, an overview of the Khoros software environment is given. Then this paper presents the abstract application programmer interface (API), the data services that are provided in Khoros to support it, and the Khoros visualization and image file format. The authors contend that Khoros is an excellent environment for the exploration and implementation of imaging standards.

  6. Digital hardware and software design for infrared sensor image processing

    NASA Astrophysics Data System (ADS)

    Bekhtin, Yuri; Barantsev, Alexander; Solyakov, Vladimir; Medvedev, Alexander

    2005-06-01

    An example of a digital hardware-and-software complex, consisting of a multi-element matrix sensor and a personal computer with the installed special AMBPCI card, is described. The problem of eliminating so-called fixed pattern noise (FPN) is considered. To improve the current image, the residual FPN is represented as multiplicative noise. A wavelet-based de-noising algorithm using sets of noisy and noise-free image data is applied.
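    Before residual FPN is modeled as multiplicative noise, a common first step for matrix sensors is a two-point non-uniformity correction built from a dark frame and a uniformly illuminated reference frame. This sketch illustrates that standard technique, not the authors' specific pipeline:

```python
def two_point_nuc(raw, dark, flat):
    """Two-point non-uniformity correction for a matrix sensor: the dark
    frame gives each pixel's offset, and the flat (uniform illumination)
    frame gives its relative gain. Frames are lists of rows."""
    n = len(flat) * len(flat[0])
    # mean pixel response to the uniform source
    mean_resp = sum(f - d for fr, dr in zip(flat, dark)
                    for f, d in zip(fr, dr)) / n
    # subtract the offset, then divide by the normalized per-pixel gain
    return [[(r - d) * mean_resp / (f - d)
             for r, d, f in zip(rr, dr, fr)]
            for rr, dr, fr in zip(raw, dark, flat)]
```

If a pixel obeys raw = offset + gain * scene, the correction recovers the scene exactly (up to the mean gain), removing the fixed pattern so only the residual, scene-dependent component is left for the wavelet stage.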

  7. Software to model AXAF-I image quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Feng, Chen

    1995-01-01

    A modular user-friendly computer program for the modeling of grazing-incidence type x-ray optical systems has been developed. This comprehensive computer software GRAZTRACE covers the manipulation of input data, ray tracing with reflectivity and surface deformation effects, convolution with x-ray source shape, and x-ray scattering. The program also includes the capabilities for image analysis, detector scan modeling, and graphical presentation of the results. A number of utilities have been developed to interface the predicted Advanced X-ray Astrophysics Facility-Imaging (AXAF-I) mirror structural and thermal distortions with the ray-trace. This software is written in FORTRAN 77 and runs on a SUN/SPARC station. An interactive command mode version and a batch mode version of the software have been developed.

  8. SIMA: Python software for analysis of dynamic fluorescence imaging data

    PubMed Central

    Kaifosh, Patrick; Zaremba, Jeffrey D.; Danielson, Nathan B.; Losonczy, Attila

    2014-01-01

    Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/. PMID:25295002
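    The last step in the pipeline described, extracting signals from segmented ROIs, reduces each movie frame to a summary value inside a binary mask. A minimal sketch of that operation (illustrative only, not SIMA's actual API):

```python
def roi_signal(frames, mask):
    """Mean fluorescence inside a binary ROI mask for each frame of an
    imaging movie; frames and mask are lists of rows. Returns one value
    per frame, i.e. the ROI's signal time series."""
    npix = sum(map(sum, mask))
    return [sum(v for row, mrow in zip(frame, mask)
                  for v, m in zip(row, mrow) if m) / npix
            for frame in frames]
```

The per-frame means form the time series that downstream analysis (e.g. event detection) consumes; motion correction must run first so the mask stays aligned with the imaged field.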

  10. Stromatoporoid biometrics using image analysis software: A first order approach

    NASA Astrophysics Data System (ADS)

    Wolniewicz, Pawel

    2010-04-01

    Strommetric is a new image analysis computer program that performs morphometric measurements of stromatoporoid sponges. The program measures 15 features of skeletal elements (pillars and laminae) visible in both longitudinal and transverse thin sections. The software is implemented in C++, using the Open Computer Vision (OpenCV) library. The image analysis system distinguishes skeletal elements from sparry calcite using Otsu's method for image thresholding. More than 150 photos of thin sections were used as a test set, from which 36,159 measurements were obtained. The software provided about one hundred times more data than the method applied until now. The data obtained are reproducible, even if the work is repeated by different workers. Thus the method makes biometric studies of stromatoporoids objective.
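    Otsu's method, which Strommetric uses to separate skeletal elements from sparry calcite, picks the grey-level threshold that maximises the between-class variance of the image histogram. A compact sketch of the algorithm (independent of the OpenCV implementation the program actually calls):

```python
def otsu_threshold(hist):
    """Otsu's method: choose the grey level t that maximises the
    between-class variance w0*w1*(mu0 - mu1)^2 for a histogram given as
    a list of bin counts. Pixels <= t form class 0."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w0 += h                      # class-0 pixel count
        if w0 == 0:
            continue
        w1 = total - w0              # class-1 pixel count
        if w1 == 0:
            break
        sum0 += t * h
        mu0 = sum0 / w0              # class means
        mu1 = (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

For a bimodal histogram, typical of dark skeletal elements against bright calcite cement, the chosen threshold falls in the valley between the two modes.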

  11. NASA's MERBoard: An Interactive Collaborative Workspace Platform. Chapter 4

    NASA Technical Reports Server (NTRS)

    Trimble, Jay; Wales, Roxana; Gossweiler, Rich

    2003-01-01

    This chapter describes the ongoing process by which a multidisciplinary group at NASA's Ames Research Center is designing and implementing a large interactive work surface called the MERBoard Collaborative Workspace. A MERBoard system involves several distributed, large, touch-enabled, plasma display systems with custom MERBoard software. A centralized server and database back the system. We are continually tuning MERBoard to support over two hundred scientists and engineers during the surface operations of the Mars Exploration Rover Missions. These scientists and engineers come from various disciplines and are working both in small and large groups over a span of space and time. We describe the multidisciplinary, human-centered process by which this MERBoard system is being designed, the usage patterns and social interactions that we have observed, and issues we are currently facing.

  12. Data to Pictures to Data: Outreach Imaging Software and Metadata

    NASA Astrophysics Data System (ADS)

    Levay, Z.

    2011-07-01

    A convergence between astronomy science and digital photography has enabled a steady stream of visually rich imagery from state-of-the-art data. The accessibility of hardware and software has facilitated an explosion of astronomical images for outreach, from space-based observatories, ground-based professional facilities and among the vibrant amateur astrophotography community. Producing imagery from science data involves a combination of custom software to understand FITS data (FITS Liberator), off-the-shelf, industry-standard software to composite multi-wavelength data and edit digital photographs (Adobe Photoshop), and application of photo/image-processing techniques. Some additional effort is needed to close the loop and enable this imagery to be conveniently available for various purposes beyond web and print publication. The metadata paradigms in digital photography are now complying with FITS and science software to carry information such as keyword tags and world coordinates, enabling these images to be usable in more sophisticated, imaginative ways exemplified by Sky in Google Earth and World Wide Telescope.
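    The FITS data that tools like FITS Liberator read carry their metadata as 80-character header cards of the form `KEYWORD = value / comment`. The parser below is a deliberately minimal sketch for the simple card forms; real readers (e.g. astropy) handle many more cases, including '/' characters inside string values, which this version does not:

```python
def parse_card(card):
    """Parse one 80-character FITS header card into (keyword, value) for
    the simple value forms: string, logical, integer, float. Cards with
    no '= ' value indicator (COMMENT, HISTORY, END) return value None."""
    keyword = card[:8].strip()
    if card[8:10] != "= ":
        return keyword, None
    body = card[10:].split("/")[0].strip()  # drop the trailing comment
    if body.startswith("'"):                # quoted string value
        return keyword, body.strip("'").strip()
    if body in ("T", "F"):                  # FITS logical
        return keyword, body == "T"
    try:
        return keyword, int(body)
    except ValueError:
        return keyword, float(body)
```

World-coordinate keywords such as CRVAL1 parse as floats, which is the information photo-metadata paradigms now carry alongside keyword tags.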

  13. Image Fusion Software in the Clearpem-Sonic Project

    NASA Astrophysics Data System (ADS)

    Pizzichemi, M.; di Vara, N.; Cucciati, G.; Ghezzi, A.; Paganoni, M.; Farina, F.; Frisch, B.; Bugalho, R.

    2012-08-01

    ClearPEM-Sonic is a mammography scanner that combines Positron Emission Tomography with 3D ultrasound echographic and elastographic imaging. It has been developed to improve early stage detection of breast cancer by combining metabolic and anatomical information. The PET system has been developed by the Crystal Clear Collaboration, while the 3D ultrasound probe has been provided by SuperSonic Imagine. In this framework, the visualization and fusion software is an essential tool for the radiologists in the diagnostic process. This contribution discusses the design choices, the issues faced during the implementation, and the commissioning of the software tools developed for ClearPEM-Sonic.

  14. Performance of Personal Workspace Controls Final Report

    SciTech Connect

    Rubinstein, Francis; Kiliccote, Sila; Loffeld, John; Pettler,Pete; Snook, Joel

    2004-12-01

    One of the key deliverables for the DOE-funded controls research at LBNL for FY04 was the development of a prototype Personal Workspace Control system. The successful development of this system is a critical milestone for the LBNL Lighting Controls Research effort because this system demonstrates how IBECS can add value to today's Task Ambient lighting systems. LBNL has argued that by providing both the occupant and the facilities manager with the ability to precisely control the operation of overhead lighting and all task lighting in a coordinated manner, task ambient lighting can optimize energy performance and occupant comfort simultaneously [Reference Task Ambient Foundation Document]. The Personal Workspace Control system is the application of IBECS to this important lighting problem. This report discusses the development of the Personal Workspace Control to date including descriptions of the different fixture types that have been converted to IBECS operation and a detailed description of the operation of the PWC Scene Controller, which provides the end user with precise control of his task ambient lighting system. The objective, from the Annual Plan, is to demonstrate improvements in efficiency, lighting quality and occupant comfort realized using Personal Workspace Controls (PWC) designed to optimize the delivery of lighting to the individual's workstation regardless of which task-ambient lighting solution is chosen. The PWC will be capable of controlling floor-mounted, desk lamps, furniture-mounted and overhead lighting fixtures from a personal computer and handheld remote. The PWC will use an environmental sensor to automatically monitor illuminance, temperature and occupancy and to appropriately modulate ambient lighting according to daylight availability and to switch off task lighting according to local occupancy.
[Adding occupancy control to the system would blunt the historical criticism of occupant-controlled lighting - the tendency of the occupant
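    The modulation behaviour described, topping ambient light up toward a target illuminance according to daylight availability and switching off when the workstation is unoccupied, can be sketched as a simple control rule. The 500 lux setpoint is an assumption for illustration, not a PWC specification:

```python
def ambient_level(daylight_lux, occupied, setpoint_lux=500, max_output=1.0):
    """Dimming fraction (0.0-1.0) for the ambient lighting layer: make up
    the difference between measured daylight and the illuminance setpoint
    while the workstation is occupied; switch off when unoccupied."""
    if not occupied:
        return 0.0
    deficit = max(0.0, setpoint_lux - daylight_lux)
    return min(max_output, deficit / setpoint_lux)
```

At zero daylight the ambient layer runs at full output; at or above the setpoint it dims to zero, which is the daylight-harvesting energy saving the report targets.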

  15. Parallel algorithm for computing 3-D reachable workspaces

    NASA Astrophysics Data System (ADS)

    Alameldin, Tarek K.; Sobh, Tarek M.

    1992-03-01

    The problem of computing the 3-D workspace for redundant articulated chains has applications in a variety of fields such as robotics, computer aided design, and computer graphics. The computational complexity of the workspace problem is at least NP-hard. The recent advent of parallel computers has made practical solutions for the workspace problem possible. Parallel algorithms for computing the 3-D workspace for redundant articulated chains with joint limits are presented. The first phase of these algorithms computes workspace points in parallel. The second phase uses workspace points that are computed in the first phase and fits a 3-D surface around the volume that encompasses the workspace points. The second phase also maps the 3-D points into slices, uses region filling to detect the holes and voids in the workspace, extracts the workspace boundary points by testing the neighboring cells, and tiles the consecutive contours with triangles. The proposed algorithms are efficient for computing the 3-D reachable workspace for articulated linkages, not only those with redundant degrees of freedom but also those with joint limits.
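    The hole-detection step in the second phase relies on region filling: empty cells reachable from the slice boundary lie outside the workspace, while enclosed empty regions are holes. A sketch of that idea on a single binary slice (1 = reachable cell, 0 = empty):

```python
from collections import deque

def count_holes(grid):
    """Count enclosed empty regions (holes) in a binary workspace slice.
    Empty cells 4-connected to the border are outside, not holes."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]

    def fill(sy, sx):
        """Breadth-first region fill over empty cells."""
        q = deque([(sy, sx)])
        seen[sy][sx] = True
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                        and grid[ny][nx] == 0:
                    seen[ny][nx] = True
                    q.append((ny, nx))

    # flush every empty region reachable from the slice border (outside)
    for y in range(h):
        for x in range(w):
            if (y in (0, h - 1) or x in (0, w - 1)) \
                    and grid[y][x] == 0 and not seen[y][x]:
                fill(y, x)
    # remaining unvisited empty regions are enclosed holes
    holes = 0
    for y in range(h):
        for x in range(w):
            if grid[y][x] == 0 and not seen[y][x]:
                holes += 1
                fill(y, x)
    return holes
```

In the parallel algorithms described, each slice can be processed independently, which is what makes the second phase amenable to parallel hardware.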

  16. The application of image processing software: Photoshop in environmental design

    NASA Astrophysics Data System (ADS)

    Dong, Baohua; Zhang, Chunmi; Zhuo, Chen

    2011-02-01

    In the process of environmental design and creation, the design sketch holds a very important position: it not only illuminates the design's idea and concept but also shows the design's visual effects to the client. In the field of environmental design, computer-aided design has brought significant improvement. Many types of specialized design software have been developed for producing environmental performance drawings and for artistic post-processing. With the use of this software, working efficiency has greatly increased and drawings have become more specific and more specialized. By analyzing the application of the Photoshop image processing software in environmental design, and by comparing traditional hand drawing with drawing using modern technology, this essay explores how computer technology can play a bigger role in environmental design.

  17. An ion beam analysis software based on ImageJ

    NASA Astrophysics Data System (ADS)

    Udalagama, C.; Chen, X.; Bettiol, A. A.; Watt, F.

    2013-07-01

    The suite of techniques (RBS, STIM, ERDS, PIXE, IL, IF, …) available in ion beam analysis yields a variety of rich information. Typically, after the initial challenge of acquiring data, we are faced with the task of extracting relevant information or presenting the data in a format with the greatest impact. This process sometimes requires developing new software tools. When faced with such situations, the usual practice at the Centre for Ion Beam Applications (CIBA) in Singapore has been to use our computational expertise to develop ad hoc software tools as and when we need them. It then became apparent that the whole ion beam community could benefit from such tools; specifically, from a common software toolset that can be developed and maintained by everyone, with freedom to use and allowance to modify. In addition to the benefits of ready-made tools and sharing the onus of development, this also opens up the possibility for collaborators to access and analyse ion beam data without having to depend on an ion beam lab. This has the virtue of making ion beam techniques more accessible to a broader scientific community. We have identified ImageJ as an appropriate software base on which to develop such a common toolset. In addition to being in the public domain and being set up for collaborative tool development, ImageJ is accompanied by hundreds of modules (plugins) that allow great breadth in analysis. The present work is the first step towards integrating ion beam analysis into ImageJ. Some of the features of the current version of the ImageJ 'ion beam' plugin are: (1) reading list-mode or event-by-event files, (2) energy gates/sorts, (3) sort stacks, (4) colour function, (5) real-time map updating, (6) real-time colour updating and (7) median and average map creation.

  18. Software for visualization, analysis, and manipulation of laser scan images

    NASA Astrophysics Data System (ADS)

    Burnsides, Dennis B.

    1997-03-01

    The recent introduction of laser surface scanning to scientific applications presents a challenge to computer scientists and engineers. Full utilization of this two-dimensional (2-D) and three-dimensional (3-D) data requires advances in techniques and methods for data processing and visualization. This paper explores the development of software to support the visualization, analysis and manipulation of laser scan images. Specific examples presented are from on-going efforts at the Air Force Computerized Anthropometric Research and Design (CARD) Laboratory.

  19. Parallel-Processing Software for Creating Mosaic Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric

    2008-01-01

    A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
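
    A minimal sketch of the slice-per-CPU idea follows, with a stand-in for the warping step; the mosaic dimensions and the pixel-filling rule are invented, and the real program's camera model and pixel-correlation logic are not shown.

```python
# Toy sketch: divide the output mosaic into contiguous row slices, let each
# worker "warp" pixels into its slice, then gather the parts into the mosaic.
from concurrent.futures import ProcessPoolExecutor

HEIGHT, WIDTH = 8, 16            # invented toy mosaic dimensions

def row_slices(height, workers):
    """Split mosaic rows into one contiguous slice per worker."""
    step = -(-height // workers)  # ceiling division
    return [(i, min(i + step, height)) for i in range(0, height, step)]

def fill_slice(bounds):
    """Stand-in for warping source pixels into one slice of the mosaic."""
    start, stop = bounds
    rows = [[(r * WIDTH + c) % 256 for c in range(WIDTH)]
            for r in range(start, stop)]
    return start, rows

def build_mosaic(workers=4):
    mosaic = [None] * HEIGHT
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for start, rows in pool.map(fill_slice, row_slices(HEIGHT, workers)):
            mosaic[start:start + len(rows)] = rows   # gather into the mosaic
    return mosaic
```

As the abstract notes, total time then scales with the number and speed of the CPUs plus whatever the data-staging mechanism (local or remote) costs.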

  20. Software components for medical image visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Starreveld, Yves P.; Gobbi, David G.; Finnis, Kirk; Peters, Terence M.

    2001-05-01

    Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy-to-understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments, make it desirable to have a hardware- and operating-system-independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL-based Visualization Toolkit as a base, we have developed a set of components that implement the above-mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte-compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics Iris, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system and a frame-based stereotaxy neurosurgery planning tool.
The frame-based stereotaxy module has been

  1. Woods Hole Image Processing System Software implementation; using NetCDF as a software interface for image processing

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    The Branch of Atlantic Marine Geology has been involved in the collection, processing and digital mosaicking of high-, medium- and low-resolution side-scan sonar data during the past 6 years. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. With the need to process side-scan data in the field, and with the increased power and reduced cost of workstations, a need was identified for an image processing package on a UNIX-based computer system that could be utilized in the field as well as be more generally available to Branch personnel. This report describes the initial development of that package, referred to as the Woods Hole Image Processing System (WHIPS). The software was developed using the Unidata NetCDF software interface to allow data to be more readily portable between different computer operating systems.

  2. Development of Software to Model AXAF-I Image Quality

    NASA Technical Reports Server (NTRS)

    Ahmad, Anees; Hawkins, Lamar

    1996-01-01

    This draft final report describes the work performed under delivery order number 145 from May 1995 through August 1996. The scope of work included a number of software development tasks for the performance modeling of AXAF-I. A number of new capabilities and functions have been added to the GT software, which is the command-mode version of the GRAZTRACE software originally developed by MSFC. A structural data interface has been developed for the EAL (formerly SPAR) finite element analysis (FEA) program, which is being used by the MSFC Structural Analysis group for the analysis of AXAF-I. This interface utility can read the structural deformation file from EAL and other finite element analysis programs such as NASTRAN and COSMOS/M, and convert the data to a format suitable for deformation ray-tracing to predict the image quality of a distorted mirror. The utility also provides for expanding the data from finite element models assuming 180-degree symmetry. It has been used to predict image characteristics for the AXAF-I HRMA when subjected to gravity effects in the horizontal x-ray ground test configuration. The development of the metrology data processing interface software has also been completed. It can read the HDOS FITS-format surface map files, manipulate and filter the metrology data, and produce a deformation file that can be used by GT for ray-tracing the mirror surface figure errors. This utility has been used to determine the optimum alignment (axial spacing and clocking) for the four pairs of AXAF-I mirrors. Based on this optimized alignment, the geometric images and effective focal lengths for the as-built mirrors were predicted to cross-check the results obtained by Kodak.

  3. Demineralization Depth Using QLF and a Novel Image Processing Software.

    PubMed

    Wu, Jun; Donly, Zachary R; Donly, Kevin J; Hackmyer, Steven

    2010-01-01

    Quantitative Light-Induced Fluorescence (QLF) has been widely used to detect tooth demineralization indicated by fluorescence loss with respect to surrounding sound enamel. The correlation between fluorescence loss and demineralization depth is not fully understood. The purpose of this project was to study this correlation in order to estimate demineralization depth. Extracted teeth were collected. Artificial caries-like lesions were created and imaged with QLF. Novel image processing software was developed to measure the largest percent of fluorescence loss in the region of interest. All teeth were then sectioned and imaged by polarized light microscopy. The largest depth of demineralization was measured using NIH ImageJ software. The statistical linear regression method was applied to analyze these data. The linear regression model was Y = 0.32X + 0.17, where X was the percent loss of fluorescence and Y was the depth of demineralization. The correlation coefficient was 0.9696. The two-tailed t-test for the coefficient was 7.93, indicating a P-value of .0014. The F test for the entire model was 62.86, which gives a P-value of .0013. The results indicated a statistically significant linear correlation between the percent loss of fluorescence and the depth of enamel demineralization.
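
    The reported fit (Y = 0.32X + 0.17, r = 0.9696) can be reproduced in form, though not in data, with ordinary least squares. The sketch below uses made-up points and mirrors only the method, not the study's measurements.

```python
# Ordinary least squares for depth = slope * fluorescence_loss + intercept,
# plus the Pearson correlation coefficient. Data supplied by the caller.
import math

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / math.sqrt(sxx * syy)   # correlation coefficient
    return slope, intercept, r
```

Feeding in paired (percent fluorescence loss, lesion depth) measurements would yield the study's coefficients; the t- and F-statistics quoted above require the residual variance as well.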

  5. Software for Verifying Image-Correlation Tie Points

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Yagi, Gary

    2008-01-01

    A computer program enables assessment of the quality of tie points in the image-correlation processes of the software described in the immediately preceding article. Tie points are computed in mappings between corresponding pixels in the left and right images of a stereoscopic pair. The mappings are sometimes not perfect because image data can be noisy and parallax can cause some points to appear in one image but not the other. The present computer program relies on the availability of a left-right correlation map in addition to the usual right-left correlation map. The additional map must be generated, which doubles the processing time. Such increased time can now be afforded in the data-processing pipeline, since the time for map generation has been reduced from about 60 to 3 minutes by the parallelization discussed in the previous article. Parallel cluster processing, therefore, enabled this better science result. The first mapping is typically from a point (denoted by coordinates x,y) in the left image to a point (x',y') in the right image. The second mapping is from (x',y') in the right image to some point (x",y") in the left image. If (x,y) and (x",y") are identical, then the mapping is considered perfect. The perfect-match criterion can be relaxed by introducing an error window that allows for round-off error and a small amount of noise. The mapping procedure can be repeated until all points in each image not connected to points in the other image are eliminated, so that what remains are verified correlation data.
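
    The left-right/right-left consistency test described above might be sketched as follows; the dictionary representation of the correlation maps and the window parameter are illustrative assumptions, not the program's actual data structures.

```python
# Sketch of tie-point verification: follow the left-to-right mapping, then
# the right-to-left mapping, and keep the tie point only if it returns to
# within an error window of where it started.
def verify_tie_points(lr_map, rl_map, window=1):
    """lr_map / rl_map: dicts mapping pixel (x, y) to pixel (x, y)."""
    verified = {}
    for (x, y), (xp, yp) in lr_map.items():
        back = rl_map.get((xp, yp))
        if back is None:
            continue  # point visible in only one image (noise, parallax)
        if abs(back[0] - x) <= window and abs(back[1] - y) <= window:
            verified[(x, y)] = (xp, yp)
    return verified
```

Setting `window=0` enforces the perfect-match criterion; a small positive window admits round-off error and noise, as the abstract describes.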

  6. Software and Algorithms for Biomedical Image Data Processing and Visualization

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Lambert, James; Lam, Raymond

    2004-01-01

    A new software package equipped with novel image processing algorithms and graphical-user-interface (GUI) tools has been designed for automated analysis and processing of large amounts of biomedical image data. The software, called PlaqTrak, has been used specifically for analysis of plaque on the teeth of patients. New algorithms have been developed and implemented to segment teeth of interest from surrounding gum, and a real-time image-based morphing procedure is used to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The PlaqTrak system integrates these components into a single software suite with an easy-to-use GUI (see Figure 1) that allows users to do an end-to-end run of a patient's record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image. The automated and accurate processing of the captured images to segment each tooth [see Figure 2(a)] and then detect plaque on a tooth-by-tooth basis is a critical component of the PlaqTrak system for conducting clinical trials and analysis with minimal human intervention. These features offer distinct advantages over other competing systems that analyze groups of teeth or synthetic teeth. PlaqTrak divides each segmented tooth into eight regions using an advanced graphics morphing procedure [see results on a chipped tooth in Figure 2(b)], and a pattern recognition classifier is then used to locate plaque [red regions in Figure 2(d)] and enamel regions. The morphing allows analysis within regions of teeth, thereby facilitating detailed statistical analysis such as the amount of plaque present on the biting surfaces of teeth.
This software system is applicable to a host of biomedical applications, such as cell analysis and life detection, or robotic applications, such

  7. Special Software for Planetary Image Processing and Research

    NASA Astrophysics Data System (ADS)

    Zubarev, A. E.; Nadezhdina, I. E.; Kozlova, N. A.; Brusnikin, E. S.; Karachevtseva, I. P.

    2016-06-01

    Special modules for photogrammetric processing of remote sensing data were developed that provide the opportunity to effectively organize and optimize planetary studies. The commercial software package PHOTOMOD™ is used as the basic application. Special modules were created to perform various types of data processing: calculation of preliminary navigation parameters, calculation of shape parameters of a celestial body, global-view image orthorectification, and estimation of Sun illumination and Earth visibility from the planetary surface. For photogrammetric processing, different types of data have been used, including images of the Moon, Mars, Mercury, Phobos, the Galilean satellites and Enceladus obtained by frame or push-broom cameras. We used modern planetary data, including images taken over many years from orbital flight paths with various illumination and resolution, as well as images obtained by planetary rovers from the surface. Planetary image processing is a complex task that can take from a few months to years. We present an efficient pipeline procedure that makes it possible to obtain different data products and supports the long path from planetary images to celestial body maps. The resulting data - new three-dimensional control point networks, elevation models and orthomosaics - supported accurate map production: a new Phobos atlas (Karachevtseva et al., 2015) and various thematic maps derived from studies of the planetary surface (Karachevtseva et al., 2016a).

  8. 'Face value': new medical imaging software in commercial view.

    PubMed

    Coopmans, Catelijne

    2011-04-01

    Based on three ethnographic vignettes describing the engagements of a small start-up company with prospective competitors, partners and customers, this paper shows how commercial considerations are folded into the ways visual images become 'seeable'. When company members mount demonstrations of prototype mammography software, they seek to generate interest but also to protect their intellectual property. Pivotal to these efforts to manage revelation and concealment is the visual interface, which is variously performed as obstacle and ally in the development of a profitable product. Using the concept of 'face value', the paper seeks to develop further insight into contemporary dynamics of seeing and showing by tracing the way techno-visual presentations and commercial considerations become entangled in practice. It also draws attention to the salience and significance of enactments of surface and depth in image-based practices. PMID:21998921

  9. Crystallographic Image Processing Software for Scanning Probe Microscopists

    NASA Astrophysics Data System (ADS)

    Plachinda, Pavel; Moon, Bill; Moeck, Peter

    2010-03-01

    Following the common practice of structural electron crystallography, scanning probe microscopy (SPM) images can be processed "crystallographically" [1,2]. An estimate of the point spread function of the SPM can be obtained and subsequently its influence removed from the images. Also, a difference Fourier synthesis can be calculated in order to enhance the visibility of structural defects. We are currently in the process of developing dedicated PC-based software for the wider SPM community. [1] P. Moeck, B. Moon Jr., M. Abdel-Hafiez, and M. Hietschold, Proc. NSTI 2009, Houston, May 3-7, 2009, Vol. I (2009) 314-317 (ISBN: 978-1-4398-1782-7). [2] P. Moeck, M. Toader, M. Abdel-Hafiez, and M. Hietschold, Proc. 2009 International Conference on Frontiers of Characterization and Metrology for Nanoelectronics, May 11-14, 2009, Albany, New York, Best Paper Award.

  11. Reducing depth uncertainty in large surgical workspaces, with applications to veterinary medicine

    NASA Astrophysics Data System (ADS)

    Audette, Michel A.; Kolahi, Ahmad; Enquobahrie, Andinet; Gatti, Claudio; Cleary, Kevin

    2010-02-01

    This paper presents ongoing research that addresses uncertainty along the Z-axis in image-guided surgery, for applications to large surgical workspaces, including those found in veterinary medicine. Veterinary medicine lags human medicine in using image guidance, despite MR and CT scanning of animals. The positional uncertainty of a surgical tracking device can be modeled as an octahedron with one long axis coinciding with the depth axis of the sensor, where the short axes are determined by pixel resolution and workspace dimensions. The further a 3D point is from this device, the more elongated is this long axis, and the greater the uncertainty along Z of this point's position, in relation to its components along X and Y. Moreover, for a triangulation-based tracker, position error degrades with the square of distance. Our approach is to use two or more Micron Trackers that communicate with each other, combined with flexible positioning. Prior knowledge of the type of surgical procedure and, if applicable, the species of animal that determines the scale of the workspace would allow the surgeon to pre-operatively configure the trackers in the OR for optimal accuracy. Our research also leverages the open-source Image-Guided Surgery Toolkit (IGSTK).
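
    One way to combine depth estimates from two or more trackers, consistent with the stated error model (error growing with the square of distance), is inverse-variance weighting. The constant k, the function name, and the data layout below are invented for illustration; they are not from the paper.

```python
# Hypothetical fusion sketch: each tracker's depth standard deviation is
# modeled as sigma = k * d^2 (error grows with the square of distance d),
# and estimates are combined by inverse-variance weighting.
def fuse_depth(estimates, k=1e-3):
    """estimates: list of (measured_depth, distance_to_tracker) pairs."""
    weights = [1.0 / (k * d * d) ** 2 for _, d in estimates]  # 1 / sigma^2
    total = sum(weights)
    return sum(w * z for (z, _), w in zip(estimates, weights)) / total
```

With this weighting, a tracker placed close to the surgical target dominates the fused estimate, which is the motivation for the paper's flexible pre-operative tracker placement.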

  12. Reachable Workspace in Facioscapulohumeral muscular dystrophy (FSHD) by Kinect

    PubMed Central

    Han, Jay J.; Kurillo, Gregorij; Abresch, Richard T.; de Bie, Evan; Nicorici, Alina; Bajcsy, Ruzena

    2014-01-01

    Introduction A depth-ranging sensor (Kinect) based upper extremity motion analysis system was applied to determine the spectrum of reachable workspace encountered in facioscapulohumeral muscular dystrophy (FSHD). Methods Reachable workspaces were obtained from 22 individuals with FSHD and 24 age- and height-matched healthy controls. To allow comparison, total and quadrant reachable workspace relative surface areas (RSA) were obtained by normalizing the acquired reachable workspace by each individual’s arm length. Results Significantly contracted reachable workspace and reduced RSAs were noted for the FSHD cohort compared to controls (0.473±0.188 vs. 0.747±0.082; P<0.0001). With worsening upper extremity function as categorized by the FSHD evaluation subscale II+III, the upper quadrant RSAs decreased progressively, while the lower quadrant RSAs were relatively preserved. There were no side-to-side differences in reachable workspace based on hand-dominance. Discussion This study demonstrates the feasibility and potential of using an innovative Kinect-based reachable workspace outcome measure in FSHD. PMID:24828906

  13. Development of image-processing software for automatic segmentation of brain tumors in MR images.

    PubMed

    Vijayakumar, C; Gharpure, Damayanti Chandrashekhar

    2011-07-01

    Most of the commercially available software for brain tumor segmentation has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have developed an image-analysis software package called 'Prometheus,' which performs neural-system-based segmentation operations on MR images using pre-trained information. The software also has the capability to improve its segmentation performance by using the training module of the neural system. The aim of this article is to present the design and modules of this software. The segmentation module of Prometheus can be used primarily for image analysis of MR images. Prometheus was validated against manual segmentation by a radiologist; its mean sensitivity and specificity were found to be 85.71±4.89% and 93.2±2.87%, respectively. Similarly, the mean segmentation accuracy and mean correspondence ratio were found to be 92.35±3.37% and 0.78±0.046, respectively. PMID:21897560
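
    The sensitivity and specificity quoted above come from comparing the automatic segmentation against a manual reference, pixel by pixel. This generic sketch of that comparison is not the Prometheus code; the flat 0/1 label lists are an assumption for illustration.

```python
# Compute sensitivity and specificity of a predicted binary segmentation
# against a reference (e.g., a radiologist's manual mask), per pixel.
def confusion_stats(pred, truth):
    """pred, truth: equal-length sequences of 0/1 pixel labels."""
    tp = sum(p and t for p, t in zip(pred, truth))           # true positives
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    fp = sum(p and (not t) for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    sensitivity = tp / (tp + fn)   # fraction of tumor pixels found
    specificity = tn / (tn + fp)   # fraction of non-tumor pixels rejected
    return sensitivity, specificity
```

Averaging these per-case values over a validation set yields summary figures of the kind reported in the abstract.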

  14. Predictive images of postoperative levator resection outcome using image processing software

    PubMed Central

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    Purpose This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller’s muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Conclusion Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. PMID:27757008

  15. GelClust: a software tool for gel electrophoresis images analysis and dendrogram generation.

    PubMed

    Khakabimamaghani, Sahand; Najafi, Ali; Ranjbar, Reza; Raam, Monireh

    2013-08-01

    This paper presents GelClust, new software designed for processing gel electrophoresis images and generating the corresponding phylogenetic trees. Unlike most commercial and non-commercial related software, GelClust is very user-friendly and guides the user from image to dendrogram through seven simple steps. Furthermore, the software, which is implemented in the C# programming language under the Windows operating system, is more accurate than similar software regarding image processing and is the only software able to detect and correct gel 'smile' effects completely automatically. These claims are supported with experiments.
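
    The image-to-dendrogram pipeline ends in hierarchical clustering of the lanes' band patterns. Below is a minimal single-linkage sketch over invented binary band-presence vectors; GelClust's actual distance measure and linkage method may differ, and the lane names are made up.

```python
# Toy single-linkage agglomerative clustering: lanes are binary vectors of
# band presence, compared with Hamming distance; the merge order is the
# information a dendrogram would draw.
def distance(a, b):
    return sum(x != y for x, y in zip(a, b))  # Hamming distance

def single_linkage(lanes):
    """Repeatedly merge the two closest clusters; return the merge order."""
    clusters = {name: [name] for name in lanes}
    merges = []
    while len(clusters) > 1:
        pairs = [(min(distance(lanes[a], lanes[b])
                      for a in ca for b in cb), ka, kb)
                 for i, (ka, ca) in enumerate(clusters.items())
                 for kb, cb in list(clusters.items())[i + 1:]]
        d, ka, kb = min(pairs)
        clusters[ka + "+" + kb] = clusters.pop(ka) + clusters.pop(kb)
        merges.append((ka, kb, d))
    return merges
```

Each tuple records which clusters merged and at what distance, i.e. the heights of the dendrogram's internal nodes.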

  17. Global workspace model of consciousness and its electromagnetic correlates

    PubMed Central

    Prakash, Ravi; Prakash, Om; Prakash, Shashi; Abhishek, Priyadarshi; Gandotra, Sachin

    2008-01-01

    The global workspace model of consciousness was proposed in its elementary framework by Baars in 1982. Since its inception, there have been many speculations about and modifications of this theory, but the central theme has remained the same: the global availability of information in the brain. However, the present understanding of the origin of this global workspace and its mechanism of operation is still deficient. One of the less-studied candidates for this global workspace is the electromagnetic field of the brain. The present work is a brief review of the global workspace model, in terms of its theoretical framework and neuroimaging evidence. Subsequently, we turn towards another broad group of theories of consciousness, the electromagnetic field theories. We then proceed to highlight some electromagnetic correlates derived from these theories for this global access phenomenon. PMID:19893660

  18. Software defined multi-spectral imaging for Arctic sensor networks

    NASA Astrophysics Data System (ADS)

    Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi

    2016-05-01

    Availability of off-the-shelf infrared sensors combined with high-definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry-Riddle Aeronautical University, working with the University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder, have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power-efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on-camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop
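
    At its simplest, on-camera fusion of registered frames is a per-pixel weighted blend. The SDMSI's actual fusion and depth-mapping workloads are more involved, so the sketch below is only a toy illustration with an invented blend weight.

```python
# Toy fusion sketch: blend a registered visible frame with a long-wave
# infrared frame, pixel by pixel. Frames are same-size grayscale grids;
# alpha is an invented weighting, not an SDMSI parameter.
def fuse(visible, lwir, alpha=0.6):
    """Per-pixel weighted blend of two registered grayscale frames."""
    return [[round(alpha * v + (1 - alpha) * t) for v, t in zip(rv, rt)]
            for rv, rt in zip(visible, lwir)]
```

Even this trivial kernel must run per pixel at full frame rate, which is why the paper benchmarks multi-core, FPGA and GPU options against the 10 to 20 Watt budget.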

  19. How to Frame Universal Workspace Lighting.

    PubMed

    Mathiasen, Nanet; Frandsen, Anne Kathrine

    2016-01-01

    In 2012 the headquarters of the umbrella organisation 'Disabled people's organisation Denmark' opened: an office building that offers workspace for the administrations of all the member organisations. The ambition for the building was to be the most accessible office building in the world; regardless of disability, everybody should be able to move around in the house and work in any of the offices. One of many ambitions was to design a functional and effective lighting scheme using as much daylight as possible and integrating the artificial lighting design with the daylight design. The lighting was intended to support all work stations, in both one-man offices and open-plan offices, with a functional and comfortable visual environment fit for all users, regardless of disability. Based on a post-occupancy evaluation conducted 2 years after the organisations moved in, the present paper evaluates the lighting design in the offices. It reveals that not all the people working in the offices have the same needs and preferences regarding lighting conditions; these differ even among users with the same disability. Accordingly, the findings lead to a discussion of how to understand the concept of Universal Design. Based on the lighting theory of Peter Boyce, the paper discusses the idea of encompassing everyone in the same solution. PMID:27534330

  20. WorkstationJ: workstation emulation software for medical image perception and technology evaluation research

    NASA Astrophysics Data System (ADS)

    Schartz, Kevin M.; Berbaum, Kevin S.; Caldwell, Robert T.; Madsen, Mark T.

    2007-03-01

    We developed image presentation software that mimics the functionality available in the clinic, but also records time-stamped, observer-display interactions and is readily deployable on diverse workstations making it possible to collect comparable observer data at multiple sites. Commercial image presentation software for clinical use has limited application for research on image perception, ergonomics, computer-aids and informatics because it does not collect observer responses, or other information on observer-display interactions, in real time. It is also very difficult to collect observer data from multiple institutions unless the same commercial software is available at different sites. Our software not only records observer reports of abnormalities and their locations, but also inspection time until report, inspection time for each computed radiograph and for each slice of tomographic studies, window/level, and magnification settings used by the observer. The software is a modified version of the open source ImageJ software available from the National Institutes of Health. Our software involves changes to the base code and extensive new plugin code. Our free software is currently capable of displaying computed tomography and computed radiography images. The software is packaged as Java class files and can be used on Windows, Linux, or Mac systems. By deploying our software together with experiment-specific script files that administer experimental procedures and image file handling, multi-institutional studies can be conducted that increase reader and/or case sample sizes or add experimental conditions.

  1. Design and validation of Segment - freely available software for cardiovascular image analysis

    PubMed Central

    2010-01-01

    Background Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Results Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http

  2. Software for X-Ray Images Calculation of Hydrogen Compression Device in Megabar Pressure Range

    NASA Astrophysics Data System (ADS)

    Egorov, Nikolay; Bykov, Alexander; Pavlov, Valery

    2007-06-01

    Software for x-ray image simulation is described. The software is part of an x-ray method used to investigate the equation of state of hydrogen in the megabar pressure range. A graphical interface clearly and simply allows users to input the data for the x-ray image calculation (properties of the studied device, parameters of the x-ray radiation source, parameters of the x-ray radiation recorder, and the experiment geometry), to display the calculation results, and to transmit them efficiently to other software for processing. The calculation time is minimized, which makes it possible to perform calculations in a dialogue regime. The software is written in the ``MATLAB'' system.

  3. NEIGHBOUR-IN: Image processing software for spatial analysis of animal grouping

    PubMed Central

    Caubet, Yves; Richard, Freddie-Jeanne

    2015-01-01

    Abstract Animal grouping is a very complex process that occurs in many species, involving many individuals under the influence of different mechanisms. To investigate this process, we have created an image processing software, called NEIGHBOUR-IN, designed to analyse individuals’ coordinates belonging to up to three different groups. The software also includes statistical analysis and indexes to discriminate aggregates based on spatial localisation of individuals and their neighbours. After the description of the software, the indexes computed by the software are illustrated using both artificial patterns and case studies using the spatial distribution of woodlice. The added strengths of this software and methods are also discussed. PMID:26261448

  4. Random issues in workspace analysis for a mobile robot

    NASA Astrophysics Data System (ADS)

    Stǎnescu, Tony; Dolga, Valer; Mondoc, Alina

    2014-12-01

    The evolution of mobile robots is currently characterized by multiple applications in dynamic workspaces with little initial knowledge. This paper presents aspects of approaching the random processes involved in the evolution of a mobile robot in an unstructured environment. The experimental results are used to model an infrared sensor (integrated in the mobile robot structure) and to assess the probability of locating obstacles in the environment.

  5. Future trends in image processing software and hardware

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1979-01-01

    JPL image processing applications are examined, considering future trends in fields such as planetary exploration, electronics, astronomy, computers, and Landsat. Attention is given to adaptive search and interrogation of large image data bases, the display of multispectral imagery recorded in many spectral channels, merging data acquired by a variety of sensors, and developing custom large scale integrated chips for high speed intelligent image processing user stations and future pipeline production processors.

  6. Software control and characterization aspects for image derotator of the AO188 system at Subaru

    NASA Astrophysics Data System (ADS)

    Golota, Taras; Oya, Shin; Egner, Sebastian; Watanabe, Makoto; Eldred, Michael; Minowa, Yosuke; Takami, Hideki; Cook, David; Hayano, Yutaka; Saito, Yoshihiko; Hattori, Masayuki; Garrel, Vincent; Ito, Meguru

    2010-07-01

    The image derotator is an integral part of the AO188 system at the Subaru Telescope. In this article, the software control, characterization and integration issues of the image derotator for the AO188 system are presented. Physical limitations of the current hardware are reviewed. Image derotator synchronization, tracking accuracy, and problem-solving strategies to achieve the requirements are presented. Its use in different observation modes for various instruments and its interaction with the telescope control system, which provides status and control functionality, are described, along with the associated integration issues. Technical solutions, together with results on the image derotator performance, are presented. Further improvements and the control software for on-sky observations are discussed based on the results obtained during engineering observations. An overview of the requirements, the final control method, and the structure of the control software is given. Control limitations and the accepted solutions, which might be useful for the development of other instruments' image derotators, are presented.

  7. GILDAS: Grenoble Image and Line Data Analysis Software

    NASA Astrophysics Data System (ADS)

    Gildas Team

    2013-05-01

    GILDAS is a collection of software oriented toward (sub-)millimeter radioastronomical applications (either single-dish or interferometer). It has been adopted as the IRAM standard data reduction package and is jointly maintained by IRAM & CNRS. GILDAS contains many facilities, most of which are oriented towards spectral line mapping and many kinds of 3-dimensional data. The code, written in Fortran-90 with a few parts in C/C++ (mainly keyboard interaction, plotting, widgets), is easily extensible.

  8. Eclipse: ESO C Library for an Image Processing Software Environment

    NASA Astrophysics Data System (ADS)

    Devillard, Nicolas

    2011-12-01

    Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry, image-space resampling, image combination, and cube stacking. It also contains support for mathematical tools like random number generation, FFT, curve fitting, matrices, fast median computation, and point-pattern matching. The main feature of this library is its ability to handle large amounts of input data (up to 2GB in the current version) regardless of the amount of memory and swap available on the local machine. Another feature is the very high speed allowed by optimized C, making it an ideal base tool for programming efficient number-crunching applications, e.g., on parallel (Beowulf) systems.

  9. Polarization information processing and software system design for simultaneously imaging polarimetry

    NASA Astrophysics Data System (ADS)

    Wang, Yahui; Liu, Jing; Jin, Weiqi; Wen, Renjie

    2015-08-01

    Simultaneous imaging polarimetry can realize real-time polarization imaging of a dynamic scene, which has wide application prospects. This paper first briefly illustrates the design of a double separate Wollaston prism simultaneous imaging polarimeter, and then emphasis is put on the polarization information processing methods and the software system design for the designed polarimeter. The polarization information processing methods consist of adaptive image segmentation, high-accuracy image registration and instrument matrix calibration. Morphological image processing was used for image segmentation by taking the dilation of an image; the accuracy of image registration can reach 0.1 pixel based on spatial- and frequency-domain cross-correlation; instrument matrix calibration adopted a four-point calibration method. The software system was implemented under the Windows environment in the C++ programming language, and realizes synchronous polarization image acquisition and preservation, image processing, and polarization information extraction and display. Polarization data obtained with the designed polarimeter show that the polarization information processing methods and the software system effectively perform real-time measurement of the four Stokes parameters of a scene. The polarization information processing methods effectively improved the polarization detection accuracy.
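
As a minimal illustration of the Stokes processing such a four-channel polarimeter performs, the sketch below estimates the linear Stokes parameters from four co-registered analyzer images. The 0°/45°/90°/135° channel layout and the function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def stokes_from_four_channels(i0, i45, i90, i135):
    """Estimate linear Stokes parameters from four co-registered
    intensity images taken behind analyzers at 0, 45, 90 and 135 degrees
    (an assumed channel layout, for illustration only)."""
    i0, i45 = np.asarray(i0, float), np.asarray(i45, float)
    i90, i135 = np.asarray(i90, float), np.asarray(i135, float)
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical
    s2 = i45 - i135                      # +45 deg vs -45 deg
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)       # angle of polarization
    return s0, s1, s2, dolp, aop
```

A fully horizontally polarized scene (all light in the 0° channel, half in each diagonal channel) yields a degree of linear polarization of 1 and an angle of 0.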

  10. Software for Analyzing Sequences of Flow-Related Images

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2004-01-01

    Spotlight is a computer program for analysis of sequences of images generated in combustion and fluid physics experiments. Spotlight can perform analysis of a single image in an interactive mode or a sequence of images in an automated fashion. The primary type of analysis is tracking of positions of objects over sequences of frames. Features and objects that are typically tracked include flame fronts, particles, droplets, and fluid interfaces. Spotlight automates the analysis of object parameters, such as centroid position, velocity, acceleration, size, shape, intensity, and color. Images can be processed to enhance them before statistical and measurement operations are performed. An unlimited number of objects can be analyzed simultaneously. Spotlight saves results of analyses in a text file that can be exported to other programs for graphing or further analysis. Spotlight is a graphical-user-interface-based program that at present can be executed on Microsoft Windows and Linux operating systems. A version that runs on Macintosh computers is being considered.
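
A minimal sketch of one object parameter Spotlight reports, the intensity-weighted centroid, together with a velocity estimate from consecutive frames. The thresholding scheme and function names are illustrative assumptions, not Spotlight's actual interface.

```python
import numpy as np

def centroid(frame, threshold=0.0):
    """Intensity-weighted centroid (row, col) of pixels above threshold."""
    ys, xs = np.nonzero(frame > threshold)
    w = frame[ys, xs].astype(float)
    return float((ys * w).sum() / w.sum()), float((xs * w).sum() / w.sum())

def track_velocity(frames, dt, threshold=0.0):
    """Centroid velocity between consecutive frames, in pixels per unit time."""
    cents = np.array([centroid(f, threshold) for f in frames])
    return np.diff(cents, axis=0) / dt
```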

  11. BIRP: Software for interactive search and retrieval of image engineering data

    NASA Technical Reports Server (NTRS)

    Arvidson, R. E.; Bolef, L. K.; Guinness, E. A.; Norberg, P.

    1980-01-01

    Better Image Retrieval Programs (BIRP), a set of programs to interactively sort through and to display a database, such as engineering data for images acquired by spacecraft is described. An overview of the philosophy of BIRP design, the structure of BIRP data files, and examples that illustrate the capabilities of the software are provided.

  12. A Review of Diffusion Tensor Magnetic Resonance Imaging Computational Methods and Software Tools

    PubMed Central

    Hasan, Khader M.; Walimuni, Indika S.; Abid, Humaira; Hahn, Klaus R.

    2010-01-01

    In this work we provide an up-to-date short review of computational magnetic resonance imaging (MRI) and software tools that are widely used to process and analyze diffusion-weighted MRI data. A review of different methods used to acquire, model and analyze diffusion-weighted imaging data (DWI) is first provided with focus on diffusion tensor imaging (DTI). The major preprocessing, processing and post-processing procedures applied to DTI data are discussed. A list of freely available software packages to analyze diffusion MRI data is also provided. PMID:21087766
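
To make the DTI modeling step concrete, the sketch below fits a diffusion tensor to diffusion-weighted signals with the standard log-linear least-squares method and computes fractional anisotropy. This is a textbook formulation offered as background, not code from any of the reviewed packages.

```python
import numpy as np

def fit_tensor(signals, bvals, bvecs):
    """Log-linear least-squares DTI fit: ln S = ln S0 - b g^T D g.
    Returns the 3x3 diffusion tensor and ln S0."""
    g = np.asarray(bvecs, float)
    b = np.asarray(bvals, float)
    design = np.column_stack([
        -b * g[:, 0]**2, -b * g[:, 1]**2, -b * g[:, 2]**2,
        -2 * b * g[:, 0] * g[:, 1],
        -2 * b * g[:, 0] * g[:, 2],
        -2 * b * g[:, 1] * g[:, 2],
        np.ones_like(b),
    ])
    coef, *_ = np.linalg.lstsq(design, np.log(signals), rcond=None)
    dxx, dyy, dzz, dxy, dxz, dyz, ln_s0 = coef
    tensor = np.array([[dxx, dxy, dxz], [dxy, dyy, dyz], [dxz, dyz, dzz]])
    return tensor, ln_s0

def fractional_anisotropy(tensor):
    """FA from the tensor eigenvalues; 0 for isotropic diffusion."""
    ev = np.linalg.eigvalsh(tensor)
    md = ev.mean()
    num = np.sqrt(((ev - md)**2).sum())
    den = np.sqrt((ev**2).sum())
    return np.sqrt(1.5) * num / max(den, 1e-20)
```

With at least seven measurements (six independent gradient directions plus a b=0 image), the seven unknowns are determined.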

  13. Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing

    NASA Technical Reports Server (NTRS)

    Logan, Thomas L.; Bryant, Nevin A.

    1987-01-01

    The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Imaged Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of the VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between the VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.

  14. The design of real time infrared image generation software based on Creator and Vega

    NASA Astrophysics Data System (ADS)

    Wang, Rui-feng; Wu, Wei-dong; Huo, Jun-xiu

    2013-09-01

    To meet the requirement for highly realistic, real-time dynamic infrared images in infrared image simulation, a method for designing a real-time infrared image simulation application on the VC++ platform is proposed, based on the visual simulation software Creator and Vega. The functions of Creator are introduced briefly, and the main features of the Vega development environment are analyzed. Methods for infrared modeling of targets and backgrounds are offered; the design flow chart of the development process of the real-time IR image generation software and the functions of the TMM Tool, the MAT Tool and the sensor module are explained; and the real-time behaviour of the software is addressed in the design.

  15. JUPOS : Amateur analysis of Jupiter images with specialized measurement software

    NASA Astrophysics Data System (ADS)

    Jacquesson, M.; Mettig, H.-J.

    2008-09-01

    Introduction
    - Beginning of JUPOS in 1989, by H.-J. Mettig and Grischa Hahn in Dresden
    - 350,000 positional measurements on electronic images from almost 180 observers have been gathered since 1998
    - What we mean by "electronic images":
      - digitized hi-res chemical photographs
      - traditional CCD technique
      - webcam images
    - In 2002 Grischa started the implementation of WinJUPOS, now finished
    - Cooperation with the Jupiter Section of the BAA for many years
    - At present, JUPOS has four active measurers:
      - Gianluigi Adamoli and Marco Vedovato (Italy)
      - H.-J. Mettig (Germany)
      - Michel Jacquesson (France)
    How we work together
    - Each member of the team measures images of several observers (useful to detect and avoid systematic errors)
    - When necessary, exchange of problems and ideas by e-mail
    - During the period of visibility of Jupiter, team leader H.-J. Mettig:
      - collects all recent measurements about once a month
      - produces drift charts
    - Once every year or two, the JUPOS team holds an informal meeting
    Criteria for image selection
    1) Validity of time and date. Origins of time errors:
      1. local zonal times are wrongly (or not at all) converted to UT. This is still easy to find out: either "only" the full hour is erroneous, or/and the date (a problem for observers at UTC+10)
      2. the computer clock has not been synchronised over a longer period
      3. exposure of the final image exceeds the recommended two minutes, or observers communicate the begin or end of the total period of image recording instead of its middle
      How to test the validity of the time?
      - position of a Galilean satellite visible on or near Jupiter
      - position of a satellite shadow
      - measuring the longitude of permanent or long-lived objects whose positions are known from former measurements
    2) Avoid measuring the same object on different images taken at about the same time. One image every 1½ hours is sufficient
    3) Duration of exposure: not more than 2-3 minutes because of the rapid rotation of Jupiter
    4) Spectral range: visual

  16. IDP: Image and data processing (software) in C++

    SciTech Connect

    Lehman, S.

    1994-11-15

    IDP++ (Image and Data Processing in C++) is a compiled, multidimensional, multi-data-type signal processing environment written in C++. It is being developed within the Radar Ocean Imaging group and is intended as a partial replacement for View. IDP++ takes advantage of the latest object-oriented compiler technology to provide "information hiding." Users need only know C, not C++. Signals are treated like any other variable, with a defined set of operators and functions, in an intuitive manner. IDP++ is being designed for a real-time environment where interpreted signal processing packages are less efficient.

  17. Image compression software for the SOHO LASCO and EIT experiments

    NASA Technical Reports Server (NTRS)

    Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis

    1994-01-01

    This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronagraph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for the use of SOHO investigators who need to understand the results of SOHO compression in order to make the best use of the transmission bits allocated to them.

  18. Software optimization for electrical conductivity imaging in polycrystalline diamond cutters

    SciTech Connect

    Bogdanov, G.; Ludwig, R.; Wiggins, J.; Bertagnolli, K.

    2014-02-18

    We previously reported on an electrical conductivity imaging instrument developed for measurements on polycrystalline diamond cutters. These cylindrical cutters for oil and gas drilling feature a thick polycrystalline diamond layer on a tungsten carbide substrate. The instrument uses electrical impedance tomography to profile the conductivity in the diamond table. Conductivity images must be acquired quickly, on the order of 5 sec per cutter, to be useful in the manufacturing process. This paper reports on successful efforts to optimize the conductivity reconstruction routine, porting major portions of it to NVIDIA GPUs, including a custom CUDA kernel for Jacobian computation.

  19. Software for browsing sectioned images of a dog body and generating a 3D model.

    PubMed

    Park, Jin Seo; Jung, Yong Wook

    2016-01-01

    The goals of this study were (1) to provide accessible and instructive browsing software for sectioned images and a portable document format (PDF) file that includes three-dimensional (3D) models of an entire dog body and (2) to develop techniques for segmentation and 3D modeling that would enable an investigator to perform these tasks without the aid of a computer engineer. To achieve these goals, relatively important or large structures in the sectioned images were outlined to generate segmented images. The sectioned and segmented images were then packaged into browsing software. In this software, structures in the sectioned images are shown in detail and in real color. After 3D models were made from the segmented images, the 3D models were exported into a PDF file. In this format, the 3D models could be manipulated freely. The browsing software and PDF file are available for study by students, for lectures by teachers, and for the training of clinicians. These files will be helpful for the anatomical study and clinical training of veterinary students and clinicians. Furthermore, these techniques will be useful for researchers who study two-dimensional images and 3D models.

  20. Image analysis software for following progression of peripheral neuropathy

    NASA Astrophysics Data System (ADS)

    Epplin-Zapf, Thomas; Miller, Clayton; Larkin, Sean; Hermesmeyer, Eduardo; Macy, Jenny; Pellegrini, Marco; Luccarelli, Saverio; Staurenghi, Giovanni; Holmes, Timothy

    2009-02-01

    A relationship has been reported by several research groups [1 - 4] between the density and shapes of nerve fibers in the cornea and the existence and severity of peripheral neuropathy. Peripheral neuropathy is a complication of several prevalent diseases or conditions, including diabetes, HIV, prolonged alcohol overconsumption and aging. A common clinical technique for confirming the condition is intramuscular electromyography (EMG), which is invasive, so a noninvasive technique like the one proposed here carries important potential advantages for the physician and patient. A software program that automatically detects the nerve fibers, counts them and measures their shapes is being developed and tested. Tests were carried out with a database of subjects with levels of severity of diabetic neuropathy as determined by EMG testing. Results from this testing, which include a linear regression analysis, are shown.

  1. Development of Software to Model AXAF-I Image Quality

    NASA Technical Reports Server (NTRS)

    Geary, Joseph; Hawkins, Lamar; Ahmad, Anees; Gong, Qian

    1997-01-01

    This report describes work conducted on Delivery Order 181 between October 1996 through June 1997. During this period software was written to: compute axial PSD's from RDOS AXAF-I mirror surface maps; plot axial surface errors and compute PSD's from HDOS "Big 8" axial scans; plot PSD's from FITS format PSD files; plot band-limited RMS vs axial and azimuthal position for multiple PSD files; combine and organize PSD's from multiple mirror surface measurements formatted as input to GRAZTRACE; modify GRAZTRACE to read FITS formatted PSD files; evaluate AXAF-I test results; improve and expand the capabilities of the GT x-ray mirror analysis package. During this period work began on a more user-friendly manual for the GT program, and improvements were made to the on-line help manual.
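
As an illustration of the kind of processing described above (axial PSDs and band-limited RMS from mirror surface scans), here is a generic FFT periodogram sketch. The normalization and the absence of windowing are simplifying assumptions; the report's actual PSD conventions are not reproduced here.

```python
import numpy as np

def axial_psd(profile, dx):
    """Periodogram PSD of a 1-D axial surface scan with sample spacing dx.
    A generic sketch; windowing and one-sided scaling conventions vary."""
    z = np.asarray(profile, float)
    z = z - z.mean()                      # remove the piston term first
    n = len(z)
    spec = np.fft.rfft(z)
    psd = (np.abs(spec) ** 2) * dx / n    # power per unit spatial frequency
    freqs = np.fft.rfftfreq(n, d=dx)
    return freqs, psd

def band_limited_rms(freqs, psd, f_lo, f_hi):
    """RMS surface error within the spatial-frequency band [f_lo, f_hi],
    obtained by integrating the PSD over that band."""
    sel = (freqs >= f_lo) & (freqs <= f_hi)
    df = freqs[1] - freqs[0]
    return float(np.sqrt(np.sum(psd[sel]) * df))
```

A pure sinusoidal surface ripple shows up as a single dominant PSD bin at its spatial frequency, which is a quick sanity check for any such routine.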

  2. Parallel-Processing Software for Correlating Stereo Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric

    2007-01-01

    A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.
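
The subimage decomposition described above can be sketched as follows. The tiling scheme, the placeholder per-tile function and the use of Python's ProcessPoolExecutor are illustrative assumptions, not the program's actual implementation.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def split_into_tiles(image, tile):
    """Segment the scene into subimages, one work unit per CPU."""
    h, w = image.shape
    return [image[r:r + tile, c:c + tile]
            for r in range(0, h, tile)
            for c in range(0, w, tile)]

def correlate_tile(tile):
    # Stand-in for the per-tile stereo correlation; a real implementation
    # would match this subimage against the other camera's image.
    return float(tile.mean())

def process_scene(image, tile, workers=4):
    """Fan the tiles out across CPUs and collect the per-tile results."""
    tiles = split_into_tiles(image, tile)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(correlate_tile, tiles))
```

Because each tile is independent, the per-tile work scales out with essentially no coordination beyond the final gather.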

  3. Validated novel software to measure the conspicuity index of lesions in DICOM images

    NASA Astrophysics Data System (ADS)

    Szczepura, K. R.; Manning, D. J.

    2016-03-01

    A novel software programme and an associated Excel spreadsheet have been developed to provide an objective measure of the expected visual detectability of focal abnormalities within DICOM images. ROIs are drawn around the abnormality; the software then fits the lesion using a least-squares method to recognize the edges of the lesion based on the full width at half maximum. 180 line profiles are then plotted around the lesion, giving 360 edge profiles.
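
A rough sketch of the profile-sampling idea: 180 lines through the lesion centre, each contributing two edge profiles, plus a crude width-at-half-maximum measure. The sampling details, and the simple threshold crossing standing in for the paper's least-squares edge fit, are assumptions for illustration.

```python
import numpy as np

def radial_profiles(img, center, n_angles=180, half_len=20):
    """Sample n_angles full line profiles through the lesion centre.
    Each full line crosses the lesion boundary twice, so 180 lines
    yield 360 edge profiles, as in the abstract."""
    cy, cx = center
    t = np.arange(-half_len, half_len + 1)
    profiles = []
    for a in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        ys = np.clip(np.round(cy + t * np.sin(a)).astype(int), 0, img.shape[0] - 1)
        xs = np.clip(np.round(cx + t * np.cos(a)).astype(int), 0, img.shape[1] - 1)
        profiles.append(img[ys, xs])
    return np.array(profiles)

def fwhm_width(profile):
    """Width at half maximum, in samples (background assumed near zero);
    a crude stand-in for a proper least-squares edge fit."""
    p = np.asarray(profile, float)
    above = np.nonzero(p >= p.max() / 2.0)[0]
    return int(above[-1] - above[0])
```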

  4. New StatPhantom software for assessment of digital image quality

    NASA Astrophysics Data System (ADS)

    Gurvich, Victor A.; Davydenko, George I.

    2002-04-01

    The rapid development of digital imaging and computer networks, using Picture Archiving and Communication Systems (PACS) and DICOM-compatible devices, increases the requirements on the quality control process in medical imaging departments, but also provides new opportunities for the evaluation of image quality. The new StatPhantom software simplifies statistical techniques based on modern detection theory and ROC analysis, improving the accuracy and reliability of known methods and allowing statistical analysis to be implemented with phantoms of any design. In contrast to manual statistical methods, all calculations, the analysis of results, and changes of the test element positions in the image of the phantom are implemented by computer. This paper describes the user interface and functionality of the StatPhantom software, its opportunities and advantages in the assessment of various imaging modalities, and the diagnostic preference of an observer. The results obtained by conventional ROC analysis and by manual and computerized statistical methods are analyzed. Different phantom designs are considered.
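
For readers unfamiliar with the ROC analysis the paper builds on, the area under the ROC curve can be estimated nonparametrically with the Mann-Whitney statistic: the probability that a randomly chosen signal-present score exceeds a signal-absent one, with ties counting half. This textbook estimator is offered only as background; it is not StatPhantom's code.

```python
def roc_auc(noise_scores, signal_scores):
    """Mann-Whitney estimate of the ROC area from rating-scale scores."""
    wins = 0.0
    for s in signal_scores:
        for n in noise_scores:
            if s > n:
                wins += 1.0
            elif s == n:
                wins += 0.5   # ties count half
    return wins / (len(signal_scores) * len(noise_scores))
```

Perfectly separated score distributions give an area of 1.0; identical distributions give 0.5, i.e. chance-level detection.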

  5. DEIReconstructor: a software for diffraction enhanced imaging processing and tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Yuan, Qing-Xi; Huang, Wan-Xia; Zhu, Pei-Ping; Wu, Zi-Yu

    2014-10-01

    Diffraction enhanced imaging (DEI) has been widely applied in many fields, especially when imaging low-Z samples or when the difference in the attenuation coefficient between different regions in the sample is too small to be detected. Recent developments of this technique have presented a need for a new software package for data analysis. Here, the Diffraction Enhanced Image Reconstructor (DEIReconstructor), developed in Matlab, is presented. DEIReconstructor has a user-friendly graphical user interface and runs under any of the 32-bit or 64-bit Microsoft Windows operating systems, including XP and Win7. Its integrated features support image preprocessing, the extraction of the absorption, refraction and scattering information in diffraction enhanced imaging, and parallel-beam tomography reconstruction for DEI-CT. Furthermore, many other useful functions are implemented to simplify data analysis and the presentation of results. The compiled software package is freely available.
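
For background, the classic two-image DEI separation (in the style of Chapman et al.; assumed here rather than taken from the DEIReconstructor paper) recovers an apparent-absorption image and a refraction-angle image by solving a 2x2 linear system per pixel, using the rocking-curve value and slope at the two working points on either side of the peak:

```python
def dei_two_image(i_lo, i_hi, r_lo, r_hi, dr_lo, dr_hi):
    """Two-image DEI separation. Solves, pixel-wise,
        i_lo = i_a * (r_lo + dr_lo * dtheta)
        i_hi = i_a * (r_hi + dr_hi * dtheta)
    for the apparent absorption i_a and refraction angle dtheta, where
    r_* and dr_* are the rocking-curve values and slopes at the two
    analyzer positions. Works on scalars or numpy arrays alike."""
    i_a = (i_lo * dr_hi - i_hi * dr_lo) / (r_lo * dr_hi - r_hi * dr_lo)
    dtheta = (i_hi * r_lo - i_lo * r_hi) / (i_lo * dr_hi - i_hi * dr_lo)
    return i_a, dtheta
```

A round-trip check (synthesize the two intensities from known absorption and refraction, then invert) confirms the algebra.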

  6. Image processing software for providing radiometric inputs to land surface climatology models

    NASA Technical Reports Server (NTRS)

    Newcomer, Jeffrey A.; Goetz, Scott J.; Strebel, Donald E.; Hall, Forrest G.

    1989-01-01

    During the First International Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE), 80 gigabytes of image data were generated from a variety of satellite and airborne sensors in a multidisciplinary attempt to study energy and mass exchange between the land surface and the atmosphere. To make these data readily available to researchers with a range of image data handling experience and capabilities, unique image-processing software was designed to perform a variety of nonstandard image-processing manipulations and to derive a set of standard-format image products. The nonconventional features of the software include: (1) adding new layers of geographic coordinates, and solar and viewing conditions to existing data; (2) providing image polygon extraction and calibration of data to at-sensor radiances; and, (3) generating standard-format derived image products that can be easily incorporated into radiometric or climatology models. The derived image products consist of easily handled ASCII descriptor files, byte image data files, and additional per-pixel integer data files (e.g., geographic coordinates, and sun and viewing conditions). Details of the solutions to the image-processing problems, the conventions adopted for handling a variety of satellite and aircraft image data, and the applicability of the output products to quantitative modeling are presented. They should be of general interest to future experiment and data-handling design considerations.

  7. Vertical bone measurements from cone beam computed tomography images using different software packages.

    PubMed

    Vasconcelos, Taruska Ventorini; Neves, Frederico Sampaio; Moraes, Lívia Almeida Bueno; Freitas, Deborah Queiroz

    2015-01-01

    This article aimed at comparing the accuracy of the linear measurement tools of different commercial software packages. Eight fully edentulous dry mandibles were selected for this study. Incisor, canine, premolar, first molar and second molar regions were selected. Cone beam computed tomography (CBCT) images were obtained with the i-CAT Next Generation. Linear bone measurements were performed by one observer on the cross-sectional images using three different software packages: XoranCat®, OnDemand3D® and KDIS3D®, all able to assess DICOM images. In addition, 25% of the sample was reevaluated for the purpose of reproducibility. The mandibles were sectioned to obtain the gold standard for each region. Intraclass correlation coefficients (ICC) were calculated to examine the agreement between the two periods of evaluation; one-way analysis of variance with the post-hoc Dunnett test was used to compare each of the software-derived measurements with the gold standard. The ICC values were excellent for all software packages. The smallest differences between the software-derived measurements and the gold standard were obtained with OnDemand3D and KDIS3D (-0.11 and -0.14 mm, respectively), and the greatest with XoranCAT (+0.25 mm). However, there was no statistically significant difference between the measurements obtained with the different software packages and the gold standard (p > 0.05). In conclusion, linear bone measurements were not influenced by the software package used to reconstruct the image from CBCT DICOM data.

  8. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

    Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier Transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package: UU (data processing) and Fig (data rendering) that incorporates many useful functions to process optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets; at the time of writing, they have been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, fitting of data, 3D image processing, vector image processing, precision device control (rotary stages, PZT stages, etc.), point-cloud-to-surface reconstruction, volume rendering, and batch processing. The software package is currently used in a number of universities for teaching and research.

  9. Plume Ascent Tracker: Interactive Matlab software for analysis of ascending plumes in image data

    NASA Astrophysics Data System (ADS)

    Valade, S. A.; Harris, A. J. L.; Cerminara, M.

    2014-05-01

    This paper presents Matlab-based software designed to track and analyze an ascending plume as it rises above its source, in image data. It reads data recorded in various formats (video files, image files, or web-camera image streams), and at various wavelengths (infrared, visible, or ultra-violet). Using a set of filters which can be set interactively, the plume is first isolated from its background. A user-friendly interface then allows tracking of plume ascent and various parameters that characterize plume evolution during emission and ascent. These include records of plume height, velocity, acceleration, shape, volume, ash (fine-particle) loading, spreading rate, entrainment coefficient and inclination angle, as well as axial and radial profiles for radius and temperature (if data are radiometric). Image transformations (dilatation, rotation, resampling) can be performed to create new images with a vent-centered metric coordinate system. Applications may interest both plume observers (monitoring agencies) and modelers. For the first group, the software is capable of providing quantitative assessments of plume characteristics from image data, for post-event analysis or in near real-time analysis. For the second group, extracted data can serve as benchmarks for plume ascent models, and as inputs for cloud dispersal models. We here describe the software's tracking methodology and main graphical interfaces, using thermal infrared image data of an ascending volcanic ash plume at Santiaguito volcano.
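The tracker's first two steps, isolating the plume with an intensity threshold and recording its height per frame, can be sketched as follows; the threshold, pixel scale, and synthetic frame are illustrative assumptions, not the software's actual defaults:

```python
import numpy as np

def plume_height(frame, threshold, metres_per_pixel):
    """Estimate plume height from one image frame: threshold to isolate
    the plume from the background, then measure from the bottom row
    (assumed vent level) up to the highest plume pixel."""
    mask = frame > threshold                  # plume/background segmentation
    rows = np.where(mask.any(axis=1))[0]      # image rows containing plume
    if rows.size == 0:
        return 0.0
    top_row = rows.min()                      # row 0 is the top of the frame
    return (frame.shape[0] - 1 - top_row) * metres_per_pixel

frame = np.zeros((5, 4))
frame[1:5, 2] = 10.0                          # synthetic vertical plume column
height = plume_height(frame, threshold=5.0, metres_per_pixel=2.0)  # 6.0
```

Velocity and acceleration records then follow by differencing successive per-frame heights against the frame timestamps.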

  10. Development of HydroImage, A User Friendly Hydrogeophysical Characterization Software

    SciTech Connect

    Mok, Chin Man; Hubbard, Susan; Chen, Jinsong; Suribhatla, Raghu; Kaback, Dawn Samara

    2014-01-29

    HydroImage, a user-friendly software package that utilizes high-resolution geophysical data for estimating hydrogeological parameters in subsurface strata, was developed under this grant. HydroImage runs on a personal computer platform to promote broad use by hydrogeologists to further understanding of subsurface processes that govern contaminant fate, transport, and remediation. The unique software provides estimates of hydrogeological properties over continuous volumes of the subsurface, whereas previous approaches only allowed estimation at point locations; thus, this unique tool can be used to significantly enhance site conceptual models and improve the design and operation of remediation systems. The HydroImage technical approach uses statistical models to integrate geophysical data with borehole geological data and hydrological measurements to produce hydrogeological parameter estimates as 2-D or 3-D images.

  11. MMX-I: data-processing software for multimodal X-ray imaging and tomography

    PubMed Central

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-01-01

    A new multi-platform freeware package has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline, even on a standard PC. To the authors’ knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments. PMID:27140159

  12. The image-guided surgery toolkit IGSTK: an open source C++ software toolkit.

    PubMed

    Enquobahrie, Andinet; Cheng, Patrick; Gary, Kevin; Ibanez, Luis; Gobbi, David; Lindseth, Frank; Yaniv, Ziv; Aylward, Stephen; Jomier, Julien; Cleary, Kevin

    2007-11-01

    This paper presents an overview of the image-guided surgery toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. It is intended for fast prototyping and development of image-guided surgery applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team adopted lightweight software processes that emphasize safety and robustness while, at the same time, supporting geographically separated developers. A software process philosophically similar to agile methods was adopted, emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining users' and developers' mailing lists, providing documentation (an application programming interface reference document and a book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences. PMID:17703338
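The state-machine guarding described above, where a component only acts through declared transitions and so can never reach an invalid state, can be sketched generically; the states, events, and class below are invented for illustration and are not the IGSTK C++ API:

```python
class SafeComponent:
    """Toy state-machine-governed component: requests that are not in the
    transition table are rejected instead of corrupting the state."""

    TRANSITIONS = {
        ("Idle", "attempt_connect"): "Connected",
        ("Connected", "start_tracking"): "Tracking",
        ("Tracking", "stop_tracking"): "Connected",
        ("Connected", "disconnect"): "Idle",
    }

    def __init__(self):
        self.state = "Idle"

    def handle(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            return False          # invalid request: ignore (or log) it
        self.state = self.TRANSITIONS[key]
        return True

c = SafeComponent()
c.handle("attempt_connect")
c.handle("start_tracking")        # c.state is now "Tracking"
c.handle("attempt_connect")       # rejected: invalid in this state
```

Because every input is checked against an explicit table, the set of reachable states is exactly the set the designer enumerated, which is the safety property the toolkit relies on.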

  13. Digital processing of side-scan sonar data with the Woods Hole image processing system software

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing and digitally mosaicking high- and low-resolution side-scan sonar data. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for processing side-scan sonar data. This report describes the steps required to process the collected data and to produce an image that has equal along- and across-track resolution.

  14. Digital image measurement of specimen deformation based on CCD cameras and Image J software: an application to human pelvic biomechanics

    NASA Astrophysics Data System (ADS)

    Jia, Yongwei; Cheng, Liming; Yu, Guangrong; Lou, Yongjian; Yu, Yan; Chen, Bo; Ding, Zuquan

    2008-03-01

    A method of digital image measurement of specimen deformation based on CCD cameras and Image J software was developed. This method was used to measure the biomechanical behavior of the human pelvis. Six cadaveric specimens from the third lumbar vertebra to the proximal 1/3 of the femur were tested. The specimens, without any structural abnormalities, were dissected of all soft tissue, sparing the hip joint capsules and the ligaments of the pelvic ring and floor. Markers with a black dot on a white background were affixed to the key regions of the pelvis. Axial loading from the proximal lumbar spine was applied by MTS in a gradient from 0 N to 500 N, which simulated the double-feet standing stance. The anterior and lateral images of the specimen were obtained through two CCD cameras. The digital 8-bit images were processed with Image J, a digital image processing program freely available from the National Institutes of Health. The procedure includes recognition of the digital marker, image inversion, sub-pixel reconstruction, image segmentation, and a center-of-mass algorithm based on the weighted average of pixel gray values. Vertical displacements of S1 (the first sacral vertebra) in the front view and micro-angular rotation of the sacroiliac joint in the lateral view were calculated from the marker movement. The results of the digital image measurement were as follows: marker image correlation before and after deformation was excellent, with an average correlation coefficient of about 0.983. For a 768 × 576 pixel image (pixel size 0.68 mm × 0.68 mm), the precision of the displacement detected in our experiment was about 0.018 pixels, and the relative error was about 1.11‰. The average vertical displacement of S1 of the pelvis was 0.8356 ± 0.2830 mm under a vertical load of 500 N, and the average micro-angular rotation of the sacroiliac joint in the lateral view was 0.584 ± 0.221°. The load-displacement curves obtained from our optical measure system
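The center-of-mass step named in the abstract, a weighted average of pixel gray values that gives sub-pixel marker locations, can be sketched as follows; the marker array is a synthetic example:

```python
def weighted_centroid(image):
    """Sub-pixel marker centre (x, y) by centre of mass: each pixel
    coordinate is weighted by its grey value."""
    total = cx = cy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            total += v
            cx += x * v
            cy += y * v
    return cx / total, cy / total

# A small synthetic marker blob, brightest at (x=1, y=1).
marker = [
    [0, 1, 0],
    [1, 4, 1],
    [0, 1, 0],
]
centre = weighted_centroid(marker)  # (1.0, 1.0)
```

With asymmetric grey values the centroid falls between pixel centres, which is what makes displacement precisions of hundredths of a pixel attainable.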

  15. Onboard utilization of ground control points for image correction. Volume 4: Correlation analysis software design

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The software utilized for image correction accuracy measurement is described. The correlation analysis program is written to allow the user various tools to analyze different correlation algorithms. The algorithms were tested using LANDSAT imagery in two different spectral bands. Three classification algorithms are implemented.

  16. Workspace Analysis and Optimization of 3-PUU Parallel Mechanism in Medicine Based on Genetic Algorithm

    PubMed Central

    Hou, Yongchao; Zhao, Yang

    2015-01-01

    A novel 3-PUU parallel robot was put forward, on which kinematic analysis was conducted to obtain its inverse kinematics solution; on this basis, the limitations imposed on the workspace by the sliding pairs and the Hooke joints were analyzed. Moreover, the workspace was solved through a three-dimensional limit search method, and optimization analysis was then performed on the workspace of this parallel robot, laying the foundations for the configuration design and further analysis of the parallel mechanism, with the results indicating that this type of robot has promising application prospects. In addition, the workspace after optimization can meet more requirements of patients. PMID:26628930

  17. WHIPPET: a collaborative software environment for medical image processing and analysis

    NASA Astrophysics Data System (ADS)

    Hu, Yangqiu; Haynor, David R.; Maravilla, Kenneth R.

    2007-03-01

    While there are many publicly available software packages for medical image processing, making them available to end users in clinical and research labs remains non-trivial. An even more challenging task is to mix these packages to form pipelines that meet specific needs seamlessly, because each piece of software usually has its own input/output formats, parameter sets, and so on. To address these issues, we are building WHIPPET (Washington Heterogeneous Image Processing Pipeline EnvironmenT), a collaborative platform for integrating image analysis tools from different sources. The central idea is to develop a set of Python scripts which glue the different packages together and make it possible to connect them in processing pipelines. To achieve this, an analysis is carried out for each candidate package for WHIPPET, describing input/output formats, parameters, ROI description methods, scripting and extensibility and classifying its compatibility with other WHIPPET components as image file level, scripting level, function extension level, or source code level. We then identify components that can be connected in a pipeline directly via image format conversion. We set up a TWiki server for web-based collaboration so that component analysis and task requests can be performed online, as well as project tracking, knowledge base management, and technical support. Currently WHIPPET includes the FSL, MIPAV, FreeSurfer, BrainSuite, Measure, DTIQuery, and 3D Slicer software packages, and is expanding. Users have identified several needed task modules and we report on their implementation.

  18. A software tool to measure the geometric distortion in x-ray image systems

    NASA Astrophysics Data System (ADS)

    Prieto, Gabriel; Guibelalde, Eduardo; Chevalier, Margarita

    2010-04-01

    A software tool is presented to measure the geometric distortion in images obtained with X-ray systems; it provides a more objective method than the usual measurements made over the image of a phantom with rulers. In a first step, this software has been applied to mammography images and makes use of the grid included in the CDMAM phantom (University Hospital Nijmegen). For digital images, the tool automatically locates the grid crossing points and obtains a set of corners (up to 237) that are used by the program to determine six different squares, at the top, bottom, left, right and central positions; the sixth square is the largest that can be fitted in the grid (the widest possible square). The distortion is calculated as ((length of left diagonal - length of right diagonal) / length of left diagonal) (%) for the six positions. The algorithm error is of the order of 0.3%. The method might be applied to other radiological systems without any major changes, adjusting the program code to other phantoms. In this work, a set of measurements for 54 CDMAM images, acquired on 11 different mammography systems from 6 manufacturers, is presented. We can conclude that the distortion of all equipment is smaller than the recommended maximum distortion for primary displays (2%).
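The diagonal-based distortion formula quoted in the abstract is straightforward to compute once the four corners of a square are located; a minimal sketch (corner coordinates are synthetic, and `math.dist` requires Python 3.8+):

```python
import math

def diagonal_distortion(tl, tr, br, bl):
    """Per cent distortion of a grid square from its two diagonals:
    (|left diagonal| - |right diagonal|) / |left diagonal| * 100."""
    left = math.dist(tl, br)   # top-left to bottom-right
    right = math.dist(tr, bl)  # top-right to bottom-left
    return (left - right) / left * 100.0

# An ideal (undistorted) square has equal diagonals.
d = diagonal_distortion((0, 0), (10, 0), (10, 10), (0, 10))  # 0.0
```

Any skew or stretch makes the diagonals unequal, so the quantity is a simple scalar summary of geometric distortion per square.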

  19. Capturing a failure of an ASIC in-situ, using infrared radiometry and image processing software

    NASA Technical Reports Server (NTRS)

    Ruiz, Ronald P.

    2003-01-01

    Failures in electronic devices can sometimes be tricky to locate, especially if they are buried inside radiation-shielded containers designed to work in outer space. Such was the case with a malfunctioning ASIC (Application Specific Integrated Circuit) that was drawing excessive power at a specific temperature during temperature cycle testing. To analyze the failure, infrared radiometry (thermography) was used in combination with image processing software to locate precisely where the power was being dissipated at the moment the failure took place. The IR imaging software was used to make the image of the target and background appear as unity. As testing proceeded and the failure mode was reached, temperature changes revealed the precise location of the fault. The results gave the design engineers the information they needed to fix the problem. This paper describes the techniques and equipment used to accomplish this failure analysis.

  20. IHE cross-enterprise document sharing for imaging: interoperability testing software

    PubMed Central

    2010-01-01

    Background With the deployments of Electronic Health Records (EHR), interoperability testing in healthcare is becoming crucial. EHR enables access to prior diagnostic information in order to assist in health decisions. It is a virtual system that results from the cooperation of several heterogeneous distributed systems. Interoperability between peers is therefore essential. Achieving interoperability requires various types of testing. Implementations need to be tested using software that simulates communication partners, and that provides test data and test plans. Results In this paper we describe a software that is used to test systems that are involved in sharing medical images within the EHR. Our software is used as part of the Integrating the Healthcare Enterprise (IHE) testing process to test the Cross Enterprise Document Sharing for imaging (XDS-I) integration profile. We describe its architecture and functionalities; we also expose the challenges encountered and discuss the elected design solutions. Conclusions EHR is being deployed in several countries. The EHR infrastructure will be continuously evolving to embrace advances in the information technology domain. Our software is built on a web framework to allow for an easy evolution with web technology. The testing software is publicly available; it can be used by system implementers to test their implementations. It can also be used by site integrators to verify and test the interoperability of systems, or by developers to understand specifications ambiguities, or to resolve implementations difficulties. PMID:20858241

  1. OsiriX: an open-source software for navigating in multidimensional DICOM images.

    PubMed

    Rosset, Antoine; Spadola, Luca; Ratib, Osman

    2004-09-01

    A multidimensional image navigation and display software package was designed for the display and interpretation of large sets of multidimensional and multimodality images such as combined PET-CT studies. The software is developed in Objective-C on a Macintosh platform under the MacOS X operating system using the GNUstep development environment. It also benefits from the extremely fast and optimized 3D graphic capabilities of the OpenGL graphic standard, widely used in computer games and optimized to take advantage of any available hardware graphics accelerator boards. In the design of the software, special attention was given to adapting the user interface to the specific and complex tasks of navigating through large sets of image data. An interactive jog-wheel device, widely used in the video and movie industry, was implemented to allow users to navigate the different dimensions of an image set much faster than with a traditional mouse or on-screen cursors and sliders. The program can easily be adapted for very specific tasks that require a limited number of functions by adding and removing tools from the program's toolbar, avoiding an overwhelming number of unnecessary tools and functions. The processing and image rendering tools of the software are based on the open-source libraries ITK and VTK. This ensures that any new developments in image processing that emerge from other academic institutions using these libraries can be directly ported to the OsiriX program. OsiriX is provided free of charge under the GNU open-source licensing agreement at http://homepage.mac.com/rossetantoine/osirix.

  2. Software tools of the Computis European project to process mass spectrometry images.

    PubMed

    Robbe, Marie-France; Both, Jean-Pierre; Prideaux, Brendan; Klinkert, Ivo; Picaud, Vincent; Schramm, Thorsten; Hester, Atfons; Guevara, Victor; Stoeckli, Markus; Roempp, Andreas; Heeren, Ron M A; Spengler, Bernhard; Gala, Olivier; Haan, Serge

    2014-01-01

    Among the needs usually expressed by teams using mass spectrometry imaging, one that often arises is that for user-friendly software able to manage huge data volumes quickly and to provide efficient assistance for the interpretation of data. To answer this need, the Computis European project developed several complementary software tools to process mass spectrometry imaging data. Data Cube Explorer provides simple spatial and spectral exploration for matrix-assisted laser desorption/ionisation-time of flight (MALDI-ToF) and time of flight-secondary-ion mass spectrometry (ToF-SIMS) data. SpectViewer offers visualisation functions, assistance with the interpretation of data, classification functionalities, peak list extraction to interrogate biological databases, and image overlay, and it can process data issued from MALDI-ToF, ToF-SIMS and desorption electrospray ionisation (DESI) equipment. EasyReg2D is able to register two images, in American Standard Code for Information Interchange (ASCII) format, issued from different technologies. The collaboration between the teams was hampered by the multiplicity of equipment and data formats, so the project also developed a common data format (imzML) to facilitate the exchange of experimental data and their interpretation by the different software tools. The BioMap platform for visualisation and exploration of MALDI-ToF and DESI images was adapted to parse imzML files, enabling its access to all project partners and, more globally, to a larger community of users. Considering the huge advantages brought by the imzML standard format, a specific editor (vBrowser) for imzML files and converters from proprietary formats to imzML were developed to enable the use of the imzML format by a broad scientific community. This initiative paves the way toward the development of a large panel of software tools able to process mass spectrometry imaging datasets in the future.

  3. A Virtual Information-Action Workspace for Command and Control

    NASA Astrophysics Data System (ADS)

    Lintern, Gavan; Naikar, Neelam

    2002-10-01

    Information overload has become a critical challenge within military Command and Control. However, the problem is not so much one of too much information but of abundant information that is poorly organized and poorly represented. In addition, the capabilities to test the effects of decisions before they are implemented and to monitor the progress of events after a decision is implemented are primitive. A virtual information-action workspace could be designed to resolve these issues. The design of such a space would require a detailed understanding of the specific information needed to support decision making in Command and Control. That information can be obtained with the use of knowledge acquisition and knowledge representation tools from the field of applied cognitive psychology. In addition, it will be necessary to integrate forms for perception and action into a virtual space that will support access to the information and that will provide means for testing and implementing decisions. This paper presents a rationale for a virtual information-action workspace and outlines an approach to its design.

  4. MedXViewer: an extensible web-enabled software package for medical imaging

    NASA Astrophysics Data System (ADS)

    Looney, P. T.; Young, K. C.; Mackenzie, Alistair; Halling-Brown, Mark D.

    2014-03-01

    MedXViewer (Medical eXtensible Viewer) is an application designed to allow workstation-independent, PACS-less viewing and interaction with anonymised medical images (e.g. observer studies). The application was initially implemented for use in digital mammography and tomosynthesis but the flexible software design allows it to be easily extended to other imaging modalities. Regions of interest can be identified by a user and any associated information about a mark, an image or a study can be added. The questions and settings can be easily configured depending on the needs of the research, allowing both ROC and FROC studies to be performed. The extensible nature of the design allows other functionality and hanging protocols to be made available for each study. Panning, windowing, zooming and moving through slices are all available while modality-specific features can be easily enabled, e.g. quadrant zooming in mammographic studies. MedXViewer can integrate with a web-based image database allowing results and images to be stored centrally. The software and images can be downloaded remotely from this centralised data-store. Alternatively, the software can run without a network connection where the images and results can be encrypted and stored locally on a machine or external drive. Due to the advanced workstation-style functionality, the simple deployment on heterogeneous systems over the internet without a requirement for administrative access and the ability to utilise a centralised database, MedXViewer has been used for running remote paperless observer studies and is capable of providing a training infrastructure and co-ordinating remote collaborative viewing sessions (e.g. cancer reviews, interesting cases).

  5. A User Assessment of Workspaces in Selected Music Education Computer Laboratories.

    ERIC Educational Resources Information Center

    Badolato, Michael Jeremy

    A study of 120 students selected from the user populations of four music education computer laboratories was conducted to determine the applicability of current ergonomic and environmental design guidelines in satisfying the needs of users of educational computing workspaces. Eleven categories of workspace factors were organized into a…

  6. Shadow netWorkspace: An Open Source Intranet for Learning Communities

    ERIC Educational Resources Information Center

    Laffey, James M.; Musser, Dale

    2006-01-01

    Shadow netWorkspace (SNS) is a web application system that allows a school or any type of community to establish an intranet with network workspaces for all members and groups. The goal of SNS has been to make it easy for schools and other educational organizations to provide network services in support of implementing a learning community. SNS is…

  7. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    NASA Astrophysics Data System (ADS)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

    Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in Photogrammetry and Computer Vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured-light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they constitute an attractive 3D digitization approach: although range-based methods are generally very accurate, image-based methods are low-cost and can easily be used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open-source tools and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Given the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method for generating three-dimensional models. Much research has been carried out to identify suitable software and algorithms for achieving an accurate and complete model; however, little attention has been paid to the type of sensor used and its effects on the quality of the final model. The purpose of this paper is the evaluation and introduction of an appropriate combination of sensor and software to provide a complete model with the highest accuracy. 
To do this, different software packages used in previous studies were compared and

  8. The Spectral Image Processing System (SIPS): Software for integrated analysis of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.

    1992-01-01

    The Spectral Image Processing System (SIPS) is a software package developed by the Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, in response to a perceived need to provide integrated tools for analysis of imaging spectrometer data both spectrally and spatially. SIPS was specifically designed to deal with data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the High Resolution Imaging Spectrometer (HIRIS), but was tested with other datasets including the Geophysical and Environmental Research Imaging Spectrometer (GERIS), GEOSCAN images, and Landsat TM. SIPS was developed using the 'Interactive Data Language' (IDL). It takes advantage of high speed disk access and fast processors running under the UNIX operating system to provide rapid analysis of entire imaging spectrometer datasets. SIPS allows analysis of single or multiple imaging spectrometer data segments at full spatial and spectral resolution. It also allows visualization and interactive analysis of image cubes derived from quantitative analysis procedures such as absorption band characterization and spectral unmixing. SIPS consists of three modules: SIPS Utilities, SIPS_View, and SIPS Analysis. SIPS version 1.1 is described below.

  9. Oxygen octahedra picker: A software tool to extract quantitative information from STEM images.

    PubMed

    Wang, Yi; Salzberger, Ute; Sigle, Wilfried; Eren Suyolcu, Y; van Aken, Peter A

    2016-09-01

    In perovskite oxide based materials and hetero-structures there are often strong correlations between oxygen octahedral distortions and functionality. Thus, atomistic understanding of the octahedral distortion, which requires accurate measurements of atomic column positions, will greatly help to engineer their properties. Here, we report the development of a software tool to extract quantitative information of the lattice and of BO6 octahedral distortions from STEM images. Center-of-mass and 2D Gaussian fitting methods are implemented to locate positions of individual atom columns. The precision of atomic column distance measurements is evaluated on both simulated and experimental images. The application of the software tool is demonstrated using practical examples. PMID:27344044
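Of the two column-location methods the abstract names, 2D Gaussian fitting is the one that refines peak positions to sub-pixel precision; a sketch using SciPy on a synthetic, noiseless peak (this illustrates the general technique, not the tool's own code):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma):
    """Isotropic 2D Gaussian, the peak model fitted to each atom column."""
    x, y = coords
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

# Synthetic atom column: a Gaussian peak centred at (x=3.3, y=4.7).
yy, xx = np.mgrid[0:9, 0:9]
data = gauss2d((xx.ravel(), yy.ravel()), 1.0, 3.3, 4.7, 1.2)

# Fit recovers the sub-pixel column position from pixelated intensities.
popt, _ = curve_fit(gauss2d, (xx.ravel(), yy.ravel()), data,
                    p0=(1.0, 4.0, 4.0, 1.0))
x_fit, y_fit = popt[1], popt[2]   # close to 3.3, 4.7
```

Repeating the fit per column and differencing neighbouring positions yields the lattice spacings and octahedral distortion measures the tool tabulates.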

  10. TiLIA: a software package for image analysis of firefly flash patterns.

    PubMed

    Konno, Junsuke; Hatta-Ohashi, Yoko; Akiyoshi, Ryutaro; Thancharoen, Anchana; Silalom, Somyot; Sakchoowong, Watana; Yiu, Vor; Ohba, Nobuyoshi; Suzuki, Hirobumi

    2016-05-01

    As flash signaling patterns of fireflies are species specific, signal-pattern analysis is important for understanding this system of communication. Here, we present time-lapse image analysis (TiLIA), a free open-source software package for signal and flight pattern analyses of fireflies that uses video-recorded image data. TiLIA enables flight path tracing of individual fireflies and provides frame-by-frame coordinates and light intensity data. As an example of TiLIA capabilities, we demonstrate flash pattern analysis of the fireflies Luciola cruciata and L. lateralis during courtship behavior. PMID:27069594
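The frame-by-frame light-intensity traces TiLIA exports lend themselves to simple flash-onset extraction; a sketch of one way to do it (the threshold, frame rate, and trace below are synthetic assumptions, not TiLIA's algorithm):

```python
def flash_intervals(intensity, threshold, fps):
    """Flash onset times (s) and inter-flash intervals (s) from a
    per-frame intensity trace; a rising edge through the threshold
    marks the start of a flash."""
    onsets = [i for i in range(1, len(intensity))
              if intensity[i] >= threshold and intensity[i - 1] < threshold]
    times = [i / fps for i in onsets]
    intervals = [b - a for a, b in zip(times, times[1:])]
    return times, intervals

# Synthetic 10 fps trace with flashes at frames 2-3 and 7-8.
trace = [0, 0, 9, 9, 0, 0, 0, 9, 9, 0]
times, intervals = flash_intervals(trace, threshold=5, fps=10)
# times ≈ [0.2, 0.7]; one inter-flash interval of ≈ 0.5 s
```

Inter-flash intervals computed this way are the species-specific quantity that flash-pattern comparisons rest on.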

  11. 2D-CELL: image processing software for extraction and analysis of 2-dimensional cellular structures

    NASA Astrophysics Data System (ADS)

    Righetti, F.; Telley, H.; Leibling, Th. M.; Mocellin, A.

    1992-01-01

2D-CELL is a software package for the largely interactive processing and analysis of photographic images of cellular structures. Starting from a binary digitized image, the programs extract the line network (skeleton) of the structure and determine the graph representation that best models it. Provision is made for manually correcting defects such as incorrect node positions or dangling bonds. A suitable algorithm then retrieves the polygonal contours that define individual cells (local boundary curvatures are neglected for simplicity). Using elementary analytical geometry, a range of metric and topological parameters describing the cell population is then computed, organized into statistical distributions, and graphically displayed.
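Once polygonal contours are retrieved, the metric parameters follow from elementary analytical geometry, as the abstract notes. A minimal Python sketch of one such computation, the shoelace formula for cell area (the vertices are invented):

```python
def polygon_area(vertices):
    """Shoelace formula: area of a polygon given (x, y) vertices in order."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]      # wrap around to close the contour
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# A unit-square cell: area 1.0, and its side count (4) is the kind of
# topological parameter collected into the statistical distributions.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(polygon_area(square), len(square))  # 1.0 4
```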

  12. TiLIA: a software package for image analysis of firefly flash patterns.

    PubMed

    Konno, Junsuke; Hatta-Ohashi, Yoko; Akiyoshi, Ryutaro; Thancharoen, Anchana; Silalom, Somyot; Sakchoowong, Watana; Yiu, Vor; Ohba, Nobuyoshi; Suzuki, Hirobumi

    2016-05-01

    As flash signaling patterns of fireflies are species specific, signal-pattern analysis is important for understanding this system of communication. Here, we present time-lapse image analysis (TiLIA), a free open-source software package for signal and flight pattern analyses of fireflies that uses video-recorded image data. TiLIA enables flight path tracing of individual fireflies and provides frame-by-frame coordinates and light intensity data. As an example of TiLIA capabilities, we demonstrate flash pattern analysis of the fireflies Luciola cruciata and L. lateralis during courtship behavior.

  13. SOFI Simulation Tool: A Software Package for Simulating and Testing Super-Resolution Optical Fluctuation Imaging

    PubMed Central

    Sharipov, Azat; Geissbuehler, Stefan; Leutenegger, Marcel; Vandenberg, Wim; Dedecker, Peter; Hofkens, Johan; Lasser, Theo

    2016-01-01

    Super-resolution optical fluctuation imaging (SOFI) allows one to perform sub-diffraction fluorescence microscopy of living cells. By analyzing the acquired image sequence with an advanced correlation method, i.e. a high-order cross-cumulant analysis, super-resolution in all three spatial dimensions can be achieved. Here we introduce a software tool for a simple qualitative comparison of SOFI images under simulated conditions considering parameters of the microscope setup and essential properties of the biological sample. This tool incorporates SOFI and STORM algorithms, displays and describes the SOFI image processing steps in a tutorial-like fashion. Fast testing of various parameters simplifies the parameter optimization prior to experimental work. The performance of the simulation tool is demonstrated by comparing simulated results with experimentally acquired data. PMID:27583365

  14. SOFI Simulation Tool: A Software Package for Simulating and Testing Super-Resolution Optical Fluctuation Imaging.

    PubMed

    Girsault, Arik; Lukes, Tomas; Sharipov, Azat; Geissbuehler, Stefan; Leutenegger, Marcel; Vandenberg, Wim; Dedecker, Peter; Hofkens, Johan; Lasser, Theo

    2016-01-01

    Super-resolution optical fluctuation imaging (SOFI) allows one to perform sub-diffraction fluorescence microscopy of living cells. By analyzing the acquired image sequence with an advanced correlation method, i.e. a high-order cross-cumulant analysis, super-resolution in all three spatial dimensions can be achieved. Here we introduce a software tool for a simple qualitative comparison of SOFI images under simulated conditions considering parameters of the microscope setup and essential properties of the biological sample. This tool incorporates SOFI and STORM algorithms, displays and describes the SOFI image processing steps in a tutorial-like fashion. Fast testing of various parameters simplifies the parameter optimization prior to experimental work. The performance of the simulation tool is demonstrated by comparing simulated results with experimentally acquired data. PMID:27583365
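The correlation analysis at the heart of SOFI can be hinted at with a toy Python sketch. This shows only the simplest zero-lag, second-order auto-cumulant (the per-pixel variance of the fluctuation trace), not the high-order cross-cumulant analysis the paper describes; the blinking data are synthetic:

```python
import numpy as np

def sofi2(stack):
    """Second-order SOFI image: per-pixel variance (zero-lag second cumulant)
    of the time trace. `stack` has shape (frames, height, width)."""
    delta = stack - stack.mean(axis=0)      # remove the mean (DC) component
    return (delta ** 2).mean(axis=0)

# A blinking emitter fluctuates; a static background pixel does not,
# so the non-fluctuating signal vanishes in the cumulant image.
rng = np.random.default_rng(0)
frames = rng.integers(0, 2, size=(500, 1, 2)).astype(float)  # pixel 0 blinks
frames[:, 0, 1] = 1.0                                        # pixel 1 constant
img = sofi2(frames)
print(img[0, 1])  # 0.0
```

Real SOFI gains resolution because the effective point spread function is raised to the cumulant order; cross-cumulants between neighboring pixels additionally suppress shot-noise bias.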

  15. IBIS integrated biological imaging system: electron micrograph image-processing software running on Unix workstations.

    PubMed

    Flifla, M J; Garreau, M; Rolland, J P; Coatrieux, J L; Thomas, D

    1992-12-01

    'IBIS' is a set of computer programs concerned with the processing of electron micrographs, with particular emphasis on the requirements for structural analyses of biological macromolecules. The software is written in FORTRAN 77 and runs on Unix workstations. A description of the various functions and the implementation mode is given. Some examples illustrate the user interface.

  16. Vobi One: a data processing software package for functional optical imaging.

    PubMed

    Takerkart, Sylvain; Katz, Philippe; Garcia, Flavien; Roux, Sébastien; Reynaud, Alexandre; Chavane, Frédéric

    2014-01-01

Optical imaging is the only technique that allows the activity of a neuronal population to be recorded at the mesoscopic scale. A large region of the cortex (10-20 mm diameter) is directly imaged with a CCD camera while the animal performs a behavioral task, producing spatio-temporal data with an unprecedented combination of spatial and temporal resolutions (tens of micrometers and milliseconds, respectively). However, researchers who have developed and used this technique have relied on heterogeneous software and methods to analyze their data. In this paper, we introduce Vobi One, a software package entirely dedicated to the processing of functional optical imaging data. It has been designed to facilitate the processing of data and the comparison of different analysis methods. Moreover, it should help bring good analysis practices to the community, because it relies on a database and a standard format for data handling and provides tools for producing reproducible research. Vobi One is an extension of the BrainVISA software platform, entirely written in the Python programming language, open source, and freely available for download at https://trac.int.univ-amu.fr/vobi_one.

  17. Performing Quantitative Imaging Acquisition, Analysis and Visualization Using the Best of Open Source and Commercial Software Solutions

    PubMed Central

    Shenoy, Shailesh M.

    2016-01-01

    A challenge in any imaging laboratory, especially one that uses modern techniques, is to achieve a sustainable and productive balance between using open source and commercial software to perform quantitative image acquisition, analysis and visualization. In addition to considering the expense of software licensing, one must consider factors such as the quality and usefulness of the software’s support, training and documentation. Also, one must consider the reproducibility with which multiple people generate results using the same software to perform the same analysis, how one may distribute their methods to the community using the software and the potential for achieving automation to improve productivity. PMID:27516727

  18. Image 100 procedures manual development: Applications system library definition and Image 100 software definition

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Decell, H. P., Jr.

    1975-01-01

    An outline for an Image 100 procedures manual for Earth Resources Program image analysis was developed which sets forth guidelines that provide a basis for the preparation and updating of an Image 100 Procedures Manual. The scope of the outline was limited to definition of general features of a procedures manual together with special features of an interactive system. Computer programs were identified which should be implemented as part of an applications oriented library for the system.

  19. 3D active workspace of human hand anatomical model

    PubMed Central

    Dragulescu, Doina; Perdereau, Véronique; Drouin, Michel; Ungureanu, Loredana; Menyhardt, Karoly

    2007-01-01

Background If the model of the human hand is created accurately, respecting the type of motion provided by each articulation and the dimensions of the articulated bones, it can function like the real organ and provide the same motions. Unfortunately, the human hand is hard to model because its kinematic chains are subject to motion constraints. Conversely, if an application does not require fine manipulation, a model as complex as the real hand is unnecessary. In every case, however, the hand model has to cover a certain space of motions within an imposed workspace architecture, whatever the practical application. Methods Based on the Denavit-Hartenberg convention, we conceived a kinematical model of the human hand that reflects the structure and behavior of the natural model. We obtained the kinematical equations describing the motion of every fingertip with respect to the general coordinate system placed on the wrist. For every joint variable, a range of motion was established. Dividing these joint variables into an appropriate number of intervals and connecting them, the complex surface bordering the active hand model workspace was obtained. Results Using MATLAB 7.0, the complex surface described by the fingertips when all hand articulations move simultaneously was obtained. Any point on this surface has coordinates smaller than the maximum length of the middle finger in the static position; therefore, a sphere centered at the origin of the general coordinate system, with a radius equal to this length, covers the represented complex surface. Conclusion We propose a human hand model that represents a new solution compared to existing ones. This model is capable of special movements such as the power grip and dexterous manipulations, during which the fingertips do not exceed the active workspace encapsulated by the determined surfaces. The proposed kinematical model can help to choose which model joints could be
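The workspace-sampling procedure described in the Methods (divide each joint range into intervals, evaluate the forward kinematics, and bound the result) can be sketched for a toy planar two-link chain; the link lengths and joint limits below are assumptions, not the paper's hand parameters:

```python
import numpy as np

def fingertip(theta1, theta2, l1=0.05, l2=0.03):
    """Forward kinematics of a planar 2-link 'finger' (link lengths in metres),
    a stand-in for the full Denavit-Hartenberg chain of the paper."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

# Sample each (assumed) joint range on a grid and collect reachable points.
t1 = np.linspace(0, np.pi / 2, 50)
t2 = np.linspace(0, 2 * np.pi / 3, 50)
T1, T2 = np.meshgrid(t1, t2)
X, Y = fingertip(T1, T2)

# Every reachable point lies within radius l1 + l2 = 0.08 of the origin,
# the planar analogue of the bounding sphere described in the Results.
r = np.hypot(X, Y)
print(r.max() <= 0.08 + 1e-12)  # True
```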

  20. [Development of DICOM image viewing software for efficient image reading and evaluation of distributed server system for diagnostic environment].

    PubMed

    Ishikawa, K

    2000-12-01

To construct an efficient diagnostic environment using computer displays, the author investigated network transmission times using clinical images. In our hospital, we introduced optical-fiber 100Base-FX Ethernet connections between 22 HIS segments and one RIS segment. Although Ethernet architecture is inexpensive, the speed of image transmission is 2371 KB/sec (4.6 CT slices/sec) within the RIS segment and 996 KB/sec (1.9 CT slices/sec) from the RIS segment to the HIS segments. Because one examination is transmitted in one minute, transmission does not disturb image reading. In addition, a distributed server system using inexpensive personal computers helps in constructing an efficient system. This investigation showed that commercially based Digital Imaging and Communications in Medicine (DICOM) servers and RSNA Central Test Node servers do not differ greatly in transmission speed. The author programmed and developed DICOM transmission and viewing software for Macintosh computers. This viewer includes two inventions, the dynamic tiling window system (DTWS) and the window binding mode (WBM). On DTWS, windows, tiles, and images are independent objects, which are movable and resizable. The tile matrix is changeable by mouse dragging, which provides suitable tile rectangles for wide-low or narrow-high images. The window-arranging tool prevents windows from scattering. Using WBM, any operation affects each window similarly, so the relationship between compared images is always equivalent. DTWS and WBM contribute greatly to a filmless diagnostic environment.
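As a sanity check on the throughput figures quoted above, the implied per-slice size can be computed directly; interpreting it as roughly a 512x512, 16-bit CT slice is our inference, not a figure stated in the paper:

```python
# Reported: 2371 KB/sec = 4.6 CT slices/sec within the RIS segment, and
# 996 KB/sec = 1.9 CT slices/sec from the RIS segment to HIS segments.
ris_kb_per_slice = 2371 / 4.6
his_kb_per_slice = 996 / 1.9

# Both ratios land near the size of a 512 x 512, 16-bit image:
reference_kb = 512 * 512 * 2 / 1024
print(round(ris_kb_per_slice), round(his_kb_per_slice), reference_kb)  # 515 524 512.0
```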

  1. Applying Workspace Limitations in a Velocity-Controlled Robotic Mechanism

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E. (Inventor); Hargrave, Brian (Inventor); Platt, Robert J., Jr. (Inventor)

    2014-01-01

A robotic system includes a robotic mechanism responsive to velocity control signals and a permissible workspace defined by a convex-polygon boundary. A host machine determines the position of a reference point on the mechanism with respect to the boundary and runs an algorithm that enforces the boundary by automatically shaping the velocity control signals as a function of that position, thereby providing smooth and unperturbed operation of the mechanism along the edges and corners of the boundary. The algorithm is suited to applications with higher speeds and/or external forces, and the host machine may include a hardware module for executing it. A method for enforcing the convex-polygon boundary is also provided, in which the velocity control signal is shaped via the host machine as a function of the reference point position.
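The boundary-enforcement idea, shaping the velocity command as a function of the reference point's position relative to the polygon's edges, can be sketched as follows. This is a simplified half-plane projection, not the patented algorithm; the square workspace and velocity values are invented:

```python
import numpy as np

def shape_velocity(p, v, normals, offsets, eps=1e-9):
    """Shape a commanded velocity so the reference point p never crosses a
    convex-polygon boundary given as half-planes n . p <= d (n outward)."""
    v = np.asarray(v, dtype=float).copy()
    for n, d in zip(normals, offsets):
        n = np.asarray(n, dtype=float)
        if n @ p >= d - eps and n @ v > 0:   # on an active edge, moving outward
            v -= (n @ v) / (n @ n) * n       # remove the outward normal component
    return v

# Unit square [0,1]^2: at the right edge the x-component is zeroed while the
# tangential (y) component passes through, giving smooth sliding motion;
# at a corner, two edges become active and both components are removed.
normals = [(1, 0), (-1, 0), (0, 1), (0, -1)]
offsets = [1, 0, 1, 0]
v = shape_velocity(np.array([1.0, 0.5]), [0.4, 0.3], normals, offsets)
print(v)  # [0.  0.3]
```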

  2. The i5k Workspace@NAL--enabling genomic data access, visualization and curation of arthropod genomes.

    PubMed

    Poelchau, Monica; Childers, Christopher; Moore, Gary; Tsavatapalli, Vijaya; Evans, Jay; Lee, Chien-Yueh; Lin, Han; Lin, Jun-Wei; Hackett, Kevin

    2015-01-01

    The 5000 arthropod genomes initiative (i5k) has tasked itself with coordinating the sequencing of 5000 insect or related arthropod genomes. The resulting influx of data, mostly from small research groups or communities with little bioinformatics experience, will require visualization, dissemination and curation, preferably from a centralized platform. The National Agricultural Library (NAL) has implemented the i5k Workspace@NAL (http://i5k.nal.usda.gov/) to help meet the i5k initiative's genome hosting needs. Any i5k member is encouraged to contact the i5k Workspace with their genome project details. Once submitted, new content will be accessible via organism pages, genome browsers and BLAST search engines, which are implemented via the open-source Tripal framework, a web interface for the underlying Chado database schema. We also implement the Web Apollo software for groups that choose to curate gene models. New content will add to the existing body of 35 arthropod species, which include species relevant for many aspects of arthropod genomic research, including agriculture, invasion biology, systematics, ecology and evolution, and developmental research.

  3. The i5k Workspace@NAL--enabling genomic data access, visualization and curation of arthropod genomes.

    PubMed

    Poelchau, Monica; Childers, Christopher; Moore, Gary; Tsavatapalli, Vijaya; Evans, Jay; Lee, Chien-Yueh; Lin, Han; Lin, Jun-Wei; Hackett, Kevin

    2015-01-01

    The 5000 arthropod genomes initiative (i5k) has tasked itself with coordinating the sequencing of 5000 insect or related arthropod genomes. The resulting influx of data, mostly from small research groups or communities with little bioinformatics experience, will require visualization, dissemination and curation, preferably from a centralized platform. The National Agricultural Library (NAL) has implemented the i5k Workspace@NAL (http://i5k.nal.usda.gov/) to help meet the i5k initiative's genome hosting needs. Any i5k member is encouraged to contact the i5k Workspace with their genome project details. Once submitted, new content will be accessible via organism pages, genome browsers and BLAST search engines, which are implemented via the open-source Tripal framework, a web interface for the underlying Chado database schema. We also implement the Web Apollo software for groups that choose to curate gene models. New content will add to the existing body of 35 arthropod species, which include species relevant for many aspects of arthropod genomic research, including agriculture, invasion biology, systematics, ecology and evolution, and developmental research. PMID:25332403

  4. The i5k Workspace@NAL—enabling genomic data access, visualization and curation of arthropod genomes

    PubMed Central

    Poelchau, Monica; Childers, Christopher; Moore, Gary; Tsavatapalli, Vijaya; Evans, Jay; Lee, Chien-Yueh; Lin, Han; Lin, Jun-Wei; Hackett, Kevin

    2015-01-01

    The 5000 arthropod genomes initiative (i5k) has tasked itself with coordinating the sequencing of 5000 insect or related arthropod genomes. The resulting influx of data, mostly from small research groups or communities with little bioinformatics experience, will require visualization, dissemination and curation, preferably from a centralized platform. The National Agricultural Library (NAL) has implemented the i5k Workspace@NAL (http://i5k.nal.usda.gov/) to help meet the i5k initiative's genome hosting needs. Any i5k member is encouraged to contact the i5k Workspace with their genome project details. Once submitted, new content will be accessible via organism pages, genome browsers and BLAST search engines, which are implemented via the open-source Tripal framework, a web interface for the underlying Chado database schema. We also implement the Web Apollo software for groups that choose to curate gene models. New content will add to the existing body of 35 arthropod species, which include species relevant for many aspects of arthropod genomic research, including agriculture, invasion biology, systematics, ecology and evolution, and developmental research. PMID:25332403

  5. A Survey of DICOM Viewer Software to Integrate Clinical Research and Medical Imaging.

    PubMed

    Haak, Daniel; Page, Charles-E; Deserno, Thomas M

    2016-04-01

The digital imaging and communications in medicine (DICOM) protocol is the leading standard for image data management in healthcare. Imaging biomarkers and image-based surrogate endpoints in clinical trials and medical registries require DICOM viewer software with advanced functionality for visualization and interfaces for integration. In this paper, a comprehensive evaluation of 28 DICOM viewers is performed. The evaluation criteria are obtained from application scenarios in clinical research rather than patient care. They include (i) platform, (ii) interface, (iii) support, (iv) two-dimensional (2D), and (v) three-dimensional (3D) viewing. On average, 4.48 of the 8 2D criteria and 1.43 of the 5 3D criteria are satisfied. Suitable DICOM interfaces for central viewing in hospitals are provided by GingkoCADx, MIPAV, and OsiriX Lite. The viewers ImageJ, MicroView, MIPAV, and OsiriX Lite offer all included 3D-rendering features for advanced viewing. Interfaces needed for decentral viewing in web-based systems are offered by Oviyam, Weasis, and Xero. Focusing on open source components, MIPAV is the best candidate for 3D imaging as well as DICOM communication. Weasis is superior for workflow optimization in clinical trials. Our evaluation shows that advanced visualization and suitable interfaces can also be found in the open source field and not only in commercial products.

  6. Development of an Open Source Image-Based Flow Modeling Software - SimVascular

    NASA Astrophysics Data System (ADS)

    Updegrove, Adam; Merkow, Jameson; Schiavazzi, Daniele; Wilson, Nathan; Marsden, Alison; Shadden, Shawn

    2014-11-01

SimVascular (www.simvascular.org) is currently the only comprehensive software package that provides a complete pipeline from medical image data segmentation to patient-specific blood flow simulation. This software and its derivatives have been used in hundreds of conference abstracts and peer-reviewed journal articles, and have served as the foundation of medical startups. SimVascular was initially released in August 2007, yet major challenges and deterrents for new adopters were the required licensing of three expensive commercial libraries utilized by the software, a complicated build process, and a lack of documentation, support, and organized maintenance. In the past year, the SimVascular team has made significant progress in integrating open source alternatives to the commercial linear solver, solid modeling, and mesh generation libraries required by the original public release. In addition, the build system, available distributions, and graphical user interface have been significantly enhanced. Finally, the software has been updated to enable users to directly run simulations using models and boundary condition values included in the Vascular Model Repository (vascularmodel.org). In this presentation we will briefly overview the capabilities of the new SimVascular 2.0 release. National Science Foundation.

  7. Upper Extremity 3D Reachable Workspace Assessment in ALS by Kinect sensor

    PubMed Central

    Oskarsson, Bjorn; Joyce, Nanette C.; de Bie, Evan; Nicorici, Alina; Bajcsy, Ruzena; Kurillo, Gregorij; Han, Jay J.

    2016-01-01

    Introduction Reachable workspace is a measure that provides clinically meaningful information regarding arm function. In this study, a Kinect sensor was used to determine the spectrum of 3D reachable workspace encountered in a cross-sectional cohort of individuals with ALS. Method Bilateral 3D reachable workspace was recorded from 10 subjects with ALS and 23 healthy controls. The data were normalized by each individual's arm length to obtain a reachable workspace relative surface area (RSA). Concurrent validity was assessed by correlation with ALSFRSr scores. Results The Kinect-measured reachable workspace RSA differed significantly between the ALS and control subjects (0.579±0.226 vs. 0.786±0.069; P<0.001). The RSA demonstrated correlation with ALSFRSr upper extremity items (Spearman correlation ρ=0.569; P=0.009). With worsening upper extremity function as categorized by the ALSFRSr, the reachable workspace also decreased progressively. Conclusions This study demonstrates the feasibility and potential of using a novel Kinect-based reachable workspace outcome measure in ALS. PMID:25965847
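The normalization step described above, dividing the measured workspace envelope by arm length (squared, so the quantity is dimensionless), can be sketched as follows; the numbers are invented and the paper's exact protocol may differ:

```python
import numpy as np

def relative_surface_area(patch_areas, arm_length):
    """Reachable-workspace envelope area normalized by arm length squared,
    giving a dimensionless measure comparable across subjects."""
    return np.sum(patch_areas) / arm_length ** 2

# Two subjects with the same *relative* reach get the same RSA despite
# different arm lengths (areas in m^2, arm lengths in m; values invented).
short_arm = relative_surface_area([0.10, 0.15], arm_length=0.6)
long_arm = relative_surface_area([0.40, 0.60], arm_length=1.2)
print(round(short_arm, 3), round(long_arm, 3))  # 0.694 0.694
```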

  8. CONRAD—A software framework for cone-beam imaging in radiology

    SciTech Connect

    Maier, Andreas; Choi, Jang-Hwan; Riess, Christian; Keil, Andreas; Fahrig, Rebecca; Hofmann, Hannes G.; Berger, Martin; Fischer, Peter; Schwemmer, Chris; Wu, Haibo; Müller, Kerstin; Hornegger, Joachim

    2013-11-15

Purpose: In the community of x-ray imaging, there is a multitude of tools and applications that are used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools that are required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects. Methods: CONRAD is a Java-based state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL. There is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and respective vector motion fields. Well known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source. Results: A total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74,000 nonblank lines of code, out of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a table top system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art runtimes with 2.5 s for the 256 problem size and 12.4 s for the 512 problem size. Conclusions: As a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular this offers new opportunities for scientific collaboration and

  9. Performance of commercial and open source remote sensing/image processing software for land cover/use purposes

    NASA Astrophysics Data System (ADS)

    Teodoro, Ana C.; Ferreira, Dário; Sillero, Neftali

    2012-10-01

We aim to compare the potentialities of four remote sensing/image processing software packages: PCI Geomatica V8.2, ENVI 4.7, SPRING 5.1.8, and the ORFEO toolbox integrated in Monteverdi 1.11. PCI Geomatica and ENVI are commercial/proprietary software; SPRING and ORFEO are open source. We listed the main classification algorithms available in these four packages, divided them by type/approach of classification (e.g., pixel-based, object-oriented, and data mining algorithms), and assessed their performance. Using these algorithms, we classified two images covering the same area (Porto-Vila Nova de Gaia, Northern Portugal): one Landsat TM image from October 2011 and one IKONOS image from September 2005. We compared processing time and classification results using the confusion matrix (overall accuracy) and the Kappa statistic. The algorithms tested produced different classification results depending on the software used, with greater differences for the Landsat image than for the IKONOS image. This work could be very useful for other researchers, as it provides a qualitative and quantitative analysis of the different image processing algorithms available in commercial and open source software.
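The two agreement measures used above, overall accuracy and the Kappa statistic, are computed from the confusion matrix as follows (the matrix values are invented, not the paper's results):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's Kappa from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    po = np.trace(cm) / total                                   # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    return po, kappa

# Toy 2-class land-cover result (illustrative numbers only).
cm = [[45, 5],
      [10, 40]]
oa, kappa = accuracy_and_kappa(cm)
print(round(oa, 2), round(kappa, 2))  # 0.85 0.7
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside overall accuracy when comparing classifiers.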

  10. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  11. 76 FR 43724 - In the Matter of Certain Digital Imaging Devices and Related Software; Notice of Commission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-21

    ... Cupertino, California (``Apple''). 75 FR 28058 (May 19, 2010). The complaint alleged ] violations of section... COMMISSION In the Matter of Certain Digital Imaging Devices and Related Software; Notice of Commission... related software by reason of infringement of various claims of United States Patent Nos. 6,031,964 and...

  12. Mississippi Company Using NASA Software Program to Provide Unique Imaging Service: DATASTAR Success Story

    NASA Technical Reports Server (NTRS)

    2001-01-01

DATASTAR, Inc., of Picayune, Miss., has taken NASA's award-winning Earth Resources Laboratory Applications (ELAS) software program and evolved it to the point that the company now provides a unique spatial imagery service over the Internet. ELAS was developed in the early 1980s to process satellite and airborne sensor imagery data of the Earth's surface into readable and usable information. While there are several software packages on the market that allow the manipulation of spatial data into usable products, this is usually a laborious task. The new program, called the DATASTAR Image Processing Exploitation (DIPX) Delivery Service, is a subscription service available over the Internet that takes the work out of the equation and provides normalized geospatial data in the form of decision products.

  13. Advances in hardware, software, and automation for 193nm aerial image measurement systems

    NASA Astrophysics Data System (ADS)

    Zibold, Axel M.; Schmid, R.; Seyfarth, A.; Waechter, M.; Harnisch, W.; Doornmalen, H. v.

    2005-05-01

A new, second-generation AIMS fab 193 system has been developed that is capable of emulating lithographic imaging of any type of reticle, such as binary and phase-shift masks (PSM), including resolution enhancement technologies (RET) such as optical proximity correction (OPC) or scatter bars. The system emulates the imaging process by adjusting the lithography-equivalent illumination and imaging conditions of 193nm wafer steppers, including circular, annular, dipole, and quadrupole illumination modes. The AIMS fab 193 allows a rapid prediction of the wafer printability of critical mask features, including dense patterns and contacts, defects, or repairs, by acquiring through-focus image stacks with a CCD camera followed by quantitative image analysis. Moreover, the technology can be readily applied to directly determine the process window of a given mask under stepper imaging conditions. Since data acquisition is performed electronically, AIMS in many applications replaces the need for costly and time-consuming wafer prints using a wafer stepper/scanner followed by CD-SEM resist or wafer analysis. The second-generation AIMS fab 193 system is designed for 193nm lithography mask printing predictability down to the 65nm node. In addition to hardware improvements, new modular AIMS software is introduced that allows a fully automated operation mode: multiple pre-defined points can be visited and through-focus AIMS measurements executed automatically in a recipe-based mode. To increase the effectiveness of the automated operation mode, the throughput of the system in locating the area of interest and acquiring the through-focus images is increased by almost a factor of two compared with first-generation AIMS systems. In addition, a new software plug-in concept is realised for the tools. One new feature, "Global CD Map", has been successfully introduced, enabling automated investigation of global mask quality based on the local determination of

  14. Higher-order continuation for the determination of robot workspace boundaries

    NASA Astrophysics Data System (ADS)

    Hentz, Gauthier; Charpentier, Isabelle; Renaud, Pierre

    2016-02-01

    In the medical and surgical fields, robotics may be of great interest for safer and more accurate procedures. Space constraints for a robotic assistant are however strict. Therefore, roboticists study non-conventional mechanisms with advantageous size/workspace ratios. The determination of mechanism workspace, and primarily its boundaries, is thus of major importance. This Note builds on boundary equation definition, continuation and automatic differentiation to propose a general, accurate, fast and automated method for the determination of mechanism workspace. The method is illustrated with a planar RRR mechanism and a three-dimensional Orthoglide parallel mechanism.
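For mechanisms simple enough to admit a closed form, the workspace boundary can be written down directly; the continuation-based method of this Note is needed precisely when that is impossible. A Python sketch for a planar RRR arm with unlimited joints (link lengths assumed for illustration):

```python
def reachable(x, y, links=(0.3, 0.2, 0.1)):
    """Workspace membership test for a planar RRR arm with unlimited revolute
    joints: the workspace is an annulus between r_min and r_max."""
    l1, l2, l3 = links
    r = (x * x + y * y) ** 0.5
    r_max = l1 + l2 + l3                 # outer boundary: full extension
    r_min = max(0.0, l1 - l2 - l3)       # inner boundary (0 here: full disk)
    return r_min <= r <= r_max

print(reachable(0.55, 0.0))  # True: within the outer radius 0.6
print(reachable(0.70, 0.0))  # False: beyond full extension
```

With joint limits or non-planar architectures such as the Orthoglide, no such annulus formula exists, which motivates the boundary-equation continuation approach.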

  15. Features of the Upgraded Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) Software

    NASA Technical Reports Server (NTRS)

    Mason, Michelle L.; Rufer, Shann J.

    2016-01-01

    The Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) software is used at the NASA Langley Research Center to analyze global aeroheating data on wind tunnel models tested in the Langley Aerothermodynamics Laboratory. One-dimensional, semi-infinite heating data derived from IHEAT are used in the design of thermal protection systems for hypersonic vehicles that are exposed to severe aeroheating loads, such as reentry vehicles during descent and landing procedures. This software program was originally written in the PV-WAVE(Registered Trademark) programming language to analyze phosphor thermography data from the two-color, relative-intensity system developed at Langley. To increase the efficiency, functionality, and reliability of IHEAT, the program was migrated to MATLAB(Registered Trademark) syntax and compiled as a stand-alone executable file labeled version 4.0. New features of IHEAT 4.0 include options to perform diagnostic checks of the accuracy of the acquired data during a wind tunnel test, to extract data along a specified multi-segment line following a feature such as a leading edge or a streamline, and to batch process all of the temporal frame data from a wind tunnel run. Results from IHEAT 4.0 were compared on a pixel level to the output images from the legacy software to validate the program. The absolute differences between the heat transfer data output from the two programs were on the order of 10^(-5) to 10^(-7). IHEAT 4.0 replaces the PV-WAVE(Registered Trademark) version as the production software for aeroheating experiments conducted in the hypersonic facilities at NASA Langley.
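The pixel-level validation described above amounts to differencing the two programs' output arrays. A minimal sketch, using synthetic stand-in arrays since the real IHEAT heat-transfer maps are not available here:

```python
import numpy as np

# Hypothetical stand-in arrays for heat-transfer maps produced by the
# legacy and the rewritten code paths (the real data come from IHEAT).
rng = np.random.default_rng(0)
legacy = rng.random((480, 640))
migrated = legacy + rng.normal(scale=1e-6, size=legacy.shape)

# Pixel-level absolute difference, as used to validate IHEAT 4.0
# against the PV-WAVE version.
diff = np.abs(migrated - legacy)
print(diff.max() < 1e-4, diff.mean() < 1e-5)  # → True True
```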

  16. A complete software application for automatic registration of x-ray mammography and magnetic resonance images

    SciTech Connect

    Solves-Llorens, J. A.; Rupérez, M. J. Monserrat, C.; Lloret, M.

    2014-08-15

    Purpose: This work presents a complete and automatic software application to aid radiologists in breast cancer diagnosis. The application is a fully automated method that performs a complete registration of magnetic resonance (MR) images and x-ray (XR) images in both directions (from MR to XR and from XR to MR) and for both x-ray mammogram views, craniocaudal (CC) and mediolateral oblique (MLO). This new approach allows radiologists to mark points in the MR images and, without any manual intervention, provides their corresponding points in both types of XR mammograms and vice versa. Methods: The application automatically segments magnetic resonance images and x-ray images using the C-Means method and the Otsu method, respectively. It compresses the magnetic resonance images in both directions, CC and MLO, using a biomechanical model of the breast that separately captures the specific biomechanical behavior of each of its three tissues (skin, fat, and glandular tissue). It projects both compressions and registers them with the original XR images using affine transformations and nonrigid registration methods. Results: The application has been validated by two expert radiologists. This was carried out through a quantitative validation on 14 data sets in which the Euclidean distance between points marked by the radiologists and the corresponding points obtained by the application was measured. The results showed a mean error of 4.2 ± 1.9 mm for the MRI to CC registration, 4.8 ± 1.3 mm for the MRI to MLO registration, and 4.1 ± 1.3 mm for the CC and MLO to MRI registration. Conclusions: A complete software application that automatically registers XR and MR images of the breast has been implemented. The application permits radiologists to estimate the position of a lesion that is suspected of being a tumor in one imaging modality based on its position in another modality with a clinically acceptable error. The results show that the
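The Otsu method named above chooses the global threshold that maximizes between-class variance. A self-contained sketch on synthetic bimodal data, not the paper's implementation:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))

# Synthetic bimodal "image": dark background around 50, bright tissue around 200.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(50, 10, 5000), rng.normal(200, 10, 5000)])
img = np.clip(img, 0, 255)
t = otsu_threshold(img)
print(50 < t < 200)  # threshold falls between the two modes → True
```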

  17. Efficient 3D rendering for web-based medical imaging software: a proof of concept

    NASA Astrophysics Data System (ADS)

    Cantor-Rivera, Diego; Bartha, Robert; Peters, Terry

    2011-03-01

    Medical Imaging Software (MIS) found in research and in clinical practice, such as in Picture Archiving and Communication Systems (PACS) and Radiology Information Systems (RIS), has not been able to take full advantage of the Internet as a deployment platform. MIS is usually tightly coupled to algorithms that have substantial hardware and software requirements. Consequently, MIS is deployed on thick clients, which usually leads project managers to allocate more resources during the deployment phase of the application than would be needed if the application were deployed through a web interface. To minimize the costs associated with this scenario, many software providers use or develop plug-ins to give the delivery platform (the internet browser) the features to load, interact with, and analyze medical images. Nevertheless, no standard means of achieving this goal has succeeded so far. This paper presents a study of WebGL as an alternative to plug-in development for efficient rendering of 3D medical models and DICOM images. WebGL is a technology that enables the internet browser to access the local graphics hardware in a native fashion. Because it is based on OpenGL, a widely accepted graphics industry standard, WebGL is being implemented in most of the major commercial browsers. After a discussion of the details of the technology, a series of experiments is presented to determine the operational boundaries within which WebGL is adequate for MIS. A comparison with current alternatives is also addressed. Finally, conclusions and future work are discussed.

  18. A comparison of strain calculation using digital image correlation and finite element software

    NASA Astrophysics Data System (ADS)

    Iadicola, M.; Banerjee, D.

    2016-08-01

    Digital image correlation (DIC) data are being extensively used for many forming applications and for comparisons with finite element analysis (FEA) simulated results. The most challenging comparisons are often in the area of strain localizations just prior to material failure. While qualitative comparisons can be misleading, quantitative comparisons are difficult because of insufficient information about the type of strain output. In this work, strains computed from DIC displacements from a forming limit test are compared to those from three commercial FEA software packages. Quantitative differences in the calculated strains are assessed to determine whether the scale of variation seen between FEA- and DIC-calculated strains constitutes real behavior or merely calculation differences.
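Strains are typically obtained from DIC displacements by numerically differentiating the displacement field. A minimal small-strain sketch on a synthetic field (illustrative only, not the authors' processing chain):

```python
import numpy as np

# Synthetic displacement field on a regular grid: uniform 2% stretch in x
# plus a small localization band (a toy stand-in for DIC displacements).
x = np.linspace(0, 10, 101)
y = np.linspace(0, 10, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
u = 0.02 * X + 0.005 * np.exp(-((X - 5.0) ** 2))  # x-displacement
v = np.zeros_like(u)                               # y-displacement

# Small-strain tensor components from displacement gradients.
du_dx = np.gradient(u, x, axis=0)
dv_dy = np.gradient(v, y, axis=1)
du_dy = np.gradient(u, y, axis=1)
dv_dx = np.gradient(v, x, axis=0)
exx = du_dx
eyy = dv_dy
exy = 0.5 * (du_dy + dv_dx)

print(round(float(exx.mean()), 3))  # → 0.02 (the band term averages out)
```

Differences between DIC and FEA strains often come down to exactly this choice of strain definition and differentiation window, which is the ambiguity the paper examines.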

  19. Gemini planet imager integration to the Gemini South telescope software environment

    NASA Astrophysics Data System (ADS)

    Rantakyrö, Fredrik T.; Cardwell, Andrew; Chilcote, Jeffrey; Dunn, Jennifer; Goodsell, Stephen; Hibon, Pascale; Macintosh, Bruce; Quiroz, Carlos; Perrin, Marshall D.; Sadakuni, Naru; Saddlemyer, Leslie; Savransky, Dmitry; Serio, Andrew; Winge, Claudia; Galvez, Ramon; Gausachs, Gaston; Hardie, Kayla; Hartung, Markus; Luhrs, Javier; Poyneer, Lisa; Thomas, Sandrine

    2014-08-01

    The Gemini Planet Imager is an extreme AO instrument with an integral field spectrograph (IFS) operating in Y, J, H, and K bands. Both the Gemini telescope and the GPI instrument are very complex systems. Our goal is that the combined telescope and instrument system may be run by one observer operating the instrument, and one operator controlling the telescope and the acquisition of light to the instrument. This requires a smooth integration between the two systems and easily operated control interfaces. We discuss the definition of the software and hardware interfaces, their implementation and testing, and the integration of the instrument with the telescope environment.

  20. Two-Dimensional Gel Electrophoresis Image Analysis via Dedicated Software Packages.

    PubMed

    Maurer, Martin H

    2016-01-01

    Analyzing two-dimensional gel electrophoretic images is supported by a number of freely and commercially available software packages. Although each program has its own specific features, all follow certain standardized algorithms. The general steps are: (1) detecting and separating individual spots, (2) subtracting background, (3) creating a reference gel and (4) matching the spots to the reference gel, (5) modifying the reference gel, (6) normalizing the gel measurements for comparison, (7) calibrating against isoelectric point and molecular weight markers, and moreover, (8) constructing a database containing the measurement results and (9) comparing data by statistical and bioinformatic methods.
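Steps (1) and (2) above, spot detection and background subtraction, can be sketched on a synthetic gel image. This is a toy illustration, not the algorithm of any of the packages discussed:

```python
import numpy as np
from scipy import ndimage

# Synthetic "gel": two Gaussian spots on a smooth sloping background.
yy, xx = np.mgrid[0:100, 0:100]
background = 0.2 + 0.001 * xx
spots = np.exp(-((xx - 30)**2 + (yy - 40)**2) / 20) + \
        np.exp(-((xx - 70)**2 + (yy - 60)**2) / 20)
gel = background + spots

# Step (2), background subtraction: estimate the background with a large
# median filter (spots are small relative to the window), then subtract.
bg_est = ndimage.median_filter(gel, size=25)
residual = gel - bg_est

# Step (1), spot detection/separation: threshold and label connected blobs.
mask = residual > 0.5
labels, n_spots = ndimage.label(mask)
print(n_spots)  # → 2
```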

  1. Multithreaded real-time 3D image processing software architecture and implementation

    NASA Astrophysics Data System (ADS)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions, and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. The player provides user-interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located on the right image through the use of block matching. The difference in position between the corresponding regions in the left and right images is then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread, which calls CUDA functions. Image upsampling and shifting are performed in response to user zoom and pan. The player also includes a CPU display thread, which uses OpenGL rendering (quad buffers). This thread also gathers user input for digital zoom and pan and sends it to the processing thread.
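The block-matching step described above can be sketched as a sum-of-absolute-differences (SAD) search along a row. A toy CPU version with a synthetic stereo pair, not the player's CUDA implementation:

```python
import numpy as np

def block_match_row(left, right, y, x, block=5, max_disp=20):
    """Find the horizontal disparity of the block at (y, x) in `left`
    by minimizing SAD against `right` along the same row."""
    h = block // 2
    patch = left[y-h:y+h+1, x-h:x+h+1]
    best, best_d = np.inf, 0
    for d in range(0, max_disp + 1):
        if x - d - h < 0:          # candidate block would fall off the image
            break
        cand = right[y-h:y+h+1, x-d-h:x-d+h+1]
        sad = np.abs(patch - cand).sum()
        if sad < best:
            best, best_d = sad, d
    return best_d

# Synthetic stereo pair: the right image is the left shifted by 7 pixels.
rng = np.random.default_rng(2)
left = rng.random((60, 80))
right = np.roll(left, -7, axis=1)
print(block_match_row(left, right, y=30, x=40))  # → 7
```

Running this over many keypoints and histogramming the disparities gives the scene disparity range used for the convergence shift.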

  2. Review of free software tools for image analysis of fluorescence cell micrographs.

    PubMed

    Wiesmann, V; Franz, D; Held, C; Münzenmayer, C; Palmisano, R; Wittenberg, T

    2015-01-01

    An increasing number of free software tools have been made available for the evaluation of fluorescence cell micrographs. The main users are biologists and related life scientists with no or little knowledge of image processing. In this review, we give an overview of available tools and guidelines about which tools the users should use to segment fluorescence micrographs. We selected 15 free tools and divided them into stand-alone, Matlab-based, ImageJ-based, free demo versions of commercial tools and data sharing tools. The review consists of two parts: First, we developed a criteria catalogue and rated the tools regarding structural requirements, functionality (flexibility, segmentation and image processing filters) and usability (documentation, data management, usability and visualization). Second, we performed an image processing case study with four representative fluorescence micrograph segmentation tasks with figure-ground and cell separation. The tools display a wide range of functionality and usability. In the image processing case study, we were able to perform figure-ground separation in all micrographs using mainly thresholding. Cell separation was not possible with most of the tools, because cell separation methods are provided only by a subset of the tools and are difficult to parametrize and to use. Most important is that the usability matches the functionality of a tool. To be usable, specialized tools with less functionality need to fulfill less usability criteria, whereas multipurpose tools need a well-structured menu and intuitive graphical user interface.

  3. MIA - A free and open source software for gray scale medical image analysis

    PubMed Central

    2013-01-01

    Background Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time intensive and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task specific, and they don't provide a clear approach for shaping a new command line tool from a prototype shell script. Results The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell

  4. User's guide to the TCSTKF software library: a graphics library for emulation of TEKTRONIX display images in .TKF disk files

    SciTech Connect

    Gray, W.H.; Burris, R.D.

    1980-11-01

    This report documents the user-level subroutines of the TCSTKF software library for the Oak Ridge National Laboratory (ORNL) Fusion Energy Division (FED) DECsystem-10. The TCSTKF graphics library was written and is maintained so that large production computer programs can access a small, efficient graphics library and produce device-independent graphics files. This library is presented as an alternative to larger graphics software libraries, such as DISSPLA. The main external difference between this software and the TCSTEK software library is that the TCSTKF software creates .TKF-formatted intermediate plot data files, as well as producing display images on the screen of a Tektronix 4000-series storage tube terminal. These intermediate plot data files can subsequently be postprocessed into report-quality images on a variety of other graphics devices at ORNL.

  5. BAMS2 Workspace: a comprehensive and versatile neuroinformatic platform for collating and processing neuroanatomical connections

    PubMed Central

    Bota, Mihail; Talpalaru, Ştefan; Hintiryan, Houri; Dong, Hong-Wei; Swanson, Larry W.

    2014-01-01

    We present in this paper a novel neuroinformatic platform, the BAMS2 Workspace (http://brancusi1.usc.edu), designed for storing and processing information about gray matter region axonal connections. This de novo constructed module allows registered users to directly collate their data by using a simple and versatile visual interface. It also allows construction and analysis of sets of connections associated with gray matter region nomenclatures from any designated species. The Workspace includes a set of tools allowing the display of data in matrix and networks formats, and the uploading of processed information in visual, PDF, CSV, and Excel formats. Finally, the Workspace can be accessed anonymously by third party systems to create individualized connectivity networks. All features of the BAMS2 Workspace are described in detail, and are demonstrated with connectivity reports collated in BAMS and associated with the rat sensory-motor cortex, medial frontal cortex, and amygdalar regions. PMID:24668342

  6. BAMS2 workspace: a comprehensive and versatile neuroinformatic platform for collating and processing neuroanatomical connections.

    PubMed

    Bota, Mihail; Talpalaru, Stefan; Hintiryan, Houri; Dong, Hong-Wei; Swanson, Larry W

    2014-10-01

    We describe a novel neuroinformatic platform, the BAMS2 Workspace (http://brancusi1.usc.edu), designed for storing and processing information on gray matter region axonal connections. This de novo constructed module allows registered users to collate their data directly by using a simple and versatile visual interface. It also allows construction and analysis of sets of connections associated with gray matter region nomenclatures from any designated species. The Workspace includes a set of tools allowing the display of data in matrix and networks formats and the uploading of processed information in visual, PDF, CSV, and Excel formats. Finally, the Workspace can be accessed anonymously by third-party systems to create individualized connectivity networks. All features of the BAMS2 Workspace are described in detail and are demonstrated with connectivity reports collated in BAMS and associated with the rat sensory-motor cortex, medial frontal cortex, and amygdalar regions. PMID:24668342

  7. Image processing in biodosimetry: A proposal of a generic free software platform.

    PubMed

    Dumpelmann, Matthias; Cadena da Matta, Mariel; Pereira de Lemos Pinto, Marcela Maria; de Salazar E Fernandes, Thiago; Borges da Silva, Edvane; Amaral, Ademir

    2015-08-01

    The scoring of chromosome aberrations is the most reliable biological method for evaluating individual exposure to ionizing radiation. However, microscopic analysis of human chromosome metaphases, generally employed to identify aberrations, mainly dicentrics (chromosomes with two centromeres), is a laborious task. This method is time consuming, and its application in biological dosimetry would be almost impossible in the case of large-scale radiation incidents. In this project, generic software was enhanced for automatic chromosome image processing from a framework originally developed for the European Union Framework V project Simbio for applications in the area of source localization from electroencephalographic signals. The platform's capability is demonstrated by a study comparing automatic segmentation strategies for chromosomes in microscopic images.

  8. Software-Assisted Depth Analysis of Optic Nerve Stereoscopic Images in Telemedicine.

    PubMed

    Xia, Tian; Patel, Shriji N; Szirth, Ben C; Kolomeyer, Anton M; Khouri, Albert S

    2016-01-01

    Background. Software-guided optic nerve assessment can assist in process automation and reduce interobserver disagreement. We tested depth analysis software (DAS) in assessing optic nerve cup-to-disc ratio (VCD) from stereoscopic optic nerve images (SONI) of normal eyes. Methods. In a prospective study, simultaneous SONI from normal subjects were collected during telemedicine screenings using a Kowa 3Wx nonmydriatic simultaneous stereoscopic retinal camera (Tokyo, Japan). VCD was determined from SONI pairs and proprietary pixel DAS (Kowa Inc., Tokyo, Japan) after disc and cup contour line placement. A nonstereoscopic VCD was determined using the right channel of a stereo pair. Mean, standard deviation, t-test, and the intraclass correlation coefficient (ICCC) were calculated. Results. The 32 patients had a mean age of 40 ± 14 years. Mean VCD on SONI was 0.36 ± 0.09, with DAS 0.38 ± 0.08, and with nonstereoscopic 0.29 ± 0.12. The difference between the stereoscopic and DAS-assisted measurements was not significant (p = 0.45). ICCC showed agreement between stereoscopic and software VCD assessment. The mean VCD difference was significant between nonstereoscopic and stereoscopic (p < 0.05) and between nonstereoscopic and DAS (p < 0.005) recordings. Conclusions. DAS successfully assessed SONI and showed a high degree of correlation to physician-determined stereoscopic VCD. PMID:27190507
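The paired comparison reported above (stereoscopic vs. DAS-assisted VCD) corresponds to a paired t-test. A sketch with hypothetical readings, illustrative values only and not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical paired cup-to-disc readings for illustration only (not the
# study's data): physician stereoscopic VCD vs. software-assisted VCD.
rng = np.random.default_rng(3)
stereo = rng.normal(0.36, 0.09, 32)
noise = rng.normal(0.0, 0.02, 32)
noise -= noise.mean()          # center so the paired difference averages to ~0
software = stereo + noise

# Paired t-test, mirroring the stereoscopic-vs-DAS comparison above.
t_stat, p_value = stats.ttest_rel(stereo, software)
print(p_value > 0.05)  # → True (no systematic difference by construction)
```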

  9. Software-Assisted Depth Analysis of Optic Nerve Stereoscopic Images in Telemedicine

    PubMed Central

    Xia, Tian; Patel, Shriji N.; Szirth, Ben C.

    2016-01-01

    Background. Software-guided optic nerve assessment can assist in process automation and reduce interobserver disagreement. We tested depth analysis software (DAS) in assessing optic nerve cup-to-disc ratio (VCD) from stereoscopic optic nerve images (SONI) of normal eyes. Methods. In a prospective study, simultaneous SONI from normal subjects were collected during telemedicine screenings using a Kowa 3Wx nonmydriatic simultaneous stereoscopic retinal camera (Tokyo, Japan). VCD was determined from SONI pairs and proprietary pixel DAS (Kowa Inc., Tokyo, Japan) after disc and cup contour line placement. A nonstereoscopic VCD was determined using the right channel of a stereo pair. Mean, standard deviation, t-test, and the intraclass correlation coefficient (ICCC) were calculated. Results. The 32 patients had a mean age of 40 ± 14 years. Mean VCD on SONI was 0.36 ± 0.09, with DAS 0.38 ± 0.08, and with nonstereoscopic 0.29 ± 0.12. The difference between the stereoscopic and DAS-assisted measurements was not significant (p = 0.45). ICCC showed agreement between stereoscopic and software VCD assessment. The mean VCD difference was significant between nonstereoscopic and stereoscopic (p < 0.05) and between nonstereoscopic and DAS (p < 0.005) recordings. Conclusions. DAS successfully assessed SONI and showed a high degree of correlation to physician-determined stereoscopic VCD. PMID:27190507

  10. User's Guide for the MapImage Reprojection Software Package, Version 1.01

    USGS Publications Warehouse

    Finn, Michael P.; Trent, Jason R.

    2004-01-01

    Scientists routinely accomplish small-scale geospatial modeling in the raster domain, using high-resolution datasets (such as 30-m data) for large parts of continents and low-resolution to high-resolution datasets for the entire globe. Recently, Usery and others (2003a) expanded on the previously limited empirical work with real geographic data by compiling and tabulating the accuracy of categorical areas in projected raster datasets of global extent. Geographers and applications programmers at the U.S. Geological Survey's (USGS) Mid-Continent Mapping Center (MCMC) undertook an effort to expand and evolve an internal USGS software package, MapImage, or mapimg, for raster map projection transformation (Usery and others, 2003a). Daniel R. Steinwand of Science Applications International Corporation, Earth Resources Observation Systems Data Center in Sioux Falls, S. Dak., originally developed mapimg for the USGS, basing it on the USGS's General Cartographic Transformation Package (GCTP). It operated as a command line program on the Unix operating system. Through efforts at MCMC, and in coordination with Mr. Steinwand, this program has been transformed from a command-line application into a software package with a graphical user interface for Windows, Linux, and Unix machines. Usery and others (2003b) pointed out that many commercial software packages do not use exact projection equations and that, even when exact projection equations are used, the software often produces errors and sometimes does not complete the transformation for specific projections, at specific resampling resolutions, and for specific singularities. Direct implementation of point-to-point transformation with appropriate functions yields the variety of projections available in these software packages, but implementation with data other than points requires specific adaptation of the equations or prior preparation of the data to allow the transformation to succeed. Additional
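Direct implementation of point-to-point transformation with exact projection equations, as discussed above, can be illustrated with the sinusoidal (equal-area) projection. A minimal forward/inverse sketch, not the mapimg/GCTP code (sphere radius chosen for illustration):

```python
import numpy as np

R = 6371007.0  # illustrative sphere radius in meters

def sinusoidal_forward(lon_deg, lat_deg, lon0_deg=0.0):
    """Exact point-to-point sinusoidal projection equations."""
    lon, lat, lon0 = np.radians([lon_deg, lat_deg, lon0_deg])
    x = R * (lon - lon0) * np.cos(lat)
    y = R * lat
    return x, y

def sinusoidal_inverse(x, y, lon0_deg=0.0):
    """Inverse equations; singular at the poles where cos(lat) = 0."""
    lat = y / R
    lon = np.radians(lon0_deg) + x / (R * np.cos(lat))
    return np.degrees(lon), np.degrees(lat)

# Round trip: forward then inverse must recover the input point.
x, y = sinusoidal_forward(-90.0, 38.6)
lon, lat = sinusoidal_inverse(x, y)
print(round(lon, 6), round(lat, 6))  # → -90.0 38.6
```

Applying such equations to raster cells rather than points is where the adaptation and resampling issues noted by Usery and others arise.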

  11. Software development for ACR-approved phantom-based nuclear medicine tomographic image quality control with cross-platform compatibility

    NASA Astrophysics Data System (ADS)

    Oh, Jungsu S.; Choi, Jae Min; Nam, Ki Pyo; Chae, Sun Young; Ryu, Jin-Sook; Moon, Dae Hyuk; Kim, Jae Seung

    2015-07-01

    Quality control and quality assurance (QC/QA) have been two of the most important issues in modern nuclear medicine (NM) imaging for both clinical practice and academic research. Whereas quantitative QC analysis software is common to modern positron emission tomography (PET) scanners, the QC of gamma cameras and/or single-photon-emission computed tomography (SPECT) scanners has not been sufficiently addressed. Although a thorough standard operating process (SOP) for mechanical and software maintenance may help the QC/QA of a gamma camera and SPECT-computed tomography (CT), no previous study has addressed a unified platform or process to decipher or analyze SPECT phantom images acquired from various scanners. In addition, few approaches have established cross-platform software to enable technologists and physicists to assess the variety of SPECT scanners from different manufacturers. To resolve these issues, we have developed Interactive Data Language (IDL)-based in-house software for cross-platform (in terms of both operating systems (OS) and manufacturers) analyses of the QC data on an ACR SPECT phantom, which is essential for assessing and assuring the tomographic image quality of SPECT. We applied our software to our routine quarterly QC of ACR SPECT phantom images acquired from a number of platforms (OS/manufacturers). Based on our experience, we suggest that our software can offer a unified platform that allows images acquired from various types of scanners to be analyzed with great precision and accuracy.

  12. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2003-01-01

    A hierarchical set of image segmentations is a set of several segmentations of the same image at different levels of detail, in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. described an approach for producing hierarchical segmentations (called HSEG) and gave a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient, recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software.
Results with Landsat TM data are included comparing RHSEG with classic
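The HSWO idea, repeatedly merging the most similar adjacent regions and recording each level of the hierarchy, can be illustrated with a toy 1-D analogue. This is a sketch of the principle, not the HSEG/RHSEG implementation:

```python
import numpy as np

def hswo_merge(values, n_final):
    """Toy 1-D hierarchical stepwise optimization: repeatedly merge the
    pair of adjacent regions whose means differ least, recording the
    number of regions at each level of the hierarchy."""
    regions = [[v] for v in values]          # one region per pixel
    hierarchy = [len(regions)]
    while len(regions) > n_final:
        means = [np.mean(r) for r in regions]
        diffs = [abs(means[i+1] - means[i]) for i in range(len(means) - 1)]
        i = int(np.argmin(diffs))            # best adjacent merge
        regions[i] = regions[i] + regions.pop(i + 1)
        hierarchy.append(len(regions))
    return regions, hierarchy

signal = [1.0, 1.25, 0.75, 5.0, 5.5, 4.5, 9.0, 9.0]
regions, levels = hswo_merge(signal, n_final=3)
print([float(np.mean(r)) for r in regions])  # → [1.0, 5.0, 9.0]
```

HSEG additionally allows merges between non-adjacent regions (spectral clustering) under a threshold, and RHSEG applies the same scheme recursively on image subdivisions.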

  13. A collaborative resource management workspace and project management application for data collection, analysis and visualization: OpenNRM

    NASA Astrophysics Data System (ADS)

    Osti, A.

    2013-12-01

    During the process of research and design for OpenNRM, we imagined a place where diverse groups of people and communities could effectively and efficiently collaborate to manage large-scale environmental problems and projects. Our research revealed the need to combine a variety of software components. Users can explore and analyze a topic while simultaneously developing stories and solving problems in a way that the end result is consumable by their colleagues and the general public. To do this, we brought together software modules that are typically separate: document and asset management, GIS and interactive mapping, wiki and information libraries, data catalogs and services, project management tools, and storytelling templates. These components, along with others, are supported by extensive data catalogs (NWIS, Storet, CDEC, Cuahsi), data analysis tools, and web services for a turn-key workspace that allows you to quickly build project communities and data stories. In this presentation we will show how these collaborative efforts are implemented and working for some of our clients, including the State of California's Sacramento-San Joaquin Bay-Delta and the San Joaquin River Basin. The case study will display the use of the OpenNRM workspace for real-time environmental conditions management, data visualization, project operations, environmental restoration, high-frequency monitoring, and data reporting. We will demonstrate how scientists and policy makers are working together to tell the story of this complicated and divisive system and how they are becoming better managers of that system. Using web services, we will show how OpenNRM was designed to let you build your own community while easily sharing data stories, project data, monitoring results, document libraries, interactive maps, and datasets with others. We will get into more technical detail by presenting how our data interpolation tools can show high frequency

  14. HARPS-N: software path from the observation block to the image

    NASA Astrophysics Data System (ADS)

    Sosnowska, D.; Lodi, M.; Gao, X.; Buchschacher, N.; Vick, A.; Guerra, J.; Gonzalez, M.; Kelly, D.; Lovis, C.; Pepe, F.; Molinari, E.; Cameron, A. C.; Latham, D.; Udry, S.

    2012-09-01

    HARPS-N is the twin of the HARPS (High Accuracy Radial velocity for Planetary Search) spectrograph operating at La Silla (Chile); it was recently installed on the TNG at the La Palma observatory and is used to follow up the "hot" candidates delivered by the Kepler satellite. HARPS-N is delivered with its own software that integrates completely with the TNG control system. Special care has been dedicated to developing tools that assist the astronomers during the whole process of taking images: from the observation schedule to the raw image acquisition. All these tools are presented in the paper. In order to provide a stable and reliable system, the software has been developed with concepts like failover and high availability in mind. HARPS-N is made of heterogeneous systems, from normal computers to real-time systems, which is why standard message queue middleware (ActiveMQ) was chosen to provide the communication between different processes. The path of operations, starting with the Observation Blocks and ending with the FITS frames, is fully automated and could allow, in the future, completely remote observing runs optimized for time and quality constraints.

  15. Upper extremity 3D reachable workspace analysis in dystrophinopathy using Kinect

    PubMed Central

    Han, Jay J.; Kurillo, Gregorij; Abresch, Richard T.; de Bie, Evan; Nicorici, Alina; Bajcsy, Ruzena

    2015-01-01

    Introduction: An innovative upper extremity 3D reachable workspace outcome measure, acquired using the Kinect sensor, is applied to Duchenne/Becker muscular dystrophy (DMD/BMD). The validity, sensitivity, and clinical meaningfulness of the novel outcome are examined. Methods: Upper extremity function assessment (Brooke scale, NeuroQOL questionnaire) and Kinect-based reachable workspace analyses were conducted in 43 individuals with dystrophinopathy (30 DMD, 13 BMD; ages 7–60) and 46 controls (ages 6–68). Results: The reachable workspace measure reliably captured a wide range of upper extremity impairments encountered in pediatric and adult, as well as ambulatory and non-ambulatory, individuals with dystrophinopathy. Reduced reachable workspaces were noted for the dystrophinopathy cohort compared to controls and correlated with Brooke grades. Additionally, progressive reduction in reachable workspace correlated directly with worsening ability to perform activities of daily living, as self-reported on the NeuroQOL. Discussion: This study demonstrates the utility and potential of the novel sensor-acquired reachable workspace outcome measure in dystrophinopathy. PMID:25597487

  16. Optimal Design of a 3-Leg 6-DOF Parallel Manipulator for a Specific Workspace

    NASA Astrophysics Data System (ADS)

    Fu, Jianxun; Gao, Feng

    2016-04-01

    Optimum design of a six-degree-of-freedom (DOF) parallel manipulator with three legs based upon a given workspace has seldom been studied. An optimal design method for a novel three-leg six-DOF parallel manipulator (TLPM) is presented. The mechanical structure of this robot is introduced; with this structure the kinematic constraint equations are decoupled. Analytical solutions of the forward kinematics are worked out, and one configuration of this robot, including the position and orientation of the end-effector, is graphically displayed. Then, on the basis of several extreme positions of the kinematic performances, the task workspace is given. An optimal design algorithm is introduced to find the smallest dimensional parameters of the proposed robot. Examples illustrate the design results, and a design stability index is introduced, which ensures that the robot remains a safe distance from the boundary of its actual workspace. Finally, a prototype of the robot is developed based on this method. This method can easily find appropriate kinematic parameters that size a robot with the smallest workspace enclosing a predefined task workspace. It improves the design efficiency, ensures that the robot has a small mechanical size yet possesses a large given workspace volume, and meets lightweight design requirements.
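
    The core sizing step, searching for the smallest dimensional parameter whose workspace still encloses every point of a predefined task workspace, can be sketched as a bisection over a scale parameter. The spherical reachability test below is a stand-in assumption for illustration only; the paper's actual test would use the TLPM kinematics and joint limits.

```python
import math

def reachable(point, leg_length):
    """Stand-in reachability test: a point is reachable if it lies within a
    sphere of radius leg_length (the real test would evaluate the TLPM
    inverse kinematics and joint limits)."""
    return math.dist(point, (0.0, 0.0, 0.0)) <= leg_length

def smallest_leg_length(task_points, lo=0.0, hi=10.0, tol=1e-6):
    """Bisection search for the smallest dimensional parameter whose
    workspace encloses every task-workspace sample point."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if all(reachable(p, mid) for p in task_points):
            hi = mid   # still encloses the task workspace: try smaller
        else:
            lo = mid   # too small: grow
    return hi

task = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (1.0, 1.0, 1.0)]
print(round(smallest_leg_length(task), 3))  # 2.0 (distance to the farthest task point)
```

    The bisection works because enclosure is monotone in the scale parameter: once a dimension is large enough to reach every task point, any larger dimension also is.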

  18. Image pixel guided tours: a software platform for non-destructive x-ray imaging

    NASA Astrophysics Data System (ADS)

    Lam, K. P.; Emery, R.

    2009-02-01

    Multivariate analysis seeks to describe the relationship between an arbitrary number of variables. To explore high-dimensional data sets, projections are often used for data visualisation to aid in discovering structure or patterns that lead to the formation of statistical hypotheses. The basic concept necessitates a systematic search for lower-dimensional representations of the data that might show interesting structure(s). Motivated by recent research on the Image Grand Tour (IGT), which can be adapted to view guided projections by using objective indexes that are capable of revealing latent structures of the data, this paper presents a signal processing perspective on constructing such indexes under the unifying exploratory frameworks of Independent Component Analysis (ICA) and Projection Pursuit (PP). Our investigation begins with an overview of dimension reduction techniques by means of orthogonal transforms, including the classical procedure of Principal Component Analysis (PCA), and extends to an application of the more powerful techniques of ICA in the context of our recent work on non-destructive testing technology by element-specific x-ray imaging.
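
    The dimension-reduction baseline the paper starts from, projecting centered data onto the directions of greatest variance, can be sketched as a small PCA via the singular value decomposition. This is a generic illustration, not the paper's guided-tour projection index.

```python
import numpy as np

def pca_project(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)            # center each variable
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T               # scores in the reduced space

rng = np.random.default_rng(0)
# 200 points that vary mostly along one direction in 3-D
X = rng.normal(size=(200, 1)) @ np.array([[3.0, 2.0, 1.0]]) \
    + 0.1 * rng.normal(size=(200, 3))
Z = pca_project(X, 1)
print(Z.shape)  # (200, 1)
```

    ICA and projection pursuit replace the variance criterion above with indexes (e.g. non-Gaussianity) chosen to reveal latent structure rather than spread.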

  19. Imaging C. elegans Embryos using an Epifluorescent Microscope and Open Source Software

    PubMed Central

    Verbrugghe, Koen J. C.; Chan, Raymond C.

    2011-01-01

    Cellular processes, such as chromosome assembly, segregation, and cytokinesis, are inherently dynamic. Time-lapse imaging of living cells, using fluorescent-labeled reporter proteins or differential interference contrast (DIC) microscopy, allows for the examination of the temporal progression of these dynamic events, which is otherwise inferred from analysis of fixed samples1,2. Moreover, the study of the developmental regulation of cellular processes necessitates conducting time-lapse experiments on an intact organism during development. The Caenorhabditis elegans embryo is light-transparent and has a rapid, invariant developmental program with a known cell lineage3, thus providing an ideal experimental model for studying questions in cell biology4,5 and development6-9. C. elegans is amenable to genetic manipulation by forward genetics (based on random mutagenesis10,11) and reverse genetics to target specific genes (based on RNAi-mediated interference and targeted mutagenesis12-15). In addition, transgenic animals can be readily created to express fluorescently tagged proteins or reporters16,17. These traits combine to make it easy to identify the genetic pathways regulating fundamental cellular and developmental processes in vivo18-21. In this protocol we present methods for live imaging of C. elegans embryos using DIC optics or GFP fluorescence on a compound epifluorescent microscope. We demonstrate the ease with which readily available microscopes, typically used for fixed-sample imaging, can also be applied for time-lapse analysis using open-source software to automate the imaging process. PMID:21490567
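
    The automation the protocol depends on reduces to a timed capture loop: trigger an exposure, then sleep off the remainder of the interval. The `capture_frame` callback below is a hypothetical stand-in for whatever the microscope-control software actually exposes.

```python
import time

def run_timelapse(capture_frame, interval_s, n_frames):
    """Call capture_frame once per interval; return the captured frames."""
    frames = []
    for i in range(n_frames):
        start = time.monotonic()
        frames.append(capture_frame(i))
        # sleep off whatever time remains in this interval
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, interval_s - elapsed))
    return frames

# Usage with a dummy capture callback (a real one would talk to the camera):
frames = run_timelapse(lambda i: f"frame_{i:03d}.tif", interval_s=0.01, n_frames=3)
print(frames)  # ['frame_000.tif', 'frame_001.tif', 'frame_002.tif']
```

    Subtracting the capture time from the sleep keeps the interval between frame starts constant, which matters when exposures are long relative to the interval.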

  20. Comprehensive, powerful, efficient, intuitive: a new software framework for clinical imaging applications

    NASA Astrophysics Data System (ADS)

    Augustine, Kurt E.; Holmes, David R., III; Hanson, Dennis P.; Robb, Richard A.

    2006-03-01

    One of the greatest challenges for a software engineer is to create a complex application that is comprehensive enough to be useful to a diverse set of users, yet focused enough for individual tasks to be carried out efficiently with minimal training. This "powerful yet simple" paradox is particularly prevalent in advanced medical imaging applications. Recent research in the Biomedical Imaging Resource (BIR) at Mayo Clinic has been directed toward development of an imaging application framework that provides powerful image visualization/analysis tools in an intuitive, easy-to-use interface. It is based on two concepts very familiar to physicians - Cases and Workflows. Each case is associated with a unique patient and a specific set of routine clinical tasks, or a workflow. Each workflow is comprised of an ordered set of general-purpose modules which can be re-used for each unique workflow. Clinicians help describe and design the workflows, and then are provided with an intuitive interface to both patient data and analysis tools. Since most of the individual steps are common to many different workflows, the use of general-purpose modules reduces development time and results in applications that are consistent, stable, and robust. While the development of individual modules may reflect years of research by imaging scientists, new customized workflows based on the new modules can be developed extremely fast. If a powerful, comprehensive application is difficult to learn and complicated to use, it will be unacceptable to most clinicians. Clinical image analysis tools must be intuitive and effective or they simply will not be used.
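
    The case/workflow pattern described, an ordered list of reusable general-purpose modules applied to one patient's case, can be sketched as simple function chaining. The module names and case fields below are invented for illustration; they are not the BIR framework's API.

```python
# Each module is a small reusable step: it takes the case state and returns it.
def load_volume(case):
    case["volume"] = f"{case['patient']}_ct.nii"
    return case

def segment(case):
    case["mask"] = case["volume"].replace(".nii", "_mask.nii")
    return case

def report(case):
    case["report"] = f"Report for {case['patient']}"
    return case

def run_workflow(case, modules):
    """Apply an ordered list of general-purpose modules to one case."""
    for module in modules:
        case = module(case)
    return case

# Different clinical workflows reuse the same modules in different orders.
liver_workflow = [load_volume, segment, report]
case = run_workflow({"patient": "case42"}, liver_workflow)
print(case["report"])  # Report for case42
```

    Because each module only reads and writes the shared case state, a new clinical workflow is just a new list, which is what makes customized workflows fast to assemble from validated parts.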

  1. New AIRS: The medical imaging software for segmentation and registration of elastic organs in SPECT/CT

    NASA Astrophysics Data System (ADS)

    Widita, R.; Kurniadi, R.; Darma, Y.; Perkasa, Y. S.; Trianti, N.

    2012-06-01

    We have successfully improved our software, Automated Image Registration and Segmentation (AIRS), to fuse CT and SPECT images of elastic organs. Segmentation and registration of elastic organs present many challenges: many artifacts can arise in SPECT/CT scans, and different organs and tissues have very similar gray levels, which consigns thresholding to limited utility. We have developed new software to solve the different registration and segmentation problems that arise in tomographic data sets. It will be demonstrated that the information obtained by SPECT/CT is more accurate in evaluating patients/objects than that obtained from either SPECT or CT alone. We used multi-modality registration, which is amenable to images produced by different modalities and having unclear boundaries between tissues. The segmentation components used in this software are region-growing algorithms, which have proven to be an effective approach to image segmentation. Our method is designed to perform with clinically acceptable speed, using accelerated (multiresolution) techniques.
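
    Region growing, the segmentation approach named above, starts from a seed pixel and absorbs connected neighbors whose intensity is close to the seed's. Below is a minimal 4-connected sketch, not the AIRS implementation (which adds multiresolution acceleration):

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from a seed pixel, accepting 4-connected neighbors
    whose intensity is within tol of the seed intensity."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

# A bright 2x2 blob on a dark background:
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
print(len(region_grow(img, (1, 1), tol=2)))  # 4
```

    Unlike global thresholding, the region stays connected to the seed, which is why the approach copes better with organs whose gray levels overlap elsewhere in the scan.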

  2. Software-based high-level synthesis design of FPGA beamformers for synthetic aperture imaging.

    PubMed

    Amaro, Joao; Yiu, Billy Y S; Falcao, Gabriel; Gomes, Marco A C; Yu, Alfred C H

    2015-05-01

    Field-programmable gate arrays (FPGAs) can potentially be configured as beamforming platforms for ultrasound imaging, but a long design time and skilled expertise in hardware programming are typically required. In this article, we present a novel approach to the efficient design of FPGA beamformers for synthetic aperture (SA) imaging via the use of software-based high-level synthesis techniques. Software kernels (coded in OpenCL) were first developed to handle SA beamforming operations stage-wise, and their corresponding FPGA logic circuitry was emulated through a high-level synthesis framework. After design space analysis, the fine-tuned OpenCL kernels were compiled into register transfer level descriptions to configure an FPGA as a beamformer module. The processing performance of this beamformer was assessed through a series of offline emulation experiments that sought to derive beamformed images from SA channel-domain raw data (40-MHz sampling rate, 12-bit resolution). With 128 channels, our FPGA-based SA beamformer can achieve 41 frames per second (fps) processing throughput (3.44 × 10^8 pixels per second for a frame size of 256 × 256 pixels) at 31.5 W power consumption (1.30 fps/W power efficiency). It utilized 86.9% of the FPGA fabric and operated at a 196.5 MHz clock frequency (after optimization). Based on these findings, we anticipate that FPGA and high-level synthesis can together foster rapid prototyping of real-time ultrasound processor modules at low power consumption budgets. PMID:25965680
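
    The underlying beamforming operation, synthetic-aperture delay-and-sum, sums each channel's RF trace at the round-trip delay from element to pixel and back. The NumPy sketch below shows the monostatic case as a plain software reference, not the OpenCL/FPGA kernels from the article; the array geometry and numbers are illustrative.

```python
import numpy as np

def das_sa_beamform(rf, elem_x, pix_x, pix_z, c, fs):
    """Monostatic synthetic-aperture delay-and-sum: for each pixel, sum each
    element's RF trace at the round-trip delay element -> pixel -> element."""
    img = np.zeros((len(pix_z), len(pix_x)))
    n_samples = rf.shape[1]
    for iz, z in enumerate(pix_z):
        for ix, x in enumerate(pix_x):
            dist = np.hypot(elem_x - x, z)               # element-to-pixel distance
            idx = np.round(2 * dist / c * fs).astype(int)  # round-trip sample index
            valid = idx < n_samples
            img[iz, ix] = rf[np.flatnonzero(valid), idx[valid]].sum()
    return img

# Synthesize echoes from one point scatterer and check the beamformer finds it.
c, fs = 1540.0, 40e6                      # speed of sound (m/s), sampling rate (Hz)
elem_x = np.linspace(-5e-3, 5e-3, 16)     # 16-element aperture
scat = (0.0, 20e-3)                       # scatterer at x = 0, z = 20 mm
rf = np.zeros((16, 4096))
d = np.hypot(elem_x - scat[0], scat[1])
rf[np.arange(16), np.round(2 * d / c * fs).astype(int)] = 1.0
img = das_sa_beamform(rf, elem_x, np.linspace(-2e-3, 2e-3, 5),
                      np.linspace(18e-3, 22e-3, 5), c, fs)
iz, ix = np.unravel_index(np.argmax(img), img.shape)
print(iz, ix)  # 2 2  (the grid center, i.e. the scatterer location)
```

    The per-pixel independence of this loop is what makes the operation map well onto parallel hardware such as FPGAs.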

  4. An image-based software tool for screening retinal fundus images using vascular morphology and network transport analysis

    NASA Astrophysics Data System (ADS)

    Clark, Richard D.; Dickrell, Daniel J.; Meadows, David L.

    2014-03-01

    As the number of digital retinal fundus images taken each year grows at an increasing rate, there exists a similarly increasing need for automatic eye disease detection through image-based analysis. A new method has been developed for classifying standard color fundus photographs into healthy and diseased categories. This classification was based on the calculated network fluid conductance, a function of the geometry and connectivity of the vascular segments. To evaluate the network resistance, the retinal vasculature was first manually separated from the background to ensure an accurate representation of the geometry and connectivity. The arterial and venous networks were then semi-automatically separated into two binary images. The connectivity of the arterial network was then determined through a series of morphological image operations. The network was composed of segments of vasculature and points of bifurcation, with each segment having characteristic geometric and fluid properties. Based on the connectivity and fluid resistance of each vascular segment, an arterial network flow conductance was calculated, which describes the ease with which blood can pass through a vascular system. In this work, 27 eyes (13 healthy and 14 diabetic) from patients roughly 65 years in age were evaluated using this methodology. Healthy arterial networks exhibited an average fluid conductance of 419 ± 89 μm3/mPa-s, while the average network fluid conductance of the diabetic set was 165 ± 87 μm3/mPa-s (p < 0.001). The results of this new image-based software demonstrated an ability to automatically, quantitatively, and efficiently screen diseased eyes from color fundus imagery.
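
    The network fluid conductance idea can be illustrated with Hagen-Poiseuille segment conductances combined in series and parallel. The vessel dimensions, tree topology, and blood viscosity below are invented for illustration; the paper's networks are derived from the segmented vasculature.

```python
import math

def poiseuille_conductance(radius_um, length_um, viscosity_mpa_s=3.0):
    """Hagen-Poiseuille conductance of one vessel segment,
    g = pi * r^4 / (8 * mu * L), in um^3/mPa-s (the units quoted above)."""
    return math.pi * radius_um**4 / (8.0 * viscosity_mpa_s * length_um)

def series(*g):
    """Segments in series: resistances add, so conductances combine harmonically."""
    return 1.0 / sum(1.0 / gi for gi in g)

def parallel(*g):
    """Parallel branches: conductances add directly."""
    return sum(g)

# A toy arterial tree: one trunk feeding two equal daughter branches.
trunk = poiseuille_conductance(radius_um=50, length_um=1000)
daughter = poiseuille_conductance(radius_um=35, length_um=800)
network = series(trunk, parallel(daughter, daughter))
print(round(network, 1))  # 306.9
```

    The fourth-power dependence on radius is why even modest vessel narrowing produces large drops in network conductance, consistent with the healthy-versus-diabetic gap reported above.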

  5. Real-time telemedicine using shared three-dimensional workspaces over ATM

    NASA Astrophysics Data System (ADS)

    Cahoon, Peter; Forsey, David R.; Hutchison, Susan

    1999-03-01

    During the past five years a high-speed ATM network has been developed at UBC that provides a campus testbed, a local testbed to the hospitals, and a national testbed between here and the BADLAB in Ottawa. This testbed has been developed to combine a commercial shared audio/video/whiteboard environment with a shared interactive 3-dimensional solid model. This solid model ranges from a skull reconstructed from a CT scan, with muscles and an overlying skin, to a model of the ventricle system of the human brain. Typical interactions among surgeon, radiologist, and modeler consist of having image slices of the original scan shared by all, with the ability to adjust the surface of the model to conform to each individual's perception of what the final object should look like. The purpose of this interaction can range from forensic reconstruction from partial remains to pre-maxillofacial surgery. A joint project with the forensic unit of the R.C.M.P. in Ottawa using the BADLAB is now testing this methodology on a real case, beginning with a CT scan of partial remains. A second study, under way with the Department of Maxillofacial Reconstruction at Dalhousie University in Halifax, Nova Scotia, concerns a subject who is about to undergo orthognathic surgery, in particular a mandibular advancement. This subject has been MRI-scanned, a solid model of the mandible constructed, and the virtual surgery performed on the model. This model and the procedure have been discussed and modified by the modeler and the maxillofacial specialist using these shared workspaces. The procedure will be repeated after the actual surgery to verify the modeled procedure. The advantage of this technique is that none of the specialists need be in the same room, or even the same city. Given the scarcity of time and specialists, this methodology shows great promise.
    In November of last year a shared live demonstration of this facial modeler was done between Vancouver and Dalhousie University in

  6. Digital map and situation surface: a team-oriented multidisplay workspace for network enabled situation analysis

    NASA Astrophysics Data System (ADS)

    Peinsipp-Byma, E.; Geisler, Jürgen; Bader, Thomas

    2009-05-01

    System concepts for network-enabled, image-based ISR (intelligence, surveillance, reconnaissance) are the major mission of Fraunhofer IITB's applied research in the area of defence and security solutions. For TechDemo08, part of the NATO CNAD POW Defence against Terrorism, Fraunhofer IITB advanced a new multi-display concept to handle the sheer amount and high complexity of ISR data acquired by networked, distributed surveillance systems, with the objective of supporting the generation of a common situation picture. The amount and complexity of ISR data demand an innovative man-machine interface concept for the humans who must deal with them. IITB's concept is the Digital Map & Situation Surface. This concept offers the user a coherent multi-display environment combining a horizontal surface for the situation overview from the bird's-eye view, an attached vertical display for collateral information, and so-called fovea-tablets as personalized magic lenses for obtaining highly resolved, role-specific information about a focused area of interest and interacting with it. In the context of TechDemo08 the Digital Map & Situation Surface served as a workspace for team-based situation visualization and analysis. Multiple sea- and landside surveillance components were connected to the system.

  7. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

    Spacecraft telemetry rates have steadily increased over the last decade, presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Satellite (GOES-8) image processing application. Although large supercomputer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and a synergy of optimized software algorithms and re-configurable computing (RC) hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior, inexpensive performance for a chosen application on the ground station or on board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly for the availability of low-cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations, before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable, beginning with the methodology described in [1] and [2] and implementing a novel methodology for this specific application [6].
The PC-RC interface bandwidth problem for the

  8. JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age (Invited)

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Dimitoglou, G.; Langenberg, M.; Pagel, S.; Dau, A.; Nuhn, M.; Garcia Ortiz, J. P.; Dietert, H.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2010-12-01

    The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse, and analyze them. With such a staggering data volume, the data are bound to be accessible only from a few repositories, and users will have to deal with data sets that are effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing, and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community.
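
    JPEG 2000's resolution levels come from a wavelet decomposition. The toy pyramid below illustrates only the multi-resolution idea (each level roughly halves resolution, so a client can request just a coarse overview) using 2x2 block averaging rather than the actual wavelet transform or codestream.

```python
import numpy as np

def build_pyramid(image, levels):
    """Successively halve resolution by 2x2 block averaging; coarse level last."""
    pyramid = [image]
    for _ in range(levels):
        h, w = pyramid[-1].shape
        blocks = pyramid[-1][:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
        pyramid.append(blocks.mean(axis=(1, 3)))
    return pyramid

full = np.arange(64, dtype=float).reshape(8, 8)
pyr = build_pyramid(full, levels=2)
print([p.shape for p in pyr])  # [(8, 8), (4, 4), (2, 2)]
```

    A browser needing only an overview fetches just the coarse level; full resolution is streamed only for the region the user zooms into, which is what keeps bandwidth penalties low.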

  9. Measuring the area of tear film break-up by image analysis software

    NASA Astrophysics Data System (ADS)

    Pena-Verdeal, Hugo; García-Resúa, Carlos; Ramos, Lucía.; Mosquera, Antonio; Yebra-Pimentel, Eva; Giráldez, María. Jesús

    2013-11-01

    The tear film breakup time (BUT) test examines only the first break in the tear film; subsequent tear film events are not monitored. We present a method of measuring the area of breakup after the appearance of the first breakup by using open-source software. Furthermore, the speed of the rupture was determined. 84 subjects participated in the study. A 2 μl volume of 2% sodium fluorescein was instilled using a micropipette. The subject was seated behind a slit lamp using a cobalt blue filter together with a Wratten 12 yellow filter. The tear film was then recorded by a camera attached to the slit lamp. Four frames of each video were extracted: the first rupture (BUT_0), breakup after 1 second (BUT_1), breakup after 2 seconds (BUT_2), and breakup before the last blink (BUT_F). Open-source Java-based measurement software (NIH ImageJ) was used to measure the number of pixels in areas of breakup. These areas were divided by the area of exposed cornea to obtain the percentage of rupture. Instantaneous breakup speed for second 1 was calculated as the difference BUT_1 - BUT_0, whereas the instantaneous speed for second 2 was BUT_2 - BUT_1. The mean areas of breakup obtained were: BUT_0 = 0.26%, BUT_1 = 0.48%, BUT_2 = 0.79%, and BUT_F = 1.61%. Breakup speed was 0.22% area/sec for second 1 and 0.31% area/sec for second 2, showing a statistical difference between them (p = 0.007). Post-BUT analysis may be easily monitored with the aid of this software.
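
    The area measurement described, counting breakup pixels and dividing by the exposed-cornea area, can be sketched in a few lines. Taking breakup as fluorescein pixels darker than a threshold is an assumption made here for illustration; the study measured the outlined areas in ImageJ.

```python
import numpy as np

def breakup_percentage(frame, cornea_mask, dark_threshold):
    """Percentage of the exposed cornea occupied by tear-film breakup,
    taking breakup as fluorescein pixels darker than dark_threshold."""
    breakup = (frame < dark_threshold) & cornea_mask
    return 100.0 * breakup.sum() / cornea_mask.sum()

# Toy 4x4 fluorescein frame: the cornea covers all pixels, one dark breakup spot.
frame = np.array([[200, 200, 200, 200],
                  [200,  40, 200, 200],
                  [200, 200, 200, 200],
                  [200, 200, 200, 200]])
cornea = np.ones_like(frame, dtype=bool)
print(breakup_percentage(frame, cornea, dark_threshold=100))  # 6.25
```

    Differencing this percentage between consecutive frames gives the instantaneous breakup speed used in the study.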

  10. Quantitative Neuroimaging Software for Clinical Assessment of Hippocampal Volumes on MR Imaging

    PubMed Central

    Ahdidan, Jamila; Raji, Cyrus A.; DeYoe, Edgar A.; Mathis, Jedidiah; Noe, Karsten Ø.; Rimestad, Jens; Kjeldsen, Thomas K.; Mosegaard, Jesper; Becker, James T.; Lopez, Oscar

    2015-01-01

    Background: Multiple neurological disorders, including Alzheimer's disease (AD), mesial temporal sclerosis, and mild traumatic brain injury, manifest with volume loss on brain MRI. Subtle volume loss is particularly seen early in AD. While prior research has demonstrated the value of this additional information from quantitative neuroimaging, very few applications have been approved for clinical use. Here we describe a US FDA-cleared software program, Neuroreader™, for assessment of clinical hippocampal volume on brain MRI. Objective: To present the validation of hippocampal volumetrics on a clinical software program. Method: Subjects were drawn (n = 99) from the Alzheimer's Disease Neuroimaging Initiative study. Volumetric brain MR imaging was acquired in both 1.5 T (n = 59) and 3.0 T (n = 40) scanners in participants with manual hippocampal segmentation. Fully automated hippocampal segmentation and measurement was done using a multiple-atlas approach. The Dice Similarity Coefficient (DSC) measured the level of spatial overlap between Neuroreader™ and gold-standard manual segmentation, from 0 to 1 with 0 denoting no overlap and 1 representing complete agreement. DSC comparisons between 1.5 T and 3.0 T scanners were done using standard independent-samples t-tests. Results: In the bilateral hippocampus, mean DSC was 0.87 with a range of 0.78–0.91 (right hippocampus) and 0.76–0.91 (left hippocampus). Automated segmentation agreement with manual segmentation was essentially equivalent at 1.5 T (DSC = 0.879) versus 3.0 T (DSC = 0.872). Conclusion: This work provides a description and validation of a software program that can be applied in measuring hippocampal volume, a biomarker that is frequently abnormal in AD and other neurological disorders. PMID:26484924
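
    The Dice Similarity Coefficient used for validation is straightforward to compute from two binary masks; the toy masks below are invented for illustration.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto   = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])   # automated segmentation
manual = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])   # manual gold standard
print(round(dice(auto, manual), 3))  # 0.857
```

    Because the numerator counts the overlap twice while the denominator counts both masks, DSC penalizes over- and under-segmentation symmetrically.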

  11. Digital mapping of side-scan sonar data with the Woods Hole Image Processing System software

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    Since 1985, the Branch of Atlantic Marine Geology has been involved in collecting, processing, and digitally mosaicking high- and low-resolution sidescan sonar data. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. Recent development of a UNIX-based image-processing software system includes a series of task-specific programs for pre-processing sidescan sonar data. To extend the capabilities of the UNIX-based programs, digital mapping techniques have been developed. This report describes the initial development of an automated digital mapping procedure. Included are a description of the programs and steps required to complete the digital mosaicking on a UNIX-based computer system, and a comparison of techniques that the user may wish to select.

  12. A software platform for the comparative analysis of electroanatomic and imaging data including conduction velocity mapping.

    PubMed

    Cantwell, Chris D; Roney, Caroline H; Ali, Rheeda L; Qureshi, Norman A; Lim, Phang Boon; Peters, Nicholas S

    2014-01-01

    Electroanatomic mapping systems collect increasingly large quantities of spatially distributed electrical data which may potentially be further scrutinized post-operatively to expose mechanistic properties which sustain and perpetuate atrial fibrillation. We describe a modular software platform, developed to post-process and rapidly analyse data exported from electroanatomic mapping systems using a range of existing and novel algorithms. Imaging data highlighting regions of scar can also be overlaid for comparison. In particular, we describe the conduction velocity (CV) mapping algorithm used to highlight wavefront behaviour. CV was found to be particularly sensitive to the spatial distribution of the triangulation points and corresponding activation times. A set of geometric conditions was devised for selecting suitable triangulations of the electrogram set for generating CV maps.
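
    A minimal version of the CV estimate, under the common plane-fitting approach (fit a linear activation-time surface over one triangle of electrodes, then CV = 1/|grad t|), can be sketched as follows. The units and the plane-fit formulation are assumptions here; the paper's algorithm additionally applies geometric conditions for selecting suitable triangles.

```python
import numpy as np

def triangle_cv(points, times):
    """Conduction velocity from one triangle of electrode positions (mm)
    and activation times (ms): fit a plane t(x, y), then CV = 1 / |grad t|."""
    (x1, y1), (x2, y2), (x3, y3) = points
    t1, t2, t3 = times
    A = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]])
    grad = np.linalg.solve(A, np.array([t2 - t1, t3 - t1]))
    return 1.0 / np.linalg.norm(grad)   # mm/ms, i.e. m/s

# Plane wave travelling along +x at 0.5 mm/ms, so t = x / 0.5:
pts = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
ts = [0.0, 8.0, 0.0]
print(triangle_cv(pts, ts))  # 0.5
```

    The matrix A becomes ill-conditioned for thin, elongated triangles, which illustrates why the estimate is sensitive to the spatial distribution of the triangulation points.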

  13. Integration of XNAT/PACS, DICOM, and research software for automated multi-modal image analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yurui; Burns, Scott S.; Lauzon, Carolyn B.; Fong, Andrew E.; James, Terry A.; Lubar, Joel F.; Thatcher, Robert W.; Twillie, David A.; Wirt, Michael D.; Zola, Marc A.; Logan, Bret W.; Anderson, Adam W.; Landman, Bennett A.

    2013-03-01

    Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and the eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation, and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.

  14. SU-E-J-264: Comparison of Two Commercially Available Software Platforms for Deformable Image Registration

    SciTech Connect

    Tuohy, R; Stathakis, S; Mavroidis, P; Bosse, C; Papanikolaou, N

    2014-06-01

    Purpose: To evaluate and compare the deformable image registration algorithms available in Velocity (Velocity Medical Solutions, Atlanta, GA) and RayStation (RaySearch Americas, Inc., Garden City, NY). Methods: Cone beam CTs (CBCTs) from each treatment fraction of ten consecutive patients were collected. The CBCTs, along with the simulation CT, were exported to the Velocity and RayStation software. Each CBCT was registered to the simulation CT using deformable image registration, and the resulting deformation vector matrix was generated. Each registration was visually inspected by a physicist and the prescribing physician. The volumes of the critical organs were calculated for each deformed CT and used for comparison. Results: The resulting deformable registrations revealed differences between the two algorithms. These differences were realized when the organs at risk were contoured on each deformed CBCT. Volume differences on the order of 10±30% were observed for the bladder, 17±21% for the rectum and 16±10% for the sigmoid. The prostate and PTV volume differences were on the order of 3±5%. The volumetric differences observed had a corresponding impact on the DVHs of all organs at risk. Differences of 8–10% in the mean dose were observed for all organs above. Conclusion: Deformable registration is a powerful tool that aids in the definition of critical structures and is often used for the evaluation of daily dose delivered to the patient. It should be noted that extended QA should be performed before clinical implementation of the software, and users should be aware of the advantages and limitations of the methods.
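    The organ-volume comparison above reduces to simple arithmetic once each structure is a binary voxel mask; a minimal sketch (voxel dimensions, mask shapes and function names are invented for illustration):

```python
import numpy as np

def structure_volume_cc(mask, voxel_mm=(1.0, 1.0, 3.0)):
    """Volume of a binary structure mask in cubic centimetres."""
    voxel_cc = np.prod(voxel_mm) / 1000.0
    return mask.sum() * voxel_cc

def percent_volume_difference(vol_a, vol_b):
    """Signed percent difference of B relative to A, as used to
    compare an organ contoured on two deformed image sets."""
    return 100.0 * (vol_b - vol_a) / vol_a

# Two toy "bladder" masks that differ by one slice:
a = np.zeros((10, 10, 10), dtype=bool); a[2:8, 2:8, 2:8] = True
b = np.zeros((10, 10, 10), dtype=bool); b[2:8, 2:8, 2:9] = True
print(round(percent_volume_difference(structure_volume_cc(a),
                                      structure_volume_cc(b)), 1))  # → 16.7
```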

  15. Comparison between three methods to value lower tear meniscus measured by image software

    NASA Astrophysics Data System (ADS)

    García-Resúa, Carlos; Pena-Verdeal, Hugo; Lira, Madalena; Oliveira, M. Elisabete Real; Giráldez, María. Jesús; Yebra-Pimentel, Eva

    2013-11-01

    To measure different parameters of lower tear meniscus height (TMH) by using photography with open-source measurement software. TMH was measured from the lower eyelid to the top of the meniscus (absolute TMH) and to the brightest meniscus reflex (reflex TMH). 121 young healthy subjects were included in the study. The lower tear meniscus was videotaped by a digital camera attached to a slit lamp. Three videos were recorded in the central meniscus portion under three different methods: slit lamp without fluorescein instillation, slit lamp with fluorescein instillation, and Tearscope™ without fluorescein instillation. A masked observer then obtained an image from each video and measured TMH using open-source measurement software based on Java (NIH ImageJ). Absolute central (TMH-CA), absolute with fluorescein (TMH-F) and absolute using the Tearscope (TMH-Tc) were compared with each other, as were reflex central (TMH-CR) and reflex Tearscope (TMH-TcR). Mean ± S.D. values of TMH-CA, TMH-CR, TMH-F, TMH-Tc and TMH-TcR were 0.209 ± 0.049, 0.139 ± 0.031, 0.222 ± 0.058, 0.175 ± 0.045 and 0.109 ± 0.029 mm, respectively. Paired t-tests were performed for the pairs TMH-CA - TMH-CR, TMH-CA - TMH-F, TMH-CA - TMH-Tc, TMH-F - TMH-Tc, TMH-Tc - TMH-TcR and TMH-CR - TMH-TcR. In all cases a significant difference was found between the paired variables (all p < 0.008). This study demonstrated a useful tool for objectively measuring TMH by photography. Because the parameters differ, eye care professionals should use the same TMH parameter across follow-up visits.
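    The paired comparison used above can be sketched with SciPy's paired t-test on synthetic data whose means roughly match the reported TMH-CA and TMH-CR values (the data, the offset and the noise level are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 121  # subjects, as in the study

# Synthetic absolute central TMH (mm) and a reflex TMH that sits lower:
tmh_ca = rng.normal(0.209, 0.049, n)
tmh_cr = tmh_ca - 0.070 + rng.normal(0.0, 0.010, n)

# Paired t-test, since both measurements come from the same eye:
t_stat, p_value = stats.ttest_rel(tmh_ca, tmh_cr)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```

    With six pairwise comparisons, the p < 0.008 criterion above corresponds to a Bonferroni-style correction of the usual 0.05 level.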

  16. Reliability evaluation of I-123 ADAM SPECT imaging using SPM software and AAL ROI methods

    NASA Astrophysics Data System (ADS)

    Yang, Bang-Hung; Tsai, Sung-Yi; Wang, Shyh-Jen; Su, Tung-Ping; Chou, Yuan-Hwa; Chen, Chia-Chieh; Chen, Jyh-Cheng

    2011-08-01

    Serotonin levels are regulated by the serotonin transporter (SERT), a decisive protein in the serotonin neurotransmission system. Many psychiatric disorders and therapies are also related to the concentration of cerebral serotonin. I-123 ADAM is a novel radiopharmaceutical for imaging SERT in the brain. The aim of this study was to measure the reliability of SERT densities in healthy volunteers by the automated anatomical labeling (AAL) method. Furthermore, we also used statistical parametric mapping (SPM) in a voxel-by-voxel analysis to find differences in cortex between test and retest I-123 ADAM single photon emission computed tomography (SPECT) images. Twenty-one healthy volunteers were scanned twice with SPECT at 4 h after intravenous administration of 185 MBq of 123I-ADAM. The image matrix size was 128×128 and the pixel size was 3.9 mm. All images were obtained through a filtered back-projection (FBP) reconstruction algorithm. Region of interest (ROI) definition was performed based on the AAL brain template in the PMOD version 2.95 software package. ROI demarcations were placed on the midbrain, pons, striatum, and cerebellum. All images were spatially normalized to the SPECT MNI (Montreal Neurological Institute) templates supplied with SPM2, and each image was transformed into standard stereotactic space, matched to the Talairach and Tournoux atlas. Differences across scans were then statistically estimated in a voxel-by-voxel analysis using a paired t-test (population main effect: 2 cond's, 1 scan/cond.), which was applied to compare the concentration of SERT between the test and retest cerebral scans. The average specific uptake ratio (SUR: target/cerebellum-1) of 123I-ADAM binding to SERT was 1.78±0.27 in the midbrain, 1.21±0.53 in the pons, and 0.79±0.13 in the striatum. The Cronbach's α of the intra-class correlation coefficient (ICC) was 0.92. In addition, there was no significant statistical finding in the cerebral area using SPM2 analysis. This finding might help us
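    The specific uptake ratio defined above (SUR = target/cerebellum − 1) is a one-line computation; a minimal sketch with made-up count values chosen to reproduce the reported midbrain SUR:

```python
import numpy as np

def specific_uptake_ratio(target_counts, cerebellum_counts):
    """SUR = mean(target)/mean(cerebellum) - 1, with the cerebellum
    serving as the non-specific reference region."""
    return np.mean(target_counts) / np.mean(cerebellum_counts) - 1.0

# Hypothetical ROI count samples (not the study's data):
midbrain = np.array([556.0, 560.0, 552.0])
cerebellum = np.array([200.0, 200.0, 200.0])
print(round(specific_uptake_ratio(midbrain, cerebellum), 2))  # → 1.78
```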

  17. A flexible software architecture for scalable real-time image and video processing applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in hardware platforms make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route messages from publishers to the subscribers interested in a particular type of message. The application layer provides a repository of reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
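    The topic-based publish/subscribe routing described for the messaging layer can be sketched in a few lines; this is a generic single-threaded illustration, not the paper's implementation:

```python
from collections import defaultdict

class MessageBus:
    """Minimal topic-based publish/subscribe hub: subscribers register
    a callback for a topic; publishers never know who is listening."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Route the message only to subscribers of this topic.
        for callback in self._subscribers[topic]:
            callback(message)

bus = MessageBus()
frames = []
bus.subscribe("camera/frames", frames.append)  # e.g. a visualization module
bus.publish("camera/frames", "frame-0001")
bus.publish("stats/fps", 30)                   # no subscriber: silently dropped
print(frames)  # → ['frame-0001']
```

    Decoupling publishers from subscribers this way is what lets acquisition, visualization and processing modules be recombined without imposing a fixed pipeline.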

  18. Free digital image analysis software helps to resolve equivocal scores in HER2 immunohistochemistry.

    PubMed

    Helin, Henrik O; Tuominen, Vilppu J; Ylinen, Onni; Helin, Heikki J; Isola, Jorma

    2016-02-01

    Evaluation of human epidermal growth factor receptor 2 (HER2) immunohistochemistry (IHC) is subject to interobserver variation and lack of reproducibility. Digital image analysis (DIA) has been shown to improve the consistency and accuracy of the evaluation and its use is encouraged in current testing guidelines. We studied whether digital image analysis using a free software application (ImmunoMembrane) can assist in interpreting HER2 IHC in equivocal 2+ cases. We also compared digital photomicrographs with whole-slide images (WSI) as material for ImmunoMembrane DIA. We stained 750 surgical resection specimens of invasive breast cancers immunohistochemically for HER2 and analysed staining with ImmunoMembrane. The ImmunoMembrane DIA scores were compared with the originally responsible pathologists' visual scores, a researcher's visual scores and in situ hybridisation (ISH) results. The originally responsible pathologists reported 9.1% positive 3+ IHC scores, the researcher 8.4% and ImmunoMembrane 9.5%. Equivocal 2+ scores were 34% for the pathologists, 43.7% for the researcher and 10.1% for ImmunoMembrane. Negative 0/1+ scores were 57.6% for the pathologists, 46.8% for the researcher and 80.8% for ImmunoMembrane. Six cases were false positives (classified as 3+ by ImmunoMembrane but negative by ISH) and six were false negatives (0/1+ by IHC but positive by ISH). ImmunoMembrane DIA using digital photomicrographs and WSI showed almost perfect agreement. In conclusion, digital image analysis by ImmunoMembrane can help to resolve a majority of equivocal 2+ cases in HER2 IHC, which reduces the need for ISH testing.

  19. Variability and accuracy of different software packages for dynamic susceptibility contrast magnetic resonance imaging for distinguishing glioblastoma progression from pseudoprogression

    PubMed Central

    Kelm, Zachary S.; Korfiatis, Panagiotis D.; Lingineni, Ravi K.; Daniels, John R.; Buckner, Jan C.; Lachance, Daniel H.; Parney, Ian F.; Carter, Rickey E.; Erickson, Bradley J.

    2015-01-01

    Abstract. Determining whether glioblastoma multiforme (GBM) is progressing despite treatment is challenging due to the pseudoprogression phenomenon seen on conventional MRIs, but relative cerebral blood volume (CBV) has been shown to be helpful. As CBV’s calculation from perfusion-weighted images is not standardized, we investigated whether there were differences between three FDA-cleared software packages in their CBV output values and subsequent performance regarding predicting survival/progression. Forty-five postradiation therapy GBM cases were retrospectively identified as having indeterminate MRI findings of progression versus pseudoprogression. The dynamic susceptibility contrast MR images were processed with different software and three different relative CBV metrics based on the abnormally enhancing regions were computed. The intersoftware intraclass correlation coefficients were 0.8 and below, depending on the metric used. No statistically significant difference in progression determination performance was found between the software packages, but performance was better for the cohort imaged at 3.0 T versus those imaged at 1.5 T for many relative CBV metric and classification criteria combinations. The results revealed clinically significant variation in relative CBV measures based on the software used, but minimal interoperator variation. We recommend against using specific relative CBV measurement thresholds for GBM progression determination unless the same software or processing algorithm is used. PMID:26158114
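    The abstract does not state which intraclass correlation variant was computed; as an illustration, one common choice for consistency between software packages is ICC(3,1) from a two-way ANOVA over an (n subjects × k packages) rating matrix. Treat the variant and the function name as assumptions:

```python
import numpy as np

def icc_consistency(ratings):
    """ICC(3,1): consistency of k raters (e.g. software packages) across
    n subjects, from a two-way ANOVA on an (n, k) rating matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# Two packages that rank cases identically but differ by a fixed offset
# still have perfect consistency:
x = np.array([1.0, 2.0, 4.0, 8.0])
print(round(icc_consistency(np.column_stack([x, x + 1.0])), 6))  # → 1.0
```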

  20. SU-E-I-13: Evaluation of Metal Artifact Reduction (MAR) Software On Computed Tomography (CT) Images

    SciTech Connect

    Huang, V; Kohli, K

    2015-06-15

    Purpose: A new commercially available metal artifact reduction (MAR) software in computed tomography (CT) imaging was evaluated with phantoms in the presence of metals. The goal was to assess the ability of the software to restore the CT number in the vicinity of the metals without impacting the image quality. Methods: A Catphan 504 was scanned with a GE Optima RT 580 CT scanner (GE Healthcare, Milwaukee, WI) and the images were reconstructed with and without the MAR software. Both datasets were analyzed with Image Owl QA software (Image Owl Inc, Greenwich, NY). CT number sensitometry, MTF, low contrast, uniformity, noise and spatial accuracy were compared for scans with and without MAR software. In addition, an in-house phantom was scanned with and without a stainless steel insert at three different locations. The accuracy of the CT number and of the metal insert dimension were investigated as well. Results: Comparisons between scans with and without the MAR algorithm on the Catphan phantom demonstrated similar image quality. However, noise was slightly higher for the MAR algorithm. Evaluation of the CT number at various locations of the in-house phantom was also performed. The baseline HU, obtained from the scan without the metal insert, was compared to scans with the stainless steel insert at three different locations. The HU difference between the baseline scan and the metal scan was reduced when the MAR algorithm was applied. In addition, the physical diameter of the stainless steel rod was over-estimated by the MAR algorithm by 0.9 mm. Conclusion: This work indicates that, in the presence of metal in CT scans, the MAR algorithm is capable of providing a more accurate CT number without compromising the overall image quality. Future work will include the dosimetric impact of the MAR algorithm.

  1. Xmipp 3.0: an improved software suite for image processing in electron microscopy.

    PubMed

    de la Rosa-Trevín, J M; Otón, J; Marabini, R; Zaldívar, A; Vargas, J; Carazo, J M; Sorzano, C O S

    2013-11-01

    Xmipp is a specialized software package for image processing in electron microscopy, mainly focused on 3D reconstruction of macromolecules through single-particle analysis. In this article we present Xmipp 3.0, a major release which introduces several improvements and new developments over the previous version. A central improvement is the concept of a project that stores the entire processing workflow from data import to final results. It is now possible to monitor, reproduce and restart all computing tasks as well as graphically explore the complete set of interrelated tasks associated with a given project. Other graphical tools have also been improved, such as data visualization, particle picking and parameter "wizards" that allow the visual selection of some key parameters. Many standard image formats are transparently supported for input/output from all programs. Additionally, results have been standardized, facilitating the interoperation between different Xmipp programs. Finally, as a result of a large code refactoring, the underlying C++ libraries are better suited for future developments and all code has been optimized. Xmipp is an open-source package that is freely available for download from: http://xmipp.cnb.csic.es.

  2. JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Dimitoglou, G.; Garcia Ortiz, J.; Langenberg, M.; Nuhn, M.; Dau, A.; Pagel, S.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.

    2011-12-01

    The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data is accessible only from a few repositories, and users have to deal with data sets that are effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, an open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the ESA/NASA Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community. In addition, the easy-to-use graphical user interface enables the general public and educators to access, enjoy and reuse data from space missions without barriers.

  3. Caltech/JPL Conference on Image Processing Technology, Data Sources and Software for Commercial and Scientific Applications

    NASA Technical Reports Server (NTRS)

    Redmann, G. H.

    1976-01-01

    Recent advances in image processing and new applications are presented to the user community to stimulate the development and transfer of this technology to industrial and commercial applications. The Proceedings contains 37 papers and abstracts, including many illustrations (some in color) and provides a single reference source for the user community regarding the ordering and obtaining of NASA-developed image-processing software and science data.

  4. Creating the optimal workspace for hospital staff using human centred design.

    PubMed

    Cawood, T; Saunders, E; Drennan, C; Cross, N; Nicholl, D; Kenny, A; Meates, D; Laing, R

    2016-07-01

    We were tasked with creating best possible non-clinical workspace solutions for approximately 450 hospital staff across 11 departments encompassing medical, nursing, allied health, administrative and other support staff. We used a Human-Centred Design process, involving 'Hear, Create and Deliver' stages. We used observations, contextual enquiry and role-specific workshops to understand needs, key interactions and drivers of behaviour. Co-design workshops were then used to explore and prototype-test concepts for the final design. With extensive employee engagement and design process expertise, an innovative solution was created that focussed on meeting the functional workspace needs of a diverse group of staff requiring a range of different spaces, incorporating space constraints and equity. This project demonstrated the strength of engaging employees in an expert-led Human-Centred Design process. We believe this is a successful blueprint process for other institutions to embrace when facing similar workspace design challenges. PMID:27405891

  5. Workspace Safe Operation of a Force- or Impedance-Controlled Robot

    NASA Technical Reports Server (NTRS)

    Abdallah, Muhammad E. (Inventor); Hargrave, Brian (Inventor); Yamokoski, John D. (Inventor); Strawser, Philip A. (Inventor)

    2013-01-01

    A method of controlling a robotic manipulator of a force- or impedance-controlled robot within an unstructured workspace includes imposing a saturation limit on a static force applied by the manipulator to its surrounding environment, and may include determining a contact force between the manipulator and an object in the unstructured workspace, and executing a dynamic reflex when the contact force exceeds a threshold to thereby alleviate an inertial impulse not addressed by the saturation limited static force. The method may include calculating a required reflex torque to be imparted by a joint actuator to a robotic joint. A robotic system includes a robotic manipulator having an unstructured workspace and a controller that is electrically connected to the manipulator, and which controls the manipulator using force- or impedance-based commands. The controller, which is also disclosed herein, automatically imposes the saturation limit and may execute the dynamic reflex noted above.
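    The patent abstract does not disclose the exact control law; a minimal sketch of the two ideas it names, a saturation limit on static force and a reflex triggered when contact force exceeds a threshold, with hypothetical gains, limits and units:

```python
def saturate_force(commanded, limit):
    """Clamp the static force the manipulator may exert on its
    environment to +/- limit (the saturation limit), in newtons."""
    return max(-limit, min(limit, commanded))

def reflex_torque(contact_force, threshold, gain):
    """Hypothetical reflex law: once the measured contact force exceeds
    the threshold, command a corrective joint torque proportional to
    the overshoot, opposing the contact, to bleed off the inertial
    impulse that the saturated static force cannot address."""
    overshoot = abs(contact_force) - threshold
    if overshoot <= 0.0:
        return 0.0
    direction = -1.0 if contact_force > 0.0 else 1.0
    return direction * gain * overshoot

print(saturate_force(120.0, 50.0))     # → 50.0
print(reflex_torque(80.0, 60.0, 0.5))  # → -10.0
```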

  7. Development of a viability standard curve for microencapsulated probiotic bacteria using confocal microscopy and image analysis software.

    PubMed

    Moore, Sarah; Kailasapathy, Kasipathy; Phillips, Michael; Jones, Mark R

    2015-07-01

    Microencapsulation is proposed to protect probiotic strains from food processing procedures and to maintain probiotic viability. Little research has described the in situ viability of microencapsulated probiotics. This study successfully developed a real-time viability standard curve for microencapsulated bacteria using confocal microscopy, fluorescent dyes and image analysis software.

  8. Design and evaluation of a THz time domain imaging system using standard optical design software.

    PubMed

    Brückner, Claudia; Pradarutti, Boris; Müller, Ralf; Riehemann, Stefan; Notni, Gunther; Tünnermann, Andreas

    2008-09-20

    A terahertz (THz) time domain imaging system is analyzed and optimized with standard optical design software (ZEMAX). Special requirements for the illumination optics and imaging optics are presented. In the optimized system, off-axis parabolic mirrors and lenses are combined. The system has a numerical aperture of 0.4 and is diffraction limited for field points up to 4 mm and wavelengths down to 750 µm. ZEONEX is used as the lens material. Higher aspherical coefficients are used for correction of spherical aberration and reduction of lens thickness. The lenses were manufactured by ultraprecision machining. For optimization of the system, ray tracing and wave-optical methods were combined. We show how the ZEMAX Gaussian beam analysis tool can be used to evaluate illumination optics. The resolution of the THz system was tested with a wire and a slit target, line gratings of different period, and a Siemens star. The behavior of the temporal line spread function can be modeled with the polychromatic coherent line spread function feature in ZEMAX. The spectral and temporal resolutions of the line gratings are compared with the respective modulation transfer function of ZEMAX. For maximum resolution, the system has to be diffraction limited down to the smallest wavelength of the spectrum of the THz pulse. Then, the resolution on time domain analysis of the pulse maximum can be estimated with the spectral resolution of the center-of-gravity wavelength. The system resolution near the optical axis on time domain analysis of the pulse maximum is 1 line pair/mm with an intensity contrast of 0.22. The Siemens star is used for estimation of the resolution of the whole system. An eight-channel electro-optic sampling system was used for detection. The resolution on time domain analysis of the pulse maximum of all eight channels could be determined with the Siemens star to be 0.7 line pairs/mm. PMID:18806862

  9. ORBS: A data reduction software for the imaging Fourier transform spectrometers SpIOMM and SITELLE

    NASA Astrophysics Data System (ADS)

    Martin, T.; Drissen, L.; Joncas, G.

    2012-09-01

    SpIOMM (Spectromètre-Imageur de l'Observatoire du Mont Mégantic) is still the only operational astronomical Imaging Fourier Transform Spectrometer (IFTS) capable of obtaining the visible spectrum of every source of light in a field of view of 12 arc-minutes. Even though it was designed to work with both outputs of the Michelson interferometer, up to now only one output has been used. Here we present ORBS (Outils de Réduction Binoculaire pour SpIOMM/SITELLE), the reduction software we designed in order to take advantage of the data from both outputs. ORBS will also be used to reduce the data of SITELLE (Spectromètre-Imageur pour l'Étude en Long et en Large des raies d'Émissions), the direct successor of SpIOMM, which will be in operation at the Canada-France-Hawaii Telescope (CFHT) in early 2013. SITELLE will deliver larger data cubes than SpIOMM (up to 2 cubes of 34 GB each). We have thus made a strong effort to optimize its performance in terms of speed and memory usage in order to ensure the best compliance with the quality characteristics discussed with the CFHT team. As a result, ORBS is now capable of reducing 68 GB of data in less than 20 hours using only 5 GB of random-access memory (RAM).
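    ORBS itself is not reproduced here, but the core step of any IFTS reduction, recovering a spectrum from a recorded interferogram via a Fourier transform, can be sketched with a monochromatic test source (all parameters are illustrative and the idealized cosine model ignores apodization and phase correction):

```python
import numpy as np

# An ideal Michelson interferogram is the cosine transform of the source
# spectrum, so an FFT of the recorded interferogram recovers the spectrum.
n = 1024
step = 1.0                  # optical path difference step (arbitrary units)
opd = np.arange(n) * step
k0 = 0.25                   # wavenumber of a monochromatic test line
interferogram = 1.0 + np.cos(2 * np.pi * k0 * opd)

# Remove the DC offset, then transform OPD -> wavenumber:
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
wavenumbers = np.fft.rfftfreq(n, d=step)
print(wavenumbers[np.argmax(spectrum)])  # → 0.25 (the injected line)
```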

  10. 3D reconstruction of SEM images by use of optical photogrammetry software.

    PubMed

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

    Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science and many biological questions require information about their true 3D structure. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a 3D stereoscope reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close range photogrammetry. In optical close range photogrammetry a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special SEM requirements. Instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using the software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaption. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction and suitable for various applications in research and teaching.

  11. Army technology development. IBIS query. Software to support the Image Based Information System (IBIS) expansion for mapping, charting and geodesy

    NASA Technical Reports Server (NTRS)

    Friedman, S. Z.; Walker, R. E.; Aitken, R. B.

    1986-01-01

    The Image Based Information System (IBIS) has been under development at the Jet Propulsion Laboratory (JPL) since 1975. It is a collection of more than 90 programs that enable processing of image, graphical, and tabular data for spatial analysis. IBIS can be utilized to create comprehensive geographic data bases. From these data, an analyst can study various attributes describing characteristics of a given study area. Even complex combinations of disparate data types can be synthesized to obtain a new perspective on spatial phenomena. In 1984, new query software was developed that enables direct Boolean queries of IBIS data bases through the submission of easily understood expressions. An improved syntax methodology, a data dictionary, and display software simplified the analysts' tasks associated with building, executing, and subsequently displaying the results of a query. The primary purpose of this report is to describe the features and capabilities of the new query software. A secondary purpose of this report is to compare this new query software to the query software developed previously (Friedman, 1982). With respect to this topic, the relative merits and drawbacks of both approaches are covered.
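    The IBIS query syntax is not reproduced in this summary; the underlying idea of a Boolean query over co-registered raster layers can be sketched as follows (the layer contents and category codes are invented for illustration):

```python
import numpy as np

# Hypothetical co-registered raster layers over a 4x4 study area:
elevation = np.array([[10, 20, 30, 40]] * 4)   # metres
landuse   = np.array([[1, 1, 2, 2]] * 4)       # 1 = forest, 2 = urban

# The query "elevation > 15 AND landuse == forest" becomes a Boolean mask
# that selects matching cells across the registered layers:
mask = (elevation > 15) & (landuse == 1)
print(int(mask.sum()))  # → 4 cells satisfy the query
```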

  12. Global workspace dynamics: cortical "binding and propagation" enables conscious contents.

    PubMed

    Baars, Bernard J; Franklin, Stan; Ramsoy, Thomas Zoega

    2013-01-01

    A global workspace (GW) is a functional hub of binding and propagation in a population of loosely coupled signaling elements. In computational applications, GW architectures recruit many distributed, specialized agents to cooperate in resolving focal ambiguities. In the brain, conscious experiences may reflect a GW function. For animals, the natural world is full of unpredictable dangers and opportunities, suggesting a general adaptive pressure for brains to resolve focal ambiguities quickly and accurately. GW theory aims to understand the differences between conscious and unconscious brain events. In humans and related species the cortico-thalamic (C-T) core is believed to underlie conscious aspects of perception, thinking, learning, feelings of knowing (FOK), felt emotions, visual imagery, working memory, and executive control. Alternative theoretical perspectives are also discussed. The C-T core has many anatomical hubs, but conscious percepts are unitary and internally consistent at any given moment. Over time, conscious contents constitute a very large, open set. This suggests that a brain-based GW capacity cannot be localized in a single anatomical hub. Rather, it should be sought in a functional hub - a dynamic capacity for binding and propagation of neural signals over multiple task-related networks, a kind of neuronal cloud computing. In this view, conscious contents can arise in any region of the C-T core when multiple input streams settle on a winner-take-all equilibrium. The resulting conscious gestalt may ignite an any-to-many broadcast, lasting ∼100-200 ms, and trigger widespread adaptation in previously established networks. To account for the great range of conscious contents over time, the theory suggests an open repertoire of binding coalitions that can broadcast via theta/gamma or alpha/gamma phase coupling, like radio channels competing for a narrow frequency band. Conscious moments are thought to hold only 1-4 unrelated items; this small

  14. Analyses of requirements for computer control and data processing experiment subsystems: Image data processing system (IDAPS) software description (7094 version), volume 2

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A description of each of the software modules of the Image Data Processing System (IDAPS) is presented. The changes in the software modules are the result of additions to the application software of the system and an upgrade of the IBM 7094 Mod(1) computer to a 1301 disk storage configuration. Necessary information about IDAPS software is supplied to the computer programmer who wishes to make changes in the software system or to use portions of the software outside of the IDAPS system. Each software module is documented with: module name, purpose, usage, common block(s) description, method (algorithm of subroutine), flow diagram (if needed), subroutines called, and storage requirements.

  15. The 'Densitometric Image Analysis Software' and its application to determine stepwise equilibrium constants from electrophoretic mobility shift assays.

    PubMed

    van Oeffelen, Liesbeth; Peeters, Eveline; Nguyen Le Minh, Phu; Charlier, Daniël

    2014-01-01

    Current software applications for densitometric analysis, such as ImageJ, QuantityOne (BioRad) and the Intelligent or Advanced Quantifier (Bio Image), do not allow the user to take the non-linearity of autoradiographic films into account during calibration. As a consequence, quantification of autoradiographs is often regarded as problematic, and phosphorimaging is the preferred alternative. However, the non-linear behaviour of autoradiographs can be described mathematically, so it can be accounted for. The 'Densitometric Image Analysis Software' has therefore been developed, which allows the user to quantify electrophoretic bands in autoradiographs, as well as in gels and phosphorimages, while providing optimized band selection support. Moreover, the program can determine protein-DNA binding constants from Electrophoretic Mobility Shift Assays (EMSAs). For this purpose, the software calculates a chosen stepwise equilibrium constant for each migration lane within the EMSA and estimates the errors due to non-uniformity of the background noise, smear caused by complex dissociation or denaturation of double-stranded DNA, and technical errors such as pipetting inaccuracies. The program thereby helps the user to optimize experimental parameters and to choose the best lanes for estimating an average equilibrium constant. This process can reduce the inaccuracy of equilibrium constants from the usual factor of 2 to about 20%, which is particularly useful when determining position weight matrices and cooperative binding constants to predict genomic binding sites. The MATLAB source code, platform-dependent software and installation instructions are available via the website http://micr.vub.ac.be.
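    For a single binding step, the quantity described above can be sketched as follows. The helper below is a hypothetical illustration, not the published MATLAB code, and it assumes the protein is in excess of the DNA so that its free concentration is approximately its total concentration:

```python
def stepwise_K(i_free, i_complex, protein_conc):
    """Estimate a one-step association constant K = [PD]/([P][D]) from
    the band intensities of one EMSA lane.  Assumes the protein is in
    excess, so its free concentration is ~ its total concentration.
    (Hypothetical helper, not the paper's implementation.)"""
    f = i_complex / (i_complex + i_free)   # fraction of DNA bound
    return f / ((1.0 - f) * protein_conc)

# Averaging K over several lanes, as the software does, damps the effect
# of background non-uniformity and pipetting errors (values illustrative).
lanes = [(80.0, 20.0, 1e-7), (60.0, 40.0, 2e-7), (33.0, 67.0, 5e-7)]
ks = [stepwise_K(i_f, i_c, p) for i_f, i_c, p in lanes]
k_mean = sum(ks) / len(ks)
```

    The error estimation and band-selection support described in the abstract sit on top of this basic per-lane calculation.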

  16. Parameter-based estimation of CT dose index and image quality using an in-house android™-based software

    NASA Astrophysics Data System (ADS)

    Mubarok, S.; Lubis, L. E.; Pawiro, S. A.

    2016-03-01

    Compromise between radiation dose and image quality is essential in the use of CT imaging. The CT dose index (CTDI) is currently the primary dosimetric formalism in CT scanning, while low- and high-contrast resolution are aspects indicating image quality. This study aimed to estimate CTDIvol and image quality measures across a range of exposure parameter variations. CTDI measurements were performed using a PMMA (polymethyl methacrylate) phantom of 16 cm diameter, while the image quality test was conducted using a Catphan® 600 phantom. CTDI measurements were carried out according to the IAEA TRS 457 protocol using axial scan mode, under varied tube voltage, collimation or slice thickness, and tube current. The image quality test was conducted under the same exposure parameters as the CTDI measurements. An Android™-based software application was also a product of this study. The software was designed to estimate CTDIvol with a maximum difference of 8.97% from the measured CTDIvol. Image quality can also be estimated through the CNR parameter, with a maximum difference of 21.65% from the measured CNR.
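    The CTDI arithmetic underlying such measurements follows the standard weighted formalism: the center chamber reading is weighted by one third and the periphery by two thirds, and the volume index divides by pitch. A minimal sketch (helper names are hypothetical, not taken from the Android™ application):

```python
def ctdi_w(ctdi100_center, ctdi100_periphery):
    """Weighted CTDI (mGy) from pencil-chamber readings in the 16 cm
    PMMA phantom: one third center plus two thirds periphery."""
    return ctdi100_center / 3.0 + 2.0 * ctdi100_periphery / 3.0

def ctdi_vol(ctdi_w_value, pitch=1.0):
    """Volume CTDI: weighted CTDI divided by pitch (pitch = 1 for
    contiguous axial scanning)."""
    return ctdi_w_value / pitch
```

    For example, center and periphery readings of 9.0 and 12.0 mGy give a weighted CTDI of 11.0 mGy.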

  17. MR urography in children. Part 2: how to use ImageJ MR urography processing software.

    PubMed

    Vivier, Pierre-Hugues; Dolores, Michael; Taylor, Melissa; Dacher, Jean-Nicolas

    2010-05-01

    MR urography (MRU) is an emerging technique that is particularly useful in paediatric uroradiology. The most common indication is the investigation of hydronephrosis. Combined static and dynamic contrast-enhanced MRU (DCE-MRU) provides both morphological and functional information in a single examination. However, specific post-processing must be performed and, to our knowledge, dedicated software is not available in conventional workstations. Investigators involved in MRU classically use homemade software that is not freely accessible. For these reasons, we have developed a software program that is freely downloadable from the National Institutes of Health (NIH) website. In this study we report and describe the features of this software program.

  18. The Importance of Structuring Information and Resources within Shared Workspaces during Collaborative Design Learning

    ERIC Educational Resources Information Center

    Nicol, David; Littlejohn, Allison; Grierson, Hilary

    2005-01-01

    This paper investigates how the organization or structure of information and resources in shared workspaces influences team sharing and design learning. Two groupware products, BSCW and TikiWiki, were configured so that teams could structure and share resources. In BSCW the resources were structured hierarchically using folders and subfolders…

  19. An adaptive workspace hypothesis about the neural correlates of consciousness: insights from neuroscience and meditation studies.

    PubMed

    Raffone, Antonino; Srinivasan, Narayanan

    2009-01-01

    While enormous progress has been made in identifying neural correlates of consciousness (NCC), crucial aspects of the NCC are still very controversial. A major hurdle is the lack of an adequate definition and characterization of different aspects of conscious experience and of its relationship to attention and metacognitive processes such as monitoring. In this paper, we therefore attempt to develop a unitary theoretical framework for the NCC, with an interdependent characterization of endogenous attention, access consciousness, phenomenal awareness, metacognitive consciousness, and a non-referential form of unified consciousness. We advance an adaptive workspace hypothesis about the NCC based on the global workspace model, emphasizing transient resonant neurodynamics and prefrontal cortex function, as well as meditation-related characterizations of conscious experiences. In this hypothesis, transient dynamic links within an adaptive coding net in prefrontal cortex, especially anterior prefrontal cortex, and between it and the rest of the brain, in terms of ongoing intrinsic and long-range signal exchanges, flexibly regulate the interplay between endogenous attention, access consciousness, phenomenal awareness, and metacognitive consciousness processes. Such processes are established in terms of complementary aspects of an ongoing transition between context-sensitive global workspace assemblies, modulated moment-to-moment by body and environment states. Brain regions associated with momentary interoceptive and exteroceptive self-awareness, or the first-person experiential perspective as emphasized in open monitoring meditation, play an important modulatory role in adaptive workspace transitions.

  20. Revolute manipulator workspace optimization using a modified bacteria foraging algorithm: A comparative study

    NASA Astrophysics Data System (ADS)

    Panda, S.; Mishra, D.; Biswal, B. B.; Tripathy, M.

    2014-02-01

    Robotic manipulators with three-revolute (3R) motions to attain desired positional configurations are very common in industrial robots. The capability of these robots depends largely on the workspace of the manipulator in addition to other parameters. In this study, an evolutionary optimization algorithm based on the foraging behaviour of the Escherichia coli bacteria present in the human intestine is utilized to optimize the workspace volume of a 3R manipulator. The new optimization method is modified from the original algorithm for faster convergence. This method is also useful for optimization problems in a highly constrained environment, such as robot workspace optimization. The new approach for workspace optimization of 3R manipulators is tested using three cases. The test results are compared with standard results available using other optimization algorithms, i.e. the differential evolution algorithm, the genetic algorithm and the particle swarm optimization algorithm. The present method is found to be superior to the other methods in terms of computational efficiency.
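    The objective such algorithms maximize, the workspace volume, can itself be estimated by sampling. Below is a minimal Monte Carlo sketch for an illustrative 3R geometry; neither the manipulator parameters nor the bacteria foraging optimizer of the paper are reproduced here:

```python
import math
import random

def fk_3r(t1, t2, t3, l1=1.0, l2=1.0, l3=1.0):
    """Forward kinematics of an illustrative articulated 3R arm: a base
    yaw joint plus two pitch joints moving links in a vertical plane.
    (Not the specific manipulator geometry used in the paper.)"""
    r = l2 * math.cos(t2) + l3 * math.cos(t2 + t3)       # radial reach
    z = l1 + l2 * math.sin(t2) + l3 * math.sin(t2 + t3)  # height
    return r * math.cos(t1), r * math.sin(t1), z

def workspace_volume(n_samples=50_000, voxel=0.15, seed=0):
    """Monte Carlo workspace volume: sample joint angles uniformly,
    voxelize the reached end-effector points, sum the voxel volumes."""
    rng = random.Random(seed)
    occupied = set()
    for _ in range(n_samples):
        t1, t2, t3 = (rng.uniform(-math.pi, math.pi) for _ in range(3))
        x, y, z = fk_3r(t1, t2, t3)
        occupied.add((int(x // voxel), int(y // voxel), int(z // voxel)))
    return len(occupied) * voxel ** 3
```

    An optimizer would call an estimator like this (or a closed-form volume expression) as its fitness function while varying the link parameters under the design constraints.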

  1. Investigating Uses and Perceptions of an Online Collaborative Workspace for the Dissertation Process

    ERIC Educational Resources Information Center

    Rockinson-Szapkiw, Amanda J.

    2012-01-01

    The intent of this study was to investigate 93 doctoral candidates' perceptions and use of an online collaboration workspace and content management server, Microsoft Office SharePoint, for the dissertation process. All candidates were enrolled in an Ed.D. programme in the United States. Descriptive statistics demonstrate that candidates frequently use…

  2. Communities of Practice Transition Online - Lessons learned from NASA's EPO Online Workspace

    NASA Astrophysics Data System (ADS)

    Davey, B.

    2012-12-01

    The Earth Forum Education and Public Outreach (E/PO) community has long interacted to better its practice, both as a community and individually. Working together to share knowledge and grow, its members function as a community of practice. In 2009, NASA designed and implemented an online workspace in hopes of promoting the community's continued interactions. This study examines the role of an online workspace component in the work of a community of practice. Much research has revealed the importance of communities of practice to organizations, project success, and knowledge management, and some of these same successes hold true for virtual communities of practice. Study participants were 75 Education and Public Outreach community members of NASA's Science Mission Directorate Earth Forum. In this mixed methods study, online workspace metrics were used to track participation and a survey completed by 21 members was used to quantify participation. For a more detailed analysis, 15 community members (five highly active users, five average users, and five infrequent users), selected based on survey responses, were interviewed. Finally, survey data were gathered from seven online facilitators to understand their role in the community. Data collected from these 21 community members and five facilitating members suggest that highly active users (logging into the workspace daily) were more likely to have transformative experiences, co-create knowledge, feel ownership of community knowledge, have extended opportunities for community exchange, and find new forms of evaluation. Average users shared some characteristics with both the highly active members and infrequent users, representing a group in transition as they become more engaged and active in the online workspace.
Inactive users viewed the workspace as having little value, being difficult to navigate, being mainly for gaining basic information about events and community news, and as another demand

  3. Web-based spatial analysis with the ILWIS open source GIS software and satellite images from GEONETCast

    NASA Astrophysics Data System (ADS)

    Lemmens, R.; Maathuis, B.; Mannaerts, C.; Foerster, T.; Schaeffer, B.; Wytzisk, A.

    2009-12-01

    This paper presents easily accessible, integrated web-based analysis of satellite images with plug-in based open source software. The paper is targeted at both users and developers of geospatial software. Guided by a use case scenario, we describe the ILWIS software and its toolbox for accessing satellite images through the GEONETCast broadcasting system. The last two decades have shown a major shift from stand-alone software systems to networked ones, often client/server applications using distributed geo-(web-)services. This allows organisations to combine their own data with remotely available data and processing functionality without much effort. Key to this integrated spatial data analysis is low-cost access to data from within a user-friendly and flexible software environment. Web-based open source software solutions are often a powerful option for developing countries. The Integrated Land and Water Information System (ILWIS) is a PC-based GIS and remote sensing software package, comprising a complete suite of image processing, spatial analysis and digital mapping tools, and was developed as commercial software from the early nineties onwards. Recent project efforts have migrated ILWIS into modular, plug-in-based open source software, and provide web-service support for OGC-based web mapping and processing. The core objective of the ILWIS Open source project is to provide a maintainable framework for researchers and software developers to implement training components, scientific toolboxes and (web-)services. The latest plug-ins have been developed for multi-criteria decision making, water resources analysis and spatial statistics analysis. The development of this framework has been carried out since 2007 in the context of 52°North, an open initiative that advances the development of cutting-edge open source geospatial software under the GPL license. GEONETCast, as part of the emerging Global Earth Observation System of Systems (GEOSS), puts essential environmental data at the

  4. New Instruments for Survey: on Line Softwares for 3d Recontruction from Images

    NASA Astrophysics Data System (ADS)

    Fratus de Balestrini, E.; Guerra, F.

    2011-09-01

    3D scanning technologies have developed significantly and have been widely used in the documentation of cultural, architectural and archaeological heritage. Modern methods of three-dimensional acquisition and modeling allow an object to be represented through a digital model that combines the visual potential of images (normally used for documentation) with the accuracy of the survey, serving as a support both for visualization and for metric evaluation of any artefact of historical or artistic interest, and opening up new possibilities for the fruition, cataloging and study of cultural heritage. Despite this development, because of the small market and the sophisticated technology involved, the cost of 3D laser scanners is very high and beyond the reach of most operators in the field of cultural heritage. This is why low-cost or even free technologies have appeared, allowing anyone to approach the issues of acquisition and 3D modeling with tools that make it possible to create three-dimensional models in a simple and economical way. The research conducted by the Laboratory of Photogrammetry of the University IUAV of Venice, some results of which are presented here, is intended to determine whether it is possible with Arc3D to obtain results that are somehow comparable, in terms of overall quality, to those of the laser scanner, and/or whether it is possible to integrate them. A series of tests was carried out on certain types of objects: models made with Arc3D from raster images were compared with those obtained using point clouds from the laser scanner. We also analyzed the conditions for optimal use of Arc3D: environmental conditions (lighting), acquisition tools (digital cameras), and the type and size of objects. After performing the tests described above, we analyzed the models generated by Arc3D to check what other graphic representations can be obtained from them: orthophotos and drawings. The research

  5. Mapping and correction of the CMM workspace error with the use of an electronic gyroscope and neural networks--practical application.

    PubMed

    Swornowski, Pawel J

    2013-01-01

    The article presents the application of neural networks in determining and correcting the deformation of a coordinate measuring machine (CMM) workspace. The information about the CMM errors is acquired using an ADXRS401 electronic gyroscope. A test device (PS-20 module) was built and integrated with a commercial measurement system based on the SP25M passive scanning probe and a PH10M module (Renishaw). The proposed solution was tested on a Kemco 600 CMM and on a DEA Global Clima CMM. In the former case, correction of the CMM errors was performed using the source code of the WinIOS software owned by The Institute of Advanced Manufacturing Technology, Cracow, Poland, and in the latter on an external PC. Optimum parameters of full and simplified mapping of a given layer of the CMM workspace were determined for practical applications. The proposed method can be employed for the interim check (ISO 10360-2 procedure) or to detect local CMM deformations occurring when the CMM works at high scanning speeds (>20 mm/s).

  6. Comparison of the Number of Image Acquisitions and Procedural Time Required for Transarterial Chemoembolization of Hepatocellular Carcinoma with and without Tumor-Feeder Detection Software.

    PubMed

    Iwazawa, Jin; Ohue, Shoichi; Hashimoto, Naoko; Mitani, Takashi

    2013-01-01

    Purpose. To compare the number of image acquisitions and procedural time required for transarterial chemoembolization (TACE) with and without tumor-feeder detection software in cases of hepatocellular carcinoma (HCC). Materials and Methods. We retrospectively reviewed 50 cases involving software-assisted TACE (September 2011-February 2013) and 84 cases involving TACE without software assistance (January 2010-August 2011). We compared the number of image acquisitions, the overall procedural time, and the therapeutic efficacy in both groups. Results. The number of angiography acquisitions per session decreased from 6.6 to 4.6 with software assistance (P < 0.001). The total number of image acquisitions significantly decreased from 10.4 to 8.7 with software usage (P = 0.004). The mean procedural time required for a single session of software-assisted TACE (103 min) was significantly lower than that for a session without software (116 min, P = 0.021). For TACE with and without software usage, the complete (68% versus 63%, resp.) and objective (78% versus 80%, resp.) response rates did not differ significantly. Conclusion. In comparison with software-unassisted TACE, automated feeder-vessel detection software-assisted TACE for HCC involved fewer image acquisitions and could be completed faster while maintaining a comparable treatment response.

  7. 3DVIEWNIX-AVS: a software package for the separate visualization of arteries and veins in CE-MRA images.

    PubMed

    Lei, Tianhu; Udupa, Jayaram K; Odhner, Dewey; Nyúl, László G; Saha, Punam K

    2003-01-01

    Our earlier study developed a computerized method, based on fuzzy connected object delineation principles and algorithms, for artery and vein separation in contrast-enhanced Magnetic Resonance Angiography (CE-MRA) images. This paper reports on its further development into a software package for routine clinical use. The software package, termed 3DVIEWNIX-AVS, consists of the following major operational parts: (1) converting data from DICOM3 to 3DVIEWNIX format, (2) previewing slices and creating the VOI and MIP shell, (3) segmenting vessels, (4) separating arteries and veins, and (5) shell rendering vascular structures and creating animations. This package has been applied to EPIX Medical Inc.'s CE-MRA data (AngioMark MS-325). One hundred and thirty-five original CE-MRA data sets (of 52 patients) from 6 hospitals have been processed. In all case studies, unified parameter settings produce correct artery-vein separation. The current package runs on a Pentium PC under Linux and the total computation time per study is about 3 min. The strengths of this software package are (1) minimal user interaction, (2) minimal requirements for anatomic knowledge of the human vascular system, (3) clinically required speed, (4) free entry to any operational stage, (5) reproducible, reliable, high-quality results, and (6) cost-effective computer implementation. To date, it seems to be the only software package (using an image processing approach) available for artery and vein separation of the human vascular system for routine use in a clinical setting. PMID:12821028

  8. Biological Visualization, Imaging and Simulation(Bio-VIS) at NASA Ames Research Center: Developing New Software and Technology for Astronaut Training and Biology Research in Space

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey

    2003-01-01

    The Bio-Visualization, Imaging and Simulation (BioVIS) Technology Center at NASA's Ames Research Center is dedicated to developing and applying advanced visualization, computation and simulation technologies to support NASA Space Life Sciences research and the objectives of the Fundamental Biology Program. Research ranges from high resolution 3D cell imaging and structure analysis, virtual environment simulation of fine sensory-motor tasks, computational neuroscience and biophysics to biomedical/clinical applications. Computer simulation research focuses on the development of advanced computational tools for astronaut training and education. Virtual Reality (VR) and Virtual Environment (VE) simulation systems have become important training tools in many fields, from flight simulation to, more recently, surgical simulation. The type and quality of training provided by these computer-based tools range widely, but the value of real-time VE computer simulation as a method of preparing individuals for real-world tasks is well established. Astronauts routinely use VE systems for various training tasks, including Space Shuttle landings, robot arm manipulations and extravehicular activities (space walks). Currently, there are no VE systems to train astronauts for the basic and applied research experiments which are an important part of many missions. The Virtual Glovebox (VGX) is a prototype VE system for real-time physically-based simulation of the Life Sciences Glovebox, where astronauts will perform many complex tasks supporting research experiments aboard the International Space Station. The VGX consists of a physical display system utilizing dual LCD projectors and circular polarization to produce a desktop-sized 3D virtual workspace. Physically-based modeling tools (Arachi Inc.) provide real-time collision detection, rigid body dynamics, physical properties and force-based controls for objects. The human-computer interface consists of two magnetic tracking devices

  9. Implementation of a real-time software-only image smoothing filter for a block-transform video codec

    NASA Astrophysics Data System (ADS)

    Miaw, Wesley F.; Rowe, Lawrence A.

    2003-05-01

    The JPEG compression standard is a popular image format. However, at high compression ratios JPEG compression, which uses block-transform coding, can produce blocking artifacts, or artificially introduced edges within the image. Several post-processing algorithms have been developed to remove these artifacts. This paper describes an implementation of a post-processing algorithm developed by Ramchandran, Chou, and Crouse (RCC) which is fast enough for real-time software-only video applications. The original implementation of the RCC algorithm involved calculating thresholds to identify artificial edges. These calculations proved too expensive for use in real-time software-only applications. We replaced these calculations with a linear scale approximating ideal threshold values based on a combination of peak signal-to-noise ratio calculations and subjective visual quality. The resulting filter implementation is available in the widely-deployed Open Mash streaming media toolkit.
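    The core idea (smoothing only sub-threshold jumps at block boundaries, so that genuine image edges survive) can be sketched in one dimension. The linear scale below is a hypothetical stand-in for the empirically tuned scale the authors derived from PSNR and visual-quality comparisons:

```python
def deblock_row(row, q, block=8):
    """Smooth jumps at 8-pixel block boundaries of one pixel row.

    A boundary jump below the threshold is treated as a blocking
    artifact and feathered; larger jumps are kept as genuine edges.
    The linear threshold scale in `q` is illustrative, not the paper's.
    """
    out = list(row)
    threshold = 2.0 + 0.5 * q            # hypothetical linear scale
    for b in range(block, len(out), block):
        jump = out[b] - out[b - 1]
        if abs(jump) <= threshold:       # artificial edge: feather it
            out[b - 1] += jump / 3.0
            out[b] -= jump / 3.0
    return out
```

    Replacing a per-image threshold computation with a precomputed linear scale of this kind is what makes the filter cheap enough for software-only real-time use.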

  10. CellSeT: novel software to extract and analyze structured networks of plant cells from confocal images.

    PubMed

    Pound, Michael P; French, Andrew P; Wells, Darren M; Bennett, Malcolm J; Pridmore, Tony P

    2012-04-01

    It is increasingly important in life sciences that many cell-scale and tissue-scale measurements are quantified from confocal microscope images. However, extracting and analyzing large-scale confocal image data sets represents a major bottleneck for researchers. To aid this process, CellSeT software has been developed, which utilizes tissue-scale structure to help segment individual cells. We provide examples of how the CellSeT software can be used to quantify fluorescence of hormone-responsive nuclear reporters, determine membrane protein polarity, extract cell and tissue geometry for use in later modeling, and take many additional biologically relevant measures using an extensible plug-in toolset. Application of CellSeT promises to remove subjectivity from the resulting data sets and facilitate higher-throughput, quantitative approaches to plant cell research.

  11. Comparison of grey scale median (GSM) measurement in ultrasound images of human carotid plaques using two different softwares.

    PubMed

    Östling, Gerd; Persson, Margaretha; Hedblad, Bo; Gonçalves, Isabel

    2013-11-01

    Grey scale median (GSM) measured on ultrasound images of carotid plaques has been used for several years in research to identify the vulnerable plaque. Centres have used different software packages and different methods for GSM measurement. This has resulted in a wide range of GSM values and cut-off values for the detection of the vulnerable plaque. The aim of this study was to compare the values obtained with two different software packages, using different standardization methods, for the measurement of GSM on ultrasound images of human carotid plaques. GSM was measured with Adobe Photoshop(®) and with the Artery Measurement System (AMS) on duplex ultrasound images of 100 consecutive medium- to large-sized carotid plaques from the Beta-blocker Cholesterol-lowering Asymptomatic Plaque Study (BCAPS). The mean values of GSM were 35·2 ± 19·3 and 55·8 ± 22·5 for Adobe Photoshop(®) and AMS, respectively. The mean difference was 20·45 (95% CI: 19·17-21·73). Although the absolute values of GSM differed, the agreement between the two measurements was good, with a correlation coefficient of 0·95. A chi-square test revealed a kappa value of 0·68 when studying quartiles of GSM. The intra-observer variability was 1·9% for AMS and 2·5% for Adobe Photoshop. The difference between software packages and standardization methods must be taken into consideration when comparing studies. To avoid these problems, researchers should come to a consensus regarding the software and standardization method for GSM measurement on ultrasound images of plaques in the arteries.
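    As a concrete reference point, GSM computation after linear grey-scale standardization (blood mapped to 0 and adventitia to a fixed bright value, a convention commonly used in this literature) can be sketched as follows; the function is illustrative, not either package's implementation:

```python
import statistics

def gsm(plaque_pixels, blood_median, adventitia_median,
        blood_target=0.0, adventitia_target=190.0):
    """Grey scale median after linear grey-scale standardization.

    Pixel values are remapped so the blood reference maps to 0 and the
    adventitia reference to 190 (a commonly used convention); the GSM
    is the median grey level of the remapped plaque region.
    """
    scale = (adventitia_target - blood_target) / (adventitia_median - blood_median)
    remapped = [blood_target + (p - blood_median) * scale for p in plaque_pixels]
    return statistics.median(remapped)
```

    Different choices of the reference structures and target values are exactly the kind of standardization difference that shifts absolute GSM values between centres while leaving rank order (and hence correlation) largely intact.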

  12. Interference-free ultrasound imaging during HIFU therapy, using software tools

    NASA Technical Reports Server (NTRS)

    Vaezy, Shahram (Inventor); Held, Robert (Inventor); Sikdar, Siddhartha (Inventor); Managuli, Ravi (Inventor); Zderic, Vesna (Inventor)

    2010-01-01

    Disclosed herein is a method for obtaining a composite interference-free ultrasound image when non-imaging ultrasound waves would otherwise interfere with ultrasound imaging. A conventional ultrasound imaging system is used to collect frames of ultrasound image data in the presence of non-imaging ultrasound waves, such as high-intensity focused ultrasound (HIFU). The frames are directed to a processor that analyzes the frames to identify portions of the frame that are interference-free. Interference-free portions of a plurality of different ultrasound image frames are combined to generate a single composite interference-free ultrasound image that is displayed to a user. In this approach, a frequency of the non-imaging ultrasound waves is offset relative to a frequency of the ultrasound imaging waves, such that the interference introduced by the non-imaging ultrasound waves appears in a different portion of the frames.
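    The frame-combination step can be sketched directly. Here `masks` marking interference-free pixels is assumed as an input; the patent derives that information by analyzing the frames, exploiting the frequency offset between imaging and non-imaging waves:

```python
def composite_clean(frames, masks, fill=0.0):
    """Combine frames into one image, taking each pixel from the first
    frame whose mask marks it interference-free (True = clean).

    `frames` and `masks` are lists of equally sized 2-D lists.
    """
    h, w = len(frames[0]), len(frames[0][0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for frame, mask in zip(frames, masks):
                if mask[y][x]:
                    out[y][x] = frame[y][x]
                    break
    return out
```

    Because the interference sweeps across different portions of successive frames, a handful of frames is typically enough to cover every pixel with at least one clean sample.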

  13. Three-Dimensional Root Phenotyping with a Novel Imaging and Software Platform

    PubMed Central

    Clark, Randy T.; MacCurdy, Robert B.; Jung, Janelle K.; Shaff, Jon E.; McCouch, Susan R.; Aneshansley, Daniel J.; Kochian, Leon V.

    2011-01-01

    A novel imaging and software platform was developed for the high-throughput phenotyping of three-dimensional root traits during seedling development. To demonstrate the platform’s capacity, plants of two rice (Oryza sativa) genotypes, Azucena and IR64, were grown in a transparent gellan gum system and imaged daily for 10 d. Rotational image sequences consisting of 40 two-dimensional images were captured using an optically corrected digital imaging system. Three-dimensional root reconstructions were generated and analyzed using custom-designed software, RootReader3D. Using the automated and interactive capabilities of RootReader3D, five rice root types were classified and 27 phenotypic root traits were measured to characterize these two genotypes. Where possible, measurements from the three-dimensional platform were validated and were highly correlated with conventional two-dimensional measurements. When comparing gellan gum-grown plants with those grown under hydroponic and sand culture, significant differences were detected in morphological root traits (P < 0.05). This highly flexible platform provides the capacity to measure root traits with a high degree of spatial and temporal resolution and will facilitate novel investigations into the development of entire root systems or selected components of root systems. In combination with the extensive genetic resources that are now available, this platform will be a powerful resource to further explore the molecular and genetic determinants of root system architecture. PMID:21454799

  14. Experiment Design Regularization-Based Hardware/Software Codesign for Real-Time Enhanced Imaging in Uncertain Remote Sensing Environment

    NASA Astrophysics Data System (ADS)

    Castillo Atoche, A.; Torres Roman, D.; Shkvarko, Y.

    2010-12-01

    A new aggregated Hardware/Software (HW/SW) codesign approach to the optimization of digital signal processing techniques for enhanced imaging with real-world uncertain remote sensing (RS) data, based on the concept of descriptive experiment design regularization (DEDR), is addressed. We consider the applications of the developed approach to typical single-look synthetic aperture radar (SAR) imaging systems operating in real-world uncertain RS scenarios. The software design is aimed at the algorithmic-level decrease of the computational load of the large-scale SAR image enhancement tasks. The innovative algorithmic idea is to incorporate into the DEDR-optimized fixed-point iterative reconstruction/enhancement procedure a convex convergence enforcement regularization, via constructing proper multilevel projections onto convex sets (POCS) in the solution domain. The hardware design is performed via systolic array computing based on a Xilinx Field Programmable Gate Array (FPGA) XC4VSX35-10ff668 and is aimed at implementing the unified DEDR-POCS image enhancement/reconstruction procedures in a computationally efficient multi-level parallel fashion that meets the (near) real-time image processing requirements. Finally, we comment on the simulation results, which are indicative of the significantly increased performance efficiency, both in resolution enhancement and in computational complexity reduction metrics, gained with the proposed aggregated HW/SW codesign approach.
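    The POCS mechanism invoked above is the classical alternating-projection scheme: repeatedly projecting the iterate onto each convex constraint set drives it toward their intersection. A minimal sketch with two simple convex sets (a box and a hyperplane), not the DEDR-specific projectors:

```python
def project_box(x, lo, hi):
    """Projection onto the box [lo, hi]^n: clamp each coordinate."""
    return [min(max(v, lo), hi) for v in x]

def project_hyperplane(x, a, b):
    """Projection onto the hyperplane {x : a.x = b}."""
    dot = sum(ai * xi for ai, xi in zip(a, x))
    norm2 = sum(ai * ai for ai in a)
    step = (dot - b) / norm2
    return [xi - step * ai for ai, xi in zip(a, x)]

def pocs(x, a, b, lo, hi, iters=100):
    """Alternate projections; when the intersection is nonempty the
    iterates converge to a point in it, which is what enforces the
    convergence of a fixed-point enhancement iteration."""
    for _ in range(iters):
        x = project_hyperplane(project_box(x, lo, hi), a, b)
    return x
```

    In the imaging setting the constraint sets would instead encode priors such as non-negativity or bounded energy of the reconstructed scene.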

  15. Use of ImageJ software for histomorphometric evaluation of normal and severely affected canine ear canals.

    PubMed

    Zur, Gila; Klement, Eyal

    2015-10-01

Morphological studies comparing normal and diseased ear canals primarily use subjective scoring. The aim of this study was to compare normal and severely affected ears in dogs with objective measurements using ImageJ software. Ear canals were harvested from cadavers with normal ears and from dogs that underwent total ear canal ablation for unresolved otitis. Histopathology samples from ear canals were evaluated by semi-quantitative scoring and also by using ImageJ software for histomorphometric measurements. The normal ears were compared to the severely affected ears using the 2 methods. The 2 methods were significantly (P < 0.0001) correlated for epidermal hyperplasia, ceruminous gland dilation and hyperplasia, and tissue inflammation, all of which were significantly greater in the severely affected ears (P < 0.0001). This study demonstrated that there is a very high correlation between the 2 methods for the most markedly affected components of otitis externa and that ImageJ software can be efficiently used to measure and evaluate ear canal histomorphometry.
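The kind of correlation reported here, between ordinal semi-quantitative scores and continuous morphometric measurements, is easy to check numerically. The data below are entirely hypothetical, not taken from the study:

```python
import numpy as np

# Hypothetical paired data: semi-quantitative scores (0-3) and ImageJ-style
# epidermal thickness measurements (micrometres) for the same sections.
scores = np.array([0, 0, 1, 1, 2, 2, 3, 3])
thickness_um = np.array([12.0, 15.0, 30.0, 28.0, 55.0, 60.0, 95.0, 110.0])

# Pearson correlation between the two scoring approaches.
r = np.corrcoef(scores, thickness_um)[0, 1]
```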

  16. SU-E-J-42: Customized Deformable Image Registration Using Open-Source Software SlicerRT

    SciTech Connect

    Gaitan, J Cifuentes; Chin, L; Pignol, J; Kirby, N; Pouliot, J; Lasso, A; Pinter, C; Fichtinger, G

    2014-06-01

Purpose: SlicerRT is a flexible platform that allows the user to incorporate the necessary image registration and processing tools to improve clinical workflow. This work validates the accuracy and versatility of the deformable image registration (DIR) algorithm of the free open-source software SlicerRT, using a deformable physical pelvic phantom, against available commercial image fusion algorithms. Methods: Optical camera images of nonradiopaque markers implanted in an anatomical pelvic phantom were used to measure the ground-truth deformation and evaluate the theoretical deformations for several DIR algorithms. To perform the registration, full and empty bladder computed tomography (CT) images of the phantom were obtained and used as fixed and moving images, respectively. The DIR module found in SlicerRT used a B-spline deformable image registration with multiple optimization parameters that allowed customization of the registration, including a regularization term that controlled the amount of local voxel displacement. The virtual deformation field at the center of the phantom was obtained and compared to the experimental ground-truth values. The parameters of SlicerRT were then varied to improve spatial accuracy. To quantify image similarity, the mean absolute difference (MAD) parameter in Hounsfield units was calculated. In addition, the Dice coefficient of the contoured rectum was evaluated to validate the strength of the algorithm in transferring anatomical contours. Results: Overall, SlicerRT achieved one of the lowest MAD values across the algorithm spectrum and slightly smaller mean spatial errors in comparison to MIM software (MIM). On the other hand, SlicerRT produced higher mean spatial errors than Velocity Medical Solutions (VEL), although it improved the Dice coefficient to 0.91. The large spatial errors were attributed to the poor contrast at the prostate-bladder interface of the phantom. Conclusion: Based on phantom validation, SlicerRT is capable of
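The two validation metrics used here, MAD in Hounsfield units and the Dice coefficient of a contoured structure, have compact definitions; a minimal sketch (not SlicerRT's implementation) is:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def mad_hu(img_a, img_b):
    """Mean absolute difference between two CT images in Hounsfield units."""
    return np.abs(img_a.astype(float) - img_b.astype(float)).mean()
```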

  17. Nuquantus: Machine learning software for the characterization and quantification of cell nuclei in complex immunofluorescent tissue images

    PubMed Central

    Gross, Polina; Honnorat, Nicolas; Varol, Erdem; Wallner, Markus; Trappanese, Danielle M.; Sharp, Thomas E.; Starosta, Timothy; Duran, Jason M.; Koller, Sarah; Davatzikos, Christos; Houser, Steven R.

    2016-01-01

    Determination of fundamental mechanisms of disease often hinges on histopathology visualization and quantitative image analysis. Currently, the analysis of multi-channel fluorescence tissue images is primarily achieved by manual measurements of tissue cellular content and sub-cellular compartments. Since the current manual methodology for image analysis is a tedious and subjective approach, there is clearly a need for an automated analytical technique to process large-scale image datasets. Here, we introduce Nuquantus (Nuclei quantification utility software) - a novel machine learning-based analytical method, which identifies, quantifies and classifies nuclei based on cells of interest in composite fluorescent tissue images, in which cell borders are not visible. Nuquantus is an adaptive framework that learns the morphological attributes of intact tissue in the presence of anatomical variability and pathological processes. Nuquantus allowed us to robustly perform quantitative image analysis on remodeling cardiac tissue after myocardial infarction. Nuquantus reliably classifies cardiomyocyte versus non-cardiomyocyte nuclei and detects cell proliferation, as well as cell death in different cell classes. Broadly, Nuquantus provides innovative computerized methodology to analyze complex tissue images that significantly facilitates image analysis and minimizes human bias. PMID:27005843
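Nuquantus's actual learning method is not detailed in this abstract; as a purely illustrative stand-in, a nearest-centroid rule over hypothetical nucleus feature vectors (e.g., area, eccentricity, marker intensity) captures the basic idea of assigning nuclei to learned classes:

```python
import numpy as np

def nearest_centroid_classify(train_X, train_y, X):
    """Assign each nucleus feature vector in X to the class whose training
    centroid is closest in Euclidean distance. A toy classifier, not the
    Nuquantus algorithm."""
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    # Pairwise distances: rows are samples, columns are class centroids.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]
```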

  18. Nuquantus: Machine learning software for the characterization and quantification of cell nuclei in complex immunofluorescent tissue images

    NASA Astrophysics Data System (ADS)

    Gross, Polina; Honnorat, Nicolas; Varol, Erdem; Wallner, Markus; Trappanese, Danielle M.; Sharp, Thomas E.; Starosta, Timothy; Duran, Jason M.; Koller, Sarah; Davatzikos, Christos; Houser, Steven R.

    2016-03-01

    Determination of fundamental mechanisms of disease often hinges on histopathology visualization and quantitative image analysis. Currently, the analysis of multi-channel fluorescence tissue images is primarily achieved by manual measurements of tissue cellular content and sub-cellular compartments. Since the current manual methodology for image analysis is a tedious and subjective approach, there is clearly a need for an automated analytical technique to process large-scale image datasets. Here, we introduce Nuquantus (Nuclei quantification utility software) - a novel machine learning-based analytical method, which identifies, quantifies and classifies nuclei based on cells of interest in composite fluorescent tissue images, in which cell borders are not visible. Nuquantus is an adaptive framework that learns the morphological attributes of intact tissue in the presence of anatomical variability and pathological processes. Nuquantus allowed us to robustly perform quantitative image analysis on remodeling cardiac tissue after myocardial infarction. Nuquantus reliably classifies cardiomyocyte versus non-cardiomyocyte nuclei and detects cell proliferation, as well as cell death in different cell classes. Broadly, Nuquantus provides innovative computerized methodology to analyze complex tissue images that significantly facilitates image analysis and minimizes human bias.

  19. Software workflow for the automatic tagging of medieval manuscript images (SWATI)

    NASA Astrophysics Data System (ADS)

    Chandna, Swati; Tonne, Danah; Jejkal, Thomas; Stotzka, Rainer; Krause, Celia; Vanscheidt, Philipp; Busch, Hannah; Prabhune, Ajinkya

    2015-01-01

Digital methods, tools and algorithms are gaining in importance for the analysis of digitized manuscript collections in the arts and humanities. One example is the BMBF-funded research project "eCodicology", which aims to design, evaluate and optimize algorithms for the automatic identification of macro- and micro-structural layout features of medieval manuscripts. The main goal of this research project is to provide better insights into high-dimensional datasets of medieval manuscripts for humanities scholars. The heterogeneous nature and size of the humanities data and the need to create a database of automatically extracted reproducible features for better statistical and visual analysis are the main challenges in designing a workflow for the arts and humanities. This paper presents a concept of a workflow for the automatic tagging of medieval manuscripts. As a starting point, the workflow uses medieval manuscripts digitized within the scope of the project "Virtual Scriptorium St. Matthias". Firstly, these digitized manuscripts are ingested into a data repository. Secondly, specific algorithms are adapted or designed for the identification of macro- and micro-structural layout elements like page size, writing space, number of lines, etc. Lastly, a statistical analysis and scientific evaluation of the manuscript groups are performed. The workflow is designed generically to process large amounts of data automatically with any desired algorithm for feature extraction. As a result, a database of objectified and reproducible features is created which helps to analyze and visualize hidden relationships of around 170,000 pages. The workflow shows the potential of automatic image analysis by enabling the processing of a single page in less than a minute. Furthermore, the accuracy tests of the workflow on a small set of manuscripts with respect to features like page size and text areas show that automatic and manual analysis are comparable. The usage of a computer
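A layout feature such as the writing space can be approximated as the bounding box of "ink" pixels on a binarized page image. This is a toy sketch of the idea; the threshold and the approach are assumptions, not the project's actual algorithms:

```python
import numpy as np

def text_area_bbox(page, ink_threshold=128):
    """Estimate the written area of a grayscale page scan as the bounding
    box (row0, row1, col0, col1) of pixels darker than ink_threshold."""
    ink = page < ink_threshold
    rows = np.any(ink, axis=1)
    cols = np.any(ink, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return int(r0), int(r1), int(c0), int(c1)
```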

  20. Determining of a robot workspace using the integration of a CAD system with a virtual control system

    NASA Astrophysics Data System (ADS)

    Herbuś, K.; Ociepka, P.

    2016-08-01

The paper presents a method for determining the workspace of an industrial robot based on integrating a 3D model of the robot with a virtual control system. The robot model, together with its work environment, was prepared for motion simulation in the “Motion Simulation” module of the Siemens PLM NX software. In this model, components of the “link” type were created, which map the geometric form of particular elements of the robot, together with components of the “joint” type, which map the way the “link” components cooperate. The paper proposes a solution in which the control process of the virtual robot resembles the control of a real robot using the manual control panel (teach pendant). For this purpose, the control application “JOINT” was created, which allows manipulating the virtual robot in accordance with its internal control system. The element integrating the 3D robot model, working in the CAD/CAE class system, with the elaborated control application is a set of procedures stored in an .xlsx file.
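Independently of the CAD-integration approach described above, a robot workspace can also be approximated numerically by sampling joint angles and evaluating forward kinematics. A minimal planar two-link sketch (link lengths and joint ranges are hypothetical):

```python
import numpy as np

def planar_workspace(l1, l2, q1_range, q2_range, n=10000, seed=0):
    """Monte Carlo workspace estimate for a planar 2-link arm: sample the
    joint angles uniformly and return the end-effector positions."""
    rng = np.random.default_rng(seed)
    q1 = rng.uniform(q1_range[0], q1_range[1], n)
    q2 = rng.uniform(q2_range[0], q2_range[1], n)
    # Forward kinematics of the two-link chain.
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return x, y
```

For unrestricted joints the sampled points fill the annulus between radii |l1 - l2| and l1 + l2, which is the analytic workspace of such an arm.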

  1. The 'Densitometric Image Analysis Software' and its application to determine stepwise equilibrium constants from electrophoretic mobility shift assays.

    PubMed

    van Oeffelen, Liesbeth; Peeters, Eveline; Nguyen Le Minh, Phu; Charlier, Daniël

    2014-01-01

Current software applications for densitometric analysis, such as ImageJ, QuantityOne (BioRad) and the Intelligent or Advanced Quantifier (Bio Image), do not allow the user to take the non-linearity of autoradiographic films into account during calibration. As a consequence, quantification of autoradiographs is often regarded as problematic, and phosphorimaging is the preferred alternative. However, the non-linear behaviour of autoradiographs can be described mathematically, so it can be accounted for. Therefore, the 'Densitometric Image Analysis Software' has been developed, which allows the user to quantify electrophoretic bands in autoradiographs, as well as in gels and phosphorimages, while providing optimized band-selection support. Moreover, the program can determine protein-DNA binding constants from Electrophoretic Mobility Shift Assays (EMSAs). For this purpose, the software calculates a chosen stepwise equilibrium constant for each migration lane within the EMSA, and estimates the errors due to non-uniformity of the background noise, smear caused by complex dissociation or denaturation of double-stranded DNA, and technical errors such as pipetting inaccuracies. Thereby, the program helps the user to optimize experimental parameters and to choose the best lanes for estimating an average equilibrium constant. This process can reduce the inaccuracy of equilibrium constants from the usual factor of 2 to about 20%, which is particularly useful when determining position weight matrices and cooperative binding constants to predict genomic binding sites. The MATLAB source code, platform-dependent software and installation instructions are available via the website http://micr.vub.ac.be. PMID:24465496
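For a simple one-step binding equilibrium P + D ⇌ PD with protein in large excess over DNA, a per-lane estimate of the association constant K = [PD]/([P][D]) follows directly from the bound-band fraction. The sketch below uses hypothetical band intensities, not data from the paper:

```python
import numpy as np

# Hypothetical titration series: total protein per lane and the fraction of
# probe in the bound band (from densitometry). Free protein ~ total protein.
protein_nM = np.array([5.0, 10.0, 20.0, 40.0])
bound = np.array([0.20, 0.33, 0.50, 0.67])
free = 1.0 - bound

# Per-lane stepwise equilibrium constant K = bound / (free * [P]), in M^-1,
# then averaged over lanes as the abstract describes.
K_per_lane = bound / (free * protein_nM * 1e-9)
K_mean = K_per_lane.mean()
```

A small spread of K_per_lane across lanes indicates the one-step model fits; large scatter would suggest smear, dissociation, or pipetting errors of the kind the software tries to quantify.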

  2. NiftyFit: a Software Package for Multi-parametric Model-Fitting of 4D Magnetic Resonance Imaging Data.

    PubMed

    Melbourne, Andrew; Toussaint, Nicolas; Owen, David; Simpson, Ivor; Anthopoulos, Thanasis; De Vita, Enrico; Atkinson, David; Ourselin, Sebastien

    2016-07-01

    Multi-modal, multi-parametric Magnetic Resonance (MR) Imaging is becoming an increasingly sophisticated tool for neuroimaging. The relationships between parameters estimated from different individual MR modalities have the potential to transform our understanding of brain function, structure, development and disease. This article describes a new software package for such multi-contrast Magnetic Resonance Imaging that provides a unified model-fitting framework. We describe model-fitting functionality for Arterial Spin Labeled MRI, T1 Relaxometry, T2 relaxometry and Diffusion Weighted imaging, providing command line documentation to generate the figures in the manuscript. Software and data (using the nifti file format) used in this article are simultaneously provided for download. We also present some extended applications of the joint model fitting framework applied to diffusion weighted imaging and T2 relaxometry, in order to both improve parameter estimation in these models and generate new parameters that link different MR modalities. NiftyFit is intended as a clear and open-source educational release so that the user may adapt and develop their own functionality as they require.
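As a simplified stand-in for the kind of model fitting NiftyFit performs (not its actual implementation), T2 relaxometry for a monoexponential decay S = S0·exp(-TE/T2) can be fitted log-linearly per voxel:

```python
import numpy as np

def fit_t2(echo_times_ms, signal):
    """Log-linear fit of S = S0 * exp(-TE/T2): regress ln(S) on TE and
    recover (S0, T2 in ms) from the intercept and slope."""
    slope, intercept = np.polyfit(echo_times_ms, np.log(signal), 1)
    return np.exp(intercept), -1.0 / slope
```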

  3. NiftyFit: a Software Package for Multi-parametric Model-Fitting of 4D Magnetic Resonance Imaging Data.

    PubMed

    Melbourne, Andrew; Toussaint, Nicolas; Owen, David; Simpson, Ivor; Anthopoulos, Thanasis; De Vita, Enrico; Atkinson, David; Ourselin, Sebastien

    2016-07-01

    Multi-modal, multi-parametric Magnetic Resonance (MR) Imaging is becoming an increasingly sophisticated tool for neuroimaging. The relationships between parameters estimated from different individual MR modalities have the potential to transform our understanding of brain function, structure, development and disease. This article describes a new software package for such multi-contrast Magnetic Resonance Imaging that provides a unified model-fitting framework. We describe model-fitting functionality for Arterial Spin Labeled MRI, T1 Relaxometry, T2 relaxometry and Diffusion Weighted imaging, providing command line documentation to generate the figures in the manuscript. Software and data (using the nifti file format) used in this article are simultaneously provided for download. We also present some extended applications of the joint model fitting framework applied to diffusion weighted imaging and T2 relaxometry, in order to both improve parameter estimation in these models and generate new parameters that link different MR modalities. NiftyFit is intended as a clear and open-source educational release so that the user may adapt and develop their own functionality as they require. PMID:26972806

  4. Filtering Chromatic Aberration for Wide Acceptance Angle Electrostatic Lenses II--Experimental Evaluation and Software-Based Imaging Energy Analyzer.

    PubMed

    Fazekas, Ádám; Daimon, Hiroshi; Matsuda, Hiroyuki; Tóth, László

    2016-03-01

Here, the experimental results of the method of filtering the effect of chromatic aberration for a wide acceptance angle electrostatic lens-based system are described. This method can eliminate the effect of chromatic aberration from the images of a measured spectral image sequence by determining and removing the effect of higher and lower kinetic energy electrons on each different energy image, which leads to significant improvement of image and spectral quality. The method is based on the numerical solution of a large system of linear equations and is equivalent to a multivariate, strongly nonlinear deconvolution method. A matrix whose elements describe the strongly nonlinear chromatic aberration-related transmission function of the lens system acts on the vector of the ordered pixels of the distortion-free spectral image sequence, and produces the vector of the ordered pixels of the measured spectral image sequence. Since the method can be applied not only to 2D real- and k-space diffraction images, but also along the third dimension of the image sequence in the 3D parameter space (the energy axis), it functions as a software-based imaging energy analyzer (SBIEA). It can also be applied, in the case of light or other types of optics, to different optical aberrations and distortions. In the case of electron optics, the SBIEA method makes spectral imaging possible without the application of any other energy filter. It is notable that this method also significantly reduces the disturbing background in the presently investigated case of reflection electron energy loss spectra. It eliminates the instrumental effects and makes it possible to measure the real physical processes more accurately. PMID:26863662
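At its core, the method solves a large linear system T·x = b, where T encodes the energy-dependent transmission of the lens system, x is the distortion-free spectral sequence, and b is the measured one. A toy dense-matrix sketch (the real system is far larger, and the matrix construction is the hard part):

```python
import numpy as np

def unmix_energy_channels(T, measured):
    """Recover the distortion-free spectral sequence x from measured = T @ x
    by least squares. T is a toy stand-in for the transmission matrix
    described in the abstract."""
    x, *_ = np.linalg.lstsq(T, measured, rcond=None)
    return x
```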

  5. Novel development tool for software pipeline optimization for VLIW-DSPs used in real-time image processing

    NASA Astrophysics Data System (ADS)

    Fuertler, Johannes; Mayer, Konrad J.; Krattenthaler, Werner; Bajla, Ivan

    2003-04-01

    Although the hardware platform is often seen as the most important element of real-time imaging systems, software optimization can also provide remarkable reduction of overall computational costs. The recommended code development flow for digital signal processors based on the TMS320C6000(TM) architecture usually involves three phases: development of C code, refinement of C code, and programming linear assembly code. Each step requires a different level of knowledge of processor internals. The developer is not directly involved in the automatic scheduling process. In some cases, however, this may result in unacceptable code performance. A better solution can be achieved by scheduling the assembly code by hand. Unfortunately, scheduling of software pipelines by hand not only requires expert skills but is also time consuming, and moreover, prone to errors. To overcome these drawbacks we have designed an innovative development tool - the Software Pipeline Optimization Tool (SPOT(TM)). The SPOT is based on visualization of the scheduled assembly code by a two-dimensional interactive schedule editor, which is equipped with feedback mechanisms deduced from analysis of data dependencies and resource allocation conflicts. The paper addresses optimization techniques available by the application of the SPOT. Furthermore, the benefit of the SPOT is documented by more than 20 optimized image processing algorithms.

  6. NeuroGam Software Analysis in Epilepsy Diagnosis Using 99mTc-ECD Brain Perfusion SPECT Imaging.

    PubMed

    Fu, Peng; Zhang, Fang; Gao, Jianqing; Jing, Jianmin; Pan, Liping; Li, Dongxue; Wei, Lingge

    2015-09-20

BACKGROUND The aim of this study was to explore the value of NeuroGam software in the diagnosis of epilepsy by 99Tcm-ethyl cysteinate dimer (ECD) brain imaging. MATERIAL AND METHODS NeuroGam was used to analyze 52 cases of clinically proven epilepsy by 99Tcm-ECD brain imaging. The results were compared with EEG and MRI, and the positive rates and localization of epileptic foci were analyzed. RESULTS NeuroGam analysis showed that 42 of 52 epilepsy cases were abnormal. 99Tcm-ECD brain imaging revealed a positive rate of 80.8% (42/52), with 36 of 42 patients (85.7%) clearly showing an abnormal area. Both rates were higher than those of conventional brain perfusion SPECT, with a consistency of 64.5% (34/52) between the 2 methods. Decreased regional cerebral blood flow (rCBF) was observed in the frontal (18), temporal (20), and parietal lobes (2). Decreased rCBF was seen in the frontal and temporal lobes in 4 of 36 patients, and in the temporal and parietal lobes of 2 of 36 patients. NeuroGam further showed that the abnormal area was located in a different functional area of the brain. EEG abnormalities were detected in 29 of 52 patients (55.8%), with 16 cases (55.2%) clearly showing an abnormal area. MRI abnormalities were detected in 17 of 43 cases (39.5%), including 9 cases (52.9%) clearly showing an abnormal area. The consistency of NeuroGam software analysis with EEG and MRI was 48.1% (25/52) and 34.9% (15/43), respectively. CONCLUSIONS NeuroGam software analysis offers higher sensitivity in detecting epilepsy than EEG or MRI. It is a powerful tool in 99Tcm-ECD brain imaging.

  7. SU-E-J-104: Evaluation of Accuracy for Various Deformable Image Registrations with Virtual Deformation QA Software

    SciTech Connect

    Han, S; Kim, K; Kim, M; Jung, H; Ji, Y; Choi, S; Park, S

    2015-06-15

Purpose: The accuracy of deformable image registration (DIR) has a significant dosimetric impact in radiation treatment planning. We evaluated the accuracy of various DIR algorithms using virtual deformation QA software (ImSimQA, Oncology System Limited, UK). Methods: The reference image (Iref) and volume (Vref) were first generated with the ImSimQA software. We deformed Iref by axial movement of a deformation point, and deformed Vref in one of two ways: deformation 1 increased Vref (relaxation) and deformation 2 decreased Vref (contraction). The deformed image (Idef) and volume (Vdef) were then inversely deformed back toward Iref and Vref using the DIR algorithms, yielding the inversely deformed image (Iid) and volume (Vid). The DIR algorithms were the optical flow (HS, IOF) and demons (MD, FD) algorithms of DIRART. Image similarity between Iref and Iid was evaluated with Normalized Mutual Information (NMI) and Normalized Cross Correlation (NCC); the Dice Similarity Coefficient (DSC) was used to evaluate volume similarity. Results: When the moving distance of the deformation point was 4 mm, NMI was above 1.81 and NCC was above 0.99 for all DIR algorithms. As the degree of deformation increased, image similarity decreased. When Vref increased or decreased by about 12%, the difference between Vref and Vid was within ±5% regardless of the type of deformation. DSC was above 0.95 in deformation 1 except for the MD algorithm; in deformation 2, DSC was above 0.95 for all DIR algorithms. Conclusion: Idef and Vdef were not completely restored to Iref and Vref, and the accuracy of the DIR algorithms differed depending on the degree of deformation. Hence, the performance of DIR algorithms should be verified for the desired applications.
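Of the similarity measures used here, NCC has the most compact definition: the mean product of the two images after each is standardized to zero mean and unit variance. A minimal sketch:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two same-sized images.
    Returns 1 for images identical up to brightness/contrast, -1 for
    inverted images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return (a * b).mean()
```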

  8. Towards a Collaborative Online Workspace and Unified Standards for Geochemical Data

    NASA Astrophysics Data System (ADS)

    Mernagh, T. P.; Treloar, A.; Wyborn, L. A.

    2011-12-01

stores at the institution or elsewhere. They are developing a national discovery service that enables access to data in institutional stores with rich context. No data is stored in this system, only metadata with pointers back to the original data. This enables researchers to keep their own data but also enables access to many repositories at once. Such a system will require standardisation at all phases of the process of analytical geochemistry. The geochemistry community needs to work together to develop standards for attributes as the data are collected from the instrument, to develop more standardised processing of the raw data, and to agree on what is required for publishing. An online collaborative workspace such as this would be ideal for geochemical data, and the provision of standardised, open source software would greatly enhance the persistence of individual geochemistry data collections and facilitate reuse and repurposing. This conforms to the guidelines from Geoinformatics for Geochemistry (http://www.geoinfogeochem.org/), which requires metadata on how the samples were analysed.

  9. Online Workspace to Connect Scientists with NASA's Science E/PO Efforts and Practitioners

    NASA Astrophysics Data System (ADS)

Shipp, Stephanie; Bartolone, Lindsay; Peticolas, Laura; Woroner, Morgan; Dalton, Heather; Schwerin, Theresa; Smith, Denise

    2014-11-01

There is a growing awareness of the need for a scientifically literate public in light of challenges facing society today, and also a growing concern about the preparedness of our future workforce to meet those challenges. Federal priorities for science, technology, engineering, and math (STEM) education call for improvement of teacher training, increased youth and public engagement, greater involvement of underrepresented populations, and investment in undergraduate and graduate education. How can planetary scientists contribute to these priorities? How can they “make their work and findings comprehensible, appealing, and available to the public” as called for in the Planetary Decadal Survey? NASA’s Science Mission Directorate (SMD) Education and Public Outreach (E/PO) workspace provides the SMD E/PO community of practice - scientists and educators funded to conduct SMD E/PO or those using NASA’s science discoveries in E/PO endeavors - with an online environment in which to communicate, collaborate, and coordinate activities, thus helping to increase the effectiveness of E/PO efforts. The workspace offers interested scientists avenues to partner with SMD E/PO practitioners and learn about E/PO projects and impacts, as well as to advertise their own efforts to reach a broader audience. Through the workspace, scientists can become aware of opportunities for involvement and explore resources to improve professional practice, including literature reviews of best practices for program impact, mechanisms for engaging diverse audiences, and large- and small-scale program evaluation. Scientists will find “how to” manuals for getting started and increasing impact with public presentations, classroom visits, and other audiences, as well as primers with activity ideas and resources that can augment E/PO interactions with different audiences. The poster will introduce the workspace to interested scientists and highlight pathways to resources of interest that can help

  10. Mission planning for Shuttle Imaging Radar-C (SIR-C) with a real-time interactive planning software

    NASA Technical Reports Server (NTRS)

    Potts, Su K.

    1993-01-01

The Shuttle Imaging Radar-C (SIR-C) mission will operate from the payload bay of the space shuttle for 8 days, gathering Synthetic Aperture Radar (SAR) data over specific sites on the Earth. The short duration of the mission and the requirement for real-time planning pose challenges in mission planning and in the design of the Planning and Analysis Subsystem (PAS). The PAS generates shuttle ephemerides and mission planning data and provides an interactive real-time tool for quick mission replanning. It offers a multi-user and multiprocessing environment, and it is able to keep multiple versions of the mission timeline data while maintaining data integrity and security. Its flexible design allows a single software system to provide different menu options based on the user's operational function, and makes it easy to tailor the software for other Earth-orbiting missions.

  11. Histostitcher™: An informatics software platform for reconstructing whole-mount prostate histology using the extensible imaging platform framework

    PubMed Central

    Toth, Robert J.; Shih, Natalie; Tomaszewski, John E.; Feldman, Michael D.; Kutter, Oliver; Yu, Daphne N.; Paulus, John C.; Paladini, Ginaluca; Madabhushi, Anant

    2014-01-01

Context: Co-registration of ex-vivo histologic images with pre-operative imaging (e.g., magnetic resonance imaging [MRI]) can be used to align and map disease extent, and to identify quantitative imaging signatures. However, ex-vivo histology images are frequently sectioned into quarters prior to imaging. Aims: This work presents Histostitcher™, a software system designed to create a pseudo whole mount histology section (WMHS) from a stitching of four individual histology quadrant images. Materials and Methods: Histostitcher™ uses user-identified fiducials on the boundary of two quadrants to stitch such quadrants. An original prototype of Histostitcher™ was designed using the Matlab programming language. However, clinical use was limited due to slow performance, computer memory constraints and an inefficient workflow. The latest version was created using the extensible imaging platform (XIP™) architecture in the C++ programming language. A fast, graphics processor unit renderer was designed to intelligently cache the visible parts of the histology quadrants, and the workflow was significantly improved to allow modifying existing fiducials, fast transformations of the quadrants and saving/loading sessions. Results: The new stitching platform yielded significantly more efficient workflow and reconstruction than the previous prototype. It was tested on a traditional desktop computer, a Windows 8 Surface Pro tablet device and a 27-inch multi-touch display, with little performance difference between the different devices. Conclusions: Histostitcher™ is a fast, efficient framework for reconstructing pseudo WMHS from individually imaged quadrants. The highly modular XIP™ framework was used to develop an intuitive interface and future work will entail mapping the disease extent from the pseudo WMHS onto pre-operative MRI. PMID:24843820
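Fiducial-based stitching of two quadrants reduces to estimating a rigid transform (rotation plus translation) from matched point pairs. A least-squares, Kabsch-style sketch in 2-D, illustrative only and not Histostitcher's actual code:

```python
import numpy as np

def rigid_from_fiducials(src, dst):
    """Least-squares rigid transform mapping fiducial points src (N x 2)
    onto matched points dst (N x 2): returns (R, t) with dst ~ src @ R.T + t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                       # cross-covariance of the pairs
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```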

  12. Mathematically gifted adolescents mobilize enhanced workspace configuration of theta cortical network during deductive reasoning.

    PubMed

    Zhang, L; Gan, J Q; Wang, H

    2015-03-19

Previous studies have established the importance of the fronto-parietal brain network in the information processing of reasoning. At the level of cortical source analysis, this electroencephalogram (EEG) study investigates the functional reorganization of the theta-band (4-8 Hz) neurocognitive network of mathematically gifted adolescents during deductive reasoning. Based on the dense increase of long-range phase synchronizations in the reasoning process, math-gifted adolescents show more significant adaptive reorganization and enhanced "workspace" configuration in the theta network as compared with average-ability control subjects. The salient areas are mainly located in the anterior cortical vertices of the fronto-parietal network. Further correlation analyses have shown that the enhanced workspace configuration with respect to the global topological metrics of the theta network in math-gifted subjects is correlated with the intensive frontal midline theta (fm theta) response that is related to strong neural effort for cognitive events. These results suggest that by investing more cognitive resources math-gifted adolescents temporally mobilize an enhanced task-related global neuronal workspace, which is manifested as a highly integrated fronto-parietal information processing network during the reasoning process. PMID:25595993

  13. Fundamentally Distributed Information Processing Integrates the Motor Network into the Mental Workspace during Mental Rotation.

    PubMed

    Schlegel, Alexander; Konuthula, Dedeepya; Alexander, Prescott; Blackwood, Ethan; Tse, Peter U

    2016-08-01

    The manipulation of mental representations in the human brain appears to share similarities with the physical manipulation of real-world objects. In particular, some neuroimaging studies have found increased activity in motor regions during mental rotation, suggesting that mental and physical operations may involve overlapping neural populations. Does the motor network contribute information processing to mental rotation? If so, does it play a similar computational role in both mental and manual rotation, and how does it communicate with the wider network of areas involved in the mental workspace? Here we used multivariate methods and fMRI to study 24 participants as they mentally rotated 3-D objects or manually rotated their hands in one of four directions. We find that information processing related to mental rotations is distributed widely among many cortical and subcortical regions, that the motor network becomes tightly integrated into a wider mental workspace network during mental rotation, and that motor network activity during mental rotation only partially resembles that involved in manual rotation. Additionally, these findings provide evidence that the mental workspace is organized as a distributed core network that dynamically recruits specialized subnetworks for specific tasks as needed. PMID:27054403

  14. Mathematically gifted adolescents mobilize enhanced workspace configuration of theta cortical network during deductive reasoning.

    PubMed

    Zhang, L; Gan, J Q; Wang, H

    2015-03-19

    Previous studies have established the importance of the fronto-parietal brain network in the information processing of reasoning. At the level of cortical source analysis, this electroencephalogram (EEG) study investigates the functional reorganization of the theta-band (4-8 Hz) neurocognitive network of mathematically gifted adolescents during deductive reasoning. Based on the dense increase of long-range phase synchronizations in the reasoning process, math-gifted adolescents show more significant adaptive reorganization and enhanced "workspace" configuration in the theta network as compared with average-ability control subjects. The salient areas are mainly located in the anterior cortical vertices of the fronto-parietal network. Further correlation analyses show that the enhanced workspace configuration, with respect to the global topological metrics of the theta network, in math-gifted subjects is correlated with the intensive frontal midline theta (fm theta) response that is related to strong neural effort for cognitive events. These results suggest that by investing more cognitive resources math-gifted adolescents transiently mobilize an enhanced task-related global neuronal workspace, which is manifested as a highly integrated fronto-parietal information processing network during the reasoning process.

  15. New technique to count mosquito adults: using ImageJ software to estimate number of mosquito adults in a trap.

    PubMed

    Kesavaraju, Banugopan; Dickson, Sammie

    2012-12-01

    A new technique is described here to count mosquitoes using open-source software. We wanted to develop a protocol that would estimate the total number of mosquitoes from a picture using ImageJ. Adult mosquitoes from CO2-baited traps were spread on a tray and photographed. The total number of mosquitoes in a picture was estimated using various calibrations on ImageJ, and results were compared with manual counting to identify the ideal calibration. The average trap count was 1,541, and the average difference between the manual count and the best calibration was 174.11 +/- 21.59, with 93% correlation. Subsequently, contents of a trap were photographed 5 different times after they were shuffled between each picture to alter the picture pattern of adult mosquitoes. The standard error among variations stayed below 50, indicating limited variation for total count between pictures of the same trap when the pictures were processed through ImageJ. These results indicate the software could be utilized efficiently to estimate total number of mosquitoes from traps.
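    The thresholded-area idea behind the ImageJ protocol can be sketched outside ImageJ as well. The following minimal Python/SciPy sketch (function name, gray-level threshold and per-mosquito body area are illustrative assumptions, not the authors' calibration) counts dark blobs on a light tray and also scales the total dark area by an average single-body area, which tolerates touching specimens:

```python
import numpy as np
from scipy import ndimage

def estimate_count(gray, threshold=80, mean_body_area=120):
    """Estimate the number of insects in a grayscale tray photograph.

    Pixels darker than `threshold` are taken as insect bodies. The
    count is the larger of the connected-component count and the total
    dark area divided by the average single-body area, so clumps of
    touching specimens are not undercounted.
    """
    mask = gray < threshold                 # dark objects on a light tray
    labeled, n_blobs = ndimage.label(mask)  # connected components
    total_area = int(mask.sum())
    return max(n_blobs, round(total_area / mean_body_area))
```

With a calibrated `mean_body_area`, repeated photographs of the same shuffled tray should give stable estimates, which is essentially the consistency the authors report.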

  16. The MicroAnalysis Toolkit: X-ray Fluorescence Image Processing Software

    SciTech Connect

    Webb, S. M.

    2011-09-09

    The MicroAnalysis Toolkit is an analysis suite designed for the processing of x-ray fluorescence microprobe data. The program contains a wide variety of analysis tools, including image maps, correlation plots, simple image math, image filtering, multiple energy image fitting, semi-quantitative elemental analysis, x-ray fluorescence spectrum analysis, principal component analysis, and tomographic reconstructions. To be as widely useful as possible, data formats from many synchrotron sources can be read by the program, with more formats available by request. An overview of the most common features will be presented.

  17. ImaSim, a software tool for basic education of medical x-ray imaging in radiotherapy and radiology

    NASA Astrophysics Data System (ADS)

    Landry, Guillaume; deBlois, François; Verhaegen, Frank

    2013-11-01

    Introduction: X-ray imaging is an important part of medicine and plays a crucial role in radiotherapy. Education in this field is mostly limited to textbook teaching due to equipment restrictions. A novel simulation tool, ImaSim, for teaching the fundamentals of the x-ray imaging process based on ray-tracing is presented in this work. ImaSim is used interactively via a graphical user interface (GUI). Materials and methods: The software package covers the main x-ray based medical modalities: planar kilovoltage (kV), planar (portal) megavoltage (MV), fan beam computed tomography (CT) and cone beam CT (CBCT) imaging. The user can modify the photon source, object to be imaged and imaging setup with three-dimensional editors. Objects are currently obtained by combining blocks with variable shapes. The imaging of three-dimensional voxelized geometries is currently not implemented, but can be added in a later release. The program follows a ray-tracing approach, ignoring photon scatter in its current implementation. Simulations of a phantom CT scan were generated in ImaSim and were compared to measured data in terms of CT number accuracy. Spatial variations in the photon fluence and mean energy from an x-ray tube caused by the heel effect were estimated from ImaSim and Monte Carlo simulations and compared. Results: In this paper we describe ImaSim and provide two examples of its capabilities. CT numbers were found to agree within 36 Hounsfield Units (HU) for bone, which corresponds to a 2% attenuation coefficient difference. ImaSim reproduced the heel effect reasonably well when compared to Monte Carlo simulations. Discussion: An x-ray imaging simulation tool is made available for teaching and research purposes. ImaSim provides a means to facilitate the teaching of medical x-ray imaging.
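    A scatter-free ray-tracing model of this kind reduces, per detector ray, to Beer's law over the materials the ray crosses, and the CT-number comparison rests on the standard Hounsfield conversion. A minimal sketch under those simplifications (function names are illustrative, not ImaSim's API):

```python
import math

def primary_transmission(i0, path):
    """Beer's law primary-photon transmission along one ray.

    `path` is a list of (mu_per_cm, thickness_cm) pairs for the
    materials crossed; scatter is ignored, as in a pure ray tracer.
    """
    optical_depth = sum(mu * t for mu, t in path)
    return i0 * math.exp(-optical_depth)

def hounsfield(mu, mu_water):
    """Convert a linear attenuation coefficient to a CT number (HU)."""
    return 1000.0 * (mu - mu_water) / mu_water
```

The quoted 36 HU agreement for bone corresponding to about a 2% attenuation-coefficient difference follows directly from the Hounsfield scaling above.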

  18. Analysis of a marine phototrophic biofilm by confocal laser scanning microscopy using the new image quantification software PHLIP

    PubMed Central

    Mueller, Lukas N; de Brouwer, Jody FC; Almeida, Jonas S; Stal, Lucas J; Xavier, João B

    2006-01-01

    Background: Confocal laser scanning microscopy (CLSM) is the method of choice to study interfacial biofilms and acquires time-resolved three-dimensional data of the biofilm structure. CLSM can be used in a multi-channel mode where the different channels map individual biofilm components. This communication presents a novel image quantification tool, PHLIP, for the quantitative analysis of large amounts of multichannel CLSM data in an automated way; PHLIP is freely available for download. Results: PHLIP is an open source public license Matlab toolbox that includes functions for CLSM imaging data handling and ten image analysis operations describing various aspects of biofilm morphology. The use of PHLIP is here demonstrated by a study of the development of a natural marine phototrophic biofilm. It is shown how the examination of the individual biofilm components using the multi-channel capability of PHLIP allowed the description of the dynamic spatial and temporal separation of diatoms, bacteria and organic and inorganic matter during the shift from a bacteria-dominated to a diatom-dominated phototrophic biofilm. Reflection images and weight measurements complementing the PHLIP analyses suggest that a large part of the biofilm mass consisted of inorganic mineral material. Conclusion: The presented case study reveals new insight into the temporal development of a phototrophic biofilm, where multi-channel imaging allowed the dynamics of the individual biofilm components to be monitored in parallel over time. This application demonstrates the power of multi-channel CLSM image analysis and the value of PHLIP for the scientific community as a flexible and extendable platform for automated image processing. PMID:16412253

  19. PCID and ASPIRE 2.0 - The Next Generation of AMOS Image Processing Software

    NASA Astrophysics Data System (ADS)

    Matson, C.; Soo Hoo, T.; Murphy, M.; Calef, B.; Beckner, C.; You, S.

    One of the missions of the Air Force Maui Optical and Supercomputing (AMOS) site is to generate high-resolution images of space objects using the Air Force telescopes located on Haleakala. Because atmospheric turbulence greatly reduces the resolution of space object images collected with ground-based telescopes, methods for overcoming atmospheric blurring are necessary. One such method is the use of adaptive optics systems to measure and compensate for atmospheric blurring in real time. A second method is to use image restoration algorithms on one or more short-exposure images of the space object under consideration. At AMOS, both methods are used routinely. In the case of adaptive optics, rarely can all atmospheric turbulence effects be removed from the imagery, so image restoration algorithms are useful even for adaptive-optics-corrected images. Historically, the bispectrum algorithm has been the primary image restoration algorithm used at AMOS. It has the advantages of being extremely fast (processing times of less than one second) and insensitive to atmospheric phase distortions. In addition, multi-frame blind deconvolution (MFBD) algorithms have also been used for image restoration. It has been observed empirically and with the use of computer simulation studies that MFBD algorithms produce higher-resolution image restorations than does the bispectrum algorithm. MFBD algorithms also do not need separate measurements of a star in order to work. However, in the past, MFBD algorithms have been factors of one hundred or more slower than the bispectrum algorithm, limiting their use to non-time-critical image restorations. Recently, with the financial support of AMOS and the High-Performance Computing Modernization Office, an MFBD algorithm called Physically-Constrained Iterative Deconvolution (PCID) has been efficiently parallelized and is able to produce image restorations in only a few seconds. 
In addition, with the financial support of AFOSR, it has been shown

  20. msIQuant: Quantitation Software for Mass Spectrometry Imaging Enabling Fast Access, Visualization, and Analysis of Large Data Sets.

    PubMed

    Källback, Patrik; Nilsson, Anna; Shariatgorji, Mohammadreza; Andrén, Per E

    2016-04-19

    This paper presents msIQuant, a novel instrument- and manufacturer-independent quantitative mass spectrometry imaging software suite that uses the standardized open access data format imzML. Its data processing structure enables rapid image display and the analysis of very large data sets (>50 GB) without any data reduction. In addition, msIQuant provides many tools for image visualization including multiple interpolation methods, low intensity transparency display, and image fusion. It also has a quantitation function that automatically generates calibration standard curves from series of standards that can be used to determine the concentrations of specific analytes. Regions-of-interest in a tissue section can be analyzed based on a number of quantities including the number of pixels, average intensity, standard deviation of intensity, and median and quartile intensities. Moreover, the suite's export functions enable simplified postprocessing of data and report creation. We demonstrate its potential through several applications including the quantitation of small molecules such as drugs and neurotransmitters. The msIQuant suite is a powerful tool for accessing and evaluating very large data sets, quantifying drugs and endogenous compounds in tissue areas of interest, and for processing mass spectra and images.
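    The automatically generated calibration-curve step can be illustrated with a small sketch (the function name and the linear model are assumptions for illustration; msIQuant's actual fitting options are not documented here): fit a line through the dilution series of standards, then invert it to map a measured intensity back to a concentration.

```python
import numpy as np

def make_calibration(concentrations, intensities):
    """Fit a calibration line through a series of standards.

    Returns a function that converts a measured signal intensity back
    to an estimated analyte concentration, the inversion behind an
    automatically generated standard curve.
    """
    slope, intercept = np.polyfit(concentrations, intensities, 1)

    def to_concentration(intensity):
        return (intensity - intercept) / slope

    return to_concentration
```

Applied per pixel inside a region of interest, the same inversion yields the per-region concentration statistics (mean, median, quartiles) the suite reports.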

  1. Software-based mitigation of image degradation due to atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Huebner, Claudia S.; Scheifling, Corinne

    2010-10-01

    Motion-Compensated Averaging (MCA) with blind deconvolution has proven successful in mitigating turbulence effects like image dancing and blurring. In this paper an image quality control according to the "Lucky Imaging" principle is combined with the MCA procedure, weighting good frames more heavily than bad ones and skipping a given percentage of extremely degraded frames entirely. To account for local isoplanatism, where image dancing produces local displacements between consecutive frames rather than global shifts only, a locally operating MCA variant with block matching, proposed in earlier work, is employed. In order to reduce loss of detail due to plain averaging, various combinations of temporal mode, median and mean are tested as the reference image. The respective restoration results by means of a weighted blind deconvolution algorithm are presented and evaluated.
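    The frame selection and weighting described above can be sketched as follows (the quality metric, skip fraction and function name are assumptions for illustration; the paper's exact quality control is not reproduced). Sharpness is scored by gradient energy, the worst frames are skipped, and the rest are averaged with quality weights:

```python
import numpy as np

def lucky_weighted_average(frames, skip_fraction=0.2):
    """Quality-weighted temporal average of co-registered frames.

    Each frame is scored by its mean gradient energy (sharper frames
    score higher); the worst `skip_fraction` of frames is discarded
    outright and the remainder averaged with quality weights, in the
    spirit of lucky-imaging frame selection.
    """
    frames = np.asarray(frames, dtype=float)
    gy, gx = np.gradient(frames, axis=(1, 2))
    quality = (gx**2 + gy**2).mean(axis=(1, 2))
    keep = quality >= np.quantile(quality, skip_fraction)
    w = quality[keep] / quality[keep].sum()
    return np.tensordot(w, frames[keep], axes=1)
```

The weighted average then serves as the reference image handed to the blind deconvolution stage.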

  2. Useful diagnostic biometabolic data obtained by PET/CT and MR fusion imaging using open source software.

    PubMed

    Antonica, Filippo; Asabella, Artor Niccoli; Ferrari, Cristina; Rubini, Domenico; Notaristefano, Antonio; Nicoletti, Adriano; Altini, Corinna; Merenda, Nunzio; Mossa, Emilio; Guarini, Attilio; Rubini, Giuseppe

    2014-01-01

    In the last decade numerous attempts have been made to co-register and integrate different imaging data. As with PET/CT, the integration of PET with MR has attracted great interest, and PET/MR scanners have recently been tested on various regional and systemic pathologies. Unfortunately PET/MR scanners are expensive and diagnostic protocols are still under study and investigation. Nuclear medicine imaging highlights functional and biometabolic information but has poor anatomic detail. The aim of this study is to integrate MR and PET data to produce regional or whole-body fused images acquired from different scanners, even on different days. We propose an offline method to fuse PET with MR data using an open-source software tool that is inexpensive, reproducible and capable of exchanging data over the network. We also evaluate the global quality, alignment quality, and diagnostic confidence of fused PET-MR images. We selected PET/CT studies performed in our nuclear medicine unit and MR studies provided by patients on DICOM CD media or received over the network. We used the OsiriX 5.7 open source version. We aligned CT slices with the first MR slice, pointed and marked for co-registration using the MR-T1 sequence and CT as reference, and fused with PET to produce a PET-MR image. A total of 100 PET/CT studies were fused with the following MR studies: 20 head, 15 thorax, 24 abdomen, 31 pelvis, 10 whole body. An interval of no more than 15 days between PET and MR was the inclusion criterion. PET/CT, MR and fused studies were evaluated by two experienced radiologists and two experienced nuclear medicine physicians. Each one completed a five-point evaluation scoring scheme based on image quality, image artifacts, segmentation errors, fusion misalignment and diagnostic confidence. Our fusion method showed the best results for the head, thorax and pelvic regions in terms of global quality, alignment quality and diagnostic confidence, while for the abdomen and pelvis alignment quality and global quality resulted

  3. User's Guide for MapIMG 2: Map Image Re-projection Software Package

    USGS Publications Warehouse

    Finn, Michael P.; Trent, Jason R.; Buehler, Robert A.

    2006-01-01

    BACKGROUND Scientists routinely accomplish small-scale geospatial modeling in the raster domain, using high-resolution datasets for large parts of continents and low-resolution to high-resolution datasets for the entire globe. Direct implementation of point-to-point transformation with appropriate functions yields the variety of projections available in commercial software packages, but implementation with data other than points requires specific adaptation of the transformation equations or prior preparation of the data to allow the transformation to succeed. It seems that some of these packages use the U.S. Geological Survey's (USGS) General Cartographic Transformation Package (GCTP) or similar point transformations without adaptation to the specific characteristics of raster data (Usery and others, 2003a). Usery and others (2003b) compiled and tabulated the accuracy of categorical areas in projected raster datasets of global extent. Based on the shortcomings identified in these studies, geographers and applications programmers at the USGS expanded and evolved a USGS software package, MapIMG, for raster map projection transformation (Finn and Trent, 2004). Daniel R. Steinwand of Science Applications International Corporation, National Center for Earth Resources Observation and Science, originally developed MapIMG for the USGS, basing it on GCTP. Through previous and continuing efforts at the USGS' National Geospatial Technical Operations Center, this program has been transformed from an application based on command line input into a software package based on a graphical user interface for Windows, Linux, and other UNIX machines.

  4. [CASTOR-Radiology: software of management in a Unit of Medical Imaging: use in the CHU of Tours].

    PubMed

    Bertrand, P; Rouleau, P; Alison, D; Bristeau, M; Minard, P; Saad, B

    1993-01-01

    Despite the large volume of information circulating in radiology departments, very few of them are currently computerised, although computer processing is developing rapidly in hospitals, encouraged by the installation of PMSI. This article illustrates the example of an imaging department management software package: CASTOR-Radiologie. Computerisation of part of the Hospital Information System (HIS) must allow an improvement in the efficacy of the service rendered, must reliably reflect the department's activity and must be able to monitor the running costs. CASTOR-Radiologie was developed in conformity with standard national specifications defined by the Public Hospitals Department of the French Ministry of Health. The functions of this software are: unique patient identification, as part of the HIS base; management of examination requests, allowing a rapid reply to clinicians' requests; "real-time" follow-up of patients in the department, saving time for secretaries and technicians; medical files and file analysis, allowing analysis of diagnostic strategies and quality control; edition of analytical tables of the department's activity compatible with the PMSI procedures catalogue, allowing optimisation of the use of limited resources; and aid to the management of human, equipment and consumable resources. Links with other hospital computers raise organisational rather than technical problems, but have been planned for in the CASTOR-Radiologie software. This new tool was very well accepted by the personnel.

  5. Quick and easy molecular weight determination with Macintosh computers and public domain image analysis software.

    PubMed

    Seebacher, T; Bade, E G

    1996-10-01

    The program "molecular weights" allows a fast and easy estimation of molecular weights (M(r)), isoelectric point (pI) values and band intensities directly from scanned, polyacrylamide gels, two-dimensional protein patterns and DNA gel images. The image coordinates of M(r) and pI reference standards enable the program to calculate M(r) and pI values in a real time manner for any cursor position. The program requires NIH-Image for Macintosh computers and includes automatic band detection coupled with a densitometric evaluation.
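    The real-time M(r) readout rests on the roughly log-linear relation between marker size and migration distance in a gel. A minimal sketch of that calibration step (the function and the two-point ladder are hypothetical illustrations, not the program's code):

```python
import numpy as np

def molecular_weight(distance, ladder):
    """Estimate an unknown band's molecular weight from a ladder.

    `ladder` maps migration distance (e.g. pixels) of reference
    markers to their known sizes (kDa). Since log10(M_r) is roughly
    linear in migration distance, a straight-line fit lets any cursor
    position be converted to kDa on the fly.
    """
    d = np.array(list(ladder))
    logm = np.log10([ladder[k] for k in ladder])
    slope, intercept = np.polyfit(d, logm, 1)
    return 10 ** (slope * distance + intercept)
```

The same scheme, with pI standards in place of size markers, covers the isoelectric-point axis of a two-dimensional gel.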

  6. Automated Scoring of Chromogenic Media for Detection of Methicillin-Resistant Staphylococcus aureus by Use of WASPLab Image Analysis Software.

    PubMed

    Faron, Matthew L; Buchan, Blake W; Vismara, Chiara; Lacchini, Carla; Bielli, Alessandra; Gesu, Giovanni; Liebregts, Theo; van Bree, Anita; Jansz, Arjan; Soucy, Genevieve; Korver, John; Ledeboer, Nathan A

    2016-03-01

    Recently, systems have been developed to create total laboratory automation for clinical microbiology. These systems allow for the automation of specimen processing, specimen incubation, and imaging of bacterial growth. In this study, we used the WASPLab to validate software that discriminates and segregates positive and negative chromogenic methicillin-resistant Staphylococcus aureus (MRSA) plates by recognition of pigmented colonies. A total of 57,690 swabs submitted for MRSA screening were enrolled in the study. Four sites enrolled specimens following their standard of care. Chromogenic agar used at these sites included MRSASelect (Bio-Rad Laboratories, Redmond, WA), chromID MRSA (bioMérieux, Marcy l'Etoile, France), and CHROMagar MRSA (BD Diagnostics, Sparks, MD). Specimens were plated and incubated using the WASPLab. The digital camera took images at 0 and 16 to 24 h and the WASPLab software determined the presence of positive colonies based on a hue, saturation, and value (HSV) score. If the HSV score fell within a defined threshold, the plate was called positive. The performance of the digital analysis was compared to manual reading. Overall, the digital software had a sensitivity of 100% and a specificity of 90.7%, with the specificity ranging between 90.0% and 96.0% across all sites. The results were similar using the three different agars, with a sensitivity of 100% and specificity ranging between 90.7% and 92.4%. These data demonstrate that automated digital analysis can be used to accurately sort positive from negative chromogenic agar cultures regardless of the pigmentation produced. PMID:26719443
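    The HSV-score decision rule can be illustrated with a toy sketch (the thresholds, hue range, pixel count and function name are invented for illustration and are not the vendor's values): convert colony pixels to HSV and call the plate positive once enough pixels fall inside the pigment's hue band with sufficient saturation and brightness.

```python
import colorsys

def plate_is_positive(rgb_pixels, hue_range=(0.85, 0.98),
                      min_sat=0.3, min_val=0.2, min_hits=50):
    """Call a chromogenic plate positive from colony pixel colour.

    Each RGB pixel (floats in 0-1) is converted to HSV; pixels whose
    hue falls within the pigment's range with sufficient saturation
    and value count as colony pixels, and the plate is positive once
    `min_hits` such pixels are seen.
    """
    hits = 0
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if hue_range[0] <= h <= hue_range[1] and s >= min_sat and v >= min_val:
            hits += 1
            if hits >= min_hits:
                return True
    return False
```

Because the rule acts on hue rather than absolute colour, the same logic can cover differently pigmented chromogenic media by swapping the hue band.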

  7. Novel collaboration and situational awareness environment for leaders and their support staff via self assembling software.

    SciTech Connect

    Bouchard, Ann Marie; Osbourn, Gordon Cecil; Bartholomew, John Warren

    2008-02-01

    This is the final report on the Sandia Fellow LDRD, project 117865, 08-0281. This presents an investigation of self-assembling software intended to create shared workspace environment to allow online collaboration and situational awareness for use by high level managers and their teams.

  8. TGS_FIT: Image reconstruction software for quantitative, low-resolution tomographic assays

    SciTech Connect

    Estep, R J

    1993-01-01

    We developed the computer program TGS_FIT to aid in researching the tomographic gamma scanner method of nondestructive assay. This software, written in the C programming language, implements a full Beer's law attenuation correction in reconstructing low-resolution emission tomograms. The attenuation coefficients for the corrections are obtained by reconstructing a transmission tomogram of the same resolution. The command-driven interface, combined with (crude) simulation capabilities and command file control, allows design studies to be performed in a semi-automated manner.
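    The Beer's law correction pairs a transmission measurement, which yields attenuation coefficients, with a correction of the emission counts. A minimal sketch under simplified single-ray assumptions (function names are illustrative, not TGS_FIT's interface):

```python
import math

def attenuation_coefficient(i0, i, path_cm):
    """Recover a linear attenuation coefficient from a transmission ray.

    Inverts Beer's law I = I0 * exp(-mu * L) for mu; this is the
    quantity a transmission tomogram supplies voxel by voxel.
    """
    return math.log(i0 / i) / path_cm

def corrected_emission(counts, mu, depth_cm):
    """Undo self-attenuation of emission counts from a source at depth."""
    return counts * math.exp(mu * depth_cm)
```

In a full reconstruction the scalar `mu * depth_cm` becomes a line integral of the attenuation map along each emission ray, but the exponential correction has the same form.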

  9. Accuracy and reliability of linear measurements using 3-dimensional computed tomographic imaging software for Le Fort I Osteotomy.

    PubMed

    Gaia, Bruno Felipe; Pinheiro, Lucas Rodrigues; Umetsubo, Otávio Shoite; Santos, Oseas; Costa, Felipe Ferreira; Cavalcanti, Marcelo Gusmão Paraíso

    2014-03-01

    Our purpose was to compare the accuracy and reliability of linear measurements for Le Fort I osteotomy using volume rendering software. We studied 11 dried skulls and used cone-beam computed tomography (CT) to generate 3-dimensional images. Linear measurements were based on craniometric anatomical landmarks predefined as specifically used for Le Fort I osteotomy, identified twice each by 2 radiologists, independently, using Dolphin imaging version 11.5.04.35. A third examiner then made physical measurements using digital calipers. There was a significant difference between Dolphin imaging and the gold standard, particularly in the pterygoid process. The largest difference was 1.85 mm (LLpPtg L). The mean differences between the physical and the 3-dimensional linear measurements ranged from -0.01 to 1.12 mm for examiner 1, and 0 to 1.85 mm for examiner 2. Interexaminer correlation coefficients ranged from 0.51 to 0.93. Intraexaminer correlation coefficients ranged from 0.81 to 0.96 and 0.57 to 0.92 for examiners 1 and 2, respectively. We conclude that Dolphin imaging should be used sparingly during Le Fort I osteotomy.

  10. Seismic reflection imaging of underground cavities using open-source software

    SciTech Connect

    Mellors, R J

    2011-12-20

    The Comprehensive Nuclear Test Ban Treaty (CTBT) includes provisions for an on-site inspection (OSI), which allows the use of specific techniques to detect underground anomalies including cavities and rubble zones. One permitted technique is active seismic surveys such as seismic refraction or reflection. The purpose of this report is to conduct some simple modeling to evaluate the potential use of seismic reflection in detecting cavities and to test the use of open-source software in modeling possible scenarios. It should be noted that OSI inspections are conducted under specific constraints regarding duration and logistics. These constraints are likely to significantly impact active seismic surveying, as a seismic survey typically requires considerable equipment, effort, and expertise. For the purposes of this study, which is a first-order feasibility study, these issues will not be considered. This report provides a brief description of the seismic reflection method along with some commonly used software packages. This is followed by an outline of a simple processing stream based on a synthetic model, along with results from a set of models representing underground cavities. A set of scripts used to generate the models are presented in an appendix. We do not consider detection of underground facilities in this work and the geologic setting used in these tests is an extremely simple one.

  11. Familiarity effects in the construction of facial-composite images using modern software systems.

    PubMed

    Frowd, Charlie D; Skelton, Faye C; Butt, Neelam; Hassan, Amal; Fields, Stephen; Hancock, Peter J B

    2011-12-01

    We investigate the effect of target familiarity on the construction of facial composites, as used by law enforcement to locate criminal suspects. Two popular software construction methods were investigated. Participants were shown a target face that was either familiar or unfamiliar to them and constructed a composite of it from memory using a typical 'feature' system, involving selection of individual facial features, or one of the newer 'holistic' types, involving repeated selection and breeding from arrays of whole faces. This study found that composites constructed of a familiar face were named more successfully than composites of an unfamiliar face; also, naming of composites of internal and external features was equivalent for construction of unfamiliar targets, but internal features were better named than external features for familiar targets. These findings applied to both systems, although a benefit emerged for the holistic type due to more accurate construction of internal features and evidence of a whole-face advantage. STATEMENT OF RELEVANCE: This work is of relevance to practitioners who construct facial composites with witnesses to and victims of crime, as well as for software designers to help them improve the effectiveness of their composite systems.

  12. Integration of instrumentation and processing software of a laser speckle contrast imaging system

    NASA Astrophysics Data System (ADS)

    Carrick, Jacob J.

    Laser speckle contrast imaging (LSCI) has the potential to be a powerful tool in medicine, but more research in the field is required so it can be used properly. To help in the progression of Michigan Tech's research in the field, a graphical user interface (GUI) was designed in Matlab to control the instrumentation of the experiments as well as process the raw speckle images into contrast images while they are being acquired. The design of the system was successful and is currently being used by Michigan Tech's Biomedical Engineering department. This thesis describes the development of the LSCI GUI as well as offering a full introduction into the history, theory and applications of LSCI.

  13. Automated image mosaics by non-automated light microscopes: the MicroMos software tool.

    PubMed

    Piccinini, F; Bevilacqua, A; Lucarelli, E

    2013-12-01

    Light widefield microscopes and digital imaging are the basis for most of the analyses performed in every biological laboratory. In particular, the microscope's user is typically interested in acquiring high-detailed images for analysing observed cells and tissues, meanwhile being representative of a wide area to have reliable statistics. The microscopist has to choose between a higher magnification factor and extension of the observed area, due to the finite size of the camera's field of view. To overcome this trade-off, mosaicing techniques have been developed in the past decades for increasing the camera's field of view by stitching together more images. Nevertheless, these approaches typically work in batch mode and rely on motorized microscopes. Alternatively, the methods are conceived just to provide visually pleasant mosaics not suitable for quantitative analyses. This work presents a tool for building mosaics of images acquired with non-automated light microscopes. The method proposed is based on visual information only and the mosaics are built by incrementally stitching couples of images, making the approach suitable also for online applications. Seams in the stitching regions as well as tonal inhomogeneities are corrected by compensating the vignetting effect. In the experiments performed, we tested different registration approaches, confirming that the translation model is not always the best, despite the fact that the motion of the sample holder of the microscope is apparently translational and typically considered as such. The method's implementation is freely distributed as an open source tool called MicroMos. Its usability makes building mosaics of microscope images at subpixel accuracy easier. Furthermore, optional parameters for building mosaics according to different strategies make MicroMos an easy and reliable tool to compare different registration approaches, warping models and tonal corrections.
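    Pairwise stitching of this kind needs a shift estimate between overlapping images; a standard way to obtain one under the translation model is phase correlation, sketched below (a generic illustration, not MicroMos's actual registration code):

```python
import numpy as np

def translation_offset(img_a, img_b):
    """Estimate the integer (dy, dx) shift of img_b relative to img_a.

    Classic phase correlation: the normalized cross-power spectrum of
    two translated images has an inverse FFT that peaks at the shift.
    """
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12   # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Map peaks past the midpoint to negative shifts.
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)
```

As the abstract notes, a pure translation is not always the best model; in practice an estimate like this often seeds a richer warping model rather than serving as the final registration.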

  14. Tools for Scientist Engagement in E/PO: NASA SMD Community Workspace and Online Resources

    NASA Astrophysics Data System (ADS)

    Dalton, H.; Shipp, S. S.; Grier, J.; Gross, N. A.; Buxner, S.; Bartolone, L.; Peticolas, L. M.; Woroner, M.; Schwerin, T. G.

    2014-12-01

    The Science Mission Directorate (SMD) Science Education and Public Outreach (E/PO) Forums are here to help you get involved in E/PO! The Forums have been developing several online resources to support scientists who are - or who are interested in becoming - involved in E/PO. These include NASA Wavelength, EarthSpace, and the SMD E/PO online community workspace. NASA Wavelength is the one-stop shop of all peer-reviewed NASA education resources to find materials you - or your audiences - can use. Browse by audience (pre-K through 12, higher education, and informal education) or topic, or choose to search for something specific by keyword and audience. http://nasawavelength.org. EarthSpace, an online clearinghouse of Earth and space materials for use in the higher education classroom, is driven by a powerful search engine that allows you to browse the collection of resources by science topic, audience, type of material or key terms. All materials are peer-reviewed before posting, and because all submissions receive a digital object identifier (doi), submitted materials can be listed as publications. http://www.lpi.usra.edu/earthspace. The SMD E/PO online community workspace contains many resources for scientists. These include one-page guides on how to get involved, tips on how to make the most of your time spent on E/PO, and sample activities, as well as news on funding, policy, and what's happening in the E/PO community. The workspace also provides scientists and the public pathways to find opportunities for participation in E/PO, to learn about SMD E/PO projects and their impacts, to connect with SMD E/PO practitioners, and to explore resources to improve professional E/PO practice, including literature reviews, information about the Next Generation Science Standards, and best practices in evaluation and engaging diverse audiences. http://smdepo.org.

  15. Improved modified pressure imaging and software for egg micro-crack detection and egg quality grading

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Cracks in the egg shell increase food safety risk. In particular, eggs with very fine, hairline cracks (micro-cracks) often go undetected during the grading process because they are almost impossible to detect visually. A modified pressure imaging system was developed to detect eggs with micro-crack...

  16. Object-oriented design provides flexible framework for electrophysiology software toolbox - biomed 2010.

    PubMed

    Gruner, Charlotte M

    2010-01-01

    This work describes a software platform that supports an expandable toolbox for electrophysiology data analysis. The current focus of the toolbox, known as NeuroMAX, is on spike-sorting and spike-time analysis tools. A key feature of the toolbox is the ability for a user to connect tools into a workspace toolchain in a flexible, intelligent feed-forward manner that allows a tool to use any previously computed data set as input. Tool parameters can be saved and applied to other data sets. Tools and workspaces can be accessed to process data either directly from the MATLAB command line or from the NeuroMAX GUI. This work discusses the object-oriented design of the toolbox, including the data classes, workspace classes, and tool classes created to achieve this functionality. PMID:20467111

  17. Full-sun synchronic EUV and coronal hole mapping using multi-instrument images: Data and software made available

    NASA Astrophysics Data System (ADS)

    Caplan, R. M.; Downs, C.; Linker, J.

    2015-12-01

    A method for the automatic generation of EUV and coronal hole (CH) maps using simultaneous multi-instrument imaging data is described. Synchronized EUV images from STEREO/EUVI A&B 195Å and SDO/AIA 193Å undergo preprocessing steps that include PSF-deconvolution and the application of nonlinear data-derived intensity corrections that account for center-to-limb variations (limb-brightening) and inter-instrument intensity normalization. The latter two corrections are derived using a robust, systematic approach that takes advantage of unbiased long-term averages of data, and they serve to flatten the images by converting all pixel intensities to a unified disk-center equivalent. While the range of applications is broad, we demonstrate how this technique is very useful for CH detection, as it enables the use of a fast and simplified image segmentation algorithm to obtain consistent detection results. The multi-instrument nature of the technique also allows one to track evolving features consistently for longer periods than is possible with a single instrument, and preliminary results quantifying CH area and shape evolution are shown. Most importantly, several data and software products are made available to the community for use. For the ~4 year period of 6/10/2010 to 8/18/2014, we provide synchronic EUV and coronal hole maps at 6-hour cadence as well as the data-derived limb-brightening and inter-instrument correction factors that we applied. We also make available a ready-to-use MATLAB script EUV2CHM used to generate the maps, which loads EUV images, applies our preprocessing steps, and then uses our GPU-accelerated/CPU-multithreaded segmentation algorithm EZSEG to detect coronal holes.
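    The flattening step described above, dividing each on-disk pixel by a center-to-limb correction factor evaluated at its μ = cos θ, can be sketched as below. This is a minimal illustration, not the authors' MATLAB pipeline; the polynomial coefficients are placeholder assumptions standing in for the data-derived factors they distribute.

```python
import numpy as np

def mu_map(shape, cx, cy, r_disk):
    """Cosine of the heliocentric angle (mu) for each on-disk pixel; NaN off-disk."""
    y, x = np.indices(shape)
    rho = np.hypot(x - cx, y - cy) / r_disk       # fractional distance from disk centre
    with np.errstate(invalid="ignore"):
        mu = np.sqrt(1.0 - rho ** 2)              # mu = cos(theta) = sqrt(1 - rho^2)
    mu = np.where(rho > 1.0, np.nan, mu)
    return mu

def limb_factor(mu, coeffs=(1.5, -0.5)):
    """Illustrative limb-brightening factor as a low-order polynomial in mu.
    Placeholder coefficients: brighter toward the limb (mu -> 0)."""
    return np.polyval(coeffs[::-1], mu)           # c0 + c1*mu

def flatten_to_disk_center(img, cx, cy, r_disk, coeffs=(1.5, -0.5)):
    """Convert pixel intensities to a disk-centre equivalent by dividing out f(mu)/f(1)."""
    mu = mu_map(img.shape, cx, cy, r_disk)
    f = limb_factor(mu, coeffs) / limb_factor(1.0, coeffs)
    return img / f
```

    With real data, the coefficients would come from fitting long-term intensity averages as a function of μ, which is what makes the correction "data-derived" rather than theoretical.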

  18. Gamma-H2AX foci counting: image processing and control software for high-content screening

    NASA Astrophysics Data System (ADS)

    Barber, P. R.; Locke, R. J.; Pierce, G. P.; Rothkamm, K.; Vojnovic, B.

    2007-02-01

    Phosphorylation of the chromatin protein H2AX (forming γH2AX) is implicated in the repair of DNA double strand breaks (DSBs); a large number of H2AX molecules become phosphorylated at the sites of DSBs. Fluorescent staining of the cell nuclei for γH2AX, via an antibody, visualises the formation of these foci, allowing the quantification of DNA DSBs and forming the basis for a sensitive biological dosimeter of ionising radiation. We describe an automated fluorescence microscopy system, including automated image processing, to count γH2AX foci. The image processing is performed by a Hough transform based algorithm, CHARM, which has wide applicability for the detection and analysis of cells and cell colonies. This algorithm and its applications for cell nucleus and foci detection will be described. The system also relies heavily on robust control software, written using multi-threaded C-based modules in LabWindows/CVI, that adapt to the timing requirements of a particular experiment for optimised slide/plate scanning and mosaicing, making use of modern multi-core processors. The system forms the basis of a general purpose high-content screening platform with wide-ranging applications in live and fixed cell imaging and tissue microarrays that, in the future, can incorporate spectrally and time-resolved information.
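    The counting task itself can be illustrated with a deliberately simple stand-in: thresholding followed by connected-component labelling with a minimum-area filter. This is not the CHARM Hough-transform algorithm the abstract describes; it is a minimal sketch of foci counting, with hypothetical parameter values.

```python
import numpy as np

def count_foci(img, threshold, min_area=2):
    """Count bright foci by thresholding and 4-connected component labelling (flood fill).
    Components smaller than min_area pixels are treated as noise and discarded."""
    mask = img > threshold
    seen = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                # flood-fill this component, measuring its pixel area
                stack, area = [(i, j)], 0
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if area >= min_area:
                    count += 1
    return count
```

    A Hough-based detector such as CHARM instead votes for circular shapes, which is more robust to touching foci and uneven staining; the thresholding sketch above only conveys the input/output contract of the counting step.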

  19. Grid-less imaging with antiscatter correction software in 2D mammography: the effects on image quality and MGD under a partial virtual clinical validation study

    NASA Astrophysics Data System (ADS)

    Van Peteghem, Nelis; Bemelmans, Frédéric; Bramaje Adversalo, Xenia; Salvagnini, Elena; Marshall, Nicholas; Bosmans, Hilde; Van Ongeval, Chantal

    2016-03-01

    This work investigated the effect of the grid-less acquisition mode with scatter correction software developed by Siemens Healthcare (PRIME mode) on image quality and mean glandular dose (MGD) in a comparative study against a standard mammography system with grid. Image quality was technically quantified with contrast-detail (c-d) analysis and by calculating detectability indices (d') using a non-prewhitening with eye filter model observer (NPWE). MGD was estimated technically using slabs of PMMA and clinically on a set of 11439 patient images. The c-d analysis gave similar results for all mammographic systems examined, although the d' values were slightly lower for the system with PRIME mode when compared to the same system in standard mode (-2.8% to -5.7%, depending on the PMMA thickness). The MGD values corresponding to the PMMA measurements with automatic exposure control indicated a dose reduction from 11.0% to 20.8% for the system with PRIME mode compared to the same system without PRIME mode. The largest dose reductions corresponded to the thinnest PMMA thicknesses. The results from the clinical dosimetry study showed an overall population-averaged dose reduction of 11.6% (up to 27.7% for thinner breasts) for PRIME mode compared to standard mode for breast thicknesses from 20 to 69 mm. These technical image quality measures were then supported using a clinically oriented study whereby simulated clusters of microcalcifications and masses were inserted into patient images and read by radiologists in an AFROC study to quantify their detectability. In line with the technical investigation, no significant difference was found between the two imaging modes (p-value 0.95).

  20. Photon counting imaging and centroiding with an electron-bombarded CCD using single molecule localisation software

    NASA Astrophysics Data System (ADS)

    Hirvonen, Liisa M.; Barber, Matthew J.; Suhling, Klaus

    2016-06-01

    Photon event centroiding in photon counting imaging and single-molecule localisation in super-resolution fluorescence microscopy share many traits. Although photon event centroiding has traditionally been performed with simple single-iteration algorithms, we recently reported that iterative fitting algorithms originally developed for single-molecule localisation fluorescence microscopy work very well when applied to centroiding photon events imaged with an MCP-intensified CMOS camera. Here, we have applied these algorithms for centroiding of photon events from an electron-bombarded CCD (EBCCD). We find that centroiding algorithms based on iterative fitting of the photon events yield excellent results and allow fitting of overlapping photon events, a feature not reported before and an important aspect to facilitate an increased count rate and shorter acquisition times.
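    The contrast between the two approaches can be sketched minimally: a single-pass centre-of-mass centroid versus an iteratively re-centred, Gaussian-weighted estimate. The iterative function below only illustrates the spirit of iterative refinement; actual single-molecule localisation software performs full least-squares or maximum-likelihood Gaussian fits, and the window width is a hypothetical parameter.

```python
import numpy as np

def centroid_com(roi):
    """Single-pass centre-of-mass centroid (the traditional 'single-iteration' approach)."""
    y, x = np.indices(roi.shape)
    s = roi.sum()
    return (x * roi).sum() / s, (y * roi).sum() / s

def centroid_iterative(roi, sigma=1.5, n_iter=10):
    """Iteratively re-centre a Gaussian weighting window on the photon event.
    Each pass down-weights pixels far from the current estimate, then recomputes it."""
    cy, cx = [(n - 1) / 2.0 for n in roi.shape]   # start at the ROI centre
    y, x = np.indices(roi.shape)
    for _ in range(n_iter):
        w = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2)) * roi
        cx = (x * w).sum() / w.sum()
        cy = (y * w).sum() / w.sum()
    return cx, cy
```

    On a well-isolated event both estimators agree; the advantage of full iterative fitting reported in the abstract shows up for overlapping events, which a plain centre of mass cannot separate.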

  1. Improved structure, function and compatibility for CellProfiler: modular high-throughput image analysis software

    PubMed Central

    Kamentsky, Lee; Jones, Thouis R.; Fraser, Adam; Bray, Mark-Anthony; Logan, David J.; Madden, Katherine L.; Ljosa, Vebjorn; Rueden, Curtis; Eliceiri, Kevin W.; Carpenter, Anne E.

    2011-01-01

    Summary: There is a strong and growing need in the biology research community for accurate, automated image analysis. Here, we describe CellProfiler 2.0, which has been engineered to meet the needs of its growing user base. It is more robust and user friendly, with new algorithms and features to facilitate high-throughput work. ImageJ plugins can now be run within a CellProfiler pipeline. Availability and Implementation: CellProfiler 2.0 is free and open source, available at http://www.cellprofiler.org under the GPL v. 2 license. It is available as a packaged application for Macintosh OS X and Microsoft Windows and can be compiled for Linux. Contact: anne@broadinstitute.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21349861

  2. Photon counting imaging and centroiding with an electron-bombarded CCD using single molecule localisation software

    PubMed Central

    Hirvonen, Liisa M.; Barber, Matthew J.; Suhling, Klaus

    2016-01-01

    Photon event centroiding in photon counting imaging and single-molecule localisation in super-resolution fluorescence microscopy share many traits. Although photon event centroiding has traditionally been performed with simple single-iteration algorithms, we recently reported that iterative fitting algorithms originally developed for single-molecule localisation fluorescence microscopy work very well when applied to centroiding photon events imaged with an MCP-intensified CMOS camera. Here, we have applied these algorithms for centroiding of photon events from an electron-bombarded CCD (EBCCD). We find that centroiding algorithms based on iterative fitting of the photon events yield excellent results and allow fitting of overlapping photon events, a feature not reported before and an important aspect to facilitate an increased count rate and shorter acquisition times. PMID:27274604

  3. Software programmable multi-mode interface for nuclear-medical imaging

    SciTech Connect

    Zubal, I.G.; Rowe, R.W.; Bizais, Y.J.C.; Bennett, G.W.; Brill, A.B.

    1982-01-01

    An innovative multi-port interface allows gamma camera events (spatial coordinates and energy) to be acquired concurrently with a sampling of physiological patient data. The versatility of the interface permits all conventional static, dynamic, and tomographic imaging modes, in addition to multi-hole coded aperture acquisition. The acquired list mode data may be analyzed or gated on the basis of various camera, isotopic, or physiological parameters.

  4. Development of fast patient position verification software using 2D-3D image registration and its clinical experience.

    PubMed

    Mori, Shinichiro; Kumagai, Motoki; Miki, Kentaro; Fukuhara, Riki; Haneishi, Hideaki

    2015-09-01

    To improve treatment workflow, we developed a graphics processing unit (GPU)-based patient positional verification software application and integrated it into carbon-ion scanning beam treatment. Here, we evaluated the basic performance of the software. The algorithm provides 2D/3D registration matching using CT and orthogonal X-ray flat panel detector (FPD) images. The participants were 53 patients with tumors of the head and neck, prostate or lung receiving carbon-ion beam treatment. 2D/3D-ITchi-Gime (ITG) calculation accuracy was evaluated in terms of computation time and registration accuracy. Registration calculation was determined using the similarity measurement metrics gradient difference (GD), normalized mutual information (NMI), zero-mean normalized cross-correlation (ZNCC), and their combination. Registration accuracy was dependent on the particular metric used. Representative examples were determined to have target registration error (TRE) = 0.45 ± 0.23 mm and angular error (AE) = 0.35 ± 0.18° with ZNCC + GD for a head and neck tumor; TRE = 0.12 ± 0.07 mm and AE = 0.16 ± 0.07° with ZNCC for a pelvic tumor; and TRE = 1.19 ± 0.78 mm and AE = 0.83 ± 0.61° with ZNCC for a lung tumor. Calculation time was less than 7.26 s. The new registration software has been successfully installed and implemented in our treatment process. We expect that it will improve both treatment workflow and treatment accuracy. PMID:26081313
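    Of the similarity metrics listed, ZNCC is the simplest to state: subtract each image's mean, then normalise the cross-correlation by both standard deviations. A minimal sketch (not the authors' GPU implementation) is:

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalised cross-correlation between two equally-sized images.
    Returns +1 for images identical up to positive affine intensity scaling,
    -1 for perfectly anti-correlated images."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)
```

    In a 2D/3D registration loop of this kind, the metric is evaluated between the FPD image and a digitally reconstructed radiograph rendered from the CT at each candidate pose, and the optimiser searches for the pose maximising the score; the mean subtraction is what makes ZNCC insensitive to the intensity offsets between the two modalities.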

  5. Development of fast patient position verification software using 2D-3D image registration and its clinical experience

    PubMed Central

    Mori, Shinichiro; Kumagai, Motoki; Miki, Kentaro; Fukuhara, Riki; Haneishi, Hideaki

    2015-01-01

    To improve treatment workflow, we developed a graphics processing unit (GPU)-based patient positional verification software application and integrated it into carbon-ion scanning beam treatment. Here, we evaluated the basic performance of the software. The algorithm provides 2D/3D registration matching using CT and orthogonal X-ray flat panel detector (FPD) images. The participants were 53 patients with tumors of the head and neck, prostate or lung receiving carbon-ion beam treatment. 2D/3D-ITchi-Gime (ITG) calculation accuracy was evaluated in terms of computation time and registration accuracy. Registration calculation was determined using the similarity measurement metrics gradient difference (GD), normalized mutual information (NMI), zero-mean normalized cross-correlation (ZNCC), and their combination. Registration accuracy was dependent on the particular metric used. Representative examples were determined to have target registration error (TRE) = 0.45 ± 0.23 mm and angular error (AE) = 0.35 ± 0.18° with ZNCC + GD for a head and neck tumor; TRE = 0.12 ± 0.07 mm and AE = 0.16 ± 0.07° with ZNCC for a pelvic tumor; and TRE = 1.19 ± 0.78 mm and AE = 0.83 ± 0.61° with ZNCC for a lung tumor. Calculation time was less than 7.26 s. The new registration software has been successfully installed and implemented in our treatment process. We expect that it will improve both treatment workflow and treatment accuracy. PMID:26081313

  6. The Java Image Science Toolkit (JIST) for Rapid Prototyping and Publishing of Neuroimaging Software

    PubMed Central

    Lucas, Blake C.; Bogovic, John A.; Carass, Aaron; Bazin, Pierre-Louis; Prince, Jerry L.; Pham, Dzung

    2010-01-01

    Non-invasive neuroimaging techniques enable extraordinarily sensitive and specific in vivo study of the structure, functional response and connectivity of biological mechanisms. With these advanced methods comes a heavy reliance on computer-based processing, analysis and interpretation. While the neuroimaging community has produced many excellent academic and commercial tool packages, new tools are often required to interpret new modalities and paradigms. Developing custom tools and ensuring interoperability with existing tools is a significant hurdle. To address these limitations, we present a new framework for algorithm development that implicitly ensures tool interoperability, generates graphical user interfaces, provides advanced batch processing tools, and, most importantly, requires minimal additional programming or computational overhead. Java-based rapid prototyping with this system is an efficient and practical approach to evaluate new algorithms since the proposed system ensures that rapidly constructed prototypes are actually fully-functional processing modules with support for multiple GUIs, a broad range of file formats, and distributed computation. Herein, we demonstrate MRI image processing with the proposed system for cortical surface extraction in large cross-sectional cohorts, provide a system for fully automated diffusion tensor image analysis, and illustrate how the system can be used as a simulation framework for the development of a new image analysis method. The system is released as open source under the Lesser GNU Public License (LGPL) through the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC). PMID:20077162

  7. PITRE: software for phase-sensitive X-ray image processing and tomography reconstruction.

    PubMed

    Chen, Rong Chang; Dreossi, Diego; Mancini, Lucia; Menk, Ralf; Rigon, Luigi; Xiao, Ti Qiao; Longo, Renata

    2012-09-01

    Synchrotron-radiation computed tomography has been applied in many research fields. Here, PITRE (Phase-sensitive X-ray Image processing and Tomography REconstruction) and PITRE_BM (PITRE Batch Manager) are presented. PITRE supports phase retrieval for propagation-based phase-contrast imaging/tomography (PPCI/PPCT), extracts apparent absorption, refractive and scattering information of diffraction enhanced imaging (DEI), and allows parallel-beam tomography reconstruction for conventional absorption CT data and for PPCT phase retrieved and DEI-CT extracted information. PITRE_BM is a batch processing manager for PITRE: it executes a series of tasks, created via PITRE, without manual intervention. Both PITRE and PITRE_BM are coded in Interactive Data Language (IDL), and have a user-friendly graphical user interface. They are freeware and can run on Microsoft Windows systems via IDL Virtual Machine, which can be downloaded for free and does not require a license. The data-processing principle and some examples of application will be presented.

  8. ORBS, ORCS, OACS, a Software Suite for Data Reduction and Analysis of the Hyperspectral Imagers SITELLE and SpIOMM

    NASA Astrophysics Data System (ADS)

    Martin, T.; Drissen, L.; Joncas, G.

    2015-09-01

    SITELLE (installed in 2015 at the Canada-France-Hawaii Telescope) and SpIOMM (a prototype attached to the Observatoire du Mont-Mégantic) are the first Imaging Fourier Transform Spectrometers (IFTS) capable of obtaining a hyperspectral data cube which samples a 12-arcminute field of view into four million visible spectra. The result of each observation is made up of two interferometric data cubes which need to be merged, corrected, transformed and calibrated in order to get a spectral cube of the observed region ready to be analysed. ORBS is a fully automatic data reduction software that has been entirely designed for this purpose. The data size (up to 68 GB for larger science cases) and the computational needs have been challenging, and the highly parallelized object-oriented architecture of ORBS reflects the solutions adopted, which made it possible to process 68 GB of raw data in less than 11 hours using 8 cores and 22.6 GB of RAM. It is based on a core framework (ORB) that has been designed to support the whole software suite for data analysis (ORCS and OACS), data simulation (ORUS) and data acquisition (IRIS). They all aim to provide a strong basis for the creation and development of specialized analysis modules that could benefit the scientific community working with SITELLE and SpIOMM.

  9. Orbit Determination and Gravity Field Estimation of the Dawn spacecraft at Vesta Using Radiometric and Image Constraints with GEODYN Software

    NASA Astrophysics Data System (ADS)

    Centinello, F. J.; Zuber, M. T.; Mazarico, E.

    2013-12-01

    The Dawn spacecraft orbited the protoplanet Vesta from May 3, 2011 to July 25, 2012. Precise orbit determination was critical for the geophysical investigation, as well as the definition of the Vesta-fixed reference frame and the subsequent registration of datasets to the surface. GEODYN, the orbit determination and geodetic parameter estimation software of NASA Goddard Space Flight Center, was used to compute the orbit of the Dawn spacecraft and estimate the gravity field of Vesta. GEODYN utilizes radiometric Doppler and range measurements, and was modified to process image data from Dawn's cameras. X-band radiometric measurements were acquired by the NASA Deep Space Network (DSN). The addition of the capability to process image constraints decreases position uncertainty in the along-track and cross-track directions because of their geometric strength compared with radiometric measurements. This capability becomes critical for planetary missions such as Dawn due to the weak gravity environment, where non-conservative forces affect the orbit more than is typical of orbits about larger planetary bodies. Radiometric measurements were fit to less than 0.1 mm/s and 5 m for Doppler and range during the Survey orbit phase (compared with measurement noise RMS of about 0.05 mm/s and 2 m for Doppler and range). Image constraint RMS was fit to less than 100 m (resolution is 5-150 m/pixel, depending on the spacecraft altitude). Orbits computed using GEODYN were used to estimate a 20th degree and order gravity field of Vesta. The quality of the orbit determination and estimated gravity field with and without image constraints was assessed through comparison with the spacecraft trajectory and gravity model provided by the Dawn Science Team.

  10. Kinematic, workspace and singularity analysis of a new parallel robot used in minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Stoica, Alin; Pisla, Doina; Andras, Szilaghyi; Gherman, Bogdan; Gyurka, Bela-Zoltan; Plitea, Nicolae

    2013-03-01

    In the last ten years, minimally invasive surgery has changed greatly due to developments in robotic-assisted surgery. Until now, the vast majority of robots used in surgery have had serial structures. Due to its parallel orientation module, the proposed structure is able to reduce the pressure exerted on the entry point in the patient's abdominal wall. The parallel robot can also handle both a laparoscope and an active instrument for different surgical procedures. An advantage of this parallel structure is that the geometric model has been obtained through an analytical approach. The kinematic modelling of the new parallel architecture is presented: the inverse and direct geometric models and the inverse and direct kinematic models for velocities and accelerations are determined. The paper demonstrates that with this parallel structure one can obtain the workspace required for a minimally invasive operation. The robot workspace was generated using the inverse geometric model. An in-depth study of different types of singularity is performed, allowing the development of safe control algorithms for the experimental model. Some kinematic simulation results and the experimental model of the robot are presented in the paper.
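    Generating a workspace from an inverse geometric model, as described above, amounts to sweeping a grid of candidate end-effector poses and keeping those for which the model returns a feasible solution. The sketch below uses a planar 2R serial arm rather than the paper's parallel structure, purely to illustrate the procedure; link lengths and grid resolution are arbitrary assumptions.

```python
import numpy as np

def ik_2r(x, y, l1=1.0, l2=0.8):
    """Inverse geometric model of a planar 2R arm (elbow-down branch).
    Returns joint angles (t1, t2) or None if (x, y) is unreachable."""
    d2 = x * x + y * y
    c2 = (d2 - l1 ** 2 - l2 ** 2) / (2 * l1 * l2)   # law of cosines
    if abs(c2) > 1.0:
        return None                                  # outside the annular workspace
    t2 = np.arccos(c2)
    t1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(t2), l1 + l2 * np.cos(t2))
    return t1, t2

def workspace_grid(n=81, span=2.0):
    """Mark each grid point reachable (True) iff the inverse model has a solution."""
    xs = np.linspace(-span, span, n)
    reach = np.zeros((n, n), dtype=bool)
    for i, y in enumerate(xs):
        for j, x in enumerate(xs):
            reach[i, j] = ik_2r(x, y) is not None
    return reach
```

    For a parallel robot the same loop applies, with the feasibility test replaced by the analytical inverse geometric model of each limb plus joint-limit and singularity checks.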

  11. Optimal Design of a Y-Star Robot for the Prescribed Cylindrical Dexterous Workspace

    NASA Astrophysics Data System (ADS)

    Wan, Yuehua; Wang, Guan; Ji, Shiming; Wang, Zhongfei

    2009-08-01

    This paper determines a set of optimal design parameters for a Y-Star robot whose workspace is as close as possible to a prescribed cylindrical dexterous workspace (PCDW). Two requirements should be satisfied: (i) the local or global performance index; (ii) a regular workspace shape. The kinematic problem is analysed briefly to determine the design parameters and their relations. The optimal design problem is discussed and then translated into two linear problems. An optimal design procedure that adopts the generalized pattern search algorithm from the Genetic Algorithm and Direct Search Toolbox of MATLAB is then used to solve these problems. As an application example, results for four PCDW cases are presented. The design result is evaluated using the concept of the distance between the best state of the robot and the requirements of the operation task. The method and results of this paper are useful for the design and comparison of Y-Star robots.

  12. C++ software integration for a high-throughput phase imaging platform

    NASA Astrophysics Data System (ADS)

    Kandel, Mikhail E.; Luo, Zelun; Han, Kevin; Popescu, Gabriel

    2015-03-01

    The multi-shot approach in SLIM requires reliable, synchronous, and parallel operation of three independent hardware devices; not meeting these challenges results in degraded phase measurements and slow acquisition speeds, narrowing applications to holistic statements about complex phenomena. The relative youth of quantitative imaging and the lack of ready-made commercial hardware and tools further compound the problem, as higher-level programming languages result in inflexible, experiment-specific instruments limited by ill-fitting computational modules, leaving a palpable chasm between promised and realized hardware performance. Furthermore, general unfamiliarity with intricacies such as background calibration and objective lens attenuation, along with spatial light modulator alignment, makes successful measurements difficult for the inattentive or uninitiated. This poses an immediate challenge for moving our techniques beyond the lab to biologically oriented collaborators and clinical practitioners. To meet these challenges, we present our new Quantitative Phase Imaging pipeline, with improved instrument performance, a friendly user interface and robust data processing features, enabling us to acquire and catalog clinical datasets hundreds of gigapixels in size.

  13. Fundus image fusion in EYEPLAN software: An evaluation of a novel technique for ocular melanoma radiation treatment planning

    SciTech Connect

    Daftari, Inder K.; Mishra, Kavita K.; O'Brien, Joan M.; and others

    2010-10-15

    Purpose: The purpose of this study is to evaluate a novel approach for treatment planning using digital fundus image fusion in EYEPLAN for proton beam radiation therapy (PBRT) planning for ocular melanoma. The authors used a prototype version of EYEPLAN software, which allows for digital registration of high-resolution fundus photographs. The authors examined the improvement in tumor localization by replanning with the addition of fundus photo superimposition in patients with macular area tumors. Methods: The new version of EYEPLAN (v3.05) software allows for the registration of fundus photographs as a background image. This is then used in conjunction with clinical examination, tantalum marker clips, surgeon's mapping, and ultrasound to draw the tumor contour accurately. In order to determine if the fundus image superimposition helps in tumor delineation and treatment planning, the authors identified 79 patients with choroidal melanoma in the macular location that were treated with PBRT. All patients were treated to a dose of 56 GyE in four fractions. The authors reviewed and replanned all 79 macular melanoma cases with superimposition of pretreatment and post-treatment fundus imaging in the new EYEPLAN software. For patients with no local failure, the authors analyzed whether fundus photograph fusion accurately depicted and confirmed tumor volumes as outlined in the original treatment plan. For patients with local failure, the authors determined whether the addition of the fundus photograph might have benefited in terms of more accurate tumor volume delineation. Results: The mean follow-up of patients was 33.6 ± 23 months. Tumor growth was seen in six eyes of the 79 macular lesions. All six patients were marginal failures or tumor misses in the region of dose fall-off, including one patient with both in-field and marginal recurrence. Among the six recurrences, three were managed by enucleation and one underwent retreatment with proton therapy. Three

  14. High-throughput image analysis of tumor spheroids: a user-friendly software application to measure the size of spheroids automatically and accurately.

    PubMed

    Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y

    2014-07-08

    The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use, free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application, SpheroidSizer, which measures the major and minor axial lengths of imaged 3D tumor spheroids automatically and accurately, calculates the volume of each individual 3D tumor spheroid, and then outputs the results in two different forms in spreadsheets for easy manipulation in the subsequent data analysis. The main advantage of this software is its powerful image analysis application that is adapted for large numbers of images. It provides a high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with the uneven illumination and noisy background that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and diverse-quality images. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software is beneficial for 3D tumor spheroids to become a routine in vitro model
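    Once a spheroid has been segmented (by active contours in SpheroidSizer's case), the axis measurement and volume steps are straightforward. The sketch below estimates major/minor axes from the second-order image moments of a binary mask and applies a prolate-spheroid volume formula; both the moment-based axes and the volume formula are illustrative assumptions, not SpheroidSizer's exact computation.

```python
import numpy as np

def spheroid_axes(mask):
    """Major and minor axis lengths (pixels) of a binary spheroid mask, from the
    eigenvalues of the pixel-coordinate covariance (equivalent moment ellipse)."""
    ys, xs = np.nonzero(mask)
    cov = np.cov(np.vstack((xs, ys)))
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    # for a uniform filled ellipse, variance along an axis is (semi-axis)^2 / 4,
    # so 4*sqrt(eigenvalue) recovers the full axis length
    return 4 * np.sqrt(evals[0]), 4 * np.sqrt(evals[1])

def spheroid_volume(major, minor):
    """Prolate-spheroid volume: rotate the fitted ellipse about its major axis."""
    return (4.0 / 3.0) * np.pi * (major / 2) * (minor / 2) ** 2
```

    The moment-based estimate is what makes the measurement automatic: no manual length/width drawing is needed once the mask exists.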

  15. FIRE: an open-software suite for real-time 2D/3D image registration for image guided radiotherapy research

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.

    2016-03-01

    Radiotherapy treatments have changed at a tremendously rapid pace. Dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of moving tumors during dose delivery has become higher due to very steep dose gradients. Intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion, such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image intensity based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom-designed software platforms which take too much time and effort to develop. To address this challenge we have developed an open software platform especially focusing on tumor motion management. FLIRT is a freely available open-source software platform. The core method for tumor tracking is purely intensity based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90 ms (11 Hz). This is suitable to track tumors moving due to respiration (~0.3 Hz) or heartbeat (~1 Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases range from tracking feasibility studies to patient positioning and method validation. Such a framework has the potential of enabling the research community to rapidly perform patient studies or try new methods.

  16. Using Image Pro Plus Software to Develop Particle Mapping on Genesis Solar Wind Collector Surfaces

    NASA Technical Reports Server (NTRS)

    Rodriquez, Melissa C.; Allton, J. H.; Burkett, P. J.

    2012-01-01

    The continued success of the Genesis mission science team in analyzing solar wind collector array samples is partially based on close collaboration of the JSC curation team with science team members who develop cleaning techniques and those who assess elemental cleanliness at the levels of detection. The goal of this collaboration is to develop a reservoir of solar wind collectors of known cleanliness to be available to investigators. The heart and driving force behind this effort is Genesis mission PI Don Burnett. While JSC contributes characterization, safe clean storage, and benign collector cleaning with ultrapure water (UPW) and UV ozone, Burnett has coordinated more exotic and rigorous cleaning, which is contributed by science team members. He also coordinates cleanliness assessment requiring expertise and instruments not available in curation, such as XPS, TRXRF [1,2] and synchrotron TRXRF. JSC participates by optically documenting the particle distributions as cleaning steps progress. Thus, optical documentation supplements SEM imaging and analysis, and elemental assessment by TRXRF.

  17. Cytopathology whole slide images and virtual microscopy adaptive tutorials: A software pilot

    PubMed Central

    Van Es, Simone L.; Pryor, Wendy M.; Belinson, Zack; Salisbury, Elizabeth L.; Velan, Gary M.

    2015-01-01

    Background: The constant growth in the body of knowledge in medicine requires pathologists and pathology trainees to engage in continuing education. Providing them with equitable access to efficient and effective forms of education in pathology (especially in remote and rural settings) is important, but challenging. Methods: We developed three pilot cytopathology virtual microscopy adaptive tutorials (VMATs) to explore a novel adaptive E-learning platform (AeLP) which can incorporate whole slide images for pathology education. We collected user feedback to further develop this educational material and to subsequently deploy randomized trials in both pathology specialist trainee and medical student cohorts. Cytopathology whole slide images were first acquired, then novel VMATs teaching cytopathology were created using the AeLP, an intelligent tutoring system developed by Smart Sparrow. The pilot was run for Australian pathologists and trainees through the education section of the Royal College of Pathologists of Australasia website over a period of 9 months. Feedback on usability, impact on learning, and any technical issues was obtained using 5-point Likert scale items and open-ended feedback in online questionnaires. Results: A total of 181 pathologists and pathology trainees anonymously attempted the three adaptive tutorials, a smaller proportion of whom went on to provide feedback at the end of each tutorial. VMATs were perceived as effective and efficient E-learning tools for pathology education. User feedback was positive. There were no significant technical issues. Conclusion: During this pilot, the user feedback on the educational content and interface, together with the lack of technical issues, was helpful. Large-scale trials of similar online cytopathology adaptive tutorials were planned for the future. PMID:26605119

  18. An Upgrade of the Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) Software

    NASA Technical Reports Server (NTRS)

    Mason, Michelle L.; Rufer, Shann J.

    2015-01-01

    The Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) code is used at NASA Langley Research Center to analyze global aeroheating data on wind tunnel models tested in the Langley Aerothermodynamics Laboratory. One-dimensional, semi-infinite heating data derived from IHEAT are used to design thermal protection systems to mitigate the risks due to the aeroheating loads on hypersonic vehicles, such as re-entry vehicles during descent and landing procedures. This code was originally written in the PV-WAVE programming language to analyze phosphor thermography data from the two-color, relative-intensity system developed at Langley. To increase the efficiency, functionality, and reliability of IHEAT, the code was migrated to MATLAB syntax and compiled as a stand-alone executable file labeled version 4.0. New features of IHEAT 4.0 include the options to batch process all of the data from a wind tunnel run, to map the two-dimensional heating distribution to a three-dimensional computer-aided design model of the vehicle to be viewed in Tecplot, and to extract data from a segmented line that follows a feature of interest in the data. Results from IHEAT 4.0 were compared on a pixel level to the output images from the legacy code to validate the program. The differences between the two codes were on the order of 10^-5 to 10^-7. IHEAT 4.0 replaces the PV-WAVE version as the production code for aeroheating experiments conducted in the hypersonic facilities at NASA Langley.

  19. Leap Motion Gesture Control With Carestream Software in the Operating Room to Control Imaging: Installation Guide and Discussion.

    PubMed

    Pauchot, Julien; Di Tommaso, Laetitia; Lounis, Ahmed; Benassarou, Mourad; Mathieu, Pierre; Bernot, Dominique; Aubry, Sébastien

    2015-12-01

    Routine viewing of cross-sectional imaging during a surgical procedure currently requires physical contact with an interface (a mouse or a touch-sensitive screen). Such contact compromises asepsis and costs time. Devices such as the recently introduced Leap Motion (Leap Motion Society, San Francisco, CA), which enables interaction with the computer without any physical contact, are of wide interest in the field of surgery, but configuration and ergonomics are key challenges for the practitioner, imaging software, and surgical environment. This article aims to suggest an easy configuration of Leap Motion on a PC for optimized use with Carestream Vue PACS v11.3.4 (Carestream Health, Inc, Rochester, NY) using a plug-in (to download at https://drive.google.com/open?id=0B_F4eBeBQc3yNENvTXlnY09qS00&authuser=0) and a video tutorial (https://www.youtube.com/watch?v=yVPTgxg-SIk). Videos of the surgical procedure and a discussion of innovative gesture control technology and its various configurations are provided in this article. PMID:26002115

  1. Collecting field data from Mars Exploration Rover Spirit and Opportunity Images: Development of 3-D Visualization and Data-Mining Software

    NASA Astrophysics Data System (ADS)

    Eppes, M. C.; Willis, A.; Zhou, B.

    2010-12-01

    NASA's two Mars rover spacecraft, Spirit and Opportunity, have collected more than four years' worth of data from nine imaging instruments, producing more than 200,000 images. To date, however, the potential 'field' data that these images represent has remained relatively untapped because of a lack of software with which to readily analyze the images quantitatively. We have developed prototype software that allows scientists to locate and explore 2D and 3D imagery captured by the NASA Mars Exploration Rover (MER) mission robots Spirit and Opportunity. For example, using our software, a person could measure the dimensions of a rock or the strike and dip of a bedding plane. The developed software has three aspects that make it distinct from existing approaches for indexing large sets of imagery: (1) a computationally efficient image search engine capable of locating MER images containing features of interest, rocks in particular; (2) an interface for making measurements (distances and orientations) from stereographic image pairs; and (3) remote browsing and storage capabilities that remove the burden of storing and managing these very large image sets. Two methods of search are supported: (i) a rock detection algorithm for finding images that contain rock-like structures having a specified size, and (ii) a generic query-by-image search which uses exemplar image(s) of a desired object to locate other images within the MER data repository that contain similar structures (i.e., one could search for all images of sand dunes). Query-by-image capabilities are made possible via a bag-of-features (e.g., Lazebnik et al. 2003; Schmid et al. 2002) representation of the image data, which compresses the image into a small set of features robust to changes in illumination and perspective. Searches are then reduced to looking for feature sets with similar values, a task that is computationally tractable, providing quick search results for complex image-based queries.
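    The bag-of-features idea behind the query-by-image search compresses each image to a histogram of quantized local features, so that searching reduces to comparing small histograms. A hedged sketch (feature detection and quantization are assumed already done; the 'visual word' lists, repository names, and the cosine metric are illustrative, not the MER engine's actual design):

```python
import math
from collections import Counter

def bag_of_features(visual_words):
    """Compress an image (here: its list of quantized local features) to a histogram."""
    return Counter(visual_words)

def cosine(h1, h2):
    """Cosine similarity between two sparse histograms."""
    dot = sum(h1.get(k, 0) * h2.get(k, 0) for k in set(h1) | set(h2))
    n1 = math.sqrt(sum(v * v for v in h1.values()))
    n2 = math.sqrt(sum(v * v for v in h2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def query_by_image(exemplar, repository):
    """Rank repository images by histogram similarity to the exemplar."""
    q = bag_of_features(exemplar)
    ranked = sorted(repository.items(),
                    key=lambda kv: cosine(q, bag_of_features(kv[1])),
                    reverse=True)
    return [name for name, _ in ranked]

# A dune-like exemplar should rank the dune image first.
repo = {
    "dune_field.img":   ["ripple", "ripple", "sand", "slipface"],
    "rock_outcrop.img": ["edge", "vesicle", "rock", "rock"],
}
print(query_by_image(["ripple", "sand", "ripple"], repo)[0])  # → dune_field.img
```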

  2. The Decoding Toolbox (TDT): a versatile software package for multivariate analyses of functional imaging data

    PubMed Central

    Hebart, Martin N.; Görgen, Kai; Haynes, John-Dylan

    2015-01-01

    The multivariate analysis of brain signals has recently sparked a great amount of interest, yet accessible and versatile tools to carry out decoding analyses are scarce. Here we introduce The Decoding Toolbox (TDT) which represents a user-friendly, powerful and flexible package for multivariate analysis of functional brain imaging data. TDT is written in Matlab and equipped with an interface to the widely used brain data analysis package SPM. The toolbox allows running fast whole-brain analyses, region-of-interest analyses and searchlight analyses, using machine learning classifiers, pattern correlation analysis, or representational similarity analysis. It offers automatic creation and visualization of diverse cross-validation schemes, feature scaling, nested parameter selection, a variety of feature selection methods, multiclass capabilities, and pattern reconstruction from classifier weights. While basic users can implement a generic analysis in one line of code, advanced users can extend the toolbox to their needs or exploit the structure to combine it with external high-performance classification toolboxes. The toolbox comes with an example data set which can be used to try out the various analysis methods. Taken together, TDT offers a promising option for researchers who want to employ multivariate analyses of brain activity patterns. PMID:25610393
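    The cross-validation schemes such a toolbox automates follow one pattern: hold out one chunk of data, train on the rest, test on the held-out chunk, and rotate. TDT itself is Matlab; the following toolbox-independent Python sketch uses leave-one-out cross-validation with a nearest-centroid classifier standing in for TDT's actual classifiers:

```python
def centroid(rows):
    """Mean pattern across a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def nearest_centroid_predict(train, labels, x):
    """Assign x to the class whose training-set mean pattern is closest."""
    classes = sorted(set(labels))
    cents = {c: centroid([r for r, l in zip(train, labels) if l == c]) for c in classes}
    return min(classes, key=lambda c: sum((a - b) ** 2 for a, b in zip(cents[c], x)))

def leave_one_out_accuracy(patterns, labels):
    """Each pattern serves once as the held-out test set."""
    hits = 0
    for i in range(len(patterns)):
        train = patterns[:i] + patterns[i + 1:]
        tr_labels = labels[:i] + labels[i + 1:]
        hits += nearest_centroid_predict(train, tr_labels, patterns[i]) == labels[i]
    return hits / len(patterns)

# Two well-separated 'conditions' in a 3-voxel ROI decode perfectly.
patterns = [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1], [0.0, 1.0, 0.9], [0.1, 0.9, 1.0]]
labels = ["face", "face", "house", "house"]
print(leave_one_out_accuracy(patterns, labels))  # → 1.0
```

    In real fMRI decoding the held-out chunks would be whole runs rather than single patterns, to keep training and test data independent.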

  3. User interface software development for the WIYN One Degree Imager (ODI)

    NASA Astrophysics Data System (ADS)

    Ivens, John; Yeatts, Andrey; Harbeck, Daniel; Martin, Pierre

    2010-07-01

    User interfaces (UIs) are a necessity for almost any data acquisition system. The development team for the WIYN One Degree Imager (ODI) chose to develop a user interface that allows access to most of the instrument control for both scientists and engineers through the World Wide Web, because of the web's ease of use and accessibility around the world. Having a web-based UI allows ODI to grow from a visitor-mode instrument to a queue-managed instrument and also facilitates remote servicing and troubleshooting. The challenges of developing such a system involve the difficulties of browser interoperability, speed, presentation, and the choices involved in integrating browser and server technologies. To this end, the team has chosen a combination of Java, JBOSS, AJAX technologies, XML data descriptions, Oracle XML databases, and an emerging technology called the Google Web Toolkit (GWT) that compiles Java into JavaScript for presentation in a browser. Advantages of using GWT include developing the front-end browser code in Java, GWT's native support for AJAX, the use of XML to describe the user interface, the ability to profile code speed and discover bottlenecks, the ability to efficiently communicate with application servers such as JBOSS, and the ability to optimize and test code for multiple browsers. We discuss the interoperation of all of these technologies to create fast, flexible, and robust user interfaces that are scalable, manageable, separable, and as much as possible allow maintenance of all code in Java.

  4. [Dynamic imaging of gastric ulcer healing using the most modern Morph-Software].

    PubMed

    Jaspersen, D; Keerl, R; Weber, R; Huppmann, A; Hammar, C H; Draf, W

    1996-06-01

    The healing of a gastric ulcer, as captured by video endoscopy, could not until now be presented as a dynamic process. Documentation of the dynamic healing process failed either because of limited patient compliance or because of inconstancy of the image framing due to camera wobble; a time-lapse replay would make these picture disturbances dominate. Instead of presenting a continuous film, individual snapshots of ulcer healing were processed, and a dynamic effect was produced by computer-assisted generation of intermediate pictures. A video was created in which short video sequences at defined time intervals were recorded endoscopically. Single stills (so-called original pictures) fitting together from each sequence were selected and spliced together. The missing intermediate pictures were generated with a special computer technique based on the mathematical concept of interpolation. With this technique, dynamic documentation of gastric ulcer healing was performed in a 47-year-old male patient. The technique enables an almost natural observation of ulcer healing and promises new physiological and pathophysiological insights in gastroenterologic endoscopy.
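    The simplest form of the interpolation described, once the original pictures are aligned, is a per-pixel linear blend producing the missing intermediate pictures (true morphing software also warps geometry; this sketch assumes pre-aligned grayscale frames):

```python
def blend(frame_a, frame_b, t):
    """Linearly interpolate two aligned grayscale frames at 0 <= t <= 1."""
    return [[round((1 - t) * a + t * b) for a, b in zip(ra, rb)]
            for ra, rb in zip(frame_a, frame_b)]

def intermediates(frame_a, frame_b, n):
    """n evenly spaced in-between frames, excluding the two originals."""
    return [blend(frame_a, frame_b, i / (n + 1)) for i in range(1, n + 1)]

# The ulcer 'heals': a bright 2x2 lesion fades toward background intensity.
start = [[100, 100], [100, 100]]
end = [[20, 20], [20, 20]]
mids = intermediates(start, end, 3)
print(mids[1])  # halfway frame: [[60, 60], [60, 60]]
```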

  6. User's guide for mapIMG 3--Map image re-projection software package

    USGS Publications Warehouse

    Finn, Michael P.; Mattli, David M.

    2012-01-01

    Version 0.0 (1995), Dan Steinwand, U.S. Geological Survey (USGS)/Earth Resources Observation Systems (EROS) Data Center (EDC)--Version 0.0 was a command line version for UNIX that required four arguments: the input metadata, the output metadata, the input data file, and the output destination path. Version 1.0 (2003), Stephen Posch and Michael P. Finn, USGS/Mid-Continent Mapping Center (MCMC)--Version 1.0 added a GUI interface that was built using the Qt library for cross-platform development. Version 1.01 (2004), Jason Trent and Michael P. Finn, USGS/MCMC--Version 1.01 suggested bounds for the parameters of each projection. Support was added for larger input files, storage of the last used input and output folders, and for TIFF/GeoTIFF input images. Version 2.0 (2005), Robert Buehler, Jason Trent, and Michael P. Finn, USGS/National Geospatial Technical Operations Center (NGTOC)--Version 2.0 added resampling methods (Mean, Mode, Min, Max, and Sum), updated the GUI design, and added the viewer/pre-viewer. The metadata style was changed to XML and switched to a new naming convention. Version 3.0 (2009), David Mattli and Michael P. Finn, USGS/Center of Excellence for Geospatial Information Science (CEGIS)--Version 3.0 brought optimized resampling methods, an updated GUI, support for less-than-global datasets, and UTM support; the whole codebase was also ported to Qt4.

  7. On-Line Access to Weather Satellite Imagery and Image Manipulation Software

    NASA Technical Reports Server (NTRS)

    Emery, William J.; Kelley, T.; Dozier, J.; Rotar, P.

    1995-01-01

    Advanced Very High Resolution Radiometer and Geostationary Operational Environmental Satellite Imagery, received by antennas located at the University of Colorado, are made available to the Internet users through an on-line data access system. Created as a 'test bed' data system for the National Aeronautics and Space Administration's future Earth Observing System Data and Information System, this test bed provides an opportunity to test both the technical requirements of an on-line data system and the different ways in which the general user community would employ such a system. Initiated in December 1991, the basic data system experienced four major evolutionary changes in response to user requests and requirements. Features added with these changes were the addition of on-line browse, user subsetting, and dynamic image processing/navigation. Over its lifetime the system has grown to a maximum of over 2500 registered users, and after losing many of these users due to hardware changes, the system is once again growing with its own independent mass storage system.

  9. Validation of a digital image processing software package for the in vivo measurement of wear in cemented Charnley total hip arthroplasties.

    PubMed

    Kennard, Emma; Wilcox, Ruth K; Hall, Richard M

    2006-05-01

    Computer-generated images were used to assess the image processing software employed in the radiographic evaluation of penetration in total hip replacement. The images were corrupted using Laplacian noise and smoothed to simulate different modulation transfer functions in a range associated with hospital digital radiographic systems. With no corruption, the penetration depth measurements were both precise and accurate. However, as the noise increased, so did the inaccuracy and imprecision, to levels that may make clinically observed changes in penetration difficult to discern between follow-up assessments. Simulated rotation of the wire marker produced significant bias in the measured penetration depth. The use of these simulated radiographs allows evaluation of the software used to process the digital images alone, rather than of the whole measurement system.
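    The corruption pipeline described can be reproduced in two steps: add Laplacian-distributed noise (sampled by inverting its CDF) and smooth with a small mean filter as a crude stand-in for a lower modulation transfer function. A sketch with illustrative parameter values:

```python
import math
import random

def laplace_noise(scale, rng):
    """Inverse-CDF sampling: uniform u in (-0.5, 0.5) maps to a Laplacian deviate."""
    u = rng.random() - 0.5
    u = max(min(u, 0.4999999), -0.4999999)  # keep the log argument positive
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def corrupt(image, scale, rng):
    """Add independent Laplacian noise to every pixel."""
    return [[px + laplace_noise(scale, rng) for px in row] for row in image]

def mean_filter3(image):
    """3x3 box blur (edges clamped), a crude stand-in for a reduced MTF."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            vals = [image[min(max(r + dr, 0), h - 1)][min(max(c + dc, 0), w - 1)]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            out[r][c] = sum(vals) / 9.0
    return out

rng = random.Random(0)                     # fixed seed for a repeatable simulation
flat = [[100.0] * 8 for _ in range(8)]     # uniform test 'radiograph'
noisy = corrupt(flat, 5.0, rng)
blurred = mean_filter3(noisy)
```

    A penetration-measurement algorithm run on `blurred` versus `flat` would expose how noise level and smoothing degrade its precision, which is the study's evaluation strategy.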

  10. Comparison of retinal thickness by Fourier-domain optical coherence tomography and OCT retinal image analysis software segmentation analysis derived from Stratus optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Tátrai, Erika; Ranganathan, Sudarshan; Ferencz, Mária; Debuc, Delia Cabrera; Somfai, Gábor Márk

    2011-05-01

    Purpose: To compare thickness measurements between Fourier-domain optical coherence tomography (FD-OCT) and time-domain OCT images analyzed with a custom-built OCT retinal image analysis software (OCTRIMA). Methods: Macular mapping (MM) by StratusOCT and the MM5 and MM6 scanning protocols by an RTVue-100 FD-OCT device were performed on 11 subjects with no retinal pathology. Retinal thickness (RT) and the thickness of the ganglion cell complex (GCC) obtained with the MM6 protocol were compared for each early treatment diabetic retinopathy study (ETDRS)-like region with corresponding results obtained with OCTRIMA. RT results were compared by analysis of variance with Dunnett post hoc test, while GCC results were compared by paired t-test. Results: A high correlation was obtained for RT between OCTRIMA and the MM5 and MM6 protocols. In all regions, StratusOCT provided the lowest RT values (mean difference 43 +/- 8 μm compared to OCTRIMA, and 42 +/- 14 μm compared to RTVue MM6). All RTVue GCC measurements were significantly thicker (mean difference between 6 and 12 μm) than the GCC measurements of OCTRIMA. Conclusion: High correspondence is obtained not only for RT but also for the segmentation of intraretinal layers between FD-OCT and StratusOCT-derived OCTRIMA analysis. However, a correction factor is required to compensate for OCT-specific differences to make measurements comparable across available OCT devices.

  11. The Role of a Facilitated Online Workspace Component of a Community of Practice: Knowledge Building and Value Creation for NASA

    ERIC Educational Resources Information Center

    Davey, Bradford Thomas

    2013-01-01

    The purpose of this study was to examine the role of an online workspace component of a community in the work of a community of practice. Much has been studied revealing the importance of communities of practice to organizations, project success, and knowledge management and some of these same successes hold true for virtual communities of…

  12. Experiences with the BSCW Shared Workspace System as the Backbone of a Virtual Learning Environment for Students.

    ERIC Educational Resources Information Center

    Appelt, Wolfgang; Mambrey, Peter

    The GMD (German National Research Center for Information Technology) has developed the BSCW (Basic Support for Cooperative Work) Shared Workspace system within the last four years with the goal of transforming the Web from a primarily passive information repository to an active cooperation medium. The BSCW system is a Web-based groupware tool for…

  13. Using a Shared Workspace and Wireless Laptops to Improve Collaborative Project Learning in an Engineering Design Class

    ERIC Educational Resources Information Center

    Nicol, David J.; MacLeod, Iain A.

    2005-01-01

    Two different technologies, groupware (a shared workspace) and shared wireless laptop computers, were implemented in a project design class in a civil engineering course. The research interest was in the way these technologies supported resource sharing within and across project groups and in the forms of group collaboration that resulted. The…

  14. An architectural model of conscious and unconscious brain functions: Global Workspace Theory and IDA.

    PubMed

    Baars, Bernard J; Franklin, Stan

    2007-11-01

    While neural net models have been developed to a high degree of sophistication, they have some drawbacks at a more integrative, "architectural" level of analysis. We describe a "hybrid" cognitive architecture that is implementable in neuronal nets, and which has uniform brainlike features, including activation-passing and highly distributed "codelets," implementable as small-scale neural nets. Empirically, this cognitive architecture accounts qualitatively for the data described by Baars' Global Workspace Theory (GWT) and Franklin's LIDA architecture, including state-of-the-art models of conscious contents in action-planning, Baddeley-style working memory, and working models of episodic and semantic long-term memory. These terms are defined both conceptually and empirically for the current theoretical domain. The resulting architecture meets four desirable goals for a unified theory of cognition: practical workability, autonomous agency, a plausible role for conscious cognition, and translatability into plausible neural terms. It also generates testable predictions, both empirical and computational.

  15. Alteration of consciousness in focal epilepsy: the global workspace alteration theory.

    PubMed

    Bartolomei, Fabrice; McGonigal, Aileen; Naccache, Lionel

    2014-01-01

    Alteration of consciousness (AOC) is an important clinical manifestation of partial seizures that greatly impacts the quality of life of patients with epilepsy. Several theories have been proposed in the last fifty years. An emerging concept in neurology is the global workspace (GW) theory, which postulates that access to consciousness (from several sensory modalities) requires transient coordinated activity from associative cortices, in particular the prefrontal cortex and the posterior parietal associative cortex. Several lines of evidence support the view that partial seizures alter consciousness through disturbance of the GW. In particular, a nonlinear relation has been shown between excess synchronization in the GW regions and the degree of AOC. Changes in thalamocortical synchrony occurring during the spreading of ictal activity seem particularly involved in the mechanism of altered consciousness. This link between abnormal synchrony and AOC offers new perspectives on the treatment of AOC, since means of decreasing consciousness alteration in seizures could improve patients' quality of life.

  16. Extracting and Utilizing Social Networks from Log Files of Shared Workspaces

    NASA Astrophysics Data System (ADS)

    Nasirifard, Peyman; Peristeras, Vassilios; Hayes, Conor; Decker, Stefan

    Log files of online shared workspaces contain rich information that can be further analyzed. In this paper, log-file information is used to extract object-centric and user-centric social networks. The object-centric social networks are used as a means for assigning concept-based expertise elements to users based on the documents that they created, revised or read. The user-centric social networks are derived from users working on common documents. Weights, called the Cooperation Index, are assigned to links between users in a user-centric social network, which indicates how closely two people have collaborated together, based on their history. We also present a set of tools that was developed to realize our approach.
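    Deriving the user-centric network from a workspace log amounts to counting document co-occurrences between users. In this sketch the Cooperation Index is approximated by a Jaccard-style overlap of the users' document sets; the paper's actual weighting is more elaborate, so treat the formula as illustrative:

```python
from collections import defaultdict
from itertools import combinations

def user_centric_network(log):
    """log: iterable of (user, document, action) events from the shared workspace.
    Returns edge weights between users who touched at least one common document."""
    docs_by_user = defaultdict(set)
    for user, doc, _action in log:
        docs_by_user[user].add(doc)
    weights = {}
    for u, v in combinations(sorted(docs_by_user), 2):
        shared = docs_by_user[u] & docs_by_user[v]
        if shared:
            union = docs_by_user[u] | docs_by_user[v]
            weights[(u, v)] = len(shared) / len(union)  # illustrative Cooperation Index
    return weights

log = [
    ("alice", "report.doc", "create"),
    ("bob", "report.doc", "revise"),
    ("bob", "notes.txt", "read"),
    ("carol", "slides.ppt", "create"),
]
print(user_centric_network(log))  # → {('alice', 'bob'): 0.5}
```

    The object-centric network is the dual construction: documents become nodes, linked by the users who worked on both.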

  17. Collision-free motion of two robot arms in a common workspace

    NASA Technical Reports Server (NTRS)

    Basta, Robert A.; Mehrotra, Rajiv; Varanasi, Murali R.

    1987-01-01

    Collision-free motion of two robot arms in a common workspace is investigated. A collision-free motion is obtained by detecting collisions along the preplanned trajectories using a sphere model for the wrist of each robot, and then modifying the paths and/or trajectories of one or both robots to avoid the collision. Detecting and avoiding collisions are based on the premises that preplanned trajectories of the robots follow straight lines; that collisions are restricted to the wrists of the two robots (which correspond to the upper three links of PUMA manipulators); and that collisions never occur between the beginning points or end points on the straight-line paths. The collision detection algorithm is described and some approaches to collision avoidance are discussed.
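    Under these premises the detection step has a closed form: with both sphere-modeled wrists moving linearly, their squared separation is a quadratic in time, so it suffices to minimize that quadratic on [0, 1] and compare against the sum of radii. A sketch (positions as 3-tuples; the paths and radii in the example are made-up test values, not from the paper):

```python
def collide(p1, q1, p2, q2, r1, r2):
    """True if two spheres moving linearly p->q over t in [0, 1] ever come
    within r1 + r2 of each other (squared-distance quadratic minimized on [0, 1])."""
    d0 = tuple(a - b for a, b in zip(p1, p2))  # relative position at t = 0
    dv = tuple((qa - pa) - (qb - pb)           # relative velocity
               for pa, qa, pb, qb in zip(p1, q1, p2, q2))
    a = sum(v * v for v in dv)
    b = sum(2 * u * v for u, v in zip(d0, dv))
    c = sum(u * u for u in d0)
    t = 0.0 if a == 0 else min(1.0, max(0.0, -b / (2 * a)))  # clamp the vertex to [0, 1]
    min_sq = a * t * t + b * t + c
    return min_sq <= (r1 + r2) ** 2

# Two wrists crossing the same region at the same time collide...
print(collide((0, 0, 0), (4, 0, 0), (2, 2, 0), (2, -2, 0), 0.5, 0.5))  # → True
# ...but not if one path is lifted well clear of the other.
print(collide((0, 0, 0), (4, 0, 0), (2, 2, 5), (2, -2, 5), 0.5, 0.5))  # → False
```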

  18. The Secure Medical Research Workspace: An IT Infrastructure to Enable Secure Research on Clinical Data

    PubMed Central

    Shoffner, Michael; Owen, Phillips; Mostafa, Javed; Lamm, Brent; Wang, Xiaoshu; Schmitt, Charles P.; Ahalt, Stanley C.

    2013-01-01

    Clinical data has tremendous value for translational research, but only if security and privacy concerns can be addressed satisfactorily. A collaboration of clinical and informatics teams, including RENCI, NC TraCS, UNC's School of Information and Library Science, Information Technology Service's Research Computing and other partners at the University of North Carolina at Chapel Hill have developed a system called the Secure Medical Research Workspace (SMRW) that enables researchers to use clinical data securely for research. SMRW significantly minimizes the risk presented when using identified clinical data, thereby protecting patients, researchers, and institutions associated with the data. The SMRW is built on a novel combination of virtualization and data leakage protection and can be combined with other protection methodologies and scaled to production levels. PMID:23751029

  19. Singularity and workspace analysis of three isoconstrained parallel manipulators with schoenflies motion

    NASA Astrophysics Data System (ADS)

    Lee, Po-Chih; Lee, Jyh-Jone

    2012-06-01

    This paper presents the analysis of three parallel manipulators with Schoenflies motion. Each parallel manipulator possesses two limbs in structure, and the end-effector has three DOFs (degrees of freedom) in translational motion and one DOF in rotational motion about a given direction axis with respect to the world coordinate system. The three isoconstrained parallel manipulators have the structures denoted as C{u/u}UwHw-//-C{v/v}UwHw, CuR{u/u}Uhw-//-CvR{v/v}Uhw and CuPuUhw-//-CvPvUhw. The kinematic equations are first introduced for each manipulator. Then the Jacobian matrix, singularity, workspace, and performance index for each mechanism are derived and analyzed for the first time. The results can be helpful for engineers evaluating this kind of parallel robot for possible industrial applications where pick-and-place motion is required.
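    The singularity analysis such papers perform can be illustrated on a much simpler mechanism: compute the Jacobian numerically and flag configurations where its determinant vanishes. This sketch uses a planar 2R arm, not the Schoenflies-motion manipulators of the paper, purely to show the workflow:

```python
import math

def fk(theta1, theta2, l1=1.0, l2=1.0):
    """End-effector position of a planar 2R arm (a toy stand-in for the 4-DOF case)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def jacobian(theta1, theta2, h=1e-6):
    """Numeric Jacobian of fk by central differences, one joint at a time."""
    cols = []
    for i in (0, 1):
        dq = [0.0, 0.0]
        dq[i] = h
        xp = fk(theta1 + dq[0], theta2 + dq[1])
        xm = fk(theta1 - dq[0], theta2 - dq[1])
        cols.append([(p - m) / (2 * h) for p, m in zip(xp, xm)])
    return [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]

def is_singular(theta1, theta2, tol=1e-4):
    """Singular where det(J) vanishes (arm fully stretched or folded)."""
    J = jacobian(theta1, theta2)
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return abs(det) < tol

print(is_singular(0.3, 0.0))          # → True  (elbow fully extended)
print(is_singular(0.3, math.pi / 2))  # → False
```

    For the 2R arm the determinant is analytically l1*l2*sin(theta2), so the numeric check agrees with the known singular configurations.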

  20. Repeatability and Reproducibility of Quantitative Corneal Shape Analysis after Orthokeratology Treatment Using Image-Pro Plus Software

    PubMed Central

    Mei, Ying; Tang, Zhiping

    2016-01-01

    Purpose. To evaluate the repeatability and reproducibility of quantitative analysis of the morphological corneal changes after orthokeratology treatment using "Image-Pro Plus 6.0" software (IPP). Methods. Three sets of measurements were obtained: two sets by examiner 1, 5 days apart, and one set by examiner 2 on the same day. Parameters of the eccentric distance, eccentric angle, area, and roundness of the corneal treatment zone were measured using IPP. The intraclass correlation coefficient (ICC) and repetitive coefficient (COR) were used to calculate the repeatability and reproducibility of these three sets of measurements. Results. ICC analysis suggested "excellent" reliability of more than 0.885 for all variables, and COR values were less than 10% for all variables within the same examiner. ICC analysis suggested "excellent" reliability of more than 0.90 for all variables, and COR values were less than 10% for all variables between different examiners. All extreme values of the eccentric distance and area of the treatment zone pointed to the same material number in all three sets of measurements. Conclusions. IPP could be used to acquire exact data on the characteristic morphological corneal changes after orthokeratology treatment with good repeatability and reproducibility. This trial is registered with trial registration number ChiCTR-IPR-14005505. PMID:27774312
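    The ICC itself can be computed from a one-way ANOVA decomposition as ICC(1) = (MSB − MSW) / (MSB + (k − 1)·MSW), with k measurements per subject. A sketch (the study may well have used a two-way ICC variant; this one-way form and the sample data are illustrative only):

```python
def icc1(data):
    """One-way ICC. data: list of per-subject measurement lists, each of length k."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    means = [sum(row) / k for row in data]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)   # between-subject MS
    msw = sum((x - m) ** 2                                      # within-subject MS
              for row, m in zip(data, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two repeated measurements per subject that nearly agree give a high ICC.
measurements = [[1.10, 1.12], [0.95, 0.96], [1.30, 1.28], [1.05, 1.07]]
print(round(icc1(measurements), 3))  # → 0.992
```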

  1. Computer Software.

    ERIC Educational Resources Information Center

    Kay, Alan

    1984-01-01

    Discusses the nature and development of computer software. Programing, programing languages, types of software (including dynamic spreadsheets), and software of the future are among the topics considered. (JN)

  2. A user's guide for the signal processing software for image and speech compression developed in the Communications and Signal Processing Laboratory (CSPL), version 1

    NASA Technical Reports Server (NTRS)

    Kumar, P.; Lin, F. Y.; Vaishampayan, V.; Farvardin, N.

    1986-01-01

    A complete documentation of the software developed in the Communications and Signal Processing Laboratory (CSPL) during the period of July 1985 to March 1986 is provided. Utility programs and subroutines that were developed for a user-friendly image and speech processing environment are described. Additional programs for data compression of image and speech signals are included. Programs for zero-memory and block transform quantization in the presence of channel noise are also described. Finally, several routines for simulating the performance of image compression algorithms are included.
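
    A zero-memory (scalar) quantizer, one of the building blocks the abstract mentions, maps each sample independently to one of a fixed set of levels. The minimal uniform mid-rise sketch below is my own illustration, not the CSPL code.

```python
import numpy as np

def uniform_quantize(x, n_bits, lo=-1.0, hi=1.0):
    """Zero-memory uniform quantizer: each sample is mapped independently
    to the nearest of 2**n_bits mid-rise reconstruction levels."""
    levels = 2 ** n_bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1).astype(int)
    return lo + (idx + 0.5) * step

x = np.linspace(-1, 1, 9)
print(uniform_quantize(x, 2))   # 4 levels, max error of half a step
```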

  3. The Role of A Facilitated Online Workspace Component of A Community of Practice: Knowledge Building and Value Creation for NASA

    NASA Astrophysics Data System (ADS)

    Davey, B.

    2014-12-01

    This study examined the role of an online workspace component of a community in the work of a community of practice. Much research has revealed the importance of communities of practice to organizations, project success, and knowledge management, and some of these same successes hold true for virtual communities of practice. Study participants were 75 Education and Public Outreach community members of NASA's Science Mission Directorate Earth Forum. In this mixed methods study, online workspace metrics were used to track participation and a survey completed by 21 members was used to quantify participation. For a more detailed analysis, 15 community members (5 highly active users, 5 average users, and 5 infrequent users), selected based on survey responses, were interviewed. Finally, survey data were gathered from 7 online facilitators to understand their role in the community. Data collected from these 21 community members and 5 facilitating members suggest that highly active users (logging into the workspace daily) were more likely to have transformative experiences, co-create knowledge, feel ownership of community knowledge, have extended opportunities for community exchange, and find new forms of evaluation. Average users shared some characteristics with both the highly active members and infrequent users, representing a group in transition as they become more engaged and active in the online workspace. Inactive users viewed the workspace as having little value, being difficult to navigate, being mainly for gaining basic information about events and community news, and as another demand on their time. Results show the online workspace component of the Earth Science Education and Outreach Forum is playing an important and emerging role for this community by supporting knowledge building and knowledge sharing, and growing in value for those who utilize it more frequently. The evidence suggests that with increased participation or "usage" comes

  4. Model-based software engineering for an imaging CubeSat and its extrapolation to other missions

    NASA Astrophysics Data System (ADS)

    Mohammad, Atif; Straub, Jeremy; Korvald, Christoffer; Grant, Emanuel

    Small satellites with their limited computational capabilities require that software engineering techniques promote efficient use of spacecraft resources. A model-driven approach to software engineering is an excellent solution to this resource maximization challenge as it facilitates visualization of the key solution processes and data elements.

  5. Effect of Tendon Vibration on Hemiparetic Arm Stability in Unstable Workspaces

    PubMed Central

    Conrad, Megan O.; Gadhoke, Bani; Scheidt, Robert A.; Schmit, Brian D.

    2015-01-01

    Sensory stimulation of wrist musculature can enhance stability in the proximal arm and may be a useful therapy aimed at improving arm control post-stroke. Specifically, our prior research indicates tendon vibration can enhance stability during point-to-point arm movements and in tracking tasks. The goal of the present study was to investigate the influence of forearm tendon vibration on endpoint stability, measured at the hand, immediately following forward arm movements in an unstable environment. Both proximal and distal workspaces were tested. Ten hemiparetic stroke subjects and 5 healthy controls made forward arm movements while grasping the handle of a two-joint robotic arm. At the end of each movement, the robot applied destabilizing forces. During some trials, 70 Hz vibration was applied to the forearm flexor muscle tendons. 70 Hz was used as the stimulus frequency as it lies within the range of optimal frequencies that activate the muscle spindles at the highest response rate. Endpoint position, velocity, muscle activity and grip force data were compared before, during and after vibration. Stability at the endpoint was quantified as the magnitude of oscillation about the target position, calculated from the power of the tangential velocity data. Prior to vibration, subjects produced unstable, oscillating hand movements about the target location due to the applied force field. Stability increased during vibration, as evidenced by decreased oscillation in hand tangential velocity. PMID:26633892
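
    The stability metric described above can be sketched numerically. The exact computation in the study is not given, so this sketch assumes "power" means the mean square of the demeaned tangential speed; the two velocity traces are synthetic stand-ins for an oscillating and a damped trial.

```python
import numpy as np

def oscillation_power(vx, vy):
    """Sketch of the stability metric: power (variance) of the tangential speed."""
    speed = np.hypot(vx, vy)
    return np.mean((speed - speed.mean()) ** 2)

t = np.linspace(0, 2, 500)
# Hypothetical endpoint velocities: a sustained oscillation vs. a damped one.
unstable = oscillation_power(0.3 * np.sin(2 * np.pi * 5 * t), np.zeros_like(t))
damped = oscillation_power(0.3 * np.exp(-3 * t) * np.sin(2 * np.pi * 5 * t),
                           np.zeros_like(t))
print(unstable > damped)
```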

  6. A novel scanning system using an industrial robot and the workspace measurement and positioning system

    NASA Astrophysics Data System (ADS)

    Zhao, Ziyue; Zhu, Jigui; Yang, Linghui; Lin, Jiarui

    2015-10-01

    Present scanning systems consist of an industrial robot and a line-structured laser sensor, using the industrial robot as a positioning instrument to guarantee accuracy. However, the absolute accuracy of an industrial robot is relatively poor compared with its good repeatability in the manufacturing industry. This paper proposes a novel method that uses the workspace measurement and positioning system (wMPS) to remedy the limited accuracy of the industrial robot. To guarantee the positioning accuracy of the system, the wMPS, a laser-based measurement technology designed for large-volume metrology applications, is introduced. Benefiting from the wMPS, the system can measure different cell-areas with the line-structured laser sensor and accurately fuse the measurement data of the different cell-areas using the wMPS. The system calibration, the procedure that acquires and optimizes the structural parameters of the scanning system, is also described in detail. To verify the feasibility of the system for scanning large free-form surfaces, an experiment was designed to scan the internal surface of a car-body door in white. The final results show that the measurement data of all measuring areas were joined perfectly, with no mismatch, especially in the hole-measurement areas. This experiment verifies the rationality of the system scheme and the correctness and effectiveness of the relevant methods.
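
    Fusing scans of different cell-areas into one coordinate frame amounts to estimating a rigid transform from control points seen in both frames. The sketch below uses the standard Kabsch/SVD solution on made-up points; it illustrates the general idea, not the paper's wMPS-specific calibration.

```python
import numpy as np

def rigid_transform(P, Q):
    """Kabsch: least-squares rotation R and translation t with Q ~ R @ P + t,
    from matched 3-D control points given as 3 x N arrays."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Hypothetical control points observed from two scanner poses.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 6))
ang = 0.4
R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0,            0,           1]])
Q = R_true @ P + np.array([[0.5], [-0.2], [1.0]])
R, t = rigid_transform(P, Q)
print(np.allclose(R @ P + t, Q))
```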

  7. Effect of Tendon Vibration on Hemiparetic Arm Stability in Unstable Workspaces.

    PubMed

    Conrad, Megan O; Gadhoke, Bani; Scheidt, Robert A; Schmit, Brian D

    2015-01-01

    Sensory stimulation of wrist musculature can enhance stability in the proximal arm and may be a useful therapy aimed at improving arm control post-stroke. Specifically, our prior research indicates tendon vibration can enhance stability during point-to-point arm movements and in tracking tasks. The goal of the present study was to investigate the influence of forearm tendon vibration on endpoint stability, measured at the hand, immediately following forward arm movements in an unstable environment. Both proximal and distal workspaces were tested. Ten hemiparetic stroke subjects and 5 healthy controls made forward arm movements while grasping the handle of a two-joint robotic arm. At the end of each movement, the robot applied destabilizing forces. During some trials, 70 Hz vibration was applied to the forearm flexor muscle tendons. 70 Hz was used as the stimulus frequency as it lies within the range of optimal frequencies that activate the muscle spindles at the highest response rate. Endpoint position, velocity, muscle activity and grip force data were compared before, during and after vibration. Stability at the endpoint was quantified as the magnitude of oscillation about the target position, calculated from the power of the tangential velocity data. Prior to vibration, subjects produced unstable, oscillating hand movements about the target location due to the applied force field. Stability increased during vibration, as evidenced by decreased oscillation in hand tangential velocity.

  8. Study on the three-station typical network deployments of workspace Measurement and Positioning System

    NASA Astrophysics Data System (ADS)

    Xiong, Zhi; Zhu, J. G.; Xue, B.; Ye, Sh. H.; Xiong, Y.

    2013-10-01

    As a novel network coordinate measurement system based on multi-directional positioning, the workspace Measurement and Positioning System (wMPS) has the outstanding advantages of good parallelism, wide measurement range, and high measurement accuracy, which make it a research hotspot and an important development direction in the field of large-scale measurement. Since station deployment has a significant impact on measurement range and accuracy, and also constrains the cost of use, the optimization of station deployment is researched in this paper. Firstly, a positioning error model is established. Then, focusing on a small network consisting of three stations, the typical deployments and their error distribution characteristics are studied. Finally, by measuring a simulated fuselage using the typical deployments at an industrial site and comparing the results with a Laser Tracker, some conclusions are obtained. The comparison shows that, under existing prototype conditions, the I_3 typical deployment, in which three stations are distributed in a straight line, has an average error of 0.30 mm and a maximum error of 0.50 mm over a range of 12 m. Meanwhile, the C_3 typical deployment, in which three stations are uniformly distributed over the half-circumference of a circle, has an average error of 0.17 mm and a maximum error of 0.28 mm. Clearly, the C_3 typical deployment controls precision better than the I_3 type. This research work provides effective theoretical support for global measurement network optimization in future work.

  9. Intermediary objects in the workspace design process: means of experience transfer in the offshore sector.

    PubMed

    Conceição, Carolina; Silva, Gislaine; Broberg, Ole; Duarte, Francisco

    2012-01-01

    The aim of this paper is to discuss the use of intermediary objects in the workspace design process of offshore accommodation modules. The integration of ergonomics into the design process can lead to better work conditions, greater effectiveness in the work process, and fewer health and safety issues. Moreover, it is more cost-efficient to consider ergonomics from the initial phases of the project, as the potential costs of redesign, possible losses, and down-time in the operation of the platform would otherwise be higher. The goal, then, is to discuss the integration of ergonomics and user involvement in the design process of accommodation modules, focusing on the transfer of information from reference situations through the use of intermediary objects during the process. In this paper we present two tools developed to be used as intermediary objects aimed at transferring experience from use to design in the specific field of offshore accommodation modules.

  10. A multi-sensorial hybrid control for robotic manipulation in human-robot workspaces.

    PubMed

    Pomares, Jorge; Perea, Ivan; García, Gabriel J; Jara, Carlos A; Corrales, Juan A; Torres, Fernando

    2011-01-01

    Autonomous manipulation in semi-structured environments where human operators can interact is an increasingly common task in robotic applications. This paper describes an intelligent multi-sensorial approach that solves this issue by providing a multi-robotic platform with a high degree of autonomy and the capability to perform complex tasks. The proposed sensorial system is composed of a hybrid visual servo control to efficiently guide the robot towards the object to be manipulated, an inertial motion capture system and an indoor localization system to avoid possible collisions between human operators and robots working in the same workspace, and a tactile sensor algorithm to correctly manipulate the object. The proposed controller employs the whole multi-sensorial system and combines the measurements of each one of the used sensors during two different phases considered in the robot task: a first phase where the robot approaches the object to be grasped, and a second phase of manipulation of the object. In both phases, the unexpected presence of humans is taken into account. This paper also presents the successful results obtained in several experimental setups which verify the validity of the proposed approach. PMID:22163729

  11. A multi-sensorial hybrid control for robotic manipulation in human-robot workspaces.

    PubMed

    Pomares, Jorge; Perea, Ivan; García, Gabriel J; Jara, Carlos A; Corrales, Juan A; Torres, Fernando

    2011-01-01

Duplicate of record 10 above (same PubMed abstract); see that entry.

  12. A Multi-Sensorial Hybrid Control for Robotic Manipulation in Human-Robot Workspaces

    PubMed Central

    Pomares, Jorge; Perea, Ivan; García, Gabriel J.; Jara, Carlos A.; Corrales, Juan A.; Torres, Fernando

    2011-01-01

    Autonomous manipulation in semi-structured environments where human operators can interact is an increasingly common task in robotic applications. This paper describes an intelligent multi-sensorial approach that solves this issue by providing a multi-robotic platform with a high degree of autonomy and the capability to perform complex tasks. The proposed sensorial system is composed of a hybrid visual servo control to efficiently guide the robot towards the object to be manipulated, an inertial motion capture system and an indoor localization system to avoid possible collisions between human operators and robots working in the same workspace, and a tactile sensor algorithm to correctly manipulate the object. The proposed controller employs the whole multi-sensorial system and combines the measurements of each one of the used sensors during two different phases considered in the robot task: a first phase where the robot approaches the object to be grasped, and a second phase of manipulation of the object. In both phases, the unexpected presence of humans is taken into account. This paper also presents the successful results obtained in several experimental setups which verify the validity of the proposed approach. PMID:22163729

  13. 5D CNS+ Software for Automatically Imaging Axial, Sagittal, and Coronal Planes of Normal and Abnormal Second-Trimester Fetal Brains.

    PubMed

    Rizzo, Giuseppe; Capponi, Alessandra; Persico, Nicola; Ghi, Tullio; Nazzaro, Giovanni; Boito, Simona; Pietrolucci, Maria Elena; Arduini, Domenico

    2016-10-01

    The purpose of this study was to test new 5D CNS+ software (Samsung Medison Co, Ltd, Seoul, Korea), which is designed to image axial, sagittal, and coronal planes of the fetal brain from volumes obtained by 3-dimensional sonography. The study consisted of 2 different steps. First, in a prospective study, 3-dimensional fetal brain volumes were acquired in 183 normal consecutive singleton pregnancies undergoing routine sonographic examinations at 18 to 24 weeks' gestation. The 5D CNS+ software was applied, and the percentage of adequate visualization of brain diagnostic planes was evaluated by 2 independent observers. In the second step, the software was also tested in 22 fetuses with cerebral anomalies. In 180 of 183 fetuses (98.4%), 5D CNS+ successfully reconstructed all of the diagnostic planes. Using the software on healthy fetuses, the observers acknowledged the presence of diagnostic images with visualization rates ranging from 97.7% to 99.4% for axial planes, 94.4% to 97.7% for sagittal planes, and 92.2% to 97.2% for coronal planes. The Cohen κ coefficient was analyzed to evaluate the agreement rates between the observers and resulted in values of 0.96 or greater for axial planes, 0.90 or greater for sagittal planes, and 0.89 or greater for coronal planes. All 22 fetuses with brain anomalies were identified among a series that also included healthy fetuses, and in 21 of the 22 cases, a correct diagnosis was made. 5D CNS+ was efficient in successfully imaging standard axial, sagittal, and coronal planes of the fetal brain. This approach may simplify the examination of the fetal central nervous system and reduce operator dependency.
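
    The Cohen κ statistic used above to quantify inter-observer agreement can be computed as follows. The observer calls here are made-up binary "plane adequately visualized?" labels, purely for illustration.

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two raters labelling the same items."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.unique(np.concatenate([a, b]))
    po = np.mean(a == b)                                        # observed agreement
    pe = sum((a == c).mean() * (b == c).mean() for c in cats)   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical visualization calls from two observers over 100 planes.
obs1 = [1] * 95 + [0] * 5
obs2 = [1] * 94 + [0] * 6   # disagrees with obs1 on one plane
print(round(cohens_kappa(obs1, obs2), 2))
```

    Note how 99% raw agreement shrinks to a κ near 0.9 once chance agreement on the dominant "adequate" category is discounted.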

  14. Selecting Software.

    ERIC Educational Resources Information Center

    Pereus, Steven C.

    2002-01-01

    Describes a comprehensive computer software selection and evaluation process, including documenting district needs, evaluating software packages, weighing the alternatives, and making the purchase. (PKP)

  15. Software-only IR image generation and reticle simulation for the HWIL testing of a single detector frequency modulated reticle seeker

    NASA Astrophysics Data System (ADS)

    Delport, Jan Peet; le Roux, Francois P. J.; du Plooy, Matthys J. U.; Theron, Hendrik J.; Annamalai, Leeandran

    2004-08-01

    Hardware-in-the-Loop (HWIL) testing of seeker systems usually requires a 5-axis flight motion simulator (FMS) coupled to expensive hardware for infrared (IR) scene generation and projection. Similar tests can be conducted by using a 3-axis flight motion simulator, bypassing the seeker optics and injecting a synthetically calculated detector signal directly into the seeker. The constantly increasing speed and memory bandwidth of high-end personal computers make them attractive software rendering platforms. A software OpenGL pipeline provides flexibility in terms of access to the rendered output, colour channel dynamic range and lighting equations. This paper describes how a system was constructed using personal computer hardware to perform closed tracking loop HWIL testing of a single detector frequency modulated reticle seeker. The main parts of the system that are described include: * The software-only implementation of OpenGL used to render the IR image with floating point accuracy directly to system memory. * The software used to inject the detector signal and extract the seeker look position. * The architecture used to control the flight motion simulator.

  16. SynPAnal: software for rapid quantification of the density and intensity of protein puncta from fluorescence microscopy images of neurons.

    PubMed

    Danielson, Eric; Lee, Sang H

    2014-01-01

    Continuous modification of the protein composition at synapses is a driving force for the plastic changes of synaptic strength, and provides the fundamental molecular mechanism of synaptic plasticity and information storage in the brain. Studying synaptic protein turnover is not only important for understanding learning and memory, but also has direct implications for understanding pathological conditions like aging, neurodegenerative diseases, and psychiatric disorders. Proteins involved in synaptic transmission and synaptic plasticity are typically concentrated at synapses of neurons and thus appear as puncta (clusters) in immunofluorescence microscopy images. Quantitative measurement of the changes in puncta density, intensity, and size of specific proteins provides valuable information on their function in synaptic transmission, circuit development, synaptic plasticity, and synaptopathy. Unfortunately, puncta quantification is very labor intensive and time consuming. In this article, we describe a software tool designed for rapid, semi-automatic detection and quantification of synaptic protein puncta from 2D immunofluorescence images generated by confocal laser scanning microscopy. The software, dubbed SynPAnal (for Synaptic Puncta Analysis), streamlines quantification of puncta density and average intensity, thereby increasing data-analysis throughput compared to a manual method. SynPAnal is stand-alone software written in the Java programming language, and is therefore portable and platform-independent.
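
    A common core of puncta quantification is thresholding followed by connected-component labelling. The sketch below shows that idea with scipy.ndimage on a synthetic image; it is a generic illustration, not SynPAnal's actual (Java) algorithm, and the threshold and minimum-size values are arbitrary.

```python
import numpy as np
from scipy import ndimage

def quantify_puncta(img, thresh, min_px=3):
    """Count puncta (connected suprathreshold blobs of at least min_px pixels)
    and return their mean intensities."""
    mask = img > thresh
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_px]
    mean_int = [img[labels == lbl].mean() for lbl in keep]
    return len(keep), mean_int

# Synthetic image: two bright 2x2 puncta on a dark background.
img = np.zeros((20, 20))
img[2:4, 2:4] = 200.0
img[10:12, 14:16] = 150.0
n, means = quantify_puncta(img, thresh=50)
print(n, means)
```

    Puncta density would then be n divided by the measured dendrite length or region area.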

  17. Environmental, scanning electron and optical microscope image analysis software for determining volume and occupied area of solid-state fermentation fungal cultures.

    PubMed

    Osma, Johann F; Toca-Herrera, José L; Rodríguez-Couto, Susana

    2011-01-01

    Here we propose software for estimating the occupied area and volume of fungal cultures. The software was developed on a Matlab platform and allows analysis of high-definition images from optical, electron, or atomic force microscopes. In a first step, a single hypha grown on potato dextrose agar was monitored using optical microscopy to estimate the change in occupied area and volume. Weight measurements were carried out for comparison with the estimated volume, revealing a difference of less than 1.5%. Similarly, samples from two different solid-state fermentation cultures were analyzed using images from a scanning electron microscope (SEM) and an environmental SEM (ESEM). Occupied area and volume were calculated for both samples, and the results were correlated with the dry weight of the cultures. The estimated volume ratio and the dry weight ratio of the two cultures differed by 10%. This software is therefore a promising non-invasive technique for determining fungal biomass in solid-state cultures. PMID:21154435
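
    In outline, an occupied-area/volume estimate of this kind thresholds the image and converts pixel counts to physical units. The sketch below assumes, purely for illustration, that suprathreshold intensity is proportional to local height; the calibration constants are hypothetical and instrument-specific.

```python
import numpy as np

def area_and_volume(img, px_area_um2, px_to_um_height, thresh):
    """Estimate occupied area (um^2) and volume (um^3) from a grayscale
    micrograph, treating intensity above threshold as local height
    (an assumed calibration for illustration only)."""
    mask = img > thresh
    area = mask.sum() * px_area_um2
    volume = (img[mask] * px_to_um_height).sum() * px_area_um2
    return area, volume

img = np.zeros((100, 100))
img[40:60, 40:60] = 10.0          # a 20x20-pixel hypha-like region
a, v = area_and_volume(img, px_area_um2=0.25, px_to_um_height=0.1, thresh=1.0)
print(a, v)
```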

  18. Force Sensor-less Workspace Virtual Impedance Control Considering Resonant Vibration for Industrial Robot

    NASA Astrophysics Data System (ADS)

    Tungpataratanawong, Somsawas; Ohishi, Kiyoshi; Miyazaki, Toshimasa; Katsura, Seiichiro

    The motion control paradigm provides sufficient performance in many elementary industrial tasks. However, with stiff motion alone the robot cannot accommodate interaction forces under constrained motion. In such situations, the robot is required to interact with the environment. Conventional impedance control schemes require force-sensing devices to feed force signals back to the controllers. The force-sensing device is therefore indispensable, and the performance of the system depends on the quality of this device. This paper proposes a novel strategy for force sensor-less impedance control that uses a disturbance observer and a dynamic model of the robot to estimate the external force. In motion tasks, robust D-PD (derivative-PD) control is used with feedforward inverse-dynamics torque compensation to ensure robustness and high-speed response with a flexible-joint model. When the robot is in contact with the environment, the proposed force sensor-less impedance control with inner-loop D-PD control is utilized. The D-PD control uses both position and speed as references to implement the damping and stiffness characteristics of the virtual impedance model. In addition, gravity and friction force-feedback compensation is computed with the same dynamic model used in the external force estimation. The flexible-joint robot model is utilized in both the disturbance observer and the motion control design. The workspace impedance control for robot interaction with a human operator is implemented on an experimental three-degree-of-freedom (3-DOF) robot manipulator to demonstrate the ability and performance of the proposed force sensor-less scheme for flexible-joint industrial robots.
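
    The essence of virtual impedance control is making the endpoint behave like a chosen mass-spring-damper driven by the (here, observer-estimated) external force. The 1-DOF simulation below is a minimal sketch of that target dynamics, with arbitrary M, D, K values; it omits the D-PD inner loop and disturbance observer of the actual scheme.

```python
def simulate_impedance(F_ext, M=1.0, D=20.0, K=100.0, dt=1e-3, steps=5000):
    """1-DOF virtual impedance sketch: the controller makes the endpoint obey
    M*x'' + D*x' + K*x = F_ext, where F_ext would come from the disturbance-
    observer force estimate rather than a force sensor."""
    x = v = 0.0
    for _ in range(steps):            # semi-implicit Euler integration
        a = (F_ext - D * v - K * x) / M
        v += a * dt
        x += v * dt
    return x

# A constant 5 N estimated contact force should settle at x = F/K = 0.05 m.
print(simulate_impedance(5.0))
```

    Softer contact behaviour is obtained simply by lowering the virtual stiffness K, with no change to the underlying position controller.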

  19. Onboard utilization of ground control points for image correction. Volume 3: Ground control point simulation software design

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The software developed to simulate the ground control point navigation system is described. The Ground Control Point Simulation Program (GCPSIM) is designed as an analysis tool to predict the performance of the navigation system. The system consists of two star trackers, a global positioning system receiver, a gyro package, and a landmark tracker.

  20. Comparison of estimates of left ventricular ejection fraction obtained from gated blood pool imaging, different software packages and cameras

    PubMed Central

    Steyn, Rachelle; Boniaszczuk, John; Geldenhuys, Theodore

    2014-01-01

    Summary Objective To determine how two software packages, supplied by Siemens and Hermes, for processing gated blood pool (GBP) studies should be used in our department and whether the use of different cameras for the acquisition of raw data influences the results. Methods The study had two components. For the first component, 200 studies were acquired on a General Electric (GE) camera and processed three times by three operators using the Siemens and Hermes software packages. For the second part, 200 studies were acquired on two different cameras (GE and Siemens). The matched pairs of raw data were processed by one operator using the Siemens and Hermes software packages. Results The Siemens method consistently gave estimates that were 4.3% higher than the Hermes method (p < 0.001). The differences were not associated with any particular level of left ventricular ejection fraction (LVEF). There was no difference in the estimates of LVEF obtained by the three operators (p = 0.1794). The reproducibility of estimates was good. In 95% of patients, using the Siemens method, the SD of the three estimates of LVEF by operator 1 was ≤ 1.7, operator 2 was ≤ 2.1 and operator 3 was ≤ 1.3. The corresponding values for the Hermes method were ≤ 2.5, ≤ 2.0 and ≤ 2.1. There was no difference in the results of matched pairs of data acquired on different cameras (p = 0.4933). Conclusion Software packages for processing GBP studies are not interchangeable. The report should include the name and version of the software package used. Wherever possible, the same package should be used for serial studies. If this is not possible, the report should include the limits of agreement of the different packages. Data acquisition on different cameras did not influence the results. PMID:24844547
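
    The reproducibility summary used above (the SD of repeated LVEF estimates that 95% of patients fall at or below) can be sketched as follows, with simulated rather than real study data.

```python
import numpy as np

def repeatability_sd_95(estimates):
    """Per-patient SD across repeated LVEF estimates, summarized as the
    value that 95% of patients fall at or below."""
    sds = np.std(estimates, axis=1, ddof=1)   # SD across repeats, per patient
    return np.percentile(sds, 95)

# Hypothetical data: 200 patients x 3 repeated processings by one operator,
# with ~1 LVEF-point processing noise.
rng = np.random.default_rng(1)
true_lvef = rng.uniform(30, 70, size=(200, 1))
repeats = true_lvef + rng.normal(0, 1.0, size=(200, 3))
print(round(repeatability_sd_95(repeats), 2))
```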

  1. Osteolytica: An automated image analysis software package that rapidly measures cancer-induced osteolytic lesions in in vivo models with greater reproducibility compared to other commonly used methods.

    PubMed

    Evans, H R; Karmakharm, T; Lawson, M A; Walker, R E; Harris, W; Fellows, C; Huggins, I D; Richmond, P; Chantry, A D

    2016-02-01

    Methods currently used to analyse osteolytic lesions caused by malignancies such as multiple myeloma and metastatic breast cancer vary from basic 2-D X-ray analysis to 2-D images of micro-CT datasets analysed with non-specialised image software such as ImageJ. However, these methods have significant limitations. They do not capture 3-D data, they are time-consuming and they often suffer from inter-user variability. We therefore sought to develop a rapid and reproducible method to analyse 3-D osteolytic lesions in mice with cancer-induced bone disease. To this end, we have developed Osteolytica, an image analysis software method featuring an easy to use, step-by-step interface to measure lytic bone lesions. Osteolytica utilises novel graphics card acceleration (parallel computing) and 3-D rendering to provide rapid reconstruction and analysis of osteolytic lesions. To evaluate the use of Osteolytica we analysed tibial micro-CT datasets from murine models of cancer-induced bone disease and compared the results to those obtained using a standard ImageJ analysis method. Firstly, to assess inter-user variability we deployed four independent researchers to analyse tibial datasets from the U266-NSG murine model of myeloma. Using ImageJ, inter-user variability between the bones was substantial (±19.6%), in contrast to using Osteolytica, which demonstrated minimal variability (±0.5%). Secondly, tibial datasets from U266-bearing NSG mice or BALB/c mice injected with the metastatic breast cancer cell line 4T1 were compared to tibial datasets from aged and sex-matched non-tumour control mice. Analyses by both Osteolytica and ImageJ showed significant increases in bone lesion area in tumour-bearing mice compared to control mice. These results confirm that Osteolytica performs as well as the current 2-D ImageJ osteolytic lesion analysis method. However, Osteolytica is advantageous in that it analyses over the entirety of the bone volume (as opposed to selected 2-D images), it

  3. Osteolytica: An automated image analysis software package that rapidly measures cancer-induced osteolytic lesions in in vivo models with greater reproducibility compared to other commonly used methods☆

    PubMed Central

    Evans, H.R.; Karmakharm, T.; Lawson, M.A.; Walker, R.E.; Harris, W.; Fellows, C.; Huggins, I.D.; Richmond, P.; Chantry, A.D.

    2016-01-01

    Methods currently used to analyse osteolytic lesions caused by malignancies such as multiple myeloma and metastatic breast cancer vary from basic 2-D X-ray analysis to 2-D images of micro-CT datasets analysed with non-specialised image software such as ImageJ. However, these methods have significant limitations. They do not capture 3-D data, they are time-consuming and they often suffer from inter-user variability. We therefore sought to develop a rapid and reproducible method to analyse 3-D osteolytic lesions in mice with cancer-induced bone disease. To this end, we have developed Osteolytica, an image analysis software method featuring an easy-to-use, step-by-step interface to measure lytic bone lesions. Osteolytica utilises novel graphics card acceleration (parallel computing) and 3-D rendering to provide rapid reconstruction and analysis of osteolytic lesions. To evaluate the use of Osteolytica we analysed tibial micro-CT datasets from murine models of cancer-induced bone disease and compared the results to those obtained using a standard ImageJ analysis method. Firstly, to assess inter-user variability we deployed four independent researchers to analyse tibial datasets from the U266-NSG murine model of myeloma. Using ImageJ, inter-user variability between the bones was substantial (± 19.6%), in contrast to using Osteolytica, which demonstrated minimal variability (± 0.5%). Secondly, tibial datasets from U266-bearing NSG mice or BALB/c mice injected with the metastatic breast cancer cell line 4T1 were compared to tibial datasets from aged and sex-matched non-tumour control mice. Analyses by both Osteolytica and ImageJ showed significant increases in bone lesion area in tumour-bearing mice compared to control mice. These results confirm that Osteolytica performs as well as the current 2-D ImageJ osteolytic lesion analysis method. However, Osteolytica is advantageous in that it analyses the entire bone volume (as opposed to selected 2-D images

  4. Secure Video Surveillance System Acquisition Software

    SciTech Connect

    2009-12-04

    The SVSS Acquisition Software collects and displays video images from two cameras through a VPN and stores the images on a collection controller. The software allows a user to enter a time window to display up to 2 1/2 hours of video for review. It collects images from the cameras at a rate of 1 image per second and automatically deletes images older than 3 hours. The code runs in a Linux environment and can be run in a virtual machine on Windows XP. The Sandia software integrates different COTS packages to build the video review system.
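
    Collecting 1 image per second with a 3-hour retention window amounts to a rolling buffer of roughly 10,800 frames per camera. A minimal sketch of such a retention purge, in Python rather than the actual SVSS code, with the file layout and naming purely assumed:

```python
import time
from pathlib import Path

RETENTION_SECONDS = 3 * 60 * 60  # keep only the most recent 3 hours of frames

def purge_old_images(directory, now=None):
    """Delete JPEG frames whose modification time falls outside the window."""
    now = time.time() if now is None else now
    removed = []
    for frame in sorted(Path(directory).glob("*.jpg")):
        if now - frame.stat().st_mtime > RETENTION_SECONDS:
            frame.unlink()  # drop the expired frame
            removed.append(frame.name)
    return removed
```

    A capture loop would invoke a routine like this periodically so that disk usage stays bounded no matter how long the system runs.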

  6. Software for 3D radiotherapy dosimetry. Validation

    NASA Astrophysics Data System (ADS)

    Kozicki, Marek; Maras, Piotr; Karwowski, Andrzej C.

    2014-08-01

    The subject of this work is polyGeVero® software (GeVero Co., Poland), which has been developed to fill the requirements of fast calculations of 3D dosimetry data with the emphasis on polymer gel dosimetry for radiotherapy. This software comprises four workspaces that have been prepared for: (i) calculating calibration curves and calibration equations, (ii) storing the calibration characteristics of the 3D dosimeters, (iii) calculating 3D dose distributions in irradiated 3D dosimeters, and (iv) comparing 3D dose distributions obtained from measurements with the aid of 3D dosimeters and calculated with the aid of treatment planning systems (TPSs). The main features and functions of the software are described in this work. Moreover, the core algorithms were validated and the results are presented. The validation was performed using the data of the new PABIGnx polymer gel dosimeter. The polyGeVero® software simplifies and greatly accelerates the calculations of raw 3D dosimetry data. It is an effective tool for fast verification of TPS-generated plans for tumor irradiation when combined with a 3D dosimeter. Consequently, the software may facilitate calculations by the 3D dosimetry community. In this work, the calibration characteristics of the PABIGnx obtained through four calibration methods: multi vial, cross beam, depth dose, and brachytherapy, are discussed as well.
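
    Workspace (i) produces a calibration equation that workspace (iii) later inverts to turn measured dosimeter response into dose. A sketch of that round trip under a linear dose-response assumption; the numbers are illustrative, not PABIGnx data:

```python
import numpy as np

# Hypothetical multi-vial calibration: delivered dose (Gy) vs. measured response
dose = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
response = np.array([1.10, 2.05, 3.02, 3.98, 5.01, 5.95])

# Least-squares linear calibration: response = slope * dose + intercept
slope, intercept = np.polyfit(dose, response, 1)

def dose_from_response(r):
    """Invert the calibration equation to map a measured response to dose."""
    return (r - intercept) / slope
```

    In practice the software stores such curves per dosimeter batch, and real gel dosimeters may require higher-order fits over their linear range.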

  7. TIA Software User's Manual

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott; Syed, Hazari I.

    1995-01-01

    This user's manual describes the installation and operation of TIA, the Thermal-Imaging acquisition and processing Application, developed by the Nondestructive Evaluation Sciences Branch at NASA Langley Research Center, Hampton, Virginia. TIA is a user-friendly graphical interface application for the Macintosh II and higher series computers. The software has been developed to interface with the Perceptics/Westinghouse Pixelpipe(TM) and PixelStore(TM) NuBus cards and the GW Instruments MacADIOS(TM) input-output (I/O) card for the Macintosh for imaging thermal data. The software is also capable of performing generic image-processing functions.

  8. Design principles for innovative workspaces to increase efficiency in pharmaceutical R&D: lessons learned from the Novartis campus.

    PubMed

    Zoller, Frank A; Boutellier, Roman

    2013-04-01

    When managing R&D departments for increased efficiency and effectiveness the focus has often been on organizational structure. Space is, however, of outstanding importance in an environment of large task uncertainty, which is the case in pharmaceutical R&D. Based on case studies about the Novartis campus in Basel, Switzerland, we propose some design principles for laboratory and office workspace to support the strong and weak ties of scientist networks. We address the diversity of technologies and specialization, as well as the pressure on time-to-market, as major challenges in pharmaceutical R&D. PMID:23318251

  10. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    PubMed Central

    Cui, Yang; Hanley, Luke

    2015-01-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high-speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high-speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based mass spectrometer imaging and various other experiments in laser physics, physical chemistry, and surface science. PMID:26133872

  12. Hardware, software, and scanning issues encountered during small animal imaging of photodynamic therapy in the athymic nude rat

    NASA Astrophysics Data System (ADS)

    Cross, Nathan; Sharma, Rahul; Varghai, Davood; Spring-Robinson, Chandra; Oleinick, Nancy L.; Muzic, Raymond F., Jr.; Dean, David

    2007-02-01

    Small animal imaging devices are now commonly used to study gene activation and model the effects of potential therapies. We are attempting to develop a protocol that non-invasively tracks the effect of Pc 4-mediated photodynamic therapy (PDT) in a human glioma model using structural image data from micro-CT and/or micro-MR scanning and functional data from 18F-fluorodeoxy-glucose (18F-FDG) micro-PET imaging. Methods: Athymic nude rat U87-derived glioma was imaged by micro-PET and either micro-CT or micro-MR prior to Pc 4-PDT. Difficulty ensuring animal anesthesia and anatomic position during the micro-PET, micro-CT, and micro-MR scans required adaptation of the scanning bed hardware. Following Pc 4-PDT the animals were again 18F-FDG micro-PET scanned, euthanized one day later, and their brains were explanted and prepared for H&E histology. Histology provided the gold standard for tumor location and necrosis. The tumor and surrounding brain functional and structural image data were then isolated and coregistered. Results: Surprisingly, both the non-PDT and PDT groups showed an increase in tumor functional activity when we expected this signal to disappear in the group receiving PDT. Co-registration of the functional and structural image data was done manually. Discussion: As expected, micro-MR imaging provided better structural discrimination of the brain tumor than micro-CT. Contrary to expectations, in our preliminary analysis 18F-FDG micro-PET imaging does not readily discriminate the U87 tumors that received Pc 4-PDT. We continue to investigate the utility of micro-PET and other methods of functional imaging to remotely detect the specificity and sensitivity of Pc 4-PDT in deeply placed tumors.

  13. Software Reviews.

    ERIC Educational Resources Information Center

    Smith, Richard L., Ed.

    1985-01-01

    Reviews software packages by providing extensive descriptions and discussions of their strengths and weaknesses. Software reviewed include (1) "VISIFROG: Vertebrate Anatomy" (grade seven-adult); (2) "Fraction Bars Computer Program" (grades three to six) and (3) four telecommunications utilities. (JN)

  14. Comparison of performance of object-based image analysis techniques available in open source software (Spring and Orfeo Toolbox/Monteverdi) considering very high spatial resolution data

    NASA Astrophysics Data System (ADS)

    Teodoro, Ana C.; Araujo, Ricardo

    2016-01-01

    The use of unmanned aerial vehicles (UAVs) for remote sensing applications is becoming more frequent. However, this type of information can result in several software problems related to the huge amount of data available. Object-based image analysis (OBIA) has proven to be superior to pixel-based analysis for very high-resolution images. The main objective of this work was to explore the potentialities of the OBIA methods available in two different open source software applications, Spring and OTB/Monteverdi, in order to generate an urban land cover map. An orthomosaic derived from UAVs was considered, 10 different regions of interest were selected, and two different approaches were followed. The first one (Spring) uses the region growing segmentation algorithm followed by the Bhattacharya classifier. The second approach (OTB/Monteverdi) uses the mean shift segmentation algorithm followed by the support vector machine (SVM) classifier. Two strategies were followed: four classes were considered using Spring and thereafter seven classes were considered for OTB/Monteverdi. The SVM classifier produces slightly better results and presents a shorter processing time. However, the poor spectral resolution of the data (only RGB bands) is an important factor that limits the performance of the classifiers applied.
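
    The second approach (segment first, then classify the segments) can be illustrated with scikit-learn's MeanShift and SVC standing in for the OTB/Monteverdi implementations; the two-class synthetic "image" below is purely illustrative:

```python
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.svm import SVC

# Synthetic per-pixel RGB features for two spectrally distinct cover types
rng = np.random.default_rng(0)
grass = rng.normal([60, 140, 60], 8.0, size=(200, 3))
roof = rng.normal([180, 90, 80], 8.0, size=(200, 3))
pixels = np.vstack([grass, roof])

# Step 1: mean shift groups pixels into spectrally homogeneous segments
segments = MeanShift(bandwidth=40).fit_predict(pixels)

# Step 2: each segment's mean spectrum becomes one "object" to classify
labels = np.unique(segments)
objects = np.array([pixels[segments == s].mean(axis=0) for s in labels])

# Step 3: an SVM is trained on labelled objects (crude red-band rule here)
train_y = (objects[:, 0] > 120).astype(int)
clf = SVC(kernel="rbf", gamma="scale").fit(objects, train_y)
```

    With real UAV orthomosaics the objects would carry richer features (texture, shape) and the training labels would come from the selected regions of interest rather than a threshold rule.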

  15. A New Measurement Technique of the Characteristics of Nutrient Artery Canals in Tibias Using Materialise's Interactive Medical Image Control System Software

    PubMed Central

    Li, Jiantao; Zhang, Hao; Yin, Peng; Su, Xiuyun; Zhao, Zhe; Zhou, Jianfeng; Li, Chen; Li, Zhirui; Zhang, Lihai; Tang, Peifu

    2015-01-01

    We established a novel measurement technique to evaluate the anatomic information of nutrient artery canals using Mimics (Materialise's Interactive Medical Image Control System) software, which will provide full knowledge of nutrient artery canals to assist in the diagnosis of longitudinal fractures of the tibia and the choice of an optimal therapy. Here we collected Digital Imaging and Communications in Medicine (DICOM) data from 199 patients hospitalized in our hospital. Three-dimensional models of each tibia were reconstructed in Mimics. In 3-matic software, we marked five points on the tibia, located at the intercondylar eminence, tibial tuberosity, outer ostium, inner ostium, and bottom of the medial malleolus. We then recorded the Z-coordinate values of the five points and performed statistical analysis. Our results indicate that the foramen was absent in 9 (2.3%) tibias, and 379 (95.2%) tibias had a single nutrient foramen. Double foramina were observed in 10 (2.5%) tibias. The mean tibia length was 358 ± 22 mm. The mean foraminal index was 31.8% ± 3%. The mean distance between the tibial tuberosity and the foramen (TFD) was 66 ± 12 mm. Foraminal index has a significant positive correlation with TFD (r = 0.721, P < 0.01). Length of the nutrient artery canals has a significant negative correlation with TFD (r = −0.340, P < 0.01) and with foraminal index (r = −0.541, P < 0.01). PMID:26788498
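
    Given the five landmark Z-coordinates, the derived quantities reduce to simple arithmetic. A sketch assuming the foraminal index is the eminence-to-foramen distance as a percentage of tibia length; the landmark values and helper names are illustrative, not patient data:

```python
import numpy as np

def foraminal_index(eminence_z, foramen_z, malleolus_z):
    """Eminence-to-foramen distance as a percentage of total tibia length."""
    length = abs(malleolus_z - eminence_z)
    return 100.0 * abs(foramen_z - eminence_z) / length

def pearson_r(x, y):
    """Pearson correlation coefficient, as used for the reported r values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Illustrative landmarks (mm): proximal end at 0, foramen at 114, malleolus at 358
index = foraminal_index(0.0, 114.0, 358.0)  # about 31.8%
```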

  16. Software Program: Software Management Guidebook

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The purpose of this NASA Software Management Guidebook is twofold. First, this document defines the core products and activities required of NASA software projects. It defines life-cycle models and activity-related methods but acknowledges that no single life-cycle model is appropriate for all NASA software projects. It also acknowledges that the appropriate method for accomplishing a required activity depends on characteristics of the software project. Second, this guidebook provides specific guidance to software project managers and team leaders in selecting appropriate life cycles and methods to develop a tailored plan for a software engineering project.

  17. Development of kinematic equations and determination of workspace of a 6 DOF end-effector with closed-kinematic chain mechanism

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Pooran, Farhad J.

    1989-01-01

    This report presents results from the research grant entitled Active Control of Robot Manipulators, funded by the Goddard Space Flight Center under Grant NAG5-780, for the period July 1, 1988 to January 1, 1989. An analysis is presented of a 6 degree-of-freedom robot end-effector built to study telerobotic assembly of NASA hardware in space. Since the end-effector is required to perform high-precision motion in a limited workspace, closed-kinematic mechanisms are chosen for its design. A closed-form solution is obtained for the inverse kinematic problem, and an iterative procedure employing the Newton-Raphson method is proposed to solve the forward kinematic problem. A study of the end-effector workspace results in a general procedure for workspace determination based on link constraints. Computer simulation results are presented.
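
    The iterative forward-kinematics procedure can be sketched as a generic Newton-Raphson root finder applied to the leg-length constraint equations. The planar two-leg system below is a toy stand-in for the actual 6-DOF closed-kinematic chain:

```python
import numpy as np

def newton_raphson(f, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 iteratively, estimating the Jacobian by differences."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        eps = 1e-8
        J = np.empty((len(fx), len(x)))
        for j in range(len(x)):  # forward-difference Jacobian, column by column
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (f(x + dx) - fx) / eps
        x = x - np.linalg.solve(J, fx)  # Newton-Raphson update
    return x

# Toy constraints: a platform point attached by two legs of length 5 to
# base anchors at (0, 0) and (6, 0); forward kinematics finds its position.
def leg_constraints(p):
    x, y = p
    return np.array([x**2 + y**2 - 25.0, (x - 6.0) ** 2 + y**2 - 25.0])

pose = newton_raphson(leg_constraints, [1.0, 1.0])  # converges to (3, 4)
```

    For the real end-effector the constraint vector would hold six leg-length equations in the six pose parameters, and checking the same equations against link limits yields the workspace boundary.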

  18. Proprietary software

    NASA Technical Reports Server (NTRS)

    Marnock, M. J.

    1971-01-01

    The protection of intellectual property by a patent, a copyright, or trade secrets is reviewed. The present and future use of computers and software are discussed, along with the governmental uses of software. The popularity of contractual agreements for sale or lease of computer programs and software services is also summarized.

  19. Computer software.

    PubMed

    Rosenthal, L E

    1986-10-01

    Software is the component in a computer system that permits the hardware to perform the various functions that a computer system is capable of doing. The history of software and its development can be traced to the early nineteenth century. All computer systems are designed to utilize the "stored program concept" as first developed by Charles Babbage in the 1850s. The concept was lost until the mid-1940s, when modern computers made their appearance. Today, because of the complex and myriad tasks that a computer system can perform, there has been a differentiation of types of software. There is software designed to perform specific business applications. There is software that controls the overall operation of a computer system. And there is software that is designed to carry out specialized tasks. Regardless of types, software is the most critical component of any computer system. Without it, all one has is a collection of circuits, transistors, and silicon chips.

  20. Real-time navigation in transoral robotic nasopharyngectomy utilizing on table fluoroscopy and image overlay software: a cadaveric feasibility study.

    PubMed

    Tsang, Raymond K; Sorger, Jonathan M; Azizian, Mahdi; Holsinger, Christopher F

    2015-12-01

    The inability to integrate surgical navigation systems into current surgical robots is one of the reasons for the lack of development of robotic endoscopic skull base surgery. We describe an experiment to adapt current technologies for real-time navigation during transoral robotic nasopharyngectomy. A cone-beam CT was performed with a robotic C-arm after injecting contrast into the common carotid artery. A 3D reconstruction of the skull images, with the internal carotid artery (ICA) highlighted in red, was projected on the console. Robotic nasopharyngectomy was then performed. Fluoroscopy was performed with the C-arm, and the fluoroscopic image was then overlaid on the reconstructed skull image. The relationship of the robotic instruments to the bony landmarks and ICA could then be viewed in real time, acting as a surgical navigation system. Navigation during robotic skull base surgery is feasible with available technologies and can increase its safety.

  1. Towards a software profession

    NASA Technical Reports Server (NTRS)

    Berard, Edward V.

    1986-01-01

    An increasing number of programmers have attempted to change their image. They have made it plain that they wish not only to be taken seriously, but also to be regarded as professionals. Many programmers now wish to be referred to as software engineers. If programmers wish to be considered professionals in every sense of the word, two obstacles must be overcome: the inability to think of software as a product, and the idea that little or no skill is required to create and handle software throughout its life cycle. The steps to be taken toward professionalization are outlined along with recommendations.

  2. Revealing text in a complexly rolled silver scroll from Jerash with computed tomography and advanced imaging software

    NASA Astrophysics Data System (ADS)

    Hoffmann Barfod, Gry; Larsen, John Møller; Lichtenberger, Achim; Raja, Rubina

    2015-12-01

    Throughout Antiquity magical amulets written on papyri, lead and silver were used for apotropaic reasons. While papyri often can be unrolled and deciphered, metal scrolls, usually very thin and tightly rolled up, cannot easily be unrolled without damaging the metal. This leaves us with unreadable results due to the damage done or with the decision not to unroll the scroll. The texts vary greatly and tell us about the cultural environment and local as well as individual practices at a variety of locations across the Mediterranean. Here we present the methodology and the results of the digital unfolding of a silver sheet from Jerash in Jordan from the mid-8th century CE. The scroll was inscribed with 17 lines in presumed pseudo-Arabic as well as some magical signs. The successful unfolding shows that it is possible to digitally unfold complexly folded scrolls, but that it requires a combination of the know-how of the software and linguistic knowledge.

  3. Global Workspace Dynamics: Cortical “Binding and Propagation” Enables Conscious Contents

    PubMed Central

    Baars, Bernard J.; Franklin, Stan; Ramsoy, Thomas Zoega

    2013-01-01

    A global workspace (GW) is a functional hub of binding and propagation in a population of loosely coupled signaling elements. In computational applications, GW architectures recruit many distributed, specialized agents to cooperate in resolving focal ambiguities. In the brain, conscious experiences may reflect a GW function. For animals, the natural world is full of unpredictable dangers and opportunities, suggesting a general adaptive pressure for brains to resolve focal ambiguities quickly and accurately. GW theory aims to understand the differences between conscious and unconscious brain events. In humans and related species the cortico-thalamic (C-T) core is believed to underlie conscious aspects of perception, thinking, learning, feelings of knowing (FOK), felt emotions, visual imagery, working memory, and executive control. Alternative theoretical perspectives are also discussed. The C-T core has many anatomical hubs, but conscious percepts are unitary and internally consistent at any given moment. Over time, conscious contents constitute a very large, open set. This suggests that a brain-based GW capacity cannot be localized in a single anatomical hub. Rather, it should be sought in a functional hub – a dynamic capacity for binding and propagation of neural signals over multiple task-related networks, a kind of neuronal cloud computing. In this view, conscious contents can arise in any region of the C-T core when multiple input streams settle on a winner-take-all equilibrium. The resulting conscious gestalt may ignite an any-to-many broadcast, lasting ∼100–200 ms, and trigger widespread adaptation in previously established networks. To account for the great range of conscious contents over time, the theory suggests an open repertoire of binding coalitions that can broadcast via theta/gamma or alpha/gamma phase coupling, like radio channels competing for a narrow frequency band. Conscious moments are thought to hold only 1–4 unrelated items; this

  4. IRIS explorer software for radial-depth cueing reovirus particles and other macromolecular structures determined by cryoelectron microscopy and image reconstruction.

    PubMed

    Spencer, S M; Sgro, J Y; Dryden, K A; Baker, T S; Nibert, M L

    1997-10-01

    Structures of biological macromolecules determined by transmission cryoelectron microscopy (cryo-TEM) and three-dimensional image reconstruction are often displayed as surface-shaded representations with depth cueing along the viewed direction (Z cueing). Depth cueing to indicate distance from the center of virus particles (radial-depth cueing, or R cueing) has also been used. We have found that a style of R cueing in which color is applied in smooth or discontinuous gradients using the IRIS Explorer software is an informative technique for displaying the structures of virus particles solved by cryo-TEM and image reconstruction. To develop and test these methods, we used existing cryo-TEM reconstructions of mammalian reovirus particles. The newly applied visualization techniques allowed us to discern several new structural features, including sites in the inner capsid through which the viral mRNAs may be extruded after they are synthesized by the reovirus transcriptase complexes. To demonstrate the broad utility of the methods, we also applied them to cryo-TEM reconstructions of human rhinovirus, native and swollen forms of cowpea chlorotic mottle virus, truncated core of pyruvate dehydrogenase complex from Saccharomyces cerevisiae, and flagellar filament of Salmonella typhimurium. We conclude that R cueing with color gradients is a useful tool for displaying virus particles and other macromolecules analyzed by cryo-TEM and image reconstruction.
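
    The R-cueing idea, colouring by distance from the particle centre rather than by viewing depth, reduces to binning each point's radius into shells. A minimal voxel-grid sketch; the band edges and grid size are illustrative, and the IRIS Explorer pipeline itself applies the gradients to surface renderings:

```python
import numpy as np

def radial_color_indices(shape, center, band_edges):
    """Label each voxel by the radial band (shell) its radius falls into,
    so a smooth or discontinuous colour gradient can be applied per band."""
    zz, yy, xx = np.indices(shape)
    r = np.sqrt((zz - center[0]) ** 2
                + (yy - center[1]) ** 2
                + (xx - center[2]) ** 2)
    return np.digitize(r, band_edges)

# 32^3 grid centred on the particle; three shells, e.g. inner capsid,
# outer capsid, and exterior (radii in voxels, purely illustrative)
idx = radial_color_indices((32, 32, 32), (16, 16, 16),
                           band_edges=[5.0, 10.0, 15.0])
```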

  5. Phantom evaluation of an image-guided navigation system based on electromagnetic tracking and open source software

    NASA Astrophysics Data System (ADS)

    Lin, Ralph; Cheng, Peng; Lindisch, David; Banovac, Filip; Lee, Justin; Cleary, Kevin

    2008-03-01

    We have developed an image-guided navigation system using electromagnetically-tracked tools, with potential applications for abdominal procedures such as biopsies, radiofrequency ablations, and radioactive seed placements. We present the results of two phantom studies using our navigation system in a clinical environment. In the first study, a physician and medical resident performed a total of 18 targeting passes in the abdomen of an anthropomorphic phantom based solely upon image guidance. The distance between the target and needle tip location was measured based on confirmatory scans which gave an average of 3.56 mm. In the second study, three foam nodules were placed at different depths in a gelatin phantom. Ten targeting passes were attempted in each of the three depths. Final distances between the target and needle tip were measured which gave an average of 3.00 mm. In addition to these targeting studies, we discuss our refinement to the standard four-quadrant image-guided navigation user interface, based on clinician preferences. We believe these refinements increase the usability of our system while decreasing targeting error.

  6. Software safety

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy

    1987-01-01

    Software safety and its relationship to other qualities are discussed. It is shown that standard reliability and fault tolerance techniques will not solve the safety problem for the present. A new attitude requires: looking at what you do NOT want software to do along with what you want it to do; and assuming things will go wrong. New procedures and changes to entire software development process are necessary: special software safety analysis techniques are needed; and design techniques, especially eliminating complexity, can be very helpful.

  7. Software Reviews.

    ERIC Educational Resources Information Center

    Beezer, Robert A.; And Others

    1988-01-01

    Reviews for three software packages are given. Those packages are: Linear Algebra Computer Companion; Probability and Statistics Demonstrations and Tutorials; and Math Utilities: CURVES, SURFS, AND DIFFS. (PK)

  9. Application of Technical Measures and Software in Constructing Photorealistic 3D Models of Historical Building Using Ground-Based and Aerial (UAV) Digital Images

    NASA Astrophysics Data System (ADS)

    Zarnowski, Aleksander; Banaszek, Anna; Banaszek, Sebastian

    2015-12-01

    Preparing digital documentation of historical buildings is a form of protecting cultural heritage. Recently there have been several intensive studies using non-metric digital images to construct realistic 3D models of historical buildings. Increasingly often, non-metric digital images are obtained with unmanned aerial vehicles (UAV). Technologies and methods of UAV flights are quite different from traditional photogrammetric approaches. The lack of technical guidelines for using drones inhibits the process of implementing new methods of data acquisition. This paper presents the results of experiments in the use of digital images in the construction of a photo-realistic 3D model of a historical building (Raphaelsohns' Sawmill in Olsztyn). The aim of the study at the first stage was to determine the meteorological and technical conditions for the acquisition of aerial and ground-based photographs. At the next stage, the technology of 3D modelling was developed using only ground-based or only aerial non-metric digital images. At the last stage of the study, an experiment was conducted to assess the possibility of 3D modelling with the comprehensive use of aerial (UAV) and ground-based digital photographs in terms of their labour intensity and precision of development. Data integration and automatic photo-realistic 3D construction of the models was done with Pix4Dmapper and Agisoft PhotoScan software. Analyses have shown that when certain parameters established in an experiment are kept, the process of developing the stock-taking documentation for a historical building moves from the standards of analogue to digital technology with considerably reduced cost.

  10. Software Bridge

    NASA Technical Reports Server (NTRS)

    1995-01-01

    I-Bridge is a commercial version of software developed by I-Kinetics under a NASA Small Business Innovation Research (SBIR) contract. The software allows users of Windows applications to gain quick, easy access to databases, programs and files on UNIX services. Information goes directly onto spreadsheets and other applications; users need not manually locate, transfer and convert data.

  11. Software Reviews.

    ERIC Educational Resources Information Center

    Miller, Anne, Ed.; Radziemski, Cathy, Ed.

    1988-01-01

    Reviews two software packages for the Macintosh series. "Course Builder 2.0," a courseware authoring system, allows the user to create programs which stand alone and may be used independently in the classroom. "World Builder," an artificial intelligence software package, allows creative thinking, problem-solving, and decision-making. (YP)

  12. Software Reviews.

    ERIC Educational Resources Information Center

    Bitter, Gary G., Ed.

    1990-01-01

    Reviews three computer software: (1) "Elastic Lines: The Electronic Geoboard" on elementary geometry; (2) "Wildlife Adventures: Whales" on environmental science; and (3) "What Do You Do with a Broken Calculator?" on computation and problem solving. Summarizes the descriptions, strengths and weaknesses, and applications of each software. (YP)

  13. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen

    1988-01-01

    Presents reviews of six computer software programs for teaching science. Provides the publisher, grade level, cost, and descriptions of software, including: (1) "Recycling Logic"; (2) "Introduction to Biochemistry"; (3) "Food for Thought"; (4) "Watts in a Home"; (5) "Geology in Action"; and (6) "Biomes." All are for Apple series microcomputers.…

  14. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1988

    1988-01-01

    Reviews six software packages for the Apple II family. Programs reviewed include "Science Courseware: Earth Science Series"; "Heat and Light"; "In Search of Space: Introduction to Model Rocketry"; "Drug Education Series: Drugs--Their Effects on You'"; "Uncertainties and Measurement"; and "Software Films: Learning about Science Series," which…

  15. Diagnostic use of facial image analysis software in endocrine and genetic disorders: review, current results and future perspectives.

    PubMed

    Kosilek, R P; Frohner, R; Würtz, R P; Berr, C M; Schopohl, J; Reincke, M; Schneider, H J

    2015-10-01

    Cushing's syndrome (CS) and acromegaly are endocrine diseases that are currently diagnosed with a delay of several years from disease onset. Novel diagnostic approaches and increased awareness among physicians are needed. Face classification technology has recently been introduced as a promising diagnostic tool for CS and acromegaly in pilot studies. It has also been used to classify various genetic syndromes using regular facial photographs. The authors provide a basic explanation of the technology, review available literature regarding its use in a medical setting, and discuss possible future developments. The method the authors have employed in previous studies uses standardized frontal and profile facial photographs for classification. Image analysis is based on applying mathematical functions evaluating geometry and image texture to a grid of nodes semi-automatically placed on relevant facial structures, yielding a binary classification result. Ongoing research focuses on improving diagnostic algorithms of this method and bringing it closer to clinical use. Regarding future perspectives, the authors propose an online interface that facilitates submission of patient data for analysis and retrieval of results as a possible model for clinical application. PMID:26162404

  16. Workspace design for crane cabins applying a combined traditional approach and the Taguchi method for design of experiments.

    PubMed

    Spasojević Brkić, Vesna K; Veljković, Zorica A; Golubović, Tamara; Brkić, Aleksandar Dj; Kosić Šotić, Ivana

    2016-01-01

    Procedures in the development process of crane cabins are arbitrary and subjective. Since approximately 42% of incidents in the construction industry are linked to cranes, there is a need to collect fresh anthropometric data and provide additional recommendations for design. In this paper, dimensioning of the crane cabin interior space was carried out using anthropometric measurements from a sample of 64 crane operators in the Republic of Serbia, describing the workspace with 10 design parameters derived from nine anthropometric measurements taken from each operator. This paper applies experiments run via full factorial designs using a combined traditional and Taguchi approach. The experiments indicated which design parameters are influenced by which anthropometric measurements and to what degree. The results are expected to be of use to crane cabin designers and should assist them in designing a cabin that leads to less strenuous sitting postures and less fatigue for operators, thus improving safety and accident prevention.
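The full factorial experiments described above can be sketched in a few lines; this is a minimal illustration of design generation and main-effect estimation, not the authors' code, and the three factors and the toy response model are hypothetical:

```python
from itertools import product

def full_factorial(levels_per_factor):
    """Enumerate every combination of factor levels (a full factorial design)."""
    return list(product(*levels_per_factor))

def main_effect(runs, responses, factor):
    """Mean response at the high (+1) coded level minus mean response at the low (-1) level."""
    hi = [r for run, r in zip(runs, responses) if run[factor] == +1]
    lo = [r for run, r in zip(runs, responses) if run[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Hypothetical example: three cabin-design factors at two coded levels each.
runs = full_factorial([(-1, +1)] * 3)   # 2^3 = 8 runs
# Toy response model: factor 0 contributes strongly, factor 1 weakly, factor 2 not at all.
responses = [2.0 * a + 0.5 * b for a, b, c in runs]
```

Ranking the factors by `main_effect` then shows which design parameter each anthropometric measurement influences most, which is the kind of result the abstract reports.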

  17. Kinematic Analysis and Synthesis of a 3-URU Pure Rotational Parallel Mechanism with Respect to Singularity and Workspace

    NASA Astrophysics Data System (ADS)

    Huda, Syamsul; Takeda, Yukio

    This paper concerns the kinematics and dimensional synthesis of a three universal-revolute-universal (3-URU) pure rotational parallel mechanism. The mechanism is composed of a base, a platform and three symmetric limbs consisting of U-R-U joints. It is a spatial non-overconstrained mechanism with three degrees of freedom. The joints in each limb are arranged so as to produce pure rotational motion of the platform about a specific point. Equations for inverse displacement analysis and singularities were derived to investigate the relationship of the kinematic constants to the solutions of the inverse kinematics and the singularities. Based on the results, a dimensional synthesis procedure for the 3-URU parallel mechanism considering singularities and the workspace was proposed. A numerical example was also presented to illustrate the synthesis method.

  18. Designing, Supporting, and Sustaining an Online Community of Practice: NASA EPO Workspace as an Ongoing Exploration of the Value of Community

    NASA Astrophysics Data System (ADS)

    Davey, B.; Davis, H. B.

    2015-12-01

    Increasingly, geographically diverse organizations, like NASA's Science Mission Directorate Education and Public Outreach personnel (SMD EPO), are looking for ways to facilitate group interactions in meaningful ways while limiting costs. Towards this end, of particular interest, and showing great potential, are communities of practice. Communities of practice represent relationships in real-time between and among people sharing a common practice. They facilitate the sharing of information, the building of collective knowledge, and the growth of the principles of practice. In 2010-11, SMD EPO established a website to support EPO professionals, facilitate headquarters reporting, and foster a community of practice. The purpose of this evaluation is to examine the design and use of the workspace and the value created for both individual community members and SMD EPO, the sponsoring organization. The online workspace was launched in 2010-11 for the members of NASA's SMD EPO community. It was designed to help facilitate the efficient sharing of information, be a central repository for resources, help facilitate and support knowledge creation, and ultimately lead to the development of an online community of practice. This study examines the role of the online workspace component in the work of a community of practice. Much has been studied revealing the importance of communities of practice to organizations, project success, and knowledge management, and some of these same successes hold true for virtual communities of practice. Additionally, we look at the outcomes of hosting the online community over these past years with respect to knowledge building and personal and organizational value, the effects on professional development opportunities, how community members have benefited, and how the workspace has evolved to better serve the community.

  19. Determination of Flow Rates in Capillary Liquid Chromatography Coupled to a Nanoelectrospray Source using Droplet Image Analysis Software.

    PubMed

    Cohen, Alejandro M; Soto, Axel J; Fawcett, James P

    2016-08-01

    Liquid chromatography coupled to electrospray tandem mass spectrometry (LC-ESI-MS/MS) is widely used in proteomic and metabolomic workflows. Considerable analytical improvements have been observed when the components of LC systems are scaled down. Currently, nano-ESI is typically done at capillary LC flow rates ranging from 200 to 300 nL/min. At these flow rates, troubleshooting and leak detection of LC systems have become increasingly challenging. In this paper we present a novel proof-of-concept approach to measure flow rates at the tip of electrospray emitters when the ionization voltage is turned off. This was achieved by estimating the changes in the droplet volume over time using digital image analysis. The results are comparable with the traditional methods of measuring flow rates, with the potential advantages of being fully automatable and nondisruptive. PMID:27351615
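The measurement described above reduces to estimating droplet volume at each frame and fitting a slope over time. A minimal sketch under the simplifying assumption of a spherical droplet whose radius has already been extracted by image analysis (the real emitter geometry and the image-processing step are not shown):

```python
import math

def droplet_volume_nl(radius_um):
    """Volume of an assumed-spherical droplet in nanolitres, from its radius in micrometres."""
    v_um3 = (4.0 / 3.0) * math.pi * radius_um ** 3
    return v_um3 * 1e-6   # 1 nL = 1e6 cubic micrometres

def flow_rate_nl_per_min(times_s, radii_um):
    """Least-squares slope of droplet volume vs. time, converted to nL/min."""
    vols = [droplet_volume_nl(r) for r in radii_um]
    n = len(times_s)
    t_mean = sum(times_s) / n
    v_mean = sum(vols) / n
    num = sum((t - t_mean) * (v - v_mean) for t, v in zip(times_s, vols))
    den = sum((t - t_mean) ** 2 for t in times_s)
    return (num / den) * 60.0   # nL/s -> nL/min
```

With radii sampled once per second from a droplet growing at a capillary-LC rate, the fitted slope recovers the flow rate in the 200-300 nL/min range the abstract mentions.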

  20. SAR Product Control Software

    NASA Astrophysics Data System (ADS)

    Meadows, P. J.; Hounam, D.; Rye, A. J.; Rosich, B.; Börner, T.; Closa, J.; Schättler, B.; Smith, P. J.; Zink, M.

    2003-03-01

    As SAR instruments and their operating modes become more complex, as new applications place more and more demands on image quality and as our understanding of their imperfections becomes more sophisticated, there is increasing recognition that SAR data quality has to be controlled more completely to keep pace. The SAR product CONtrol software (SARCON) is a comprehensive SAR product control software suite tailored to the latest generation of SAR sensors. SARCON profits from the most up-to-date thinking on SAR image performance derived from other spaceborne and airborne SAR projects and is based on the newest applications. This paper gives an overview of the structure and the features of this new software tool, which is a product of a co-operation between teams at BAE SYSTEMS Advanced Technology Centre and DLR under contract to ESA (ESRIN). Work on SARCON began in 1999 and is continuing.

  1. Software Smarts

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Under an SBIR (Small Business Innovative Research) contract with Johnson Space Center, Knowledge Based Systems Inc. (KBSI) developed an intelligent software environment for modeling and analyzing mission planning activities, simulating behavior, and, using a unique constraint propagation mechanism, updating plans with each change in mission planning activities. KBSI developed this technology into a commercial product, PROJECTLINK, a two-way bridge between PROSIm, KBSI's process modeling and simulation software and leading project management software like Microsoft Project and Primavera's SureTrak Project Manager.

  2. Software testing

    NASA Astrophysics Data System (ADS)

    Price-Whelan, Adrian M.

    2016-01-01

    Now more than ever, scientific results are dependent on sophisticated software and analysis. Why should we trust code written by others? How do you ensure your own code produces sensible results? How do you make sure it continues to do so as you update, modify, and add functionality? Software testing is an integral part of code validation and writing tests should be a requirement for any software project. I will talk about Python-based tools that make managing and running tests much easier and explore some statistics for projects hosted on GitHub that contain tests.
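In the spirit of the abstract, a minimal pytest-style example of the kind of test it advocates; the `magnitude` helper is a hypothetical stand-in for real analysis code, and the asserts document its contract:

```python
import math

def magnitude(flux, zero_point=25.0):
    """Convert a positive flux to an astronomical magnitude (stand-in analysis code)."""
    if flux <= 0:
        raise ValueError("flux must be positive")
    return zero_point - 2.5 * math.log10(flux)

def test_magnitude_known_value():
    # A known input/output pair pins down the behaviour as the code evolves.
    assert magnitude(100.0) == 20.0

def test_magnitude_rejects_nonpositive_flux():
    # Error handling is part of the contract and should be tested too.
    try:
        magnitude(0.0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

A test runner such as pytest discovers the `test_*` functions automatically and re-runs them on every change, which is what makes the asserts a continuing guarantee rather than a one-off check.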

  3. Comparative evaluation of cephalometric measurements of monitor-displayed images by Nemoceph software and its hard copy by manual tracing

    PubMed Central

    Tikku, Tripti; Khanna, Rohit; Maurya, R.P.; Srivastava, Kamna; Bhushan, Rastra

    2014-01-01

    Objective The aim of this study was to evaluate and compare the cephalometric measurements obtained from computerized tracing of direct digital radiographs and hand tracing of their digital radiographic printouts. Material and methods The soft and hard copies of pre-treatment lateral cephalograms of 40 subjects (both males and females) within the age group of 10–30 years, irrespective of the type of malocclusion, were taken. In total, 26 measurements (13 linear and 13 angular) were obtained using both the manual and the digital technique. Results Amongst the linear measurements, Anterior facial height (AFH), Posterior facial height (PFH), Upper lip length (ULL), Lower lip length (LLL), Anterior cranial base length (ACBL), Posterior cranial base length (PCBL), Maxillary length (MxL), Mandibular length (MdL), Lower incisor to NB line (L1 to NB) and Lower lip protrusion (LLP) showed statistically significant differences between the two techniques but were clinically acceptable (the differences between the digital and manual techniques were less than 2 units, where 1 unit = 1 mm for linear measurements and 1° for angular measurements). Amongst the angular measurements, only the occlusal plane angle showed a statistically significant difference between the two techniques that was not clinically acceptable. Conclusion Digital measurements obtained from monitor-displayed images (soft copy) were found to be reproducible and comparable to the manual method done on its hard copy for all the measurements except the occlusal plane angle (SN-occlusal plane). PMID:25737917
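The study's clinical-acceptability criterion (mean digital-manual difference under 2 units) is simple to express in code; a sketch with hypothetical measurement values, not the study's data:

```python
import math

def paired_differences(digital, manual):
    """Per-subject digital-minus-manual differences with their mean and sample SD."""
    diffs = [d - m for d, m in zip(digital, manual)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in diffs) / (n - 1))
    return mean, sd

def clinically_acceptable(digital, manual, limit=2.0):
    """Acceptable when the mean digital-manual discrepancy is under `limit` units
    (1 unit = 1 mm for linear, 1 degree for angular measurements)."""
    mean, _ = paired_differences(digital, manual)
    return abs(mean) < limit
```

A statistically significant difference can still pass this check, which is exactly the distinction the abstract draws for the ten linear measurements.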

  4. CSAM Metrology Software Tool

    NASA Technical Reports Server (NTRS)

    Vu, Duc; Sandor, Michael; Agarwal, Shri

    2005-01-01

    CSAM Metrology Software Tool (CMeST) is a computer program for analysis of false-color CSAM images of plastic-encapsulated microcircuits. (CSAM signifies C-mode scanning acoustic microscopy.) The colors in the images indicate areas of delamination within the plastic packages. Heretofore, the images have been interpreted by human examiners. Hence, interpretations have not been entirely consistent and objective. CMeST processes the color information in image-data files to detect areas of delamination without incurring inconsistencies of subjective judgement. CMeST can be used to create a database of baseline images of packages acquired at given times for comparison with images of the same packages acquired at later times. Any area within an image can be selected for analysis, which can include examination of different delamination types by location. CMeST can also be used to perform statistical analyses of image data. Results of analyses are available in a spreadsheet format for further processing. The results can be exported to any data-base-processing software.

  5. Software Reviews.

    ERIC Educational Resources Information Center

    Dwyer, Donna; And Others

    1989-01-01

    Reviewed are seven software packages for Apple and IBM computers. Included are: "Toxicology"; "Science Corner: Space Probe"; "Alcohol and Pregnancy"; "Science Tool Kit Plus"; Computer Investigations: Plant Growth"; "Climatrolls"; and "Animal Watch: Whales." (CW)

  6. Software Reviews.

    ERIC Educational Resources Information Center

    McGrath, Diane

    1990-01-01

    Reviews two programs: (1) "The Weather Machine" on understanding weather and weather forecasting and (2) "The Mystery of the Hotel Victoria" on problem solving in mathematics. Presents the descriptions, advantages, and weaknesses of the software. (YP)

  7. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1990-01-01

    Reviewed are six computer software packages including "Lunar Greenhouse,""Dyno-Quest,""How Weather Works,""Animal Trackers,""Personal Science Laboratory," and "The Skeletal and Muscular Systems." Availability, functional, and hardware requirements are discussed. (CW)

  8. Software Reviews.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1990

    1990-01-01

    Reviewed are three computer software packages including "Martin Luther King, Jr.: Instant Replay of History,""Weeds to Trees," and "The New Print Shop, School Edition." Discussed are hardware requirements, costs, grade levels, availability, emphasis, strengths, and weaknesses. (CW)

  9. Software Reviews.

    ERIC Educational Resources Information Center

    Davis, Shelly J., Ed.; Knaupp, Jon, Ed.

    1984-01-01

    Reviewed is computer software on: (1) classification of living things, a tutorial program for grades 5-10; and (2) polynomial practice using tiles, a drill-and-practice program for algebra students. (MNS)

  10. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1987-01-01

    Reviews seven computer software programs that can be used in science education programs. Describes courseware which deals with muscles and bones, terminology, classifying animals without backbones, molecular structures, drugs, genetics, and shaping the earth's surface. (TW)

  11. Software Reviews.

    ERIC Educational Resources Information Center

    Mathematics and Computer Education, 1988

    1988-01-01

    Presents reviews of six software packages. Includes (1) "Plain Vanilla Statistics"; (2) "MathCAD 2.0"; (3) "GrFx"; (4) "Trigonometry"; (5) "Algebra II"; (6) "Algebra Drill and Practice I, II, and III." (PK)

  12. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Eugene T., Ed.

    1988-01-01

    Presents reviews by classroom teachers of software for teaching science. Includes material on the work of geologists, genetics, earth science, classification of living things, astronomy, endangered species, skeleton, drugs, and heartbeat. Provides information on availability and equipment needed. (RT)

  13. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1987-01-01

    Provides a review of four science software programs. Includes topics such as plate tectonics, laboratory experiment simulations, the human body, and light and temperature. Contains information on ordering and reviewers' comments. (ML)

  14. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1987-01-01

    Provides reviews of six computer software programs designed for use in elementary science education programs. Provides the title, publisher, grade level, and descriptions of courseware on ant farms, drugs, genetics, beachcombing, matter, and test generation. (TW)

  15. Remote Viewer for Maritime Robotics Software

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki; Wolf, Michael; Huntsberger, Terrance L.; Howard, Andrew B.

    2013-01-01

    This software is a viewer program for maritime robotics software that provides a 3D visualization of the boat pose, its position history, ENC (Electronic Navigational Chart) information, camera images, map overlay, and detected tracks.

  16. EXSdetect: an end-to-end software for extended source detection in X-ray images: application to Swift-XRT data

    NASA Astrophysics Data System (ADS)

    Liu, T.; Tozzi, P.; Tundo, E.; Moretti, A.; Wang, J.-X.; Rosati, P.; Guglielmetti, F.

    2013-01-01

    Aims: We present a stand-alone software package (named EXSdetect) for the detection of extended sources in X-ray images. Our goal is to provide a flexible tool capable of detecting extended sources down to the lowest flux levels attainable within instrumental limitations, while maintaining robust photometry, high completeness, and low contamination, regardless of source morphology. EXSdetect was developed mainly to exploit the ever-increasing wealth of archival X-ray data, but is also ideally suited to explore the scientific capabilities of future X-ray facilities, with a strong focus on investigations of distant groups and clusters of galaxies. Methods: EXSdetect combines a fast Voronoi tessellation code with a friends-of-friends algorithm and an automated deblending procedure. The values of key parameters are matched to fundamental telescope properties such as angular resolution and instrumental background. In addition, the software is designed to permit extensive tests of its performance via simulations of a wide range of observational scenarios. Results: We applied EXSdetect to simulated data fields modeled to realistically represent the Swift X-ray Cluster Survey (SXCS), which is based on archival data obtained by the X-ray telescope onboard the Swift satellite. We achieve more than 90% completeness for extended sources comprising at least 80 photons in the 0.5-2 keV band, a limit that corresponds to 10^-14 erg cm^-2 s^-1 for the deepest SXCS fields. This detection limit is comparable to the one attained by the most sensitive cluster surveys conducted with much larger X-ray telescopes. While evaluating the performance of EXSdetect, we also explored the impact of improved angular resolution and discuss the ideal properties of the next generation of X-ray survey missions. The Python code EXSdetect is available on the SXCS website http://adlibitum.oats.inaf.it/sxcs
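The friends-of-friends step that EXSdetect combines with Voronoi tessellation can be illustrated with a minimal, quadratic-time sketch (this is a generic illustration of the algorithm, not the EXSdetect implementation): two photons are "friends" if they lie closer than a linking length, and sources are the connected components of that friendship graph.

```python
def friends_of_friends(points, linking_length):
    """Label 2-D points by connected component, where two points are linked
    when their separation is at most `linking_length`. O(n^2), for illustration."""
    n = len(points)
    ll2 = linking_length ** 2
    labels = [-1] * n
    next_label = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        stack = [i]
        while stack:                      # flood-fill the component containing i
            j = stack.pop()
            xj, yj = points[j]
            for k in range(n):
                if labels[k] == -1:
                    xk, yk = points[k]
                    if (xj - xk) ** 2 + (yj - yk) ** 2 <= ll2:
                        labels[k] = labels[j]
                        stack.append(k)
        next_label += 1
    return labels
```

In a real detector pipeline, the linking length would be tied to the telescope's angular resolution, which is the kind of parameter matching the abstract describes.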

  17. SU-E-I-63: Quantitative Evaluation of the Effects of Orthopedic Metal Artifact Reduction (OMAR) Software On CT Images for Radiotherapy Simulation

    SciTech Connect

    Jani, S

    2014-06-01

    Purpose: CT simulation for patients with metal implants can often be challenging due to artifacts that obscure tumor/target delineation and normal organ definition. Our objective was to evaluate the effectiveness of Orthopedic Metal Artifact Reduction (OMAR), a commercially available software package, in reducing metal-induced artifacts and its effect on computed dose during treatment planning. Methods: CT images of water surrounding metallic cylindrical rods made of aluminum, copper and iron were studied in terms of the spread in Hounsfield Units (HU). Metal-induced artifacts were characterized in terms of an HU/Volume Histogram (HVH) using the Pinnacle treatment planning system. Effects of OMAR on enhancing our ability to delineate organs on CT and on subsequent dose computation were examined in nine (9) patients with hip implants and two (2) patients with breast tissue expanders. Results: Our study characterized water at 1000 HU with a standard deviation (SD) of about 20 HU. The HVHs allowed us to evaluate how the presence of metal changed the HU spread. For example, introducing a 2.54 cm diameter copper rod in water increased the SD in HU of the surrounding water from 20 to 209, representing an increase in artifacts. Subsequent use of OMAR brought the SD down to 78. Aluminum produced the least artifacts, whereas iron showed the largest amount of artifacts. In general, an increase in kVp and mA during CT scanning improved the effectiveness of OMAR in reducing artifacts. Our dose analysis showed that some isodose contours shifted by several mm with OMAR, but infrequently, and the shifts were nonsignificant in the planning process. Computed volumes of various dose levels showed <2% change. Conclusions: In our experience, OMAR software greatly reduced the metal-induced CT artifacts for the majority of patients with implants, thereby improving our ability to delineate tumor and surrounding organs. OMAR had a clinically negligible effect on computed dose within tissues. Partially funded by unrestricted
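The artifact metric used above, the standard deviation of HU in a water region, is easy to reproduce from the reported numbers. A minimal sketch; the percentage-reduction measure is our illustration from the abstract's figures (SD 20 artifact-free, 209 with a copper rod, 78 after OMAR), not a quantity the paper defines:

```python
import math

def hu_spread(roi_values):
    """Mean and population SD of Hounsfield Units in a region of interest;
    the SD is the artifact metric (water ~20 HU SD when artifact-free here)."""
    n = len(roi_values)
    mean = sum(roi_values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in roi_values) / n)
    return mean, sd

def artifact_reduction_pct(sd_before, sd_after, sd_baseline=20.0):
    """Fraction of the metal-induced *excess* spread removed, relative to the
    artifact-free baseline SD of water (hypothetical summary measure)."""
    excess_before = sd_before - sd_baseline
    excess_after = sd_after - sd_baseline
    return 100.0 * (1.0 - excess_after / excess_before)
```

Plugging in the copper-rod numbers, OMAR removes roughly two thirds of the excess HU spread, consistent with the qualitative conclusion above.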

  18. Reflight certification software design specifications

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The PDSS/IMC Software Design Specification for the Payload Development Support System (PDSS)/Image Motion Compensator (IMC) is contained. The PDSS/IMC is to be used for checkout and verification of the IMC flight hardware and software by NASA/MSFC.

  19. Software reengineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III

    1991-01-01

    Today's software systems generally use obsolete technology, are not integrated properly with other software systems, and are difficult and costly to maintain. The discipline of reverse engineering is becoming prominent as organizations try to move their systems up to more modern and maintainable technology in a cost effective manner. JSC created a significant set of tools to develop and maintain FORTRAN and C code during development of the Space Shuttle. This tool set forms the basis for an integrated environment to re-engineer existing code into modern software engineering structures which are then easier and less costly to maintain and which allow a fairly straightforward translation into other target languages. The environment will support these structures and practices even in areas where the language definition and compilers do not enforce good software engineering. The knowledge and data captured using the reverse engineering tools is passed to standard forward engineering tools to redesign or perform major upgrades to software systems in a much more cost effective manner than using older technologies. A beta version of the environment was released in Mar. 1991. The commercial potential for such re-engineering tools is very great. CASE TRENDS magazine reported it to be the primary concern of over four hundred of the top MIS executives.

  20. Antiterrorist Software

    NASA Technical Reports Server (NTRS)

    Clark, David A.

    1998-01-01

    In light of the escalation of terrorism, the Department of Defense spearheaded the development of new antiterrorist software for all Government agencies by issuing a Broad Agency Announcement to solicit proposals. This Government-wide competition resulted in a team that includes NASA Lewis Research Center's Computer Services Division, who will develop the graphical user interface (GUI) and test it in their usability lab. The team launched a program entitled Joint Sphere of Security (JSOS), crafted a design architecture (see the following figure), and is testing the interface. This software system has a state-of-the-art, object-oriented architecture, with a main kernel composed of the Dynamic Information Architecture System (DIAS) developed by Argonne National Laboratory. DIAS will be used as the software "breadboard" for assembling the components of explosions, such as blast and collapse simulations.

  1. Integrating NASA's Land Analysis System (LAS) image processing software with an appropriate Geographic Information System (GIS): A review of candidates in the public domain

    NASA Technical Reports Server (NTRS)

    Rochon, Gilbert L.

    1989-01-01

    A user requirements analysis (URA) was undertaken to determine an appropriate public domain Geographic Information System (GIS) software package for potential integration with NASA's LAS (Land Analysis System) 5.0 image processing system. The necessity for a public domain system was underscored by the perceived need for source code access and flexibility in tailoring the GIS system to the needs of a heterogeneous group of end-users, and by specific constraints imposed by LAS and its user interface, the Transportable Applications Executive (TAE). Subsequently, a review was conducted of a variety of public domain GIS candidates, including GRASS 3.0, MOSS, IEMIS, and two university-based packages, IDRISI and KBGIS. The review method was a modified version of the GIS evaluation process developed by the Federal Interagency Coordinating Committee on Digital Cartography. One IEMIS-derivative product, the ALBE (AirLand Battlefield Environment) GIS, emerged as the most promising candidate for integration with LAS. IEMIS (Integrated Emergency Management Information System) was developed by the Federal Emergency Management Agency (FEMA). ALBE GIS is currently under development at the Pacific Northwest Laboratory under contract with the U.S. Army Corps of Engineers' Engineering Topographic Laboratory (ETL). Accordingly, recommendations are offered with respect to a potential LAS/ALBE GIS linkage and with respect to further system enhancements, including coordination with the development of the Spatial Analysis and Modeling System (SAMS) GIS within the IDM (Intelligent Data Management) effort at Goddard's National Space Science Data Center.

  2. FMT (Flight Software Memory Tracker) For Cassini Spacecraft-Software Engineering Using JAVA

    NASA Technical Reports Server (NTRS)

    Kan, Edwin P.; Uffelman, Hal; Wax, Allan H.

    1997-01-01

    The software engineering design of the Flight Software Memory Tracker (FMT) Tool is discussed in this paper. FMT is a ground analysis software set, consisting of utilities and procedures, designed to track the flight software, i.e., images of memory load and updatable parameters of the computers on-board Cassini spacecraft. FMT is implemented in Java.

  3. [Software version and medical device software supervision].

    PubMed

    Peng, Liang; Liu, Xiaoyan

    2015-01-01

    The importance of the software version in medical device software supervision does not receive enough attention at present. First, the role of the software version in medical device software supervision is discussed, and then the necessity of the software version in supervision is analyzed on the basis of common misunderstandings of software versioning. Finally, concrete suggestions are proposed on software version naming rules, software version supervision for software in medical devices, and a software version supervision scheme.

  4. Gamma ray induced DNA damage in human and mouse leucocytes measured by SCGE-Pro: a software developed for automated image analysis and data processing for Comet assay.

    PubMed

    Chaubey, R C; Bhilwade, H N; Rajagopalan, R; Bannur, S V

    2001-02-20

    The studies reported in this communication had two major objectives: first, to validate the in-house developed SCGE-Pro, a software package for automated image analysis and data processing for the Comet assay, using human peripheral blood leucocytes exposed to radiation doses (viz. 2, 4 and 8 Gy) that are known to produce DNA/chromosome damage in the alkaline Comet assay. The second objective was to investigate the effect of gamma radiation on DNA damage in mouse peripheral blood leucocytes using identical doses and experimental conditions, e.g. lysis, electrophoretic conditions and duration of electrophoresis, which are known to affect the tail moment (TM) and tail length (TL) of comets. Human and mouse whole blood samples were irradiated with different doses of gamma rays (2, 4 and 8 Gy) at a dose rate of 0.668 Gy/min between 0 and 4 degrees C in air. After lysis, cells were electrophoresed under alkaline conditions at pH 13, washed and stained with propidium iodide. Images of the cells were acquired and analyzed using the in-house developed imaging software, SCGE-Pro, for the Comet assay. For each comet, total fluorescence, tail fluorescence and tail length were measured. An increase in TM and TL was considered the criterion of DNA damage. Analysis of the data revealed heterogeneity in the response of leucocytes to gamma-ray-induced DNA damage both in humans and in mice. A wide variation in TM and TL was observed in the control and irradiated groups of all three donors. Data were analyzed for statistical significance using one-way ANOVA. Though a small variation in the basal level of TM and TL was observed amongst human and mouse controls, the differences were not statistically significant. A dose-dependent increase in TM (P<0.001) and TL (P<0.001) was obtained at all the radiation doses (2-8 Gy) in both human and mouse leucocytes. However, there was a difference in the nature of the dose response curves for human and mouse leucocytes.
In human leucocytes, a linear increase in TM
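The tail moment used as the damage criterion above is commonly computed as the tail's share of total fluorescence multiplied by the tail length. A minimal sketch on a 1-D comet intensity profile, assuming the head/tail boundary has already been found by the image-analysis step (SCGE-Pro's actual computation may differ):

```python
def tail_moment(profile, head_end):
    """Tail moment = (tail fluorescence / total fluorescence) * tail length.
    `profile` holds fluorescence summed per pixel column, head at index 0;
    `head_end` is the first column counted as tail (assumed already segmented)."""
    total = sum(profile)
    tail = profile[head_end:]
    tail_fluor = sum(tail)
    tail_length = len(tail)          # in pixel columns
    return (tail_fluor / total) * tail_length
```

A dose-dependent increase in this quantity across the 2-8 Gy groups is the pattern the abstract reports for both human and mouse leucocytes.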

  5. Strategy for the lowering and the assessment of exposure to nanoparticles at workspace - Case of study concerning the potential emission of nanoparticles of Lead in an epitaxy laboratory

    NASA Astrophysics Data System (ADS)

    Artous, Sébastien; Zimmermann, Eric; Douissard, Paul-Antoine; Locatelli, Dominique; Motellier, Sylvie; Derrough, Samir

    2015-05-01

    The implementation of manufactured nanoparticles in many products is growing fast and raises new questions. For this purpose, the CEA - NanoSafety Platform is developing various research topics on health and safety, the environment and nanoparticle exposure in professional activities. Optimising containment to lower exposure, and then assessing exposure to nanoparticles, is a strategy for improving safety at the workplace and workspace. The lowering step consists of an optimisation of dynamic and static containment at the workplace and/or workspace. Generally, the exposure risk due to the presence of nanoparticle substances does not allow modifying the parameters of containment at the workplace and/or workspace. Therefore, gaseous or nanoparticulate tracers are used to evaluate the performance of containment. Using a tracer allows the parameters of the dynamic containment (ventilation, flow, speed) to be modified safely and several configurations of static containment to be studied. Moreover, a tracer allows simulating an accidental or incidental situation. As a result, a safety procedure can be written more easily in order to manage this type of situation. The step of measurement and characterization of aerosols can then be used to assess the exposure at the workplace and workspace. The case study, the subject of this paper, concerns the potential emission of lead nanoparticles at the exhaust of a furnace in an epitaxy laboratory. The use of a helium tracer to evaluate the performance of containment is studied first. Secondly, the exposure assessment is characterised in accordance with the French guide “Recommendations for characterizing potential emissions and exposure to aerosols released from nanomaterials in workplace operations”. Thirdly, the aerosols are sampled, in several places, using collection membranes to try to detect traces of lead in the air.

  6. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1990-01-01

    Reviewed are six software packages for Apple and/or IBM computers. Included are "Autograph," "The New Game Show," "Science Probe-Earth Science," "Pollution Patrol," "Investigating Plant Growth," and "AIDS: The Investigation." Discussed are the grade level, function, availability, cost, and hardware requirements of each. (CW)

  7. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1989

    1989-01-01

    Reviews of seven software packages are presented including "The Environment I: Habitats and EcoSystems; II Cycles and Interactions"; "Super Sign Maker"; "The Great Knowledge Race: Substance Abuse"; "Exploring Science: Temperature"; "Fast Food Calculator and RD Aide"; "The Human Body: Circulation and Respiration" and "Forces in Liquids and Gases."…

  8. Star Software.

    ERIC Educational Resources Information Center

    Kloza, Brad

    2000-01-01

    Presents a collection of computer software programs designed to spark learning enthusiasm at every grade level and across the curriculum. They include Reader Rabbit's Learn to Read, Spelling Power, Mind Twister Math, Community Construction Kit, Breaking the Code, Encarta Africana 2000, Virtual Serengeti, Operation: Frog (Deluxe), and My First…

  9. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1988

    1988-01-01

    Reviews five software packages for use with school age children. Includes "Science Toolkit Module 2: Earthquake Lab"; "Adaptations and Identification"; "Geoworld"; "Body Systems II Series: The Blood System: A Liquid of Life," all for Apple II, and "Science Courseware: Life Science/Biology" for Apple II and IBM. (CW)

  10. Software Update.

    ERIC Educational Resources Information Center

    Currents, 2000

    2000-01-01

    A chart of 40 alumni-development database systems provides information on vendor/Web site, address, contact/phone, software name, price range, minimum suggested workstation/suggested server, standard reports/reporting tools, minimum/maximum record capacity, and number of installed sites/client type. (DB)

  11. Software Comparison

    NASA Technical Reports Server (NTRS)

    Blanchard, D. C.

    1986-01-01

    Software Comparison Package (SCP) compares similar files. Normally, these are 90-character files produced by CDC UPDATE utility from program libraries that contain FORTRAN source code plus identifier. SCP also used to compare load maps, cross-reference outputs, and UPDATE corrections sets. Helps wherever line-by-line comparison of similarly structured files required.
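    The core of such a tool, line-by-line comparison of similarly structured text files, can be sketched with Python's standard difflib module. This is a generic illustration, not the original CDC-era SCP:

    ```python
    # Minimal sketch of line-by-line file comparison in the spirit of SCP.
    # The original tool compared 90-character CDC UPDATE files; here we
    # diff two arbitrary sequences of text lines with stdlib difflib.
    import difflib

    def compare_files(lines_a, lines_b):
        """Return unified-diff lines showing where two line sequences differ."""
        return list(difflib.unified_diff(lines_a, lines_b,
                                         fromfile="old", tofile="new",
                                         lineterm=""))

    diff = compare_files(["A", "B", "C"], ["A", "X", "C"])
    for line in diff:
        print(line)
    ```

    In practice the two inputs would be read with `open(...).read().splitlines()`; difflib handles insertions and deletions as well as in-place changes.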

  12. Software Reviews.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1990

    1990-01-01

    Reviewed are two computer software packages: "Super Solvers Midnight Rescue!" a problem-solving program for IBM PCs; and "Interactive Physics," a simulation program for the Macintosh computer. The functions of the package are discussed including strengths and weaknesses and teaching suggestions. (CW)

  13. Software Reviews.

    ERIC Educational Resources Information Center

    Bitter, Gary G., Ed.

    1989-01-01

    Describes three software packages: (1) "MacMendeleev"--database/graphic display for chemistry, grades 10-12, Macintosh; (2) "Geometry One: Foundations"--geometry tutorial, grades 7-12, IBM; (3) "Mathematics Exploration Toolkit"--algebra and calculus tutorial, grades 8-12, IBM. (MVL)

  14. Software reengineering

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest M., III

    1991-01-01

    Programs in use today generally have all of the function and information processing capabilities required to do their specified job. However, older programs usually use obsolete technology, are not integrated properly with other programs, and are difficult to maintain. Reengineering is becoming a prominent discipline as organizations try to move their systems to more modern and maintainable technologies. The Johnson Space Center (JSC) Software Technology Branch (STB) is researching and developing a system to support reengineering older FORTRAN programs into more maintainable forms that can also be more readily translated to modern languages such as FORTRAN 8x, Ada, or C. This activity has led to the development of maintenance strategies for design recovery and reengineering. These strategies include a set of standards, methodologies, and the concepts for a software environment to support design recovery and reengineering. A brief description of the problem being addressed and the approach that is being taken by the STB toward providing an economic solution to the problem is provided. A statement of the maintenance problems, the benefits and drawbacks of three alternative solutions, and a brief history of the STB experience in software reengineering are followed by the STB's new FORTRAN standards, methodology, and the concepts for a software environment.

  15. Software Patents.

    ERIC Educational Resources Information Center

    Burke, Edmund B.

    1994-01-01

    Outlines basic patent law information that pertains to computer software programs. Topics addressed include protection in other countries; how to obtain patents; kinds of patents; duration; classes of patentable subject matter, including machines and processes; patentability searches; experimental use prior to obtaining a patent; and patent…

  16. Software Reviews.

    ERIC Educational Resources Information Center

    Mathematics and Computer Education, 1987

    1987-01-01

    Presented are reviews of several microcomputer software programs. Included are reviews of: (1) Microstat (Zenith); (2) MathCAD (MathSoft); (3) Discrete Mathematics (True Basic); (4) CALCULUS (True Basic); (5) Linear-Kit (John Wiley); and (6) Geometry Sensei (Broderbund). (RH)

  17. Software Reviews.

    ERIC Educational Resources Information Center

    Smith, Richard L., Ed.

    1988-01-01

    Reviews two software packages, "Solutions Unlimited" and "BASIC Data Base System." Provides a description, summary, strengths and weaknesses, availability and costs. Includes reviews of three structured BASIC packages: "True BASIC (2.0)"; "Turbo BASIC (1.0)"; and "QuickBASIC (3.0)." Explains significant features such as graphics, costs,…

  18. Reviews: Software.

    ERIC Educational Resources Information Center

    Mackenzie, Norma N.; And Others

    1988-01-01

    Reviews four computer software packages including: "The Physical Science Series: Sound" which demonstrates making waves, speed of sound, doppler effect, and human hearing; "Andromeda" depicting celestial motions in any direction; "Biology Quiz: Humans" covering chemistry, cells, viruses, and human biology; and "MacStronomy" covering information on…

  19. Reviews, Software.

    ERIC Educational Resources Information Center

    Science Teacher, 1988

    1988-01-01

    Reviews two software programs for Apple series computers. Includes "Orbital Mech," a basic planetary orbital simulation for the Macintosh, and "START: Stimulus and Response Tools for Experiments in Memory, Learning, Cognition, and Perception," a program that demonstrates basic psychological principles and experiments. (CW)

  20. Software Reviews.

    ERIC Educational Resources Information Center

    Teles, Elizabeth, Ed.; And Others

    1990-01-01

    Reviewed are two computer software packages for Macintosh microcomputers including "Phase Portraits," an exploratory graphics tool for studying first-order planar systems; and "MacMath," a set of programs for exploring differential equations, linear algebra, and other mathematical topics. Features, ease of use, cost, availability, and hardware…

  1. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1989-01-01

    Six software packages are described in this review. Included are "Molecules and Atoms: Exploring the Essence of Matter"; "Heart Probe"; "GM Sunraycer"; "Six Puzzles"; "Information Laboratory--Life Science"; and "Science Test Builder." Hardware requirements, prices, and a summary of the abilities of each program are presented. (CW)

  2. Software Reviews.

    ERIC Educational Resources Information Center

    Wulfson, Stephen, Ed.

    1989-01-01

    Presents comments by classroom teachers on software for science teaching including topics on: the size of a molecule, matter, leaves, vitamins and minerals, dinosaurs, and collecting and measuring data. Each is an Apple computer series. Availability and costs are included. (RT)

  3. Software Reviews.

    ERIC Educational Resources Information Center

    Smith, Richard L., Ed.

    1987-01-01

    Reviewed are three computer software programs: the Astronomer (astronomy program for middle school students and older); Hands-on-Statistics: Explorations with a Microcomputer (statistics program for secondary school students and older); and CATGEN (a genetics program for secondary school students and older). Each review provides information on:…

  4. Software Review.

    ERIC Educational Resources Information Center

    McGrath, Diane, Ed.

    1989-01-01

    Reviewed is a computer software package entitled "Audubon Wildlife Adventures: Grizzly Bears" for Apple II and IBM microcomputers. Included are availability, hardware requirements, cost, and a description of the program. The murder-mystery flavor of the program is stressed in this program that focuses on illegal hunting and game management. (CW)

  5. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1990

    1990-01-01

    Reviewed are seven computer software packages for IBM and/or Apple Computers. Included are "Windows on Science: Volume 1--Physical Science"; "Science Probe--Physical Science"; "Wildlife Adventures--Grizzly Bears"; "Science Skills--Development Programs"; "The Clean Machine"; "Rock Doctor"; and "Geology Search." Cost, quality, hardware, and…

  6. Software Reviews.

    ERIC Educational Resources Information Center

    McGrath, Diane, Ed.

    1989-01-01

    Reviewed are two computer software programs for Apple II computers on weather for upper elementary and middle school grades. "Weather" introduces the major factors (temperature, humidity, wind, and air pressure) affecting weather. "How Weather Works" uses simulation and auto-tutorial formats on sun, wind, fronts, clouds, and storms. (YP)

  7. Software Reviews.

    ERIC Educational Resources Information Center

    Bitter, Gary G., Ed.

    1989-01-01

    Reviews three software packages: (1) "The Weather Machine Courseware Kit" for grades 7-12; (2) "Exploring Measurement, Time, and Money--Level I," for primary level mathematics; and (3) "Professor DOS with SmartGuide for DOS" providing an extensive tutorial covering DOS 2.1 to 4.0. Discusses the strengths and weaknesses of each package. (YP)

  8. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1990

    1990-01-01

    Reviewed are six computer software packages including "Invisible Bugs," "Chaos Plus...," "The Botanist's Apprentice," "A Baby is Born," "Storyboard Plus-Version 2.0," and "Weather." Hardware requirements, functions, performance, and use in the classroom are discussed. (CW)

  9. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1988

    1988-01-01

    Reviews six software packages for use with school age children ranging from grade 3 to grade 12. Includes "The Microcomputer Based Lab Project: Motion, Sound"; "Genetics"; "Geologic History"; "The Microscope Simulator"; and "Wiz Works" all for Apple II and "Reading for Information: Level II" for IBM. (CW)

  10. Software Reviews.

    ERIC Educational Resources Information Center

    Mackenzie, Norma N.; And Others

    1988-01-01

    Describes computer software for use with various age groups. Topics include activities involving temperature, simulations, earth science, the circulatory system, human body, reading in science, and ecology. Provides information on equipment needed, availability, package contents, and price. Comments of reviews are presented by classroom teachers.…

  11. VICAR/IBIS Software System

    NASA Technical Reports Server (NTRS)

    Stanfill, Daniel F., IV; Girard, Michael A.

    1988-01-01

    Collection of programs provides extensive capabilities for manipulation of imagery and geographical data. VICAR/IBIS software system is combination of JPL VICAR (Video Image Communications and Retrieval System) image-processing system and JPL IBIS (Image Based Information System) geographic-information-management system. Provides user with extensive general-purpose image-processing capabilities, also information-management system for accepting, converting, and operating on vector (graphical) and tabular data. System used to perform various image processing functions on any sort of digitized image data, including such remotely sensed data as those from Landsat multispectral scanner.

  12. Space Station Software Issues

    NASA Technical Reports Server (NTRS)

    Voigt, S. (Editor); Beskenis, S. (Editor)

    1985-01-01

    Issues in the development of software for the Space Station are discussed. Software acquisition and management, software development environment, standards, information system support for software developers, and a future software advisory board are addressed.

  13. Scientific Software

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The Interactive Data Language (IDL), developed by Research Systems, Inc., is a tool for scientists to investigate their data without having to write a custom program for each study. IDL is based on the Mariners Mars spectral Editor (MMED) developed for studies from NASA's Mars spacecraft flights. The company has also developed Environment for Visualizing Images (ENVI), an image processing system for easily analyzing remotely sensed data written in IDL. The Visible Human CD, another Research Systems product, is the first complete digital reference of photographic images for exploring human anatomy.

  14. Analysis Software

    NASA Technical Reports Server (NTRS)

    1994-01-01

    General Purpose Boundary Element Solution Technology (GPBEST) software employs the boundary element method of mechanical engineering analysis, as opposed to finite element. It is, according to one of its developers, 10 times faster in data preparation and more accurate than other methods. Its use results in less expensive products because the time between design and manufacturing is shortened. A commercial derivative of a NASA-developed computer code, it is marketed by Best Corporation to solve problems in stress analysis, heat transfer, fluid analysis and yielding and cracking of solids. Other applications include designing tractor and auto parts, household appliances and acoustic analysis.

  15. Scheduling Software

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Advanced Scheduling Environment is a software product designed and marketed by AVYX, Inc. to provide scheduling solutions for complex manufacturing environments. It can be adapted to specific scheduling and manufacturing processes and has led to substantial cost savings. The system was originally developed for NASA use in scheduling Space Shuttle flights and satellite activities. AVYX, Inc. is an offshoot of a company formed to provide computer-related services to NASA. TREES-plus, the company's initial product, became the programming language for the Advanced Scheduling Environment system.

  16. Space Software

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Xontech, Inc.'s software package, XonVu, simulates the missions of Voyager 1 at Jupiter and Saturn, Voyager 2 at Jupiter, Saturn, Uranus and Neptune, and Giotto in close encounter with Comet Halley. With the program, the user can generate scenes of the planets, moons, stars or Halley's nucleus and tail as seen by Giotto, all graphically reproduced with high accuracy in wireframe representation. The program can be used on a wide range of computers, including PCs. User friendly and interactive, with many options, XonVu can be used by a space novice or a professional astronomer. With a companion user's manual, it sells for $79.

  17. Simulation Software

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Various NASA Small Business Innovation Research grants from Marshall Space Flight Center, Langley Research Center and Ames Research Center were used to develop the 'kernel' of COMCO's modeling and simulation software, the PHLEX finite element code. NASA needed it to model designs of flight vehicles; one of many customized commercial applications is UNISIM, a PHLEX-based code for analyzing underground flows in oil reservoirs for Texaco, Inc. COMCO's products simulate a computational mechanics problem, estimate the solution's error and produce the optimal hp-adapted mesh for the accuracy the user chooses. The system is also used as a research or training tool in universities and in mechanical design in industrial corporations.

  18. Seminar Software

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Society for Computer Simulation International is a professional technical society that distributes information on methodology techniques and uses of computer simulation. The society uses NETS, a NASA-developed program, to assist seminar participants in learning to use neural networks for computer simulation. NETS is a software system modeled after the human brain; it is designed to help scientists exploring artificial intelligence to solve pattern matching problems. Examples from NETS are presented to seminar participants, who can then manipulate, alter or enhance them for their own applications.

  19. Software system safety

    NASA Technical Reports Server (NTRS)

    Uber, James G.

    1988-01-01

    Software itself is not hazardous, but since software and hardware share common interfaces there is an opportunity for software to create hazards. Further, these software systems are complex, and proven methods for the design, analysis, and measurement of software safety are not yet available. Some past software failures, future NASA software trends, software engineering methods, and tools and techniques for various software safety analyses are reviewed. Recommendations to NASA are made based on this review.

  20. Going to where the users are! Making the collaborative resource management and science workspace mobile

    NASA Astrophysics Data System (ADS)

    Osti, D.; Osti, A.

    2013-12-01

    People are very busy today, and getting stakeholders the information they need is an important part of our jobs. The BDL application is the mobile extension of the California collaborative resource management portal www.baydeltalive.com. BDL has been visited by more than 250,000 unique visitors this past year from various areas of water use and management, including state and federal agencies, agriculture, scientists, policy makers, water consumers, voters, operations management, and more. The audience is a qualified user group of more than 15,000 individuals participating in California hydrological ecosystem science, water management, and policy. This is an important effort aimed at improving how scientists and policy makers work together to understand this complicated and divisive system and become better managers of it. The BayDeltaLive mobile application gives California watershed management stakeholders and the water-user community unprecedented access to real-time natural resource management information. The application provides users with the following: (1) access to real-time environmental conditions from the more than 600 California Data Exchange sensors, including hydrodynamic, water quality, and meteorological data, with the ability to save important stations as favorites for easy access later; (2) daily Delta operations data, including estimated hydrology, daily exports, status of infrastructure operations, reservoir storage, salvage data, major stations, drinking water quality reports, weather forecasts, and more; (3) photos, videos, and documents: browse and share from the more than 1000 current documents in the BDL library, including relevant images, videos, science journals, presentations, and articles; (4) science: access to the latest science articles, news, projects, and journals; (5) data visualizations: recently published real-time data interpolations of Delta conditions, from 30-day turbidity models to daily forecasts. This service is published as conditions

  1. PIV Data Validation Software Package

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities including (1) removal of spurious vector data, (2) filtering, smoothing, and interpolating of PIV data, and (3) calculations of out-of-plane vorticity, ensemble statistics, and turbulence statistics information. The software runs on an IBM PC/AT host computer working either under Microsoft Windows 3.1 or Windows 95 operating systems.
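    The out-of-plane vorticity calculation mentioned above can be sketched for a 2-D velocity field on a uniform grid. This NumPy version is an illustration of the underlying formula (omega_z = dv/dx - du/dy), not the package's own implementation; the array layout (y along rows, x along columns) is an assumption:

    ```python
    # Illustrative sketch of one PIV post-processing step: out-of-plane
    # vorticity from a 2-D velocity field on a uniform grid.
    # u, v are velocity components; dx, dy are grid spacings.
    import numpy as np

    def vorticity_z(u, v, dx=1.0, dy=1.0):
        """omega_z = dv/dx - du/dy via central differences."""
        dv_dx = np.gradient(v, dx, axis=1)   # x varies along columns
        du_dy = np.gradient(u, dy, axis=0)   # y varies along rows
        return dv_dx - du_dy

    # Sanity check: rigid-body rotation u = -y, v = x has vorticity 2.
    y, x = np.mgrid[0:5, 0:5].astype(float)
    w = vorticity_z(-y, x)
    ```

    The same stencil generalizes to the ensemble and turbulence statistics the package computes, which are averages over many such instantaneous fields.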

  2. Software Engineering Support of the Third Round of Scientific Grand Challenge Investigations: Earth System Modeling Software Framework Survey

    NASA Technical Reports Server (NTRS)

    Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn; Zukor, Dorothy (Technical Monitor)

    2002-01-01

    One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document surveys numerous software frameworks for potential use in Earth science modeling. Several frameworks are evaluated in depth, including Parallel Object-Oriented Methods and Applications (POOMA), Cactus (from the relativistic physics community), Overture, Goddard Earth Modeling System (GEMS), the National Center for Atmospheric Research Flux Coupler, and UCLA/UCB Distributed Data Broker (DDB). Frameworks evaluated in less detail include ROOT, Parallel Application Workspace (PAWS), and Advanced Large-Scale Integrated Computational Environment (ALICE). A host of other frameworks and related tools are referenced in this context. The frameworks are evaluated individually and also compared with each other.

  3. The Analysis of the Patterns of Radiation-Induced DNA Damage Foci by a Stochastic Monte Carlo Model of DNA Double Strand Breaks Induction by Heavy Ions and Image Segmentation Software

    NASA Technical Reports Server (NTRS)

    Ponomarev, Artem; Cucinotta, F.

    2011-01-01

    Purpose: To create a generalized mechanistic model of DNA damage in human cells that will generate analytical and image data corresponding to experimentally observed DNA damage foci and will help to improve the experimental foci yields by simulating spatial foci patterns and resolving problems with quantitative image analysis. Material and Methods: The analysis of patterns of RIFs (radiation-induced foci) produced by low- and high-LET (linear energy transfer) radiation was conducted by using a Monte Carlo model that combines the heavy ion track structure with characteristics of the human genome on the level of chromosomes. The foci patterns were also simulated in the maximum projection plane for flat nuclei. Some data analysis was done with the help of image segmentation software that identifies individual classes of RIFs and colocalized RIFs, which is of importance to some experimental assays that assign DNA damage a dual phosphorescent signal. Results: The model predicts the spatial and genomic distributions of DNA DSBs (double strand breaks) and associated RIFs in a human cell nucleus for a particular dose of either low- or high-LET radiation. We used the model to do analyses for different irradiation scenarios. In the beam-parallel-to-the-disk-of-a-flattened-nucleus scenario we found that the foci appeared to be merged due to their high density, while, in the perpendicular-beam scenario, the foci appeared as one bright spot per hit. The statistics and spatial distribution of regions of densely arranged foci, termed DNA foci chains, were predicted numerically using this model. Another analysis was done to evaluate the number of ion hits per nucleus, which were visible from streaks of closely located foci. In another analysis, our image segmentation software determined foci yields directly from images with single-class or colocalized foci. 
Conclusions: We showed that DSB clustering needs to be taken into account to determine the true DNA damage foci yield, which helps to

  4. Software for surface analysis

    NASA Astrophysics Data System (ADS)

    Watson, D. G.; Doern, F. E.

    1985-04-01

    Two software packages designed to aid in the analysis of digitally stored Secondary Ion Mass Spectrometric (SIMS) and electron spectroscopic data are described. The first, MASS, is a program that normalizes SIMS depth profiles and applies sensitivity coefficients to them. The second, DIP, is a digital image processor designed to enhance secondary, backscattered, and Auger electron spectroscopic (AES) maps. DIP can also provide quantitative area analysis of AES maps. The algorithms are currently optimized to handle data generated by Physical Electronics Industries data acquisition systems, but are generally applicable.
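    The kind of correction MASS applies can be sketched in a few lines. The relative sensitivity factor (RSF), matrix-ion normalization, and all numbers below are illustrative assumptions, not values or names from the package:

    ```python
    # Illustrative sketch: applying a relative sensitivity factor (RSF)
    # to raw SIMS depth-profile counts and normalizing by the matrix-ion
    # signal, a common SIMS quantification scheme. Values are made up.
    import numpy as np

    def apply_sensitivity(raw_counts, rsf, matrix_counts):
        """Concentration profile ~ impurity counts * RSF / matrix counts."""
        return raw_counts * rsf / matrix_counts

    depth_profile = np.array([100.0, 150.0, 120.0])  # impurity counts per cycle
    matrix = np.array([1.0e5, 1.0e5, 1.0e5])         # matrix-ion counts per cycle
    conc = apply_sensitivity(depth_profile, rsf=2.0e19, matrix_counts=matrix)
    ```

    Dividing by the matrix signal cancels instrumental drift between cycles, which is why the normalization precedes the sensitivity correction conceptually even though the arithmetic commutes.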

  5. Choosing Software for Children.

    ERIC Educational Resources Information Center

    Spencer, Mima

    This Digest points out characteristics of quality computer software for children, describes different kinds of software, and suggests ways to get software for preview. The need to consider the purpose for which the software is to be used and the degree to which the software meets its stated goals is noted. Desirable software characteristics and…

  6. Should software hold data hostage?

    SciTech Connect

    Wiley, H S.; Michaels, George S.

    2004-08-01

    development of facile user interfaces and robust environments. This is where some companies have provided real value to the community, building on the foundation of open source software. Outside of genomics and bioinformatics, there is still a critical need for software tools, particularly in areas such as imaging, biochemistry and cell signaling. The computer skills of investigators in these fields are generally more rudimentary, and thus the open source options are much more limited. Commercial software dominates these areas, but open source has the potential to contribute more in the future.

  7. The LSST Software Stack

    NASA Astrophysics Data System (ADS)

    Jenness, Timothy; LSST Data Management Team

    2016-01-01

    The Large Synoptic Survey Telescope (LSST) is an 8-m optical ground-based telescope being constructed on Cerro Pachon in Chile. LSST will survey half the sky every few nights in six optical bands. The data will be transferred to the data center in North America, where within 60 seconds they will be reduced using difference imaging and an alert list will be generated for the community. Additionally, annual data releases will be constructed from all the data gathered during the 10-year mission, producing catalogs and deep co-added images with unprecedented time resolution for such a large region of sky. In this paper we present the current status of the LSST stack, including the data processing components, the Qserv database, and the data visualization software; describe how to obtain it; and provide a summary of the development road map.

  8. Terra Harvest software architecture

    NASA Astrophysics Data System (ADS)

    Humeniuk, Dave; Klawon, Kevin

    2012-06-01

    Under the Terra Harvest Program, the DIA has the objective of developing a universal Controller for the Unattended Ground Sensor (UGS) community. The mission is to define, implement, and thoroughly document an open architecture that universally supports UGS missions, integrating disparate systems, peripherals, etc. The Controller's inherent interoperability with numerous systems enables the integration of both legacy and future UGS System (UGSS) components, while the design's open architecture supports rapid third-party development to ensure operational readiness. The successful accomplishment of these objectives by the program's Phase 3b contractors is demonstrated via integration of the companies' respective plug-'n'-play contributions that include controllers and various peripherals, such as sensors and cameras, together with their associated software drivers. In order to independently validate the Terra Harvest architecture, L-3 Nova Engineering, along with its partner, the University of Dayton Research Institute, is developing the Terra Harvest Open Source Environment (THOSE), a Java Virtual Machine (JVM) running on an embedded Linux Operating System. The Use Cases on which the software is developed support the full range of UGS operational scenarios, such as remote sensor triggering, image capture, and data exfiltration. The Team is additionally developing an ARM microprocessor-based evaluation platform that is both energy-efficient and operationally flexible. The paper describes the overall THOSE architecture, as well as the design decisions for some of the key software components. The development process for THOSE is discussed as well.

  9. PROMOTIONS: PROper MOTION Software

    NASA Astrophysics Data System (ADS)

    Caleb Wherry, John; Sahai, R.

    2009-05-01

    We report on the development of a software tool (PROMOTIONS) to streamline the process of measuring proper motions of material in expanding nebulae. Our tool makes use of IDL's widget programming capabilities to design a unique GUI that is used to compare images of the objects from two epochs. The software allows us to first orient and register the images to a common frame of reference and pixel scale, using field stars in each of the images. We then cross-correlate specific morphological features in order to determine their proper motions, which consist of the proper motion of the nebula as a whole (PM-neb), and expansion motions of the features relative to the center. If the central star is not visible (quite common in bipolar nebulae with dense dusty waists), point-symmetric expansion is assumed and we use the average motion of high-quality symmetric pairs of features on opposite sides of the nebular center to compute PM-neb. This is then subtracted out to determine the individual movements of these and additional features relative to the nebular center. PROMOTIONS should find wide applicability in measuring proper motions in astrophysical objects such as the expanding outflows/jets commonly seen around young and dying stars. We present first results from using PROMOTIONS to successfully measure proper motions in several pre-planetary nebulae (transition objects between the red giant and planetary nebula phases), using images taken 7-10 years apart with the WFPC2 and ACS instruments on board HST. The authors are grateful to NASA's Undergraduate Scholars Research Program (USRP) for supporting this research.
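    The cross-correlation step at the heart of such a workflow can be sketched in a few lines. This NumPy FFT version is a simplified stand-in for the IDL tool: it assumes integer-pixel shifts only, with no rotation or scale change, and is not the PROMOTIONS code itself:

    ```python
    # Illustrative sketch: estimating the pixel shift of a feature between
    # two epochs by FFT-based circular cross-correlation.
    import numpy as np

    def estimate_shift(img1, img2):
        """Integer (dy, dx) that best aligns img2 onto img1."""
        f = np.fft.fft2(img1) * np.conj(np.fft.fft2(img2))
        corr = np.fft.ifft2(f).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Map to signed shifts (the correlation peak may wrap around).
        if dy > img1.shape[0] // 2:
            dy -= img1.shape[0]
        if dx > img1.shape[1] // 2:
            dx -= img1.shape[1]
        return dy, dx

    # A point feature moved by (-3, +2) pixels between epochs.
    a = np.zeros((32, 32)); a[10, 12] = 1.0
    b = np.roll(np.roll(a, -3, axis=0), 2, axis=1)
    ```

    Real measurements refine the peak to sub-pixel precision (e.g. by fitting a paraboloid to the correlation surface) before converting shifts and epoch separation into a proper motion.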

  10. Software Model Of Software-Development Process

    NASA Technical Reports Server (NTRS)

    Lin, Chi Y.; Synott, Debra J.; Levary, Reuven R.

    1990-01-01

    Collection of computer programs constitutes software tool for simulation of medium- to large-scale software-development projects. Necessary to include easily identifiable and more-readily quantifiable characteristics like costs, times, and numbers of errors. Mathematical model incorporating these and other factors of dynamics of software-development process implemented in the Software Life Cycle Simulator (SLICS) computer program. Simulates dynamics of software-development process. In combination with input and output expert software systems and knowledge-based management software system, develops information for use in managing large software-development project. Intended to aid managers in planning, managing, and controlling software-development processes by reducing uncertainties in budgets, required personnel, and schedules.

  11. MORPH-II, a software package for the analysis of scanning-electron-micrograph images for the assessment of the fractal dimension of exposed stone surfaces

    USGS Publications Warehouse

    Mossotti, Victor G.; Eldeeb, A. Raouf

    2000-01-01

    Turcotte (1997) and Barton and La Pointe (1995) have identified many potential uses for the fractal dimension in physicochemical models of surface properties. The image-analysis program described in this report is an extension of the program set MORPH-I (Mossotti and others, 1998), which provided the fractal analysis of electron-microscope images of pore profiles (Mossotti and Eldeeb, 1992). MORPH-II, an integration of the modified kernel of the program MORPH-I with image calibration and editing facilities, was designed to measure the fractal dimension of the exposed surfaces of stone specimens as imaged in cross section in an electron microscope.
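
    The report does not reproduce MORPH-II's kernel here, but the standard box-counting estimate of fractal dimension that such tools implement can be sketched as follows; the function name and test image are illustrative, not taken from MORPH-II.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a binary image:
    count occupied boxes N(s) at each box size s and fit
    log N(s) = -D log s + c."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # Trim so the image tiles evenly, then count boxes containing any pixel
        trimmed = mask[:h - h % s, :w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a straight line should have dimension close to 1
img = np.zeros((64, 64), dtype=bool)
img[32, :] = True
d = box_counting_dimension(img)
```

    A rough stone-surface profile would yield a dimension between 1 (smooth curve) and 2 (plane-filling), which is the quantity of interest in the surface-property models cited above.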

  12. IRCAMDR: IRCAM3 Data Reduction Software

    NASA Astrophysics Data System (ADS)

    Aspin, Colin; McCaughrean, Mark; Bridger, Alan B.; Baines, Dave; Beard, Steven; Chan, S.; Giddings, Jack; Hartley, K. F.; Horsfield, A. P.; Kelly, B. D.; Emerson, J. P.; Currie, Malcolm J.; Economou, Frossie

    2014-06-01

    The UKIRT IRCAM3 data reduction and analysis software package, IRCAMDR (formerly ircam_clred) analyzes and displays any 2D data image stored in the standard Starlink (ascl:1110.012) NDF data format. It reduces and analyzes IRCAM1/2 data images of 62x58 pixels and IRCAM3 images of 256x256 size. Most of the applications will work on NDF images of any physical (pixel) dimensions, for example, 1024x1024 CCD images can be processed.

  13. Software attribute visualization for high integrity software

    SciTech Connect

    Pollock, G.M.

    1998-03-01

    This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.

  14. Lack of exposure to natural light in the workspace is associated with physiological, sleep and depressive symptoms.

    PubMed

    Harb, Francine; Hidalgo, Maria Paz; Martau, Betina

    2015-04-01

    The diurnal light cycle has a crucial influence on all life on earth. Unfortunately, modern society has modified this life-governing cycle by stressing maximum production and by giving insufficient attention to the ecological balance and homeostasis of the human metabolism. The aim of this study is to evaluate the effects of exposure or lack of exposure to natural light in a rest/activity rhythm on cortisol and melatonin levels, as well as on psychological variables, in humans under natural conditions. This is a cross-sectional study. The subjects were split into two groups according to their workspace (10 employees in the "with window" group and 10 in the "without window" group). All participants were women and wore an actigraph (Actiwatch 2, Philips Respironics), which measures activity and ambient light exposure, for seven days. Concentrations of melatonin and cortisol were measured from saliva samples. Participants were instructed to collect saliva during the last day of use of the actigraph at 08:00 am, 4:00 pm and 10:00 pm. The subjects answered the Self-Reporting Questionnaire-20 (SRQ-20) to measure the presence of minor psychiatric disorders; the Montgomery-Asberg (MA) scale was used to measure depression symptoms, and the Pittsburgh Sleep Quality Index questionnaire (PSQI) was used to evaluate the quality of sleep. The Rayleigh analysis indicates that the two groups, "with window" and "without window", exhibited similar activity and light acrophases. In relation to light exposure, the mesor was significantly higher (t = -2.651, p = 0.023) in the "with window" group (191.04 ± 133.36) than in the "without window" group (73.8 ± 42.05). Additionally, the "with window" group presented the highest amplitude of light exposure (298.07 ± 222.97). Cortisol levels were significantly different between the groups at 10:00 pm (t = 3.009, p = 0.008; "without window" (4.01 ± 0.91) vs. "with window" (3.10 ± 0.30)). In
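
    The mesor, amplitude, and acrophase reported above are the parameters of a standard single-component cosinor fit, y = M + A·cos(ωt − φ), which reduces to linear least squares once rewritten with cosine and sine regressors. A sketch under that assumption (not the authors' code; the light-exposure numbers are invented for illustration):

```python
import numpy as np

def cosinor(t_hours, y, period=24.0):
    """Single-component cosinor fit: y = M + A*cos(w*t - phi).
    Rewriting as M + b1*cos(w*t) + b2*sin(w*t) makes the fit linear."""
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours), np.sin(w * t_hours)])
    mesor, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(b1, b2)
    acrophase_h = (np.arctan2(b2, b1) / w) % period  # time of peak, in hours
    return mesor, amplitude, acrophase_h

# Synthetic light-exposure rhythm: mesor 150 lux, amplitude 100, peak at 14:00
t = np.arange(0, 168, 0.5)  # one week, half-hour actigraph sampling
y = 150 + 100 * np.cos(2 * np.pi / 24 * (t - 14))
M, A, phi = cosinor(t, y)
```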

  15. Global Software Engineering: A Software Process Approach

    NASA Astrophysics Data System (ADS)

    Richardson, Ita; Casey, Valentine; Burton, John; McCaffery, Fergal

    Our research has shown that many companies are struggling with the successful implementation of global software engineering, due to temporal, cultural and geographical distance, which causes a range of factors to come into play. For example, cultural, project management and communication difficulties continually cause problems for software engineers and project managers. While the implementation of efficient software processes can be used to improve the quality of the software product, published software process models do not cater explicitly for the recent growth in global software engineering. Our thesis is that global software engineering factors should be included in software process models to ensure their continued usefulness in global organisations. Based on extensive global software engineering research, we have developed a software process, Global Teaming, which includes specific practices and sub-practices. The purpose is to ensure that requirements for successful global software engineering are stipulated so that organisations can ensure successful implementation of global software engineering.

  16. [The Development of a Normal Database of Elderly People for Use with the Statistical Analysis Software Easy Z-score Imaging System with 99mTc-ECD SPECT].

    PubMed

    Nemoto, Hirobumi; Iwasaka, Akemi; Hashimoto, Shingo; Hara, Tadashi; Nemoto, Kiyotaka; Asada, Takashi

    2015-11-01

    We created a new normal database of elderly individuals (Tsukuba-NDB) for the easy Z-score Imaging System (eZIS), a statistical imaging analysis software package; the database comprises 44 healthy individuals aged 75 to 89 years. The Tsukuba-NDB was compared with a conventional NDB (Musashi-NDB) using Statistical Parametric Mapping (SPM8), eZIS analysis, mean images, standard deviation (SD) images, SD values, and specific volume of interest analysis (SVA). Furthermore, the association of the mean cerebral blood flow (mCBF) with various clinical indicators was statistically analyzed. A group comparison using SPM8 indicated that the t-value of the Tsukuba-NDB was lower in the frontoparietal region but tended to be higher in the bilateral temporal lobes and the base of the brain than that of the Musashi-NDB. The results of eZIS analysis by Musashi-NDB in 48 subjects indicated the presence of mild decreases in cerebral blood flow in the bilateral frontoparietal lobes of 9 subjects, the precuneus and posterior cingulate gyrus of 5 subjects, the lingual gyrus of 4 subjects, and near the left frontal gyrus, temporal lobe, superior temporal gyrus, and lenticular nucleus of 12 subjects. The mean images showed no visual differences between the two NDBs. The SD image intensities and SD values were lower in the Tsukuba-NDB. Clinical case comparison and visual evaluation demonstrated that the sites of decreased blood flow were more clearly indicated by the Tsukuba-NDB. Furthermore, mCBF was 40.87 ± 0.52 ml/100 g/min (mean ± SE) and tended to decrease with age; the tendency was stronger in male subjects than in female subjects. Among the various clinical indicators, platelet count was statistically significantly correlated with CBF. In conclusion, our results suggest that the Tsukuba-NDB, which is incorporated into the statistical imaging analysis software eZIS, is sensitive to changes in cerebral blood flow caused by cranial nerve disease, dementia and cerebrovascular accidents, and can provide precise

  18. Analytical approaches to image orientation and stereo digitization applied in the Bundlab software. (Polish Title: Rozwiazania analityczne zwiazane z obsluga procesu orientacji zdjec oraz wykonywaniem opracowan wektorowych w programie Bundlab)

    NASA Astrophysics Data System (ADS)

    Kolecki, J.

    2015-12-01

    The Bundlab software has been developed mainly for academic and research applications. This work can be treated as a kind of report describing the current state of development of this computer program, focusing especially on the analytical solutions. Firstly, the overall characteristics of the software are provided. Then the image orientation procedure is described, starting from relative orientation. The applied solution is based on the coplanarity equation parametrized with the essential matrix. The problem is reformulated so that it can be solved using methods of algebraic geometry. The solution is followed by optimization using the least-squares criterion. The formation of the image block from the oriented models, as well as the absolute orientation procedure, was implemented using the Horn approach as the base algorithm. The second part of the paper is devoted to the tools and methods applied in the stereo digitization module. The solutions that support the user and improve accuracy are described. Within the paper a few exemplary applications and products are mentioned. The work finishes with concepts for the development and improvement of existing functions.
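
    The relative-orientation step, solving the coplanarity constraint parametrized by the essential matrix, can be illustrated with the classical linear (eight-point) estimate followed by projection onto the essential manifold. This is a sketch of the standard textbook method, not Bundlab's algebraic-geometry solver; the function name and synthetic two-camera data are invented for the example.

```python
import numpy as np

def estimate_essential(x1, x2):
    """Linear estimate of the essential matrix E from >= 8 normalized image
    correspondences satisfying the coplanarity constraint x2^T E x1 = 0,
    then projection onto the essential manifold (two equal singular
    values, one zero)."""
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)              # null-space (coplanarity) solution
    U, s, Vt = np.linalg.svd(E)
    s_avg = (s[0] + s[1]) / 2.0
    return U @ np.diag([s_avg, s_avg, 0.0]) @ Vt

# Synthetic check: 3-D points seen by two cameras related by a pure translation
rng = np.random.default_rng(0)
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(12, 3))
t = np.array([0.5, 0.0, 0.0])             # camera baseline
x1 = X[:, :2] / X[:, 2:]                  # normalized projection, camera 1
x2 = (X - t)[:, :2] / (X - t)[:, 2:]      # normalized projection, camera 2
E = estimate_essential(x1, x2)
```

    In a production pipeline such as the one described, this linear solution would only seed the subsequent least-squares optimization over the orientation parameters.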

  19. Software Design for Smile Analysis

    PubMed Central

    Sodagar, A.; Rafatjoo, R.; Gholami Borujeni, D.; Noroozi, H.; Sarkhosh, A.

    2010-01-01

    Introduction: Esthetics and attractiveness of the smile is one of the major demands in contemporary orthodontic treatment. In order to improve a smile design, it is necessary to record a “posed smile” as an intentional, non-pressure, static, natural and reproducible smile. The record then should be analyzed to determine its characteristics. In this study, we intended to design and introduce software to analyze the smile rapidly and precisely in order to produce an attractive smile for the patients. Materials and Methods: For this purpose, a practical study was performed to design the multimedia software “Smile Analysis”, which can receive patients’ photographs and videographs. After giving records to the software, the operator should mark the points and lines which are displayed on the system’s guide and also define the correct scale for each image. Thirty-three variables are measured by the software and displayed on the report page. Reliability of measurements in both image and video was significantly high (α=0.7–1). Results: In order to evaluate intra-operator and inter-operator reliability, five cases were selected randomly. Statistical analysis showed that calculations performed in the smile analysis software were both valid and highly reliable (for both video and photo). Conclusion: The results obtained from smile analysis could be used in diagnosis, treatment planning and evaluation of the treatment progress. PMID:21998792

  20. Report: Scientific Software.

    ERIC Educational Resources Information Center

    Borman, Stuart A.

    1985-01-01

    Discusses various aspects of scientific software, including evaluation and selection of commercial software products; program exchanges, catalogs, and other information sources; major data analysis packages; statistics and chemometrics software; and artificial intelligence. (JN)

  1. Space Flight Software Development Software for Intelligent System Health Management

    NASA Technical Reports Server (NTRS)

    Trevino, Luis C.; Crumbley, Tim

    2004-01-01

    The slide presentation examines the Marshall Space Flight Center Flight Software Branch, including software development projects, mission critical space flight software development, software technical insight, advanced software development technologies, and continuous improvement in the software development processes and methods.

  2. Software Engineering Guidebook

    NASA Technical Reports Server (NTRS)

    Connell, John; Wenneson, Greg

    1993-01-01

    The Software Engineering Guidebook describes SEPG (Software Engineering Process Group) supported processes and techniques for engineering quality software in NASA environments. Three process models are supported: structured, object-oriented, and evolutionary rapid-prototyping. The guidebook covers software life-cycles, engineering, assurance, and configuration management. The guidebook is written for managers and engineers who manage, develop, enhance, and/or maintain software under the Computer Software Services Contract.

  3. Revision and product generation software

    USGS Publications Warehouse

    ,

    1997-01-01

    The U.S. Geological Survey (USGS) developed revision and product generation (RevPG) software for updating digital line graph (DLG) data and producing maps from such data. This software is based on ARC/INFO, a geographic information system from Environmental Systems Resource Institute (ESRI). RevPG consists of ARC/INFO Arc Macro Language (AML) programs, C routines, and interface menus that permit operators to collect vector data using aerial images, to symbolize the data on-screen, and to produce plots and color-separated files for use in printing maps.

  4. Revision and Product Generation Software

    USGS Publications Warehouse

    ,

    1999-01-01

    The U.S. Geological Survey (USGS) developed revision and product generation (RevPG) software for updating digital line graph (DLG) data and producing maps from such data. This software is based on ARC/INFO, a geographic information system from Environmental Systems Resource Institute (ESRI). RevPG consists of ARC/INFO Arc Macro Language (AML) programs, C routines, and interface menus that permit operators to collect vector data using aerial images, to symbolize the data onscreen, and to produce plots and color-separated files for use in printing maps.

  5. Software Configuration Management Guidebook

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The growth in cost and importance of software to NASA has caused NASA to address the improvement of software development across the agency. One of the products of this program is a series of guidebooks that define a NASA concept of the assurance processes which are used in software development. The Software Assurance Guidebook, SMAP-GB-A201, issued in September, 1989, provides an overall picture of the concepts and practices of NASA in software assurance. Lower level guidebooks focus on specific activities that fall within the software assurance discipline, and provide more detailed information for the manager and/or practitioner. This is the Software Configuration Management Guidebook which describes software configuration management in a way that is compatible with practices in industry and at NASA Centers. Software configuration management is a key software development process, and is essential for doing software assurance.

  6. Method comparison of automated matching software-assisted cone-beam CT and stereoscopic kilovoltage x-ray positional verification image-guided radiation therapy for head and neck cancer: a prospective analysis

    NASA Astrophysics Data System (ADS)

    Fuller, Clifton D.; Scarbrough, Todd J.; Sonke, Jan-Jakob; Rasch, Coen R. N.; Choi, Mehee; Ting, Joe Y.; Wang, Samuel J.; Papanikolaou, Niko; Rosenthal, David I.

    2009-12-01

    We sought to characterize interchangeability and agreement between cone-beam computed tomography (CBCT) and digital stereoscopic kV x-ray (KVX) acquisition, two methods of isocenter positional verification currently used for IGRT of head and neck cancers (HNC). A cohort of 33 patients was near-simultaneously imaged by in-room KVX and CBCT. KVX and CBCT shifts were suggested using manufacturer software for the lateral (X), vertical (Y) and longitudinal (Z) dimensions. Intra-method repeatability, systematic and random error components were calculated for each imaging modality, as were recipe-based PTV expansion margins. Inter-method agreement in each axis was compared using limits of agreement (LOA) methodology, concordance analysis and orthogonal regression. 100 daily positional assessments were performed before daily therapy in 33 patients with head and neck cancer. Systematic error was greater for CBCT in all axes, with larger random error components in the Y- and Z-axis. Repeatability ranged from 9 to 14 mm for all axes, with CBCT showing greater repeatability in 2/3 axes. LOA showed paired shifts to agree 95% of the time within ±11.3 mm in the X-axis, ±9.4 mm in the Y-axis and ±5.5 mm in the Z-axis. Concordance ranged from 'mediocre' to 'satisfactory'. Proportional bias was noted between paired X- and Z-axis measures, with a constant bias component in the Z-axis. Our data suggest non-negligible differences in software-derived CBCT and KVX image-guided directional shifts using formal method comparison statistics. A correction was made to the first line of page 7404 of this article on 26 November 2009. The corrected electronic version is identical to the print version.
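
    The limits-of-agreement computation used here is the standard Bland-Altman construction: the bias (mean paired difference) plus or minus 1.96 standard deviations of the differences. A minimal sketch, with made-up paired shift values rather than the study's data:

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between two paired
    measurement methods: mean difference (bias) +/- 1.96 SD."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)   # sample SD of the paired differences
    return bias - half_width, bias + half_width

# Illustrative paired X-axis shifts (mm) from two verification systems
cbct = [1.2, -0.4, 3.1, 0.8, -1.5, 2.2]
kvx  = [0.9, -0.1, 2.5, 1.4, -2.0, 1.8]
lo, hi = limits_of_agreement(cbct, kvx)
```

    Two methods are considered interchangeable only if the resulting interval is narrower than the clinically acceptable positional tolerance, which is the criterion the ±11.3/±9.4/±5.5 mm figures above are judged against.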

  7. Software productivity improvement through software engineering technology

    NASA Technical Reports Server (NTRS)

    Mcgarry, F. E.

    1985-01-01

    It has been estimated that NASA expends anywhere from 6 to 10 percent of its annual budget on the acquisition, implementation and maintenance of computer software. Although researchers have produced numerous software engineering approaches over the past 5-10 years, each claiming to be more effective than the others, there is very limited quantitative information verifying the measurable impact that any of these technologies may have in a production environment. At NASA/GSFC, an extended research effort aimed at identifying and measuring software techniques that favorably impact the productivity of software development has been active over the past 8 years. Specific, measurable software development technologies have been applied and measured in a production environment. The resulting software development approaches have been shown to be effective in improving both quality and productivity in this one environment.

  8. Software distribution using xnetlib

    SciTech Connect

    Dongarra, J.J.; Rowan, T.H.; Wade, R.C.

    1993-06-01

    Xnetlib is a new tool for software distribution. Whereas its predecessor netlib uses e-mail as the user interface to its large collection of public-domain mathematical software, xnetlib uses an X Window interface and socket-based communication. Xnetlib makes it easy to search through a large distributed collection of software and to retrieve requested software in seconds.

  9. Agile Software Development

    ERIC Educational Resources Information Center

    Biju, Soly Mathew

    2008-01-01

    Many software development firms are now adopting the agile software development method. This method involves the customer at every level of software development, thus reducing the impact of change in the requirement at a later stage. In this article, the principles of the agile method for software development are explored and there is a focus on…

  10. Software Formal Inspections Standard

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This Software Formal Inspections Standard (hereinafter referred to as Standard) is applicable to NASA software. This Standard defines the requirements that shall be fulfilled by the software formal inspections process whenever this process is specified for NASA software. The objective of this Standard is to define the requirements for a process that inspects software products to detect and eliminate defects as early as possible in the software life cycle. The process also provides for the collection and analysis of inspection data to improve the inspection process as well as the quality of the software.

  11. Responsibility for unreliable software

    SciTech Connect

    Wahl, N.J.

    1994-12-31

    Unreliable software exposes software developers and distributors to legal risks. Under certain circumstances, the developer and distributor of unreliable software can be sued. To avoid lawsuits, software developers should do the following: determine what the risks are, understand the extent of the risks, and identify ways of avoiding the risks and lessening their consequences. Liability issues associated with unreliable software are explored in this article.

  12. Imaging sciences workshop

    SciTech Connect

    Candy, J.V.

    1994-11-15

    This workshop on the imaging sciences, sponsored by Lawrence Livermore National Laboratory, contains short abstracts/articles submitted by speakers. The topic areas covered include the following: astronomical imaging; biomedical imaging; vision/image display; imaging hardware; imaging software; acoustic/oceanic imaging; microwave/acoustic imaging; computed tomography; physical imaging; and imaging algorithms. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  13. Segmentation of cervical cell nuclei in high-resolution microscopic images: A new algorithm and a web-based software framework.

    PubMed

    Bergmeir, Christoph; García Silvente, Miguel; Benítez, José Manuel

    2012-09-01

    In order to automate cervical cancer screening tests, one of the most important and longstanding challenges is the segmentation of cell nuclei in the stained specimens. Though nuclei of isolated cells in high-quality acquisitions often are easy to segment, the problem lies in the segmentation of large numbers of nuclei with various characteristics under differing acquisition conditions in high-resolution scans of the complete microscope slides. We implemented a system that enables processing of full resolution images, and proposes a new algorithm for segmenting the nuclei under adequate control of the expert user. The system can work automatically or interactively guided, to allow for segmentation within the whole range of slide and image characteristics. It facilitates data storage and interaction of technical and medical experts, especially with its web-based architecture. The proposed algorithm localizes cell nuclei using a voting scheme and prior knowledge, before it determines the exact shape of the nuclei by means of an elastic segmentation algorithm. After noise removal with mean-shift and median filtering, edges are extracted with a Canny edge detection algorithm. Motivated by the observation that cell nuclei are surrounded by cytoplasm and their shape is roughly elliptical, edges adjacent to the background are removed. A randomized Hough transform for ellipses finds candidate nuclei, which are then processed by a level set algorithm. The algorithm is tested and compared to other algorithms on a database containing 207 images acquired from two different microscope slides, with promising results.
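
    A minimal sketch of the localization idea (denoise, isolate the dark stained blob, estimate an ellipse) can be written with numpy/scipy. This deliberately simplifies the paper's pipeline: the voting scheme, randomized Hough transform, and level-set refinement are replaced by a moments-based ellipse estimate on a synthetic patch, and all names and parameters are illustrative.

```python
import numpy as np
from scipy import ndimage

def locate_nucleus(image):
    """Denoise with a median filter, threshold dark (stained) pixels, and
    estimate an ellipse for the largest connected blob from its
    second-order moments."""
    smooth = ndimage.median_filter(image, size=3)
    mask = smooth < smooth.mean()          # nuclei stain darker than cytoplasm
    labels, n = ndimage.label(mask)
    if n == 0:
        return None
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    blob = labels == (np.argmax(sizes) + 1)
    ys, xs = np.nonzero(blob)
    center = (ys.mean(), xs.mean())
    cov = np.cov(np.vstack([ys, xs]))
    axes = 2.0 * np.sqrt(np.linalg.eigvalsh(cov))  # rough semi-axis lengths
    return center, axes

# Synthetic slide patch: bright background, one dark elliptical nucleus
y, x = np.mgrid[0:100, 0:100]
img = np.where(((y - 50) / 15.0) ** 2 + ((x - 40) / 8.0) ** 2 < 1, 0.2, 1.0)
result = locate_nucleus(img)
```

    In the actual system such an ellipse would only be a candidate seed; the level-set stage then deforms it to the true nuclear boundary.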

  14. Software Quality Assurance Metrics

    NASA Technical Reports Server (NTRS)

    McRae, Kalindra A.

    2004-01-01

    Software Quality Assurance (SQA) is a planned and systematic set of activities that ensures that software life cycle processes and products conform to requirements, standards and procedures. In software development, software quality means meeting requirements and a degree of excellence and refinement of a project or product. Software quality is a set of attributes of a software product by which its quality is described and evaluated. The set of attributes includes functionality, reliability, usability, efficiency, maintainability, and portability. Software metrics help us understand the technical process that is used to develop a product. The process is measured to improve it, and the product is measured to increase quality throughout the life cycle of software. Software metrics are measurements of the quality of software. Software is measured to indicate the quality of the product, to assess the productivity of the people who produce the product, to assess the benefits derived from new software engineering methods and tools, to form a baseline for estimation, and to help justify requests for new tools or additional training. Any part of software development can be measured. If software metrics are implemented in software development, they can save time and money and allow the organization to identify the causes of defects that have the greatest effect on software development. In the summer of 2004, I worked with Cynthia Calhoun and Frank Robinson in the Software Assurance/Risk Management department. My task was to research, collect, compile, and analyze SQA metrics that have been used in other projects but are not currently being used by the SA team, and to report them to the Software Assurance team to see if any metrics can be implemented in their software assurance life cycle process.
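
    As a concrete instance, one of the most common product metrics is defect density: defects normalized by code size. The function name and the numbers below are illustrative, not drawn from the report.

```python
def defect_density(defects_found, ksloc):
    """Defect density, a common SQA product metric:
    defects per thousand source lines of code (KSLOC)."""
    return defects_found / ksloc

# A build with 46 recorded defects across 12.5 KSLOC
density = defect_density(46, 12.5)
```

    Tracked release over release, a falling defect density is one piece of evidence that process changes (inspections, new tools, training) are paying off.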

  15. Space-Shuttle Emulator Software

    NASA Technical Reports Server (NTRS)

    Arnold, Scott; Askew, Bill; Barry, Matthew R.; Leigh, Agnes; Mermelstein, Scott; Owens, James; Payne, Dan; Pemble, Jim; Sollinger, John; Thompson, Hiram; Thompson, James C.; Walter, Patrick; Brummel, David; Weismuller, Steven P.; Aadsen, Ron; Hurley, Keith; Ruhle, Chris

    2007-01-01

    A package of software has been developed to execute a raw binary image of the space shuttle flight software for simulation of the computational effects of operation of space shuttle avionics. This software can be run on inexpensive computer workstations. Heretofore, it was necessary to use real flight computers to perform such tests and simulations. The package includes a program that emulates the space shuttle orbiter general-purpose computer [consisting of a central processing unit (CPU), input/output processor (IOP), master sequence controller, and bus-control elements]; an emulator of the orbiter display electronics unit and models of the associated cathode-ray tubes, keyboards, and switch controls; computational models of the data-bus network; computational models of the multiplexer-demultiplexer components; an emulation of the pulse-code modulation master unit; an emulation of the payload data interleaver; a model of the master timing unit; a model of the mass memory unit; and a software component that ensures compatibility of telemetry and command services between the simulated space shuttle avionics and a mission control center. The software package is portable to several host platforms.

  16. Image

    SciTech Connect

    Marsh, Amber; Harsch, Tim; Pitt, Julie; Firpo, Mike; Lekin, April; Pardes, Elizabeth

    2007-08-31

    The computer side of the IMAGE project consists of a collection of Perl scripts that perform a variety of tasks; scripts are available to insert, update and delete data from the underlying Oracle database, download data from NCBI's Genbank and other sources, and generate data files for download by interested parties. Web scripts make up the tracking interface, and various tools available on the project web-site (image.llnl.gov) that provide a search interface to the database.

  17. Payload software technology: Software technology development plan

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Programmatic requirements for the advancement of software technology are identified for meeting the space flight requirements in the 1980 to 1990 time period. The development items are described, and software technology item derivation worksheets are presented along with the cost/time/priority assessments.

  18. Software Engineering Program: Software Process Improvement Guidebook

    NASA Technical Reports Server (NTRS)

    1996-01-01

    The purpose of this document is to provide experience-based guidance in implementing a software process improvement program in any NASA software development or maintenance community. This guidebook details how to define, operate, and implement a working software process improvement program. It describes the concept of the software process improvement program and its basic organizational components. It then describes the structure, organization, and operation of the software process improvement program, illustrating all these concepts with specific NASA examples. The information presented in the document is derived from the experiences of several NASA software organizations, including the SEL, the SEAL, and the SORCE. Their experiences reflect many of the elements of software process improvement within NASA. This guidebook presents lessons learned in a form usable by anyone considering establishing a software process improvement program within his or her own environment. This guidebook attempts to balance general and detailed information. It provides material general enough to be usable by NASA organizations whose characteristics do not directly match those of the sources of the information and models presented herein. It also keeps the ideas sufficiently close to the sources of the practical experiences that have generated the models and information.

  19. Software Defined Radio with Parallelized Software Architecture

    NASA Technical Reports Server (NTRS)

    Heckler, Greg

    2013-01-01

    This software implements software-defined radio processing over multicore, multi-CPU systems in a way that maximizes the use of CPU resources in the system. The software treats each processing step in either a communications or navigation modulator or demodulator system as an independent, threaded block. Each threaded block is defined with a programmable number of input or output buffers; these buffers are implemented using POSIX pipes. In addition, each threaded block is assigned a unique thread upon block installation. A modulator or demodulator system is built by assembly of the threaded blocks into a flow graph, which assembles the processing blocks to accomplish the desired signal processing. This software architecture allows the software to scale effortlessly between single CPU/single-core computers or multi-CPU/multi-core computers without recompilation. NASA spaceflight and ground communications systems currently rely exclusively on ASICs or FPGAs. This software allows low- and medium-bandwidth (100 bps to approx. 50 Mbps) software defined radios to be designed and implemented solely in C/C++ software, while lowering development costs and facilitating reuse and extensibility.
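The threaded-block-plus-pipe idea above can be sketched in miniature: each block runs in its own thread and hands data to its neighbor through a pipe. This is a toy Python illustration of the architecture, not the actual C/C++ implementation; the block names and the trivial "doubler" processing step are invented.

```python
import os
import threading

# Two pipes connect a two-block flow graph: source -> doubler -> main thread.
r1, w1 = os.pipe()
r2, w2 = os.pipe()

def source():
    # First block: produce a few "samples" and close its output pipe.
    os.write(w1, b"1 2 3 4")
    os.close(w1)

def doubler():
    # Second block: read from the upstream pipe, process, write downstream.
    data = os.read(r1, 1024)
    out = " ".join(str(2 * int(x)) for x in data.split())
    os.write(w2, out.encode())
    os.close(w2)

threads = [threading.Thread(target=f) for f in (source, doubler)]
for t in threads:
    t.start()
for t in threads:
    t.join()

result = os.read(r2, 1024).decode()
print(result)  # → 2 4 6 8
```

Because each block only touches its own pipes, the same flow graph runs unchanged whether the threads share one core or are spread across many, which is the scaling property the abstract describes.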

  1. Two-Photon Processor and SeNeCA: a freely available software package to process data from two-photon calcium imaging at speeds down to several milliseconds per frame.

    PubMed

    Tomek, Jakub; Novak, Ondrej; Syka, Josef

    2013-07-01

    Two-Photon Processor (TPP) is a versatile, ready-to-use, and freely available software package in MATLAB to process data from in vivo two-photon calcium imaging. TPP includes routines to search for cell bodies in full-frame (Search for Neural Cells Accelerated; SeNeCA) and line-scan acquisition, routines for calcium signal calculations, filtering, spike-mining, and routines to construct parametric fields. Searching for somata in artificial in vivo data, our algorithm achieved better performance than human annotators. SeNeCA copes well with uneven background brightness and in-plane motion artifacts, the major problems in simple segmentation methods. In the fast mode, artificial in vivo images with a resolution of 256 × 256 pixels containing ≈ 100 neurons can be processed at a rate up to 175 frames per second (tested on Intel i7, 8 threads, magnetic hard disk drive). This speed of a segmentation algorithm could bring new possibilities into the field of in vivo optophysiology. With such a short latency (down to 5-6 ms on an ordinary personal computer) and using some contemporary optogenetic tools, it will allow experiments in which a control program can continuously evaluate the occurrence of a particular spatial pattern of activity (a possible correlate of memory or cognition) and subsequently inhibit/stimulate the entire area of the circuit or inhibit/stimulate a different part of the neuronal system. TPP will be freely available on our public web site. Similar all-in-one and freely available software has not yet been published. PMID:23576700
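The soma-search step above is, at its core, a segmentation problem: find connected bright regions in a frame. The following is a deliberately simplified threshold-and-label sketch of that idea, not the SeNeCA algorithm itself; the tiny frame, brightness values, and threshold are all invented.

```python
# Toy threshold-then-flood-fill soma counter (an illustration of the
# segmentation idea above, NOT the actual SeNeCA algorithm).
frame = [
    [0, 9, 9, 0, 0],
    [0, 9, 9, 0, 8],
    [0, 0, 0, 0, 8],
]
THRESH = 5  # invented brightness threshold

def count_cells(img, thresh):
    seen = set()

    def flood(r, c):
        # Iterative 4-connected flood fill over above-threshold pixels.
        stack = [(r, c)]
        while stack:
            y, x = stack.pop()
            if (y, x) in seen or not (0 <= y < len(img) and 0 <= x < len(img[0])):
                continue
            if img[y][x] <= thresh:
                continue
            seen.add((y, x))
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]

    cells = 0
    for r, row in enumerate(img):
        for c, v in enumerate(row):
            if v > thresh and (r, c) not in seen:
                flood(r, c)
                cells += 1
    return cells

print(count_cells(frame, THRESH))  # → 2
```

Real soma detection must additionally cope with uneven background brightness and motion artifacts, which is precisely where SeNeCA improves on simple schemes like this one.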

  2. Automated digital image analysis of islet cell mass using Nikon's inverted eclipse Ti microscope and software to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.

    PubMed

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2015-01-01

    Reliable assessment of islet viability, mass, and purity must be performed prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of individual methods showed good correlations between mean values of IEQ number (r(2) = 0.91) and total islet number (r(2) = 0.88), which increased to r(2) = 0.93 when islet surface area was estimated comparatively with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method versus the ADIA method (p < 0.001). The ADIA method also detected small islets between 10 and 50 µm in size. Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation. Implementation of this
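The islet equivalent (IEQ) measure referred to above normalizes islets of varying size to a standard 150-µm-diameter islet by volume, so an islet's IEQ contribution scales with the cube of its diameter relative to 150 µm. A minimal sketch of that conversion, with invented sample diameters:

```python
# IEQ conversion sketch: each islet's volume is expressed relative to a
# standard 150-um-diameter islet. Sample diameters below are invented.
def ieq(diameters_um):
    return sum((d / 150.0) ** 3 for d in diameters_um)

# A 150-um islet contributes 1 IEQ, a 300-um islet contributes 8,
# and a 75-um islet contributes 0.125.
print(round(ieq([150, 300, 75]), 3))  # → 9.125
```

Automated analysis like ADIA measures each islet's size from the image and applies this normalization per islet, which is why it can also account for the small 10-50 µm islets mentioned above.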

  4. Space Station Software Recommendations

    NASA Technical Reports Server (NTRS)

    Voigt, S. (Editor)

    1985-01-01

    Four panels of invited experts and NASA representatives focused on the following topics: software management, software development environment, languages, and software standards. Each panel deliberated in private, held two open sessions with audience participation, and developed recommendations for the NASA Space Station Program. The major thrusts of the recommendations were as follows: (1) The software management plan should establish policies, responsibilities, and decision points for software acquisition; (2) NASA should furnish a uniform modular software support environment and require its use for all space station software acquired (or developed); (3) The language Ada should be selected for space station software, and NASA should begin to address issues related to the effective use of Ada; and (4) The space station software standards should be selected (based upon existing standards where possible), and an organization should be identified to promulgate and enforce them. These and related recommendations are described in detail in the conference proceedings.

  5. MPST Software: MoonKommand

    NASA Technical Reports Server (NTRS)

    Kwok, John H.; Call, Jared A.; Khanampornpan, Teerapat

    2012-01-01

    This software automatically processes Sally Ride Science (SRS) delivered MoonKAM camera control files (ccf) into uplink products for the GRAIL-A and GRAIL-B spacecraft as part of an education and public outreach (EPO) extension to the GRAIL mission. Once properly validated and deemed safe for execution onboard the spacecraft, MoonKommand generates the command products via the Automated Sequence Processor (ASP) and generates uplink (.scmf) files for radiation to the GRAIL-A and/or GRAIL-B spacecraft. Any errors detected along the way are reported back to SRS via email. With MoonKommand, SRS can control their EPO instrument as part of a fully automated process. Inputs are received from SRS as either image capture files (.ccficd) for new image requests, or downlink/delete files (.ccfdl) for requesting image downlink from the instrument and on-board memory management. The MoonKommand outputs are command and file-load (.scmf) files that will be uplinked by the Deep Space Network (DSN). Without MoonKommand software, uplink product generation for the MoonKAM instrument would be a manual process. The software is specific to the MoonKAM instrument on the GRAIL mission. At the time of this writing, the GRAIL mission was making final preparations to begin the science phase, which was scheduled to continue until June 2012.
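The two input product types described above (.ccficd and .ccfdl) suggest an extension-based dispatch at the front of the pipeline. The sketch below illustrates only that routing step; the handler strings are invented placeholders, and the real MoonKommand validation and ASP command generation are far more involved.

```python
# Hypothetical front-end dispatch for SRS-delivered camera control files.
# Only the routing-by-extension idea is from the description above; the
# handler behavior here is an invented placeholder.
def route_ccf(filename):
    if filename.endswith(".ccficd"):
        return "image-capture request"
    if filename.endswith(".ccfdl"):
        return "downlink/delete request"
    # Anything else would trigger the error-report-by-email path.
    return "rejected: unknown product type"

print(route_ccf("orbit42.ccficd"))   # → image-capture request
print(route_ccf("orbit42.ccfdl"))    # → downlink/delete request
print(route_ccf("orbit42.txt"))      # → rejected: unknown product type
```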

  6. Recent results from the Swinburne supercomputer software correlator

    NASA Astrophysics Data System (ADS)

    Tingay, Steven; et al.

    I will describe the development of software correlators on the Swinburne Beowulf supercomputer and recent work using the Cray XD-1 machine. I will also describe recent Australian and global VLBI experiments that have been processed on the Swinburne software correlator, along with imaging results from these data. The role of the software correlator in Australia's eVLBI project will be discussed.

  7. Digital X-ray Pipe Inspector Software

    2009-10-29

    The Digital X-ray Pipe Inspector software requires a digital x-ray image of a pipe as input to the program, such as the image in Attachment A Figure 1. The image may be in a variety of software formats such as bitmap, jpeg, tiff, DICOM or DICONDE. The software allows the user to interactively select a region of interest from the image for analysis. This software is used to analyze digital x-ray images of pipes to evaluate loss of wall thickness. The software specifically provides tools to analyze the image (a) in the pipe walls, and (b) between the pipe walls. Traditional software uses only the information at the pipe wall, while this new software also evaluates the image between the pipe walls. This makes the inspection process faster, more thorough, more efficient, and reduces expensive reshots. Attachment A Figure 2 shows a region of interest (a green box) drawn by the user around an anomaly in the pipe wall. This area is automatically analyzed by the external pipe wall tool, with the result shown in Attachment A Figure 3. The edges of the pipe wall are detected and highlighted in yellow, and areas where the wall thickness is less than the minimum wall threshold are shown in red. These measurements are typically made manually in other software programs, which leads to errors and inconsistency because the locations of the edges are estimated by the user. Attachment A Figure 4 shows a region of interest (a green box) drawn by the user between the pipe walls. As can be seen, there are intensity anomalies that correspond to wall defects. However, this information is not used directly by other software programs. In order to fully investigate these anomalies, the pipe would be reinspected in a different orientation to attempt to obtain a view of the anomaly in the pipe wall rather than the interior of the pipe. The pipe may need to be x-rayed a number of times to obtain the correct orientation. This is very costly and time consuming. The new software can perform the
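The wall-edge detection and minimum-thickness flagging described above can be reduced to a one-dimensional sketch: locate the bright "wall" run in an intensity profile taken across the pipe wall, measure its width, and flag it if the width falls below a minimum. The profile values, edge level, and threshold below are invented for illustration; the real software works on full 2-D radiographs.

```python
# Toy wall-thickness check on a 1-D intensity profile across the pipe wall.
# All numbers are invented; bright pixels (>= edge_level) represent the wall.
profile = [10, 10, 80, 80, 80, 10, 10]
MIN_WALL = 4  # hypothetical minimum acceptable wall thickness, in pixels

def wall_thickness(profile, edge_level=50):
    # Detect the wall edges as the first and last above-threshold samples,
    # analogous to the yellow edge highlights described above.
    above = [i for i, v in enumerate(profile) if v >= edge_level]
    return (above[-1] - above[0] + 1) if above else 0

t = wall_thickness(profile)
# Regions thinner than MIN_WALL correspond to the red flagged areas.
print(t, "FLAG" if t < MIN_WALL else "OK")  # → 3 FLAG
```

Automating this measurement removes the user's manual estimate of where the edges sit, which is the source of error and inconsistency the abstract identifies.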

  8. Software Engineering Improvement Plan

    NASA Technical Reports Server (NTRS)

    2006-01-01

    In performance of this task order, bd Systems personnel provided support to the Flight Software Branch and the Software Working Group through multiple tasks related to software engineering improvement and to activities of the independent Technical Authority (iTA) Discipline Technical Warrant Holder (DTWH) for software engineering. To ensure that the products, comments, and recommendations complied with customer requirements and the statement of work, bd Systems personnel maintained close coordination with the customer. These personnel performed work in areas such as update of agency requirements and directives database, software effort estimation, software problem reports, a web-based process asset library, miscellaneous documentation review, software system requirements, issue tracking software survey, systems engineering NPR, and project-related reviews. This report contains a summary of the work performed and the accomplishments in each of these areas.

  9. Commercial Data Mining Software

    NASA Astrophysics Data System (ADS)

    Zhang, Qingyu; Segall, Richard S.

    This chapter discusses selected commercial software for data mining, supercomputing data mining, text mining, and web mining. The selected software packages are compared by their features and are also applied to available data sets. The software for data mining are SAS Enterprise Miner, Megaputer PolyAnalyst 5.0, PASW (formerly SPSS Clementine), IBM Intelligent Miner, and BioDiscovery GeneSight. The software for supercomputing are Avizo by Visualization Science Group and JMP Genomics from SAS Institute. The software for text mining are SAS Text Miner and Megaputer PolyAnalyst 5.0. The software for web mining are Megaputer PolyAnalyst and SPSS Clementine. Background on related literature and software is presented. Screen shots of each of the selected software are presented, as are conclusions and future directions.

  10. Guidelines for software inspections

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Quality control inspections, software problem-finding procedures that provide defect removal as well as improvements in software functionality, maintenance, quality, and development and testing methodology, are discussed. The many side benefits include education, documentation, training, and scheduling.

  11. Software assurance standard

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This standard specifies the software assurance program for the provider of software. It also delineates the assurance activities for the provider and the assurance data that are to be furnished by the provider to the acquirer. In any software development effort, the provider is the entity or individual that actually designs, develops, and implements the software product, while the acquirer is the entity or individual who specifies the requirements and accepts the resulting products. This standard specifies at a high level an overall software assurance program for software developed for and by NASA. Assurance includes the disciplines of quality assurance, quality engineering, verification and validation, nonconformance reporting and corrective action, safety assurance, and security assurance. The application of these disciplines during a software development life cycle is called software assurance. Subsequent lower-level standards will specify the specific processes within these disciplines.

  12. The Problem of Software.

    ERIC Educational Resources Information Center

    Alexander, Wilma Jean

    1982-01-01

    Explains how schools can purchase computer software. Lists are presented of (1) sources of published evaluations of selected software, (2) publications which contain names and sources of programs, and (3) magazines providing program listings appropriate for classroom use. (CT)

  13. Design software for reuse

    NASA Technical Reports Server (NTRS)

    Tracz, Will

    1990-01-01

    Viewgraphs are presented on the designing of software for reuse. Topics include terminology, software reuse maxims, the science of programming, an interface design example, a modularization example, and reuse and implementation guidelines.

  14. HEASARC Software Archive

    NASA Technical Reports Server (NTRS)

    White, Nicholas (Technical Monitor); Murray, Stephen S.

    2003-01-01

    (1) Chandra Archive: SAO has maintained the interfaces through which HEASARC gains access to the Chandra Data Archive. At HEASARC's request, we have implemented an anonymous ftp copy of a major part of the public archive and we keep that archive up-to-date. SAO has participated in the ADEC interoperability working group, establishing guidelines for interoperability standards and prototyping such interfaces. We have provided an NVO-based prototype interface, intending to serve the HEASARC-led NVO demo project. HEASARC's Astrobrowse interface was maintained and updated. In addition, we have participated in design discussions surrounding HEASARC's Caldb project. We have attended the HEASARC Users Group meeting and presented CDA status and developments. (2) Chandra CALDB: SAO has maintained and expanded the Chandra CALDB by including four new data file types, defining the corresponding CALDB keyword/identification structures. We have provided CALDB upgrades for the public (CIAO) and for Standard Data Processing. Approximately 40 new files have been added to the CALDB in these version releases. There have been in the past year ten of these CALDB upgrades, each with unique index configurations. In addition, with the inputs from software, archive, and calibration scientists, as well as CIAO/SDP software developers, we have defined a generalized expansion of the existing CALDB interface and indexing structure. The purpose of this is to make the CALDB more generally applicable and useful in new and future missions that will be supported archivally by HEASARC. The generalized interface will identify additional configurational keywords and permit more extensive calibration parameter and boundary condition specifications for unique file selection. HEASARC scientists and developers from SAO and GSFC have become involved in this work, which is expected to produce a new interface for general use within the current year. (3) DS9: One of the decisions that came from last year

  15. DSS command software update

    NASA Technical Reports Server (NTRS)

    Stinnett, W. G.

    1980-01-01

    The modifications, additions, and testing results for a version of the Deep Space Station command software, generated for support of the Voyager Saturn encounter, are discussed. The software update requirements included efforts to: (1) recode portions of the software to permit recovery of approximately 2000 words of memory; (2) correct five Voyager Ground Data System liens; (3) provide capability to automatically turn off the command processor assembly local printer during periods of low activity; and (4) correct anomalies existing in the software.

  16. Software verification and testing

    NASA Technical Reports Server (NTRS)

    1985-01-01

    General procedures for software verification and validation are provided as a guide for managers, programmers, and analysts involved in software development. The verification and validation procedures described are based primarily on testing techniques. Testing refers to the execution of all or part of a software system for the purpose of detecting errors. Planning, execution, and analysis of tests are outlined in this document. Code reading and static analysis techniques for software verification are also described.

  17. Astronomical Software Directory Service

    NASA Technical Reports Server (NTRS)

    Hanisch, R. J.; Payne, H.; Hayes, J.

    1998-01-01

    This is the final report on the development of the Astronomical Software Directory Service (ASDS), a distributable, searchable, WWW-based database of software packages and their related documentation. ASDS provides integrated access to 56 astronomical software packages, with more than 16,000 URL's indexed for full-text searching.
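The full-text search over indexed URLs that ASDS provides is conventionally built on an inverted index mapping words to the documents containing them. A minimal sketch of that idea follows; the package descriptions and URLs below are invented examples, not entries from the actual ASDS database.

```python
# Minimal inverted-index sketch of full-text search over indexed URLs
# (documents and URLs are invented examples).
docs = {
    "http://example.org/pkg-a-docs": "image reduction and analysis package",
    "http://example.org/pkg-b-docs": "astronomical image processing system",
}

# Build the index: each word maps to the set of URLs whose text contains it.
index = {}
for url, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

def search(word):
    # Return matching URLs in a stable order.
    return sorted(index.get(word, set()))

print(search("image"))  # both documents mention "image"
```

A production service such as ASDS adds stemming, ranking, and incremental index updates on top of this core structure.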

  18. Software Shopper. Revised.

    ERIC Educational Resources Information Center

    Davis, Sandra Hart, Comp.

    This annotated index describes and illustrates a wide selection of public domain instructional software that may be useful in the education of deaf students and provides educators with a way to order the listed programs. The software programs are designed for use on Apple computers and their compatibles. The software descriptions are presented in…

  19. Software Architecture Evolution

    ERIC Educational Resources Information Center

    Barnes, Jeffrey M.

    2013-01-01

    Many software systems eventually undergo changes to their basic architectural structure. Such changes may be prompted by new feature requests, new quality attribute requirements, changing technology, or other reasons. Whatever the causes, architecture evolution is commonplace in real-world software projects. Today's software architects, however,…

  20. Java for flight software

    NASA Technical Reports Server (NTRS)

    Benowitz, E.; Niessner, A.

    2003-01-01

    This work involves developing representative mission-critical spacecraft software using the Real-Time Specification for Java (RTSJ). This work currently leverages actual flight software from NASA's Deep Space 1 (DS1) mission, which flew in 1998.